Deep CNNs with Rotational Filters for Rotation Invariant Character Recognition

Abstract: This paper explores the use of parallel columns of convolutional layers with tied weights, where the input to each column in a layer is presented at a different rotation, to create a rotation-invariant deep convolutional neural network (CNN). The outputs of the columns are combined using a winner-takes-all pooling method to produce approximate rotation invariance, with the approximation improving as the rotation increment between parallel columns decreases. Applying the invariant deep CNN to the MNIST and CHARS74K rotated test data showed substantial improvement over a traditional deep CNN, with a 52.32% increase in accuracy on the MNIST dataset and a 36.44% increase on the CHARS74K dataset. This paper also introduces a Caffe implementation of the method for use in object recognition research.
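The core idea above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's Caffe implementation: it uses a single tied convolution kernel, 90° rotation increments for simplicity (the paper argues finer increments give a better approximation), and a hypothetical helper name `rotation_invariant_features`. The winner-takes-all step is realized as a maximum over the rotated columns' responses.

```python
import numpy as np

def conv2d(img, kernel):
    # Valid cross-correlation with a single shared (tied) kernel.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def rotation_invariant_features(img, kernel):
    # Each parallel "column" sees the same input at a different rotation,
    # while all columns share (tie) the same kernel weights.
    # 90-degree increments are used here purely for illustration.
    columns = [conv2d(np.rot90(img, k), kernel) for k in range(4)]
    # Winner-takes-all pooling: keep the strongest response from each
    # column, then take the maximum across all rotations.
    return max(col.max() for col in columns)
```

Because a 90° rotation of the input only permutes which column produces the winning response, the pooled output is exactly invariant to 90° rotations; with finer rotation increments the invariance becomes approximate for arbitrary angles.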



Original publication available from IEEE