Unsupervised Learning of Invariant Representations in Hierarchical Architectures

11/17/2013
by Fabio Anselmi, et al.

The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n →∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n → 1), as humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a "good" representation for supervised learning, characterized by small sample complexity (n). We consider the case of visual object recognition, though the theory applies to other domains. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translations, scaling, and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and unique (discriminative) signature can be computed for each image patch, I, in terms of empirical distributions of the dot-products between I and a set of templates stored during unsupervised learning. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such estimates. Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts. The theory extends existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition---and that this representation may be continuously learned in an unsupervised way during development and visual experience.
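As a minimal illustration (not the paper's implementation), the signature described above can be sketched in NumPy for a 1-D toy case, taking circular shifts as the transformation group: dot products between the image and shifted copies of each template stand in for simple-cell responses, and pooling those responses into a histogram stands in for a complex cell. All names, sizes, and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Normalize a vector to unit length (keeps dot products in [-1, 1])."""
    return v / np.linalg.norm(v)

def shifted_copies(t):
    """All circular shifts of a 1-D template: a toy translation group."""
    return np.stack([np.roll(t, s) for s in range(len(t))])

def signature(image, templates, n_bins=8):
    """Invariant signature of an image patch: for each stored template,
    the empirical distribution (histogram) of dot products between the
    image and every shifted copy of that template."""
    parts = []
    for t in templates:
        dots = shifted_copies(t) @ image        # simple-cell stage
        hist, _ = np.histogram(dots, bins=n_bins, range=(-1.0, 1.0))
        parts.append(hist / len(dots))          # complex-cell pooling
    return np.concatenate(parts)

# Toy check of translation invariance: a circularly shifted image
# yields the same set of dot products (permuted), hence the same
# histograms, hence the same signature.
image = unit(rng.normal(size=32))
templates = [unit(rng.normal(size=32)) for _ in range(4)]
s_orig = signature(image, templates)
s_shift = signature(np.roll(image, 5), templates)
assert np.allclose(s_orig, s_shift)
```

The pooling step is what discards the transformation parameter: the histogram depends only on the multiset of simple-cell responses, not on which shift produced which value, which is why the signature is invariant while remaining discriminative across templates.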


Related research:

- A Deep Representation for Invariance And Music Classification (04/01/2014): Representations in the auditory cortex might be based on mechanisms simi...
- Transformational Sparse Coding (12/08/2017): A fundamental problem faced by object recognition systems is that object...
- Unsupervised learning of clutter-resistant visual representations from natural videos (09/12/2014): Populations of neurons in inferotemporal cortex (IT) maintain an explici...
- Maximally Informative Hierarchical Representations of High-Dimensional Data (10/27/2014): We consider a set of probabilistic functions of some input variables as ...
- Subitizing with Variational Autoencoders (08/01/2018): Numerosity, the number of objects in a set, is a basic property of a giv...
- Hierarchical Deep Learning Architecture For 10K Objects Classification (09/07/2015): Evolution of visual object recognition architectures based on Convolutio...
- View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation (06/05/2016): The primate brain contains a hierarchy of visual areas, dubbed the ventr...
