Unsupervised Learning of Invariance Transformations

07/24/2023
by Aleksandar Vučković et al.

The need for large amounts of training data in modern machine learning is one of the field's biggest challenges. Compared to the brain, current artificial algorithms are much less capable of learning invariance transformations and employing them to extrapolate knowledge from small sample sets. It has recently been proposed that the brain might encode perceptual invariances as approximate graph symmetries in the network of synaptic connections. Such symmetries may arise naturally through a biologically plausible process of unsupervised Hebbian learning. In the present paper, we illustrate this proposal with numerical examples, showing that invariance transformations can indeed be recovered from the structure of recurrent synaptic connections that form within a layer of feature detector neurons via a simple Hebbian learning rule. To numerically recover the invariance transformations from the resulting recurrent network, we develop a general algorithmic framework for finding approximate graph automorphisms. We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
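The paper's actual learning rule and automorphism-search algorithm are not reproduced here; the following is a minimal sketch of the general idea under simple assumptions. A toy Hebbian outer-product rule builds a weighted recurrent graph from co-active feature detectors arranged on a ring (a hypothetical stimulus stream with a translation-like invariance), and a candidate permutation is then scored as an approximate automorphism of that graph by how little the weight matrix changes under it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n feature-detector neurons, driven by stimuli in which
# neighbouring detectors (indices i and i+1 mod n) tend to fire together,
# mimicking a translation-like invariance on a ring.
n, n_stimuli = 6, 500

def sample_activity():
    i = rng.integers(n)
    x = np.zeros(n)
    x[i] = 1.0
    x[(i + 1) % n] = 1.0
    return x

# Simple Hebbian rule: strengthen the synapse between co-active units.
eta = 0.01
W = np.zeros((n, n))
for _ in range(n_stimuli):
    x = sample_activity()
    W += eta * np.outer(x, x)
np.fill_diagonal(W, 0.0)  # no self-connections

# Score a candidate permutation p as an approximate automorphism of the
# weighted graph W: a small score means W is nearly invariant under p.
def automorphism_error(W, p):
    P = np.eye(len(p))[p]          # permutation matrix for p
    return np.linalg.norm(P @ W @ P.T - W)

shift = np.roll(np.arange(n), 1)   # cyclic shift: the expected symmetry
identity = np.arange(n)
swap = identity.copy()
swap[[0, 1]] = swap[[1, 0]]        # an arbitrary transposition, for contrast

# The cyclic shift should score much better (lower) than the transposition.
print(automorphism_error(W, identity))
print(automorphism_error(W, shift))
print(automorphism_error(W, swap))
```

A full search over permutations is factorial in the number of neurons, which is why the paper develops a dedicated algorithmic framework; this sketch only verifies candidate symmetries rather than discovering them.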

