Feedforward Initialization for Fast Inference of Deep Generative Networks is biologically plausible

06/06/2016
by Yoshua Bengio, et al.

We consider deep multi-layered generative models such as Boltzmann machines or Hopfield nets in which computation (which implements inference) is both recurrent and stochastic, but where the recurrence serves not to model sequential structure, only to perform computation. We find conditions under which a simple feedforward computation is a very good initialization for inference once the input units are clamped to observed values: after the feedforward initialization, the recurrent network is very close to a fixed point of the network dynamics, where the energy gradient is 0. The main condition is that consecutive layers form a good auto-encoder, or more generally that the different groups of inputs into a unit (in particular, bottom-up inputs on one hand and top-down inputs on the other) are consistent with each other, each producing the same contribution to the total weighted sum of inputs. In biological terms, this would correspond to each dendritic branch correctly predicting the aggregate input from all the dendritic branches, i.e., the soma potential. This is consistent with the prediction that the synaptic weights into dendritic branches, such as those of the apical and basal dendrites of pyramidal cells, are trained to minimize the prediction error made by the dendritic branch when the target is the somatic activity. Whereas previous work has shown how to achieve fast negative-phase inference (when the model is unclamped) in a predictive recurrent model, this contribution helps to achieve fast positive-phase inference (when the target output is clamped) in such recurrent neural models.
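To make the fixed-point claim concrete, here is a minimal sketch, not the paper's code: the averaged update rule, the linear units, and names like ff_init and recurrent_step are illustrative assumptions. With orthogonal weight matrices, each pair of consecutive layers is a perfect linear auto-encoder, so the top-down input into every hidden layer exactly reproduces its bottom-up input, and the feedforward pass lands exactly on a fixed point of the recurrent dynamics:

```python
# Sketch (assumed setup, not the paper's experiments): linear units,
# square orthogonal weights, and an update that averages bottom-up
# and top-down drives into each hidden layer.
import numpy as np

rng = np.random.default_rng(0)
n, L = 16, 3  # units per layer, number of hidden layers

def orthogonal(n):
    # Orthogonal matrix: W.T @ W = I, so decoding with W.T perfectly
    # inverts encoding with W (a perfect linear auto-encoder pair).
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

Ws = [orthogonal(n) for _ in range(L)]  # one weight matrix per layer

def ff_init(x):
    # Feedforward initialization: h_k = W_k h_{k-1}, with h_0 = x clamped.
    hs = [x]
    for W in Ws:
        hs.append(W @ hs[-1])
    return hs

def recurrent_step(hs):
    # One step of the (linear) recurrent dynamics: every hidden layer
    # moves to the average of its bottom-up and top-down inputs;
    # the top layer receives only bottom-up input.
    new = [hs[0]]  # the input layer stays clamped
    for k in range(1, L):
        bottom_up = Ws[k - 1] @ hs[k - 1]
        top_down = Ws[k].T @ hs[k + 1]
        new.append(0.5 * (bottom_up + top_down))
    new.append(Ws[L - 1] @ hs[L - 1])
    return new

x = rng.standard_normal(n)
hs = ff_init(x)
hs_next = recurrent_step(hs)
drift = max(np.linalg.norm(a - b) for a, b in zip(hs, hs_next))
print(f"max state change after one recurrent step: {drift:.2e}")  # ~0
```

With trained, imperfect auto-encoders the drift would be small rather than zero, which is the regime the abstract describes: feedforward initialization puts the network close to, not exactly at, a fixed point of its dynamics.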
