Deriving Differential Target Propagation from Iterating Approximate Inverses

07/29/2020
by Yoshua Bengio, et al.

We show that a particular form of target propagation, namely one that relies on learned inverses of each layer and is differential, in the sense that the target is a small perturbation of the forward propagation, gives rise to an update rule corresponding to an approximate Gauss-Newton gradient-based optimization, without requiring the manipulation or inversion of large matrices. Interestingly, this scheme is more biologically plausible than back-propagation and yet may implicitly provide a stronger optimization procedure. Extending difference target propagation, we consider several iterative calculations based on local auto-encoders at each layer in order to achieve more precise inversions and thus more accurate target propagation, and we show that these iterative procedures converge exponentially fast provided the auto-encoding function minus the identity function has a Lipschitz constant smaller than one, i.e., provided the auto-encoder coarsely succeeds at performing an inversion. We also propose a way to normalize the changes at each layer to take into account the relative influence of each layer on the output, so that larger weight changes are made in more influential layers, as would happen with ordinary back-propagation and gradient descent.
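
To make the iterative-inversion claim concrete: in difference target propagation, the target sent to layer l-1 takes the form h_{l-1} + g(t) - g(h_l), where f is the layer's forward function, h_l = f(h_{l-1}), t is the target for layer l, and g is a learned approximate inverse of f. One natural reading of the iterative procedure described in the abstract is the fixed-point iteration x <- x + g(t) - g(f(x)), whose map is a contraction precisely when g∘f minus the identity has a Lipschitz constant below one; by the Banach fixed-point theorem the residual then shrinks geometrically, i.e., exponentially fast. The sketch below illustrates this in NumPy. It is not the paper's implementation: a linear layer and the inverse of a slightly perturbed weight matrix stand in for a trained encoder/decoder pair, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Hypothetical stand-ins: a linear "layer" f with weight matrix W, and an
# approximate inverse g built from a perturbed copy W_hat (playing the role
# of a learned auto-encoder that only coarsely inverts f).
W = np.eye(n) + 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
W_hat = W + 0.05 * rng.standard_normal((n, n)) / np.sqrt(n)

def f(x):  # forward computation of the layer
    return W @ x

def g(y):  # learned approximate inverse of the layer
    return np.linalg.solve(W_hat, y)

# The iteration below is a contraction (hence converges exponentially)
# iff Lip(g∘f - id) < 1; in this linear toy case that constant is the
# spectral norm of W_hat^{-1} W - I, small because W_hat is close to W.
lip = np.linalg.norm(np.linalg.solve(W_hat, W) - np.eye(n), 2)
print(f"Lip(g∘f - id) = {lip:.3f}")  # < 1 => exponential convergence

# Refine the inverse of f at a target t by fixed-point iteration:
#     x <- x + g(t) - g(f(x))
# A fixed point satisfies g(f(x)) = g(t), i.e. f(x) = t when g is injective.
t = rng.standard_normal(n)
x = g(t)  # one-shot target-propagation estimate
for k in range(8):
    x = x + g(t) - g(f(x))
    print(f"iter {k}: ||f(x) - t|| = {np.linalg.norm(f(x) - t):.2e}")
```

Each pass multiplies the inversion error by roughly the printed Lipschitz constant, so a handful of iterations already yields a far more precise inverse than the one-shot estimate g(t) that ordinary target propagation would use as the backward value.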


Related research

How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation (07/29/2014)
We propose to exploit reconstruction as a layer-local training signal f...

Difference Target Propagation (12/23/2014)
Back-propagation has been the workhorse of recent successes of deep lear...

Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation (02/16/2016)
We introduce Equilibrium Propagation, a learning framework for energy-ba...

GAIT-prop: A biologically plausible learning rule derived from backpropagation of error (06/11/2020)
Traditional backpropagation of error, though a highly successful algorit...

A Two-Step Rule for Backpropagation (03/17/2023)
We present a simplified computational rule for the back-propagation form...

Hybrid Optimized Back propagation Learning Algorithm For Multi-layer Perceptron (12/08/2012)
Standard neural network based on general back propagation learning using...

Metaphors We Learn By (11/11/2022)
Gradient based learning using error back-propagation ("backprop") is a w...
