Feedback alignment in deep convolutional networks

12/12/2018
by Theodore H. Moskovitz, et al.

Ongoing studies have identified similarities between neural representations in biological networks and in deep artificial neural networks. This has led to renewed interest in developing analogies between the backpropagation learning algorithm used to train artificial networks and the synaptic plasticity rules operative in the brain. These efforts are challenged by biologically implausible features of backpropagation, one of which is a reliance on symmetric forward and backward synaptic weights. A number of methods have been proposed that do not rely on weight symmetry but, thus far, these have failed to scale to deep convolutional networks and complex data. We identify principal obstacles to the scalability of such algorithms and introduce several techniques to mitigate them. We demonstrate that a modification of the feedback alignment method that enforces a weaker form of weight symmetry, one that requires agreement of weight sign but not magnitude, can achieve performance competitive with backpropagation. Our results complement those of Bartunov et al. (2018) and Xiao et al. (2018b) and suggest that mechanisms that promote alignment of feedforward and feedback weights are critical for learning in deep networks.
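A minimal NumPy sketch of the idea (assuming a two-layer ReLU regression network; the shapes, data, and learning rate are illustrative and not from the paper): exact backprop would propagate errors through the transposed forward weights, vanilla feedback alignment substitutes a fixed random matrix, and the sign-symmetric variant described in the abstract shares only the sign of each forward weight, never its magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer regression network (all sizes are illustrative).
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B = rng.normal(0.0, 0.1, (n_hid, n_out))  # fixed random feedback weights

SIGN_SYMMETRIC = True  # False -> vanilla feedback alignment

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)

for _ in range(100):
    # Forward pass.
    a1 = W1 @ x
    h = np.maximum(a1, 0.0)          # ReLU
    y = W2 @ h
    e = y - target                   # dL/dy for a squared-error loss

    # Feedback pathway: exact backprop would use W2.T @ e.
    # Vanilla FA uses the fixed random matrix B; the sign-symmetric
    # variant uses sign(W2).T, agreeing with W2 in sign but not magnitude.
    fb = np.sign(W2).T if SIGN_SYMMETRIC else B
    dh = (fb @ e) * (a1 > 0.0)       # ReLU derivative gates the error

    # Plain SGD weight updates.
    W2 -= 0.01 * np.outer(e, h)
    W1 -= 0.01 * np.outer(dh, x)
```

Note that in the sign-symmetric case only the elementwise sign of W2 crosses to the feedback path, which is the weaker form of weight symmetry the abstract refers to.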


Related research

04/10/2019 · Using Weight Mirrors to Improve Feedback Alignment
Current algorithms for deep learning probably cannot run in the brain be...

06/10/2021 · Convergence and Alignment of Gradient Descent with Random Back Propagation Weights
Stochastic gradient descent with backpropagation is the workhorse of art...

04/10/2019 · Deep Learning without Weight Transport
Current algorithms for deep learning probably cannot run in the brain be...

02/28/2020 · Two Routes to Scalable Credit Assignment without Weight Symmetry
The neural plausibility of backpropagation has long been disputed, prima...

10/28/2022 · Meta-Learning Biologically Plausible Plasticity Rules with Random Feedback Pathways
Backpropagation is widely used to train artificial neural networks, but ...

02/10/2023 · Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization
"Forward-only" algorithms, which train neural networks while avoiding a ...

08/30/2021 · Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms
Backpropagation is the default algorithm for training deep neural networ...
