Deep supervised learning using local errors

11/17/2017
by Hesham Mostafa, et al.

Error backpropagation is a highly effective mechanism for learning high-quality hierarchical features in deep networks. Updating the weights in one layer, however, requires waiting for error signals to propagate down from higher layers. Learning with such delayed, non-local errors is hard to reconcile with the learning mechanisms observed in biological neural networks, as it requires neurons to maintain a memory of the input until the higher-layer errors arrive. In this paper, we propose an alternative learning mechanism in which errors are generated locally in each layer using fixed, random auxiliary classifiers. Lower layers can thus be trained independently of higher layers, and training can proceed either layer by layer or simultaneously in all layers using local error information. We address biological plausibility concerns such as weight symmetry requirements and show that the proposed learning mechanism, based on fixed, broad, and random tuning of each neuron to the classification categories, outperforms the biologically motivated feedback alignment technique on the MNIST, CIFAR10, and SVHN datasets, approaching the performance of standard backpropagation. Our approach highlights a potential biological mechanism for the supervised, or task-dependent, learning of feature hierarchies. In addition, we show that it is well suited for learning deep networks in custom hardware, where it can drastically reduce memory traffic and data communication overheads.
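
The core idea admits a compact sketch. Below is a minimal PyTorch illustration (not the authors' implementation): each hidden layer owns a frozen, randomly initialized linear classifier that turns the layer's activations into a local cross-entropy error, and a detach() call blocks gradients from crossing layer boundaries. The LocalLayer class, the two-layer MNIST-like configuration, and all sizes are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLayer(nn.Module):
    """Hidden layer trained by a local error from a fixed random classifier.
    (Illustrative sketch; names and sizes are assumptions, not the paper's code.)"""
    def __init__(self, in_dim, out_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)        # trainable layer weights
        self.aux = nn.Linear(out_dim, num_classes)  # auxiliary classifier
        for p in self.aux.parameters():
            p.requires_grad = False                 # fixed and random: never updated

    def forward(self, x, target):
        h = F.relu(self.fc(x))
        # Local error: cross-entropy of the frozen random classifier's output.
        # Gradients still flow through self.aux to self.fc, even though
        # self.aux's own weights are frozen.
        local_loss = F.cross_entropy(self.aux(h), target)
        # detach() stops gradients at the layer boundary, so the update of
        # self.fc depends only on this layer's local error.
        return h.detach(), local_loss

layers = nn.ModuleList([LocalLayer(784, 512, 10), LocalLayer(512, 512, 10)])
readout = nn.Linear(512, 10)  # conventional classifier at the top of the stack
params = [p for p in list(layers.parameters()) + list(readout.parameters())
          if p.requires_grad]
opt = torch.optim.SGD(params, lr=0.1)

def train_step(x, y):
    opt.zero_grad()
    h = x.flatten(1)
    for layer in layers:
        h, local_loss = layer(h, y)
        local_loss.backward()  # each backward() stays inside one layer
    top_loss = F.cross_entropy(readout(h), y)
    top_loss.backward()
    opt.step()
```

Because every backward() call terminates at a detach() boundary, no layer waits on error signals from above; all layers can in principle be updated simultaneously, which is what makes the scheme attractive for layer-parallel or pipelined hardware.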

Related research

06/10/2021
Convergence and Alignment of Gradient Descent with Random Back Propagation Weights
Stochastic gradient descent with backpropagation is the workhorse of art...

10/26/2022
Multi-level Data Representation For Training Deep Helmholtz Machines
A vast majority of the current research in the field of Machine Learning...

06/15/2017
Hardware-efficient on-line learning through pipelined truncated-error backpropagation in binary-state networks
Artificial neural networks (ANNs) trained using backpropagation are powe...

11/21/2022
Learning on tree architectures outperforms a convolutional feedforward network
Advanced deep learning architectures consist of tens of fully connected ...

09/30/2021
Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks
We develop biologically plausible training mechanisms for self-supervise...

06/26/2018
Unsupervised Learning by Competing Hidden Units
It is widely believed that the backpropagation algorithm is essential fo...

05/29/2023
Deep Predictive Coding with Bi-directional Propagation for Classification and Reconstruction
This paper presents a new learning algorithm, termed Deep Bi-directional...
