Training DNNs in O(1) memory with MEM-DFA using Random Matrices

12/21/2020
by Tien Chu et al.

This work presents a method for reducing memory consumption to a constant complexity when training deep neural networks. The algorithm builds on two more biologically plausible alternatives to backpropagation (BP), feedback alignment (FA) and direct feedback alignment (DFA), which use fixed random matrices to propagate the error. The proposed method, memory-efficient direct feedback alignment (MEM-DFA), exploits the greater independence between layers in DFA to avoid storing all activation vectors at once, unlike standard BP, FA, and DFA. As a result, its memory usage is constant regardless of the number of layers in the network, while the computational cost grows only by a constant factor: the cost of one extra forward pass. We evaluated MEM-DFA, BP, FA, and DFA, together with their memory profiles, on the MNIST and CIFAR-10 datasets across various neural network models. Our experiments agree with our theoretical results and show a significant reduction in memory cost for MEM-DFA compared to the other algorithms.
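To make the two-pass idea concrete, here is a minimal NumPy sketch of a MEM-DFA-style update for a fully connected network. It is an illustration under stated assumptions (tanh activations, MSE loss, a single training example), not the authors' reference implementation; all names (mem_dfa_step, Ws, Bs, act) are hypothetical. Pass one computes only the output error; pass two recomputes each layer's activation on the fly, applies the DFA update through a fixed random matrix B_l, and discards the activation, so at most one layer's vectors are held at any time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes of a toy fully connected net: input -> hidden -> ... -> output.
sizes = [784, 256, 256, 10]
Ws = [rng.normal(0.0, 0.05, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
# Fixed random feedback matrices B_l that project the output error
# directly to each hidden layer (the defining ingredient of DFA).
Bs = [rng.normal(0.0, 0.05, (m, sizes[-1])) for m in sizes[1:-1]]

def act(a):
    return np.tanh(a)

def act_grad(a):
    return 1.0 - np.tanh(a) ** 2

def mem_dfa_step(x, y, lr=0.01):
    """One MEM-DFA-style update: two forward passes, O(1) activation memory."""
    # Pass 1: plain forward pass; keep only the output error, not the
    # intermediate activations.
    h = x
    for W in Ws:
        h = act(W @ h)
    e = h - y  # gradient of the MSE loss 0.5 * ||h - y||^2 at the output

    # Pass 2: recompute activations one layer at a time. Because DFA feeds
    # the error to every layer through its own B_l, layer l can be updated
    # as soon as its input h and pre-activation a are available; nothing
    # from earlier layers needs to stay in memory.
    h = x
    for l, W in enumerate(Ws):
        a = W @ h                         # uses the pre-update weights
        fb = e if l == len(Ws) - 1 else Bs[l] @ e
        delta = fb * act_grad(a)
        W -= lr * np.outer(delta, h)      # in-place update of Ws[l]
        h = act(a)                        # previous h can now be discarded
    return 0.5 * float(e @ e)

# Hypothetical usage on one random example with a one-hot target.
x = rng.normal(size=sizes[0])
y = np.eye(sizes[-1])[3]
print(mem_dfa_step(x, y))
```

The second loop is where the constant memory comes from: unlike BP, no layer waits for a backward signal from the layer above it, so activations never need to be cached, at the price of recomputing the forward pass once more.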
