Efficient Convolutional Neural Network Training with Direct Feedback Alignment

01/06/2019
by Donghyeon Han, et al.

Many algorithms have been proposed as substitutes for back-propagation (BP) in deep neural network (DNN) training, but none has become popular, because their training accuracy and computational efficiency are worse than BP's. One of them, direct feedback alignment (DFA), shows low training performance, especially for convolutional neural networks (CNNs). In this paper, we overcome this limitation of DFA by combining it with conventional BP during CNN training. To improve training stability, we also suggest a feedback-weight initialization method based on an analysis of the patterns of the fixed random matrices in DFA. Finally, we propose a new training algorithm, binary direct feedback alignment (BDFA), which minimizes computational cost while maintaining training accuracy comparable to DFA's. In our experiments, we use the CIFAR-10 and CIFAR-100 datasets to simulate CNN learning from scratch, and we apply BDFA to an online-learning-based object tracking application to examine training in a small-dataset environment. Our proposed algorithms outperform conventional BP in both training tasks, especially when the dataset is small.
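The abstract describes DFA and BDFA only at a high level. As a rough illustration, the sketch below shows a single DFA weight update for a small multilayer perceptron in numpy, with comments marking where BP would differ and with BDFA rendered as sign-binarized feedback matrices. The layer sizes, tanh activations, linear readout, and learning rate are illustrative assumptions; the paper's actual contributions, combining DFA with BP for CNNs and the feedback-weight initialization, are not reproduced here.

```python
import numpy as np

# A minimal sketch of one DFA/BDFA training step for a small MLP.
# Sizes, activations, and the learning rate are illustrative only.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 256, 10
lr = 0.1

# Forward weights of a two-hidden-layer MLP.
W1 = rng.normal(0.0, 0.05, (n_hid, n_in))
W2 = rng.normal(0.0, 0.05, (n_hid, n_hid))
W3 = rng.normal(0.0, 0.05, (n_out, n_hid))

# DFA: fixed random feedback matrices carry the output error directly
# to each hidden layer, replacing the transposed forward weights.
B1 = rng.normal(0.0, 0.05, (n_hid, n_out))
B2 = rng.normal(0.0, 0.05, (n_hid, n_out))

# BDFA (per the abstract): binarize the feedback to +1/-1, so the
# feedback projection needs no floating-point multiplications.
B1b, B2b = np.sign(B1), np.sign(B2)

def dfa_step(x, y, B1, B2):
    """One DFA update; pass B1b/B2b instead for a BDFA update."""
    global W1, W2, W3
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    e = W3 @ h2 - y                  # output error (linear readout)

    # Feedback pass: the same output error e is projected to every
    # hidden layer through its fixed feedback matrix.
    d2 = (B2 @ e) * (1.0 - h2**2)    # BP would compute W3.T @ e
    d1 = (B1 @ e) * (1.0 - h1**2)    # BP would chain W2.T @ d2

    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)

x = rng.normal(size=n_in)            # dummy input
y = np.eye(n_out)[3]                 # dummy one-hot target
dfa_step(x, y, B1b, B2b)             # BDFA update on one sample
```

Because the feedback matrices are fixed, no symmetric backward weights need to be stored or transported, which is what makes DFA attractive for efficient hardware training.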


Related research

12/21/2020 · Training DNNs in O(1) memory with MEM-DFA using Random Matrices
This work presents a method for reducing memory consumption to a constan...

06/23/2020 · Extension of Direct Feedback Alignment to Convolutional and Recurrent Neural Network for Bio-plausible Deep Learning
Throughout this paper, we focus on the improvement of the direct feedbac...

12/14/2022 · Directional Direct Feedback Alignment: Estimating Backpropagation Paths for Efficient Learning on Neural Processors
The error Backpropagation algorithm (BP) is a key method for training de...

11/08/2018 · Biologically-plausible learning algorithms can scale to large datasets
The backpropagation (BP) algorithm is often thought to be biologically i...

09/03/2019 · Learning without feedback: Direct random target projection as a feedback-alignment algorithm with layerwise feedforward training
While the backpropagation of error algorithm allowed for a rapid rise in...

01/08/2022 · PocketNN: Integer-only Training and Inference of Neural Networks via Direct Feedback Alignment and Pocket Activations in Pure C++
Standard deep learning algorithms are implemented using floating-point r...

05/14/2022 · BackLink: Supervised Local Training with Backward Links
Empowered by the backpropagation (BP) algorithm, deep neural networks ha...
