Extension of Direct Feedback Alignment to Convolutional and Recurrent Neural Network for Bio-plausible Deep Learning

06/23/2020
by Donghyeon Han, et al.

In this paper, we focus on improving the direct feedback alignment (DFA) algorithm and extend its use to convolutional and recurrent neural networks (CNNs and RNNs). Although DFA is biologically plausible and has the potential for high-speed training, it has not been considered a substitute for back-propagation (BP) because of its low accuracy in CNN and RNN training. In this work, we propose a new DFA algorithm for BP-level accurate CNN and RNN training. First, we divide the network into several modules and apply DFA within each module. Second, we apply DFA with sparse backward weights, which take the form of dilated convolution in the CNN case and sparse matrix multiplication in the RNN case. Additionally, the error propagation of the CNN becomes simpler through group convolution. Finally, hybrid DFA raises the accuracy of CNN and RNN training to the BP level while retaining the parallelism and hardware efficiency of the DFA algorithm.
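As background for the abstract above, the following minimal sketch (not from the paper) shows the core DFA update on a toy fully connected network: instead of back-propagating the error through the transposed forward weights, each hidden layer receives the output error through its own fixed random feedback matrix. All layer sizes, the tanh nonlinearity, the learning rate, and variable names are illustrative assumptions; the paper's contributions (module-wise application, sparse/dilated feedback, group convolution, and hybrid DFA) are refinements on top of this basic rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-hidden-layer network: 8 -> 16 -> 16 -> 4 (sizes are arbitrary).
W1 = rng.normal(0, 0.1, (16, 8))
W2 = rng.normal(0, 0.1, (16, 16))
W3 = rng.normal(0, 0.1, (4, 16))

# Fixed random feedback matrices: they project the output error (dim 4)
# directly to each hidden layer and are never trained.
B1 = rng.normal(0, 0.1, (16, 4))
B2 = rng.normal(0, 0.1, (16, 4))

def tanh_grad(a):
    return 1.0 - np.tanh(a) ** 2

def dfa_step(x, y_target, lr=0.01):
    global W1, W2, W3
    # Forward pass
    a1 = W1 @ x; h1 = np.tanh(a1)
    a2 = W2 @ h1; h2 = np.tanh(a2)
    y = W3 @ h2                  # linear output layer
    e = y - y_target             # output error (squared loss)

    # DFA: each hidden layer receives the output error through its own
    # fixed random matrix; no backward pass through W3 or W2.
    d2 = (B2 @ e) * tanh_grad(a2)
    d1 = (B1 @ e) * tanh_grad(a1)

    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)
    return 0.5 * float(e @ e)

# Usage: regress a random target from a random input.
x = rng.normal(size=8)
y_t = rng.normal(size=4)
for step in range(200):
    loss = dfa_step(x, y_t)
print(f"final loss: {loss:.4f}")
```

The sparse backward weight described in the abstract would correspond to zeroing most entries of B1 and B2 (structured, in the CNN case, as a dilated convolution).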


Related research

01/06/2019  Efficient Convolutional Neural Network Training with Direct Feedback Alignment
There were many algorithms to substitute the back-propagation (BP) in th...

12/21/2020  Training DNNs in O(1) memory with MEM-DFA using Random Matrices
This work presents a method for reducing memory consumption to a constan...

12/14/2022  Directional Direct Feedback Alignment: Estimating Backpropagation Paths for Efficient Learning on Neural Processors
The error Backpropagation algorithm (BP) is a key method for training de...

04/13/2016  Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
We discuss relations between Residual Networks (ResNet), Recurrent Neura...

05/14/2022  BackLink: Supervised Local Training with Backward Links
Empowered by the backpropagation (BP) algorithm, deep neural networks ha...

09/23/2022  Hebbian Deep Learning Without Feedback
Recent approximations to backpropagation (BP) have mitigated many of BP'...

12/09/2022  Decomposing a Recurrent Neural Network into Modules for Enabling Reusability and Replacement
Can we take a recurrent neural network (RNN) trained to translate betwee...
