A Robust Backpropagation-Free Framework for Images

06/03/2022
by Timothy Zee et al.

While current deep learning algorithms have been successful for a wide variety of artificial intelligence (AI) tasks, including those involving structured image data, they raise deep neurophysiological concerns because they rely on gradients computed by backpropagation of errors (backprop) to obtain synaptic weight adjustments, and are therefore biologically implausible. We present a more biologically plausible approach, the error-kernel driven activation alignment (EKDAA) algorithm, which trains convolutional neural networks (CNNs) using locally derived error-transmission kernels and error maps. We demonstrate the efficacy of EKDAA on visual recognition with the Fashion MNIST, CIFAR-10, and SVHN benchmarks, and conduct black-box robustness tests on adversarial examples derived from these datasets. Furthermore, we present results for a CNN trained with a non-differentiable activation function. All recognition results nearly match those of backprop while exhibiting greater adversarial robustness.
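To make the idea concrete, the following is a minimal illustrative sketch, not the authors' EKDAA implementation: a two-layer convolutional toy in NumPy in which a fixed random error-transmission kernel (here called E2, an assumed stand-in) maps the output error map into a local error map for the hidden layer, so no gradients are backpropagated through the forward weights and no activation derivative is ever needed. All function and variable names below are illustrative assumptions, not from the paper.

# Sketch only: fixed random error-transmission kernel in place of backprop.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D cross-correlation of array x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def sign_act(z):
    # Non-differentiable activation; the local rule below never uses its derivative.
    return np.sign(z)

# Learned forward kernels and a fixed random error-transmission kernel.
K1 = rng.normal(scale=0.1, size=(3, 3))
K2 = rng.normal(scale=0.1, size=(3, 3))
E2 = rng.normal(scale=0.1, size=(3, 3))   # never updated

def local_step(x, target, lr=1e-3):
    """One training step driven by local error maps rather than backprop."""
    global K1, K2
    a1 = sign_act(conv2d(x, K1))          # hidden activation map
    a2 = sign_act(conv2d(a1, K2))         # output activation map
    e2 = a2 - target                      # output error map
    # Transmit the error to the hidden layer through the fixed kernel E2
    # ("full" correlation via zero padding), yielding a local error map with
    # the same shape as a1. No forward-weight transpose is used anywhere.
    pad = E2.shape[0] - 1
    e1 = conv2d(np.pad(e2, pad), E2)
    # Hebbian-style updates: correlate each layer's input with its error map.
    K2 -= lr * conv2d(a1, e2)
    K1 -= lr * conv2d(x, e1)
    return float(np.mean(e2 ** 2))

# Toy usage: a random 8x8 "image" and an all-ones 4x4 target map. This loop
# only exercises the update structure; it makes no claim about convergence
# or about matching the paper's results.
x = rng.normal(size=(8, 8))
target = np.ones((4, 4))
for step in range(5):
    print(f"step {step}: output error {local_step(x, target):.4f}")

The design point the sketch highlights is that the hidden layer's teaching signal comes from a kernel that is separate from, and never tied to, the forward weights, which is what allows training to proceed even when the activation function has no usable derivative.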
