Gradient target propagation

10/19/2018
by Tiago de Souza Farias, et al.

We report a learning rule for neural networks that computes how much each neuron should contribute to minimizing a given cost function by estimating its target value. Through theoretical analysis, we show that this learning rule contains backpropagation, Hebbian learning, and additional terms. We also give a general technique for weight initialization. Our results are at least as good as those obtained with backpropagation. The neural networks are trained and tested on three problems: the MNIST, MNIST-Fashion, and CIFAR-10 datasets. The associated code is available at https://github.com/tiago939/target.
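The abstract only outlines the idea of estimating a target value for each neuron. As a rough illustration (not the authors' exact rule), the NumPy sketch below trains a tiny two-layer network by giving each layer a target equal to its current activation nudged along the negative gradient of the loss, then updating each layer's weights locally so its output moves toward that target. The layer sizes, the step sizes eta_target and eta_weights, and the sigmoid/linear architecture are assumptions made for this example, not values taken from the paper.

# Illustrative sketch only (not the paper's exact rule): each layer receives a
# "target" activation, defined here as the current activation nudged along the
# negative gradient of the loss, and the layer's weights are updated locally to
# move its output toward that target.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: map 4-dimensional inputs to 2-dimensional outputs (assumed sizes).
X = rng.normal(size=(32, 4))
Y = rng.normal(size=(32, 2))

# Hypothetical layer shapes and step sizes, chosen only for the example.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 2))
eta_target = 0.5   # how far each target is moved along the negative gradient
eta_weights = 0.1  # local weight-update step size

for epoch in range(200):
    # Forward pass.
    h = sigmoid(X @ W1)   # hidden activation
    y = h @ W2            # linear output layer
    loss = 0.5 * np.mean(np.sum((y - Y) ** 2, axis=1))

    # Output target: move the output a step toward the label.
    y_target = y - eta_target * (y - Y)

    # Hidden target: nudge h along the negative gradient of the loss w.r.t. h.
    grad_h = (y - Y) @ W2.T * h * (1.0 - h)
    h_target = h - eta_target * grad_h

    # Local updates: each layer regresses its own output onto its own target.
    W2 -= eta_weights * h.T @ (y - y_target) / len(X)
    W1 -= eta_weights * X.T @ ((h - h_target) * h * (1.0 - h)) / len(X)

print(f"final loss: {loss:.4f}")

With targets defined this way, each local update reduces to a scaled backpropagation step, which is consistent with the abstract's claim that the rule contains backpropagation as a special case.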


Related research

01/31/2021 · PyTorch-Hebbian: facilitating local learning in a deep learning framework
Recently, unsupervised local learning, based on Hebb's idea that change ...

08/29/2016 · About Learning in Recurrent Bistable Gradient Networks
Recurrent Bistable Gradient Networks are attractor based neural networks...

06/28/2023 · Time Regularization in Optimal Time Variable Learning
Recently, optimal time variable learning in deep neural networks (DNNs) ...

08/09/2019 · On the Adversarial Robustness of Neural Networks without Weight Transport
Neural networks trained with backpropagation, the standard algorithm of ...

03/17/2023 · A Two-Step Rule for Backpropagation
We present a simplified computational rule for the back-propagation form...

01/30/2019 · Direct Feedback Alignment with Sparse Connections for Local Learning
Recent advances in deep neural networks (DNNs) owe their success to trai...

06/19/2017 · meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting
We propose a simple yet effective technique for neural network learning....
