Target Propagation via Regularized Inversion

12/02/2021
by Vincent Roulet, et al.

Target Propagation (TP) algorithms compute targets instead of gradients along a neural network and propagate them backward in a way that is similar to, yet distinct from, gradient back-propagation (BP). The idea was first presented as a perturbative alternative to back-propagation that may achieve greater accuracy in gradient evaluation when training multi-layer neural networks (LeCun et al., 1989). However, TP has remained more of a template algorithm with many variations than a well-identified algorithm. Revisiting the insights of LeCun et al. (1989) and, more recently, of Lee et al. (2015), we present a simple version of target propagation based on a regularized inversion of network layers that is easily implementable in a differentiable programming framework. We compare its computational complexity to that of BP and delineate the regimes in which TP can be attractive compared to BP. We show how our TP can be used to train recurrent neural networks with long sequences on various sequence modeling problems. The experimental results underscore the importance of regularization in TP in practice.
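To make the core idea concrete, the backward step of regularized-inversion TP can be sketched for a single linear layer f(h) = Wh + b: given a target t for the layer's output, the target for its input minimizes the inversion error plus a regularizer that pulls the solution toward the activation h_prev observed in the forward pass. This is an illustrative sketch under those assumptions, not the paper's actual implementation; the function name and the exact objective are ours.

```python
import numpy as np

def regularized_inverse_target(W, b, h_prev, t, r):
    """Backward target for a linear layer f(h) = W @ h + b.

    Solves the regularized inversion problem
        h* = argmin_h ||W h + b - t||^2 + r * ||h - h_prev||^2,
    whose closed-form solution satisfies the normal equations
        (W^T W + r I) h* = W^T (t - b) + r * h_prev.
    The regularizer r keeps h* close to the forward activation h_prev,
    which stabilizes the inversion when W is ill-conditioned.
    """
    d = W.shape[1]
    A = W.T @ W + r * np.eye(d)
    rhs = W.T @ (t - b) + r * h_prev
    return np.linalg.solve(A, rhs)
```

As r tends to zero (with W square and invertible), this recovers the exact inverse W^{-1}(t - b); as r grows, the target collapses onto h_prev, so no update signal is propagated. For a nonlinear layer the same objective would typically be minimized iteratively rather than in closed form.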

Related research

- 03/02/2017, "Belief Propagation in Conditional RBMs for Structured Prediction": Restricted Boltzmann machines (RBMs) and conditional RBMs (CRBMs) are po...
- 08/18/2023, "Tensor-Compressed Back-Propagation-Free Training for (Physics-Informed) Neural Networks": Backward propagation (BP) is widely used to compute the gradients in neu...
- 06/21/2019, "Fully Decoupled Neural Network Learning Using Delayed Gradients": Using the back-propagation (BP) to train neural networks requires a sequ...
- 07/23/2019, "BPPSA: Scaling Back-propagation by Parallel Scan Algorithm": In an era when the performance of a single compute device plateaus, soft...
- 01/11/2023, "Rig Inversion by Training a Differentiable Rig Function": Rig inversion is the problem of creating a method that can find the rig ...
- 05/20/2015, "A Max-Sum algorithm for training discrete neural networks": We present an efficient learning algorithm for the problem of training n...
