Layer-wise Feedback Propagation

08/23/2023
by Leander Weber, et al.

In this paper, we present Layer-wise Feedback Propagation (LFP), a novel training approach for neural-network-like predictors that utilizes explainability, specifically Layer-wise Relevance Propagation (LRP), to assign rewards to individual connections based on their respective contributions to solving a given task. This differs from traditional gradient descent, which updates parameters towards an estimated loss minimum. LFP distributes a reward signal throughout the model without the need for gradient computations. It then strengthens structures that receive positive feedback while reducing the influence of structures that receive negative feedback. We establish the convergence of LFP theoretically and empirically, and demonstrate its effectiveness in achieving performance comparable to gradient descent on various models and datasets. Notably, LFP overcomes certain limitations associated with gradient-based methods, such as reliance on meaningful derivatives. We further investigate how different LRP rules can be extended to LFP, what their effects are on training, and potential applications such as training models without meaningful derivatives, e.g., step-function-activated Spiking Neural Networks (SNNs), or transfer learning, where existing knowledge can be utilized efficiently.

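Since the abstract describes LFP only at a high level, the following minimal NumPy sketch is an illustration under stated assumptions rather than the paper's actual algorithm: it attributes a per-output reward to individual connections using an LRP-epsilon-like decomposition and then scales each weight up or down according to the feedback it received. The function name `lfp_step`, the stabilizer `eps`, the learning rate `lr`, and the multiplicative update are hypothetical choices made for this example only.

```python
import numpy as np

def lfp_step(a, W, reward_out, eps=1e-6, lr=0.1):
    """One hypothetical LFP-style update for a single dense layer.

    a:          layer input activations, shape (n_in,)
    W:          weight matrix, shape (n_in, n_out)
    reward_out: feedback assigned to each output neuron, shape (n_out,)
    Returns the updated weights and the feedback passed on to the layer input.
    """
    z = a @ W                                    # pre-activations, shape (n_out,)
    z_stab = np.where(z >= 0, z + eps, z - eps)  # sign-matched stabilizer, as in LRP-epsilon
    contrib = a[:, None] * W                     # contribution of each connection, (n_in, n_out)
    feedback_w = (contrib / z_stab) * reward_out # reward attributed to each connection
    # Strengthen connections that received positive feedback and weaken those
    # that received negative feedback; no loss gradient is computed anywhere.
    W_new = W * (1.0 + lr * np.tanh(feedback_w))
    reward_in = feedback_w.sum(axis=1)           # feedback propagated to the previous layer
    return W_new, reward_in

# Toy usage: a 3-input, 2-output layer that is rewarded on output 0 and
# penalized on output 1.
rng = np.random.default_rng(0)
a = rng.random(3)
W = rng.normal(size=(3, 2))
W, reward_in = lfp_step(a, W, np.array([1.0, -1.0]))
```

Applying such a step layer by layer, with each layer's `reward_in` becoming the next layer's `reward_out`, would propagate feedback through the whole network without any derivative computation, which is the property the abstract highlights for step-function-activated SNNs.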

research
09/30/2019
On the convergence of gradient descent for two layer neural networks
It has been shown that gradient descent can yield the zero training loss...

research
05/23/2019
Blockwise Adaptivity: Faster Training and Better Generalization in Deep Learning
Stochastic methods with coordinate-wise adaptive stepsize (such as RMSpr...

research
10/11/2022
Component-Wise Natural Gradient Descent – An Efficient Neural Network Optimization
Natural Gradient Descent (NGD) is a second-order neural network training...

research
06/28/2016
Alternating Back-Propagation for Generator Network
This paper proposes an alternating back-propagation algorithm for learni...

research
05/30/2022
Agnostic Physics-Driven Deep Learning
This work establishes that a physical system can perform statistical lea...

research
03/25/2021
Training Neural Networks Using the Property of Negative Feedback to Inverse a Function
With high forward gain, a negative feedback system has the ability to pe...

research
02/13/2023
Gradient-Based Automated Iterative Recovery for Parameter-Efficient Tuning
Pretrained large language models (LLMs) are able to solve a wide variety...
