Optimization without Backpropagation

09/13/2022
by Gabriel Belouze, et al.

Forward gradients were recently introduced as a way to bypass backpropagation in automatic differentiation while retaining unbiased estimators of the true gradients. We derive an optimality condition for the best-approximating forward gradients, which yields mathematical insights suggesting that optimization in high dimension is challenging with forward gradients. Our extensive experiments on test functions support this claim.
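
For readers unfamiliar with the idea, a forward gradient (in the sense of "Gradients without Backpropagation", listed below) evaluates a single forward-mode directional derivative along a random tangent v ~ N(0, I) and returns (∇f(θ)·v) v, which is an unbiased estimate of ∇f(θ). The following is a minimal sketch in JAX; the function names and the quadratic test objective are illustrative assumptions, not code from the paper.

```python
import jax
import jax.numpy as jnp

def forward_gradient(f, theta, key):
    """Unbiased forward-gradient estimate of grad f(theta).

    Samples a random tangent v ~ N(0, I) and uses one forward-mode
    JVP to get the directional derivative (grad f . v); the estimate
    (grad f . v) * v has expectation grad f(theta) over v.
    """
    v = jax.random.normal(key, theta.shape)
    _, directional = jax.jvp(f, (theta,), (v,))  # forward mode only, no reverse pass
    return directional * v

# Illustrative usage: gradient descent on a simple quadratic test function.
def f(x):
    return jnp.sum(x ** 2)

key = jax.random.PRNGKey(0)
theta = jnp.ones(5)
for step in range(100):
    key, subkey = jax.random.split(key)
    theta = theta - 0.01 * forward_gradient(f, theta, subkey)
```

Because only jax.jvp is used, no reverse pass or storage of intermediate activations is required; the trade-off is that the variance of the estimator grows with the dimension of θ, which is the difficulty in high dimension that the abstract refers to.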


Related research

Gradients without Backpropagation (02/17/2022)
Using backpropagation to compute gradients of objective functions for op...

Emergent representations in networks trained with the Forward-Forward algorithm (05/26/2023)
The Backpropagation algorithm, widely used to train neural networks, has...

Learning representations by reusing backpropagation gradients (12/06/2020)
This work proposes an algorithm for taking advantage of backpropagation ...

Can Forward Gradient Match Backpropagation? (06/12/2023)
Forward Gradients - the idea of using directional derivatives in forward...

Exact Gradient Computation for Spiking Neural Networks Through Forward Propagation (10/18/2022)
Spiking neural networks (SNN) have recently emerged as alternatives to t...

GuideBP: Guiding Backpropagation Through Weaker Pathways of Parallel Logits (04/23/2021)
Convolutional neural networks often generate multiple logits and use sim...

Memorized Sparse Backpropagation (05/24/2019)
Neural network learning is typically slow since backpropagation needs to...
