Memorized Sparse Backpropagation

05/24/2019
by Zhiyuan Zhang, et al.

Neural network learning is typically slow because backpropagation must compute full gradients and propagate them across multiple layers. Although existing work has successfully accelerated backpropagation through sparseness, its theoretical characteristics remain unexplored, and we empirically find that it suffers from the loss of information contained in unpropagated gradients. To tackle these problems, we present a unified sparse backpropagation framework and provide a detailed analysis of its theoretical characteristics. The analysis reveals that, when applied to a multilayer perceptron, our framework essentially performs gradient descent with an estimated gradient close enough to the true gradient, yielding convergence in probability under certain conditions. Furthermore, we propose a simple yet effective algorithm named memorized sparse backpropagation (MSBP), which remedies the information loss by storing unpropagated gradients in memory and reusing them in subsequent learning steps. Experiments demonstrate that MSBP effectively alleviates the information loss of traditional sparse backpropagation while achieving comparable acceleration.
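As an illustration only (the abstract does not give the paper's exact formulation), the following minimal NumPy sketch shows the core idea as we read it: at each backward step, only the top-k gradient entries are propagated, while the remaining, unpropagated entries are accumulated in a memory buffer and added back at the next step. The names `msbp_step`, `memory`, and `k` are hypothetical choices of ours, not identifiers from the paper.

```python
import numpy as np

def msbp_step(grad, memory, k):
    """One memorized sparse backpropagation step (illustrative sketch, not the paper's code).

    grad   : full gradient arriving at this layer for the current step
    memory : unpropagated gradient mass remembered from previous steps
    k      : number of gradient entries to propagate this step
    """
    # Combine the new gradient with the remembered (previously unpropagated) gradient.
    total = grad + memory

    # Keep only the k entries with the largest magnitude.
    flat = total.ravel()
    keep = np.argsort(np.abs(flat))[-k:]
    mask = np.zeros(flat.shape, dtype=bool)
    mask[keep] = True

    # Sparse gradient that is actually propagated / used for the update.
    sparse_grad = np.where(mask.reshape(total.shape), total, 0.0)

    # Dropped entries are stored in memory for the next learning step.
    new_memory = total - sparse_grad
    return sparse_grad, new_memory


# Toy usage: plain SGD on a parameter vector with the sparsified gradient.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
memory = np.zeros_like(w)
for _ in range(3):
    grad = rng.normal(size=10)      # stand-in for a true gradient
    sparse_grad, memory = msbp_step(grad, memory, k=3)
    w -= 0.1 * sparse_grad
```

Without the `memory` term, this reduces to ordinary top-k sparse backpropagation, which is exactly where the information loss the paper targets comes from.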


