Reverse Back Propagation to Make Full Use of Derivative

02/13/2022
by Weiming Xiong, et al.

The development of the back-propagation algorithm is a landmark in neural networks. We present an approach that runs back-propagation a second time, reversing the traditional back-propagation process in order to optimize an input loss at the input end of the network, yielding better results without any extra cost at inference time. We further analyze the principles behind the method, discuss its advantages and disadvantages, and reformulate the weight initialization strategy to suit it. Experiments on MNIST, CIFAR10, and CIFAR100 show that our approach tolerates a wider range of learning rates and learns better than vanilla back-propagation.
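To make the idea concrete, below is a minimal sketch, in PyTorch, of one possible reading of the approach: back-propagate the task loss all the way to the input, treat a penalty on that input gradient as an "input loss", and back-propagate a second time so the weights are updated with respect to both objectives. The toy MLP, the squared-norm form of the input loss, and the weight input_loss_weight are illustrative assumptions, not necessarily the paper's exact formulation.

    # Hedged sketch: a second back-propagation through the input gradient.
    # The squared-norm input loss and input_loss_weight are assumptions for
    # illustration; the paper's actual input loss may differ.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    def training_step(x, y, input_loss_weight=0.1):
        x = x.requires_grad_(True)  # track the derivative at the input end
        logits = model(x)
        task_loss = criterion(logits, y)

        # First back-propagation: gradient of the task loss w.r.t. the input,
        # kept in the graph so it can be differentiated again.
        (input_grad,) = torch.autograd.grad(task_loss, x, create_graph=True)

        # Hypothetical input loss: squared norm of the gradient at the input.
        input_loss = input_grad.pow(2).sum()

        # Second back-propagation: the combined objective updates the weights,
        # so the reverse pass also shapes the derivative seen at the input.
        total_loss = task_loss + input_loss_weight * input_loss
        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()
        return task_loss.item(), input_loss.item()

    # Example usage with random data standing in for a flattened MNIST batch.
    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))
    print(training_step(x, y))

Because only the training objective changes, the forward pass at inference is identical to that of a conventionally trained network, which is consistent with the claim of no extra cost at inference time.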

