Gradient Regularization Improves Accuracy of Discriminative Models

12/28/2017
by Dániel Varga, et al.

Regularizing the gradient norm of the output of a neural network with respect to its inputs is a powerful technique, first proposed by Drucker & LeCun (1991), who named it Double Backpropagation. The idea has been independently rediscovered several times since then, most often with the goal of making models robust against adversarial sampling. This paper presents evidence that gradient regularization can consistently and significantly improve classification accuracy on vision tasks, especially when the amount of training data is small. We introduce our regularizers as members of a broader class of Jacobian-based regularizers, and compare them theoretically and empirically. A straightforward objection against minimizing the gradient norm at the training points is that a locally optimal solution with small gradients at the training points may still vary sharply in other regions of the input space. We demonstrate through experiments on real and synthetic tasks that stochastic gradient descent is unable to find these locally optimal but globally unproductive solutions. Instead, it is forced to find solutions that generalize well.
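As a concrete illustration of the technique, the sketch below adds a squared input-gradient-norm penalty to a standard cross-entropy loss in PyTorch. The framework, the penalty weight lam, and the choice to penalize the gradient of the loss (rather than another Jacobian-based variant the paper compares) are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def double_backprop_loss(model, x, y, lam=0.01):
    """Cross-entropy plus a penalty on the squared norm of the gradient
    of the loss with respect to the inputs (Double Backpropagation)."""
    x = x.detach().clone().requires_grad_(True)     # track gradients w.r.t. inputs
    ce = F.cross_entropy(model(x), y)
    # First backward pass; create_graph=True keeps the graph so the
    # penalty term is itself differentiable w.r.t. the model parameters.
    (input_grad,) = torch.autograd.grad(ce, x, create_graph=True)
    penalty = input_grad.pow(2).sum() / x.shape[0]  # mean squared gradient norm
    return ce + lam * penalty

# Training-step sketch (model, optimizer, xb, yb are placeholders):
# loss = double_backprop_loss(model, xb, yb)
# optimizer.zero_grad()
# loss.backward()   # second backward pass, hence "double backpropagation"
# optimizer.step()

The second backward pass through the penalty is what gives the method its name: gradients of the input-gradient norm flow back into the model parameters along with the ordinary loss gradient.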


Related research

The Complexity of Finding Stationary Points with Stochastic Gradient Descent (10/04/2019): We study the iteration complexity of stochastic gradient descent (SGD) f...

Sampling-based Gradient Regularization for Capturing Long-Term Dependencies in Recurrent Neural Networks (06/24/2016): Vanishing (and exploding) gradients effect is a common problem for recur...

Reparametrizing gradient descent (10/09/2020): In this work, we propose an optimization algorithm which we call norm-ad...

Adapting Stepsizes by Momentumized Gradients Improves Optimization and Generalization (06/22/2021): Adaptive gradient methods, such as Adam, have achieved tremendous succes...

Explicit Regularization in Overparametrized Models via Noise Injection (06/09/2022): Injecting noise within gradient descent has several desirable features. ...

Borrowing From the Future: An Attempt to Address Double Sampling (12/01/2019): For model-free reinforcement learning, the main difficulty of stochastic...

Can Forward Gradient Match Backpropagation? (06/12/2023): Forward Gradients - the idea of using directional derivatives in forward...
