
Reparametrizing gradient descent

10/09/2020
by David Sprunger, et al.

In this work, we propose an optimization algorithm which we call norm-adapted gradient descent. Like Adam or Adagrad, it adapts the learning rate of stochastic gradient descent at each iteration. However, rather than using statistical properties of observed gradients, norm-adapted gradient descent relies on a first-order estimate of the effect of a standard gradient descent update step, much like the Newton-Raphson method in many dimensions. Our algorithm can also be compared to quasi-Newton methods, but we seek roots of the loss rather than stationary points. Seeking roots is justified by the fact that, for models with sufficient capacity measured by nonnegative loss functions, roots coincide with global optima. We present several experiments using our algorithm; the results suggest norm-adapted descent is particularly strong in regression settings but is also capable of training classifiers.
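The abstract gives no pseudocode, so the following is a minimal sketch of the general idea it describes: a Newton-Raphson-style root-seeking update for a nonnegative loss, where the learning rate is adapted using the gradient norm. The update rule, the function name norm_adapted_step, and the max_lr safeguard are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def norm_adapted_step(params, loss_fn, grad_fn, max_lr=1.0, eps=1e-12):
    """One hypothetical norm-adapted update (illustrative assumption).

    Choose the step size so that a first-order estimate of the loss after a
    plain gradient step reaches zero (a root), in the spirit of Newton-Raphson:
        L(theta - a*g) ~ L(theta) - a*||g||^2 = 0   =>   a = L(theta) / ||g||^2
    """
    loss = loss_fn(params)
    grad = grad_fn(params)
    # Root-seeking step size, capped for safety and guarded against zero gradients.
    lr = min(max_lr, loss / (np.dot(grad, grad) + eps))
    return params - lr * grad

# Toy usage: least-squares regression, where a root of the loss is a global optimum.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

loss_fn = lambda w: 0.5 * np.mean((X @ w - y) ** 2)
grad_fn = lambda w: X.T @ (X @ w - y) / len(y)

w = np.zeros(3)
for _ in range(200):
    w = norm_adapted_step(w, loss_fn, grad_fn)
```

In this sketch the step size shrinks automatically as the loss approaches its root at zero, without any gradient statistics being tracked across iterations.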

