Exploiting Uncertainty of Loss Landscape for Stochastic Optimization

05/30/2019
by Vineeth S. Bhaskara, et al.

We introduce novel variants of momentum that incorporate the variance of the stochastic loss function. The variance characterizes the confidence, or uncertainty, in the local features of the averaged loss surface across the i.i.d. subsets of the training data defined by the mini-batches. We show two applications of the gradient of the variance of the loss function. First, as a bias to the conventional momentum update, it encourages conformity of the local features of the loss function (e.g., local minima) across mini-batches, improving generalization and the cumulative training progress made per epoch. Second, it provides an alternative direction for "exploration" in the parameter space, especially for non-convex objectives, that exploits both the optimistic and pessimistic views of the loss function in the face of uncertainty. We also introduce a novel data-driven stochastic regularization technique, applied through the parameter update rule, that is model-agnostic and compatible with arbitrary architectures. We further establish connections to probability distributions over loss functions and to the REINFORCE policy gradient update with a baseline in RL. Finally, we incorporate the proposed momentum variants into Adam and empirically show, in experiments on the MNIST and CIFAR-10 datasets, that our methods improve the rate of convergence of training.
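
The abstract does not state the exact update rule, so the following is only a minimal NumPy sketch of the first idea: a momentum step biased by the gradient of the variance of the loss across mini-batches. The function name, the hyperparameters (beta, lam), and the covariance-based estimator of the variance gradient (using ∇Var[L] = 2·Cov(L, ∇L)) are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def variance_biased_momentum_step(params, velocity, losses, grads,
                                  lr=0.01, beta=0.9, lam=0.1):
    """One illustrative momentum step biased by the gradient of the
    variance of the loss across mini-batches (not the paper's exact rule).

    losses : shape (k,)   -- loss value on each of k mini-batches
    grads  : shape (k, d) -- gradient of the loss on each mini-batch,
                             all evaluated at the current `params`
    """
    # Gradient of the averaged (mean) loss across mini-batches.
    g_mean = grads.mean(axis=0)

    # Gradient of Var[L] = 2 * Cov(L, grad L), estimated over the k mini-batches.
    grad_var = 2.0 * ((losses - losses.mean())[:, None]
                      * (grads - g_mean)).mean(axis=0)

    # Bias the usual momentum direction with the variance gradient, so the
    # update favors regions where the mini-batches agree (low variance).
    velocity = beta * velocity + g_mean + lam * grad_var
    return params - lr * velocity, velocity

# Toy usage with random data (k mini-batches, d parameters).
rng = np.random.default_rng(0)
k, d = 8, 5
params, velocity = rng.normal(size=d), np.zeros(d)
params, velocity = variance_biased_momentum_step(
    params, velocity, losses=rng.random(k), grads=rng.normal(size=(k, d)))
```

With lam = 0 this reduces to an ordinary (heavy-ball style) momentum step on the averaged loss; the sign and magnitude of lam would control whether the variance term acts as the conformity bias or the exploration direction described above.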
