Visual interpretation of the robustness of Non-Negative Associated Gradient Projection Points over function minimizers in mini-batch sampled loss functions

03/20/2019
by Dominic Kafka, et al.

Mini-batch sub-sampling is likely here to stay, due to growing data demands, memory-limited computational resources such as graphics processing units (GPUs), and the dynamics of on-line learning. Sampling a new mini-batch at every loss evaluation brings a number of benefits, but also one significant drawback: the loss function becomes discontinuous. These discontinuities are generally not problematic when using fixed learning rates or learning rate schedules typical of subgradient methods. However, they hinder attempts to directly minimize the loss function by solving for critical points, since function minimizers find spurious minima induced by discontinuities, while critical points may not even exist. Therefore, finding function minimizers and critical points in stochastic optimization is ineffective. As a result, attention has been given to reducing the effect of these discontinuities by means such as gradient averaging or adaptive and dynamic sampling. This paper offers an alternative paradigm: recasting the optimization problem to instead find Non-Negative Associated Gradient Projection Points (NN-GPPs). We demonstrate that the NN-GPP interpretation of gradient information is more robust than critical points or minimizers, being less susceptible to sub-sampling-induced variance and eliminating spurious function minimizers. We conduct a visual investigation, comparing function value and gradient information for a variety of popular activation functions applied to a simple neural network training problem. Based on the improved description of true optima offered by NN-GPPs over minimizers, in particular when using smooth activation functions with high-curvature characteristics, we postulate that locating NN-GPPs can contribute significantly to automating neural network training.
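For intuition: roughly speaking, an NN-GPP along a descent direction d from the current point x is a step length at which the directional derivative of the sampled loss, grad L(x + alpha*d) . d, changes sign from negative to non-negative. Because only gradient signs are consulted, the sub-sampling-induced jumps in the function value itself play no role. The sketch below is a minimal illustration of this idea under stated assumptions, not the authors' algorithm: the names find_nn_gpp and dirderiv, and the noisy toy quadratic, are introduced here for illustration, with dirderiv standing in for a routine that evaluates the directional derivative on a freshly sampled mini-batch.

```python
import numpy as np


def find_nn_gpp(dirderiv, alpha_max=1.0, tol=1e-6, max_iter=50):
    """Bisect on directional-derivative signs to locate a step size where the
    projected gradient changes from negative to non-negative, i.e. a
    (stochastic) NN-GPP along the current search direction.

    dirderiv(alpha) is a hypothetical user-supplied hook returning
    grad L(x + alpha * d) . d evaluated on a freshly sampled mini-batch.
    """
    lo, hi = 0.0, alpha_max
    # Grow the bracket until the directional derivative turns non-negative.
    while dirderiv(hi) < 0.0 and hi < 1e6:
        lo, hi = hi, 2.0 * hi
    # Bisection uses only gradient signs, never (discontinuous) function values.
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if dirderiv(mid) < 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)


# Toy usage: a 1-D problem with its minimum at alpha = 0.7, whose directional
# derivative is perturbed at every call to mimic dynamic mini-batch sub-sampling.
rng = np.random.default_rng(0)


def noisy_dirderiv(alpha):
    # Directional derivative of (alpha - 0.7)^2 / 2 plus mini-batch-like noise.
    return (alpha - 0.7) + 0.05 * rng.standard_normal()


print(find_nn_gpp(noisy_dirderiv))  # close to 0.7 despite the sampling noise
```

Note that each call to dirderiv may see a different mini-batch, so the located point is only an estimate of the underlying NN-GPP; its variance shrinks as the sampling noise shrinks, which is the robustness property the paper examines visually.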

Related research

02/23/2020
Investigating the interaction between gradient-only line searches and different activation functions
Gradient-only line searches (GOLS) adaptively determine step sizes along...

11/04/2020
Which Minimizer Does My Neural Network Converge To?
The loss surface of an overparameterized neural network (NN) possesses m...

01/15/2020
Resolving learning rates adaptively by locating Stochastic Non-Negative Associated Gradient Projection Points using line searches
Learning rates in stochastic neural network training are currently deter...

05/23/2021
GOALS: Gradient-Only Approximations for Line Searches Towards Robust and Consistent Training of Deep Neural Networks
Mini-batch sub-sampling (MBSS) is favored in deep neural network trainin...

06/29/2020
Gradient-only line searches to automatically determine learning rates for a variety of stochastic training algorithms
Gradient-only and probabilistic line searches have recently reintroduced...

09/15/2019
Empirical study towards understanding line search approximations for training neural networks
Choosing appropriate step sizes is critical for reducing the computation...

05/30/2019
Exploiting Uncertainty of Loss Landscape for Stochastic Optimization
We introduce novel variants of momentum by incorporating the variance of...
