Do We Need Zero Training Loss After Achieving Zero Training Error?

02/20/2020
by   Takashi Ishida, et al.

Overparameterized deep networks have the capacity to memorize training data with zero training error. Even after memorization, the training loss continues to approach zero, making the model overconfident and degrading test performance. Since existing regularizers do not directly aim to avoid zero training loss, they often fail to maintain a moderate level of training loss, ending up with a loss that is too small or too large. We propose a direct solution called flooding that intentionally prevents further reduction of the training loss once it reaches a reasonably small value, which we call the flooding level. Our approach makes the loss float around the flooding level by performing mini-batched gradient descent as usual, but gradient ascent whenever the training loss falls below the flooding level. This can be implemented with one line of code and is compatible with any stochastic optimizer and other regularizers. With flooding, the model continues to "random walk" with the same non-zero training loss, and we expect it to drift into an area with a flat loss landscape that leads to better generalization. We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.
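The one-line implementation itself is not shown on this page. Below is a minimal PyTorch sketch of how the flooding objective described above could be written, assuming a flooding level b and a placeholder classification setup (the model, optimizer, learning rate, and b value are illustrative, not from the paper). Taking the absolute distance of the loss from b and adding b back flips the gradient sign whenever the loss dips below b, which turns the usual descent update into gradient ascent, as the abstract describes.

import torch
import torch.nn as nn

# Hypothetical setup: any model, optimizer, and loss would do.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
b = 0.05  # flooding level: a small positive constant (assumed value)

def training_step(inputs, targets):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    # Flooding: |loss - b| + b equals loss when loss > b, but negates the
    # gradient when loss < b, so the optimizer performs gradient ascent
    # and the training loss floats around the flooding level b.
    flooded_loss = (loss - b).abs() + b
    flooded_loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data (hypothetical shapes):
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))
print(training_step(x, y))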


Related research

06/25/2021 - Jitter: Random Jittering Loss Function
Regularization plays a vital role in machine learning optimization. One ...

12/04/2020 - When does gradient descent with logistic loss find interpolating two-layer networks?
We study the training of finite-width two-layer smoothed ReLU networks f...

09/21/2022 - Deep Double Descent via Smooth Interpolation
Overparameterized deep networks are known to be able to perfectly fit th...

02/18/2022 - Geometric Regularization from Overparameterization explains Double Descent and other findings
The volume of the distribution of possible weight configurations associa...

12/05/2018 - Uncertainty Sampling is Preconditioned Stochastic Gradient Descent on Zero-One Loss
Uncertainty sampling, a popular active learning algorithm, is used to re...

05/30/2019 - Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience
The ability of overparameterized deep networks to generalize well has be...

05/20/2023 - Loss Spike in Training Neural Networks
In this work, we study the mechanism underlying loss spikes observed dur...
