Early Stopping in Deep Networks: Double Descent and How to Eliminate it

by Reinhard Heckel, et al.

Over-parameterized models, in particular deep networks, often exhibit a double descent phenomenon, where, as a function of model size, the error first decreases, then increases, and finally decreases again. This intriguing double descent behavior also occurs as a function of training epochs and has been conjectured to arise because training epochs control the model complexity. In this paper, we show that such epoch-wise double descent arises for a different reason: it is caused by a superposition of two or more bias-variance tradeoffs that arise because different parts of the network are learned at different times, and eliminating this by properly scaling the stepsizes can significantly improve early-stopping performance. We show this analytically for (i) linear regression, where differently scaled features give rise to a superposition of bias-variance tradeoffs, and (ii) a two-layer neural network, where the first and second layers each govern a bias-variance tradeoff. Inspired by this theory, we study a five-layer convolutional network empirically and show that eliminating epoch-wise double descent by adjusting the stepsizes of different layers significantly improves early-stopping performance.

