Early Stopping in Deep Networks: Double Descent and How to Eliminate it

07/20/2020
by Reinhard Heckel, et al.

Over-parameterized models, in particular deep networks, often exhibit a double descent phenomenon: as a function of model size, the error first decreases, then increases, and then decreases again. This intriguing double descent behavior also occurs as a function of training epochs and has been conjectured to arise because training epochs control the model complexity. In this paper, we show that such epoch-wise double descent arises for a different reason: it is caused by a superposition of two or more bias-variance tradeoffs that arise because different parts of the network are learned at different times. Eliminating this superposition by properly scaling the stepsizes can significantly improve the early stopping performance. We show this analytically for i) linear regression, where differently scaled features give rise to a superposition of bias-variance tradeoffs, and for ii) a two-layer neural network, where the first and second layers each govern a bias-variance tradeoff. Inspired by this theory, we study a five-layer convolutional network empirically and show that eliminating epoch-wise double descent by adjusting the stepsizes of different layers significantly improves the early stopping performance.
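To make the claimed mechanism concrete, here is a minimal synthetic sketch (our illustration, not the authors' code) of setting i): linear regression with two groups of differently scaled features. Under plain gradient descent the two groups are fit on very different timescales, so the test risk superimposes two bias-variance tradeoffs and can descend, rise, and descend again over epochs; rescaling the per-feature stepsizes (here an assumed 1/scale^2 preconditioning) puts both groups on one timescale. All constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20

# Two groups of features with very different scales.
scales = np.concatenate([np.full(d // 2, 10.0), np.full(d // 2, 0.1)])
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d)) * scales          # columns scaled per group
    y = X @ w_true + 0.5 * rng.normal(size=n)     # noisy labels
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

def gd_test_risk(lr, epochs=2000):
    """Full-batch gradient descent on squared loss; lr is a scalar or a per-feature vector.
    Returns the test risk after each epoch."""
    w = np.zeros(d)
    risks = []
    for _ in range(epochs):
        grad = X_tr.T @ (X_tr @ w - y_tr) / n_train
        w = w - lr * grad
        risks.append(np.mean((X_te @ w - y_te) ** 2))
    return np.array(risks)

# Uniform stepsize: the large-scale features are fit within a few epochs, the
# small-scale ones only much later, so the risk curve over epochs is a superposition
# of two bias-variance tradeoffs and can be non-monotone.
risk_uniform = gd_test_risk(1e-3)

# Per-feature stepsizes proportional to 1/scale^2 (an assumption in this sketch)
# equalize the convergence speed of both groups, aligning their tradeoffs.
risk_scaled = gd_test_risk(0.1 / scales ** 2)

print("best early-stopping test risk, uniform stepsize:", risk_uniform.min())
print("best early-stopping test risk, scaled stepsizes:", risk_scaled.min())
```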
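Along the same lines, a hypothetical sketch of the remedy for the five-layer convolutional experiment: give each layer its own stepsize via optimizer parameter groups so that the layers' bias-variance tradeoffs play out on comparable timescales. The architecture and the per-layer scaling factors below are illustrative assumptions, not the paper's exact configuration; in practice they would be tuned, e.g. on a validation set.

```python
import torch
import torch.nn as nn

# A generic five-layer convolutional network (illustrative only).
layers = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
    nn.Sequential(nn.Flatten(), nn.Linear(512, 10)),
)

base_lr = 0.01
# Hypothetical per-layer scaling factors (assumptions, not the paper's values).
layer_scales = [1.0, 1.0, 0.5, 0.25, 0.1]

# One optimizer parameter group per layer block, each with its own learning rate.
param_groups = [
    {"params": block.parameters(), "lr": base_lr * s}
    for block, s in zip(layers, layer_scales)
]
optimizer = torch.optim.SGD(param_groups, lr=base_lr, momentum=0.9)

# Training then proceeds as usual, with early stopping on a held-out set.
```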


Related research

06/03/2022 · Regularization-wise double descent: Why it occurs and how to eliminate it
The risk of overparameterized models, in particular deep neural networks...

12/16/2019 · More Data Can Hurt for Linear Regression: Sample-wise Double Descent
In this expository note we describe a surprising phenomenon in overparam...

08/26/2021 · When and how epochwise double descent happens
Deep neural networks are known to exhibit a 'double descent' behavior as...

08/05/2023 · ApproBiVT: Lead ASR Models to Generalize Better Using Approximated Bias-Variance Tradeoff Guided Early Stopping and Checkpoint Averaging
The conventional recipe for Automatic Speech Recognition (ASR) models is...

05/25/2023 · Double Descent of Discrepancy: A Task-, Data-, and Model-Agnostic Phenomenon
In this paper, we studied two identically-trained neural networks (i.e. ...

10/22/2021 · Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
Recent evidence has shown the existence of a so-called double-descent an...

03/01/2022 · Contrasting random and learned features in deep Bayesian linear regression
Understanding how feature learning affects generalization is among the f...
