On Optimal Early Stopping: Over-informative versus Under-informative Parametrization

02/20/2022
by Ruoqi Shen, et al.

Early stopping is a simple and widely used method to prevent over-training in neural networks. We develop theoretical results that reveal how the optimal early stopping time depends on the model dimension and the sample size for certain linear models. Our results demonstrate two very different behaviors depending on whether the model dimension exceeds the number of features or falls below it. While most previous work on linear models focuses on the latter setting, we observe that in common deep learning tasks the model dimension often exceeds the number of features arising from the data, and we propose a model to study this setting. We demonstrate experimentally that our theoretical results on the optimal early stopping time correspond to the training process of deep neural networks.
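The phenomenon the abstract describes can be illustrated with a minimal sketch (not the paper's exact model): gradient descent on an overparameterized linear regression problem (dimension d larger than sample size n), where we track held-out error along the training path and record the iterate that minimizes it. All dimensions, noise levels, and the learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: overparameterized linear regression, d > n.
n, d = 50, 200
X = rng.normal(size=(n, d)) / np.sqrt(d)
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)  # noisy labels

# Held-out data to estimate generalization error along the training path.
X_test = rng.normal(size=(1000, d)) / np.sqrt(d)
y_test = X_test @ w_true

def early_stopped_gd(X, y, X_test, y_test, lr=0.1, max_iters=2000):
    """Run gradient descent on squared loss; return the iteration with the
    lowest held-out error (the empirical 'optimal early stopping time')."""
    w = np.zeros(X.shape[1])
    best_err, best_t, best_w = np.inf, 0, w.copy()
    for t in range(1, max_iters + 1):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        err = np.mean((X_test @ w - y_test) ** 2)
        if err < best_err:
            best_err, best_t, best_w = err, t, w.copy()
    return best_t, best_err, best_w

t_star, err_star, w_star = early_stopped_gd(X, y, X_test, y_test)
print(f"optimal stopping time ~ iteration {t_star}, held-out MSE {err_star:.3f}")
```

Because d > n, running gradient descent to convergence would interpolate the noisy training labels; stopping at `t_star` typically yields lower held-out error than the fully trained solution, which is the trade-off the paper analyzes as a function of d and n.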


Related research

10/19/2015 · NYTRO: When Subsampling Meets Early Stopping
Early stopping is a well known approach to reduce the time complexity fo...

11/29/2021 · Being Patient and Persistent: Optimizing An Early Stopping Strategy for Deep Learning in Profiled Attacks
The absence of an algorithm that effectively monitors deep learning mode...

03/28/2017 · Early Stopping without a Validation Set
Early stopping is a widely used technique to prevent poor generalization...

01/27/2023 · Conformal inference is (almost) free for neural networks trained with early stopping
Early stopping based on hold-out data is a popular regularization techni...

06/09/2020 · Learning to Stop While Learning to Predict
There is a recent surge of interest in designing deep architectures base...

09/06/2022 · The Cost of Sequential Adaptation
Possibility of early stopping or interim sample size re-estimation lead ...

06/10/2021 · Early-stopped neural networks are consistent
This work studies the behavior of neural networks trained with the logis...
