The Slow Deterioration of the Generalization Error of the Random Feature Model
The random feature model exhibits a kind of resonance behavior when the number of parameters is close to the training sample size. This behavior is characterized by the appearance of a large generalization gap, and is caused by the occurrence of very small eigenvalues of the associated Gram matrix. In this paper, we examine the dynamic behavior of the gradient descent algorithm in this regime. We show, both theoretically and experimentally, that there is a dynamic self-correction mechanism at work: the larger the eventual generalization gap, the more slowly it develops, and both effects stem from the small eigenvalues. This leaves ample time to stop the training process early and obtain solutions with good generalization properties.
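The eigenvalue phenomenon the abstract refers to can be illustrated numerically. The sketch below is not the paper's experiment; it uses an i.i.d. Gaussian feature matrix as a simplified stand-in for a random feature map, and shows that the smallest eigenvalue of the Gram matrix collapses when the number of features `m` is close to the sample size `n` and recovers as `m` grows past `n`:

```python
import numpy as np

rng = np.random.default_rng(0)

def gram_min_eig(n, m):
    """Smallest eigenvalue of the Gram matrix Phi Phi^T / m, where Phi is an
    n x m random feature matrix (here i.i.d. Gaussian entries, a simplified
    stand-in for features phi(x; w) with weights w drawn at initialization)."""
    Phi = rng.normal(size=(n, m))
    G = Phi @ Phi.T / m
    # eigvalsh returns eigenvalues in ascending order for symmetric matrices
    return np.linalg.eigvalsh(G)[0]

n = 200
for m in (200, 400, 4000):  # m = n is the "resonance" regime near interpolation
    print(f"m = {m:5d}  lambda_min = {gram_min_eig(n, m):.3e}")
```

Since gradient descent contracts the error along each eigendirection at rate roughly (1 - eta * lambda_j) per step, the near-zero eigenvalues at m ≈ n are exactly the directions in which the large generalization gap builds up, and they are also the directions that converge most slowly, which is the self-correction mechanism described above.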