On the generalization of learning algorithms that do not converge

08/16/2022
by Nisha Chandramoorthy, et al.

Generalization analyses of deep learning typically assume that the training converges to a fixed point. However, recent results indicate that, in practice, the weights of deep neural networks optimized with stochastic gradient descent often oscillate indefinitely. To reduce this discrepancy between theory and practice, this paper focuses on the generalization of neural networks whose training dynamics do not necessarily converge to fixed points. Our main contribution is to propose a notion of statistical algorithmic stability (SAS) that extends classical algorithmic stability to non-convergent algorithms and to study its connection to generalization. This ergodic-theoretic approach leads to new insights when compared to the traditional optimization and learning theory perspectives. We prove that the stability of the time-asymptotic behavior of a learning algorithm relates to its generalization and empirically demonstrate how loss dynamics can provide clues to generalization performance. Our findings provide evidence that networks that "train stably generalize better" even when the training continues indefinitely and the weights do not converge.
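For context, here is a sketch of the classical algorithmic stability notion that the abstract says SAS extends. This is the standard uniform-stability statement from the learning theory literature, not a formula taken from this paper: an algorithm A is beta-uniformly stable if swapping any single training example changes the loss at every test point by at most beta, and this quantity directly bounds the expected generalization gap.

\[
\forall S \simeq S' \ \text{(differing in one example)},\ \forall z:\qquad
\big|\, \ell(A(S), z) - \ell(A(S'), z) \,\big| \;\le\; \beta,
\]
\[
\Big|\, \mathbb{E}_{S}\big[\, R(A(S)) - \hat{R}_S(A(S)) \,\big] \Big| \;\le\; \beta,
\]

where \(R\) denotes the population risk and \(\hat{R}_S\) the empirical risk on the sample \(S\). As described in the abstract, SAS presumably replaces the single output \(A(S)\) with the time-asymptotic (ergodic) behavior of the non-convergent training dynamics; the precise definition is given in the full text.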


Related research

02/02/2021  Stability and Generalization of the Decentralized Stochastic Gradient Descent
            The stability and generalization of stochastic gradient-based methods pr...

10/27/2020  Toward Better Generalization Bounds with Locally Elastic Stability
            Classical approaches in learning theory are often seen to yield very loo...

12/05/2012  On the Convergence Properties of Optimal AdaBoost
            AdaBoost is one of the most popular machine-learning algorithms. It is s...

07/15/2022  Stable Invariant Models via Koopman Spectra
            Weight-tied models have attracted attention in the modern development of...

10/23/2017  Stability and Generalization of Learning Algorithms that Converge to Global Optima
            We establish novel generalization bounds for learning algorithms that co...

07/08/2016  Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
            It is known that training deep neural networks, in particular, deep conv...

10/19/2020  A Contour Stochastic Gradient Langevin Dynamics Algorithm for Simulations of Multi-modal Distributions
            We propose an adaptively weighted stochastic gradient Langevin dynamics ...
