Train faster, generalize better: Stability of stochastic gradient descent

09/03/2015
by Moritz Hardt, et al.

We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights into why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.
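For readers who want the formal statement behind the abstract, the following is a minimal LaTeX sketch of the stability-to-generalization argument, assuming the standard uniform-stability definition of Bousquet and Elisseeff; the symbols (A, S, f, R, R_S, L, beta, alpha_t, n, T) are notation chosen for this sketch rather than quoted from the paper, and the convex bound is stated up to notation.

% Uniform stability in the sense of Bousquet and Elisseeff: a (randomized)
% algorithm $A$ is $\epsilon$-uniformly stable if, for every pair of training
% sets $S, S'$ of size $n$ that differ in a single example,
\[
  \sup_{z}\; \mathbb{E}\bigl[f(A(S); z) - f(A(S'); z)\bigr] \;\le\; \epsilon_{\mathrm{stab}},
\]
% where $f(w; z)$ is the loss of model $w$ on example $z$. Uniform stability
% controls the expected gap between population risk $R$ and empirical risk $R_S$:
\[
  \bigl|\mathbb{E}\bigl[R[A(S)] - R_S[A(S)]\bigr]\bigr| \;\le\; \epsilon_{\mathrm{stab}}.
\]
% A representative bound of the type derived in the convex case: for a convex,
% $L$-Lipschitz, $\beta$-smooth loss and step sizes $\alpha_t \le 2/\beta$,
% running SGM for $T$ steps on $n$ examples gives
\[
  \epsilon_{\mathrm{stab}} \;\le\; \frac{2L^{2}}{n} \sum_{t=1}^{T} \alpha_t,
\]
% so fewer iterations (or smaller steps) translate directly into a smaller
% generalization gap, which is the "train faster, generalize better" message.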


Related research

02/02/2021 · Stability and Generalization of the Decentralized Stochastic Gradient Descent
The stability and generalization of stochastic gradient-based methods pr...

04/21/2018 · Stability of the Stochastic Gradient Method for an Approximated Large Scale Kernel Machine
In this paper we measured the stability of stochastic gradient method (S...

03/21/2022 · A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima
Non-convex loss functions arise frequently in modern machine learning, a...

11/25/2021 · Time-independent Generalization Bounds for SGLD in Non-convex Settings
We establish generalization error bounds for stochastic gradient Langevi...

02/26/2018 · Analysis of Langevin Monte Carlo via convex optimization
In this paper, we provide new insights on the Unadjusted Langevin Algori...

08/11/2023 · Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregation
Deep neural networks are vulnerable to universal adversarial perturbatio...

09/11/2022 · Git Re-Basin: Merging Models modulo Permutation Symmetries
The success of deep learning is thanks to our ability to solve certain m...
