SGD Generalizes Better Than GD (And Regularization Doesn't Help)

02/01/2021
by Idan Amir, et al.

We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model. While for SGD it is well-known that O(1/ϵ^2) iterations suffice for obtaining a solution with ϵ excess expected risk, we show that with the same number of steps GD may overfit and emit a solution with Ω(1) generalization error. Moreover, we show that in fact Ω(1/ϵ^4) iterations are necessary for GD to match the generalization performance of SGD, which is also tight due to recent work by Bassily et al. (2020). We further discuss how regularizing the empirical risk minimized by GD essentially does not change the above result, and revisit the concepts of stability, implicit bias and the role of the learning algorithm in generalization.
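To make the contrast concrete, here is a minimal illustrative sketch (not from the paper, and not reproducing its lower-bound construction) of the two update rules being compared: one-sample SGD versus full-batch GD, run for the same number of gradient steps on a simple convex least-squares problem, with train and test risk reported for each. All problem sizes and step sizes below are arbitrary choices for illustration; the paper's Ω(1) overfitting gap requires carefully constructed instances, not a benign problem like this one.

```python
# Sketch: one-sample SGD vs. full-batch GD on convex least squares.
# This only illustrates the two algorithms; it does NOT exhibit the
# paper's separation, which relies on adversarial problem instances.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 20, 50, 1000
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.5 * rng.normal(size=n)  # noisy linear labels
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

def loss(w, X, y):
    return 0.5 * np.mean((X @ w - y) ** 2)

def sgd(steps, lr=0.01):
    w = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n_train)              # one fresh sample per step
        w -= lr * (X_tr[i] @ w - y_tr[i]) * X_tr[i]
    return w

def gd(steps, lr=0.01):
    w = np.zeros(d)
    for _ in range(steps):
        g = X_tr.T @ (X_tr @ w - y_tr) / n_train  # full-batch gradient
        w -= lr * g
    return w

steps = 2000
w_sgd, w_gd = sgd(steps), gd(steps)
print("SGD train/test risk:", loss(w_sgd, X_tr, y_tr), loss(w_sgd, X_te, y_te))
print("GD  train/test risk:", loss(w_gd, X_tr, y_tr), loss(w_gd, X_te, y_te))
```

Both methods use the same per-step cost accounting as the abstract (one gradient step each); the paper's point is that matching SGD's generalization can force GD to take many more such steps.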


Related research

- Never Go Full Batch (in Stochastic Convex Optimization) — 06/29/2021
- Benign Underfitting of Stochastic Gradient Descent — 02/27/2022
- Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study — 03/13/2020
- Stochastic Training is Not Necessary for Generalization — 09/29/2021
- The Probabilistic Stability of Stochastic Gradient Descent — 03/23/2023
- The Implicit Regularization of Dynamical Stability in Stochastic Gradient Descent — 05/27/2023
- Disentangling the Mechanisms Behind Implicit Regularization in SGD — 11/29/2022
