Generalization Properties and Implicit Regularization for Multiple Passes SGM

05/26/2016
by Junhong Lin et al.

We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions. We show that, in the absence of penalizations or constraints, the stability and approximation properties of the algorithm can be controlled by tuning either the step-size or the number of passes over the data. In this view, these parameters can be seen to control a form of implicit regularization. Numerical results complement the theoretical findings.
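The algorithm analyzed is plain SGM run for multiple passes over the data, with no penalty or projection step, so that regularization comes only from the step-size and the number of passes (early stopping). Below is a minimal sketch of that setup for the squared loss, assuming a constant step-size and synthetic Gaussian data; the function name, data dimensions, and parameter values are illustrative choices, not taken from the paper.

import numpy as np

def multipass_sgm(X, y, step_size=0.002, n_passes=10, seed=0):
    # Plain stochastic gradient steps for the squared loss on a linearly
    # parameterized model, with no penalty and no projection:
    #     w <- w - step_size * (x_i . w - y_i) * x_i
    # The step-size and the number of passes are the only tuning parameters.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_passes):
        for i in rng.permutation(n):  # one pass = one shuffled sweep over the data
            w -= step_size * (X[i] @ w - y[i]) * X[i]
    return w

# Illustrative run on synthetic data (an assumption, not the paper's experiments):
# sweeping the number of passes plays the role of an explicit regularization parameter.
rng = np.random.default_rng(1)
n, d = 100, 200
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ w_star + 0.3 * rng.standard_normal(n)
X_test = rng.standard_normal((2000, d))
y_test = X_test @ w_star
for passes in (1, 10, 100, 1000):
    w = multipass_sgm(X, y, n_passes=passes)
    print(f"passes={passes:4d}  test MSE={np.mean((X_test @ w - y_test) ** 2):.3f}")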

Related research

06/04/2018  Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization
05/28/2016  Optimal Rates for Multi-pass Stochastic Gradient Methods
07/12/2023  Learning Stochastic Dynamical Systems as an Implicit Regularization with Graph Neural Networks
06/17/2020  Implicit regularization for convex regularizers
07/17/2018  Learning with SGD and Random Features
03/13/2020  Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study
09/11/2019  Implicit Regularization for Optimal Sparse Recovery
