Learning with incremental iterative regularization

by Lorenzo Rosasco et al.
Istituto Italiano di Tecnologia

Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter, and prove strong universal consistency, i.e. almost sure convergence of the risk, as well as sharp finite sample bounds for the iterates. Our results are a step towards understanding the effect of multiple epochs in stochastic gradient techniques in machine learning and rely on integrating statistical and optimization results.
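The algorithm studied here is an incremental gradient method for least squares: each epoch sweeps through the samples in order, taking a gradient step on one sample at a time, and the number of epochs (rather than an explicit penalty) acts as the regularization parameter. A minimal sketch of this idea, with an assumed toy linear problem and illustrative step size and epoch count (not the paper's choices), could look like:

```python
import numpy as np

def incremental_gradient_ls(X, y, step, n_epochs):
    """Incremental gradient method for least squares.

    One epoch is a single ordered pass over the n samples, updating
    the weight vector after each point. The number of epochs plays
    the role of the regularization parameter (early stopping).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        for i in range(n):  # one pass over the data
            residual = X[i] @ w - y[i]
            w -= step * residual * X[i]
    return w

# Hypothetical toy problem: noisy linear data (illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(200)

w_hat = incremental_gradient_ls(X, y, step=0.01, n_epochs=20)
```

Stopping after few epochs keeps the iterate close to the (small-norm) initialization, while many epochs drive it toward the least-squares solution; tuning `n_epochs`, e.g. on a validation set, trades these off exactly as an explicit regularization parameter would.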


