Learning with incremental iterative regularization

04/30/2014
by Lorenzo Rosasco, et al.
MIT
Istituto Italiano di Tecnologia

Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter, and prove strong universal consistency, i.e. almost sure convergence of the risk, as well as sharp finite sample bounds for the iterates. Our results are a step towards understanding the effect of multiple epochs in stochastic gradient techniques in machine learning and rely on integrating statistical and optimization results.
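The algorithm the abstract refers to is an incremental (cyclic) gradient pass over the least squares data, with the number of epochs playing the role of the regularization parameter. The sketch below is a minimal illustration of that idea, assuming a constant step size and a fixed cycling order; the function name, the step size, and the hold-out-based choice of the number of epochs are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def incremental_least_squares(X, y, step_size, n_epochs):
    """Incremental gradient iteration for least squares.

    One epoch cycles through the points in a fixed order and applies,
    for each point, the update w <- w - step_size * (<w, x_i> - y_i) x_i.
    In the iterative-regularization view, n_epochs controls regularization:
    few epochs = strong regularization, many epochs = weak regularization.
    (Illustrative sketch; constant step size fixed a priori.)
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        for i in range(n):                     # one incremental pass over the data
            residual = X[i] @ w - y[i]         # pointwise prediction error
            w -= step_size * residual * X[i]   # gradient step on that single point
    return w

# Toy usage (hypothetical data): pick the number of passes on held-out data
# instead of running the iteration to convergence (early stopping).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_hat = incremental_least_squares(X, y, step_size=0.01, n_epochs=5)
```

Under this reading, stopping after fewer epochs corresponds to stronger regularization, so the epoch count would typically be selected by monitoring error on a validation set rather than by iterating until the empirical risk is minimized.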


Related research:

Optimal Rates for Multi-pass Stochastic Gradient Methods (05/28/2016)
We analyze the learning properties of the stochastic gradient method whe...

Iterative Regularization for Learning with Convex Loss Functions (03/31/2015)
We consider the problem of supervised learning with convex loss function...

Iterate averaging as regularization for stochastic gradient descent (02/22/2018)
We propose and analyze a variant of the classic Polyak-Ruppert averaging...

Learning with SGD and Random Features (07/17/2018)
Sketching and stochastic gradient methods are arguably the most common t...

A Consistent Regularization Approach for Structured Prediction (05/24/2016)
We propose and analyze a regularization approach for structured predicti...

PaloBoost: An Overfitting-robust TreeBoost with Out-of-Bag Sample Regularization Techniques (07/22/2018)
Stochastic Gradient TreeBoost is often found in many winning solutions i...

From inexact optimization to learning via gradient concentration (06/09/2021)
Optimization was recently shown to control the inductive bias in a learn...
