
Learning with SGD and Random Features

by Luigi Carratino, et al.
Università di Genova

Sketching and stochastic gradient methods are arguably the most common techniques used to derive efficient large-scale learning algorithms. In this paper, we investigate their application in the context of nonparametric statistical learning. More precisely, we study the estimator defined by stochastic gradients with mini-batches and random features. The latter can be seen as a form of nonlinear sketching and can be used to define approximate kernel methods. The estimator we consider is not explicitly penalized/constrained, and regularization is implicit. Indeed, our study highlights how different parameters, such as the number of features, the number of iterations, the step size, and the mini-batch size, control the learning properties of the solutions. We do this by deriving optimal finite-sample bounds under standard assumptions. The obtained results are corroborated and illustrated by numerical experiments.
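To make the setting concrete, the following is a minimal sketch (not the paper's exact estimator) of the two ingredients the abstract combines: random Fourier features approximating a Gaussian kernel, and mini-batch SGD on the unpenalized least-squares objective, where regularization comes implicitly from the number of features, iterations, step size, and batch size. All data, dimensions, and hyperparameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (illustrative data, not from the paper)
n, d = 500, 3
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Random Fourier features approximating a Gaussian (RBF) kernel:
# phi(x) = sqrt(2/M) * cos(W^T x + b), with W ~ N(0, 1/sigma^2), b ~ U[0, 2*pi]
M = 100        # number of random features (a tuning parameter in the paper)
sigma = 1.0    # kernel bandwidth (assumed)
W = rng.normal(scale=1.0 / sigma, size=(d, M))
b = rng.uniform(0.0, 2.0 * np.pi, size=M)

def features(X):
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

Phi = features(X)

# Mini-batch SGD on the plain least-squares loss -- no explicit penalty;
# step size, mini-batch size, and number of passes act as implicit regularizers
w = np.zeros(M)
step, batch, epochs = 0.5, 10, 20
for _ in range(epochs):
    perm = rng.permutation(n)
    for i in range(0, n, batch):
        idx = perm[i:i + batch]
        grad = Phi[idx].T @ (Phi[idx] @ w - y[idx]) / len(idx)
        w -= step * grad

train_mse = np.mean((Phi @ w - y) ** 2)
```

In this sketch, early stopping (fewer epochs), smaller step sizes, or fewer random features all shrink the effective hypothesis space, which is the implicit-regularization trade-off the paper analyzes.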
