Implicit Regularization Properties of Variance Reduced Stochastic Mirror Descent

04/29/2022
by Yiling Luo, et al.

In machine learning and statistical data analysis, we often encounter objective functions that are sums over many terms; the number of terms can equal the sample size, which may be enormous. In this setting, the stochastic mirror descent (SMD) algorithm is numerically efficient: each iteration involves only a small subset of the data. The variance-reduced version of SMD (VRSMD) further improves on SMD by converging faster. On the other hand, algorithms such as gradient descent and stochastic gradient descent enjoy an implicit regularization property that leads to smaller generalization error. Little is known about whether such a property holds for VRSMD. We prove here that the discrete VRSMD estimator sequence converges to the minimum mirror interpolant in linear regression, which establishes the implicit regularization property of VRSMD. As an application of this result, we derive a model estimation accuracy guarantee in the setting where the true model is sparse. We use numerical examples to illustrate the empirical power of VRSMD.
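To make the setup concrete, the sketch below runs an SVRG-style variance-reduced stochastic mirror descent on overparameterized least squares. The q-norm potential psi(x) = (1/q) * sum_j |x_j|^q, the helper names (grad_psi, grad_psi_inv, vrsmd), and the step-size and epoch choices are illustrative assumptions for this page, not the paper's prescription; the abstract's result says that such iterates converge to the minimum mirror interpolant, i.e. the interpolating solution closest to the initialization in the Bregman divergence of the mirror map.

```python
import numpy as np

def grad_psi(x, q):
    # Mirror map: gradient of the potential psi(x) = (1/q) * sum_j |x_j|^q, q > 1.
    return np.sign(x) * np.abs(x) ** (q - 1)

def grad_psi_inv(theta, q):
    # Inverse mirror map (grad psi)^{-1}, mapping dual iterates back to primal.
    return np.sign(theta) * np.abs(theta) ** (1.0 / (q - 1))

def vrsmd(X, y, q=1.5, eta=1e-3, n_epochs=300, seed=0):
    """Variance-reduced stochastic mirror descent (SVRG-style snapshots)
    for least squares F(x) = (1/2n) * ||X x - y||^2, started at x = 0.
    Illustrative sketch; q, eta, and n_epochs are assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_snap = x.copy()
        full_grad = X.T @ (X @ x_snap - y) / n        # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            g_i = X[i] * (X[i] @ x - y[i])            # stochastic gradient at x
            g_i_snap = X[i] * (X[i] @ x_snap - y[i])  # same component at the snapshot
            g = g_i - g_i_snap + full_grad            # variance-reduced gradient
            theta = grad_psi(x, q) - eta * g          # mirror (dual-space) step
            x = grad_psi_inv(theta, q)                # map back to primal space
    return x

# Overparameterized sparse linear regression: d > n, sparse ground truth.
rng = np.random.default_rng(1)
n, d = 40, 100
X = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:3] = [2.0, -1.5, 1.0]
y = X @ x_true
x_hat = vrsmd(X, y)
print("training residual:", np.linalg.norm(X @ x_hat - y))
print("estimation error :", np.linalg.norm(x_hat - x_true))
```

Because the iterates start at zero and the Bregman divergence of this potential satisfies D_psi(x, 0) = psi(x), the minimum mirror interpolant in this example minimizes the q-norm penalty among all solutions of X x = y; taking q close to 1 therefore favors sparse solutions, consistent with the sparse-model application mentioned in the abstract.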


