Stochastic gradient descent methods for estimation with large data sets

09/22/2015
by Dustin Tran, et al.

We develop methods for parameter estimation in settings with large-scale data sets, where traditional methods are no longer tenable. Our methods rely on stochastic approximations, which are computationally efficient because they maintain a single iterate as the parameter estimate and successively update that iterate based on one data point at a time. When the update is based on a noisy gradient, the stochastic approximation is known as standard stochastic gradient descent, which has been fundamental in modern applications with large data sets. Additionally, our methods are numerically stable because they employ implicit updates of the iterates. Intuitively, an implicit update is a shrunken version of a standard one, where the shrinkage factor depends on the observed Fisher information at the corresponding data point. This shrinkage prevents numerical divergence of the iterates, whether caused by excess noise or by outliers. Our sgd package in R offers the most extensive and robust implementation of stochastic gradient descent methods. We demonstrate that sgd dominates alternative software in runtime for several estimation problems with massive data sets. Our applications include the wide class of generalized linear models as well as M-estimation for robust regression.
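To make the shrinkage intuition concrete, consider implicit SGD for linear regression with squared-error loss, where the implicit update has a closed form: the standard gradient step is scaled by 1 / (1 + gamma_n * ||x_n||^2). The plain-R sketch below illustrates one pass of such updates; it is not the sgd package's internal code, and the simulated data and learning-rate schedule are illustrative assumptions.

    # Implicit SGD for linear regression: a minimal illustrative sketch.
    # For squared-error loss the implicit update
    #   theta_n = theta_{n-1} + gamma_n * (y_n - x_n' theta_n) * x_n
    # has the closed form
    #   theta_n = theta_{n-1} + gamma_n / (1 + gamma_n * ||x_n||^2) *
    #             (y_n - x_n' theta_{n-1}) * x_n,
    # i.e. the standard step shrunk by 1 / (1 + gamma_n * ||x_n||^2).
    set.seed(1)
    n <- 1e5; p <- 10
    X <- matrix(rnorm(n * p), n, p)
    theta_true <- runif(p, -1, 1)
    y <- drop(X %*% theta_true) + rnorm(n)

    theta <- rep(0, p)
    for (i in seq_len(n)) {
      x_i     <- X[i, ]
      gamma_i <- 1 / (1 + 0.01 * i)            # assumed decaying learning rate
      resid   <- y[i] - sum(x_i * theta)       # residual at the current iterate
      shrink  <- gamma_i / (1 + gamma_i * sum(x_i^2))
      theta   <- theta + shrink * resid * x_i  # shrunken (implicit) update
    }

Because the shrinkage factor is never larger than gamma_i and decreases when ||x_n||^2 is large, a single noisy or outlying observation cannot move the iterate arbitrarily far; this is the numerical stability referred to in the abstract.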
