Stochastic Proximal Gradient Algorithm with Minibatches. Application to Large Scale Learning Models

03/30/2020
by   Andrei Patrascu, et al.

Stochastic optimization lies at the core of most statistical learning models. Recent development of stochastic algorithmic tools has focused significantly on proximal gradient iterations as an efficient approach to nonsmooth (composite) population risk functions. The complexity of finding optimal predictors by minimizing the regularized risk is largely understood for simple regularizers such as the ℓ_1/ℓ_2 norms. However, more complex properties desired for the predictor necessitate more difficult regularizers, such as those used in group lasso or graph trend filtering. In this chapter we develop and analyze minibatch variants of the stochastic proximal gradient algorithm for general composite objective functions with stochastic nonsmooth components. We provide iteration complexity bounds for constant and variable stepsize policies, showing that for minibatch size N, ϵ-suboptimality in expected quadratic distance to the optimal solution is attained after O(1/(Nϵ)) iterations. Numerical tests on ℓ_2-regularized SVMs and parametric sparse representation problems confirm the theoretical behaviour and surpass minibatch SGD performance.
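The minibatch stochastic proximal gradient iteration discussed above can be sketched on a simple ℓ_1-regularized least-squares problem, where the proximal operator is soft-thresholding. This is an illustrative example under assumed problem data and stepsizes, not the chapter's exact algorithm, which handles general composite objectives with stochastic nonsmooth components:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def minibatch_prox_sgd(A, b, lam, batch_size=32, step=0.01, iters=1000, seed=0):
    """Minibatch stochastic proximal gradient sketch for
        min_x  (1/2m) ||Ax - b||^2 + lam * ||x||_1.
    Each iteration averages the smooth-part gradient over a random
    minibatch, then applies the proximal (soft-thresholding) step.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=batch_size, replace=False)
        Ab, bb = A[idx], b[idx]
        # Minibatch estimate of the gradient of the smooth term.
        grad = Ab.T @ (Ab @ x - bb) / batch_size
        # Proximal gradient step with constant stepsize.
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Larger minibatch sizes reduce gradient-estimate variance, which is what drives the O(1/(Nϵ)) improvement over single-sample updates in the analysis.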


