Accelerated, Optimal, and Parallel: Some Results on Model-Based Stochastic Optimization

01/07/2021
by Karan Chadha, et al.

We extend the Approximate-Proximal Point (aProx) family of model-based methods for solving stochastic convex optimization problems, including stochastic subgradient, proximal point, and bundle methods, to the minibatch and accelerated settings. To do so, we propose specific model-based algorithms and an acceleration scheme with non-asymptotic convergence guarantees that are order-optimal in all problem-dependent constants and provide linear speedup in the minibatch size, while maintaining the desirable robustness traits (e.g., to stepsize choice) of the aProx family. Additionally, we show improved convergence rates and matching lower bounds that identify new fundamental constants for "interpolation" problems, whose importance in statistical machine learning is growing; as one consequence, this yields a parallelization strategy for alternating projections. We corroborate our theoretical results with empirical testing that demonstrates the gains accurate modeling, acceleration, and minibatching provide.
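To make the model-based idea concrete, the sketch below implements one well-known member of the aProx family, the "truncated" model update for nonnegative losses, combined with minibatching. This is an illustrative reconstruction, not the paper's exact accelerated algorithm: the function names (`truncated_model_step`, `minibatch_truncated_sgd`) and the least-squares test problem are our own choices. The key point is that the proximal step on the truncated linear model has a closed form resembling a Polyak-type step, which is what makes the method robust to the stepsize.

```python
import numpy as np

def truncated_model_step(x, loss, grad, stepsize):
    """One aProx 'truncated' update for a nonnegative loss.

    Minimizes  max(loss + <grad, y - x>, 0) + ||y - x||^2 / (2 * stepsize)
    over y, which has the closed form
        y = x - min(stepsize, loss / ||grad||^2) * grad.
    """
    g2 = grad @ grad
    if g2 == 0.0:
        return x  # already a minimizer of the model
    return x - min(stepsize, loss / g2) * grad

def minibatch_truncated_sgd(A, b, stepsize=1.0, batch=8, iters=500, seed=0):
    """Truncated model-based method on 0.5 * ||Ax - b||^2 with minibatches.

    At each iteration we build the truncated model from the
    minibatch-averaged loss and its gradient, then take the prox step.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        idx = rng.integers(0, n, size=batch)
        r = A[idx] @ x - b[idx]
        loss = 0.5 * np.mean(r ** 2)    # minibatch-averaged loss (nonnegative)
        grad = A[idx].T @ r / batch     # gradient of the minibatch loss
        x = truncated_model_step(x, loss, grad, stepsize)
    return x
```

On an interpolation instance (where some x* satisfies Ax* = b exactly, the regime the abstract highlights), the truncation makes the effective stepsize adapt to the current loss, so even a large nominal stepsize does not cause divergence.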


Related research:

research · 03/22/2011
Randomized Smoothing for Stochastic Optimization
We analyze convergence rates of stochastic optimization procedures for n...

research · 06/06/2021
Minibatch and Momentum Model-based Methods for Stochastic Non-smooth Non-convex Optimization
Stochastic model-based methods have received increasing attention lately...

research · 06/03/2019
A Generic Acceleration Framework for Stochastic Composite Optimization
In this paper, we introduce various mechanisms to obtain accelerated fir...

research · 04/01/2017
Stochastic L-BFGS: Improved Convergence Rates and Practical Acceleration Strategies
We revisit the stochastic limited-memory BFGS (L-BFGS) algorithm. By pro...

research · 02/12/2021
Parameter-free Locally Accelerated Conditional Gradients
Projection-free conditional gradient (CG) methods are the algorithms of ...

research · 06/04/2017
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
We develop a family of reformulations of an arbitrary consistent linear ...

research · 05/17/2020
From Proximal Point Method to Nesterov's Acceleration
The proximal point method (PPM) is a fundamental method in optimization ...
