Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize

10/28/2021
by Ryan D'Orazio, et al.

We investigate the convergence of stochastic mirror descent (SMD) in relatively smooth and smooth convex optimization. In relatively smooth convex optimization we provide new convergence guarantees for SMD with a constant stepsize. For smooth convex optimization we propose a new adaptive stepsize scheme, the mirror stochastic Polyak stepsize (mSPS). Notably, our convergence results in both settings require neither bounded gradient nor bounded variance assumptions, and we show convergence to a neighborhood that vanishes under interpolation. mSPS generalizes the recently proposed stochastic Polyak stepsize (SPS) (Loizou et al., 2021) to mirror descent and remains both practical and efficient for modern machine learning applications while inheriting the benefits of mirror descent. We complement our results with experiments across various supervised learning tasks and different instances of SMD, demonstrating the effectiveness of mSPS.
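To make the abstract concrete, below is a minimal sketch of stochastic mirror descent with an SPS-style adaptive stepsize, assuming the negative-entropy mirror map on the probability simplex (an exponentiated-gradient update) and nonnegative per-sample squared losses with per-sample optimum 0. The function name, constants (c, gamma_max), the choice of dual norm, and the exact stepsize formula are illustrative assumptions for this sketch, not the paper's definitions.

```python
# Sketch: stochastic mirror descent (negative-entropy mirror map on the
# simplex) with an SPS-style capped adaptive stepsize. Illustrative only;
# the paper's mSPS may be defined differently.
import numpy as np

rng = np.random.default_rng(0)

def smd_sps_sketch(A, b, n_steps=2000, c=0.5, gamma_max=10.0):
    n, d = A.shape
    x = np.full(d, 1.0 / d)            # start at the simplex center
    for _ in range(n_steps):
        i = rng.integers(n)            # sample one data point
        r = A[i] @ x - b[i]
        f_i = 0.5 * r ** 2             # per-sample loss, lower bound 0
        g = r * A[i]                   # stochastic gradient
        # SPS-style stepsize using the (assumed) dual infinity norm, capped.
        denom = c * np.max(np.abs(g)) ** 2 + 1e-12
        gamma = min(f_i / denom, gamma_max)
        # Exponentiated-gradient (mirror) update for the entropy mirror map.
        x = x * np.exp(-gamma * g)
        x /= x.sum()
    return x

# Toy usage: target lies on the simplex, so interpolation holds.
d = 5
x_star = rng.random(d); x_star /= x_star.sum()
A = rng.standard_normal((200, d))
b = A @ x_star
x_hat = smd_sps_sketch(A, b)
print(np.abs(A @ x_hat - b).mean())
```

The stepsize is computed per sample from the current loss and gradient, so no global stepsize tuning is needed; under interpolation the per-sample losses can be driven to zero, matching the vanishing-neighborhood behavior described above.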


