Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization under Infinite Noise Variance

02/23/2022
by Nuri Mert Vural, et al.

We study stochastic convex optimization under infinite noise variance. Specifically, when the stochastic gradient is unbiased and has a uniformly bounded (1+κ)-th moment for some κ ∈ (0,1], we quantify the convergence rate of the Stochastic Mirror Descent algorithm with a particular class of uniformly convex mirror maps, in terms of the number of iterations, the dimensionality, and related geometric parameters of the optimization problem. Interestingly, this algorithm requires no explicit gradient clipping or normalization, techniques that have been used extensively in several recent empirical and theoretical works. We complement our convergence results with information-theoretic lower bounds showing that no other algorithm using only stochastic first-order oracles can achieve improved rates. Our results have several interesting consequences for devising online/streaming stochastic approximation algorithms for problems arising in robust statistics and machine learning.
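As a rough illustration of the algorithm the abstract describes, here is a minimal Python sketch of stochastic mirror descent with a uniformly convex separable potential ψ(x) = Σ_i |x_i|^q / q for some q ≥ 2. The potential, step size, and noise model below are illustrative assumptions, not the paper's exact choices; the paper's mirror map and tuning depend on the dimensionality and geometric parameters mentioned above, so consult the full text for the actual construction and rates.

```python
import numpy as np


def smd_heavy_tailed(grad_oracle, x0, steps, eta, q=3.0):
    """Stochastic Mirror Descent with the separable potential
    psi(x) = sum_i |x_i|**q / q, which is uniformly convex for q >= 2.

    Unconstrained mirror step: grad_psi(x_{t+1}) = grad_psi(x_t) - eta * g_t.
    No explicit gradient clipping or normalization is applied; the dual-to-primal
    map raises magnitudes to the power 1/(q-1) < 1, which tempers the effect of
    heavy-tailed gradient coordinates.
    """
    def nabla_psi(x):
        # primal -> dual map: componentwise sign(x) * |x|^(q-1)
        return np.sign(x) * np.abs(x) ** (q - 1.0)

    def nabla_psi_star(y):
        # dual -> primal map: the inverse of nabla_psi
        return np.sign(y) * np.abs(y) ** (1.0 / (q - 1.0))

    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(steps):
        g = grad_oracle(x)  # unbiased, possibly infinite-variance gradient
        x = nabla_psi_star(nabla_psi(x) - eta * g)
        avg += (x - avg) / (t + 1)  # running average of the iterates
    return avg


if __name__ == "__main__":
    # Toy objective f(x) = 0.5 * ||x - x_star||^2 with symmetrized Pareto noise:
    # shape 1.5 gives a finite (1+kappa)-th moment only for kappa < 0.5, so the
    # gradient noise has infinite variance, matching the heavy-tailed setting.
    rng = np.random.default_rng(0)
    x_star = np.ones(10)

    def grad_oracle(x):
        noise = rng.pareto(1.5, size=x.shape) * rng.choice([-1.0, 1.0], size=x.shape)
        return (x - x_star) + noise  # unbiased: symmetrized noise has mean zero

    x_hat = smd_heavy_tailed(grad_oracle, np.zeros(10), steps=50_000, eta=1e-2)
    print(np.linalg.norm(x_hat - x_star))
```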


Related research

12/07/2013
Optimal rates for zero-order convex optimization: the power of two function evaluations
We consider derivative-free algorithms for stochastic and non-stochastic...

07/04/2023
Accelerated stochastic approximation with state-dependent noise
We consider a class of stochastic smooth convex optimization problems un...

09/17/2018
Zeroth-order (Non)-Convex Stochastic Optimization via Conditional Gradient and Gradient Updates
In this paper, we propose and analyze zeroth-order stochastic approximat...

05/15/2015
Algorithmic Connections Between Active Learning and Stochastic Convex Optimization
Interesting theoretical associations have been established by recent pap...

10/28/2021
Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize
We investigate the convergence of stochastic mirror descent (SMD) in rel...

09/20/2021
Sharp global convergence guarantees for iterative nonconvex optimization: A Gaussian process perspective
We consider a general class of regression models with normally distribut...

11/20/2017
Unbiased Simulation for Optimizing Stochastic Function Compositions
In this paper, we introduce an unbiased gradient simulation algorithms f...
