High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize

04/06/2022
by Ali Kavis, et al.

In this paper, we propose a new, simplified high-probability analysis of AdaGrad for smooth, non-convex problems. More specifically, we focus on a particular accelerated gradient (AGD) template (Lan, 2020), through which we recover the original AdaGrad and its variant with averaging, and prove a convergence rate of 𝒪(1/√T) with high probability without knowledge of the smoothness constant or the noise variance. We use a particular version of Freedman's concentration bound for martingale difference sequences (Kakade & Tewari, 2008), which enables us to achieve the best-known dependence of log(1/δ) on the probability margin δ. We present our analysis in a modular way and obtain a complementary 𝒪(1/T) convergence rate in the deterministic setting. To the best of our knowledge, this is the first high-probability result for AdaGrad with a truly adaptive scheme, i.e., one completely oblivious to the smoothness constant and the uniform variance bound, that simultaneously achieves the best-known log(1/δ) dependence. We further prove a noise adaptation property of AdaGrad under additional noise assumptions.
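For concreteness, the sketch below shows the scalar (norm) version of the AdaGrad stepsize that this kind of analysis covers: the step size is scaled by the accumulated squared gradient norms, so neither a smoothness constant nor a variance bound needs to be supplied. This is a minimal illustrative sketch, not the paper's AGD template or its averaged variant; the names adagrad_norm, eta, b0, and grad_fn are assumptions introduced here for illustration.

```python
import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-2, T=1000, rng=None):
    """Minimal AdaGrad-Norm sketch (illustrative, not the paper's notation).

    Update: x_{t+1} = x_t - (eta / b_t) * g_t,  with
            b_t^2   = b_0^2 + sum_{s<=t} ||g_s||^2.

    grad_fn(x, rng) should return a stochastic gradient at x.
    """
    x = np.asarray(x0, dtype=float).copy()
    b_sq = b0 ** 2
    for _ in range(T):
        g = grad_fn(x, rng)
        b_sq += float(np.dot(g, g))          # accumulate squared gradient norms
        x = x - (eta / np.sqrt(b_sq)) * g    # stepsize adapts; no L or sigma needed
    return x

# Toy usage: noisy gradients of the smooth non-convex objective f(x) = sum x^2 / (1 + x^2)
if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def grad_fn(x, rng):
        clean = 2 * x / (1 + x ** 2) ** 2
        return clean + 0.1 * rng.standard_normal(x.shape)  # bounded-variance noise (assumed)

    print(adagrad_norm(grad_fn, x0=np.ones(5), T=2000, rng=rng))
```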
