Convergence Rates for Stochastic Approximation on a Boundary

08/15/2022
by   Kody Law, et al.

We analyze the behavior of projected stochastic gradient descent, focusing on the case where the optimum lies on the boundary of the constraint set and the gradient does not vanish at the optimum. In this setting the iterates may, in expectation, make progress against the objective at every step. When this condition and an appropriate moment condition on the noise hold, we prove that the convergence rate of constrained stochastic gradient descent to the optimum differs from, and is typically faster than, that of the unconstrained algorithm. Our results show that the concentration around the optimum is exponentially rather than normally distributed; in the unconstrained case it is the normal limit that typically governs the convergence rate. The methods we develop rely on a geometric ergodicity proof, extending a result on Markov chains by Hajek (1982) to stochastic approximation algorithms. As examples, we show how the results apply to linear programming and tabular reinforcement learning.
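The phenomenon the abstract describes can be seen in a toy experiment. The sketch below is illustrative only and not taken from the paper: it runs projected SGD on the hypothetical objective f(x) = x over the constraint set [0, 1], whose minimizer x* = 0 sits on the boundary with a non-vanishing gradient f'(x) = 1. The step size, noise level, and iteration counts are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def projected_sgd(steps=5_000, lr=0.01, noise=1.0):
    """Projected SGD for the toy problem min f(x) = x over [0, 1].

    The optimum x* = 0 is on the boundary and the gradient f'(x) = 1
    does not vanish there, matching the regime studied in the abstract.
    (Hypothetical example; parameters are illustrative.)
    """
    x = 0.5
    for _ in range(steps):
        g = 1.0 + noise * rng.standard_normal()   # noisy gradient estimate
        x = min(max(x - lr * g, 0.0), 1.0)        # project back onto [0, 1]
    return x

# Final iterates over independent runs: they pile up within O(lr) of the
# boundary optimum, consistent with an exponential rather than Gaussian
# concentration around x* = 0.
samples = np.array([projected_sgd() for _ in range(100)])
```

Because the drift pushes the iterate against the boundary while the projection prevents it from crossing, the stationary fluctuation is one-sided and of order the step size, rather than the symmetric O(sqrt(step size)) fluctuation familiar from the unconstrained case.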


Related research

10/25/2021
Accelerated Almost-Sure Convergence Rates for Nonconvex Stochastic Gradient Descent using Stochastic Learning Rates
Large-scale optimization problems require algorithms both effective and ...

04/15/2014
Optimizing the CVaR via Sampling
Conditional Value at Risk (CVaR) is a prominent risk measure that is bei...

06/15/2020
Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model
In the context of statistical supervised learning, the noiseless linear ...

06/14/2018
Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors
We consider stochastic gradient descent for binary classification proble...

05/04/2018
Analysis of nonsmooth stochastic approximation: the differential inclusion approach
In this paper we address the convergence of stochastic approximation whe...

02/20/2021
Convergence Rates of Stochastic Gradient Descent under Infinite Noise Variance
Recent studies have provided both empirical and theoretical evidence ill...

02/12/2022
Formalization of a Stochastic Approximation Theorem
Stochastic approximation algorithms are iterative procedures which are u...
