Accelerating Optimization and Reinforcement Learning with Quasi-Stochastic Approximation

09/30/2020
by Shuhang Chen, et al.

The ODE method has been a workhorse for algorithm design and analysis since the introduction of stochastic approximation. It is now understood that convergence theory amounts to establishing robustness of Euler approximations for ODEs, while the theory of convergence rates requires finer analysis. This paper sets out to extend this theory to quasi-stochastic approximation, based on algorithms in which the "noise" is generated by deterministic signals. The main results are obtained under minimal assumptions: the usual Lipschitz conditions for ODE vector fields, together with the existence of a well-defined linearization near the optimal parameter θ^*, with Hurwitz linearization matrix A^*. The main contributions are summarized as follows:

(i) If the algorithm gain is a_t = g/(1+t)^ρ with g > 0 and ρ ∈ (0,1), then the rate of convergence of the algorithm is 1/t^ρ. There is also a well-defined "finite-t" approximation:

    a_t^{-1}{Θ_t − θ^*} = Y̅ + Ξ^I_t + o(1),

where Y̅ ∈ ℝ^d is a vector identified in the paper, and {Ξ^I_t} is bounded with zero temporal mean.

(ii) With gain a_t = g/(1+t) the results are not as sharp: the 1/t rate of convergence holds only if I + gA^* is Hurwitz.

(iii) Based on Ruppert-Polyak averaging for stochastic approximation, one would expect that a 1/t convergence rate can be obtained by averaging:

    Θ^RP_T = (1/T) ∫_0^T Θ_t dt,

where the estimates {Θ_t} are obtained using the gain in (i). The preceding sharp bounds imply that averaging results in a 1/t convergence rate if and only if Y̅ = 0. This condition holds if the noise is additive, but appears to fail in general.

(iv) The theory is illustrated with applications to gradient-free optimization and policy gradient algorithms for reinforcement learning; a simulation sketch in this spirit follows below.
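To make the setting concrete, here is a minimal Python sketch of quasi-stochastic approximation for gradient-free optimization in the spirit of (i), (iii), and (iv): an Euler discretization of the ODE dΘ_t/dt = a_t f(Θ_t, ξ_t), where ξ_t is a deterministic sinusoidal probing signal, the gain is a_t = g/(1+t)^ρ, and a Ruppert-Polyak average is accumulated alongside. The quadratic loss, probing frequencies, and the constants g, ρ, ε, dt, T are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Illustrative quadratic loss with known minimizer (an assumption for
# this sketch); any smooth loss with a well-defined minimizer works.
theta_star = np.array([1.0, -2.0])

def loss(theta):
    return 0.5 * np.sum((theta - theta_star) ** 2)

# Deterministic probing signal: sinusoids with rationally independent
# frequencies and zero temporal mean -- the "noise" in QSA.
freqs = np.array([1.0, np.sqrt(2.0)])

def probe(t):
    return np.sqrt(2.0) * np.cos(2.0 * np.pi * freqs * t)

g, rho, eps = 1.0, 0.7, 0.1   # gain constants and probing amplitude (assumed)
dt, T = 1e-3, 200.0           # Euler step and time horizon (assumed)
n_steps = int(T / dt)

theta = np.zeros(2)           # initial estimate Theta_0
rp_integral = np.zeros(2)     # accumulates int_0^T Theta_t dt

for n in range(n_steps):
    t = n * dt
    xi = probe(t)
    a_t = g / (1.0 + t) ** rho          # vanishing gain, rho in (0, 1)
    # Two-point gradient-free estimate; its temporal average over the
    # probing signal approximates grad L(theta).
    grad_est = xi * (loss(theta + eps * xi) - loss(theta - eps * xi)) / (2.0 * eps)
    # Euler step for the QSA ODE  d/dt Theta_t = -a_t * grad_est
    theta = theta - dt * a_t * grad_est
    rp_integral += theta * dt

theta_rp = rp_integral / T    # Ruppert-Polyak average Theta^RP_T
print("final estimate :", theta)
print("RP average     :", theta_rp)
print("true minimizer :", theta_star)
```

The final estimate converges at roughly the 1/t^ρ rate of (i); whether the averaged estimate attains the faster 1/T rate depends on whether the bias vector Y̅ vanishes, per (iii).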

