Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

02/02/2019
by Belhal Karimi, et al.

Stochastic approximation (SA) is a key method used in statistical learning. Recently, its non-asymptotic convergence analysis has been considered in many papers. However, most prior analyses are made under restrictive assumptions, such as unbiased gradient estimates and a convex objective function, which significantly limit their applicability to sophisticated tasks such as online and reinforcement learning. These restrictions are all essentially relaxed in this work. In particular, we analyze a general SA scheme to minimize a non-convex, smooth objective function. We consider an update procedure whose drift term depends on a state-dependent Markov chain and whose mean field is not necessarily of gradient type, thereby covering approximate second-order methods and allowing an asymptotic bias in the one-step updates. We illustrate these settings with the online EM algorithm and the policy-gradient method for average-reward maximization in reinforcement learning.
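The recursion in question is the classical SA update theta_{k+1} = theta_k - gamma_{k+1} * H(theta_k, X_{k+1}), where {X_k} is a (possibly state-dependent) Markov chain and H need not be an unbiased estimate of the mean field. As a rough illustration only, here is a minimal Python sketch of such a biased SA loop; the toy objective, the AR(1) noise chain, the constant bias term, and the step-size schedule are all assumptions chosen for simplicity, not the paper's setup:

```python
# Minimal sketch of a biased stochastic approximation (SA) loop.
# NOT the paper's algorithm: the objective, noise chain, bias, and
# step sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def grad_f(theta):
    # Gradient of a smooth, non-convex toy objective f(t) = t^4/4 - t^2/2.
    return theta**3 - theta

def drift(theta, x):
    # Noisy drift H(theta, x): the mean field plus Markovian noise x
    # and a small constant bias, so the one-step updates are biased.
    return grad_f(theta) + x + 0.01

theta, x = 2.0, 0.0
n_iters = 10_000
for k in range(1, n_iters + 1):
    x = 0.9 * x + rng.normal(scale=0.1)  # AR(1) Markov noise chain
    gamma = 0.5 / np.sqrt(k)             # diminishing step size
    theta -= gamma * drift(theta, x)     # theta_{k+1} = theta_k - gamma * H(theta_k, x_{k+1})

print(f"theta after {n_iters} steps: {theta:.3f}")
```

Because the noise is Markovian and the drift carries a constant bias, the iterates settle near, but not exactly at, a stationary point of f; quantifying the non-asymptotic behavior of exactly this kind of biased, Markovian-noise recursion is what the paper addresses.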


