Continuous-time Models for Stochastic Optimization Algorithms

10/05/2018
by Antonio Orvieto, et al.

We propose a new continuous-time formulation for first-order stochastic optimization algorithms such as mini-batch gradient descent and variance-reduced techniques. We exploit this continuous-time model, together with a simple Lyapunov analysis and tools from stochastic calculus, to derive convergence bounds for various classes of non-convex functions. We contrast these bounds with their known discrete-time equivalents and also derive new ones. Our model further covers SVRG, for which we derive a linear convergence rate for the class of weakly quasi-convex and quadratically growing functions.
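As an illustration of the kind of continuous-time model the abstract refers to, the sketch below runs mini-batch SGD on a least-squares problem next to an Euler-Maruyama simulation of the SDE dX_t = -∇f(X_t) dt + √η σ dW_t that is commonly used as a continuous-time approximation of SGD. The objective, step size η, and constant diffusion coefficient σ are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed setup, not the paper's exact model): mini-batch SGD on a
# least-squares objective vs. an Euler-Maruyama discretization of the SDE
#   dX_t = -grad f(X_t) dt + sqrt(eta) * sigma * dW_t.
import numpy as np

rng = np.random.default_rng(0)

# Finite-sum objective f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def full_grad(x):
    return A.T @ (A @ x - b) / n

def minibatch_grad(x, batch_size=10):
    idx = rng.choice(n, size=batch_size, replace=False)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch_size

eta = 0.01          # SGD step size, also used as the model's time step
noise_scale = 0.1   # assumed constant diffusion coefficient sigma
T = 500

# Discrete-time mini-batch SGD
x_sgd = np.zeros(d)
for _ in range(T):
    x_sgd -= eta * minibatch_grad(x_sgd)

# Euler-Maruyama simulation of the continuous-time model with dt = eta
x_sde = np.zeros(d)
for _ in range(T):
    dW = np.sqrt(eta) * rng.normal(size=d)        # Brownian increment over dt
    x_sde += -full_grad(x_sde) * eta + np.sqrt(eta) * noise_scale * dW

print("gradient norm, SGD iterate:", np.linalg.norm(full_grad(x_sgd)))
print("gradient norm, SDE path:   ", np.linalg.norm(full_grad(x_sde)))
```

Tracking a Lyapunov quantity such as f(X_t) or ‖∇f(X_t)‖² along the simulated path is the numerical counterpart of the Lyapunov analysis mentioned in the abstract.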

