Backward error analysis and the qualitative behaviour of stochastic optimization algorithms: Application to stochastic coordinate descent

by Stefano Di Giovacchino, et al.

Stochastic optimization methods have been hugely successful in making large-scale optimization problems feasible when computing the full gradient is computationally prohibitive. Using the theory of modified equations for numerical integrators, we propose a class of stochastic differential equations that approximate the dynamics of general stochastic optimization methods more closely than the original gradient flow. Analyzing a modified stochastic differential equation can reveal qualitative insights about the associated optimization method. Here, we study mean-square stability of the modified equation in the case of stochastic coordinate descent.
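To make the setting concrete, here is a minimal sketch of stochastic coordinate descent on a quadratic objective, with a Monte Carlo estimate of the mean-square norm of the iterates; mean-square stability corresponds to this quantity decaying. The objective f(x) = ½ xᵀAx, the diagonal matrix A, the step size, and all other parameters are illustrative choices, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative quadratic objective f(x) = 0.5 * x^T A x with gradient A x.
A = np.diag([1.0, 4.0])
h = 0.1          # step size (small enough for stability here)
n_steps = 200
n_runs = 500     # Monte Carlo runs to estimate E ||x_k||^2

msq = np.zeros(n_steps + 1)
for _ in range(n_runs):
    x = np.array([1.0, 1.0])
    msq[0] += x @ x
    for k in range(1, n_steps + 1):
        i = rng.integers(2)        # pick a coordinate uniformly at random
        x[i] -= h * (A @ x)[i]     # update only that coordinate of the gradient step
        msq[k] += x @ x
msq /= n_runs

# Mean-square stability: E ||x_k||^2 decays toward zero for this step size.
print(msq[0], msq[-1])
```

Each coordinate update multiplies x_i by (1 − h·A_ii) in expectation-free per-step terms, so for h·A_ii < 2 the iterates contract and the estimated mean-square norm shrinks; the paper's modified-SDE analysis makes this kind of stability statement precise for the randomized update.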


