SRKCD: a stabilized Runge-Kutta method for stochastic optimization

01/30/2022
by Tony Stillfjord, et al.

We introduce a family of stochastic optimization methods based on the Runge-Kutta-Chebyshev (RKC) schemes. The RKC methods are explicit methods originally designed for solving stiff ordinary differential equations by ensuring that their stability regions are of maximal size. In the optimization context, this allows for larger step sizes (learning rates) and better robustness compared to, e.g., the popular stochastic gradient descent (SGD) method. Our main contribution is a convergence proof for essentially all stochastic Runge-Kutta optimization methods. It shows convergence in expectation with an optimal sublinear rate under the standard assumptions of strong convexity and Lipschitz-continuous gradients. For non-convex objectives, we show that the expected gradient norms converge to zero. The proof requires certain natural conditions on the Runge-Kutta coefficients, and we further demonstrate that the RKC schemes satisfy these. Finally, we illustrate the improved stability properties of the methods in practice by performing numerical experiments on both a small-scale test example and a problem arising from an image classification application in machine learning.
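To make the construction concrete, the sketch below shows one stochastic RKC step applied to the gradient flow x'(t) = -∇f(x(t)), with a noisy mini-batch gradient in place of the exact one. The coefficient formulas follow the classical damped RKC construction of van der Houwen and Sommeijer; the specific parameter choices (stage count s, damping eps) and the names `rkc_coefficients` and `srkcd_step` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rkc_coefficients(s, eps=0.05):
    """Classical damped RKC coefficients (van der Houwen & Sommeijer).

    Assumption: the standard construction from first-kind Chebyshev
    polynomials T_j with damping parameter eps. Requires s >= 2.
    """
    w0 = 1.0 + eps / s**2
    # T_j(w0) and its first two derivatives via the three-term recursion.
    T, dT, ddT = np.zeros(s + 1), np.zeros(s + 1), np.zeros(s + 1)
    T[0], T[1], dT[1] = 1.0, w0, 1.0
    for j in range(2, s + 1):
        T[j] = 2.0 * w0 * T[j - 1] - T[j - 2]
        dT[j] = 2.0 * T[j - 1] + 2.0 * w0 * dT[j - 1] - dT[j - 2]
        ddT[j] = 4.0 * dT[j - 1] + 2.0 * w0 * ddT[j - 1] - ddT[j - 2]
    w1 = dT[s] / ddT[s]
    b = np.zeros(s + 1)
    b[2:] = ddT[2:] / dT[2:] ** 2
    b[0] = b[1] = b[2]
    a = 1.0 - b * T  # a_j = 1 - b_j * T_j(w0)
    return w0, w1, a, b

def srkcd_step(x, grad, h, s=5, eps=0.05):
    """One s-stage stochastic RKC step for the gradient flow x' = -grad(x).

    `grad` may return a stochastic (mini-batch) gradient estimate;
    h plays the role of the learning rate.
    """
    w0, w1, a, b = rkc_coefficients(s, eps)
    g0 = grad(x)  # base-point gradient, reused in every internal stage
    K_prev2, K_prev = x, x - h * b[1] * w1 * g0
    for j in range(2, s + 1):
        mu = 2.0 * b[j] * w0 / b[j - 1]
        nu = -b[j] / b[j - 2]
        mu_t = 2.0 * b[j] * w1 / b[j - 1]
        gamma_t = -a[j - 1] * mu_t
        K = ((1.0 - mu - nu) * x + mu * K_prev + nu * K_prev2
             - h * mu_t * grad(K_prev) - h * gamma_t * g0)
        K_prev2, K_prev = K_prev, K
    return K_prev  # the s-th stage is the new iterate
```

As a sanity check under these assumptions, the iteration can be run on a quadratic with artificially noisy gradients, using decreasing step sizes as in standard SGD analyses:

```python
rng = np.random.default_rng(0)
grad = lambda x: 2.0 * x + 0.1 * rng.standard_normal(x.shape)  # noisy gradient of ||x||^2
x = np.ones(3)
for k in range(200):
    x = srkcd_step(x, grad, h=0.5 / (k + 1), s=5)
```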
