Stochastic gradient algorithms from ODE splitting perspective

04/19/2020
by Daniil Merkulov et al.

We present a different view on stochastic optimization that goes back to splitting schemes for approximate solutions of ODEs. In this work, we provide a connection between the stochastic gradient descent approach and the first-order splitting scheme for ODEs. We consider a special case of splitting inspired by machine-learning applications and derive a new upper bound on its global splitting error. We show that the Kaczmarz method is the limit case of the splitting scheme for unit-batch SGD applied to the linear least squares problem. We support our findings with systematic empirical studies, which demonstrate that a more accurate solution of the local problems leads to step size robustness and yields better convergence, in both time and iterations, on the softmax regression problem.
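To make the Kaczmarz connection concrete, below is a minimal NumPy sketch (not taken from the paper; the problem size, step size, and helper names are illustrative assumptions). For the unit-batch least squares term f_i(x) = 0.5*(a_i^T x - b_i)^2, exactly integrating the local gradient flow x' = -(a_i^T x - b_i) a_i as t goes to infinity yields the orthogonal projection onto the hyperplane a_i^T x = b_i, which is exactly the Kaczmarz update; a finite SGD step only approximates that projection.

import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 10                      # illustrative problem size
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                     # consistent system, so zero residual is reachable

def kaczmarz_step(x, a, b_i):
    # Project x onto the hyperplane {z : a^T z = b_i}; this equals the
    # t -> infinity limit of the local gradient flow x' = -(a^T x - b_i) a.
    return x + (b_i - a @ x) / (a @ a) * a

def sgd_step(x, a, b_i, lr):
    # One unit-batch SGD step on f_i(x) = 0.5 * (a^T x - b_i)^2,
    # i.e. a single explicit Euler step of the same local flow.
    return x - lr * (a @ x - b_i) * a

x_k = np.zeros(n)
x_s = np.zeros(n)
for epoch in range(20):
    for i in rng.permutation(m):
        x_k = kaczmarz_step(x_k, A[i], b[i])
        x_s = sgd_step(x_s, A[i], b[i], lr=0.5 / (A[i] @ A[i]))

print("Kaczmarz residual:      ", np.linalg.norm(A @ x_k - b))
print("unit-batch SGD residual:", np.linalg.norm(A @ x_s - b))

On a consistent system, both iterations drive the residual toward zero, but the Kaczmarz update, being the exact solution of each local problem, does not depend on any step size choice; this mirrors the step size robustness claim in the abstract.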


Related research

03/23/2018
Byzantine Stochastic Gradient Descent
This paper studies the problem of distributed stochastic optimization in...

06/17/2021
Sub-linear convergence of a tamed stochastic gradient descent method in Hilbert space
In this paper, we introduce the tamed stochastic gradient descent method...

09/03/2019
Parameter Estimation in the Hermitian and Skew-Hermitian Splitting Method Using Gradient Iterations
This paper presents enhancement strategies for the Hermitian and skew-He...

03/23/2020
Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting
We propose signed splitting steepest descent (S3D), which progressively ...

08/22/2013
Online and stochastic Douglas-Rachford splitting method for large scale machine learning
Online and stochastic learning has emerged as powerful tool in large sca...

10/06/2019
Splitting Steepest Descent for Growing Neural Architectures
We develop a progressive training approach for neural networks which ada...

08/17/2020
Stochastic Optimization Forests
We study conditional stochastic optimization problems, where we leverage...
