Stochastic Learning under Random Reshuffling

03/21/2018
by Bicheng Ying, et al.

In empirical risk optimization, it has been observed that stochastic gradient implementations that rely on random reshuffling of the data achieve better performance than implementations that sample the data uniformly with replacement. Recent works have pursued justifications for this behavior by examining the convergence rate of the learning process under diminishing step-sizes. This work focuses instead on the constant step-size case, where convergence is guaranteed to a small neighborhood of the optimizer, at a linear rate. The analysis establishes analytically that random reshuffling outperforms uniform sampling: the iterates approach a smaller neighborhood of size O(μ^2) around the minimizer, rather than O(μ). Furthermore, we derive an analytical expression for the steady-state mean-square-error performance of the algorithm, which helps clarify in greater detail the differences between sampling with and without replacement. We also explain the periodic behavior that is observed in random reshuffling implementations.
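The contrast between the two sampling schemes can be illustrated with a minimal simulation. The sketch below (illustrative only; it is not the paper's experiment, and all function names, data, and parameter values are assumptions) runs constant step-size SGD on a synthetic least-squares problem, once with uniform sampling with replacement and once with random reshuffling, and compares the steady-state squared error around the empirical risk minimizer. Errors are measured at epoch boundaries, where the periodic behavior of random reshuffling places the iterate closest to the minimizer.

```python
import numpy as np

def sgd(X, y, mu, epochs, reshuffle, seed):
    """Constant step-size SGD on the empirical risk (1/2N)*||Xw - y||^2.

    reshuffle=True : one pass per epoch over a fresh random permutation
                     (sampling without replacement, i.e. random reshuffling).
    reshuffle=False: N uniform draws with replacement per epoch.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(N) if reshuffle else rng.integers(0, N, N)
        for i in order:
            w -= mu * (X[i] @ w - y[i]) * X[i]  # single-sample gradient step
    return w

# Synthetic regression data (illustrative, not from the paper).
rng = np.random.default_rng(0)
N, d = 200, 5
X = rng.standard_normal((N, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)
w_star = np.linalg.lstsq(X, y, rcond=None)[0]  # empirical risk minimizer

mu, epochs = 0.005, 150
# Average the squared distance to w_star over a few independent runs.
mse_rr = np.mean([np.sum((sgd(X, y, mu, epochs, True,  s) - w_star) ** 2)
                  for s in range(5)])
mse_ur = np.mean([np.sum((sgd(X, y, mu, epochs, False, s) - w_star) ** 2)
                  for s in range(5)])
print(mse_rr, mse_ur)
```

Under the paper's result, shrinking μ widens the gap between the two curves, since the reshuffling error scales as O(μ^2) while the uniform-sampling error scales as O(μ).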


Related research

- 11/14/2017: A Robust Variable Step Size Fractional Least Mean Square (RVSS-FLMS) Algorithm. "In this paper, we propose an adaptive framework for the variable step si..."
- 03/14/2016: On the Influence of Momentum Acceleration on Online Learning. "The article examines in some detail the convergence rate and mean-square..."
- 02/03/2019: Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning. "We consider the dynamics of a linear stochastic approximation algorithm ..."
- 08/04/2017: Convergence of Variance-Reduced Stochastic Learning under Random Reshuffling. "Several useful variance-reduced stochastic gradient algorithms, such as ..."
- 12/06/2018: On Uncensored Mean First-Passage-Time Performance Experiments with Multiwalk in R^p: a New Stochastic Optimization Algorithm. "A rigorous empirical comparison of two stochastic solvers is important w..."
- 07/15/2020: Incremental Without Replacement Sampling in Nonconvex Optimization. "Minibatch decomposition methods for empirical risk minimization are comm..."
- 12/04/2018: q-LMF: Quantum Calculus-based Least Mean Fourth Algorithm. "Channel estimation is an essential part of modern communication systems ..."
