
Shuffling Gradient-Based Methods with Momentum

by   Trang H. Tran, et al.

We combine two advanced ideas widely used in optimization for machine learning, the shuffling strategy and the momentum technique, to develop a novel shuffling gradient-based method with momentum for approximating a stationary point of non-convex finite-sum minimization problems. While our method is inspired by momentum techniques, its update differs significantly from existing momentum-based methods. We establish that our algorithm achieves a state-of-the-art convergence rate for both constant and diminishing learning rates under standard assumptions (i.e., L-smoothness and bounded variance). When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods; it covers the single-shuffling and incremental gradient schemes as special cases. We prove the same convergence rate for this algorithm under the L-smoothness and bounded gradient assumptions. We demonstrate our algorithms via numerical simulations on standard datasets and compare them with existing shuffling methods. Our tests show encouraging performance of the new algorithms.
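To make the setup concrete, the sketch below shows a generic shuffling gradient method with a heavy-ball-style momentum buffer on a toy finite-sum least-squares problem: each epoch visits the n component gradients in a fresh random permutation. This is an illustrative baseline only; the paper notes that its actual momentum update differs from classical schemes like this one, and all names and hyperparameters here are assumptions for the example.

```python
import numpy as np

def shuffled_momentum_gd(grad_i, x0, n, epochs=200, lr=0.01, beta=0.5, seed=0):
    """Illustrative shuffling gradient method with classical momentum.

    grad_i(x, i) returns the gradient of the i-th component function f_i at x.
    Each epoch processes all n components in a new random order (random
    reshuffling), applying a heavy-ball momentum buffer to each update.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)                    # momentum buffer
    for _ in range(epochs):
        for i in rng.permutation(n):        # reshuffle every epoch
            m = beta * m + grad_i(x, i)     # classical momentum update
            x = x - lr * m
    return x

# Toy finite sum: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]

x_hat = shuffled_momentum_gd(grad_i, np.zeros(5), n=50)
```

Under these (assumed) step sizes the iterate should drive the least-squares objective well below its value at the zero initialization; a diminishing learning rate per epoch, as analyzed in the paper, would replace the constant `lr` here.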
