Shuffling Gradient-Based Methods with Momentum

11/24/2020
by Trang H. Tran, et al.

We combine two advanced ideas widely used in optimization for machine learning, the shuffling strategy and the momentum technique, to develop a novel shuffling gradient-based method with momentum for approximating a stationary point of non-convex finite-sum minimization problems. While our method is inspired by momentum techniques, its update is significantly different from existing momentum-based methods. We establish that our algorithm achieves a state-of-the-art convergence rate for both constant and diminishing learning rates under standard assumptions (i.e., L-smoothness and bounded variance). When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods; it covers the single-shuffling and incremental gradient schemes as special cases. We prove the same convergence rate for this algorithm under the L-smoothness and bounded gradient assumptions. We demonstrate our algorithms via numerical simulations on standard datasets and compare them with existing shuffling methods. Our tests show encouraging performance for the new algorithms.
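
For intuition, the sketch below shows a generic single-shuffling SGD loop with classical (heavy-ball) momentum. It is a minimal illustration, not the authors' method: the abstract states that the paper's momentum update differs significantly from this standard form. The names `grad_i`, `lr`, and `beta` are assumptions introduced for the example.

```python
import numpy as np

def shuffling_momentum_sgd(grad_i, w0, n, epochs, lr=0.01, beta=0.9, rng=None):
    """Generic single-shuffling SGD with heavy-ball momentum.

    Illustrative sketch only -- the paper's actual momentum update is
    different from this classical form. `grad_i(w, i)` is assumed to
    return the gradient of the i-th component function f_i at w, where
    the objective is the finite sum F(w) = (1/n) * sum_i f_i(w).
    """
    rng = np.random.default_rng() if rng is None else rng
    w, v = w0.copy(), np.zeros_like(w0)
    perm = rng.permutation(n)      # single shuffling: one fixed permutation
    for _ in range(epochs):
        for i in perm:             # sweep the components in shuffled order
            v = beta * v + grad_i(w, i)   # momentum accumulation
            w = w - lr * v                # gradient step
    return w
```

In a single-shuffling scheme the permutation is drawn once and reused in every epoch; redrawing `perm` at the start of each epoch would instead give random reshuffling, while taking the identity permutation recovers the incremental gradient scheme.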


