Random-reshuffled SARAH does not need full gradient computations

11/26/2021
by Aleksandr Beznosikov, et al.

The StochAstic Recursive grAdient algoritHm (SARAH) is a variance-reduced variant of Stochastic Gradient Descent (SGD) that requires a full gradient of the objective function from time to time. In this paper, we remove the need for these full gradient computations. This is achieved by using a randomized reshuffling strategy and aggregating the stochastic gradients obtained in each epoch; the aggregated stochastic gradients then serve as an estimate of the full gradient in the SARAH recursion. We provide a theoretical analysis of the proposed approach and conclude with numerical experiments that demonstrate its efficiency.
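As a rough illustration of this mechanism (not the authors' reference code), the sketch below uses plain NumPy on a least-squares finite-sum objective; the names sarah_rr, grad_i, and the step size gamma are illustrative assumptions. Each epoch runs SARAH's recursive gradient update over a random permutation of the data and then reuses the epoch's averaged stochastic gradients as the anchor estimate for the next epoch, in place of a fresh full gradient.

```python
import numpy as np

def grad_i(w, A, b, i):
    # Stochastic gradient of the i-th term 0.5 * (a_i^T w - b_i)^2.
    a = A[i]
    return (a @ w - b[i]) * a

def sarah_rr(A, b, gamma=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    # Initial anchor: one full gradient at the starting point only.
    v = A.T @ (A @ w - b) / n
    for _ in range(epochs):
        perm = rng.permutation(n)   # random reshuffling of the data
        agg = np.zeros(d)           # running sum of this epoch's stochastic gradients
        w_prev = w.copy()
        for i in perm:
            g_new = grad_i(w, A, b, i)
            agg += g_new
            # SARAH recursion: v_t = g_i(w_t) - g_i(w_{t-1}) + v_{t-1}
            v = g_new - grad_i(w_prev, A, b, i) + v
            w_prev = w.copy()
            w = w - gamma * v
        # Epoch-averaged stochastic gradients replace the full gradient as the next anchor.
        v = agg / n
    return w

# Toy usage on synthetic data.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = A @ rng.standard_normal(10) + 0.01 * rng.standard_normal(200)
w_hat = sarah_rr(A, b, gamma=0.01, epochs=30)
print(np.linalg.norm(A @ w_hat - b))
```

In this sketch the only full gradient is the one at initialization; every subsequent anchor comes from the aggregated stochastic gradients collected during the previous epoch, which is the point of the method.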

