A unified variance-reduced accelerated gradient method for convex optimization

05/29/2019
by Guanghui Lan, et al.

We propose a novel randomized incremental gradient algorithm, namely, VAriance-Reduced Accelerated Gradient (Varag), for finite-sum optimization. Equipped with a unified step-size policy that adjusts itself to the value of the condition number, Varag exhibits unified optimal rates of convergence for solving smooth convex finite-sum problems directly, regardless of their strong convexity. Moreover, Varag is the first method of its kind that benefits from the strong convexity of the data-fidelity term, and it can solve a wide class of problems satisfying only an error bound condition rather than strong convexity, achieving the optimal linear rate of convergence in both cases. Varag can also be extended to solve stochastic finite-sum problems.
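
For intuition, here is a minimal Python sketch of the variance-reduction-plus-acceleration template that Varag belongs to, applied to a toy least-squares finite sum. The coupling weight tau, the step size eta, and the helper varag_sketch are simplified illustrative choices, not the paper's unified step-size policy, which adapts these quantities to the condition number across epochs.

    import numpy as np

    def varag_sketch(A, b, epochs=30, seed=0):
        """Sketch of accelerated variance reduction on
        f(x) = (1/n) * sum_i (a_i^T x - b_i)^2 / 2 (illustrative only)."""
        rng = np.random.default_rng(seed)
        n, d = A.shape
        L = np.max(np.sum(A ** 2, axis=1))   # smoothness bound for each f_i
        tau, eta = 0.5, 1.0 / (3.0 * L)      # simplified momentum/step choices

        y = z = x_snap = np.zeros(d)
        for _ in range(epochs):
            # Full gradient at the snapshot, computed once per epoch.
            g_snap = A.T @ (A @ x_snap - b) / n
            for _ in range(n):
                i = rng.integers(n)
                # Nesterov-style extrapolation between the two sequences.
                x = tau * z + (1.0 - tau) * y
                # Variance-reduced estimator of grad f(x):
                #   grad f_i(x) - grad f_i(x_snap) + grad f(x_snap).
                g = A[i] * (A[i] @ x - b[i]) - A[i] * (A[i] @ x_snap - b[i]) + g_snap
                z = z - (eta / tau) * g      # "mirror" (long-step) sequence
                y = x - eta * g              # gradient-step sequence
            x_snap = y                        # refresh the snapshot
        return y

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 20))
        b = rng.standard_normal(200)
        x_hat = varag_sketch(A, b)
        x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("||x_hat - x_star|| =", np.linalg.norm(x_hat - x_star))

The long step eta/tau taken by the z sequence is what distinguishes accelerated variance reduction from plain SVRG, which uses a single iterate sequence with step eta.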

Related research

02/07/2022
Nesterov Accelerated Shuffling Gradient Method for Convex Optimization
In this paper, we propose Nesterov Accelerated Shuffling Gradient (NASG)...

09/18/2021
An Accelerated Variance-Reduced Conditional Gradient Sliding Algorithm for First-order and Zeroth-order Optimization
The conditional gradient algorithm (also known as the Frank-Wolfe algori...

06/10/2018
Dissipativity Theory for Accelerating Stochastic Variance Reduction: A Unified Analysis of SVRG and Katyusha Using Semidefinite Programs
Techniques for reducing the variance of gradient estimates used in stoch...

06/06/2017
Limitations on Variance-Reduction and Acceleration Schemes for Finite Sum Optimization
We study the conditions under which one is able to efficiently apply var...

02/27/2020
On the Convergence of Nesterov's Accelerated Gradient Method in Stochastic Settings
We study Nesterov's accelerated gradient method in the stochastic approx...

01/27/2019
Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the O(1/T) Convergence Rate
Stochastic approximation (SA) is a classical approach for stochastic con...

04/15/2023
Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
We study finite-sum distributed optimization problems with n-clients und...
