
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives

06/05/2015
by Zeyuan Allen-Zhu et al.

Many classical algorithms are found, only years later, to outlive the confines in which they were conceived and to remain relevant in unforeseen settings. In this paper, we show that SVRG is one such method: although originally designed for strongly convex objectives, it is also very robust in non-strongly convex or sum-of-non-convex settings. More precisely, we provide new analyses that improve the state-of-the-art running times in both settings, by applying either SVRG or a novel variant of it. Since non-strongly convex objectives include important examples such as Lasso and logistic regression, and sum-of-non-convex objectives include famous examples such as stochastic PCA and are even believed to be related to training deep neural nets, our results also imply better performance in these applications.
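For orientation, the analyses above build on the basic SVRG recurrence: take periodic full-gradient snapshots and use them to variance-reduce each stochastic step. Below is a minimal sketch of that recurrence (Johnson & Zhang's original scheme, not the paper's improved variant), run on a toy least-squares finite sum; the `svrg` function, its parameters, and the test problem are all illustrative assumptions.

```python
import numpy as np

def svrg(grad_i, n, w0, step=0.01, epochs=20, inner=None, rng=None):
    """Basic SVRG: full-gradient snapshots plus variance-reduced inner steps.
    grad_i(w, i) returns the gradient of the i-th component function at w."""
    rng = np.random.default_rng(rng)
    inner = inner or 2 * n          # common choice: O(n) inner steps per epoch
    w_snap = w0.copy()
    for _ in range(epochs):
        # Full gradient at the snapshot: the anchor for variance reduction.
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        w = w_snap.copy()
        for _ in range(inner):
            i = rng.integers(n)
            # Unbiased gradient estimate whose variance shrinks as
            # w approaches the snapshot point.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - step * g
        w_snap = w                  # refresh the snapshot
    return w_snap

# Toy finite sum: f(w) = (1/n) * sum_i (a_i . w - b_i)^2 / 2
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
b = A @ w_star
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]

w_hat = svrg(grad_i, n, np.zeros(d), step=0.01, epochs=20, rng=1)
print("distance to optimum:", np.linalg.norm(w_hat - w_star))
```

The snapshot gradient `mu` is what distinguishes SVRG from plain SGD: each inner step stays unbiased, but its variance vanishes near the snapshot, which is what permits a constant step size.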



Related research

02/19/2020 · A Unified Convergence Analysis for Shuffling-Type Gradient Methods
In this paper, we provide a unified convergence analysis for a class of ...

02/19/2017 · SAGA and Restricted Strong Convexity
SAGA is a fast incremental gradient method on the finite sum problem and...

07/01/2016 · Convergence Rate of Frank-Wolfe for Non-Convex Objectives
We give a simple proof that the Frank-Wolfe algorithm obtains a stationa...

09/08/2017 · A Modular Analysis of Adaptive (Non-)Convex Optimization: Optimism, Composite Objectives, and Variational Bounds
Recently, much work has been done on extending the scope of online learn...

10/12/2022 · Momentum Aggregation for Private Non-convex ERM
We introduce new algorithms and convergence guarantees for privacy-prese...

11/07/2016 · Linear Convergence of SVRG in Statistical Estimation
SVRG and its variants are among the state-of-the-art optimization algorithms...