SAGA and Restricted Strong Convexity

02/19/2017
by Chao Qu, et al.

SAGA is a fast incremental gradient method for finite-sum problems, and its effectiveness has been demonstrated on a wide range of applications. In this paper, we analyze SAGA on a class of non-strongly convex and non-convex statistical problems, such as Lasso, group Lasso, logistic regression with ℓ_1 regularization, linear regression with SCAD regularization, and corrected Lasso. We prove that SAGA enjoys a linear convergence rate up to the statistical estimation accuracy, under the assumption of restricted strong convexity (RSC). This significantly extends the applicability of SAGA in both convex and non-convex optimization.
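To make the setting concrete, below is a minimal sketch of proximal SAGA applied to the Lasso, one of the problems the abstract lists: each component function is f_i(x) = (1/2)(a_i^T x - b_i)^2 and the ℓ_1 penalty is handled by a soft-thresholding proximal step. The function name `prox_saga_lasso`, the step-size rule of roughly 1 / (3 · max_i ||a_i||^2), and all other details are illustrative assumptions, not the paper's exact algorithmic setting.

```python
import numpy as np

def prox_saga_lasso(A, b, lam, step, n_iters=10_000, seed=0):
    """Proximal SAGA sketch for min_x (1/2n)||Ax - b||^2 + lam * ||x||_1.

    Illustrative only; a `step` on the order of 1 / (3 * max_i ||a_i||^2)
    is the classic SAGA tuning, not a guarantee from this paper.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # SAGA memory: one stored gradient per component, plus their average.
    grads = A * (A @ x - b)[:, None]          # grads[i] = a_i (a_i^T x - b_i)
    g_avg = grads.mean(axis=0)

    def soft(z, t):
        # Proximal operator of t * ||.||_1 (soft-thresholding).
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    for _ in range(n_iters):
        j = rng.integers(n)
        g_j = A[j] * (A[j] @ x - b[j])        # fresh gradient of component j
        v = g_j - grads[j] + g_avg            # unbiased, variance-reduced direction
        x = soft(x - step * v, step * lam)    # gradient step + prox of the l1 term
        g_avg += (g_j - grads[j]) / n         # keep the running average in sync
        grads[j] = g_j                        # refresh the memory for component j
    return x
```

Because the stored-gradient table shrinks the variance of the update direction, SAGA can run with a constant step size; the paper's RSC analysis is what justifies linear convergence up to statistical accuracy here, even though the Lasso objective is not strongly convex.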


Related research

research · 01/26/2017
Linear convergence of SDCA in statistical estimation
In this paper, we consider stochastic dual coordinate (SDCA) without st...

research · 11/07/2016
Linear Convergence of SVRG in Statistical Estimation
SVRG and its variants are among the state-of-the-art optimization algorithms...

research · 02/19/2020
A Unified Convergence Analysis for Shuffling-Type Gradient Methods
In this paper, we provide a unified convergence analysis for a class of ...

research · 07/14/2023
Performance of ℓ_1 Regularization for Sparse Convex Optimization
Despite widespread adoption in practice, guarantees for the LASSO and Gr...

research · 04/22/2016
Non-convex Global Minimization and False Discovery Rate Control for the TREX
The TREX is a recently introduced method for performing sparse high-dime...

research · 06/05/2015
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives
Many classical algorithms are found until several years later to outlive...

research · 05/16/2015
A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives
In this paper we analyze boosting algorithms in linear regression from a...
