Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization

05/11/2017
by Yue Yu, et al.

We consider the stochastic composition optimization problem proposed in wang2017stochastic, which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm for this problem, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives. When the objective is convex and Lipschitz smooth, it achieves a convergence rate of O(log S / S), which improves upon the O(S^{-4/9}) rate of wang2016accelerating; when the objective is convex but not Lipschitz smooth, it achieves a rate of O(1/√S). We also conduct experiments showing that com-SVR-ADMM outperforms existing algorithms.
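
To make the problem class concrete, the sketch below illustrates the nested objective F(x) = E_v[f_v(E_w[g_w(x)])] that defines stochastic composition optimization, and why a naive single-sample gradient is biased, which is the difficulty that variance-reduction-based methods such as com-SVR-ADMM are designed to handle. This is an illustration only, not the authors' algorithm; the linear inner maps g_w, the quadratic outer loss, and all constants are assumptions chosen here for concreteness.

```python
# Illustrative sketch only -- not the authors' com-SVR-ADMM. The linear inner
# maps g_w(x) = A_w x, the quadratic outer loss f(y) = 0.5*||y - b||^2, and all
# constants below are assumptions chosen for concreteness.
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 5, 3, 100                 # x in R^d, inner map into R^p, n inner samples
A = rng.standard_normal((n, p, d))  # g_w(x) = A[w] @ x,  g(x) = E_w[g_w(x)]
b = rng.standard_normal(p)          # outer loss f(y) = 0.5 * ||y - b||^2

def stochastic_composition_grad(x, idx):
    """Chain-rule gradient estimate J_g(x)^T grad_f(g(x)) from one mini-batch.

    Reusing the same mini-batch for the Jacobian and for the inner value g(x)
    makes this estimator biased (E[J^T J] != (E J)^T (E J)); tracking the inner
    value with a separate, variance-reduced estimate is the standard remedy in
    stochastic composition methods.
    """
    Jg = A[idx].mean(axis=0)        # mini-batch Jacobian of the inner map
    y = Jg @ x                      # mini-batch estimate of the inner value g(x)
    return Jg.T @ (y - b)           # J^T grad_f(y)

# Baseline: plain stochastic compositional gradient descent on F(x) = f(g(x)).
x = np.zeros(d)
for t in range(300):
    idx = rng.choice(n, size=10, replace=False)
    x -= 0.05 * stochastic_composition_grad(x, idx)
```

Broadly, com-SVR-ADMM combines this composition structure with ADMM splitting and SVRG-style variance reduction of the inner-value and gradient estimates, which is what yields the improved rates quoted in the abstract; the details are in the full paper.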


