
Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization

by Yue Yu, et al. (Tsinghua University)

We consider the stochastic composition optimization problem proposed in Wang et al. (2017), which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives. It also achieves a convergence rate of O(log S / S), which improves upon the O(S^(-4/9)) rate in Wang et al. (2016) when the objective is convex and Lipschitz smooth. Moreover, com-SVR-ADMM attains a rate of O(1/√S) when the objective is convex but not Lipschitz smooth. We also conduct experiments and show that com-SVR-ADMM outperforms existing algorithms.
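To make the setting concrete, the sketch below illustrates the core idea behind variance-reduced stochastic composition methods: an SVRG-style estimator that corrects a per-sample gradient of F(x) = f(g(x)), with g(x) = (1/n) Σ_j g_j(x), using full quantities computed at a periodic snapshot. This is a minimal illustration, not the paper's com-SVR-ADMM (there are no linear constraints or dual updates here); the linear maps g_j(x) = A_j x, the quadratic outer function f(y) = 0.5‖y − b‖², and all names (A, b, eta) are illustrative choices made so the optimum is available in closed form.

```python
import numpy as np

# Hedged sketch of an SVRG-style variance-reduced gradient for a
# composition objective F(x) = f(g(x)), g(x) = (1/n) sum_j g_j(x),
# specialized to g_j(x) = A_j @ x and f(y) = 0.5 * ||y - b||^2.
# Illustrative only; not the ADMM algorithm from the paper.

rng = np.random.default_rng(0)
n, d = 20, 5
A = np.eye(d) + 0.1 * rng.normal(size=(n, d, d))  # per-sample maps g_j
b = rng.normal(size=d)
A_bar = A.mean(axis=0)                            # g(x) = A_bar @ x

def full_grad(x):
    """Exact gradient of F(x) = 0.5 * ||A_bar @ x - b||^2."""
    return A_bar.T @ (A_bar @ x - b)

x = np.zeros(d)
eta = 0.4
for epoch in range(50):
    x_snap = x.copy()
    g_snap = A_bar @ x_snap      # full inner-function value at the snapshot
    mu = full_grad(x_snap)       # full gradient at the snapshot
    for _ in range(n):
        j = rng.integers(n)
        # variance-reduced estimate of the inner value g(x)
        g_est = g_snap + A[j] @ (x - x_snap)
        # SVRG-style estimator: per-sample gradient at x, minus the same
        # per-sample gradient at the snapshot, plus the full gradient mu
        v = A[j].T @ (g_est - b) - A[j].T @ (g_snap - b) + mu
        x = x - eta * v

# closed-form minimizer of F for comparison
x_star = np.linalg.lstsq(A_bar, b, rcond=None)[0]
```

Because the correction term vanishes at the snapshot, the estimator's variance shrinks as the iterates approach the optimum, which is what enables the faster rates discussed in the abstract.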



