Convergence Revisit on Generalized Symmetric ADMM

06/19/2019
by Jianchao Bai, et al.

In this note, we establish a sublinear nonergodic convergence rate for the algorithm developed in [Bai et al., Generalized symmetric ADMM for separable convex optimization, Comput. Optim. Appl. 70, 129-170 (2018)], as well as its linear convergence under the assumptions that the subdifferential of each component objective function is piecewise linear and all the constraint sets are polyhedral. These convergence results hold when the stepsize parameters of the dual variables belong to a special isosceles-triangle region, and they strengthen our understanding of the convergence of the generalized symmetric ADMM.
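
To make concrete which quantities the "stepsize parameters of the dual variables" are, below is a minimal numerical sketch of a symmetric ADMM iteration that performs two dual updates per cycle, with stepsizes tau and s. The toy two-block problem, the penalty parameter beta, the chosen stepsize values, and the name symmetric_admm are all illustrative assumptions, not the setting or notation of the cited paper; the admissible (tau, s) region is characterized there.

```python
# A minimal sketch of symmetric ADMM with two dual stepsizes (tau, s),
# in the spirit of the GS-ADMM discussed above.  The toy problem and all
# parameter values are illustrative assumptions:
#   minimize 0.5*||x - p||^2 + 0.5*||y - q||^2   subject to  x + y = b.
import numpy as np

def symmetric_admm(p, q, b, beta=1.0, tau=0.9, s=0.9, iters=200):
    x = np.zeros_like(p)
    y = np.zeros_like(q)
    lam = np.zeros_like(b)  # Lagrange multiplier for x + y = b
    for _ in range(iters):
        # x-subproblem of the augmented Lagrangian (closed form here)
        x = (p - lam + beta * (b - y)) / (1.0 + beta)
        # first dual update, with stepsize tau
        lam = lam + tau * beta * (x + y - b)
        # y-subproblem
        y = (q - lam + beta * (b - x)) / (1.0 + beta)
        # second dual update, with stepsize s
        lam = lam + s * beta * (x + y - b)
    return x, y, lam

rng = np.random.default_rng(0)
p, q, b = rng.standard_normal((3, 5))
x, y, lam = symmetric_admm(p, q, b)
print("feasibility residual:", np.linalg.norm(x + y - b))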


Related research

03/30/2021  Convergence on a symmetric accelerated stochastic ADMM with larger stepsizes
In this paper, we develop a symmetric accelerated stochastic Alternating...

10/22/2015  Generalized conditional gradient: analysis of convergence and applications
The objective of this technical report is to provide additional results...

10/10/2019  Understanding Limitation of Two Symmetrized Orders by Worst-case Complexity
It was recently found that the standard version of multi-block cyclic AD...

05/19/2016  Randomized Primal-Dual Proximal Block Coordinate Updates
In this paper we propose a randomized primal-dual proximal block coordin...

04/21/2021  Fixed-Point and Objective Convergence of Plug-and-Play Algorithms
A standard model for image reconstruction involves the minimization of a...

09/01/2019  Accelerating ADMM for Efficient Simulation and Optimization
The alternating direction method of multipliers (ADMM) is a popular appr...

05/24/2019  Tight Linear Convergence Rate of ADMM for Decentralized Optimization
The present paper considers leveraging network topology information to i...
