On the Convergence of ADMM with Task Adaption and Beyond

09/24/2019
by   Risheng Liu, et al.

Along with the development of learning and vision, the Alternating Direction Method of Multipliers (ADMM) has become a popular algorithm for separable optimization models with linear constraints. However, ADMM and its numerical variants (e.g., inexact, proximal, or linearized) struggle to achieve state-of-the-art performance on complex learning and vision tasks due to their weak task-adaption ability. Recently, there has been increasing interest in incorporating task-specific computational modules (e.g., designed filters or learned architectures) into ADMM iterations. Unfortunately, these task-related modules introduce uncontrolled and unstable iterative flows and also break the structure of the original optimization model, so existing theoretical investigations are invalid for the resulting task-specific iterations. In this paper, we develop a simple and generic proximal ADMM framework that incorporates flexible task-specific modules for learning and vision problems. We rigorously prove convergence in both the objective function value and the constraint violation, and provide the worst-case convergence rate measured by iteration complexity. Our investigation not only develops new perspectives for analyzing task-adaptive ADMM but also supplies meaningful guidelines for designing practical optimization methods for real-world applications. Numerical experiments are conducted to verify the theoretical results and demonstrate the efficiency of our algorithmic framework.
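To make the idea concrete, below is a minimal sketch of a proximal-ADMM-style iteration in which one subproblem can be replaced by a task-specific module. The problem instance (a lasso-type model min 0.5||Ax - b||^2 + lam||z||_1 subject to x - z = 0), the function names (proximal_admm, soft_threshold), and the task_module interface are illustrative assumptions for exposition, not the paper's exact formulation or algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    # Classical proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_admm(A, b, lam=0.1, rho=1.0, n_iter=100, task_module=None):
    """Sketch: ADMM for min 0.5||Ax - b||^2 + lam||z||_1  s.t.  x - z = 0.

    If `task_module` is given, it replaces the analytic z-update with a
    task-specific operator z = task_module(x + u) (plug-and-play style),
    which is the kind of substitution the paper's analysis targets.
    """
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u is the scaled dual variable
    # x-update solves (A^T A + rho I) x = A^T b + rho (z - u); factor once.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))          # x-update
        v = x + u
        z = task_module(v) if task_module else soft_threshold(v, lam / rho)  # z-update
        u = u + x - z                                              # dual update
    return x, z

# Usage sketch:
# rng = np.random.default_rng(0)
# A, b = rng.normal(size=(50, 100)), rng.normal(size=50)
# x, z = proximal_admm(A, b)            # analytic prox
# x, z = proximal_admm(A, b, task_module=lambda v: soft_threshold(v, 0.05))  # swapped module
```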

