Differentiable Linearized ADMM

05/15/2019
by Xingyu Xie, et al.

Recently, a number of learning-based optimization methods that combine data-driven architectures with classical optimization algorithms have been proposed and explored, showing superior empirical performance on various ill-posed inverse problems. However, rigorous analysis of the convergence behavior of learning-based optimization remains scarce. In particular, most existing analyses are specific to unconstrained problems and do not apply to the more general setting where some variables of interest are subject to constraints. In this paper, we propose Differentiable Linearized ADMM (D-LADMM) for solving problems with linear constraints. Specifically, D-LADMM is a K-layer, LADMM-inspired deep neural network obtained by first introducing learnable weights into the classical Linearized ADMM algorithm and then generalizing the proximal operator to a learnable activation function. Notably, we rigorously prove that there exists a set of learnable parameters for which D-LADMM generates globally convergent solutions, and we show that those desired parameters can be attained by training D-LADMM in a proper way. To the best of our knowledge, we are the first to provide a convergence analysis for learning-based optimization methods on constrained problems.
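To make the unrolled-layer idea concrete, here is a minimal NumPy sketch of one LADMM-style layer applied to a toy linearly constrained problem, min ||x||_1 + ½||z||² s.t. Ax + z = b. The toy problem, the variable names, and the parameter values below are our own illustration, not the exact formulation from the paper: the weight matrix W, step size tau, and soft-threshold level theta are the quantities a D-LADMM-style network would learn per layer; setting W = A and stacking K identical layers simply recovers the classical Linearized ADMM iteration.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; D-LADMM generalizes this
    # to a learnable activation function.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dladmm_layer(x, z, lam, A, W, b, beta, tau, theta):
    # One unrolled layer. With W = A this is exactly one classical
    # linearized ADMM iteration; in a learned network, W and theta
    # would be trainable and differ across layers.
    r = A @ x + z - b                                   # constraint residual
    x = soft_threshold(x - tau * (W.T @ (lam + beta * r)), tau * theta)
    # Closed-form z-update for g(z) = 0.5 * ||z||^2.
    z = (beta * (b - A @ x) - lam) / (1.0 + beta)
    lam = lam + beta * (A @ x + z - b)                  # dual ascent step
    return x, z, lam

# Toy instance with A = I: min ||x||_1 + 0.5*||z||^2  s.t.  x + z = b,
# whose x-solution is the soft-thresholding of b at level 1.
A = np.eye(3)
b = np.array([2.0, -0.1, 0.5])
x, z, lam = np.zeros(3), np.zeros(3), np.zeros(3)
for _ in range(300):                                    # K = 300 "layers"
    x, z, lam = dladmm_layer(x, z, lam, A, W=A, b=b,
                             beta=1.0, tau=0.5, theta=1.0)

residual = np.linalg.norm(A @ x + z - b)
```

With fixed classical parameters the constraint residual shrinks toward zero and x approaches soft(b, 1) = [1, 0, 0]; the point of D-LADMM is that training W and theta per layer can reach comparable accuracy in far fewer layers while, per the paper's analysis, still admitting convergence guarantees.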
