Bilevel Integrative Optimization for Ill-posed Inverse Problems

07/06/2019
by   Risheng Liu, et al.

Classical optimization techniques typically encode the feasibility of a problem as set, equality, or inequality constraints. However, explicitly designing such constraints is challenging for complex real-world applications, and overly strict constraints may even render the optimization problem intractable. On the other hand, it remains difficult to incorporate data-dependent information into conventional numerical iterations. To partially address these limitations, and inspired by the leader-follower gaming perspective, this work first introduces a bilevel-type formulation that jointly investigates the feasibility and optimality of nonconvex and nonsmooth optimization problems. We then develop an algorithmic framework that couples forward-backward proximal computations to optimize the established bilevel leader-follower model, prove its convergence, and estimate its convergence rate. Furthermore, a learning-based extension is developed, in which an unrolling strategy incorporates data-dependent network architectures into our iterations. We prove that, under mild checking conditions, all the original convergence results are preserved for this learnable extension. As a nontrivial byproduct, we demonstrate how to apply this ensemble-like methodology to different low-level vision tasks. Extensive experiments verify the theoretical results and show the advantages of our method over existing state-of-the-art approaches.
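To illustrate the forward-backward proximal computations the abstract refers to, here is a minimal sketch of classical forward-backward splitting on the standard nonsmooth inverse-problem template min_x 0.5||Ax - b||^2 + lam||x||_1. This is not the paper's bilevel algorithm; all names (`A`, `b`, `lam`, `step`) are hypothetical, and it only shows the generic gradient-then-prox iteration that such frameworks build on.

```python
# Illustrative sketch (not the paper's exact algorithm): forward-backward
# splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (the "backward" step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, n_iters=200):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L = ||A||_2^2
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step
    return x
```

In a learning-based unrolled extension of the kind the abstract describes, the fixed proximal operator would be replaced by a data-dependent network at each iteration, with extra checking conditions used to retain the convergence guarantees.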


Related research

10/18/2019 · Investigating Task-driven Latent Feasibility for Nonconvex Image Modeling
Properly modeling the latent image distributions always plays a key role...

04/28/2018 · Toward Designing Convergent Deep Operator Splitting Methods for Task-specific Nonconvex Optimization
Operator splitting methods have been successfully used in computational ...

08/16/2018 · On the Convergence of Learning-based Iterative Methods for Nonconvex Inverse Problems
Numerous tasks at the core of statistics, learning and vision areas are ...

09/24/2019 · On the Convergence of ADMM with Task Adaption and Beyond
Along with the development of learning and vision, Alternating Direction...

05/15/2019 · Differentiable Linearized ADMM
Recently, a number of learning-based optimization methods that combine d...

11/21/2017 · Proximal Alternating Direction Network: A Globally Converged Deep Unrolling Framework
Deep learning models have gained great success in many real-world applic...

04/25/2021 · DC3: A learning method for optimization with hard constraints
Large optimization problems with hard constraints arise in many settings...
