Toward Designing Convergent Deep Operator Splitting Methods for Task-specific Nonconvex Optimization

04/28/2018
by Risheng Liu, et al.

Operator splitting methods have been successfully used in computational science, statistics, learning, and vision to reduce complex problems to a series of simpler subproblems. However, prevalent splitting schemes are mostly derived from the mathematical properties of general optimization models alone, so obtaining practical, task-specific solutions is a laborious process that often requires many rounds of ideation and validation, especially for nonconvex problems in real-world scenarios. To break through these limits, we introduce a new algorithmic framework, called Learnable Bregman Splitting (LBS), which performs deep-architecture-based operator splitting for nonconvex optimization tailored to a specific task model. Thanks to its data-dependent (i.e., learnable) nature, LBS not only speeds up convergence but also avoids unwanted trivial solutions in real-world tasks. Even with inexact deep iterations, we establish the global convergence of LBS and estimate its asymptotic convergence rate under fairly loose assumptions. Extensive experiments on different applications (e.g., image completion and deblurring) verify our theoretical results and show the superiority of LBS over existing methods.
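
The abstract does not spell out the LBS iteration itself, but the core idea of letting a learnable module stand in for part of a classical splitting update can be illustrated concretely. Below is a minimal PyTorch sketch, assuming a composite model min_x f(x) + g(x) solved by forward-backward splitting; the network `net`, the energy-decrease safeguard, and all names are illustrative assumptions, not the paper's actual construction.

```python
# Minimal sketch of a "learnable splitting" iteration, assuming the
# composite model  min_x f(x) + g(x)  with f smooth and g prox-friendly.
# `net`, `energy`, and the safeguard rule are illustrative assumptions,
# not the paper's actual LBS scheme.
import torch
import torch.nn as nn

def learnable_splitting_step(x, grad_f, prox_g, net, energy, step):
    """One forward-backward step whose proximal half may be replaced
    by a learned module, guarded by an energy-decrease check."""
    z = x - step * grad_f(x)       # forward (gradient) half-step
    x_exact = prox_g(z, step)      # classical proximal half-step
    x_deep = net(z)                # learnable, possibly inexact, update
    # Safeguard: accept the deep proposal only if it does not raise the
    # objective; otherwise fall back to the provably convergent step.
    return x_deep if energy(x_deep) <= energy(x_exact) else x_exact

# Toy instance: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1 (LASSO-like).
torch.manual_seed(0)
A, b, lam = torch.randn(20, 10), torch.randn(20), 0.1
grad_f = lambda x: A.t() @ (A @ x - b)
prox_g = lambda z, t: torch.sign(z) * torch.clamp(z.abs() - t * lam, min=0.0)
energy = lambda x: 0.5 * (A @ x - b).pow(2).sum() + lam * x.abs().sum()
net = nn.Sequential(nn.Linear(10, 10), nn.Tanh(), nn.Linear(10, 10))

x = torch.zeros(10)
with torch.no_grad():
    for _ in range(100):
        x = learnable_splitting_step(x, grad_f, prox_g, net, energy, step=0.01)
print(f"final objective: {float(energy(x)):.4f}")
```

In the paper's setting the update is additionally built on a Bregman distance and the analysis covers nonconvex objectives; this sketch only shows the generic accept-or-fallback mechanism by which inexact deep iterations can coexist with a convergence guarantee.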

Related research

07/06/2019 · Bilevel Integrative Optimization for Ill-posed Inverse Problems
Classical optimization techniques often formulate the feasibility of the...

09/24/2019 · On the Convergence of ADMM with Task Adaption and Beyond
Along with the development of learning and vision, Alternating Direction...

08/16/2018 · On the Convergence of Learning-based Iterative Methods for Nonconvex Inverse Problems
Numerous tasks at the core of statistics, learning and vision areas are ...

02/28/2017 · An Optimization Framework with Flexible Inexact Inner Iterations for Nonconvex and Nonsmooth Programming
In recent years, numerous vision and learning tasks have been (re)formul...

11/11/2021 · Nonconvex flexible sparsity regularization: theory and monotone numerical schemes
Flexible sparsity regularization means stably approximating sparse solut...

07/18/2023 · Connections between Operator-splitting Methods and Deep Neural Networks with Applications in Image Segmentation
Deep neural network is a powerful tool for many tasks. Understanding why...

09/01/2011 · Nonconvex proximal splitting: batch and incremental algorithms
Within the unmanageably large class of nonconvex optimization, we consid...
