Investigating Task-driven Latent Feasibility for Nonconvex Image Modeling

10/18/2019
by   Risheng Liu, et al.

Properly modeling latent image distributions plays a key role in a variety of low-level vision problems. Most existing approaches, such as Maximum A Posteriori (MAP) estimation, aim to establish optimization models with prior regularization. However, designing sophisticated priors can lead to challenging optimization models and time-consuming iterative processes. Recent studies have tried to embed learnable network architectures into the MAP scheme. Unfortunately, for MAP models with deeply trained priors, the exact behavior and the inference process are hard to analyze, due to their inexact and uncontrolled nature. In this work, by investigating task-driven latent feasibility for the MAP-based model, we provide a new perspective for enforcing domain knowledge and data distributions in MAP-based image modeling. Specifically, we first introduce an energy-based feasibility constraint to the given MAP model. By applying a proximal gradient updating scheme to the objective and performing an adaptive averaging process, we obtain a completely new MAP inference process, named Proximal Average Optimization (PAO), for image modeling. Owing to the flexibility of PAO, we can also incorporate deeply trained architectures into the feasibility module. Finally, we provide a simple monotone-descent-based control mechanism to guide the propagation of PAO. We prove that the sequences generated by both PAO and its learning-based extension converge to a critical point of the original MAP optimization task. We demonstrate how to apply our framework to different vision applications. Extensive experiments verify the theoretical results and show the advantages of our method over existing state-of-the-art approaches.
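The abstract describes three ingredients: a proximal gradient step on the regularized objective, an averaging step that blends it with a feasibility update, and a monotone descent check that guards the propagation. The paper's exact operators and weights are not given here, so the sketch below is only an illustrative stand-in: it uses a quadratic data-fidelity term with an l1 prior (soft thresholding as the proximal operator), a plain gradient step in place of the learned feasibility module, and a fixed averaging weight `alpha`; the names `pao_sketch`, `grad_fidelity`, and `energy` are our own.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (an illustrative sparsity prior)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pao_sketch(y, grad_fidelity, energy, step=0.1, alpha=0.5, tau=0.05, iters=100):
    """Illustrative proximal-average-style loop with a monotone descent check.

    y             : observation, used as the initial point
    grad_fidelity : gradient of the smooth data-fidelity term
    energy        : scalar objective used for the descent check
    alpha         : averaging weight between the two candidate updates
    """
    x = y.copy()
    for _ in range(iters):
        g = x - step * grad_fidelity(x)
        # Candidate 1: proximal gradient step (fidelity + l1 prior).
        x_prox = soft_threshold(g, step * tau)
        # Candidate 2: a plain gradient step standing in for the
        # feasibility / learned module; averaged with the proximal step.
        x_new = alpha * x_prox + (1.0 - alpha) * g
        # Monotone descent control: accept only energy-decreasing updates,
        # otherwise fall back to the plain proximal gradient step.
        x = x_new if energy(x_new) <= energy(x) else x_prox
    return x
```

For a toy denoising instance, `grad_fidelity(x) = x - y` and `energy(x) = 0.5 * ||x - y||^2 + tau * ||x||_1`; the loop then drives small coefficients toward zero while the descent check keeps the energy non-increasing.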


