Hierarchical Optimization-Derived Learning

02/11/2023
by Risheng Liu, et al.

In recent years, by utilizing optimization techniques to formulate the propagation of deep models, a variety of so-called Optimization-Derived Learning (ODL) approaches have been proposed to address diverse learning and vision tasks. Although they have achieved relatively satisfying practical performance, fundamental issues remain in existing ODL methods. In particular, current ODL methods tend to treat model construction and learning as two separate phases, and thus fail to formulate the underlying coupling and dependency between them. In this work, we first establish a new framework, named Hierarchical ODL (HODL), to simultaneously investigate the intrinsic behaviors of optimization-derived model construction and its corresponding learning process. We then rigorously prove the joint convergence of these two sub-tasks from the perspectives of both approximation quality and stationarity analysis. To the best of our knowledge, this is the first theoretical guarantee for these two coupled ODL components: optimization and learning. We further demonstrate the flexibility of our framework by applying HODL to challenging learning tasks that have not been properly addressed by existing ODL methods. Finally, we conduct extensive experiments on both synthetic data and real applications in vision and other learning tasks to verify the theoretical properties and practical performance of HODL in various application scenarios.
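To make the coupled structure described above more concrete, the following is a minimal sketch of the general ODL idea, not the paper's HODL algorithm: the lower level constructs a model by unrolling ISTA iterations for a sparse-coding instance, and the upper level learns the unrolled hyperparameters (step size and threshold) on a training objective. All function names, the synthetic data, and the finite-difference outer update are hypothetical illustrations and stand in for the paper's hierarchical analysis.

```python
# Hypothetical sketch of optimization-derived learning (ODL), not the authors' code.
# Lower level: model construction by unrolling an optimization scheme (ISTA).
# Upper level: learning the unrolled hyperparameters against a task objective.
import numpy as np

rng = np.random.default_rng(0)


def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)


def unrolled_ista(A, y, step, tau, num_layers=20):
    """Lower level: unrolled ISTA 'layers' for min_x 0.5*||Ax - y||^2 + tau*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * tau)
    return x


def upper_loss(params, A, y, x_true):
    """Upper level: learning objective evaluated on the unrolled model's output."""
    step, tau = params
    x_hat = unrolled_ista(A, y, step, tau)
    return 0.5 * np.sum((x_hat - x_true) ** 2)


# Synthetic sparse-recovery instance (illustrative only).
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[rng.choice(60, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(30)

# Coupled training loop: a crude finite-difference gradient of the upper-level
# loss is propagated through the lower-level unrolled iterations.
params = np.array([1.0 / np.linalg.norm(A, 2) ** 2, 0.1])  # (step, tau)
for it in range(50):
    grad = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params)
        e[i] = 1e-6
        grad[i] = (upper_loss(params + e, A, y, x_true)
                   - upper_loss(params - e, A, y, x_true)) / 2e-6
    params -= 0.01 * grad / (np.linalg.norm(grad) + 1e-12)  # normalized outer step
    params = np.maximum(params, 1e-6)  # keep step size and threshold positive

print("learned (step, tau):", params, "upper loss:", upper_loss(params, A, y, x_true))
```

The point of the sketch is the dependency structure HODL formalizes: the upper-level objective is only defined through the output of the lower-level optimization-derived model, so the two sub-tasks cannot be analyzed in isolation.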
