WARPd: A linearly convergent first-order method for inverse problems with approximate sharpness conditions

10/24/2021
by Matthew J. Colbrook, et al.

Reconstruction of signals from undersampled and noisy measurements is a topic of considerable interest. Sharpness conditions directly control the recovery performance of restart schemes for first-order methods without the need for restrictive assumptions such as strong convexity. However, they are challenging to apply in the presence of noise or approximate model classes (e.g., approximate sparsity). We provide a first-order method, Weighted, Accelerated and Restarted Primal-dual (WARPd), based on primal-dual iterations and a novel restart-reweight scheme. Under a generic approximate sharpness condition, WARPd achieves stable linear convergence to the desired vector. Many problems of interest fit into this framework. For example, we analyze sparse recovery in compressed sensing, low-rank matrix recovery, matrix completion, TV regularization, minimization of ||Bx||_{l^1} under constraints (l^1-analysis problems for general B), and mixed regularization problems. We show how several quantities controlling recovery performance also provide explicit approximate sharpness constants. Numerical experiments show that WARPd compares favorably with specialized state-of-the-art methods and is ideally suited for solving large-scale problems. We also present a noise-blind variant based on the Square-Root LASSO decoder. Finally, we show how to unroll WARPd as neural networks. This approximation theory result provides lower bounds for stable and accurate neural networks for inverse problems and sheds light on architecture choices. Code and a gallery of examples are made available online as a MATLAB package.



Related research:

- 06/08/2021: Proof methods for robust low-rank matrix recovery. "Low-rank matrix recovery problems arise naturally as mathematical formul..."
- 08/05/2023: Approximating Positive Homogeneous Functions with Scale Invariant Neural Networks. "We investigate to what extent it is possible to solve linear inverse pro..."
- 03/02/2022: Stable, accurate and efficient deep neural networks for inverse problems with analysis-sparse models. "Solving inverse problems is a fundamental component of science, engineer..."
- 02/28/2019: On the convex geometry of blind deconvolution and matrix completion. "Low-rank matrix recovery from structured measurements has been a topic o..."
- 02/26/2019: GAN-based Projector for Faster Recovery in Compressed Sensing with Convergence Guarantees. "A Generative Adversarial Network (GAN) with generator G trained to model..."
- 05/30/2016: Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems. "Solving inverse problems with iterative algorithms is popular, especiall..."
- 05/27/2022: Dual Convexified Convolutional Neural Networks. "We propose the framework of dual convexified convolutional neural networ..."
