Learning the optimal regularizer for inverse problems

06/11/2021
by Giovanni S. Alberti, et al.

In this work, we consider the linear inverse problem y = Ax + ϵ, where A: X → Y is a known linear operator between the separable Hilbert spaces X and Y, x is a random variable in X, and ϵ is a zero-mean random process in Y. This setting covers several inverse problems in imaging, including denoising, deblurring, and X-ray tomography. Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori but learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer with respect to the mean squared error. We find that it is completely independent of the forward operator A and depends only on the mean and covariance of x. Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both x and y, and one unsupervised, based only on samples of x. In both cases, we prove generalization bounds under weak assumptions on the distributions of x and ϵ, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
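The unsupervised setting described above can be illustrated with a small finite-dimensional sketch: estimate the mean and covariance of x from samples of x alone, and plug them into a generalized Tikhonov reconstruction. All names, dimensions, and the particular forward operator below are hypothetical illustrations, not the paper's experiments; the noise is assumed white Gaussian with known variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small example: a smoothing-type forward operator on R^d.
d, n_train, sigma = 20, 500, 0.1
A = np.tril(np.ones((d, d))) / d  # known linear operator (illustrative choice)

# Ground-truth distribution of x (unknown to the learner).
mu_true = np.linspace(0.0, 1.0, d)
L = rng.standard_normal((d, d)) / np.sqrt(d)
Sigma_true = L @ L.T + 0.1 * np.eye(d)

# Unsupervised framework: learn mean and covariance from samples of x only.
X = rng.multivariate_normal(mu_true, Sigma_true, size=n_train)
mu_hat = X.mean(axis=0)
Sigma_hat = np.cov(X, rowvar=False)

# One noisy observation y = Ax + eps.
x = rng.multivariate_normal(mu_true, Sigma_true)
y = A @ x + sigma * rng.standard_normal(d)

# Generalized Tikhonov reconstruction with the learned regularizer:
#   x_hat = argmin_z ||Az - y||^2 / sigma^2 + (z - mu_hat)^T Sigma_hat^{-1} (z - mu_hat),
# whose closed form is solved below via the normal equations.
H = A.T @ A / sigma**2 + np.linalg.inv(Sigma_hat)
x_hat = mu_hat + np.linalg.solve(H, A.T @ (y - A @ mu_hat) / sigma**2)
```

Note that the regularizer here is built entirely from the learned mean and covariance of x, mirroring the paper's result that the optimal generalized Tikhonov regularizer does not involve the forward operator A.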


