Learning the optimal regularizer for inverse problems
In this work, we consider the linear inverse problem y = Ax + ϵ, where A : X → Y is a known linear operator between the separable Hilbert spaces X and Y, x is a random variable in X and ϵ is a zero-mean random process in Y. This setting covers several inverse problems in imaging, including denoising, deblurring, and X-ray tomography. Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori but learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer with respect to the mean squared error. We find that it is completely independent of the forward operator A and depends only on the mean and covariance of x. Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both x and y, and one unsupervised, based only on samples of x. In both cases, we prove generalization bounds under some weak assumptions on the distribution of x and ϵ, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
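To make the unsupervised setting concrete, here is a minimal NumPy sketch of the idea in finite dimensions: the regularizer is built only from the empirical mean and covariance of samples of x (no access to A during learning), and the resulting generalized Tikhonov reconstruction coincides, for Gaussian data, with the affine MMSE estimator. The forward operator, dimensions, and noise level below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 10000  # dimension and number of training samples of x (illustrative)

# Hypothetical forward operator A and noise level sigma (assumptions for the demo).
A = rng.standard_normal((d, d))
sigma = 0.1

# Training samples of x. Here x is Gaussian; the paper's bounds also cover
# sub-Gaussian distributions.
mu_true = rng.standard_normal(d)
L = rng.standard_normal((d, d))
Sigma_true = L @ L.T + np.eye(d)
X = rng.multivariate_normal(mu_true, Sigma_true, size=n)

# Unsupervised learning of the regularizer: only the empirical mean and
# covariance of x are needed -- note that A plays no role in this step.
mu = X.mean(axis=0)
Sigma = np.cov(X, rowvar=False)

# Generalized Tikhonov reconstruction for a new observation y = A x + eps:
#   argmin_x (1/sigma^2) ||A x - y||^2 + ||Sigma^{-1/2} (x - mu)||^2,
# solved in closed form via the normal equations.
x_new = rng.multivariate_normal(mu_true, Sigma_true)
y = A @ x_new + sigma * rng.standard_normal(d)

Sigma_inv = np.linalg.inv(Sigma)
H = A.T @ A / sigma**2 + Sigma_inv
x_hat = np.linalg.solve(H, A.T @ y / sigma**2 + Sigma_inv @ mu)

# Sanity check: by the matrix inversion lemma, this Tikhonov solution equals
# the affine MMSE (LMMSE) estimator built from the same mean and covariance.
K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T + sigma**2 * np.eye(d))
x_lmmse = mu + K @ (y - A @ mu)
print(np.allclose(x_hat, x_lmmse))  # → True
```

The sanity check at the end illustrates why the optimal regularizer can be independent of A: the operator enters only the data-fidelity term, while the learned quadratic penalty encodes the statistics of x alone.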