Bayesian view on the training of invertible residual networks for solving linear inverse problems

07/19/2023
by Clemens Arndt, et al.

Learning-based methods for inverse problems, which adapt to the data's inherent structure, have become ubiquitous in the last decade. Besides empirical investigations of their often remarkable performance, an increasing number of works address the issue of theoretical guarantees. Recently, [3] exploited invertible residual networks (iResNets) to learn provably convergent regularizations under reasonable assumptions. They enforced these guarantees by approximating the linear forward operator with an iResNet. Supervised training on relevant samples introduces data dependency into the approach. An open question in this context is to what extent the data's inherent structure influences the training outcome, i.e., the learned reconstruction scheme. Here we address this delicate interplay of training design and data dependency from a Bayesian perspective and shed light on its opportunities and limitations. We resolve these limitations by analyzing reconstruction-based training of the inverses of iResNets, where we show that this optimization strategy introduces a level of data dependency that cannot be achieved by approximation training. We further provide and discuss a series of numerical experiments underpinning and extending the theoretical findings.
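
To make the two training paradigms mentioned in the abstract concrete, the following is a minimal sketch (not the authors' code) of an invertible residual block f(x) = x + g(x) with Lip(g) < 1, inverted by Banach fixed-point iteration, together with an approximation-training loss (fit f to the forward operator A) and a reconstruction-training loss (fit the inverse of f to map noisy measurements back to clean samples). All names (IResBlock, A, c, approx_loss, recon_loss) and the toy operator are illustrative assumptions.

```python
import torch
import torch.nn as nn


class IResBlock(nn.Module):
    """Residual block x -> x + c * g(x); with g 1-Lipschitz and c < 1 the block is invertible."""

    def __init__(self, dim, hidden=64, c=0.9):
        super().__init__()
        self.c = c  # contraction factor < 1 enforces Lip(c * g) < 1
        self.g = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(dim, hidden)),
            nn.ReLU(),
            nn.utils.spectral_norm(nn.Linear(hidden, dim)),
        )

    def forward(self, x):
        return x + self.c * self.g(x)

    def inverse(self, y, n_iter=50):
        # Solve y = x + c*g(x) via the fixed-point iteration x <- y - c*g(x),
        # which converges because c*g is a contraction.
        x = y.clone()
        for _ in range(n_iter):
            x = y - self.c * self.g(x)
        return x


dim = 16
A = 0.5 * torch.eye(dim)                           # toy linear forward operator (assumed)
f = IResBlock(dim)
x = torch.randn(32, dim)                           # clean samples
y_noisy = x @ A.T + 0.01 * torch.randn(32, dim)    # noisy measurements

# Approximation training: make f_theta mimic the forward operator A.
approx_loss = ((f(x) - x @ A.T) ** 2).mean()

# Reconstruction training: make the inverse of f_theta map noisy data back to x.
recon_loss = ((f.inverse(y_noisy) - x) ** 2).mean()
```

Either loss can be minimized with a standard optimizer over the parameters of g; the paper's point is that the two objectives induce different degrees of data dependency in the learned reconstruction scheme.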
