Learning Regularization Parameters of Inverse Problems via Deep Neural Networks

04/14/2021
by   Babak Maboudi Afkham, et al.

In this work, we describe a new approach that uses deep neural networks (DNNs) to obtain regularization parameters for solving inverse problems. We consider a supervised learning approach, where a network is trained to approximate the mapping from observation data to regularization parameters. Once the network is trained, regularization parameters for newly obtained data can be computed by efficient forward propagation of the DNN. We show that a wide variety of regularization functionals, forward models, and noise models may be considered. The network-obtained regularization parameters can be computed more efficiently, and may even lead to more accurate solutions, than existing regularization parameter selection methods. We emphasize that the key advantage of using DNNs to learn regularization parameters, compared to previous works on learning via optimal experimental design or empirical Bayes risk minimization, is greater generalizability: rather than computing one set of parameters that is optimal with respect to one particular design objective, DNN-computed regularization parameters are tailored to the specific features or properties of the newly observed data. Thus, our approach may better handle cases where the observation is not a close representation of the training set. Furthermore, we avoid the need for expensive and challenging bilevel optimization methods as utilized in other existing training approaches. Numerical results demonstrate the potential of using DNNs to learn regularization parameters.
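The supervised setup described above can be illustrated with a minimal sketch. This is not the paper's architecture: the forward model `A`, the features of the data, and the regressor standing in for the DNN are all assumptions for illustration. The idea is only to show the two phases — building training pairs (observation, optimal parameter) for a Tikhonov-regularized problem, then fitting a cheap model so that new data maps to a regularization parameter in a single forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed forward model for illustration: a Gaussian smoothing operator A,
# a classic mildly ill-posed problem.
n = 32
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

def tikhonov(b, lam):
    """Tikhonov solution x(lam) = argmin ||A x - b||^2 + lam ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def optimal_lambda(b, x_true, grid):
    """Training label: grid-search the lambda minimizing reconstruction error
    (possible only during training, where x_true is known)."""
    errs = [np.linalg.norm(tikhonov(b, lam) - x_true) for lam in grid]
    return grid[int(np.argmin(errs))]

grid = np.logspace(-6, 0, 25)

# Phase 1: build a training set of (features of b, optimal lambda) pairs
# over varying noise levels.
feats, labels = [], []
for _ in range(200):
    x_true = rng.normal(size=n)
    noise_level = rng.uniform(0.01, 0.2)
    b = A @ x_true + noise_level * rng.normal(size=n)
    feats.append([1.0, np.linalg.norm(b), np.linalg.norm(A.T @ b)])
    labels.append(np.log10(optimal_lambda(b, x_true, grid)))
F, y = np.array(feats), np.array(labels)

# Phase 2: fit the data-to-parameter map. A linear least-squares regression
# on hand-picked features stands in for the DNN here.
w, *_ = np.linalg.lstsq(F, y, rcond=None)

def predict_lambda(b):
    """Inference: one cheap forward pass maps new data b to a parameter,
    with no bilevel optimization and no access to the true solution."""
    f = np.array([1.0, np.linalg.norm(b), np.linalg.norm(A.T @ b)])
    return 10.0 ** (f @ w)
```

Replacing the linear regressor with a trained DNN keeps the inference path identical: newly observed data is pushed through the network once, and the predicted parameter is handed to the downstream regularized solver.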


