Sparse ℓ^q-regularization of inverse problems with deep learning

08/08/2019
by Markus Haltmeier, et al.

We propose a sparse reconstruction framework for solving inverse problems. In contrast to existing sparse reconstruction techniques that are based on linear sparsifying transforms, we train an encoder-decoder network D ∘ E with E acting as a nonlinear sparsifying transform. We minimize a Tikhonov functional that uses a learned regularization term formed by the ℓ^q-norm of the encoder coefficients and a penalty for the distance to the data manifold. For this augmented sparse ℓ^q-approach, we present a full convergence analysis, derive convergence rates and describe a training strategy. As a main ingredient of the analysis, we establish the coercivity of the augmented regularization term.
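A minimal sketch of the augmented Tikhonov functional described above, assuming a forward operator A, data y, encoder E, decoder D, and regularization weights α, β (the exact weighting and choice of norms are assumptions for illustration, not taken from the paper's full text):

\[
  \mathcal{T}_{\alpha,\beta}(x) \;=\; \lVert A x - y \rVert^2 \;+\; \alpha\,\lVert E(x) \rVert_q^q \;+\; \beta\,\lVert (D \circ E)(x) - x \rVert^2 .
\]

Here the ℓ^q-term (0 < q ≤ 1 or q = 1 in typical sparse regularization) promotes sparsity of the encoder coefficients, while the last term penalizes the distance of x to the range of D ∘ E, i.e. to the learned data manifold.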
