Sparse ℓ^q-regularization of inverse problems with deep learning
We propose a sparse reconstruction framework for solving inverse problems. In contrast to existing sparse reconstruction techniques that are based on linear sparsifying transforms, we train an encoder-decoder network D ∘ E with E acting as a nonlinear sparsifying transform. We minimize a Tikhonov functional that uses a learned regularization term formed by the ℓ^q-norm of the encoder coefficients and a penalty for the distance to the data manifold. For this augmented sparse ℓ^q-approach, we present a full convergence analysis, derive convergence rates, and describe a training strategy. As a main ingredient of the analysis, we establish the coercivity of the augmented regularization term.
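As a rough illustration of the functional being minimized, the sketch below evaluates a Tikhonov functional of the form T(x) = ‖Ax − y‖² + α(‖E(x)‖_q^q + β‖D(E(x)) − x‖²) on a toy problem. The encoder E and decoder D here are arbitrary stand-ins (a tanh layer and a linear map), not the trained networks from the paper, and the weights α, β, q are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 8, 6, 12                     # signal dim, data dim, code dim
A = rng.standard_normal((m, n))        # forward operator (toy)
W_e = rng.standard_normal((k, n))      # encoder weights (stand-in)
W_d = rng.standard_normal((n, k))      # decoder weights (stand-in)

def E(x):
    # nonlinear sparsifying encoder (illustrative stand-in for the trained E)
    return np.tanh(W_e @ x)

def D(z):
    # decoder (illustrative stand-in for the trained D)
    return W_d @ z

def tikhonov(x, y, alpha=0.1, beta=1.0, q=0.5):
    data_fit = np.sum((A @ x - y) ** 2)        # discrepancy term ||Ax - y||^2
    sparsity = np.sum(np.abs(E(x)) ** q)       # l^q penalty on encoder coefficients
    manifold = np.sum((D(E(x)) - x) ** 2)      # distance-to-data-manifold penalty
    return data_fit + alpha * (sparsity + beta * manifold)

x_true = rng.standard_normal(n)
y = A @ x_true                                  # noise-free data
print(tikhonov(x_true, y))                      # residual from penalty terms only
```

At the exact solution the discrepancy term vanishes, so the remaining value comes entirely from the learned regularizer; a reconstruction algorithm would minimize T over x, trading data fit against sparsity of the code and proximity to the decoder range.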