Algorithm Unfolding for Block-sparse and MMV Problems with Reduced Training Overhead

09/28/2022
by   Jan Christian Hauffen, et al.

In this paper we consider algorithm unfolding for the Multiple Measurement Vector (MMV) problem in the case where only a few training samples are available. Algorithm unfolding has been shown to empirically speed up, in a data-driven way, the convergence of various classical iterative algorithms, but for supervised learning it is important to achieve this with minimal training data. To this end, we consider the learned block iterative shrinkage thresholding algorithm (LBISTA) under different training strategies. To approach almost data-free optimization at minimal training overhead, the number of trainable parameters for algorithm unfolding has to be substantially reduced. We therefore explicitly propose a reduced-size network architecture based on the Kronecker structure imposed by the MMV observation model and present the corresponding theory in this context. To ensure proper generalization, we then extend the analytic weight approach of Liu et al. to LBISTA and the MMV setting. Rigorous theoretical guarantees and convergence results are stated for this case. We show that the network weights can be computed by solving an explicit equation at the reduced MMV dimensions, which also admits a closed-form solution. Towards more practical problems, we then consider convolutional observation models and show that the proposed architecture and the analytical weight computation can be further simplified, thus opening new directions for convolutional neural networks. Finally, we evaluate the unfolded algorithms in numerical experiments and discuss connections to other sparse recovery algorithms.
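To make the setting concrete, the iteration that LBISTA unfolds is the classical block ISTA for the row-sparse MMV problem. The sketch below is an illustrative, non-learned baseline assuming a standard formulation min_X 0.5·||Y − AX||_F² + λ·Σᵢ||X[i,:]||₂; the function names (`block_soft_threshold`, `bista`) and all parameter choices are our own for illustration, not taken from the paper:

```python
import numpy as np

def block_soft_threshold(X, tau):
    # Row-wise (block) soft-thresholding: shrink each row's l2 norm by tau,
    # zeroing rows whose norm falls below tau. This is the proximal operator
    # of the row-sparsity penalty tau * sum_i ||X[i, :]||_2.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * X

def bista(A, Y, lam, n_iter=100):
    # Classical (non-learned) block ISTA for the MMV problem
    #   min_X 0.5 * ||Y - A X||_F^2 + lam * sum_i ||X[i, :]||_2.
    # Unfolding a fixed number of these iterations and making the step
    # size / threshold (or weight matrices) trainable yields an
    # LBISTA-style network.
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        X = block_soft_threshold(X + A.T @ (Y - A @ X) / L, lam / L)
    return X
```

In the unfolded network each iteration becomes a layer, and the quantities fixed here analytically (step size 1/L, threshold λ/L, the matrices multiplying the residual) become the trainable parameters whose number the paper's Kronecker-structured architecture reduces.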


