Learning Cluster Structured Sparsity by Reweighting

10/11/2019
by Yulun Jiang, et al.

Recently, the paradigm of unfolding iterative algorithms into finite-length feed-forward neural networks has achieved great success in the area of sparse recovery. Benefiting from available training data, the learned networks have achieved state-of-the-art performance in terms of both speed and accuracy. However, the structure behind sparsity, which imposes constraints on the support of sparse signals, is often essential prior knowledge but is seldom considered in existing networks. In this paper, we aim to bridge this gap. Specifically, exploiting the iterative reweighted ℓ_1 minimization (IRL1) algorithm, we propose to learn cluster structured sparsity (CSS) by reweighting adaptively. In particular, we first unfold the Reweighted Iterative Shrinkage Algorithm (RwISTA) into an end-to-end trainable deep architecture termed RW-LISTA. Then, instead of element-wise reweighting, global and local reweighting schemes are proposed for cluster structured sparse learning. Numerical experiments further show the superiority of our algorithm over both classical algorithms and learning-based networks on different tasks.
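As a rough illustration of what such an unfolded, reweighted architecture can look like, below is a minimal PyTorch sketch of stacked reweighted shrinkage layers. It assumes a standard LISTA-style parameterization and an IRL1-flavoured elementwise 1/(|x| + ε) reweighting of the threshold; the class names (RwISTALayer, RWLISTA), the dimensions, and the weight normalization step are illustrative assumptions, not the paper's exact global/local reweighting design.

```python
# Sketch of an unfolded reweighted-ISTA network (RW-LISTA-style).
# Assumptions: LISTA parameterization x <- soft(W1 y + W2 x, theta), with an
# IRL1-style per-entry reweighting w = 1 / (|x| + eps) applied to the threshold.
import torch
import torch.nn as nn


def soft_threshold(x, theta):
    # Elementwise soft-thresholding: sign(x) * max(|x| - theta, 0)
    return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)


class RwISTALayer(nn.Module):
    """One unfolded iteration: x <- soft(W1 y + W2 x, w * theta)."""

    def __init__(self, m, n):
        super().__init__()
        self.W1 = nn.Parameter(0.01 * torch.randn(n, m))  # plays the role of A^T / L
        self.W2 = nn.Parameter(torch.eye(n))              # plays the role of I - A^T A / L
        self.theta = nn.Parameter(torch.tensor(0.1))      # base threshold (lambda / L)

    def forward(self, x, y, eps=1e-2):
        # IRL1-style reweighting: small entries receive larger thresholds,
        # large entries are penalized less, sharpening the support estimate.
        w = 1.0 / (x.abs() + eps)
        w = w / w.mean(dim=-1, keepdim=True)  # normalize weights per sample
        z = y @ self.W1.t() + x @ self.W2.t()
        return soft_threshold(z, self.theta * w)


class RWLISTA(nn.Module):
    """K stacked reweighted layers forming a feed-forward sparse-recovery network."""

    def __init__(self, m, n, K=5):
        super().__init__()
        self.layers = nn.ModuleList([RwISTALayer(m, n) for _ in range(K)])
        self.n = n

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.n, device=y.device)
        for layer in self.layers:
            x = layer(x, y)
        return x


# Usage: recover n-dimensional sparse codes from m-dimensional measurements y.
net = RWLISTA(m=64, n=256, K=5)
y = torch.randn(8, 64)
x_hat = net(y)  # shape (8, 256)
```

In this sketch the reweighting is purely elementwise; the global and local reweighting of the paper would instead couple the weights across entries so that whole clusters of coefficients are kept or suppressed together.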

Related research

12/06/2022 · Deep Neural Networks Based on Iterative Thresholding and Projection Algorithms for Sparse LQR Control Design
In this paper, we consider an LQR design problem for distributed control...

07/18/2018 · Learning Hybrid Sparsity Prior for Image Restoration: Where Deep Learning Meets Sparse Coding
State-of-the-art approaches toward image restoration can be classified i...

05/27/2019 · Learning step sizes for unfolded sparse coding
Sparse coding is typically solved by iterative optimization techniques, ...

10/24/2018 · Nonconvex and Nonsmooth Sparse Optimization via Adaptively Iterative Reweighted Methods
We present a general formulation of nonconvex and nonsmooth sparse optim...

01/07/2019 · GASL: Guided Attention for Sparsity Learning in Deep Neural Networks
The main goal of network pruning is imposing sparsity on the neural netw...

10/03/2022 · Sparsity by Redundancy: Solving L_1 with a Simple Reparametrization
We identify and prove a general principle: L_1 sparsity can be achieved ...

09/01/2015 · Learning Deep ℓ_0 Encoders
Despite its nonconvex nature, ℓ_0 sparse approximation is desirable in m...
