Linear convergence and support recovery for non-convex multi-penalty regularization

08/07/2019
by Zeljko Kereta, et al.

We provide a comprehensive convergence study of the iterative multi-penalty q-thresholding algorithm, with 0<q≤ 1, for recovery of a signal mixture. Leveraging recent results in optimisation, signal processing, and regularization theory, we present novel results on linear convergence of the iterates to local minimizers of the studied non-convex multi-penalty functionals. We also explicitly compute the convergence constant and establish its dependence on the measurement matrix and the parameters of the problem. Finally, we present extensive numerical results that confirm the theoretical findings and compare the efficiency of the iterative multi-penalty thresholding algorithm with its single-penalty counterpart.
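The abstract does not spell out the functional or the iteration, so the following is only a minimal sketch under common assumptions: the multi-penalty functional is taken to be 1/2‖A(u+v) − y‖² + α‖u‖₁ + (β/2)‖v‖₂² (i.e. the q = 1 case of the q-thresholding scheme), and the iteration alternates a soft-thresholded gradient step in u with a proximal step in v. The function names and step-size choice are illustrative, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal map of t*||.||_1 (soft-thresholding); the q = 1 case.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def multi_penalty_ist(A, y, alpha, beta, n_iter=500):
    """Alternating proximal-gradient sketch for the (assumed) functional
       1/2 ||A(u + v) - y||^2 + alpha ||u||_1 + (beta/2) ||v||_2^2."""
    m, n = A.shape
    u = np.zeros(n)
    v = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        # Thresholded gradient step in u (v held fixed).
        g = A.T @ (A @ (u + v) - y)
        u = soft_threshold(u - g / L, alpha / L)
        # Proximal step in v for the quadratic penalty (u held fixed).
        g = A.T @ (A @ (u + v) - y)
        v = (v - g / L) / (1.0 + beta / L)
    return u, v
```

On a synthetic mixture y = A(u + v) with a sparse u and a small dense v, the iteration drives the objective below its value at zero and returns a sparse u-component, in line with the support-recovery behaviour the paper analyses.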


research · 10/13/2021 · The springback penalty for robust signal recovery
We propose a new penalty, named as the springback penalty, for construct...

research · 10/11/2017 · Adaptive multi-penalty regularization based on a generalized Lasso path
For many algorithms, parameter tuning remains a challenging and critical...

research · 06/23/2016 · Non-convex regularization in remote sensing
In this paper, we study the effect of different regularizers and their i...

research · 07/30/2020 · A projected gradient method for αℓ_1-βℓ_2 sparsity regularization
The non-convex αℓ_1-βℓ_2 (α≥β≥0) regularization has attracted attent...

research · 05/19/2018 · M-estimation with the Trimmed l1 Penalty
We study high-dimensional M-estimators with the trimmed ℓ_1 penalty. Whi...

research · 09/11/2019 · Regularized deep learning with a non-convex penalty
Regularization methods are often employed in deep learning neural networ...

research · 02/23/2017 · Horseshoe Regularization for Feature Subset Selection
Feature subset selection arises in many high-dimensional applications of...
