Linearly Involved Generalized Moreau Enhanced Models and Their Proximal Splitting Algorithm under Overall Convexity Condition

10/23/2019, by Jiro Abe, et al.

The convex envelopes of the direct discrete measures, for the sparsity of vectors or for the low-rankness of matrices, have been utilized extensively as practical penalties to compute a globally optimal solution of the corresponding regularized least-squares models. Motivated mainly by the ideas in [Zhang'10, Selesnick'17, Yin, Parekh, Selesnick'19] of exploiting nonconvex penalties in regularized least-squares models without losing overall convexity, this paper presents the Linearly involved Generalized Moreau Enhanced (LiGME) model as a unified extension of such uses of nonconvex penalties. The proposed model can admit multiple nonconvex penalties without losing its overall convexity and is thus applicable to much broader scenarios of sparsity- and rank-aware signal processing. Under the general overall-convexity condition of the LiGME model, we also present a novel proximal splitting type algorithm with guaranteed convergence to a globally optimal solution. Numerical experiments on typical examples of sparsity- and rank-aware signal processing demonstrate the effectiveness of the LiGME models and the proposed proximal splitting algorithm.
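The core idea behind Moreau-enhanced penalties can be illustrated in the simplest scalar case: the minimax-concave (MC) penalty of [Zhang'10, Selesnick'17] is nonconvex, yet the overall least-squares cost stays convex under a suitable parameter condition, and its proximity operator reduces to firm thresholding rather than soft thresholding. The sketch below is an illustrative example of that scalar prox, not the LiGME algorithm of this paper; the function name and parameter choices are assumptions for the demo (assuming mu > lam so the overall-convexity condition holds).

```python
import numpy as np

def firm_threshold(x, lam, mu):
    """Firm thresholding: the proximity operator associated with the
    scalar minimax-concave (MC) penalty, assuming mu > lam > 0.

    - |x| <= lam        -> 0 (shrunk to zero, like soft thresholding)
    - lam < |x| <= mu   -> linear ramp between 0 and x
    - |x| > mu          -> x (left unbiased, unlike soft thresholding)
    """
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    return np.where(
        ax <= lam,
        0.0,
        np.where(ax <= mu, np.sign(x) * mu * (ax - lam) / (mu - lam), x),
    )
```

Unlike soft thresholding (the prox of the convex l1 envelope), firm thresholding leaves large-magnitude entries untouched, which is the bias-reduction effect that motivates enhancing convex penalties with nonconvex ones while preserving overall convexity.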


