Optimal Convex and Nonconvex Regularizers for a Data Source

12/27/2022
by Oscar Leong, et al.

In optimization-based approaches to inverse problems and to statistical estimation, it is common to augment the objective with a regularizer to address challenges associated with ill-posedness. The choice of a suitable regularizer is typically driven by prior domain information and computational considerations. Convex regularizers are attractive as they are endowed with certificates of optimality as well as the toolkit of convex analysis, but exhibit a computational scaling that makes them ill-suited beyond moderate-sized problem instances. On the other hand, nonconvex regularizers can often be deployed at scale, but do not enjoy the certification properties associated with convex regularizers. In this paper, we seek a systematic understanding of the power and the limitations of convex regularization by investigating the following questions: Given a distribution, what are the optimal regularizers, both convex and nonconvex, for data drawn from the distribution? What properties of a data source govern whether it is amenable to convex regularization? We address these questions for the class of continuous and positively homogeneous regularizers, for which convex and nonconvex regularizers correspond, respectively, to convex bodies and star bodies. By leveraging dual Brunn-Minkowski theory, we show that a radial function derived from a data distribution is the key quantity for identifying optimal regularizers and for assessing the amenability of a data source to convex regularization. Using tools such as Γ-convergence, we show that our results are robust in the sense that the optimal regularizers for a sample drawn from a distribution converge to their population counterparts as the sample size grows large. Finally, we give generalization guarantees that recover previous results for polyhedral regularizers (i.e., dictionary learning) and lead to new ones for semidefinite regularizers.
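To fix notation for the geometric correspondence the abstract invokes, the following is a minimal sketch in our own notation. The least-squares fidelity term, the symbols y, A, λ, r, and K, and the specific form of the estimation problem are illustrative assumptions for this sketch, not taken from the paper; the gauge and radial-function definitions are standard.

```latex
% A hedged sketch of the setup: a regularized inverse problem with an
% assumed least-squares fidelity term (the paper's setting is more general).
\[
  \hat{x} \;\in\; \operatorname*{arg\,min}_{x \in \mathbb{R}^d}
  \;\tfrac{1}{2}\,\lVert y - Ax \rVert_2^2 \;+\; \lambda\, r(x).
\]
% A continuous, nonnegative, positively homogeneous regularizer
% ($r(tx) = t\,r(x)$ for $t > 0$) is the gauge of the star body
% $K = \{x : r(x) \le 1\}$:
\[
  r(x) \;=\; \lVert x \rVert_K \;=\; \inf\{\, t > 0 \;:\; x \in tK \,\},
\]
% and $r$ is a convex function exactly when $K$ is a convex body.
% The radial function of $K$,
\[
  \rho_K(u) \;=\; \sup\{\, t \ge 0 \;:\; tu \in K \,\}, \qquad u \in S^{d-1},
\]
% determines the regularizer on the sphere via $r(u) = 1/\rho_K(u)$; a radial
% function derived from the data distribution is the key quantity in the
% paper's characterization of optimal regularizers.
```

For instance, the ℓ1 norm is the gauge of the cross-polytope, an example of the polyhedral case that the abstract links to dictionary learning.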


