On Representer Theorems and Convex Regularization

06/26/2018
by Claire Boyer, et al.

We establish a general principle which states that regularizing an inverse problem with a convex function yields solutions that are convex combinations of a small number of atoms. These atoms are identified with the extreme points and elements of the extreme rays of the regularizer's level sets. An extension to a broader class of quasi-convex regularizers is also discussed. As a side result, we characterize the minimizers of the total gradient variation, which had remained an open problem.
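A minimal numerical sketch of the principle in a familiar special case (not the paper's code): for the l1 regularizer, the extreme points of the unit ball are the signed canonical vectors ±e_i, so a solution of a problem with m linear measurements is expected to be a combination of at most m such atoms, i.e. to have at most m nonzero entries. The data, the regularization weight, and the ISTA solver below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 10                      # 3 linear measurements, 10 unknowns
A = rng.standard_normal((m, n))   # generic measurement operator
y = rng.standard_normal(m)
lam = 0.1                         # illustrative regularization weight

def soft(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Proximal gradient (ISTA) for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1
L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the smooth part
x = np.zeros(n)
for _ in range(5000):
    x = soft(x - A.T @ (A @ x - y) / L, lam / L)

support = np.flatnonzero(np.abs(x) > 1e-8)
print(len(support))               # number of active atoms, at most m here
```

For a generic Gaussian matrix the lasso solution is unique and supported on at most m atoms, matching the representer-theorem bound for the l1 ball's extreme points.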
