On Representer Theorems and Convex Regularization

06/26/2018
by Claire Boyer, et al.

We establish a general principle which states that regularizing an inverse problem with a convex function yields solutions that are convex combinations of a small number of atoms. These atoms are identified with the extreme points and elements of the extreme rays of the regularizer's level sets. An extension to a broader class of quasi-convex regularizers is also discussed. As a side result, we characterize the minimizers of the total gradient variation, which was previously an open problem.
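To make the principle concrete, here is a minimal numerical sketch (our own illustration, not code from the paper): for basis pursuit, min ||x||_1 subject to Ax = y, the atoms predicted by such a representer theorem are the signed canonical basis vectors, i.e. the extreme points of the l1 ball, so a solution built from m linear measurements should combine at most a handful of such atoms. The problem sizes, random data, and the LP reformulation below are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged illustration of the representer-theorem principle via basis
# pursuit: min ||x||_1 s.t. Ax = y. Extreme points of the l1 ball are
# signed canonical basis vectors, so a vertex solution of the LP below
# uses at most m atoms (m = number of measurements).
rng = np.random.default_rng(0)
m, n = 5, 30                             # few measurements, many unknowns
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[2, 11, 23]] = [1.5, -2.0, 0.7]   # sparse ground truth (illustrative)
y = A @ x_true

# LP reformulation with variables z = [x, u]: minimize sum(u)
# subject to -u <= x <= u (so u_i = |x_i| at optimum) and Ax = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],    #  x - u <= 0
                 [-np.eye(n), -np.eye(n)]])  # -x - u <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])      # Ax = y
bounds = [(None, None)] * n + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)

x_hat = res.x[:n]
support = np.flatnonzero(np.abs(x_hat) > 1e-8)
print("measurements m:", m)
print("atoms in the minimizer:", support.size)  # at most m for a vertex solution
```

The solver returns a basic (vertex) solution, so the reported support size never exceeds m, in line with the abstract's claim that convex regularization selects a small number of atoms; exact recovery of x_true is not guaranteed at this measurement level and is not the point of the sketch.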

Related research:

12/11/2018 - Convex Regularization and Representer Theorems
We establish a result which states that regularizing an inverse problem ...

10/29/2018 - Dominating Points of Gaussian Extremes
We quantify the large deviations of Gaussian extreme value statistics on...

02/05/2018 - Continuous-Domain Solutions of Linear Inverse Problems with Tikhonov vs. Generalized TV Regularization
We consider linear inverse problems that are formulated in the continuou...

02/22/2016 - Convexification of Learning from Constraints
Regularized empirical risk minimization with constrained labels (in cont...

05/18/2018 - Blended Conditional Gradients: the unconditioning of conditional gradients
We present a blended conditional gradient approach for minimizing a smoo...

01/16/2014 - Automated Search for Impossibility Theorems in Social Choice Theory: Ranking Sets of Objects
We present a method for using standard techniques from satisfiability ch...