A unifying representer theorem for inverse problems and machine learning

03/02/2019
by Michael Unser, et al.

The standard approach for dealing with the ill-posedness of the training problem in machine learning, or of the reconstruction of a signal from a limited number of measurements, is regularization. The method is applicable whenever the problem is formulated as an optimization task. The standard strategy consists of augmenting the original cost functional with an energy term that penalizes solutions with undesirable behavior. The effect of regularization is very well understood when the penalty involves a Hilbertian norm. Another popular configuration is the use of an ℓ_1-norm (or some variant thereof) that favors sparse solutions. In this paper, we propose a higher-level formulation of regularization within the context of Banach spaces. We present a general representer theorem that characterizes the solutions of a remarkably broad class of optimization problems. We then use our theorem to retrieve a number of known results from the literature, such as the celebrated representer theorem of machine learning for RKHS, Tikhonov regularization, representer theorems for sparsity-promoting functionals, and the recovery of spikes, as well as a few new ones.
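To make the contrast drawn in the abstract concrete, here is a minimal, self-contained sketch (not taken from the paper; the toy measurement model, the regularization weight, the ISTA solver, and all variable names are illustrative assumptions). It solves the same ill-posed linear problem once with a Hilbertian (Tikhonov) penalty and once with an ℓ_1 penalty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed measurement model y = A x + noise (illustrative, not from
# the paper): fewer measurements (m) than unknowns (n), so regularization
# is needed to single out a solution.
m, n = 20, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.1  # regularization weight (arbitrary choice for the demo)

# Hilbertian (Tikhonov) penalty lam * ||x||_2^2: the regularized
# least-squares problem has the closed form (A^T A + lam I)^{-1} A^T y.
x_l2 = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# l1 penalty lam * ||x||_1: no closed form, so we run ISTA (iterative
# soft-thresholding), a standard solver for
# min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x_l1 = np.zeros(n)
for _ in range(2000):
    z = x_l1 - step * (A.T @ (A @ x_l1 - y))  # gradient step on the data term
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print("nonzero entries:  Tikhonov =", int(np.sum(np.abs(x_l2) > 1e-6)),
      " l1 =", int(np.sum(np.abs(x_l1) > 1e-6)))
```

Typically, the Tikhonov solution is dense while the ℓ_1 solution concentrates on a few coefficients, which is the sparsity-promoting behavior the abstract refers to.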

Related research

08/08/2019
Sparse ℓ^q-regularization of inverse problems with deep learning
We propose a sparse reconstruction framework for solving inverse problem...

11/02/2018
An L1 Representer Theorem for Multiple-Kernel Regression
The theory of RKHS provides an elegant framework for supervised learning...

02/05/2018
Continuous-Domain Solutions of Linear Inverse Problems with Tikhonov vs. Generalized TV Regularization
We consider linear inverse problems that are formulated in the continuou...

02/01/2022
Iterative regularization for low complexity regularizers
Iterative regularization exploits the implicit bias of an optimization a...

04/29/2020
A novel two-point gradient method for Regularization of inverse problems in Banach spaces
In this paper, we introduce a novel two-point gradient method for solvin...

06/29/2022
From Kernel Methods to Neural Networks: A Unifying Variational Formulation
The minimization of a data-fidelity term and an additive regularization ...

07/19/2023
Weighted inhomogeneous regularization for inverse problems with indirect and incomplete measurement data
Regularization promotes well-posedness in solving an inverse problem wit...
