
Analyzing the discrepancy principle for kernelized spectral filter learning algorithms

by Alain Celisse et al.

We investigate the construction of early stopping rules in the nonparametric regression problem where iterative learning algorithms are used and the optimal iteration number is unknown. More precisely, we study the discrepancy principle, as well as modifications based on smoothed residuals, for kernelized spectral filter learning algorithms including gradient descent. Our main theoretical bounds are oracle inequalities established for the empirical estimation error (fixed design), and for the prediction error (random design). From these finite-sample bounds it follows that the classical discrepancy principle is statistically adaptive for slow rates occurring in the hard learning scenario, while the smoothed discrepancy principles are adaptive over ranges of faster rates (resp. higher smoothness parameters). Our approach relies on deviation inequalities for the stopping rules in the fixed design setting, combined with change-of-norm arguments to deal with the random design setting.
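To make the idea concrete, here is a minimal sketch of the classical discrepancy principle for one spectral filter method, kernel gradient descent: iterate until the empirical residual norm falls below a multiple of the noise level. The function name `discrepancy_stop`, the step-size choice, and the constant `kappa` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def discrepancy_stop(K, y, sigma, kappa=1.0, max_iter=2000):
    """Kernel gradient descent stopped by the discrepancy principle.

    Runs gradient descent on the least-squares objective
    (1/2n) * ||y - K @ alpha||^2 and stops at the first iteration t
    where the squared empirical residual norm
    (1/n) * ||y - K @ alpha_t||^2 drops below kappa * sigma**2,
    i.e. once the fit is indistinguishable from the noise level.
    (kappa and the step size are illustrative choices.)
    """
    n = len(y)
    # Step size 1/lambda_max(K) in alpha-space keeps the residual
    # iteration a contraction (eigvalsh is ascending, so [-1] is largest).
    eta = n / np.linalg.eigvalsh(K)[-1]
    alpha = np.zeros(n)
    for t in range(1, max_iter + 1):
        resid = y - K @ alpha
        if np.mean(resid ** 2) <= kappa * sigma ** 2:
            return alpha, t  # discrepancy principle triggers: stop here
        alpha = alpha + (eta / n) * resid  # gradient step in the RKHS
    return alpha, max_iter
```

Note that the rule needs the noise level `sigma` (or an estimate of it) as input; the paper's smoothed variants replace the plain residual with a smoothed one to gain adaptivity over faster rates.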



