Fast rates for empirical risk minimization over càdlàg functions with bounded sectional variation norm

07/22/2019
by Aurélien F. Bibaut et al.

Empirical risk minimization over classes of functions that are bounded for some version of the variation norm has a long history, starting with Total Variation Denoising (Rudin et al., 1992), and has been considered by several recent articles, in particular Fang et al., 2019 and van der Laan, 2015. In this article, we consider empirical risk minimization over the class F_d of càdlàg functions over [0,1]^d with bounded sectional variation norm (also called Hardy-Krause variation). We show how a certain representation of functions in F_d allows one to bound the bracketing entropy of sieves of F_d, and therefore to derive rates of convergence in nonparametric function estimation. Specifically, for sieves whose growth is controlled by some rate a_n, we show that the empirical risk minimizer has rate of convergence O_P(n^(-1/3) (log n)^(2(d-1)/3) a_n). Remarkably, the dimension affects the rate in n only through the logarithmic factor, making this method especially appropriate for high-dimensional problems. In particular, we show that in the case of nonparametric regression over sieves of càdlàg functions with bounded sectional variation norm, this upper bound on the rate of convergence holds for least-squares estimators, in the random-design setting with sub-exponential errors.
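To make the dimension dependence of the stated rate concrete, here is a minimal sketch (not from the paper; constants are omitted, so the values are illustrative only) that evaluates the upper bound n^(-1/3) (log n)^(2(d-1)/3) a_n for several dimensions d. The function name `erm_rate_bound` is a hypothetical helper introduced for illustration.

```python
import math

def erm_rate_bound(n, d, a_n=1.0):
    """Rate-of-convergence upper bound from the abstract, up to constants:
    n^(-1/3) * (log n)^(2(d-1)/3) * a_n."""
    return n ** (-1.0 / 3.0) * math.log(n) ** (2.0 * (d - 1) / 3.0) * a_n

# The dimension d enters only through the logarithmic factor,
# while the polynomial part n^(-1/3) is dimension-free.
n = 10_000
for d in (1, 2, 5, 10):
    print(f"d={d:2d}  bound ~ {erm_rate_bound(n, d):.4g}")
```

For d = 1 the logarithmic factor disappears entirely and the bound reduces to the classical cube-root rate n^(-1/3); for larger d only the (log n) exponent grows.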

