Fast Rates for General Unbounded Loss Functions: from ERM to Generalized Bayes

05/01/2016
by   Peter D. Grünwald, et al.

We present new excess risk bounds for general unbounded loss functions, including log loss and squared loss, where the distribution of the losses may be heavy-tailed. The bounds hold for general estimators, but they are optimized when applied to η-generalized Bayesian, MDL, and ERM estimators. When applied with log loss, the bounds imply convergence rates for generalized Bayesian inference under misspecification, in terms of a generalization of the Hellinger metric, as long as the learning rate η is set correctly. For general loss functions, our bounds rely on two separate conditions: the v-GRIP (generalized reversed information projection) conditions, which control the lower tail of the excess loss, and the newly introduced witness condition, which controls the upper tail. The parameter v in the v-GRIP conditions determines the achievable rate and is akin to the exponent in the well-known Tsybakov margin condition and the Bernstein condition for bounded losses, both of which the v-GRIP conditions generalize; a favorable v combined with small model complexity leads to Õ(1/n) rates. The witness condition lets us connect the excess risk to an 'annealed' version of it, thereby generalizing several previous results that connect Hellinger and Rényi divergence to KL divergence.
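For orientation, the η-generalized Bayesian (Gibbs) posterior that the bounds are optimized for reweights a prior π over the model class by the exponentiated, η-scaled cumulative loss. A minimal rendering of the standard form (the paper's own definition may differ in details such as conditioning and normalization):

\[
\pi_n(f) \;\propto\; \pi(f)\, \exp\!\Big(-\eta \sum_{i=1}^{n} \ell_f(Z_i)\Big),
\]

where η = 1 with log loss recovers standard Bayes, and over a finite class the η → ∞ limit concentrates on ERM. Likewise, the 'annealed' excess risk that the witness condition relates to the ordinary excess risk takes, in the form commonly used in this line of work, the shape

\[
-\frac{1}{\eta}\, \log\, \mathbb{E}_{Z \sim P}\!\left[ e^{-\eta\,(\ell_f(Z) - \ell_{f^*}(Z))} \right],
\]

which for log loss yields Rényi-divergence-type quantities and, at η = 1/2, a squared-Hellinger-type distance; exact constants should be checked against the paper.

The sketch below computes such a Gibbs posterior over a finite model class with squared loss on heavy-tailed data. The model class, prior, data source, and the value η = 0.5 are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch of an eta-generalized Bayesian (Gibbs) posterior over a
# finite model class. All modeling choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Finite model class: candidate means for squared loss on real-valued data.
candidates = np.linspace(-2.0, 2.0, 41)               # models f, indexed by mean
prior = np.full(candidates.size, 1.0 / candidates.size)

# Heavy-tailed data source (Student t with 3 degrees of freedom).
data = rng.standard_t(df=3, size=200)

def squared_loss(f, z):
    return (z - f) ** 2

eta = 0.5  # learning rate; the paper's bounds require eta to be set correctly

# Gibbs posterior: pi_n(f) proportional to pi(f) * exp(-eta * sum_i loss_f(Z_i))
cum_loss = np.array([squared_loss(f, data).sum() for f in candidates])
log_post = np.log(prior) - eta * cum_loss
log_post -= log_post.max()                            # stabilize the exponent
posterior = np.exp(log_post)
posterior /= posterior.sum()

print("posterior mean:", float(posterior @ candidates))
```

For small η the posterior stays close to the prior and is more robust to heavy-tailed losses; for large η it concentrates on the empirical risk minimizer. That trade-off is exactly what "setting the learning rate η correctly" refers to.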

research · 10/21/2017
A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity
We present a novel notion of complexity that interpolates between and ge...

research · 06/12/2020
PAC-Bayes unleashed: generalisation bounds with unbounded losses
We present new PAC-Bayesian generalisation bounds for learning problems ...

research · 09/29/2016
Fast learning rates with heavy-tailed losses
We study fast learning rates when the losses are not necessarily bounded...

research · 06/16/2021
Beyond Tikhonov: Faster Learning with Self-Concordant Losses via Iterative Regularization
The theory of spectral filtering is a remarkable tool to understand the ...

research · 05/11/2021
Spectral risk-based learning using unbounded losses
In this work, we consider the setting of learning problems under a wide ...

research · 02/27/2017
Uniform Deviation Bounds for Unbounded Loss Functions like k-Means
Uniform deviation bounds limit the difference between a model's expected...

research · 10/22/2020
Nonvacuous Loss Bounds with Fast Rates for Neural Networks via Conditional Information Measures
We present a framework to derive bounds on the test loss of randomized l...
