PAC-Bayes under potentially heavy tails

05/20/2019
by Matthew J. Holland, et al.

We derive PAC-Bayesian learning guarantees for heavy-tailed losses, and demonstrate that the resulting optimal Gibbs posterior enjoys much stronger guarantees than are available for existing randomized learning algorithms. Our core technique uses PAC-Bayesian inequalities to derive a robust risk estimator that is, by design, easy to compute. In particular, assuming only that the loss distribution has finite variance, the learning algorithm derived from this estimator enjoys nearly sub-Gaussian statistical error.
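The key ingredient described above is a robust, easy-to-compute risk estimator that remains well behaved under heavy-tailed losses, assuming only finite variance. Below is a minimal Python sketch of a Catoni-style soft-truncated mean, which is the general flavour of estimator involved; the influence function, the scale choice, and the `variance_bound` and `delta` parameters are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def catoni_psi(u):
    # Soft truncation: close to the identity for small |u|, but growing only
    # logarithmically for large |u|, so a few extreme losses cannot dominate.
    return np.sign(u) * np.log1p(np.abs(u) + 0.5 * u ** 2)

def robust_risk_estimate(losses, variance_bound, delta=0.05):
    # Soft-truncated empirical mean of the observed losses.
    # `variance_bound` is an assumed upper bound on the loss variance and
    # `delta` a confidence level; the scale `s` is a standard bias/deviation
    # trade-off choice, used here purely for illustration.
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    s = np.sqrt(n * variance_bound / (2.0 * np.log(1.0 / delta)))
    return (s / n) * np.sum(catoni_psi(losses / s))

# Example: heavy-tailed (but finite-variance) losses, where the naive
# empirical mean is easily distorted by a handful of extreme observations.
rng = np.random.default_rng(0)
losses = rng.pareto(3.0, size=2000)  # Pareto tail with finite variance
print("naive mean: ", losses.mean())
print("robust mean:", robust_risk_estimate(losses, variance_bound=1.0))
```

Plugged into an empirical risk (or Gibbs posterior) objective in place of the sample mean, an estimator of this type is what yields the nearly sub-Gaussian error guarantees the abstract refers to.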


Related research

Learning with CVaR-based feedback under potentially heavy tails (06/03/2020)
We study learning algorithms that seek to minimize the conditional value...

A Note on the Efficient Evaluation of PAC-Bayes Bounds (09/12/2022)
When utilising PAC-Bayes theory for risk certification, it is usually ne...

On change of measure inequalities for f-divergences (02/11/2022)
We propose new change of measure inequalities based on f-divergences (of...

Learning via Wasserstein-Based High Probability Generalisation Bounds (06/07/2023)
Minimising upper bounds on the population risk or the generalisation gap...

ℓ_1-regression with Heavy-tailed Distributions (05/02/2018)
In this paper, we consider the problem of linear regression with heavy-t...

Exponential Smoothing for Off-Policy Learning (05/25/2023)
Off-policy learning (OPL) aims at finding improved policies from logged ...

Practical calibration of the temperature parameter in Gibbs posteriors (04/22/2020)
PAC-Bayesian algorithms and Gibbs posteriors are gaining popularity due ...
