Learning via Wasserstein-Based High Probability Generalisation Bounds

06/07/2023
by Paul Viallard, et al.

Minimising upper bounds on the population risk or the generalisation gap is a widely used strategy in structural risk minimisation (SRM); it is, in particular, at the core of PAC-Bayesian learning. Despite its successes and a sustained surge of interest in recent years, a limitation of the PAC-Bayesian framework is that most bounds involve a Kullback-Leibler (KL) divergence term (or one of its variants), which can exhibit erratic behaviour and fails to capture the underlying geometric structure of the learning problem, thereby restricting its use in practical applications. As a remedy, recent studies have attempted to replace the KL divergence in PAC-Bayesian bounds with the Wasserstein distance. Although these bounds alleviate the aforementioned issues to some extent, they either hold only in expectation, apply only to bounded losses, or are non-trivial to minimise in an SRM framework. In this work, we contribute to this line of research and prove novel Wasserstein-distance-based PAC-Bayesian generalisation bounds, both for batch learning with independent and identically distributed (i.i.d.) data and for online learning with potentially non-i.i.d. data. Contrary to prior work, our bounds are stronger in the sense that (i) they hold with high probability, (ii) they apply to unbounded (potentially heavy-tailed) losses, and (iii) they lead to optimisable training objectives that can be used in SRM. As a result, we derive novel Wasserstein-based PAC-Bayesian learning algorithms and illustrate their empirical advantage on a variety of experiments.
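To make the SRM angle concrete, here is a minimal, hypothetical sketch, not the authors' algorithm (whose bounds and objectives are more involved): it selects a one-dimensional Gaussian posterior by minimising an empirical Gibbs risk plus a 1-Wasserstein penalty to a fixed prior, in the spirit of replacing the KL term with a Wasserstein distance. The toy data, the complexity weight `lam`, the grid search, and the sample sizes are all illustrative assumptions; only `scipy.stats.wasserstein_distance` (the 1-D Wasserstein-1 distance) is a real library call.

```python
# Hypothetical sketch of Wasserstein-regularised structural risk
# minimisation; NOT the paper's algorithm, just an illustration of
# trading empirical risk against a Wasserstein (rather than KL) term.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise; hypothesis class h_w(x) = w * x.
x = rng.normal(size=200)
y = 2.0 * x + 0.1 * rng.normal(size=200)

def empirical_risk(w_samples):
    # Average squared loss over posterior samples (a Gibbs risk).
    preds = np.outer(w_samples, x)          # shape (n_samples, n_data)
    return np.mean((preds - y) ** 2)

prior = rng.normal(loc=0.0, scale=1.0, size=1000)  # fixed prior samples
lam = 0.1                                          # illustrative complexity weight

best = None
for mu in np.linspace(-1.0, 4.0, 51):
    post = rng.normal(loc=mu, scale=0.2, size=1000)  # candidate posterior
    # Wasserstein-regularised objective, mimicking bound minimisation.
    obj = empirical_risk(post) + lam * wasserstein_distance(post, prior)
    if best is None or obj < best[0]:
        best = (obj, mu)

print(f"selected posterior mean: {best[1]:.2f} (objective {best[0]:.3f})")
```

In practice one would replace the grid search with gradient-based optimisation of the bound-derived objective; the point of the sketch is only that a Wasserstein term, unlike KL, remains well-behaved between distributions with disjoint supports, which is what makes such objectives directly optimisable.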


Related research

Simpler PAC-Bayesian Bounds for Hostile Data (10/23/2016)
PAC-Bayesian learning bounds are of the utmost interest to the learning ...

Online PAC-Bayes Learning (05/31/2022)
Most PAC-Bayesian bounds hold in the batch learning setting where data i...

Shedding a PAC-Bayesian Light on Adaptive Sliced-Wasserstein Distances (06/07/2022)
The Sliced-Wasserstein distance (SW) is a computationally efficient and ...

PAC-Bayes under potentially heavy tails (05/20/2019)
We derive PAC-Bayesian learning guarantees for heavy-tailed losses, and ...

Optimal PAC-Bayesian Posteriors for Stochastic Classifiers and their use for Choice of SVM Regularization Parameter (12/14/2019)
PAC-Bayesian set up involves a stochastic classifier characterized by a ...

Tighter PAC-Bayes Generalisation Bounds by Leveraging Example Difficulty (10/20/2022)
We introduce a modified version of the excess risk, which can be used to...
