Conditional Gaussian PAC-Bayes

10/22/2021
by Eugenio Clerico et al.

Recent studies have empirically investigated different methods to train a stochastic classifier by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mismatch between the optimisation objective and the actual generalisation bound. The present paper proposes a novel training algorithm that optimises the PAC-Bayesian bound, without relying on any surrogate loss. Empirical results show that the bounds obtained with this approach are tighter than those found in the literature.
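
For context, the standard recipe the abstract alludes to trains a Gaussian posterior Q over the network weights by stochastic gradient descent on a PAC-Bayesian objective, for instance the McAllester-style bound R(Q) <= R_hat_S(Q) + sqrt((KL(Q || P) + ln(2*sqrt(n)/delta)) / (2n)), after replacing the non-differentiable misclassification error in R_hat_S with a smooth surrogate. The following PyTorch sketch illustrates that surrogate-based baseline, not the algorithm proposed in this paper; the toy data, prior scale, sigmoid slope, and all names are illustrative assumptions.

```python
import math
import torch

torch.manual_seed(0)

# Toy binary classification data (hypothetical; any bounded-loss task works).
n, d = 500, 20
X = torch.randn(n, d)
w_true = torch.randn(d)
y = (X @ w_true > 0).float() * 2 - 1  # labels in {-1, +1}

# Posterior Q = N(mu, diag(exp(2*rho))); prior P = N(0, sigma_p^2 * I).
mu = torch.zeros(d, requires_grad=True)
rho = torch.full((d,), -3.0, requires_grad=True)  # log standard deviations
sigma_p = 0.1
delta = 0.05

def kl_gaussian(mu, rho, sigma_p):
    """KL(Q || P) for diagonal Gaussian Q and isotropic Gaussian prior P."""
    var_q = torch.exp(2 * rho)
    return 0.5 * torch.sum(
        var_q / sigma_p**2 + mu**2 / sigma_p**2 - 1 - 2 * rho + 2 * math.log(sigma_p)
    )

opt = torch.optim.Adam([mu, rho], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    # One reparameterised weight sample from Q (pathwise gradient estimator).
    w = mu + torch.exp(rho) * torch.randn(d)
    margins = y * (X @ w)
    # Sigmoid surrogate for the 0-1 error: differentiable, bounded in [0, 1].
    # This substitution is precisely the objective/bound mismatch the paper avoids.
    surrogate = torch.sigmoid(-5.0 * margins).mean()
    kl = kl_gaussian(mu, rho, sigma_p)
    # McAllester-style penalty: sqrt((KL + ln(2 sqrt(n) / delta)) / (2n)).
    penalty = torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    (surrogate + penalty).backward()
    opt.step()

with torch.no_grad():
    err = ((X @ mu) * y <= 0).float().mean()  # 0-1 error of the posterior mean
print(f"empirical 0-1 error of posterior mean: {err:.3f}")
```

The sigmoid term is exactly where the mismatch arises: the certificate holds for the 0-1 error, while the optimiser only ever sees the surrogate.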

Related research

09/06/2022 · A PAC-Bayes bound for deterministic classifiers
We establish a disintegrated PAC-Bayesian bound, for classifiers that ar...

06/22/2020 · Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks
We make three related contributions motivated by the challenge of traini...

04/28/2021 · Self-Bounding Majority Vote Learning Algorithms by the Direct Minimization of a Tight PAC-Bayesian C-Bound
In the PAC-Bayesian literature, the C-Bound refers to an insightful rela...

09/28/2021 · A PAC-Bayesian Analysis of Distance-Based Classifiers: Why Nearest-Neighbour works!
We present PAC-Bayesian bounds for the generalisation error of ...

10/03/2022 · PAC-Bayes with Unbounded Losses through Supermartingales
While PAC-Bayes is now an established learning framework for bounded los...

07/04/2012 · PAC-Bayesian Majority Vote for Late Classifier Fusion
A lot of attention has been devoted to multimedia indexing over the past...

06/17/2021 · Wide stochastic networks: Gaussian limit and PAC-Bayesian training
The limit of infinite width allows for substantial simplifications in th...
