PAC-Bayes with Backprop

08/19/2019
by Omar Rivasplata, et al.

We explore a method to train probabilistic neural networks by minimizing risk upper bounds, specifically PAC-Bayes bounds. Randomization is thus not just part of a proof strategy, but part of the learning algorithm itself. We derive two training objectives: one from a previously known PAC-Bayes bound, and a second from a novel PAC-Bayes bound. We evaluate both training objectives on various data sets and demonstrate the tightness of the risk upper bounds achieved by our method. Our training objectives have sound theoretical justification and lead to self-bounding learning, where all the available data may be used both to learn a predictor and to certify its risk, with no need for a data-splitting protocol.
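To make the approach concrete, below is a minimal sketch (not the authors' code) of training by minimizing a PAC-Bayes bound with backprop. It assumes a diagonal Gaussian posterior Q = N(mu, sigma^2) over the weights of a linear classifier, a fixed Gaussian prior P = N(0, s0^2), and a classic McAllester-style bound L(Q) <= L_hat(Q) + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)); the paper's two actual objectives may differ in the exact bound used. The names GaussianLinear and pac_bayes_objective are illustrative, and the cross-entropy loss stands in as a differentiable surrogate for the bounded 0-1 loss the bound formally requires.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianLinear(nn.Module):
    """Linear layer whose weights follow a diagonal Gaussian posterior Q."""
    def __init__(self, d_in, d_out, prior_std=0.1):
        super().__init__()
        self.prior_std = prior_std
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.01)
        # rho parameterizes sigma = softplus(rho) to keep it positive;
        # initialized so that sigma equals the prior scale at the start.
        init_rho = math.log(math.expm1(prior_std))
        self.rho = nn.Parameter(torch.full((d_out, d_in), init_rho))

    def forward(self, x):
        sigma = F.softplus(self.rho)
        eps = torch.randn_like(sigma)       # reparameterization trick
        w = self.mu + sigma * eps           # one weight sample from Q
        return x @ w.t()

    def kl_to_prior(self):
        """KL( N(mu, sigma^2) || N(0, prior_std^2) ), summed over weights."""
        sigma = F.softplus(self.rho)
        s0 = self.prior_std
        return (torch.log(s0 / sigma)
                + (sigma**2 + self.mu**2) / (2 * s0**2) - 0.5).sum()

def pac_bayes_objective(model, x, y, n, delta=0.05):
    """Empirical risk + sqrt((KL + ln(2*sqrt(n)/delta)) / (2n))."""
    emp_risk = F.cross_entropy(model(x), y)  # surrogate for the 0-1 risk
    kl = model.kl_to_prior()
    complexity = torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    return emp_risk + complexity

# Toy usage: the same n examples train the predictor and feed the bound.
n, d = 1000, 20
x = torch.randn(n, d)
y = (x[:, 0] > 0).long()
model = GaussianLinear(d, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for step in range(200):
    opt.zero_grad()
    loss = pac_bayes_objective(model, x, y, n)
    loss.backward()
    opt.step()
```

Because the objective itself is a high-probability upper bound on the risk, evaluating it (with the 0-1 loss, averaged over posterior samples) on the training data yields a risk certificate for the learned predictor, which is the self-bounding property the abstract refers to.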


Related research

07/25/2020 · Tighter risk certificates for neural networks
This paper presents empirical studies regarding training probabilistic n...

11/29/2022 · PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison
PAC-Bayes has recently re-emerged as an effective theory with which one ...

06/23/2021 · Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound
We investigate a stochastic counterpart of majority votes over finite en...

02/04/2022 · Demystify Optimization and Generalization of Over-parameterized PAC-Bayesian Learning
PAC-Bayesian is an analysis framework where the training error can be ex...

12/07/2020 · A PAC-Bayesian Perspective on Structured Prediction with Implicit Loss Embeddings
Many practical machine learning tasks can be framed as Structured predic...

12/07/2020 · Generalization bounds for deep learning
Generalization in deep learning has been the topic of much recent theore...

02/11/2022 · Controlling Confusion via Generalisation Bounds
We establish new generalisation bounds for multiclass classification by ...
