Tighter risk certificates for neural networks

07/25/2020
by Maria Perez-Ortiz, et al.

This paper presents empirical studies on training probabilistic neural networks using training objectives derived from PAC-Bayes bounds. In the context of probabilistic neural networks, the output of training is a probability distribution over network weights. We present two training objectives, used here for the first time in connection with training neural networks. These objectives are derived from tight PAC-Bayes bounds, one of which is new. We also re-implement a previously used training objective based on a classical PAC-Bayes bound, so as to compare the properties of the predictors learnt using the different training objectives. For the learnt predictors, we compute risk certificates that are valid on unseen examples. We further experiment with different types of priors on the weights (both data-free and data-dependent priors) and with different neural network architectures. Our experiments on MNIST and CIFAR-10 show that our training methods produce competitive test set errors and non-vacuous risk bounds that are much tighter than previous results in the literature, showing promise not only for guiding the learning algorithm by bounding the risk but also for model selection. These observations suggest that the methods studied here might be good candidates for self-bounding learning.
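For intuition about what such a risk certificate looks like, the classical PAC-Bayes-kl inequality bounds the binary KL divergence between the empirical risk of the randomized predictor and its true risk by (KL(Q||P) + ln(2*sqrt(n)/delta))/n, and the certificate is obtained by numerically inverting that binary KL. The sketch below is a minimal illustration of that inversion, not the authors' implementation: the names pac_bayes_kl_certificate and kl_posterior_prior, the choice of delta, and the example numbers are all hypothetical, and the paper additionally handles Monte Carlo estimation of the empirical risk.

```python
import math

def binary_kl(q, p):
    """KL divergence between Bernoulli(q) and Bernoulli(p)."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(q, c, tol=1e-9):
    """Largest p in [q, 1] with binary_kl(q, p) <= c, found by bisection
    (binary_kl(q, .) is increasing on [q, 1])."""
    lo, hi = q, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_kl(q, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo

def pac_bayes_kl_certificate(emp_risk, kl_posterior_prior, n, delta=0.025):
    """Risk certificate from the PAC-Bayes-kl bound: with probability
    at least 1 - delta over the n training examples,
    binary_kl(emp_risk, true_risk) <= (KL(Q||P) + ln(2*sqrt(n)/delta)) / n."""
    c = (kl_posterior_prior + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, c)

# Illustrative numbers only (not results from the paper):
print(pac_bayes_kl_certificate(emp_risk=0.02, kl_posterior_prior=5000.0, n=60000))
```

The training objectives described in the abstract can be read as differentiable relaxations in this spirit, optimised over the parameters of the distribution on weights, with the certificate itself evaluated after training for the learnt distribution.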


Related research

- Progress in Self-Certified Neural Networks (11/15/2021): A learning method is self-certified if it uses all available data to sim...
- PAC-Bayes with Backprop (08/19/2019): We explore a method to train probabilistic neural networks by minimizing...
- Risk bounds for aggregated shallow neural networks using Gaussian prior (12/21/2021): Analysing statistical properties of neural networks is a central topic i...
- Learning PAC-Bayes Priors for Probabilistic Neural Networks (09/21/2021): Recent works have investigated deep learning models trained by optimisin...
- Self-Certifying Classification by Linearized Deep Assignment (01/26/2022): We propose a novel class of deep stochastic predictors for classifying m...
- Controlling Confusion via Generalisation Bounds (02/11/2022): We establish new generalisation bounds for multiclass classification by ...
- Learning Aggregations of Binary Activated Neural Networks with Probabilities over Representations (10/28/2021): Considering a probability distribution over parameters is known as an ef...
