A PAC-Bayes bound for deterministic classifiers

09/06/2022
by Eugenio Clerico, et al.

We establish a disintegrated PAC-Bayesian bound for classifiers trained via continuous-time (non-stochastic) gradient descent. Contrary to what is standard in the PAC-Bayesian setting, our result applies to a training algorithm that is deterministic conditionally on a random initialisation, without requiring any de-randomisation step. We discuss the main features of the proposed bound in detail, and we study its behaviour analytically and empirically on linear models, finding promising results.
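
As background, and not the specific theorem established in this paper, a typical disintegrated PAC-Bayes bound (in the style of Rivasplata et al., 2020) controls a single hypothesis drawn from the posterior rather than a posterior average: with probability at least 1 - \delta over the draw of an m-sample S and of h ~ \rho_S,

    \mathrm{kl}\big( \hat{L}_S(h) \,\|\, L(h) \big) \le \frac{1}{m} \Big( \log \frac{d\rho_S}{d\pi}(h) + \log \frac{2\sqrt{m}}{\delta} \Big),

where \hat{L}_S(h) and L(h) denote the empirical and population risks, \pi is a data-free prior, \rho_S is a data-dependent posterior, and kl is the binary relative entropy; all of this notation is illustrative rather than taken from the paper. The abstract's point is that the random draw h ~ \rho_S can be replaced by a deterministic training map (here, continuous-time gradient descent) applied to a random initialisation, so that a bound of this flavour holds for the trained classifier itself, with no de-randomisation step; the precise statement is in the full text.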

Related research

10/22/2021 · Conditional Gaussian PAC-Bayes
05/07/2014 · PAC-Bayes Mini-tutorial: A Continuous Union Bound
06/17/2021 · Wide stochastic networks: Gaussian limit and PAC-Bayesian training
11/07/2016 · Learning Influence Functions from Incomplete Observations
06/22/2020 · Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks
09/28/2021 · A PAC-Bayesian Analysis of Distance-Based Classifiers: Why Nearest-Neighbour works!
06/23/2016 · PAC-Bayesian Analysis for a two-step Hierarchical Multiview Learning Approach
