Self-Bounding Majority Vote Learning Algorithms by the Direct Minimization of a Tight PAC-Bayesian C-Bound

04/28/2021
by   Paul Viallard, et al.

In the PAC-Bayesian literature, the C-Bound refers to an insightful relation between the risk of a majority vote classifier (under the zero-one loss) and the first two moments of its margin (i.e., the expected margin and the voters' diversity). Until now, learning algorithms developed in this framework have minimized the empirical version of the C-Bound rather than explicit PAC-Bayesian generalization bounds. In this paper, by directly optimizing PAC-Bayesian guarantees on the C-Bound, we derive self-bounding majority vote learning algorithms. Moreover, our gradient-descent-based algorithms are scalable and lead to accurate predictors paired with non-vacuous guarantees.
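To make the quantity being bounded concrete: for binary voters with outputs in {-1, +1} and a distribution rho over voters, the margin is M(x, y) = E_{h~rho}[y h(x)], and the classical C-Bound states that the majority vote risk is at most 1 - mu1^2 / mu2, where mu1 and mu2 are the first and second moments of the margin. The sketch below (not the paper's algorithm, just an illustration of the empirical C-Bound it builds on, with hypothetical function and variable names) computes that empirical quantity with NumPy:

```python
import numpy as np

def empirical_c_bound(votes, labels, weights):
    """Empirical C-Bound 1 - E[M]^2 / E[M^2] for a rho-weighted
    majority vote over binary voters.

    votes:   (n_samples, n_voters) array of voter outputs in {-1, +1}
    labels:  (n_samples,) array of true labels in {-1, +1}
    weights: (n_voters,) distribution rho over the voters (sums to 1)
    """
    # Margin of each example: M(x, y) = sum_h rho(h) * y * h(x)
    margins = labels * (votes @ weights)
    mu1 = margins.mean()           # first moment (expected margin)
    mu2 = (margins ** 2).mean()    # second moment (captures voter diversity)
    if mu1 <= 0:
        raise ValueError("C-Bound requires a positive expected margin")
    return 1.0 - mu1 ** 2 / mu2

# Tiny example: 3 samples, 2 voters, uniform rho.
votes = np.array([[1, 1], [1, -1], [-1, 1]])
labels = np.array([1, 1, -1])
bound = empirical_c_bound(votes, labels, np.array([0.5, 0.5]))
```

In this toy case the margins are (1, 0, 0), so mu1 = 1/3, mu2 = 1/3, and the bound evaluates to 2/3. The paper's contribution is to optimize a PAC-Bayesian upper bound on this quantity directly, rather than its empirical value alone.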


Related research

- 08/06/2014 · On the Generalization of the C-Bound to Structured Output Ensemble Methods — This paper generalizes an important result from the PAC-Bayesian literat...
- 01/16/2019 · A Primer on PAC-Bayesian Learning — Generalized Bayesian learning algorithms are increasingly popular in mac...
- 09/02/2010 · A PAC-Bayesian Analysis of Graph Clustering and Pairwise Clustering — We formulate weighted graph clustering as a prediction problem: given a ...
- 03/28/2015 · Risk Bounds for the Majority Vote: From a PAC-Bayesian Analysis to a Learning Algorithm — We propose an extensive analysis of the behavior of majority votes in bi...
- 10/22/2021 · Conditional Gaussian PAC-Bayes — Recent studies have empirically investigated different methods to train ...
- 01/15/2014 · Transductive Rademacher Complexity and its Applications — We develop a technique for deriving data-dependent error bounds for tran...
- 06/26/2020 · PAC-Bayesian Bound for the Conditional Value at Risk — Conditional Value at Risk (CVaR) is a family of "coherent risk measures"...
