Self-Certifying Classification by Linearized Deep Assignment

01/26/2022
by Bastian Boll, et al.

We propose a novel class of deep stochastic predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm. Classifiers are realized as linearly parametrized deep assignment flows with random initial conditions. Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables (i) using risk bounds as training objectives for learning posterior distributions on the hypothesis space and (ii) computing tight out-of-sample risk certificates of randomized classifiers more efficiently than related work. A comparison with empirical test-set errors illustrates the performance and practicality of this self-certifying classification method.
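To make the certification setting concrete, below is a sketch of a standard PAC-Bayes-kl bound of the kind commonly used in this line of work, both as a training objective and as the basis of an out-of-sample risk certificate; the notation ($P$, $Q$, $L$, $\hat{L}_n$) is generic and the exact bound optimized in the paper may differ. For a prior $P$ fixed independently of the $n$ training samples, with probability at least $1-\delta$ over the sample, simultaneously for all posteriors $Q$ on the hypothesis space,

$$
\mathrm{kl}\big(\hat{L}_n(Q)\,\|\,L(Q)\big) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\qquad
\mathrm{kl}(q\,\|\,p) = q\ln\tfrac{q}{p} + (1-q)\ln\tfrac{1-q}{1-p},
$$

where $\hat{L}_n(Q)$ is the empirical risk of the randomized classifier, $L(Q)$ its out-of-sample risk, and $\mathrm{kl}$ the binary KL divergence. Inverting the inequality in $L(Q)$ yields a numerical risk certificate; with a data-dependent prior, part of the data is used to learn $P$ and the remainder to evaluate the certificate.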
