
A Note on High-Probability versus In-Expectation Guarantees of Generalization Bounds in Machine Learning

10/06/2020
by   Alexander Mey, et al.

Statistical learning theory often seeks generalization guarantees for machine learning models. Because such models are fit to a random data sample, their performance naturally fluctuates: if we are unlucky and draw a sample that is not representative of the underlying distribution, we cannot expect to construct a reliable model. Statements about the performance of machine learning models therefore have to take the sampling process into account. The two common approaches are statements that hold with high probability over the random sampling process, and statements that hold in expectation over it. In this short note we show how to transform one type of statement into the other. As a technical novelty, we address the case of unbounded loss functions, where we use a fairly new assumption called the witness condition.
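The two directions of such a transformation can be sketched with standard tail-bound arguments. The following is an illustrative sketch of well-known inequalities (Markov's inequality and the tail-integral formula), not the note's specific derivation; the symbols $X$, $B$, $\epsilon$, $\delta$ are illustrative, and the bounded case shown is exactly the regime the witness condition is meant to go beyond:

```latex
% In-expectation => high-probability, via Markov's inequality:
% for a nonnegative random loss gap $X$ with $\mathbb{E}[X] \le B$,
\[
  \Pr\!\left[X \ge \tfrac{B}{\delta}\right] \le \delta ,
\]
% i.e., with probability at least $1-\delta$ we have $X < B/\delta$.
%
% High-probability => in-expectation, via the tail integral
% $\mathbb{E}[X] = \int_0^\infty \Pr[X > t]\,dt$: for a bounded
% loss gap $X \in [0,1]$, the bound $\Pr[X > \epsilon] \le \delta$ gives
\[
  \mathbb{E}[X] \;\le\; \epsilon \cdot 1 + 1 \cdot \delta
  \;=\; \epsilon + \delta .
\]
```

For unbounded losses the tail integral need not converge from a single high-probability statement, which is where an extra distributional assumption such as the witness condition becomes necessary.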

