
A Note on High-Probability versus In-Expectation Guarantees of Generalization Bounds in Machine Learning
Statistical machine learning theory often aims to provide generalization guarantees for machine learning models. Such models are naturally subject to fluctuations, as they are based on a data sample. If we are unlucky and gather a sample that is not representative of the underlying distribution, we cannot expect to construct a reliable machine learning model. Consequently, statements about the performance of machine learning models have to take the sampling process into account. The two common approaches are to make statements that hold either with high probability or in expectation over the random sampling process. In this short note we show how one may transform one type of statement into the other. As a technical novelty, we address the case of unbounded loss functions, using a fairly recent assumption called the witness condition.
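As an illustrative sketch (a textbook folklore argument, not necessarily the technique of the note), the simplest direction of such a transformation, from an in-expectation bound to a high-probability bound for a non-negative generalization gap, follows from Markov's inequality:

```latex
% Assume the generalization gap G_n \ge 0 satisfies an in-expectation bound
%   \mathbb{E}[G_n] \le \varepsilon_n .
% Markov's inequality then yields, for any confidence level \delta \in (0,1),
\Pr\!\left( G_n \ge \frac{\varepsilon_n}{\delta} \right)
  \;\le\; \frac{\mathbb{E}[G_n]}{\varepsilon_n/\delta}
  \;\le\; \delta ,
% i.e. G_n < \varepsilon_n / \delta holds with probability at least 1 - \delta.
```

The reverse direction, turning a high-probability statement into an expectation bound, requires integrating the tail bound, and this is where boundedness of the loss (or, for unbounded losses, an assumption such as the witness condition mentioned above) enters.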