Information-Theoretic Bounds on the Moments of the Generalization Error of Learning Algorithms

by Gholamali Aminian, et al.

Generalization error bounds are critical to understanding the performance of machine learning models. In this work, building upon a new bound on the expected value of an arbitrary function of the population and empirical risk of a learning algorithm, we offer a more refined analysis of the generalization behaviour of machine learning models based on a characterization of bounds on their generalization error moments. We discuss how the proposed bounds, which also encompass new bounds on the expected generalization error, relate to existing bounds in the literature. We also discuss how the proposed generalization error moment bounds can be used to construct new high-probability generalization error bounds.
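To make the central object concrete: the generalization error of a learned hypothesis is the population risk minus the empirical risk, and the paper studies bounds on its moments. The following is a minimal, illustrative Monte-Carlo sketch (not from the paper; all function names and parameters are hypothetical) that estimates the first few raw moments of the generalization error for a simple least-squares learner, with the population risk approximated by a large held-out sample.

```python
import numpy as np

def generalization_error_moments(n_runs=200, n_train=30, n_test=2000,
                                 dim=5, noise=0.5, seed=0):
    """Monte-Carlo estimate of the first raw moments of the generalization
    error (population risk minus empirical risk) for ordinary least squares
    on synthetic linear data. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    w_true = rng.normal(size=dim)
    gaps = []
    for _ in range(n_runs):
        # Draw a fresh training set and fit the learner.
        X = rng.normal(size=(n_train, dim))
        y = X @ w_true + noise * rng.normal(size=n_train)
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        # Empirical risk: average squared loss on the training sample.
        emp_risk = np.mean((X @ w_hat - y) ** 2)
        # Population risk: approximated by a large independent test sample.
        Xt = rng.normal(size=(n_test, dim))
        yt = Xt @ w_true + noise * rng.normal(size=n_test)
        pop_risk = np.mean((Xt @ w_hat - yt) ** 2)
        gaps.append(pop_risk - emp_risk)
    gaps = np.asarray(gaps)
    # Raw moments E[gap^m] for m = 1, 2, 3.
    return {m: float(np.mean(gaps ** m)) for m in (1, 2, 3)}

if __name__ == "__main__":
    print(generalization_error_moments())
```

The first moment is the expected generalization error studied by classical information-theoretic bounds; the higher moments are what the paper's refined analysis characterizes, and (e.g. via Chebyshev-type arguments) they can be turned into high-probability guarantees.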




