Information-theoretic analysis of generalization capability of learning algorithms

05/22/2017
by Aolin Xu, et al.

We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input (the training dataset) and its output (the learned hypothesis). The bounds provide an information-theoretic understanding of generalization in learning problems and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propose a number of methods for this purpose, among which are algorithms that regularize the ERM algorithm with relative entropy or with random noise. Our work extends, and leads to nontrivial improvements on, the recent results of Russo and Zou.
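For reference, the paper's headline bound, stated here up to notational choices: if the loss \ell(w, Z) is \sigma-sub-Gaussian under the data distribution \mu for every hypothesis w, and S = (Z_1, \dots, Z_n) is a sample of n i.i.d. draws from \mu with W the algorithm's output, then

    \bigl| \mathbb{E}[ L_\mu(W) - L_S(W) ] \bigr| \le \sqrt{ \frac{2\sigma^2}{n} \, I(S; W) },

where L_\mu and L_S denote the population and empirical risks and I(S; W) is the input-output mutual information. The bound vanishes as n grows whenever I(S; W) grows sublinearly in n.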
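The noise-regularization idea can be illustrated with a minimal sketch. The setting and names below are our own illustration, not the paper's: a one-dimensional location family with squared loss, for which ERM is just the sample mean, and Gaussian perturbation of the output keeps I(S; W) finite.

import numpy as np

def noisy_erm(data, noise_std, rng=None):
    # Sketch of noise-regularized ERM: perturb the empirical risk
    # minimizer with Gaussian noise. The added randomness keeps the
    # input-output mutual information I(S; W) finite, which in turn
    # controls the generalization bound above.
    rng = np.random.default_rng() if rng is None else rng
    w_erm = float(np.mean(data))  # ERM for squared loss over a location parameter
    return w_erm + rng.normal(0.0, noise_std)

# Larger noise_std lowers I(S; W) (a better generalization guarantee)
# at the cost of a worse fit to the training sample.
sample = np.random.default_rng(0).normal(loc=1.0, scale=1.0, size=100)
print(noisy_erm(sample, noise_std=0.1))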


Related research:

- Upper Bounds on the Generalization Error of Private Algorithms (05/12/2020)
- Information Theoretic Lower Bounds for Information Theoretic Upper Bounds (02/09/2023)
- Reasoning About Generalization via Conditional Mutual Information (01/24/2020)
- Asymptotically Optimal Generalization Error Bounds for Noisy, Iterative Algorithms (02/28/2023)
- Improved Information Theoretic Generalization Bounds for Distributed and Federated Learning (02/04/2022)
- Generalization Bounds via Convex Analysis (02/10/2022)
- The Role of Mutual Information in Variational Classifiers (10/22/2020)
