Information-theoretic Characterizations of Generalization Error for the Gibbs Algorithm

by Gholamali Aminian, et al.

Various approaches have been developed to upper bound the generalization error of a supervised learning algorithm. However, existing bounds are often loose and even vacuous when evaluated in practice. As a result, they may fail to characterize the exact generalization ability of a learning algorithm. Our main contributions are exact characterizations of the expected generalization error of the well-known Gibbs algorithm (a.k.a. Gibbs posterior) using different information measures, in particular, the symmetrized KL information between the input training samples and the output hypothesis. Our result can be applied to tighten existing expected generalization error and PAC-Bayesian bounds. Our information-theoretic approach is versatile, as it also characterizes the generalization error of the Gibbs algorithm with a data-dependent regularizer and that of the Gibbs algorithm in the asymptotic regime, where it converges to the standard empirical risk minimization algorithm. Of particular relevance, our results highlight the role the symmetrized KL information plays in controlling the generalization error of the Gibbs algorithm.
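For context, the Gibbs algorithm (Gibbs posterior) referenced in the abstract is usually written as an exponential reweighting of a prior by the empirical risk. The notation below is a sketch for illustration, with $\pi$ a prior over hypotheses, $L_E(w,s)$ the empirical risk of hypothesis $w$ on training sample $s$, and $\gamma > 0$ the inverse temperature:

```latex
P^{\gamma}_{W\mid S}(w \mid s)
  = \frac{\pi(w)\, e^{-\gamma L_E(w,\, s)}}
         {\mathbb{E}_{\pi}\!\left[\, e^{-\gamma L_E(W,\, s)} \right]} .
```

The exact characterization highlighted in the abstract then takes the form

```latex
\overline{\mathrm{gen}}\!\left(P^{\gamma}_{W\mid S},\, P_S\right)
  = \frac{I_{\mathrm{SKL}}(W; S)}{\gamma},
\qquad
I_{\mathrm{SKL}}(W; S) \triangleq I(W; S) + L(W; S),
```

where $I_{\mathrm{SKL}}(W;S)$ is the symmetrized KL information between the training sample $S$ and the output hypothesis $W$, the sum of the mutual information $I(W;S)$ and the lautum information $L(W;S)$. As $\gamma \to \infty$, the Gibbs posterior concentrates on empirical risk minimizers, which is the asymptotic regime mentioned above.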



Characterizing the Generalization Error of Gibbs Algorithm with Symmetrized KL information

Bounding the generalization error of a supervised learning algorithm is ...

Characterizing and Understanding the Generalization Error of Transfer Learning with Gibbs Algorithm

We provide an information-theoretic analysis of the generalization abili...

PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning

This monograph deals with adaptive supervised classification, using tool...

On the Generalization Error of Meta Learning for the Gibbs Algorithm

We analyze the generalization ability of joint-training meta learning al...

How Does Pseudo-Labeling Affect the Generalization Error of the Semi-Supervised Gibbs Algorithm?

This paper provides an exact characterization of the expected generaliza...

Information-Theoretic Generalization Bounds for Iterative Semi-Supervised Learning

We consider iterative semi-supervised learning (SSL) algorithms that ite...

SGLD-Based Information Criteria and the Over-Parameterized Regime

Double-descent refers to the unexpected drop in test loss of a learning ...
