Characterizing the Generalization Error of Gibbs Algorithm with Symmetrized KL information

07/28/2021
by Gholamali Aminian, et al.

Bounding the generalization error of a supervised learning algorithm is one of the most important problems in learning theory, and various approaches have been developed. However, existing bounds are often loose and lack guarantees, so they may fail to characterize the exact generalization ability of a learning algorithm. Our main contribution is an exact characterization of the expected generalization error of the well-known Gibbs algorithm in terms of the symmetrized KL information between the input training samples and the output hypothesis. Such a result can be applied to tighten existing expected generalization error bounds. Our analysis provides more insight into the fundamental role that the symmetrized KL information plays in controlling the generalization error of the Gibbs algorithm.
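To give a concrete sense of the result (a sketch in assumed notation; the paper's precise statement may differ in its conditions and constants), the Gibbs algorithm with inverse temperature \gamma, prior \pi(w), and empirical risk L_E(w, s) is the posterior

    P_{W|S}(w \mid s) = \frac{\pi(w)\, e^{-\gamma L_E(w, s)}}{\int \pi(w')\, e^{-\gamma L_E(w', s)}\, dw'},

and the exact characterization ties its expected generalization error to the symmetrized KL information between the training set S and the output hypothesis W:

    \overline{\mathrm{gen}}(P_{W|S}, P_S) = \frac{I_{SKL}(S; W)}{\gamma}, \qquad I_{SKL}(S; W) = I(S; W) + L(S; W),

where I(S; W) is the mutual information and L(S; W) the Lautum information, so that I_{SKL} is the symmetrized KL divergence between the joint distribution P_{S,W} and the product of marginals P_S \otimes P_W.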


Related research:

Information-theoretic Characterizations of Generalization Error for the Gibbs Algorithm (10/18/2022): Various approaches have been developed to upper bound the generalization...

On the Generalization Error of Meta Learning for the Gibbs Algorithm (04/27/2023): We analyze the generalization ability of joint-training meta learning al...

Characterizing and Understanding the Generalization Error of Transfer Learning with Gibbs Algorithm (11/02/2021): We provide an information-theoretic analysis of the generalization abili...

How Does Pseudo-Labeling Affect the Generalization Error of the Semi-Supervised Gibbs Algorithm? (10/15/2022): This paper provides an exact characterization of the expected generaliza...

On-Average KL-Privacy and its equivalence to Generalization for Max-Entropy Mechanisms (05/08/2016): We define On-Average KL-Privacy and present its properties and connectio...

Generalization Error Bounds for Noisy, Iterative Algorithms (01/12/2018): In statistical learning theory, generalization error is used to quantify...

Learning Optimal Representations with the Decodable Information Bottleneck (09/27/2020): We address the question of characterizing and finding optimal representa...
