Learning Algorithm Generalization Error Bounds via Auxiliary Distributions

10/02/2022
by Gholamali Aminian et al.

Generalization error bounds are essential for understanding how well machine learning models perform. In this work, we propose a novel method, the Auxiliary Distribution Method, for deriving new upper bounds on the generalization error that are suitable for supervised learning scenarios. We show that our general upper bounds can be specialized, under certain conditions, to new bounds involving the generalized α-Jensen-Shannon information and the α-Rényi information (0 < α < 1) between a random variable modeling the set of training samples and another random variable modeling the set of hypotheses. Our upper bounds based on the generalized α-Jensen-Shannon information have the added advantage of always being finite. Additionally, we demonstrate how the auxiliary distribution method can be used to derive upper bounds on the generalization error of supervised learning algorithms under distribution mismatch, where the mismatch between the distributions of the test and training data samples is measured by the α-Jensen-Shannon or α-Rényi divergence (0 < α < 1). Finally, we outline the conditions under which our proposed upper bounds can be tighter than earlier upper bounds.
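As a point of reference, the following is a brief sketch of the standard definitions behind the quantities named above: the expected generalization error of a learning algorithm P_{W|S} trained on n i.i.d. samples S = (Z_1, ..., Z_n) drawn from μ, the α-Rényi divergence, and the generalized α-Jensen-Shannon divergence. The notation here is generic, and the exact conventions used in the paper may differ.

\[
\overline{\mathrm{gen}}(P_{W\mid S},\mu) = \mathbb{E}_{P_{W,S}}\big[L_\mu(W) - L_S(W)\big],
\qquad
L_\mu(w)=\mathbb{E}_{Z\sim\mu}[\ell(w,Z)],\quad
L_S(w)=\frac{1}{n}\sum_{i=1}^{n}\ell(w,Z_i),
\]
\[
R_\alpha(P\,\|\,Q) = \frac{1}{\alpha-1}\log \mathbb{E}_{Q}\!\left[\Big(\frac{dP}{dQ}\Big)^{\alpha}\right],
\qquad 0<\alpha<1,
\]
\[
\mathrm{JS}_\alpha(P\,\|\,Q) = \alpha\, D_{\mathrm{KL}}\big(P \,\|\, \alpha P+(1-\alpha)Q\big) + (1-\alpha)\, D_{\mathrm{KL}}\big(Q \,\|\, \alpha P+(1-\alpha)Q\big).
\]

The corresponding information measures between W and S are obtained by taking P to be the joint distribution P_{W,S} and Q the product of marginals P_W ⊗ P_S. Since JS_α(P‖Q) ≤ −α log α − (1−α) log(1−α) for any P and Q, bounds stated in terms of the α-Jensen-Shannon information remain finite even in cases where KL-based mutual information bounds are vacuous.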


