Undecidability of Underfitting in Learning Algorithms

02/04/2021
by Sonia Sehra, et al.

Using recent machine learning results that present an information-theoretic perspective on underfitting and overfitting, we prove that deciding whether an encodable learning algorithm will always underfit a dataset, even if given unlimited training time, is undecidable. We discuss the importance of this result and potential topics for further research, including information-theoretic and probabilistic strategies for bounding learning algorithm fit.
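
Undecidability results of this kind typically rest on a Rice-theorem-style reduction from the halting problem. The Python sketch below is a generic illustration of that idea under that assumption, not the paper's actual construction; every name in it (run_for_steps, make_learner, the toy programs) is hypothetical. It wraps an arbitrary step-wise program inside a learning algorithm so that the learner stops underfitting exactly when the wrapped program halts, so a decider for "always underfits, even with unlimited training time" would also decide halting.

```python
# Minimal, self-contained sketch of a halting-style reduction that would make
# "does this learner always underfit?" undecidable. Illustrative only, not the
# paper's construction; all names here are hypothetical.

def run_for_steps(program, budget):
    """Step-bounded simulation: run a generator-based program for at most
    `budget` steps and report whether it halted within the budget."""
    steps = program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return True
    return False

def make_learner(program):
    """Encode an arbitrary step-wise program inside a learning algorithm."""
    def learner(xs, ys, budget):
        if run_for_steps(program, budget):
            # Once the wrapped program halts, memorize the training data:
            # zero training error, so the learner no longer underfits.
            table = dict(zip(xs, ys))
            return lambda x: table[x]
        # Otherwise return a constant predictor, which underfits any
        # dataset whose labels are not all identical.
        return lambda x: 0
    return learner

def halts_after_three():   # halts after 3 simulated steps
    for _ in range(3):
        yield

def loops_forever():       # never halts
    while True:
        yield

# The learner built from `loops_forever` underfits for every training budget;
# the one built from `halts_after_three` eventually fits the data exactly.
# Deciding "always underfits, even with unlimited training time" would
# therefore decide whether the wrapped program halts.
fit = make_learner(halts_after_three)([1, 2, 3], [4, 5, 6], budget=10)
print(fit(2))  # -> 5
```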

Related research

10/12/2020 · An Information-Theoretic Perspective on Overfitting and Underfitting
We present an information-theoretic framework for understanding overfitt...

10/26/2013 · Efficient Information Theoretic Clustering on Discrete Lattices
We consider the problem of clustering data that reside on discrete, low ...

06/10/2019 · Big Variates: Visualizing and identifying key variables in a multivariate world
Big Data involves both a large number of events but also many variables....

01/28/2020 · Margin Maximization as Lossless Maximal Compression
The ultimate goal of a supervised learning algorithm is to produce model...

08/08/2022 · Information bottleneck theory of high-dimensional regression: relevancy, efficiency and optimality
Avoiding overfitting is a central challenge in machine learning, yet man...

06/07/2021 · An Information-theoretic Approach to Distribution Shifts
Safely deploying machine learning models to the real world is often a ch...

01/16/2013 · Information Theoretic Learning with Infinitely Divisible Kernels
In this paper, we develop a framework for information theoretic learning...
