What is important about the No Free Lunch theorems?

07/21/2020
by David H. Wolpert, et al.

The No Free Lunch theorems prove that, under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally. As I discuss in this chapter, the importance of the theorems arises from using them to analyze scenarios involving non-uniform distributions, and to compare different algorithms without any assumption about the distribution over problems at all. In particular, the theorems prove that anti-cross-validation (choosing among a set of candidate algorithms based on which has the worst out-of-sample behavior) performs as well as cross-validation, unless one makes an assumption – one that has never been formalized – about how the distribution over induction problems, on the one hand, is related to the set of algorithms one is choosing among using (anti-)cross-validation, on the other. In addition, the theorems establish strong caveats concerning the significance of the many results in the literature that establish the strength of a particular algorithm without assuming a particular distribution. They also motivate a "dictionary" between supervised learning and blackbox optimization, which allows one to "translate" techniques from supervised learning into the domain of blackbox optimization, thereby strengthening blackbox optimization algorithms. Beyond these topics, I also briefly discuss the theorems' implications for the philosophy of science.
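The uniform-average claim can be checked exhaustively on a toy search space. The sketch below (my illustration, not from the chapter; the domain, query orders, and helper names are arbitrary choices) enumerates every objective function f: {0, 1, 2} -> {0, 1} and shows that two different fixed query orders induce identical distributions over observed-value traces, so any performance measure that depends only on observed values has the same uniform average for both algorithms.

```python
from collections import Counter
from itertools import product

X = range(3)   # toy search space
Y = (0, 1)     # possible objective values

def run(query_order, f, m=3):
    """Query points in a fixed order; return the tuple of observed values."""
    return tuple(f[x] for x in query_order[:m])

# All 2^3 = 8 functions f: X -> Y, each weighted equally (uniform distribution).
functions = list(product(Y, repeat=len(X)))

alg_a = (0, 1, 2)  # two non-repeating "search algorithms" with different orders
alg_b = (2, 0, 1)

hist_a = Counter(run(alg_a, f) for f in functions)
hist_b = Counter(run(alg_b, f) for f in functions)

# Identical histograms of observed-value traces under the uniform distribution:
assert hist_a == hist_b
```

Because any performance measure (best value found, queries until a 1 is seen, etc.) is a function of the observed-value trace alone, equal trace histograms force equal uniform-average performance, which is the content of the NFL theorems in this miniature setting.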


