What is important about the No Free Lunch theorems?

07/21/2020 ∙ David H. Wolpert

The No Free Lunch theorems prove that under a uniform distribution over induction problems (search problems or learning problems), all induction algorithms perform equally. As I discuss in this chapter, the importance of the theorems arises by using them to analyze scenarios involving non-uniform distributions, and to compare different algorithms, without any assumption about the distribution over problems at all. In particular, the theorems prove that anti-cross-validation (choosing among a set of candidate algorithms based on which has worst out-of-sample behavior) performs as well as cross-validation, unless one makes an assumption – which has never been formalized – about how the distribution over induction problems, on the one hand, is related to the set of algorithms one is choosing among using (anti-)cross-validation, on the other. In addition, they establish strong caveats concerning the significance of the many results in the literature which establish the strength of a particular algorithm without assuming a particular distribution. They also motivate a “dictionary” between supervised learning and blackbox optimization, which allows one to “translate” techniques from supervised learning into the domain of blackbox optimization, thereby strengthening blackbox optimization algorithms. In addition to these topics, I also briefly discuss their implications for philosophy of science.


1 Introduction

The first of what are now called the No Free Lunch (NFL) theorems were published in wolp96ab. Soon after publication they were popularized in scha94, building on a preprint version of wolp96ab. Those first theorems focused on (supervised) machine learning. Loosely speaking, they can be viewed as a formalization and elaboration of informal concerns about the legitimacy of inductive inference that date back to David Hume (if not earlier). Shortly after these original theorems were published, additional NFL theorems that apply to search were introduced in woma97. Broadly speaking, the NFL theorems say that under a uniform distribution over problems (be they supervised learning problems or search problems), all algorithms perform equally.
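
To make this concrete, here is a minimal sketch in Python (the algorithm names and the tiny search space are mine, for illustration only, not from the original papers) that verifies the uniform-average claim by brute force: it enumerates every objective function f : X → Y on a four-point space and checks that two quite different deterministic, non-revisiting search algorithms have identical average performance.

```python
import itertools

X = range(4)            # search space
Y = (0, 1)              # possible objective values; lower is better
m = 3                   # number of (off-data-set) samples

def sweep_up(d):
    """Visit unvisited points in ascending order."""
    seen = {x for x, _ in d}
    return min(x for x in X if x not in seen)

def adaptive(d):
    """If the last sample tied the best value so far, move to the nearest
    unvisited neighbour; otherwise jump to the farthest unvisited point.
    (Any deterministic non-revisiting rule would do.)"""
    seen = {x for x, _ in d}
    unseen = [x for x in X if x not in seen]
    if not d:
        return unseen[0]
    x_last, y_last = d[-1]
    if y_last == min(y for _, y in d):
        return min(unseen, key=lambda x: abs(x - x_last))
    return max(unseen, key=lambda x: abs(x - x_last))

def performance(alg, f):
    """Lowest objective value found after m samples of f."""
    d = []
    for _ in range(m):
        x = alg(d)
        d.append((x, f[x]))
    return min(y for _, y in d)

for alg in (sweep_up, adaptive):
    # expected performance under the uniform P(f) over all |Y|^|X| functions
    avg = sum(performance(alg, f)
              for f in itertools.product(Y, repeat=len(X))) / len(Y) ** len(X)
    print(alg.__name__, avg)   # both algorithms print the same average
```

Any other deterministic non-revisiting rule, and any other performance measure of the sampled values, produces the same tie under the uniform average.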

The NFL theorems have stimulated a huge amount of research, with over 10,000 citations of woma97 alone by summer 2020, according to Google Scholar. Much of the early work focused on finding other prior distributions over problems besides a uniform distribution that still result in all algorithms having the same expected performance whitley2011no; igel2005no. Other, more recent work has extended NFL to other domains, beyond (classical-physics-based) supervised learning poland2020no, and beyond learning and search entirely peel2017ground.

However, as stated in woma97, perhaps the primary significance of the NFL theorems for search is what they tell us about “the underlying mathematical ‘skeleton’ of optimization theory before the ‘flesh’ of the probability distributions of a particular context and set of optimization problems are imposed”. So in particular, while the NFL theorems have strong implications if one believes in a uniform distribution over optimization problems, in no sense should they be interpreted as advocating such a distribution. Rather such a distribution is used as a tool, to prove results concerning non-uniform distributions, and in addition to compare different search algorithms, without any direct assumption about the distribution over problems at all.

In this chapter I describe these aspects of the NFL theorems. After presenting the inner product formula that determines the performance of any search algorithm, I then expand on the inner product formula to present the NFL theorems for search and supervised learning. As an example of the true significance of the NFL theorems, I consider “anti-cross-validation” which is the meta-algorithm that chooses among a candidate set of algorithms based on which has the worst out-of-sample performance on a given data set. (In contrast, standard cross-validation chooses the algorithm with the best such performance.) As I discuss, the NFL theorems mean that anti-cross-validation outperforms cross-validation as often as vice-versa, over the set of all objective functions. So without making some assumption about the relationship between the candidate algorithms and the distribution over optimization problems, one cannot even justify using cross-validation.

Following up on this, I briefly discuss how the NFL theorems are consistent with the (very) many proofs in the literature that provide lower bounds on the performance of particular algorithms without making any assumptions about the distribution over problems that are fed to those algorithms. I also point out the implications of the NFL theorems for the entire scientific enterprise, i.e., for philosophy of science godfrey2009theory .

I then discuss how the fact that there are NFL theorems for both search and for supervised learning is symptomatic of the deep formal relationship between those two fields. Once that relationship is disentangled, it suggests many ways that we can exploit practical techniques that were first developed in supervised learning to help us do search. I summarize some experiments that confirm the power of search algorithms developed in this way.

After this I briefly discuss the various free lunch theorems that have been derived, which establish a priori benefits for using one algorithm rather than another. I end by discussing possible directions for future research.

2 The inner product at the heart of all search

Let X be a countable search space, and specify an objective function f : X → Y, where Y is a countable set. Sometimes an objective function is instead called a “search problem”, “fitness function”, “cost function”, etc. Use the term data set to mean any set of m separate (x ∈ X, y ∈ Y) pairs, written as d_m = {(d^X_m(t), d^Y_m(t)) : t = 1, …, m}. A search algorithm A is a function that maps any d_m, for any m ≥ 0, to an x ∉ d^X_m. Examples range from simulated annealing to genetic algorithms to hill-descending. By iteratively running a search algorithm to produce a new sample point x(m+1) = A(d_m) and then evaluating f(x(m+1)), we can build successively larger data sets: d_{m+1} = d_m ∪ {(x(m+1), f(x(m+1)))} for all m ≥ 0.

Suppose we are given an arbitrary performance measure Φ : d^Y_m → ℝ. Then we can evaluate how the performance of a given search algorithm on a given objective function changes as it is run on that function. Note that to “normalize” different search algorithms, we only consider their behavior in terms of generating new points at which to sample the objective function that are not yet in the data set. (Equivalently, if an algorithm chooses a new point to sample that is already in its data set, we allow it to “try again”.) This is crucial; we are only interested in off-data-set behavior.
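
As a minimal sketch of this normalization (the wrapper, the proposal rule, and the toy objective are hypothetical; a stochastic rule is used here for brevity even though the text restricts to deterministic ones), one can wrap any proposal rule so that it “tries again” until it produces a point not yet in the data set:

```python
import random

def off_data_set(alg):
    """Wrap a proposal rule so it never resamples a point already in its
    data set: if it proposes a visited x, it simply tries again."""
    def wrapped(d, rng):
        seen = {x for x, _ in d}
        while True:
            x = alg(d, rng)
            if x not in seen:
                return x
    return wrapped

def naive_random_search(d, rng):
    """A deliberately naive rule that may propose repeats; X = {0,...,99}."""
    return rng.randrange(100)

fresh_random_search = off_data_set(naive_random_search)

f = lambda x: (x - 37) ** 2        # hypothetical blackbox objective
rng, d = random.Random(0), []
for _ in range(10):                # build d_1, d_2, ..., d_10
    x = fresh_random_search(d, rng)
    d.append((x, f(x)))
print(min(d, key=lambda pair: pair[1]))   # best (x, f(x)) found so far
```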

For simplicity, from now on I restrict attention to deterministic search algorithms and deterministic objective functions, neither of which varies from one iteration of the search algorithm to the next. However, everything presented in this chapter can be extended in a straightforward way to the case of a stochastic search algorithm, stochastic objective function, time-varying objective function, etc.

In practice one often does not know f explicitly. This is the case whenever f is a “blackbox”, or an “oracle”, that one can sample at a particular x, but does not know in closed form. Moreover, often even if a practitioner does explicitly know f, they act as though they do not know it, for example when they choose what search algorithm to use on f. For example, often someone trying to solve a particular instance of the Traveling Salesman Problem (TSP) will use the same search algorithm that they would use on any other instance of TSP. In such a case, they are behaving exactly as they would if they only knew that the objective function is a TSP instance, without knowing specifically which one it is.

These kinds of uncertainty about the precise f being searched can be expressed as a distribution P(f). Say we are given such a P(f), along with a search algorithm, and a real-valued measure of the performance of that algorithm when it is run on any objective function f. Then we can solve for the probability that the algorithm results in a performance value φ. The result is an inner product of two real-valued vectors, each indexed by f. (See Appendix.) The first of those vectors gives all the details of how the search algorithm operates, but nothing concerning the world in which one deploys that search algorithm. The second vector is P(f). All the details of the world in which one deploys that search algorithm are specified in this vector, but nothing concerning the search algorithm itself.

This result tells us that at root, how well any search algorithm performs is determined by how well it is “aligned” with the distribution P(f) that governs the problems on which that algorithm is run. For example, it means that the (tens of?) thousands of person-years of research into the TSP have (presumably) resulted in algorithms aligned with the implicit P(f) describing traveling salesman problems of interest to TSP researchers.
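
A small brute-force sketch of this “alignment” (the two priors and the two sweep algorithms are invented for illustration): under a prior that concentrates minima at small x, an ascending sweep outperforms a descending sweep, and the advantage reverses under the mirror-image prior, while the uniform prior would produce an exact tie.

```python
import itertools

X, Y, m = range(3), (0, 1, 2), 2

def sweep_up(d):
    seen = {x for x, _ in d}
    return min(x for x in X if x not in seen)

def sweep_down(d):
    seen = {x for x, _ in d}
    return max(x for x in X if x not in seen)

def perf(alg, f):                      # Φ: lowest value seen after m samples
    d = []
    for _ in range(m):
        x = alg(d)
        d.append((x, f[x]))
    return min(y for _, y in d)

funcs = list(itertools.product(Y, repeat=len(X)))

def prior(favored_argmin):
    """Weight functions whose minimum sits at the favored x most heavily."""
    w = [1.0 if min(range(len(X)), key=lambda i: f[i]) == favored_argmin
         else 0.1 for f in funcs]
    z = sum(w)
    return [wi / z for wi in w]

for name, P in (("P1: minima at x=0", prior(0)),
                ("P2: minima at x=2", prior(2))):
    for alg in (sweep_up, sweep_down):
        print(name, alg.__name__,
              round(sum(p * perf(alg, f) for p, f in zip(P, funcs)), 4))
# sweep_up does better under P1, sweep_down under P2; a uniform P(f)
# would make them tie exactly.
```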

3 The No Free Lunch theorems for search

The inner product result governs how well any particular search algorithm does in practice. Therefore, either explicitly or implicitly, it serves as the basis for any practitioner who chooses a search algorithm to use in a given scenario. More precisely, the designer of any search algorithm first specifies a P(f) (usually implicitly, e.g., by restricting attention to a class of optimization problems). Then they specify a performance measure Φ (sometimes explicitly). Properly speaking, they should then solve for the search algorithm that the inner product result tells us will have the best distribution of values of that performance measure, for that P(f). In practice though, informal arguments are instead often used to motivate the choice of search algorithm.

In addition to governing both how a practitioner should design their search algorithm, and how well the actual algorithm they use performs, the inner product result can be used to make more general statements about search, results that hold for all P(f)’s. It does this by allowing us to compare the performance of a given search algorithm on different subsets of the set of all objective functions. The result is the NFL theorem for search. It tells us that if any search algorithm performs particularly well on one set of objective functions, it must perform correspondingly poorly on all other objective functions.

This implication is the primary significance of the NFL theorem for search. To illustrate it, choose the first set to be the set of objective functions on which your favorite search algorithm performs better than the purely random search algorithm, which chooses the next sample point randomly. Then the NFL for search theorem says that compared to random search, your favorite search algorithm “loses on as many” objective functions as it wins (if one weights wins / losses by the amount of the win / loss). This is true no matter what performance measure you use.

As another example, say that your performance measure prefers low values of the objective function to high values, i.e., that your goal is to find low values of the objective rather than high ones. Then we can use the NFL theorem for search to compare a hill-descending algorithm to a hill-ascending algorithm, i.e., to an algorithm that “tries” to do as poorly as possible according to the objective function. The conclusion is that the hill-descending algorithm “loses to the hill-ascending algorithm on as many” objective functions as it wins. The lesson is that without arguing for a particular P(f) that is biased towards the objective functions on which one’s favorite search algorithm performs well, one has no formal justification that that algorithm has good performance.

A secondary implication of the NFL theorem for search is that if it so happens that you assume / believe that P(f) is uniform, then the average over f’s used in the NFL for search theorem is the same as your assumed P(f). In this case, you must conclude that all search algorithms perform equally well for your assumed P(f). This conclusion is only as legitimate as the assumption for P(f) it is based on. Once other P(f)’s are allowed, the conclusion need not hold.

An important point in this regard is that simply allowing P(f) to be non-uniform, by itself, does not invalidate the NFL theorem for search. Arguments that P(f) is non-uniform in the real world do not, by themselves, establish anything whatsoever about what search algorithm to use in the real world.

In fact, allowing P(f)’s to vary provides us with a new NFL theorem. In this new theorem, rather than compare the performance of two search algorithms over all f’s, we compare them over all P(f)’s. The result is what one might expect: if any given search algorithm performs better than another over a given set of P(f)’s, then it must perform correspondingly worse on all other P(f)’s. (See appendix for proof.)

4 The supervised learning No Free Lunch theorems

The discussion above tells us that if we only knew and properly exploited P(f), we would be able to design an associated search algorithm that performs better than random. This suggests that we try to use a search process itself to learn something about the real world’s P(f), or at least about how well one or more search algorithms perform on that P(f). For example, we could do this by recording the results of running a particular search algorithm on a set of (randomly chosen) real-world search problems, and using those results as a “training set” for a supervised machine learning algorithm that models how those algorithms compare to one another on such search problems. The hope would be that by doing this, we can give ourselves formal assurances that one search algorithm should be used rather than another, for the P(f) that governs the real world.

The precise details of how well such an approach would perform depend on the precise way that it is formalized. However, two broadly applicable restrictions on its performance are given by an inner product formula for supervised learning and an associated NFL theorem for supervised learning.

Just like search, supervised learning involves an input space X, an output space Y, a function f : X → Y relating the two, and a data set d of (x, y) pairs. The goal in supervised learning though is not to iteratively augment the data to find what x minimizes the “target function” f. Rather it is to take a fixed data set d and estimate the entire function f. Such a function mapping a data set d to an estimate of f (or more generally to an estimate of a distribution over f’s) is called a learning algorithm. We then refer to the accuracy of the estimate for x’s that do not occur in the data set as off-training set error. More precisely, in supervised learning we are concerned with the expected value of a “loss function” over points outside of the training set, which plays the same role as the performance measure Φ does in search.

The supervised learning inner product formula tells us that the performance of any supervised learning algorithm is governed by an inner product between two vectors, both indexed by the set of all target functions. In particular, as long as the loss function is symmetric, it tells us that how “aligned” the supervised learning algorithm is with the real world (i.e., with the posterior distribution of target functions conditioned on a training set) determines how well that algorithm will generalize from any training set to a separate test set. (See appendix.) This supervised learning inner product formula results in a set of NFL theorems for supervised learning, applicable when some additional common conditions concerning the loss function hold. In some ways these theorems are even more striking than the NFL for search theorems.

As an example, let A be a set of the favorite supervised learning algorithms of some scientist S. So when given a training set d, scientist S estimates what f produced that training set the following way. First they run cross-validation on d to compare the algorithms in A. They then choose the algorithm with lowest such cross-validation error. As a final step, they run that algorithm on all of d. In this way S generates their final hypothesis to generalize from d.

Next suppose that scientist S′ has the same set A of favorite learning algorithms. So they decide how to generalize from a given data set the same way as S does — but with a twist. For some reason, S′ uses anti-cross-validation rather than cross-validation. So the algorithm they choose to train on all of d is the element of A with greatest cross-validation error on d, not the one with the smallest such error.

Note that since A is fixed, the procedure run by scientist S is simply a rule that maps any arbitrary training set d to an estimate of the target function for x’s outside of that training set. In other words, S itself constitutes a supervised learning algorithm. Similarly, since A is fixed, S′ is a (different) supervised learning algorithm. So by the NFL theorems for supervised learning, we have no a priori basis for preferring scientist S’s hypothesis to scientist S′’s. Although it is difficult to actually produce such f’s in which S′ beats S, by the NFL for supervised learning theorem we know that there must be “as many” of them (weighted by performance) as there are f’s for which S beats S′. In other words, anti-cross-validation beats cross-validation as often as the reverse.
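
The claim can be checked by brute force on a toy problem. The following sketch (the two candidate learners, the tiny spaces, and the fixed train/test split are all hypothetical choices of mine) enumerates every binary target function on a five-point input space, runs a choose-by-cross-validation meta-algorithm and a choose-by-anti-cross-validation meta-algorithm, and confirms that their average off-training-set errors are identical:

```python
import itertools

X = range(5)
train_x = [0, 1, 2]                  # fixed training inputs
test_x  = [3, 4]                     # off-training-set inputs

def majority(ys):                    # learner 1: predict the majority label
    return int(sum(ys) * 2 > len(ys))

def minority(ys):                    # learner 2: predict the minority label
    return 1 - majority(ys)

def loo_cv_error(learner, d):
    """Leave-one-out cross-validation error of a learner on data set d."""
    errs = 0
    for i in range(len(d)):
        _, held_y = d[i]
        rest = [y for j, (_, y) in enumerate(d) if j != i]
        errs += learner(rest) != held_y
    return errs / len(d)

def ots_error(learner, d, f):
    """Off-training-set 0-1 loss of the learner's hypothesis."""
    ys = [y for _, y in d]
    return sum(learner(ys) != f[x] for x in test_x) / len(test_x)

def meta(d, f, anti=False):
    """Choose between the two learners by (anti-)cross-validation."""
    scored = sorted((loo_cv_error(L, d), name, L)
                    for name, L in (("majority", majority),
                                    ("minority", minority)))
    _, _, chosen = scored[-1] if anti else scored[0]
    return ots_error(chosen, d, f)

n_funcs = 2 ** len(X)
tot_cv = tot_anti = 0.0
for f in itertools.product((0, 1), repeat=len(X)):
    d = [(x, f[x]) for x in train_x]
    tot_cv   += meta(d, f, anti=False)
    tot_anti += meta(d, f, anti=True)
print(tot_cv / n_funcs, tot_anti / n_funcs)   # identical: 0.5 and 0.5
```

Both meta-algorithms average exactly 0.5, as they must: under the uniform average over target functions, the off-training-set labels are completely unconstrained by the training set.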

Despite this lack of formal guarantees behind cross-validation in supervised learning, it is hard to imagine any scientist who would not prefer to use it to using anti-cross-validation. Indeed, one can view cross-validation (or more generally “out of sample” techniques) as a formalization of the scientific method: choose among theories according to which better fits experimental data that was generated after the theory was formulated, and then use that theory to make predictions for new experiments. By the inner product formula for supervised learning, this bias of the scientific community in favor of using out-of-sample techniques in general, and cross-validation in particular, must correspond somehow to a bias in favor of a particular P(f). This implicit prior P(f) is quite difficult to express mathematically. Yet almost every conventional supervised learning prior (e.g., in favor of smooth targets) or non-Bayesian bias favoring some learning algorithms over others (e.g., a bias in favor of having few degrees of freedom in a hypothesis class, in favor of generating a hypothesis with low algorithmic complexity, etc.) is often debated by members of the scientific community. In contrast, nobody debates the “prior” implicit in out-of-sample techniques. Indeed, it is exactly this prior which justifies the ubiquitous use of contests involving hidden test data sets to judge which of a set of learning algorithms are best.

5 Implications of NFL for other formal results concerning inference, and for philosophy of science

It is worth taking a moment to describe how the NFL theorems can be reconciled with the many proofs in the literature of lower bounds on generalization error which would appear to provide an a priori reason to prefer one algorithm over another. Briefly, there are two problematic aspects to those proofs. First, the NFL theorems all concern the conditional distribution P(Φ ∣ d, A), where Φ is the random variable giving the expected loss of the prediction made by algorithm A for test points outside of the training set data d. In particular, they tell us that E(Φ ∣ d, A) = E(Φ ∣ d, B) for any two algorithms A and B.

This equation concerns the posterior expected (off-training-set) loss, conditioned on d — which, according to Bayesian decision theory, is what should guide our decision of which algorithm to use, A or B. One can average over d’s (produced by sampling the implicit prior P(f)), to see that NFL also tells us that E(Φ ∣ m, A) = E(Φ ∣ m, B): the expected off-training set loss conditioned only on m, the size of the data set d, not the precise data set.

Next, let Φ′ be the expected loss over the entire space X, not just the portion of X outside of d. There are many formal results in the literature which concern Φ′, not Φ. In addition, many of these results don’t explicitly specify what the conditioning event is for the distributions they calculate, simply writing them without any conditioning event at all. However, when you dig into the proofs, you often find that the results concern conditional distributions like P(Φ′ ∣ m, A, f). Note that in a formal sense, this conditional distribution is “backwards” — it conditions on f, which is what is unknown, and averages over d’s, even though the actual d is known. (This criticism of the choice of conditioning event is, of course, a central issue in the age-old controversy between Bayesian statistics and non-Bayesian statistics.)

Often these results are independent of f, which sometimes leads researchers to interpret them as meaning that some algorithm A should be used rather than some other algorithm B, “no matter what the distribution over f’s is”. In particular, the argument is often made that so long as m is far smaller than the size of the space X, Φ will approximate Φ′ arbitrarily well with arbitrarily high probability, and therefore a result concerning P(Φ′ ∣ m, A, f) means that algorithm A is better than B on off-training set error, no matter what f is. That reasoning is simply wrong though; if one tries to use the standard rules of probability theory to convert results like P(Φ′ ∣ m, A, f) to results like P(Φ ∣ d, A) one fails (unless one makes an assumption for P(f)). This is true no matter how big X is compared to m. In fact, it is not trivial to make the transition from results concerning Φ′ to results concerning Φ wolp95b; wolpert1997bias.

There has also been work that has claimed to derive a “Bayesian Occam’s razor” using Bayes factors, where the analysis is over (ultimately arbitrarily defined) models of problems, rather than over individual problems directly mack03; jefferys1992ockham; lore90; gull88. However, NFL tells us that such approaches must, ultimately, simply be hiding their assumption concerning the prior over problems. Indeed, as a reductio ad absurdum, one can use the kind of reasoning promoted in these papers to imply that any algorithm is superior to any other, by appropriately redefining the models wolpert1995bayesian. There has also been work that claims to use algorithmic information theory livi08 to refute NFL lattimore2013no. However, ultimately this work simply makes an assumption for a particular prior, and then shows that there are a priori distinctions between algorithms for that prior (in this case, the prior of algorithmic information theory, which explicitly prefers hypotheses that can be encoded in shorter programs). Of course, this is completely consistent with NFL. More broadly, other work has shown that all one needs to assume is that as one’s algorithm is fed more and more data it performs better (on average), in order to justify a particular form of Occam’s razor wolpert1990relationship. This too is consistent with NFL.

The implications of NFL for the entire scientific enterprise are also wide-ranging. In particular, we can let x be the specification of how to configure an experimental apparatus, and y the outcome of the associated experiment. So f is the set of relevant physical laws determining the results of any such experiment, i.e., a specification of a universe. In addition, d is a set of such experiments, and the function produced by the “learning algorithm” from d is a theory that tries to explain that experimental data (the learning algorithm being the distribution that embodies the scientist who generates that theory). Under this interpretation, off-training set error quantifies how well any theory produced by a particular scientist predicts the results of experiments not yet conducted. So roughly speaking, according to the NFL theorems for supervised learning, if scientist S does a better job than scientist S′ of producing accurate theories from data for one set of universes, scientist S′ will do a better job on the remaining set of universes. This is true even if both sets of universes produced the exact same set of scientific data that the scientists use to construct their theories — in which case it is theoretically impossible for the scientists to use any of the experimental data they have ever seen in any way whatsoever to determine which set of universes they are in.

As another implication of NFL for supervised learning, take x to be the specification of an objective function, and say we have two professors, Smith and Jones, each of whom when given any such x will produce a search algorithm to run on the objective function that x specifies. Let y be the bit that equals 1 iff the performance of the search algorithm produced by Prof. Smith is better than the performance of the search algorithm produced by Prof. Jones. (Note that as a special case, we could have each of the two professors always produce the exact same search algorithm for any objective function they are presented. In this case comparing the performance of the two professors just amounts to comparing the performance of the two associated search algorithms.) So any training set d is a set of objective functions, together with, for each of those objective functions, the bit specifying which of (the search algorithms produced by) the two professors performed better on it.

Next, let the learning algorithm C be the simple rule that predicts y for all x outside of d to be 1 iff the majority of the y values in d is 1, and the learning algorithm C′ be the rule that predicts y to be −1 iff the majority of the y values in d is 1. So C is saying that if Professor Smith’s choice of search algorithm outperformed the choice by Professor Jones the majority of times in the past, then predict that they will continue to outperform Professor Jones in the future. In contrast, C′ is saying that there will be a magical flipping of relative performance, in which suddenly Professor Jones does better in the future, if and only if they did worse in the past.

The NFL for supervised learning theorem tells us that there are as many universes in which algorithm C will perform worse than algorithm C′ — so that Professor Jones magically starts performing better than Professor Smith — as there are universes the other way around. This is true even if Professor Jones produces the random search algorithm no matter what the value of x (i.e., no matter what objective function they are searching). In other words, just because Professor Smith produced search algorithms that outperformed random search in the past, without making some assumption about the probability distribution over universes, we cannot conclude that they are likely to continue to do so in the future.

The possible implications for how tenure decisions and grant awards are made will not be considered here.

6 Exploiting the relation between supervised learning and search to improve search

Given the preceding discussion, it seems that supervised learning is closely analogous to search, if one replaces the “search algorithm” with a “learning algorithm” and the “objective function” with a “target function”. So it should not be too surprising that the inner product formula and NFL theorem for search have analogs in supervised learning. This close formal relationship between search and supervised learning means that techniques developed in one field can often be “translated” to apply directly to the other field.

A particularly pronounced example of this occurs in the simplest (greedy) form of the Monte Carlo Optimization (MCO) approach to search erno98. In that form of MCO, one uses a data set d to form a distribution p(x) rather than (as in most conventional search algorithms) directly form a new x. That p(x) is chosen so that one expects the expected value of the objective function under it, ∑_x p(x) f(x), to have a low value, i.e., so that one expects a sample of p(x) to produce an x with a good value of the objective function. One then forms a sample x of that p(x), and evaluates f(x). This provides a new pair (x, f(x)) that gets added to the data set d, and the process repeats.

MCO algorithms can be viewed as variants of random search algorithms like genetic algorithms and simulated annealing, in which the random distribution governing which point to sample next is explicitly expressed and controlled, rather than being implicit and only manipulated indirectly. Several other algorithms can be cast as forms of MCO (e.g., the cross-entropy method rukr04, the MIMIC algorithm deis97). MCO algorithms differ from one another in how they form the distribution over which point to sample next, with some not trying directly to optimize the expected objective value but instead using some other optimization goal.

It turns out that the problem of how best to choose a next p(x) in MCO is formally identical to the supervised learning problem of how best to choose a hypothesis based on a training set rawo07; rawo08. If one simply re-interprets all MCO variables as appropriate supervised learning variables, one transforms any MCO problem into a supervised learning problem (and vice-versa). The rule for this re-interpretation is effectively a dictionary that allows us to transform any technique that has been developed for supervised learning into a technique for (MCO-based) search. Regularization, bagging, boosting, cross-validation, stacking, etc., can all be transformed this way into techniques to improve search.

As an illustration, we can use the dictionary to translate the use of cross-validation to choose a hyperparameter from the domain of supervised learning into the domain of search. Training sets become data sets, and the hyperparameters of a supervised learning algorithm become the parameters of an MCO-based search algorithm. For example, a regularization constant in supervised learning gets transformed into the temperature parameter of (a form of MCO that is a small variant of) simulated annealing. In this way, using the dictionary to translate cross-validation into the search domain shows us how to use it on one’s data set in search to dynamically update the temperature in the temperature-based MCO search algorithm. That updating proceeds by running the MCO algorithm repeatedly on subsets of one’s already existing data set d. (No new samples of the objective function beyond those already in d are involved in this use of cross-validation for search, just like no new samples are involved in the use of cross-validation in supervised learning.)
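
Here is one plausible sketch of that translation, assuming a Boltzmann-style greedy MCO step in one dimension. The particular held-out score used below (a Boltzmann-weighted held-out log-likelihood at a fixed reference temperature) is an assumption of mine, chosen for simplicity; it is not the exact construction tested in rawo07; rawo08.

```python
import math, random

rng = random.Random(0)

def f(x):                             # blackbox objective (illustrative only)
    return (x - 0.7) ** 2 + 0.1 * math.sin(20 * x)

def fit_gaussian(d, T):
    """One greedy MCO step at temperature T: weight each sampled point
    by exp(-f(x)/T) and fit a Gaussian to the weighted sample."""
    w = [math.exp(-y / T) for _, y in d]
    z = sum(w)
    mu = sum(wi * x for wi, (x, _) in zip(w, d)) / z
    var = sum(wi * (x - mu) ** 2 for wi, (x, _) in zip(w, d)) / z
    return mu, max(var, 1e-6)

T_REF = 1.0                           # fixed reference weighting (assumption)

def heldout_score(d_train, d_test, T):
    """Score a candidate temperature by the Boltzmann-weighted
    log-likelihood of held-out points under the distribution fit to the
    training fold. No new samples of f are drawn anywhere below."""
    mu, var = fit_gaussian(d_train, T)
    score = 0.0
    for x, y in d_test:
        logp = -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
        score += math.exp(-y / T_REF) * logp
    return score

d = [(x, f(x)) for x in [rng.uniform(0, 1) for _ in range(20)]]

best_T, best = None, -float("inf")
for T in (0.01, 0.1, 1.0, 10.0):
    # 2-fold cross-validation, run entirely on the existing data set d
    half = len(d) // 2
    s = (heldout_score(d[:half], d[half:], T)
         + heldout_score(d[half:], d[:half], T))
    if s > best:
        best_T, best = T, s

mu, var = fit_gaussian(d, best_T)       # refit at the chosen temperature ...
x_next = rng.gauss(mu, math.sqrt(var))  # ... and sample the next point
print(best_T, x_next)
```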

Experimental tests of MCO search algorithms designed by using the dictionary have established that they work quite well in practice rawo07; rawo08. Applying the dictionary to create analogs of bagging and stacking in the context of search, in addition to creating analogs of cross-validation, has been found to transform an initially powerful search algorithm into a new one with improved search performance.

Of course, these experimental results do not mean there is any formal justification for these kinds of MCO search algorithms; NFL for search cannot be circumvented.

7 Free lunches and future research

There are many avenues of research related to the NFL theorems which have not yet been properly explored. Some of these involve free lunch theorems which concern fields closely related to search, e.g., co-evolution woma05. Other free lunches arise in supervised learning, e.g., when the loss function does not obey the conditions that were alluded to above wolpert1997bias.

However, it is important to realize that none of these (no) free lunch theorems concern the covariational behavior of search and / or learning algorithms. For example, despite the NFL for search theorems, there are scenarios where, for some f’s, P(Φ_A < Φ_B ∣ f, m) > P(Φ_B < Φ_A ∣ f, m) (using the notation of the appendix, with Φ_A and Φ_B the performance of algorithms A and B on the same f), but there are no f’s for which the reverse is true, i.e., for which the difference P(Φ_B < Φ_A ∣ f, m) − P(Φ_A < Φ_B ∣ f, m) is positive. It is interesting to speculate that such “head-to-head” distinctions might ultimately provide a rationale for using many almost universally applied heuristics, in particular for using cross-validation rather than anti-cross-validation in both supervised learning and search.

There are other results where, in contrast to the NFL for search theorem, one does not consider fixed search algorithms and average over f’s, but rather fixes f and averages over algorithms. These results allow us to compare how intrinsically hard it is to search over a particular f. They do this by allowing us to compare two f’s based on the sizes of the sets of algorithms that do better on those f’s than the random algorithm does mawo96. While there are presumably analogous results for supervised learning, which would allow us to measure how intrinsically hard it is to learn a given f, nobody currently knows. All of these issues are the subject of future research.

Appendix

A.1    NFL and inner product formulas for search

To begin, expand the performance probability distribution:

P(φ ∣ m, A) = ∑_{f, d^Y_m} δ(φ, Φ(d^Y_m)) P(d^Y_m ∣ f, A, m) P(f)    (1)

where the delta function equals 1 if its two arguments are equal, zero otherwise. The choice of search algorithm A affects performance only through the term P(d^Y_m ∣ f, A, m). In turn, this probability of d^Y_m under f and A is given by

P(d^Y_m ∣ f, A, m) = ∏_{t=1}^{m} δ(d^Y_m(t), f(A(d_{t−1})))    (2)

where d_0 is the empty data set. Plugging in gives

P(φ ∣ A, m) = ∑_f P(f) D(f; φ, A, m),  where  D(f; φ, A, m) := ∑_{d^Y_m} P(d^Y_m ∣ f, A, m) δ(φ, Φ(d^Y_m)).

So for any fixed φ and m, P(φ ∣ A, m) is an inner product of two real-valued vectors each indexed by f: D(f; φ, A, m) and P(f). Note that all the details of how the search algorithm operates are embodied in the first of those vectors. In contrast, the second one is completely independent of the search algorithm.

This notation also allows us to state the NFL for search theorem formally. Let B be any subset of the set of all objective functions, Y^X. Then the equations above allow us to express the expected performance for functions inside B in terms of expected performance outside of B:

∑_{f ∈ B} E(Φ ∣ f, m, A) = constant − ∑_{f ∈ Y^X ∖ B} E(Φ ∣ f, m, A)

where the constant on the right-hand side depends on the performance measure Φ(·), but is independent of both A and B woma97. Expressed differently, the equations above say that ∑_f E(Φ ∣ f, m, A) is independent of A. This is the core of the NFL for search, as elaborated in the next section.
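
One can verify the algorithm-independence of ∑_f P(d^Y_m ∣ f, A, m) directly on a toy space. The sketch below (the algorithm names and space sizes are arbitrary choices for illustration) tabulates the histogram of observed y-sequences over all f ∈ Y^X for two different deterministic non-revisiting algorithms; the histograms coincide exactly:

```python
import itertools
from collections import Counter

X, Y, m = range(3), (0, 1), 2

def sweep_up(d):
    seen = {x for x, _ in d}
    return min(x for x in X if x not in seen)

def zigzag(d):
    seen = {x for x, _ in d}
    unseen = [x for x in X if x not in seen]
    return unseen[-1] if len(d) % 2 else unseen[0]   # a different rule

def y_trace(alg, f):
    """The sequence d_m^Y produced by running alg on f for m steps."""
    d = []
    for _ in range(m):
        x = alg(d)
        d.append((x, f[x]))
    return tuple(y for _, y in d)

for alg in (sweep_up, zigzag):
    hist = Counter(y_trace(alg, f)
                   for f in itertools.product(Y, repeat=len(X)))
    print(alg.__name__, sorted(hist.items()))
# identical histograms: summed over f, P(d_m^Y | f, A, m) does not depend on A
```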

A.2    NFL for search when we average over P(f)’s

To derive the NFL theorem that applies when we vary over P(f)’s, first recall our simplifying assumption that both X and Y are finite (as they will be when doing search on any digital computer). Due to this, any P(f) is a finite-dimensional real-valued vector living on a simplex Δ. Let s refer to a generic element of Δ. So ∫_Δ ds s(f) is the average probability of any one particular f, if one uniformly averages over all distributions s on f’s. By symmetry, this integral must be a constant, independent of f. In addition, as mentioned above, the equations in A.1 tell us that ∑_f E(Φ ∣ f, m, A) is independent of A. Therefore for any two search algorithms A and B,

∫_Δ ds ∑_f s(f) E(Φ ∣ f, m, A) = ∫_Δ ds ∑_f s(f) E(Φ ∣ f, m, B),    (3)

i.e.,

∫_Δ ds E(Φ ∣ s, m, A) = ∫_Δ ds E(Φ ∣ s, m, B).    (4)

We can re-express this result as the statement that ∫_Δ ds E(Φ ∣ s, m, A) is independent of A.

Next, let S be any subset of Δ. Then our result that ∫_Δ ds E(Φ ∣ s, m, A) is independent of A implies

∫_S ds E(Φ ∣ s, m, A) = constant − ∫_{Δ ∖ S} ds E(Φ ∣ s, m, A)    (5)

where the constant depends on Φ(·), but is independent of both A and S. So if any search algorithm performs particularly well for one set of P(f)’s, S, it must perform correspondingly poorly on all other P(f)’s. This is the NFL theorem for search when P(f)’s vary.

A.3    NFL and inner product formulas for supervised learning

To state the supervised learning inner product and NFL theorems requires introducing some more notation. Conventionally, these theorems are presented in the version where both the learning algorithm and the target function are stochastic. (In contrast, the restrictions for search — presented above — conventionally involve a deterministic search algorithm and deterministic objective function.) This makes the statement of the restrictions for supervised learning intrinsically more complicated.

Let X be a finite input space, Y a finite output space, and say we have a target distribution P(y_f ∣ x), along with a training set d of m pairs (x ∈ X, y ∈ Y) that is stochastically generated according to a distribution P(d ∣ f) (conventionally called a likelihood, or “data-generation process”). Assume that based on d we have a hypothesis distribution P(y_h ∣ x). (The creation of the hypothesis from d — specified in toto by the distribution P(h ∣ d) — is conventionally called the learning algorithm.) In addition, let L(y_h, y_f) be a loss function taking Y × Y → ℝ. Finally, let C be an off-training set cost function (the choice to use an off-training set cost function for the analysis of supervised learning is the analog of the choice, in the analysis of search, to use a search algorithm that only searches over points not yet sampled; in both cases, the goal is to “mod out” aspects of the problem that are typically not of interest and might result in misleading results: the ability of the learning algorithm to reproduce a training set in the case of supervised learning, and the ability to revisit points already sampled with a good objective value in the case of search),

C(f, h, d) = ∑_{x ∉ d_X} π(x) ∑_{y_f, y_h} P(y_f ∣ x) P(y_h ∣ x) L(y_h, y_f)    (6)

where π(x) is some probability distribution over X assigning non-zero measure to X ∖ d_X.

All aspects of any supervised learning scenario — including the prior, the learning algorithm, the data likelihood function, etc. — are given by the joint distribution P(f, h, d, c) (where c is the value of the cost function) and its marginals. In particular, in wolp95b it is proven that the probability of a particular cost value c is given by

P(c ∣ d) = ∑_{h, f} P(h ∣ d) P(f ∣ d) M_{c,d}(f, h)    (7)

for a matrix M_{c,d} that is symmetric in its arguments f and h so long as the loss function L is.

P(f ∣ d) is the posterior probability that the real world has produced a target f for you to try to learn, given that you only know d. It has nothing to do with your learning algorithm. In contrast, P(h ∣ d) is the specification of your learning algorithm. It has nothing to do with the distribution of targets in the real world. So eq. (7) tells us that as long as the loss function is symmetric, how “aligned” you (the learning algorithm) are with the real world (the posterior) determines how well you will generalize.

This supervised learning inner product formula results in a set of NFL theorems for supervised learning, once one imposes some additional conditions on the loss function. See wolp95b for details.

Bibliography

  • [1] D. H. Wolpert. The lack of a priori distinctions between learning algorithms and the existence of a priori distinctions between learning algorithms. Neural Computation, 8:1341–1390, 1391–1421, 1996.
  • [2] C. Schaffer. A conservation law for generalization performance. In International Conference on Machine Learning, pages 259–265. Morgan Kaufmann, 1994.
  • [3] D. H. Wolpert and W. G. Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.
  • [4] Darrell Whitley and Jonathan Rowe. A “no free lunch” tutorial: Sharpened and focused no free lunch. In Theory Of Randomized Search Heuristics: Foundations and Recent Developments, pages 255–287. World Scientific, 2011.
  • [5] Christian Igel and Marc Toussaint. A no-free-lunch theorem for non-uniform distributions of target functions. Journal of Mathematical Modelling and Algorithms, 3(4):313–322, 2005.
  • [6] Kyle Poland, Kerstin Beer, and Tobias J Osborne. No free lunch for quantum machine learning. arXiv preprint arXiv:2003.14103, 2020.
  • [7] Leto Peel, Daniel B Larremore, and Aaron Clauset. The ground truth about metadata and community detection in networks. Science advances, 3(5):e1602548, 2017.
  • [8] Peter Godfrey-Smith. Theory and reality: An introduction to the philosophy of science. University of Chicago Press, 2009.
  • [9] D. H. Wolpert. The relationship between PAC, the statistical physics framework, the Bayesian framework, and the VC framework. In The Mathematics of Generalization, pages 117–215. Addison–Wesley, 1995.
  • [10] David H. Wolpert. On bias plus variance. Neural Computation, 9(6):1211–1243, 1997.
  • [11] D.J.C. Mackay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
  • [12] William H. Jefferys and James O. Berger. Ockham’s razor and Bayesian analysis. American Scientist, 80(1):64–72, 1992.
  • [13] T. J. Loredo. From Laplace to SN 1987A: Bayesian inference in astrophysics. In Maximum Entropy and Bayesian Methods, pages 81–142. Kluwer Academic Publishers, 1990.
  • [14] S. F. Gull. Bayesian inductive inference and maximum entropy. In Maximum Entropy and Bayesian Methods, pages 53–74. Kluwer Academic Publishers, 1988.
  • [15] David H. Wolpert. On the Bayesian “Occam factors” argument for Occam’s razor. In Computational Learning Theory and Natural Learning Systems III, T. Petsche et al. (eds.), 1995.
  • [16] M. Li and P. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, 2008.
  • [17] Tor Lattimore and Marcus Hutter. No free lunch versus Occam’s razor in supervised learning. In Algorithmic Probability and Friends: Bayesian Prediction and Artificial Intelligence, pages 223–235. Springer, 2013.
  • [18] David H. Wolpert. The relationship between Occam’s razor and convergent guessing. Complex Systems, 4:319–368, 1990.
  • [19] Y. M. Ermoliev and V. I. Norkin. Monte Carlo optimization and path dependent nonstationary laws of large numbers. Technical Report IR-98-009, International Institute for Applied Systems Analysis, March 1998.
  • [20] R. Rubinstein and D. Kroese. The Cross-Entropy Method. Springer, 2004.
  • [21] J. S. De Bonet, C. L. Isbell Jr., and P. Viola. MIMIC: Finding optima by estimating probability densities. In Advances in Neural Information Processing Systems 9. MIT Press, 1997.
  • [22] D. Rajnarayan and David H. Wolpert. Exploiting parametric learning to improve black-box optimization. In J. Jost, editor, Proceedings of ECCS 2007, 2007.
  • [23] D. Rajnarayan and David H. Wolpert. Bias-variance techniques for Monte Carlo optimization: Cross-validation for the CE method. arXiv:0810.0877v1, 2008.
  • [24] D. H. Wolpert and W. Macready. Coevolutionary free lunches. IEEE Transactions on Evolutionary Computation, 9:721–735, 2005.
  • [25] W. G. Macready and D. H. Wolpert. What makes an optimization problem hard? Complexity, 1(5):40–46, 1996.