Discrete MDL Predicts in Total Variation

09/25/2009 · Marcus Hutter

The Minimum Description Length (MDL) principle selects the model that has the shortest code for data plus model. We show that for a countable class of models, MDL predictions are close to the true distribution in a strong sense. The result is completely general. No independence, ergodicity, stationarity, identifiability, or other assumption on the model class need to be made. More formally, we show that for any countable class of models, the distributions selected by MDL (or MAP) asymptotically predict (merge with) the true measure in the class in total variation distance. Implications for non-i.i.d. domains like time-series forecasting, discriminative learning, and reinforcement learning are discussed.


1 Introduction

The minimum description length (MDL) principle recommends using, among competing models, the one that allows the greatest compression of data plus model [Grü07]. The better the compression, the more regularity has been detected, hence the better the predictions will be. The MDL principle can be regarded as a formalization of Ockham's razor, which advises selecting the simplest model consistent with the data.

Multistep lookahead sequential prediction. We consider sequential prediction problems: having observed sequence $x \equiv x_{1:\ell} \equiv x_1 \dots x_\ell$, predict $z \equiv x_{\ell+1:\ell+h}$, then observe $x_{\ell+1}$ for $\ell = 1, 2, 3, \dots$. Classical prediction is concerned with $h = 1$, multi-step lookahead with $1 < h < \infty$, and total prediction with $h = \infty$. In this paper we consider the last, hardest case. An infamous problem in this category is the black raven paradox [Mah04, Hut07]: having observed $n$ black ravens, what is the likelihood that all ravens are black? A problem closer to computer science is (infinite-horizon) reinforcement learning, where predicting the infinite future is necessary for evaluating a policy. See Section 6 for these and other applications.

Discrete MDL and Bayes. Let $\mathcal{M} = \{Q_1, Q_2, \dots\}$ be a countable class of models = theories = hypotheses = probabilities over sequences $\mathcal{X}^\infty$, sorted w.r.t. their complexity = codelength $K(Q_i)$ (say), containing the unknown true sampling distribution $P$. Our main result will be for arbitrary measurable spaces $\mathcal{X}$, but to keep things simple in the introduction, let us illustrate MDL for finite $\mathcal{X}$.

In this case, we define $Q(x)$ as the $Q$-probability of data sequence $x \in \mathcal{X}^\ell$. It is possible to code $x$ in $\log_2 P(x)^{-1}$ bits, e.g. by using Huffman coding. Since $x$ is sampled from $P$, this code is optimal (shortest among all prefix codes). Since we do not know $P$, we could select the $Q \in \mathcal{M}$ that leads to the shortest code on the observed data $x$. In order to be able to reconstruct $x$ from the code, we need to know which $Q$ has been chosen, so we also need to code $Q$, which takes $K(Q)$ bits. Hence $x$ can be coded in $\min_{Q \in \mathcal{M}}\{\log_2 Q(x)^{-1} + K(Q)\}$ bits. MDL selects as model the minimizer

$$\mathrm{MDL}^x \;:=\; \arg\min_{Q \in \mathcal{M}} \big\{\log_2 Q(x)^{-1} + K(Q)\big\}.$$

Given $x$, the true predictive probability of $z$ is $P(z|x) = P(xz)/P(x)$. Since $P$ is unknown, we use $\mathrm{MDL}^x(z|x)$ as a substitute. Our main concern is how close the latter is to the former. We can measure the distance between two predictive distributions by

$$d_h(P, Q \,|\, x) \;:=\; \sum_{z \in \mathcal{X}^h} \big|P(z|x) - Q(z|x)\big| \qquad (1)$$

for $h \leq \infty$. It is easy to see that $d_h$ is monotone increasing in $h$ and that $d_\infty$ is twice the total variation distance (tvd) defined in (3).
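To make the two-part code concrete, here is a minimal runnable sketch; the Bernoulli class, the complexity assignment $K$, and all parameters are invented for illustration and are not from the paper:

```python
# Illustrative sketch (not from the paper): two-part-code MDL over a finite
# class of Bernoulli models. The class and complexities K are hypothetical.
import math
import random

class Bernoulli:
    def __init__(self, theta, K):
        self.theta = theta      # probability of observing a 1
        self.K = K              # codelength K(Q) in bits for the model itself

    def neg_log_prob(self, x):
        """Codelength log2 1/Q(x) of the data under this model, in bits."""
        n1 = sum(x)
        n0 = len(x) - n1
        return -(n1 * math.log2(self.theta) + n0 * math.log2(1 - self.theta))

def mdl_select(models, x):
    """Return the model minimizing the two-part codelength log2 1/Q(x) + K(Q)."""
    return min(models, key=lambda q: q.neg_log_prob(x) + q.K)

# True distribution P = Bernoulli(0.7); it is contained in the class.
models = [Bernoulli(0.3, K=1), Bernoulli(0.5, K=2), Bernoulli(0.7, K=3)]
P = models[2]

random.seed(0)
x = [1 if random.random() < P.theta else 0 for _ in range(200)]
Q = mdl_select(models, x)

# One-step predictive distance d_1(P, Q | x) = sum_z |P(z|x) - Q(z|x)|.
d1 = abs(P.theta - Q.theta) + abs((1 - P.theta) - (1 - Q.theta))
print(f"selected theta = {Q.theta}, d_1 = {d1}")
```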

MDL is closely related to Bayesian prediction, so a comparison to existing results for Bayes is interesting. Bayesians use $\mathrm{Bayes}(z|x) := \mathrm{Bayes}(xz)/\mathrm{Bayes}(x)$ for prediction, where $\mathrm{Bayes}(x) := \sum_{Q \in \mathcal{M}} w_Q Q(x)$ is the Bayesian mixture with prior weights $w_Q > 0$ and $\sum_Q w_Q \leq 1$. A natural choice is $w_Q = 2^{-K(Q)}$.
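Continuing the sketch above, the Bayesian mixture predictive can be computed from the same class (again purely illustrative):

```python
# Continuation of the sketch above: the Bayesian mixture predictive
# Bayes(z|x) = sum_Q w_Q Q(xz) / sum_Q w_Q Q(x) with weights w_Q = 2^(-K(Q)).
def bayes_predict_one(models, x):
    """Posterior predictive probability that the next symbol is 1."""
    # Unnormalized posterior weight of each model: w_Q * Q(x).
    post = [2.0 ** (-q.K - q.neg_log_prob(x)) for q in models]
    total = sum(post)
    return sum(w * q.theta for w, q in zip(post, models)) / total

print("Bayes one-step prediction:", bayes_predict_one(models, x))
```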

Results. The following results can be shown:

$$\sum_{\ell=1}^{\infty} \mathbf{E}\big[d_1(P, \mathrm{Bayes} \,|\, x_{1:\ell})^2\big] \;\leq\; 2 \ln w_P^{-1}, \qquad\qquad d_\infty(P, \mathrm{Bayes} \,|\, x_{1:\ell}) \;\to\; 0 \quad \text{w.p.1},$$

$$\sum_{\ell=1}^{\infty} \mathbf{E}\big[d_1(P, \mathrm{MDL}^{x_{1:\ell}} \,|\, x_{1:\ell})^2\big] \;\leq\; O(w_P^{-1}), \qquad d_\infty(P, \mathrm{MDL}^{x_{1:\ell}} \,|\, x_{1:\ell}) \;\to\; 0 \quad \text{w.p.1}, \qquad (2)$$

where the expectation $\mathbf{E}$ is w.r.t. $P$. The left statements, for $h = 1$, imply $d_1 \to 0$ almost surely, including some form of convergence rate. For Bayes this has been proven in [Hut03]; for MDL, the proof in [PH05] can be adapted. As far as asymptotics are concerned, the right statements ($h = \infty$) are much stronger and require more sophisticated proof techniques. For Bayes, the result follows from [BD62]. The proof for MDL is the primary novel contribution of this paper, more precisely, for arbitrary measurable $\mathcal{X}$, in total variation distance. Another general consistency result is presented in [Grü07, Thm. 5.1]. Consistency is shown (only) in probability, and the predictive implications of the result are unclear. A stronger almost-sure result is alluded to, but the given reference to [BC91] contains only results for i.i.d. sequences, which do not generalize to arbitrary classes. So existing results for discrete MDL are far less satisfactory than the elegant Bayesian prediction in tvd.

Motivation. The results above hold for completely arbitrary countable model classes $\mathcal{M}$. No independence, ergodicity, stationarity, identifiability, or other assumption needs to be made.

The bulk of previous results for MDL are for continuous model classes $\mathcal{M}$ [Grü07]. Much has been shown for classes of independent identically distributed (i.i.d.) random variables [BC91, Grü07]. Many results naturally generalize to stationary-ergodic sequences like ($k$th-order) Markov processes. For instance, asymptotic consistency has been shown in [Bar85]. But there are many applications violating these assumptions, some of which are presented below and in Section 6.

One can often hear the exaggerated claim that (e.g. unlike Bayes) MDL can be used even if the true distribution $P$ is not in $\mathcal{M}$. Indeed it can be used, but the question is whether this is any good. There are some results supporting this claim, e.g. if $P$ is in the closure of $\mathcal{M}$, but similar results exist for Bayes. Essentially $P$ needs to be at least close to some $Q \in \mathcal{M}$ for MDL to work, and there are interesting environments that are not even close to being stationary-ergodic or i.i.d.

Non-i.i.d. data is pervasive [AHRU09]; it includes all time-series prediction problems like weather forecasting and stock market prediction [CBL06]. Indeed, these are also perfect examples of non-ergodic processes: too many greenhouse gases, a massive volcanic eruption, an asteroid impact, or another world war could change the climate or the economy irreversibly. Life is also not ergodic: one inattentive second in a car can have irreversible consequences. Stationarity, too, is easily violated in multi-agent scenarios: an environment which itself contains a learning agent is non-stationary (during the relevant learning phase). Extensive-form games and multi-agent reinforcement learning are classical examples [WR04].

Often it is assumed that the true distribution can be uniquely identified asymptotically. For non-ergodic environments, asymptotic distinguishability can depend on the realized observations, which prevents a prior reduction or partitioning of $\mathcal{M}$. Even where principally possible, it can be practically burdensome to do so, e.g. in the presence of approximate symmetries. Indeed, this problem is the primary reason for considering predictive MDL. MDL might never identify the true distribution, but our main result shows that the sequentially selected models become predictively indistinguishable from it.

The countability of $\mathcal{M}$ is the severest restriction of our result. Nevertheless, the countable case is useful. A semi-parametric problem class $\{\mathcal{M}_d : d \in \mathbb{N}\}$ with $\mathcal{M}_d = \{Q_\theta : \theta \in \mathbb{R}^d\}$ (say) can be reduced to a countable class $\{\bar{Q}_1, \bar{Q}_2, \dots\}$ for which our result holds, where $\bar{Q}_d$ is a Bayes or NML or other estimate of $\mathcal{M}_d$ [Grü07]. Alternatively, $\mathcal{M}$ could be reduced to a countable class by considering only computable parameters $\theta$. Essentially all interesting model classes contain such a countable, topologically dense subset. Under certain circumstances MDL still works for the non-computable parameters [Grü07]. Alternatively, one may simply reject non-computable parameters on philosophical grounds [Hut05]. Finally, the techniques for the countable case might aid in proving general results for continuous $\mathcal{M}$, possibly along the lines of [Rya09].

Contents. The paper is organized as follows: In Section 2 we provide some insights into how MDL and Bayes work in restricted settings, what breaks down for general countable $\mathcal{M}$, and how to circumvent the problems. The formal development starts with Section 3, which introduces notation and our main result. The proof for finite $\mathcal{M}$ is presented in Section 4, and for denumerable $\mathcal{M}$ in Section 5. In Section 6 we show how the result can be applied to sequence prediction, classification and regression, discriminative learning, and reinforcement learning. Section 7 discusses some MDL variations.

2 Facts, Insights, Problems

Before starting with the formal development, we describe how MDL and Bayes work in some restricted settings, what breaks down for general countable $\mathcal{M}$, and how to circumvent the problems. For deterministic environments, MDL reduces to learning by elimination, and the four results in (2) can easily be understood. Consistency of MDL for i.i.d. (and stationary-ergodic) sources is also intelligible. For general $\mathcal{M}$, MDL may no longer converge to the true model. We have to give up the idea of model identification and concentrate on predictive performance.

Deterministic MDL = elimination learning. For a countable class $\mathcal{M} = \{Q_1, Q_2, \dots\}$ of deterministic theories = models = hypotheses = sequences, sorted w.r.t. their complexity = codelength $K(Q_i)$ (say), it is easy to see why MDL works: each $Q$ is a model for one infinite sequence $\omega^Q$, i.e. $Q(\omega^Q_{1:\ell}) = 1$ for all $\ell$. Given the true observations $x_{1:\ell}$ so far, MDL selects the simplest $Q$ consistent with $x_{1:\ell}$ and, for $h = 1$, predicts $\omega^Q_{\ell+1}$. This (and potentially other) $Q$ becomes (forever) inconsistent if and only if the prediction was wrong. Assume the true model is $P = Q_{i_0}$. Since elimination occurs in order of increasing index $i$, and $Q_{i_0} = P$ never makes an error, MDL makes at most $i_0 - 1$ prediction errors. Indeed, what we have described is just classical Gold-style learning by elimination. For $1 < h < \infty$, the prediction $\omega^Q_{\ell+1:\ell+h}$ may be wrong only on its last symbol, which causes up to $h$ wrong predictions before the error is revealed. (Note that at time $\ell + 1$ only $x_{\ell+1}$ is revealed.) Hence the total number of errors is bounded by $h \cdot (i_0 - 1)$. The bound is for instance attained on a class of sequences of the form $1^{ih} 0^\infty$, where the true sequence switches from 1 to 0 after $i_0 h$ ones have been observed. For $h = \infty$, a wrong prediction gets eventually revealed. Hence each wrong $Q_i$ ($i < i_0$) gets eventually eliminated, i.e. $P$ gets eventually selected. So for $h = \infty$ we can (still/only) show that the number of errors is finite. No bound on the number of errors in terms of $i_0$ only is possible. For instance, for $\mathcal{M} = \{1^\infty, 1^n 0^\infty\}$, it takes $n$ time steps to reveal that the prediction $1^\infty$ is wrong, and $n$ can be chosen arbitrarily large.
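The elimination dynamics are easy to simulate. A minimal sketch for $h = 1$, with an invented class of sequences $1^i 0^\infty$ (indexing and class are hypothetical):

```python
# Hypothetical sketch of deterministic MDL as Gold-style elimination learning
# (h = 1). Each model is an infinite binary sequence, represented here by a
# generating function; the class and true model are illustrative choices.
def make_class(n_models):
    # Model i is the sequence 1^i 0^infinity (simpler models come first).
    return [lambda t, i=i: 1 if t < i else 0 for i in range(n_models)]

def mdl_predict_online(models, true_idx, horizon=30):
    true_seq = models[true_idx]
    consistent = list(range(len(models)))   # indices still consistent with data
    errors = 0
    for t in range(horizon):
        q = consistent[0]                    # simplest consistent model
        prediction = models[q](t)
        truth = true_seq(t)
        errors += prediction != truth
        # Eliminate every model contradicted by the new observation.
        consistent = [i for i in consistent if models[i](t) == truth]
    return errors

models = make_class(10)
# True model has index 7 (0-based); MDL makes at most 7 one-step errors.
print("errors:", mdl_predict_online(models, true_idx=7))
```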

Deterministic Bayes = majority learning. Bayesian learning is at the same time closely related to and very different from MDL. Bayes predicts with a $w_Q$-weighted average of the models (rather than with a single one). For a deterministic class, Bayes is similar to prediction by majority: consider the models consistent with the true observation $x_{1:\ell}$, having total weight $W_\ell$, and take the weighted majority prediction (this is the Bayes-optimal decision under 0-1 loss; Bayesian prediction proper would randomize). For $h = 1$, making a wrong prediction means that $Q$'s contributing at least half of the total weight $W_\ell$ get eliminated. Since $P$ never gets eliminated, we have $W_\ell \geq w_P$, hence the number of errors is bounded by $\log_2 w_P^{-1}$. For probabilistic Bayesian prediction proper, it is also easy to see that the expected number of errors is bounded in terms of $\ln w_P^{-1}$. One can show that these bounds are essentially sharp (e.g. for $\omega^{Q_i}$ defined as the digits after the comma of the binary expansion of $(i-1) 2^{-n}$ for $i = 1, \dots, 2^n$ and uniform weights $w_{Q_i} = 2^{-n}$). With the same reasoning as in the MDL case, for $1 < h < \infty$ we have to multiply the bound by $h$; and for $h = \infty$ we get correct prediction eventually, but no explicit bound anymore.
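A matching sketch of weighted-majority prediction on a worst-case deterministic class (a complete binary tree of depth $n$, so that every vote can err; all choices here are illustrative):

```python
# Hypothetical sketch of deterministic Bayes as weighted-majority prediction
# (h = 1, 0-1 loss). Worst case: 2^n sequences forming a complete binary tree,
# so each majority vote can be wrong; error count reaches log2(1/w_P) = n.
n = 4
seqs = [[(i >> (n - 1 - t)) & 1 for t in range(n)] for i in range(2 ** n)]
weights = {i: 2.0 ** (-n) for i in range(2 ** n)}   # uniform prior, w_P = 2^-n

true_idx = 0                   # all-zeros sequence; ties break toward 1 below,
errors = 0                     # so every single vote errs
alive = dict(weights)          # models still consistent with the observations
for t in range(n):
    w1 = sum(w for i, w in alive.items() if seqs[i][t] == 1)
    w0 = sum(w for i, w in alive.items() if seqs[i][t] == 0)
    prediction = 1 if w1 >= w0 else 0
    truth = seqs[true_idx][t]
    errors += prediction != truth
    alive = {i: w for i, w in alive.items() if seqs[i][t] == truth}

print(f"errors = {errors}, bound log2(1/w_P) = {n}")
```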

Comparison of deterministic vs. probabilistic and MDL vs. Bayes. The flavor of these results carries over to some extent to the probabilistic case. On a very abstract level even the line of reasoning carries over, although this is deeply buried in the sophisticated mathematical analysis of the latter. So the special deterministic case illustrates the more complex probabilistic case. For instance, for $h = 1$ and $w_Q = 2^{-K(Q)}$, we see that Bayes makes only on the order of $\log_2 w_P^{-1} = K(P)$ errors, while MDL can make up to the order of $i_0 \approx 2^{K(P)}$ errors, exponentially more. This carries over to the probabilistic case. Also the multiplier $h$ for $1 < h < \infty$ and the lack of an explicit bound for $h = \infty$ carry over; cf. the bounds in (2). The reader is invited to reveal other relations not explicitly mentioned here. The differences are as follows: in the probabilistic case, the true $P$ can in general not be identified anymore. Further, while the Bayesian bound trivially follows from the half-century-old classical merging-of-opinions result [BD62], the corresponding MDL bound we prove in this paper is more difficult to obtain.

Consistency of MDL for stationary-ergodic sources. For an i.i.d. class $\mathcal{M}$, the law of large numbers applied to the random variables $\log[P(x_t)/Q(x_t)]$ implies $\frac{1}{\ell}\log[P(x_{1:\ell})/Q(x_{1:\ell})] \to \mathrm{KL}(P \| Q)$ with $P$-probability 1, where $\mathrm{KL}$ denotes the Kullback-Leibler divergence. Either the KL divergence is zero, which is the case if and only if $Q = P$, or $\log[P(x_{1:\ell})/Q(x_{1:\ell})]$ grows linearly in $\ell$, i.e. asymptotically MDL does not select $Q$. For countable $\mathcal{M}$, a refinement of this argument shows that MDL eventually selects $P$ [BC91]. This reasoning can be extended to stationary-ergodic $\mathcal{M}$, but essentially not beyond. To see where the limitation comes from, we present some troubling examples.
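Spelled out, the law-of-large-numbers step reads as follows (a standard derivation, added here for concreteness):

$$\frac{1}{\ell} \log \frac{P(x_{1:\ell})}{Q(x_{1:\ell})} \;=\; \frac{1}{\ell} \sum_{t=1}^{\ell} \log \frac{P(x_t)}{Q(x_t)} \;\longrightarrow\; \mathbf{E}\Big[\log \frac{P(x_1)}{Q(x_1)}\Big] \;=\; \mathrm{KL}(P \| Q) \quad \text{w.p.1,}$$

so the codelength gap $\log_2 Q(x_{1:\ell})^{-1} - \log_2 P(x_{1:\ell})^{-1} \approx \ell \cdot \mathrm{KL}(P \| Q)$ grows linearly in $\ell$ and eventually dwarfs any fixed complexity difference $K(Q) - K(P)$ whenever $\mathrm{KL}(P \| Q) > 0$.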

Trouble makers. For instance, let $Q_\theta \in \mathcal{M}$ be a Bernoulli($\theta$) process, but let the true $P$-probability that $x_t = 1$ be $\theta_t$, i.e. time-dependent (still assuming independence). For a suitably converging but "oscillating" (i.e. infinitely often larger and smaller than its limit) sequence $\theta_t \to \theta$, one can show that the relative frequency of 1s converges to $\theta$, but the log-likelihood ratio between neighboring models oscillates around 0 w.p.1, i.e. there are non-stationary distributions for which MDL does not converge (not even to a wrong distribution).
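The oscillation is easy to reproduce in simulation. The following sketch uses a simplified variant of the construction (the parameter alternates in doubling blocks and its amplitude does not decay, so the effect is visible in a short run; all numbers are invented):

```python
# Hypothetical simulation of the "trouble maker": a non-stationary independent
# Bernoulli source whose parameter theta_t oscillates, so the MDL choice
# between two fixed Bernoulli models flips forever.
import math
import random

random.seed(1)
thetas = (0.45, 0.55)          # two candidate models, equal complexity
loglik = [0.0, 0.0]            # cumulative log2-likelihood of each model
block, phase, flips, last, t = 1, 0, 0, None, 0

while t < 4000:
    for _ in range(block):
        # Non-stationary independent source: theta_t follows the current phase.
        x = 1 if random.random() < thetas[phase] else 0
        for i, th in enumerate(thetas):
            loglik[i] += math.log2(th if x == 1 else 1 - th)
        selected = 0 if loglik[0] >= loglik[1] else 1    # MDL/ML choice
        if last is not None and selected != last:
            flips += 1
        last = selected
        t += 1
    phase, block = 1 - phase, 2 * block   # doubling blocks: never settles

print("times the selected model flipped:", flips)
```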

One idea to solve this problem is to partition $\mathcal{M}$, where two distributions are in the same part if and only if they are asymptotically indistinguishable (like $P$ and $Q_\theta$ above), and then ask MDL only to identify a part. This approach cannot succeed generally, whatever particular criterion is used, for the following reason: let $P$ and $Q$ depend on the first observation $x_1 \in \{0, 1\}$. For $x_1 = 1$, let $P$ and $Q$ be asymptotically indistinguishable, e.g. equal on the remainder of the sequence. For $x_1 = 0$, let $P$ and $Q$ be asymptotically distinguishable distributions, e.g. different Bernoullis. This shows that for non-ergodic sources like this one, asymptotic distinguishability depends on the drawn sequence. The first observation can lead to totally different futures.

Predictive MDL avoids trouble. The Bayesian posterior does not need to converge to a single (true or other) distribution in order for prediction to work. We can do something similar for MDL: at each time we still select a single distribution, but we give up the idea of identifying a single distribution asymptotically. We just measure predictive success, and accept infinite oscillations. That is the approach taken in this paper.

3 Notation and Main Result

The formal development starts with this section. We need probability measures and filtrations for infinite sequences, conditional probabilities and densities, the total variation distance, and the concept of merging (of opinions) in order to formally state our main result.

Measures on sequences. Let $\Omega = \mathcal{X}^\infty$ be the space of infinite sequences over $\mathcal{X}$, with natural filtration $(\mathcal{F}_\ell)_{\ell \geq 0}$, product $\sigma$-field $\mathcal{F}$, and probability measure $P$. Let $\omega \in \Omega$ be an infinite sequence sampled from the true measure $P$. Except where mentioned otherwise, all probability statements and expectations refer to $P$; e.g. almost surely (a.s.) and with probability 1 (w.p.1) are short for with $P$-probability 1 (w.$P$.p.1). Let $x = \omega_{1:\ell}$ be the first $\ell$ symbols of $\omega$.

For countable $\mathcal{X}$, the probability that an infinite sequence starts with $x \in \mathcal{X}^\ell$ is $P(x) := P[\{\omega : \omega_{1:\ell} = x\}]$. The conditional distribution of an event $A \in \mathcal{F}$ given $x$ is $P(A|x) := P[A \cap \{\omega : \omega_{1:\ell} = x\}]/P(x)$, which exists w.p.1. For other probability measures $Q$ on $\Omega$, we define $Q(x)$ and $Q(A|x)$ analogously. General $\mathcal{X}$ are considered at the end of this section.

Convergence in total variation. $Q$ is said to be absolutely continuous relative to $P$, written $Q \ll P$, iff $P(A) = 0$ implies $Q(A) = 0$ for all $A \in \mathcal{F}$. $P$ and $Q$ are said to be mutually singular, written $P \perp Q$, iff there exists an $A \in \mathcal{F}$ for which $P(A) = 1$ and $Q(A) = 0$. The total variation distance (tvd) between $Q$ and $P$ given $x$ is defined as

$$d(P, Q \,|\, x) \;:=\; \sup_{A \in \mathcal{F}} \big|P(A|x) - Q(A|x)\big| \qquad (3)$$

$Q$ is said to predict $P$ in tvd (or merge with $P$) if $d(P, Q \,|\, \omega_{1:\ell}) \to 0$ for $\ell \to \infty$ with $P$-probability 1. Note that this in particular implies, but is stronger than, one-step predictive on- and off-sequence convergence for any continuation, not necessarily the realized one [KL94]. The famous Blackwell and Dubins convergence result [BD62] states that if $P$ is absolutely continuous relative to $Q$, then (and only then [KL94]) $Q$ merges with $P$:

$$P \ll Q \quad \Longrightarrow \quad d(P, Q \,|\, \omega_{1:\ell}) \to 0 \ \text{ for } \ell \to \infty \text{ w.p.1.}$$

Bayesian prediction. This result can immediately be utilized for Bayesian prediction. Let $\mathcal{M} = \{Q_1, Q_2, \dots\}$ be a countable (finite or infinite) class of probability measures, and $\mathrm{Bayes}(A) := \sum_i w_i Q_i(A)$ with $w_i > 0$ and $\sum_i w_i = 1$. If the model assumption $P \in \mathcal{M}$ holds, then obviously $P \ll \mathrm{Bayes}$, hence Bayes merges with $P$, i.e. $d(P, \mathrm{Bayes} \,|\, \omega_{1:\ell}) \to 0$ w.p.1 for all $P \in \mathcal{M}$. Unlike for many other Bayesian convergence and consistency theorems, no (independence, ergodicity, stationarity, identifiability, or other) assumption on the model class needs to be made. Good convergence rates for the weaker distances $d_h$ ($h < \infty$) have also been shown [Hut03]. The analogous result for MDL is as follows:

Theorem 1 (MDL predictions)

Let $\mathcal{M}$ be a countable class of probability measures on $\Omega = \mathcal{X}^\infty$ containing the unknown true sampling distribution $P$. No (independence, ergodicity, stationarity, identifiability, or other) assumptions need to be made on $\mathcal{M}$. Let

$$\mathrm{MDL}^x \;:=\; \arg\min_{Q \in \mathcal{M}} \big\{\log_2 Q(x)^{-1} + K(Q)\big\}, \qquad \text{where} \quad \sum_{Q \in \mathcal{M}} 2^{-K(Q)} \;\leq\; 1,$$

be the measure selected by MDL at time $\ell$ given $x = \omega_{1:\ell}$. Then the predictive distributions converge to those of $P$ in the sense that

$$d(P, \mathrm{MDL}^{\omega_{1:\ell}} \,|\, \omega_{1:\ell}) \;\to\; 0 \quad \text{for } \ell \to \infty \text{ w.p.1.}$$

$K(Q)$ is usually interpreted and defined as the length of some prefix code for $Q$, in which case Kraft's inequality $\sum_Q 2^{-K(Q)} \leq 1$ holds automatically. If $2^{-K(Q)}$ is chosen as prior $w_Q$, then by Bayes' rule the posterior $w(Q|x) := 2^{-K(Q)} Q(x) / \mathrm{Bayes}(x)$ is maximized by the maximum a posteriori estimate $\mathrm{MAP}^x := \arg\max_{Q \in \mathcal{M}} w(Q|x) = \mathrm{MDL}^x$. Hence the theorem also applies to MAP. The proof of the theorem is surprisingly subtle and complex compared to the analogous Bayesian case. One reason is that $x \mapsto \mathrm{MDL}^x(\cdot|x)$ is not a (single) measure on $\Omega$.
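For concreteness, the MAP equivalence is the following one-line calculation (using the prior $w_Q = 2^{-K(Q)}$ from above; note that $\mathrm{Bayes}(x)$ does not depend on $Q$ and $-\log_2$ is monotone decreasing):

$$\arg\max_{Q \in \mathcal{M}} w(Q|x) \;=\; \arg\max_{Q \in \mathcal{M}} \frac{2^{-K(Q)} Q(x)}{\mathrm{Bayes}(x)} \;=\; \arg\min_{Q \in \mathcal{M}} \big\{\log_2 Q(x)^{-1} + K(Q)\big\} \;=\; \mathrm{MDL}^x.$$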

Arbitrary $\mathcal{X}$. For arbitrary $\mathcal{X}$, the definitions are more subtle. The casual reader satisfied with countable $\mathcal{X}$ can skip this paragraph. We can consider even more general $\Omega$ [BD62]. Let $\mathcal{F}_\ell$ be a $\sigma$-field of subsets of $\Omega$ for $\ell = 1, 2, \dots$, and let $\mathcal{F}$ be the $\sigma$-field for $\Omega$ generated by (i.e. the smallest $\sigma$-field containing) $\bigcup_\ell \mathcal{F}_\ell$. Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $P_\ell$ be the marginal distribution on $(\Omega, \mathcal{F}_\ell)$, i.e. $P_\ell[A] = P[A]$ for $A \in \mathcal{F}_\ell$. The predictive distribution is (a version of) the conditional distribution of the future given the past $\mathcal{F}_\ell$, implicitly defined via the defining property of conditional expectation. Similarly define $Q_\ell$ and the predictive distribution for the other $Q \in \mathcal{M}$. See [Doo53] for details.

Let $\rho$ be a measure on $(\Omega, \mathcal{F})$ such that $Q_\ell$ is absolutely continuous (see above) relative to $\rho_\ell$ for all $Q \in \mathcal{M}$ and all $\ell$. For instance, $\rho = \mathrm{Bayes}$ has this property. Now define the density (Radon-Nikodym derivative) $Q(x) := \frac{dQ_\ell}{d\rho_\ell}(\omega)$ (round brackets) of measure $Q[\,\cdot\,]$ (square brackets) relative to $\rho$. It is important to note that all essential quantities, in particular $d(P, Q \,|\, x)$ and $\mathrm{MDL}^x$, are independent of the particular choice of $\rho$. We therefore plainly speak of the $Q$-density or even $Q$-probability of $x$.

For countable $\mathcal{X}$ and $\rho$ the counting measure, $Q(x)$ and $Q(A|x)$ coincide with the definitions above. In the following, we drop the sub- and superscripts $\ell$, since they will always be clear from the argument. Note that by Carathéodory's extension theorem, the collection $(Q_\ell)_{\ell \geq 1}$ uniquely defines $Q$ on $\mathcal{F}$.

4 Proof for Finite Model Class

We first prove Theorem 1 for finite model classes $\mathcal{M}$. For this we need the following definition and lemma:

Definition 2 (Relations between $Q$ and $P$)

For any probability measures $Q$ and $P$, let

  • $Q = Q^a + Q^s$ be the Lebesgue decomposition of $Q$ relative to $P$ into an absolutely continuous non-negative measure $Q^a \ll P$ and a singular non-negative measure $Q^s \perp P$.

  • $g(\omega)$ be (a version of) the Radon-Nikodym derivative $dQ^a/dP$, i.e. $Q^a(A) = \int_A g \, dP$ for all $A \in \mathcal{F}$.

  • $Z_\ell(\omega) := Q(\omega_{1:\ell})/P(\omega_{1:\ell})$ be the local density ratio.

  • $Z(\omega) := \lim_{\ell \to \infty} Z_\ell(\omega)$ where the limit exists (which is w.p.1, see below).

It is well-known that the Lebesgue decomposition exists and is unique. The representation of the Radon-Nikodym derivative as a limit of local densities can e.g. be found in [Doo53, VII§8]: the $Z_\ell$ for $\ell = 1, 2, 3, \dots$ constitute a non-negative martingale sequence, which converges w.p.1, and the limit coincides $P$-a.s. with the Radon-Nikodym derivative $g$. (Indeed, Doob's martingale convergence theorem can be used to prove the Radon-Nikodym theorem.) So $Z$ is uniquely defined and finite w.p.1.

Lemma 3 (Generalized merging of opinions)

For any $P$ and $Q$, the following holds:

  • (i) $P \ll Q$ if and only if $P[Z > 0] = 1$.

  • (ii) $P[Z > 0] = 1$ implies $d(P, Q \,|\, \omega_{1:\ell}) \to 0$ w.p.1. [(i)+[BD62]]

  • (iii) $d(P, Q \,|\, \omega_{1:\ell}) \to 0$ for $P$-almost every $\omega$ with $Z(\omega) > 0$. [generalizes (ii)]

(i) says that $Z_\ell$ converges almost surely to a strictly positive value if and only if $P$ is absolutely continuous relative to $Q$. (ii) says that an almost surely positive limit of $Z_\ell$ implies that $Q$ merges with $P$. (iii) says that even if $P \not\ll Q$, on almost every sequence along which $Z_\ell$ has a positive limit, $Q$ still merges with $P$.

Proof. Recall Definition 2.

(i, $\Leftarrow$) Assume $P[Z > 0] = 1$: $Q(A) = 0$ implies $Q^a(A) = 0$, i.e. $\int_A Z \, dP = 0$, which implies $P(A) = 0$, since $Z > 0$ a.s. by assumption. Therefore $P \ll Q$.

(i, $\Rightarrow$) Assume $P[Z > 0] < 1$: Choose an $A \subseteq \{Z = 0\}$ for which $P(A) = P[Z = 0] > 0$ and $Q^s(A) = 0$; this is possible since $Q^s$ is concentrated on a set of $P$-measure zero. Now $A \subseteq \{Z = 0\}$ implies $Q^a(A) = \int_A Z \, dP = 0$. By $Q = Q^a + Q^s$ this implies $Q(A) = 0 < P(A)$, hence $P \not\ll Q$.

(ii) That $P \ll Q$ implies $d(P, Q \,|\, \omega_{1:\ell}) \to 0$ w.p.1 is Blackwell-Dubins' celebrated result. The claim now follows from (i).

(iii) generalizes (ii). For $P[Z > 0] = 1$ it reduces to (ii). The case $P[Z > 0] = 0$ is trivial. Therefore we can assume $0 < P[Z > 0] < 1$. Consider the measure $\tilde{P} := P$ conditioned on $\tilde\Omega := \{Z > 0\}$, i.e. $\tilde{P}(A) := P(A \,|\, \tilde\Omega) = P(A \cap \tilde\Omega)/P(\tilde\Omega)$.

Assume $Q(A) = 0$. Using $Q^a(A) = \int_A Z \, dP = 0$, we get $P(A \cap \{Z > 0\}) = 0$. Since $Z > 0$ on $\tilde\Omega$, this implies $\tilde{P}(A) = 0$. So $\tilde{P} \ll Q$. Trivially also $\tilde{P} \ll P$. Now (ii) implies $d(\tilde{P}, Q \,|\, \omega_{1:\ell}) \to 0$ with $\tilde{P}$-probability 1. Since $\tilde{P} \ll P$ we also get $d(\tilde{P}, P \,|\, \omega_{1:\ell}) \to 0$ w.$\tilde{P}$.p.1.

Together this implies $d(P, Q \,|\, \omega_{1:\ell}) \leq d(P, \tilde{P} \,|\, \omega_{1:\ell}) + d(\tilde{P}, Q \,|\, \omega_{1:\ell}) \to 0$ w.$\tilde{P}$.p.1, i.e. for $P$-almost every $\omega \in \tilde\Omega$. The claim now follows from $\tilde\Omega = \{Z > 0\}$.  ∎

The intuition behind the proof of Theorem 1 is as follows. MDL will asymptotically not select $Q$ for which $Z_Q = 0$, since the ratio $Q(\omega_{1:\ell})/P(\omega_{1:\ell}) \to 0$ makes $Q$'s two-part code eventually longer than $P$'s. Hence for those $Q$ potentially selected by MDL we have $Z_Q > 0$, for which $d(P, Q \,|\, \omega_{1:\ell}) \to 0$ (a.s.) by Lemma 3(iii). The technical difficulties are, for finite $\mathcal{M}$, that the eligible $Q$ depend on the sequence $\omega$, and for infinite $\mathcal{M}$, to deal with non-uniform convergence, i.e. to rule out that MDL keeps selecting ever more complex $Q$ indefinitely.

Proof of Theorem 1 for finite $\mathcal{M}$. Recall Definition 2, and let $Z_Q := \lim_\ell Q(\omega_{1:\ell})/P(\omega_{1:\ell})$ refer to the pair $(Q, P)$. The set of sequences $\omega$ for which $Z_Q(\omega)$ is undefined for some $Q \in \mathcal{M}$ has $P$-measure zero, and hence can be ignored. Fix some sequence $\omega$ for which $Z_Q(\omega)$ is defined for all $Q \in \mathcal{M}$, and let $\mathcal{M}_\omega := \{Q \in \mathcal{M} : Z_Q(\omega) > 0\}$.

Consider the difference in two-part codelength between $Q$ and $P$:

$$\big[\log_2 Q(\omega_{1:\ell})^{-1} + K(Q)\big] - \big[\log_2 P(\omega_{1:\ell})^{-1} + K(P)\big] \;=\; K(Q) - K(P) - \log_2 \frac{Q(\omega_{1:\ell})}{P(\omega_{1:\ell})}.$$

For $Q \notin \mathcal{M}_\omega$, the ratio $Q(\omega_{1:\ell})/P(\omega_{1:\ell}) \to Z_Q(\omega) = 0$, so the r.h.s. tends to $+\infty$, hence

$$\exists \ell_Q \ \forall \ell \geq \ell_Q : \ \log_2 Q(\omega_{1:\ell})^{-1} + K(Q) \;>\; \log_2 P(\omega_{1:\ell})^{-1} + K(P).$$

Since $\mathcal{M}$ is finite, this implies

$$\exists \ell_0 \ \forall \ell \geq \ell_0 : \ \mathrm{MDL}^{\omega_{1:\ell}} \in \mathcal{M}_\omega.$$

Therefore, since $P \in \mathcal{M}_\omega$ (indeed $Z_P \equiv 1$), we can safely ignore all $Q \notin \mathcal{M}_\omega$ and focus on $Q \in \mathcal{M}_\omega$. Since $d(P, Q \,|\, \omega_{1:\ell}) \to 0$ for $P$-almost every $\omega$ with $Z_Q(\omega) > 0$ by Lemma 3(iii), we can also assume $d(P, Q \,|\, \omega_{1:\ell}) \to 0$ for all $Q \in \mathcal{M}_\omega$ (excluding another set of measure zero).

This implies

$$d(P, \mathrm{MDL}^{\omega_{1:\ell}} \,|\, \omega_{1:\ell}) \;\leq\; \max_{Q \in \mathcal{M}_\omega} d(P, Q \,|\, \omega_{1:\ell}) \;\to\; 0,$$

where the inequality holds for $\ell \geq \ell_0$ and the limit holds since $\mathcal{M}_\omega$ is finite. Since the set of $\omega$ excluded in our considerations has measure zero, $d(P, \mathrm{MDL}^{\omega_{1:\ell}} \,|\, \omega_{1:\ell}) \to 0$ w.p.1, which proves the theorem for finite $\mathcal{M}$.  ∎

5 Proof for Countable Model Class

The proof in the previous section crucially exploited the finiteness of $\mathcal{M}$. We want to prove that the probability that MDL asymptotically selects "complex" $Q$ is small. The following lemma establishes that the probability that MDL selects a specific complex $Q$ infinitely often is small.

Lemma 4 (MDL avoids complex probability measures)

For any $Q$ and $P$ we have $P\big[\log_2 Q(\omega_{1:\ell})^{-1} + K(Q) \leq \log_2 P(\omega_{1:\ell})^{-1} + K(P) \text{ for infinitely many } \ell\big] \;\leq\; 2^{K(P) - K(Q)}$.

Proof.

$$P\big[\exists^\infty \ell : \log_2 Q(\omega_{1:\ell})^{-1} + K(Q) \leq \log_2 P(\omega_{1:\ell})^{-1} + K(P)\big] \;\stackrel{(a)}{\leq}\; P\Big[\limsup_{\ell \to \infty} \frac{Q(\omega_{1:\ell})}{P(\omega_{1:\ell})} \geq 2^{K(Q) - K(P)}\Big]$$

$$\stackrel{(b)}{\leq}\; 2^{K(P) - K(Q)} \, \mathbf{E}\Big[\limsup_{\ell \to \infty} \frac{Q(\omega_{1:\ell})}{P(\omega_{1:\ell})}\Big] \;\stackrel{(c)}{=}\; 2^{K(P) - K(Q)} \, \mathbf{E}\Big[\lim_{\ell \to \infty} \frac{Q(\omega_{1:\ell})}{P(\omega_{1:\ell})}\Big] \;\stackrel{(d)}{\leq}\; 2^{K(P) - K(Q)} \liminf_{\ell \to \infty} \mathbf{E}\Big[\frac{Q(\omega_{1:\ell})}{P(\omega_{1:\ell})}\Big] \;\stackrel{(e)}{\leq}\; 2^{K(P) - K(Q)}.$$

(a) is true by definition of the limit superior, (b) is Markov's inequality, (c) exploits the fact that the limit of $Q(\omega_{1:\ell})/P(\omega_{1:\ell})$ exists w.p.1 (Definition 2), (d) uses Fatou's lemma, and (e) is obvious, since $\mathbf{E}[Q(\omega_{1:\ell})/P(\omega_{1:\ell})] = \sum_{x \in \mathcal{X}^\ell : P(x) > 0} Q(x) \leq 1$.  ∎

For sufficiently complex $Q$ (i.e. $K(Q) \gg K(P)$), Lemma 4 implies that with high probability MDL selects $Q$ at most finitely often. Since the bound is non-uniform in $Q$, we cannot apply the lemma to all (infinitely many) complex $Q$ directly, but need to lump them into a single measure $\bar{Q}$.

Proof of Theorem 1 for countable $\mathcal{M}$. Let the $Q_i \in \mathcal{M}$ be ordered somehow, e.g. in increasing order of complexity $K(Q_i)$, and let $P = Q_{i_0}$. Choose some (large) $i_P \geq i_0$ and let $\mathcal{M}_c := \{Q_i : i > i_P\}$ be the set of "complex" $Q$. We show that the probability that MDL selects complex $Q$ infinitely often is small:

$$P\big[\exists^\infty \ell : \mathrm{MDL}^{\omega_{1:\ell}} \in \mathcal{M}_c\big] \;\leq\; P\big[\exists^\infty \ell \ \exists i > i_P : 2^{-K(Q_i)} Q_i(\omega_{1:\ell}) \geq 2^{-K(P)} P(\omega_{1:\ell})\big] \;\stackrel{(a)}{\leq}\; P\big[\exists^\infty \ell : 2^{-\bar{K}} \bar{Q}(\omega_{1:\ell}) \geq 2^{-K(P)} P(\omega_{1:\ell})\big] \;\stackrel{(b)}{\leq}\; 2^{K(P) - \bar{K}} \;\stackrel{(c)}{\leq}\; \varepsilon.$$

The first relation follows immediately from the definition of the various quantities. Bound (a) is the crucial "lumping" step. First we bound

$$\max_{i > i_P} 2^{-K(Q_i)} Q_i(x) \;\leq\; \sum_{i > i_P} 2^{-K(Q_i)} Q_i(x) \;=:\; 2^{-\bar{K}} \bar{Q}(x), \qquad 2^{-\bar{K}} := \sum_{i > i_P} 2^{-K(Q_i)}.$$

While $\max_{i > i_P} 2^{-K(Q_i)} Q_i$ is not a (single) measure on $\Omega$ and hence difficult to deal with, $\bar{Q}$ is a proper probability measure on $\Omega$. In a sense, this step reduces MDL to Bayes. Now we apply Lemma 4 in (b) to the (single) measure $\bar{Q}$ with complexity $\bar{K}$. The bound (c) holds for sufficiently large $i_P$, since $2^{-\bar{K}} = \sum_{i > i_P} 2^{-K(Q_i)} \to 0$ for $i_P \to \infty$ by $\sum_i 2^{-K(Q_i)} \leq 1$. This shows that, with probability at least $1 - \varepsilon$, the sequence of MDL estimates eventually stays within the finite class $\{Q_1, \dots, Q_{i_P}\}$.

Hence the already proven Theorem 1 for finite $\mathcal{M}$ implies that $d(P, \mathrm{MDL}^{\omega_{1:\ell}} \,|\, \omega_{1:\ell}) \to 0$ with probability at least $1 - \varepsilon$. Since convergence holds for every $\varepsilon > 0$, it holds w.p.1.  ∎

6 Implications

Due to its generality, Theorem 1 can be applied to many problem classes. We illustrate some immediate implications for time-series forecasting, classification and regression, discriminative learning, and reinforcement learning.

Time-series forecasting. Classical online sequence prediction is concerned with predicting $x_{\ell+1}$ from (non-i.i.d.) sequence $x_{1:\ell}$ for $\ell = 1, 2, 3, \dots$. Forecasting farther into the future is possible by predicting $x_{\ell+1:\ell+h}$ for some $h > 1$. One can show that $d_h \leq d_\infty = 2 \cdot \mathrm{tvd}$; see (1) and (3). Hence Theorem 1 implies good asymptotic (multi-step) predictions. Offline learning is concerned with training a predictor on $x_{1:\ell}$ for fixed $\ell$ in-house, and then selling and using the predictor on $x_{\ell+1:\infty}$ without further learning. Theorem 1 shows that, given enough training data, predictions "post-learning" will be good.

Classification and Regression. In classification (discrete $\mathcal{X}$) and regression (continuous $\mathcal{X}$), a sample is a set of pairs $\{(y_1, x_1), \dots, (y_\ell, x_\ell)\}$, and a functional relationship $x = f(y)$ + noise, i.e. a conditional probability $Q(x|y)$, shall be learned. For reasons apparent below, we have swapped the usual roles of $x$ and $y$: here $y$ is the input and $x$ the output, with $x_{1:\ell} = (x_1, \dots, x_\ell)$ and $y_{1:\ell} = (y_1, \dots, y_\ell)$. If we assume that $y$ also follows some distribution, and start with a countable model class $\mathcal{M}$ of joint distributions $Q(x_{1:\ell}, y_{1:\ell})$ which contains the true joint distribution $P$, our main result implies that $\mathrm{MDL}(\cdot \,|\, x_{1:\ell}, y_{1:\ell})$ converges to the true predictive distribution $P(\cdot \,|\, x_{1:\ell}, y_{1:\ell})$. Indeed, since/if the samples are assumed i.i.d., we would not even need to invoke our general result.

Discriminative learning. Instead of learning a generative [Jeb03] joint distribution $Q(x_{1:\ell}, y_{1:\ell})$, which requires model assumptions on the input $y$, we can discriminatively [LSS07] learn the conditional $Q(x_{1:\ell} | y_{1:\ell})$ directly, without any assumption on $y$ (not even i.i.d.). We can simply treat $y_{1:\infty}$ as an oracle available to all models, define the class $\mathcal{M}^y := \{Q(\cdot \,|\, y_{1:\infty}) : Q \in \mathcal{M}\}$, and apply our main result to $\mathcal{M}^y$, leading to $d(P, \mathrm{MDL} \,|\, x_{1:\ell}, y_{1:\infty}) \to 0$ w.p.1. This is not yet useful, since $y_{1:\infty}$ is never known completely. If the $x_t$ are conditionally independent given $y_t$, we can write

$$Q(x_{1:\ell} \,|\, y_{1:\infty}) \;=\; \prod_{t=1}^{\ell} Q(x_t \,|\, y_t) \;=\; Q(x_{1:\ell} \,|\, y_{1:\ell}),$$

i.e. the first $\ell$ observations depend on $y_{1:\ell}$ only. This is a generic property satisfied by all causal processes: a future $y_s$ for $s > \ell$ does not influence past observations $x_{1:\ell}$. Hence for a class of conditionally independent distributions we get $d(P, \mathrm{MDL} \,|\, x_{1:\ell}, y_{1:\ell}) \to 0$ w.p.1. Since the $x_t$, given the $y_t$, are not identically distributed, classical MDL consistency results for i.i.d. or stationary-ergodic sources do not apply. The following corollary formalizes our findings:

Corollary 5 (Discriminative MDL)

Let $\mathcal{M} \ni P$ be a class of discriminative causal distributions, i.e. $Q(x_{1:\ell} \,|\, y_{1:\infty}) = Q(x_{1:\ell} \,|\, y_{1:\ell})$ for all $\ell$ and all $y_{1:\infty}$. Regression and classification are typical examples. Further assume $\mathcal{M}$ is countable. Let $\mathrm{MDL}^{x|y}$ be the discriminative MDL measure (at time $\ell$ given $y$). Then $d(P, \mathrm{MDL}^{x|y} \,|\, x_{1:\ell}, y_{1:\ell}) \to 0$ for $\ell \to \infty$, almost surely, for every sequence $y_{1:\infty}$.

For finite $\mathcal{Y}$ and conditionally independent $x_t$, the intuitive reason why this can work is as follows: if $y \in \mathcal{Y}$ appears in $y_{1:\infty}$ only finitely often, it plays asymptotically no role; if it appears infinitely often, then $P(\cdot|y)$ can be learned. For infinite $\mathcal{Y}$ and deterministic relationships $x = f(y)$, the result is also intelligible: every $y$ might appear only once, but probing enough function values allows one to identify the function.
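To make the discriminative setting concrete, here is a hypothetical sketch (the conditional model class, the complexity assignment, and the input stream are all invented for illustration) of selecting a conditional model $Q(x|y)$ by two-part codelength while the inputs $y_t$ drift arbitrarily:

```python
# Hypothetical sketch of discriminative MDL: select a conditional model
# Q(x|y) by two-part codelength, making no assumption whatsoever on how the
# inputs y_t are generated (here they drift deterministically).
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Class of conditional Bernoulli models Q(x=1|y) = sigmoid(a*y + b),
# indexed by (a, b); K is a hypothetical complexity assignment.
models = [((a, b), K) for K, (a, b) in
          enumerate([(0.0, 0.0), (1.0, 0.0), (-1.0, 0.0), (2.0, -1.0)], start=1)]

def cond_codelength(params, xs, ys):
    """log2 1/Q(x_{1:l} | y_{1:l}) for conditionally independent x_t."""
    a, b = params
    total = 0.0
    for x, y in zip(xs, ys):
        p1 = sigmoid(a * y + b)
        total -= math.log2(p1 if x == 1 else 1.0 - p1)
    return total

def discriminative_mdl(xs, ys):
    return min(models, key=lambda m: cond_codelength(m[0], xs, ys) + m[1])

# Arbitrary, non-i.i.d. inputs; outputs generated (deterministically
# thresholded for brevity) from the "true" model (a, b) = (2.0, -1.0).
ys = [0.1 * t for t in range(50)]            # a drifting, non-stationary input
xs = [1 if sigmoid(2.0 * y - 1.0) > 0.5 else 0 for y in ys]
print("selected (a, b):", discriminative_mdl(xs, ys)[0])
```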

Reinforcement learning (RL). In the agent framework [RN03], an agent interacts with an environment in cycles. At time $t$, the agent chooses an action $a_t$ with probability $\pi(a_t \,|\, o_{<t} a_{<t})$ (say), based on past perceptions $o_{<t}$ and past actions $a_{<t}$. This leads to a new perception $o_t$ with probability $\mu(o_t \,|\, o_{<t} a_{1:t})$ (say). Then cycle $t+1$ starts. Let $P(o a_{1:\ell}) := \prod_{t=1}^{\ell} \pi(a_t \,|\, o_{<t} a_{<t}) \, \mu(o_t \,|\, o_{<t} a_{1:t})$ be the joint interaction probability. We make no (Markov, stationarity, ergodicity) assumption on $\mu$ and $\pi$. They may be POMDPs or beyond.

Corollary 6 (Single-agent MDL)

For a fixed policy = agent $\pi$ and a countable class of environments $\mathcal{M} = \{\nu_1, \nu_2, \dots\} \ni \mu$, let $\mathrm{MDL}$ be the environment selected at time $t$ from the interaction history $o a_{<t}$. Then $d(\mu, \mathrm{MDL} \,|\, o a_{<t}) \to 0$ with joint $\mu$-$\pi$-probability 1.

The corollary follows immediately from the previous corollary and the facts that the environments $\nu$ are causal (the actions $a$ play the role of the inputs $y$, the perceptions $o$ that of the outputs $x$), and that convergence with $\mu$-probability 1 for every fixed action sequence implies convergence with probability 1 jointly in $\mu$ and $\pi$.

In reinforcement learning [SB98], the perception $o_t$ consists of some regular observation and a reward $r_t$ (bounded, say $r_t \in [0, 1]$). The goal is to find a policy that maximizes the accrued reward in the long run. The previous corollary implies:

Corollary 7 (Fixed-policy MDL value function convergence)

Let $V_\mu^\pi := \mathbf{E}_\mu^\pi\big[\sum_{t=\ell+1}^{\infty} \gamma^{t-\ell-1} r_t \,\big|\, o a_{1:\ell}\big]$ be the future $\gamma$-discounted $\mu$-expected reward sum (the true value of $\pi$), and define $V_{\mathrm{MDL}}^\pi$ similarly with $\mathrm{MDL}$ in place of $\mu$. Then the MDL value converges to the true value, i.e. $|V_{\mathrm{MDL}}^\pi - V_\mu^\pi| \to 0$ w.p.1, for any policy $\pi$.

Proof. The corollary follows from the general inequality

$$\big|\mathbf{E}_P[f \,|\, x] - \mathbf{E}_Q[f \,|\, x]\big| \;\leq\; (\sup f - \inf f) \cdot d(P, Q \,|\, x)$$

for bounded measurable $f$, by inserting $P = \mu$, $Q = \mathrm{MDL}$, and $f = \sum_{t=\ell+1}^{\infty} \gamma^{t-\ell-1} r_t$, and using $\sup f - \inf f \leq 1/(1-\gamma)$ for $r_t \in [0, 1]$ together with Corollary 6.  ∎
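A minimal numeric illustration of this value continuity; the two-state chain, its parameters, and $\gamma$ below are invented for demonstration and are not the paper's construction:

```python
# Hypothetical numeric illustration of Corollary 7: fixed-policy value
# functions under the true environment mu and under a nearby selected model.
import numpy as np

gamma = 0.9
r = np.array([0.0, 1.0])                    # reward received in each state

def policy_value(T):
    """Solve V = r + gamma * T V for a fixed policy's transition matrix T."""
    return np.linalg.solve(np.eye(2) - gamma * T, r)

T_true = np.array([[0.90, 0.10], [0.20, 0.80]])  # true dynamics (fixed policy)
T_mdl  = np.array([[0.88, 0.12], [0.22, 0.78]])  # selected model, close in tvd

V_true, V_mdl = policy_value(T_true), policy_value(T_mdl)
print("value gap:", np.abs(V_true - V_mdl).max())
# As the selected model merges with the truth (tvd -> 0), this gap vanishes;
# the bound is (sup f - inf f) * tvd <= tvd / (1 - gamma).
```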

Since the value function probes the infinite future, we have really made use of our convergence result in total variation. Corollary 7 shows that MDL approximates the true value asymptotically arbitrarily well. The result is weaker than it may appear: following the policy that maximizes the estimated (MDL) value is often not a good idea, since such a policy does not explore properly [Hut05]. Nevertheless, it is a reassuring non-trivial result.

7 Variations

MDL is more a general principle for model selection than a uniquely defined procedure. For instance, there are crude and refined MDL [Grü07], the related MML principle [Wal05], a static, a dynamic, and a hybrid way of using MDL for prediction [PH05], and other variations. For our setup, we could have defined multi-step lookahead prediction as a product of single-step predictions:

$$\mathrm{MDLI}(x_{\ell+1:\ell+h} \,|\, x_{1:\ell}) \;:=\; \prod_{t=\ell+1}^{\ell+h} \mathrm{MDL}^{x_{<t}}(x_t \,|\, x_{<t}),$$

which is a more incremental MDL version. Both $\mathrm{MDL}$ and $\mathrm{MDLI}$ are "static" in the sense of [PH05], and each allows for a dynamic and a hybrid version. Due to its incremental nature, MDLI likely has better predictive properties than $\mathrm{MDL}$; it conveniently defines a single measure over $\mathcal{X}^\infty$, but inconveniently one that is not in $\mathcal{M}$. One reason for using MDL proper is that it can be computationally simpler than Bayes: e.g. if $\mathcal{M}$ is a class of MDPs, then $\mathrm{MDL}^x$ is still an MDP and hence tractable, but MDLI, like Bayes, is a nightmare to deal with.
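A sketch of the difference between the two predictors, reusing the Bernoulli class and `mdl_select` from the sketch in Section 1 (illustrative only, not the paper's formalism):

```python
# Hypothetical sketch contrasting the two multi-step predictors: static MDL
# uses the single model selected now for the whole lookahead, while MDLI
# re-selects after every (hypothetical) next symbol.
def mdl_static(models, x, z):
    """Probability of future block z under the one model selected on x."""
    q = mdl_select(models, x)
    prob = 1.0
    for sym in z:
        prob *= q.theta if sym == 1 else 1 - q.theta
    return prob

def mdl_incremental(models, x, z):
    """MDLI: product of one-step predictions, re-selecting at every step."""
    hist, prob = list(x), 1.0
    for sym in z:
        q = mdl_select(models, hist)      # selection may change along the way
        prob *= q.theta if sym == 1 else 1 - q.theta
        hist.append(sym)
    return prob

# On easy data the two typically coincide; they differ when the selection
# flips mid-block.
z = [1, 1, 0]
print(mdl_static(models, x, z), mdl_incremental(models, x, z))
```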

Acknowledgements. My thanks go to Peter Sunehag for useful discussions.

References

  • [AHRU09] M.-R. Amini, A. Habrard, L. Ralaivola, and N. Usunier, editors. Learning from non-IID data: Theory, Algorithms and Practice (LNIDD’09), Bled, Slovenia, 2009.
  • [Bar85] A. R. Barron. Logically Smooth Density Estimation. PhD thesis, Stanford University, 1985.
  • [BC91] A. R. Barron and T. M. Cover. Minimum complexity density estimation. IEEE Transactions on Information Theory, 37:1034–1054, 1991.
  • [BD62] D. Blackwell and L. Dubins. Merging of opinions with increasing information. Annals of Mathematical Statistics, 33:882–887, 1962.
  • [CBL06] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
  • [Doo53] J. L. Doob. Stochastic Processes. Wiley, New York, 1953.
  • [Grü07] P. D. Grünwald. The Minimum Description Length Principle. The MIT Press, Cambridge, 2007.
  • [Hut03] M. Hutter. Convergence and loss bounds for Bayesian sequence prediction. IEEE Transactions on Information Theory, 49(8):2061–2067, 2003.
  • [Hut05] M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin, 2005. 300 pages, http://www.hutter1.net/ai/uaibook.htm.
  • [Hut07] M. Hutter. On universal prediction and Bayesian confirmation. Theoretical Computer Science, 384(1):33–48, 2007.
  • [Jeb03] T. Jebara. Machine Learning: Discriminative and Generative. Springer, 2003.
  • [KL94] E. Kalai and E. Lehrer. Weak and strong merging of opinions. Journal of Mathematical Economics, 23:73–86, 1994.
  • [LSS07] P. Long, R. Servedio, and H. U. Simon. Discriminative learning can succeed where generative learning fails. Information Processing Letters, 2007.