New Error Bounds for Solomonoff Prediction

12/13/1999 ∙ by Marcus Hutter, et al. ∙ IDSIA

Solomonoff sequence prediction is a scheme to predict digits of binary strings without knowing the underlying probability distribution. We call a prediction scheme informed when it knows the true probability distribution of the sequence. Several new relations between universal Solomonoff sequence prediction and informed prediction and general probabilistic prediction schemes will be proved. Among others, they show that the number of errors in Solomonoff prediction is finite for computable distributions, if finite in the informed case. Deterministic variants will also be studied. The most interesting result is that the deterministic variant of Solomonoff prediction is optimal compared to any other probabilistic or deterministic prediction scheme apart from additive square root corrections only. This makes it well suited even for difficult prediction problems, where it does not suffice when the number of errors is minimal to within some factor greater than one. Solomonoff's original bound and the ones presented here complement each other in a useful way.



1 Introduction

Induction is the process of predicting the future from the past or, more precisely, the process of finding rules in (past) data and using these rules to guess future data. The induction principle has been the subject of long philosophical controversies. Highlights are Epicurus' principle of multiple explanations, Occam's razor (simplicity) principle and Bayes' rule for conditional probabilities [2]. In 1964, Solomonoff [8] elegantly unified all these aspects into one formal theory of inductive inference. The theory allows the prediction of digits of binary sequences without knowing their true probability distribution, in contrast to what we call an informed scheme, where the true distribution is known. A first error estimate was given by Solomonoff 14 years later in [9]. It states that the total mean squared distance between the prediction probabilities of Solomonoff and informed prediction is bounded by the Kolmogorov complexity of the true distribution. As a corollary, this theorem ensures that Solomonoff prediction converges to informed prediction for computable sequences in the limit. This is the key result justifying the use of Solomonoff prediction for long sequences of low complexity.

Another natural question is to ask for relations between the total number of expected errors $E_{n\xi}$ in Solomonoff prediction and the total number of prediction errors $E_{n\mu}$ in the informed scheme. Unfortunately, [9] does not bound $E_{n\xi}$ in terms of $E_{n\mu}$ in a satisfactory way. For example, it does not exclude the possibility of an infinite $E_{n\xi}$ even if $E_{n\mu}$ is finite. Here we want to prove upper bounds to $E_{n\xi}$ in terms of $E_{n\mu}$, ensuring as a corollary that the above case cannot happen. On the other hand, our theorem does not say much about the convergence of Solomonoff to informed prediction. So Solomonoff's bounds and ours complement each other in a nice way.

In the preliminary Section 2 we give some notation for strings and conditional probability distributions on strings. Furthermore, we introduce Kolmogorov complexity and the universal probability, where we take care to make the latter a true probability measure.

In Section 3 we define the general probabilistic prediction scheme ($\rho$) and Solomonoff ($\xi$) and informed ($\mu$) prediction as special cases. We will give several error relations between these prediction schemes. A bound for the error difference between Solomonoff and informed prediction is the central result. All other relations are then simple but interesting consequences or known results, such as the Euclidean bound.

In Section 4 we study deterministic variants of Solomonoff ($\Theta_\xi$) and informed ($\Theta_\mu$) prediction. We will give error relations between these prediction schemes similar to those in the probabilistic case. The most interesting consequence is that the $\Theta_\xi$ system is optimal compared to any other probabilistic or deterministic prediction scheme, apart from additive square root corrections only.

In the Appendices A, B and C we prove the inequalities (18), (20) and (26), which are the central parts of the proofs of Theorems 1 and 2.

For an excellent introduction to Kolmogorov complexity and Solomonoff induction one should consult the book of Li and Vitányi [7] or the article [6] for a short course. Historical surveys of inductive reasoning/inference can be found in [1, 10].

2 Preliminaries

Throughout the paper we will consider binary sequences/strings and conditional probability measures on strings.

We denote strings over the binary alphabet $\{0,1\}$ by $x_1x_2\ldots x_n$ with $x_i\in\{0,1\}$ and their lengths by $l(x_1\ldots x_n)=n$. $\epsilon$ is the empty string, $x_{n:m} := x_n x_{n+1}\ldots x_m$ for $n\le m$ and $x_{n:m} := \epsilon$ for $n>m$. Furthermore, $x_{<n} := x_1\ldots x_{n-1}$.

We use Greek letters for probability measures and underline their arguments to indicate that they are probability arguments. Let $\rho_n(x_1\ldots x_n)$ be the probability that an (infinite) sequence starts with $x_1\ldots x_n$. We drop the index $n$ on $\rho$ if it is clear from its arguments:

$\rho(\underline{x_1\ldots x_n}) := \rho_n(x_1\ldots x_n), \qquad \sum_{x_n}\rho(\underline{x_1\ldots x_n}) = \rho(\underline{x_1\ldots x_{n-1}}), \qquad \rho(\underline{\epsilon}) = 1$   (1)

We also need conditional probabilities, derived from Bayes' rule. We prefer a notation which preserves the order of the words, in contrast to the standard notation, which flips it. We extend the definition of $\rho$ to the conditional case with the following convention for its arguments: an underlined argument is a probability variable, and other, non-underlined arguments represent conditions. With this convention, Bayes' rule takes the following form:

$\rho(x_1\ldots x_{n-1}\underline{x_n}) = \frac{\rho(\underline{x_1\ldots x_n})}{\rho(\underline{x_1\ldots x_{n-1}})}, \qquad \rho(\underline{x_1\ldots x_n}) = \prod_{i=1}^{n}\rho(x_1\ldots x_{i-1}\underline{x_i})$   (2)

The first equation states that the probability that a string $x_1\ldots x_{n-1}$ is followed by $x_n$ is equal to the probability that a string starts with $x_1\ldots x_n$ divided by the probability that a string starts with $x_1\ldots x_{n-1}$. The second equation is the first, applied $n$ times.
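Both forms of Bayes' rule are easy to check numerically. The following sketch uses an invented order-1 Markov measure (the function names and the transition probabilities are illustrative, not from the paper):

```python
# Toy "true" measure on binary strings: an order-1 Markov chain.
# mu(x) = probability that an infinite sequence starts with string x.
def mu(x: str) -> float:
    p, prev = 1.0, None
    for b in x:
        if prev is None:
            p *= 0.5                         # first bit is fair
        else:
            p *= 0.9 if b == prev else 0.1   # bits tend to repeat
        prev = b
    return p

def cond(prefix: str, b: str) -> float:
    """First form of Bayes' rule: probability that a string starting
    with `prefix` continues with bit `b`."""
    return mu(prefix + b) / mu(prefix)

def joint_via_chain(x: str) -> float:
    """Second form of Bayes' rule: the joint probability is the
    product of the n conditional probabilities."""
    p = 1.0
    for i, b in enumerate(x):
        p *= cond(x[:i], b)
    return p
```

Applying `joint_via_chain` to any string reproduces `mu` exactly, which is the content of the second equation.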

Let us choose some universal monotone Turing machine $U$ with unidirectional input and output tapes and a bidirectional work tape. We can then define the prefix Kolmogorov complexity [3, 5] as the length of the shortest program $p$ for which $U$ outputs the string $x$:

$K(x) := \min_p\{\,l(p) : U(p) = x\,\}$   (3)

The universal semi-measure $\xi(\underline{x})$ is defined as the probability that the output of the universal Turing machine $U$ starts with $x$ when provided with fair coin flips on the input tape. It is easy to see that this is equivalent to the formal definition

$\xi(\underline{x}) := \sum_{p\,:\,U(p)=x*} 2^{-l(p)}$   (4)

where the sum is over the minimal programs $p$ for which $U$ outputs a string starting with $x$ (denoted by $x*$); $U(p)$ might be non-terminating. $\xi$ has the important universality property [12] that it majorizes every computable probability measure $\rho$ up to a multiplicative factor depending only on $\rho$ but not on $x$:

$\xi(\underline{x}) \ge 2^{-K(\rho)}\cdot\rho(\underline{x})$   (5)

The Kolmogorov complexity of a function like $\rho$ is defined as the length of the shortest self-delimiting coding of a Turing machine computing this function. Unfortunately, $\xi$ itself is not a probability measure on the binary strings. We have $\xi(\underline{x0})+\xi(\underline{x1})<\xi(\underline{x})$ because there are programs $p$ which output just $x$, followed neither by $0$ nor by $1$; they just stop after printing $x$ or continue forever without any further output. This drawback can easily be corrected [9]. (Footnote 1: Another popular way is to keep $\xi$ unnormalized and sacrifice some of the axioms of probability theory. The reason for doing this is that $\xi$, although not computable [7, 9], is at least enumerable. On the other hand, we are interested in conditional probabilities, derived from $\xi$, which are no longer enumerable anyway, so there is no reason for us to stick to $\xi$. The normalized version is still computable in the limit, or approximable.) Let us define the universal probability measure $\xi_{\rm norm}$ by defining first the conditional probabilities

$\xi_{\rm norm}(x_1\ldots x_{n-1}\underline{x_n}) := \frac{\xi(\underline{x_1\ldots x_n})}{\xi(\underline{x_1\ldots x_{n-1}0})+\xi(\underline{x_1\ldots x_{n-1}1})}$   (6)

and then by using (2) to get $\xi_{\rm norm}(\underline{x_1\ldots x_n})$. It is easily verified by induction that $\xi_{\rm norm}$ is indeed a probability measure and universal:

$\xi_{\rm norm}(\underline{x}) \ge \xi(\underline{x}) \ge 2^{-K(\rho)}\cdot\rho(\underline{x})$   (7)

The latter inequality follows from $\xi_{\rm norm}\ge\xi$ and (5). The universality property (7) is all we need to know about $\xi_{\rm norm}$ in the following; for brevity we write $\xi$ for $\xi_{\rm norm}$ from now on.
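The normalization step can be illustrated on a toy semimeasure. The defect measure `nu` and the halting fraction `STOP` below are invented for illustration; only the renormalization pattern mirrors (6) and (2):

```python
STOP = 0.2  # invented: fraction of probability mass lost after each bit

def nu(x: str) -> float:
    """Toy semimeasure: like xi, it loses mass, nu(x0) + nu(x1) < nu(x),
    as if some programs printed x and then stopped."""
    return ((1 - STOP) * 0.5) ** len(x)

def nu_norm_cond(prefix: str, b: str) -> float:
    """Normalized conditional, cf. (6): rescale the two one-bit
    continuations so that they sum to one."""
    z0, z1 = nu(prefix + "0"), nu(prefix + "1")
    return (z0 if b == "0" else z1) / (z0 + z1)

def nu_norm(x: str) -> float:
    """Product of normalized conditionals, cf. (2): a true probability
    measure satisfying nu_norm(x) >= nu(x)."""
    p = 1.0
    for i, b in enumerate(x):
        p *= nu_norm_cond(x[:i], b)
    return p
```

Summing over all strings of a fixed length shows that `nu` loses mass while `nu_norm` sums to one and dominates `nu` pointwise, the analogue of the first inequality in (7).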

3 Probabilistic Sequence Prediction

Every inductive inference problem can be brought into the following form: given a string $x_1\ldots x_{n-1}$, give a guess for its continuation $x_n$. We will assume that the strings which have to be continued are drawn according to a probability distribution $\mu$. (Footnote 2: This probability measure might be $1$ for some sequence $x_1x_2\ldots$ and $0$ for all others. In this case, $K(\mu)$ is equal to the complexity of this sequence, up to terms of order 1.) In this section we consider probabilistic predictors of the next bit of a string. So let $\mu(\underline{x_1\ldots x_n})$ be the true probability measure of string $x_1\ldots x_n$, and $\rho(x_1\ldots x_{n-1}\underline{x_n})$ the probability that the system predicts $x_n$ as the successor of $x_1\ldots x_{n-1}$. We are not interested here in the probability of the next bit itself; we want our system to output either $0$ or $1$. Probabilistic strategies are useful in game theory, where they are called mixed strategies. We keep $\mu$ fixed and compare different $\rho$. An interesting quantity is the probability of making an error when predicting $x_n$, given $x_1\ldots x_{n-1}$. If $x_n=1$, the probability of our system predicting $0$ (making an error) is $\rho(x_1\ldots x_{n-1}\underline{0})$; this case happens with probability $\mu(x_1\ldots x_{n-1}\underline{1})$. Analogously for $x_n=0$. So the probability of making a wrong prediction in the $n$-th step ($x_1\ldots x_{n-1}$ fixed) is

$e_\rho(x_1\ldots x_{n-1}) := \sum_{x_n}\mu(x_1\ldots x_{n-1}\underline{x_n})\big(1-\rho(x_1\ldots x_{n-1}\underline{x_n})\big)$   (8)

The total $\mu$-expected number of errors in the first $n$ predictions is

$E_{n\rho} := \sum_{i=1}^{n}\sum_{x_1\ldots x_{i-1}}\mu(\underline{x_1\ldots x_{i-1}})\,e_\rho(x_1\ldots x_{i-1})$   (9)

If $\mu$ is known, a natural choice for $\rho$ is $\rho=\mu$. This is what we call an informed prediction scheme. If the probability of $1$ is high (low), the system predicts $1$ with high (low) probability. If $\mu$ is unknown, one could try the universal distribution $\xi$ for $\rho$ as defined in (4) and (6). This is known as Solomonoff prediction [8].
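For an i.i.d. environment the error probability (8) and the expected error count (9) collapse to simple closed forms. A minimal sketch, assuming an invented bias of 0.7 for illustration:

```python
MU1 = 0.7  # assumed i.i.d. environment: each bit is 1 with probability 0.7

def err_prob(rho1: float) -> float:
    """Per-step error probability, cf. (8), for a predictor that outputs
    1 with probability rho1: mu(1)*rho(0) + mu(0)*rho(1)."""
    return MU1 * (1 - rho1) + (1 - MU1) * rho1

def expected_errors(n: int, rho1: float) -> float:
    """Total mu-expected errors, cf. (9); for an i.i.d. mu the per-step
    error probability is the same in every step, so the sum telescopes."""
    return n * err_prob(rho1)
```

The informed scheme `rho1 = 0.7` has error probability 2·0.7·0.3 = 0.42 per step, while always predicting 1 errs only with probability 0.3: a first hint at why the deterministic variants of Section 4 are stronger.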

What we are most interested in is an upper bound for the $\mu$-expected number of errors $E_{n\xi}$ of the $\xi$-predictor. One might also be interested in the difference of the prediction probabilities of the $\xi$- and $\mu$-predictors at step $i$, or in the total absolute difference to some power $k$ ($k$-norm):

$\sum_{i=1}^{n}\sum_{x_1\ldots x_{i-1}}\mu(\underline{x_1\ldots x_{i-1}})\,\big|\xi(x_1\ldots x_{i-1}\underline{1})-\mu(x_1\ldots x_{i-1}\underline{1})\big|^k$   (10)

For $k=2$ there is the well-known result [9]

$\sum_{i=1}^{n}\sum_{x_1\ldots x_{i-1}}\mu(\underline{x_1\ldots x_{i-1}})\,\big(\xi(x_1\ldots x_{i-1}\underline{1})-\mu(x_1\ldots x_{i-1}\underline{1})\big)^2 \;\le\; \tfrac{\ln 2}{2}\,K(\mu)$   (11)

One reason to directly study relations between $E_{n\xi}$ and $E_{n\mu}$ is that from (11) alone it does not follow that $E_{n\xi}$ is finite if $E_{n\mu}$ is finite. Assume that we could choose $\xi$ such that the per-step differences $|\xi(x_1\ldots x_{i-1}\underline{1})-\mu(x_1\ldots x_{i-1}\underline{1})|$ are of order $1/i$ while $E_{n\mu}$ remains bounded. Then $E_{n\mu}$ would be finite, but $E_{n\xi}$ would be infinite, without violating (11). There are other theorems, the most prominent being that $\xi(x_1\ldots x_{n-1}\underline{x_n})/\mu(x_1\ldots x_{n-1}\underline{x_n})\to 1$ with $\mu$-probability 1 (see [7], page 332). However, neither of them settles the above question. In the following we will show that a finite $E_{n\mu}$ causes a finite $E_{n\xi}$.

Let us define the Kullback-Leibler distance [4] or relative entropy between $\mu$ and $\xi$:

$h_i(x_1\ldots x_{i-1}) := \sum_{x_i}\mu(x_1\ldots x_{i-1}\underline{x_i})\ln\frac{\mu(x_1\ldots x_{i-1}\underline{x_i})}{\xi(x_1\ldots x_{i-1}\underline{x_i})}$   (12)

$H_n$ is then defined as the sum over the $\mu$-expectations of the $h_i$, for which the following can be shown [9]:

$H_n := \sum_{i=1}^{n}\sum_{x_1\ldots x_{i-1}}\mu(\underline{x_1\ldots x_{i-1}})\,h_i(x_1\ldots x_{i-1}) \;=\; \sum_{x_1\ldots x_n}\mu(\underline{x_1\ldots x_n})\ln\frac{\mu(\underline{x_1\ldots x_n})}{\xi(\underline{x_1\ldots x_n})} \;\le\; \ln 2\cdot K(\mu)$   (13)

In the first equality we have inserted (12) and used Bayes' rule $\mu(x_1\ldots x_{i-1}\underline{x_i})=\mu(\underline{x_1\ldots x_i})/\mu(\underline{x_1\ldots x_{i-1}})$. Due to (1) we can replace the expectation over $x_1\ldots x_i$ by an expectation over full strings $x_1\ldots x_n$, as the argument of the logarithm is independent of $x_{i+1}\ldots x_n$. The sum over $i$ can then be exchanged with the sum over $x_1\ldots x_n$ and transforms into a product inside the logarithm. In the last equality we have used the second form of Bayes' rule (2) for $\mu$ and $\xi$. If we use the universality (7) of $\xi$, i.e. $\ln\big(\mu(\underline{x})/\xi(\underline{x})\big)\le\ln 2\cdot K(\mu)$, the final inequality in (13) follows, which is the basis of all error estimates.
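The telescoping identity behind (13), that the per-step expected relative entropies sum exactly to the relative entropy of the joint distributions, can be checked numerically for small $n$. The two conditional rules below are arbitrary illustrative choices:

```python
from itertools import product
from math import log

mu_next = lambda prefix: 0.7                          # invented true P(1 | prefix)
xi_next = lambda prefix: 0.5 if not prefix else 0.6   # invented competitor

def joint(p_next, x: str) -> float:
    """Joint probability of string x from a conditional rule, cf. (2)."""
    p = 1.0
    for i, b in enumerate(x):
        q = p_next(x[:i])
        p *= q if b == "1" else 1 - q
    return p

n = 5
strings = ["".join(t) for t in product("01", repeat=n)]

# Relative entropy between the joint distributions over length-n strings.
total_kl = sum(joint(mu_next, x) * log(joint(mu_next, x) / joint(xi_next, x))
               for x in strings)

# Sum over steps of the mu-expected per-step relative entropies, cf. (12).
H_n = 0.0
for i in range(n):
    for t in product("01", repeat=i):
        prefix = "".join(t)
        w = joint(mu_next, prefix)       # mu-probability of the prefix
        pm, px = mu_next(prefix), xi_next(prefix)
        H_n += w * (pm * log(pm / px) + (1 - pm) * log((1 - pm) / (1 - px)))
```

The two quantities agree up to floating-point error, which is the chain-rule step used in the derivation of (13).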

We now come to our first theorem:

Theorem 1. Let there be binary sequences $x_1x_2\ldots$ drawn with probability $\mu(\underline{x_1\ldots x_n})$ for the first $n$ bits. A $\rho$-system predicts by definition $x_n$ from $x_1\ldots x_{n-1}$ with probability $\rho(x_1\ldots x_{n-1}\underline{x_n})$. $e_\rho(x_1\ldots x_{n-1})$ is the error probability in the $n$-th prediction (8) and $E_{n\rho}$ is the $\mu$-expected total number of errors in the first $n$ predictions (9). The following error relations hold between universal Solomonoff ($\xi$), informed ($\mu$) and general ($\rho$) predictions:

where $H_n$ is the relative entropy (13) and $K(\mu)$ is the Kolmogorov complexity (3) of $\mu$.

Corollary 1. For computable $\mu$, i.e. for $K(\mu)<\infty$, the following statements immediately follow from Theorem 1:

The bound on $E_{n\xi}$ in terms of $E_{n\mu}$ and $H_n$ is the central new result. It is best illustrated for computable $\mu$ by the corollary. The convergence statements of the corollary follow directly from it and from the finiteness of $H_n$; the remaining statement follows from the lower bound for general $\rho$.

First of all, the bound ensures finiteness of the number of errors of Solomonoff prediction if the informed prediction makes only a finite number of errors. This is especially the case for deterministic $\mu$, as $E_{n\mu}=0$ in this case. (Footnote 3: We call a probability measure deterministic if it is 1 for exactly one sequence and 0 for all others.) Hence, Solomonoff prediction makes only a finite number of errors on computable sequences. For more complicated probabilistic environments, where even the ideal informed system makes an infinite number of errors, the bound ensures that the error excess of Solomonoff prediction is only of square root order. This ensures that the error densities of both systems converge to each other, but it actually says more: it ensures that the quotient $E_{n\xi}/E_{n\mu}$ converges to 1, and it also gives the speed of this convergence.

The Euclidean bound (11) is well known [9]. It is the only upper bound in Theorem 1 which remains finite when $E_{n\mu}\to\infty$ for $n\to\infty$. It ensures convergence of the individual prediction probabilities of $\xi$ to those of $\mu$. Another relation shows that the $\xi$ system makes at least half of the errors of the $\mu$ system. The lower bounds are further improved by a relation which, together with the upper bound, says that the excess of $\xi$ errors as compared to $\mu$ errors is given by $H_n$ apart from square root corrections; the excess is neither smaller nor larger. This result is plausible, since knowing $\mu$ means additional information, which saves making some of the errors. The information content of $\mu$ (relative to $\xi$) is quantified in terms of the relative entropy $H_n$.

The last relation states that no prediction scheme can have less than half of the errors of the $\mu$ system, whatever we take for $\rho$. This ensures the optimality of $\mu$ prediction apart from a factor of 2. Combining this with the central bound ensures optimality of Solomonoff prediction, apart from a factor of 2 and additive (inverse) square root corrections. Note that even when comparing $\xi$ with $\rho$, the computability of $\mu$ is what counts, whereas $\rho$ might be any, even an uncomputable, probabilistic predictor. The optimality within a factor of 2 might be sufficient for some applications, especially for finite $E_{n\mu}$, but is unacceptable for others. More about this in the next section, where we consider deterministic prediction, for which no factor 2 occurs.

Proof of Theorem 1. The first inequality follows directly from the definitions of $e_\xi$ and $e_\mu$ and the triangle inequality. For the second inequality, let us start more modestly and try to find constants $A$ and $B$ which satisfy the linear inequality

(14)

If we could show

(15)

for all $i$ and all $x_1\ldots x_{i-1}$, (14) would follow immediately by summation and the definitions of $E_{n\xi}$, $E_{n\mu}$ and $H_n$. With $i$ and $x_1\ldots x_{i-1}$ fixed now, we abbreviate

$y := \mu(x_1\ldots x_{i-1}\underline{1}), \qquad z := \xi(x_1\ldots x_{i-1}\underline{1})$   (16)

The various error functions can then be expressed in terms of $y$ and $z$:

$e_\mu = 2y(1-y), \qquad e_\xi = y(1-z)+(1-y)z, \qquad h = y\ln\frac{y}{z}+(1-y)\ln\frac{1-y}{1-z}$   (17)

Inserting this into (15) we get

$y(1-z)+(1-y)z-2y(1-y) \;\le\; A\Big(y\ln\frac{y}{z}+(1-y)\ln\frac{1-y}{1-z}\Big)+2By(1-y)$   (18)

In Appendix A we will show that this inequality is true for a suitable choice of $A$ depending on $B$. Inequality (14) therefore holds for any such pair, so we may minimize the r.h.s. of (14) w.r.t. $B$. The minimum leads to the upper bound

which completes the proof of this bound.

Bound (11) is well known [9]. It is already linear and is proved by showing the corresponding pointwise inequality. Inserting the abbreviations (16), we get

$2(y-z)^2 \;\le\; y\ln\frac{y}{z}+(1-y)\ln\frac{1-y}{1-z}$   (19)

This lower bound for the Kullback-Leibler distance is well known [4].
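This is a Pinsker-type inequality for Bernoulli distributions and can be checked numerically over a grid of interior points (the grid resolution is an arbitrary choice):

```python
from math import log

def h(y: float, z: float) -> float:
    """Relative entropy between Bernoulli(y) and Bernoulli(z), in nats."""
    return y * log(y / z) + (1 - y) * log((1 - y) / (1 - z))

# Check h(y, z) >= 2*(y - z)**2 on interior grid points
# (the boundary cases y, z in {0, 1} need the usual limit conventions).
grid = [i / 100 for i in range(1, 100)]
ok = all(h(y, z) >= 2 * (y - z) ** 2 - 1e-12 for y in grid for z in grid)
```

The check passes on the whole grid, in line with the classical bound.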

The relation $e_\xi\ge\frac12 e_\mu$ does not involve $h$ at all and is elementary. It is reduced to $y(1-z)+(1-y)z\ge y(1-y)$, equivalent to $y^2-2yz+z\ge 0$, equivalent to $(y-z)^2+z(1-z)\ge 0$, which is obviously true.

The second inequality of the remaining relation is trivial, and the first is proved similarly to the central bound. Again we start with a linear inequality, which is further reduced to a pointwise one. Inserting the abbreviations (16) we get

(20)

In Appendix B this inequality is shown to hold for suitable $A$ and $B$ in the relevant range. If we insert them and minimize w.r.t. $B$, we obtain the claimed upper bound, which completes the proof of this relation.

The remaining statements are direct consequences of the relations already proved. This completes the proof of Theorem 1.

4 Deterministic Sequence Prediction

In the last section several relations were derived between the number of errors of the universal $\xi$-system, the informed $\mu$-system and arbitrary $\rho$-systems. All of them were probabilistic predictors in the sense that, given $x_1\ldots x_{n-1}$, they output $1$ or $0$ with certain probabilities. In this section we are interested in systems whose output on input $x_1\ldots x_{n-1}$ is deterministically $1$ or $0$. Again we can distinguish between the cases where the true distribution is known or unknown. In the probabilistic scheme we studied the $\xi$ and the $\mu$ system. Given any probabilistic predictor $\rho$, it is easy to construct a deterministic predictor $\Theta_\rho$ from it in the following way: if the probability of $\rho$ predicting $1$ is larger than $\frac12$, the deterministic predictor always chooses $1$; analogously for $0$. We define (Footnote 4: All results will be independent of the choice at probability $\frac12$, so one might choose $1$ for definiteness.)

$\Theta_\rho(x_1\ldots x_{n-1}\underline{1}) := \begin{cases}1 & \text{if } \rho(x_1\ldots x_{n-1}\underline{1}) > \frac12 \\ 0 & \text{if } \rho(x_1\ldots x_{n-1}\underline{1}) < \frac12\end{cases} \qquad \Theta_\rho(x_1\ldots x_{n-1}\underline{0}) := 1-\Theta_\rho(x_1\ldots x_{n-1}\underline{1})$

Note that every deterministic predictor can be written in the form $\Theta_\rho$ for some $\rho$, and that although $\Theta_\rho$, extended to strings via Bayes' rule (2), takes only values in $\{0,1\}$, it may still be interpreted as a probability measure. Deterministic prediction is thus just a special case of probabilistic prediction. The two models $\Theta_\xi$ and $\Theta_\mu$ will be studied now.
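The construction of $\Theta_\rho$ from a probabilistic predictor is a one-liner. A sketch with invented function names; the tie at exactly $\frac12$ is broken in favor of $1$, which by the footnote above does not affect any result:

```python
from typing import Callable

def make_theta(rho1: Callable[[str], float]) -> Callable[[str], int]:
    """Given rho1(prefix) = probability that rho predicts 1, return the
    deterministic predictor Theta_rho that always outputs the more
    likely guess (ties broken in favor of 1)."""
    return lambda prefix: 1 if rho1(prefix) >= 0.5 else 0

# Example: an informed predictor for an i.i.d. coin with P(1) = 0.7
# always predicts 1, whatever the observed prefix is.
theta_mu = make_theta(lambda prefix: 0.7)
```

Any deterministic predictor arises this way, e.g. from a `rho1` that itself only takes values 0 and 1.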

Analogously to the last section, we draw binary strings randomly with distribution $\mu$ and define the probability $e_{\Theta_\rho}(x_1\ldots x_{n-1})$ that the $\Theta_\rho$ system makes an erroneous prediction in the $n$-th step, and the total $\mu$-expected number of errors $E_{n\Theta_\rho}$ in the first $n$ predictions, as

$e_{\Theta_\rho}(x_1\ldots x_{n-1}) := \sum_{x_n}\mu(x_1\ldots x_{n-1}\underline{x_n})\big(1-\Theta_\rho(x_1\ldots x_{n-1}\underline{x_n})\big), \qquad E_{n\Theta_\rho} := \sum_{i=1}^{n}\sum_{x_1\ldots x_{i-1}}\mu(\underline{x_1\ldots x_{i-1}})\,e_{\Theta_\rho}(x_1\ldots x_{i-1})$   (21)

The definitions (12) and (13) of $h_i$ and $H_n$ remain unchanged ($\xi$ is not replaced by $\Theta_\xi$).

The following relations will be derived:

Theorem 2. Let there be binary sequences drawn with probability $\mu(\underline{x_1\ldots x_n})$ for the first $n$ bits. A $\rho$-system predicts by definition $x_n$ from $x_1\ldots x_{n-1}$ with probability $\rho(x_1\ldots x_{n-1}\underline{x_n})$. A deterministic $\Theta_\rho$ system always predicts $1$ if $\rho(x_1\ldots x_{n-1}\underline{1})>\frac12$ and $0$ otherwise. If $e_{\Theta_\rho}$ is the error probability in the $n$-th prediction and $E_{n\Theta_\rho}$ the total $\mu$-expected number of errors in the first $n$ predictions (21), the following relations hold:

where $H_n$ is the relative entropy (13), which is finite for computable $\mu$.

No other useful bounds have been found, especially no bounds analogous to the Euclidean bound (11).

Corollary 2. For computable $\mu$, i.e. for $K(\mu)<\infty$, the following statements immediately follow from Theorem 2:

Most of what we said in the probabilistic case remains valid here, as the Theorems and Corollaries 1 and 2 parallel each other. For this reason we will only highlight the differences.

The last inequality of Theorem 2 is the central new result in the deterministic case. Again, it is illustrated in the corollary, which follows trivially from Theorem 2.

From the first relation we see that $\Theta_\mu$ is the best possible prediction scheme, compared to any other probabilistic or deterministic prediction $\rho$: the error expectation is smaller in every single step, and hence the total number of errors is also. This itself is not surprising and is nearly obvious, as the $\Theta_\mu$ system always predicts the bit of highest probability. So, for known $\mu$, the $\Theta_\mu$ system should always be preferred to any other prediction scheme, even to the informed probabilistic $\mu$ system.

Combining the relations leads to a bound on the number of prediction errors of the deterministic variant $\Theta_\xi$ of Solomonoff prediction. For computable $\mu$, no prediction scheme can have fewer errors than the $\Theta_\xi$ system, whatever we take for $\rho$, apart from an additive correction of square root order. No factor 2 occurs as in the probabilistic case. Together with the quick convergence stated in the corollary, the $\Theta_\xi$ model should be sufficiently good for many applications.

Example. Let us consider a critical example. We want to predict the outcomes of a die colored black (=0) and white (=1). Two faces are white and the other four are black. The game becomes more interesting with a second, complementary die with two black and four white faces. The dealer who throws the dice uses one or the other die according to some deterministic rule. The stake is $3 in every round; our return is $5 for every correct prediction.

The coloring of the dice and the selection strategy of the dealer unambiguously determine $\mu$. The probability of white, $\mu(x_1\ldots x_{n-1}\underline{1})$, is $\frac13$ for die 1 and $\frac23$ for die 2. If we use a prediction scheme $\rho$, we will have made $E_{n\rho}$ incorrect and $n-E_{n\rho}$ correct predictions in the first $n$ rounds. The expected profit will be

$P_{n\rho} = 5(n-E_{n\rho}) - 3n = 2n - 5E_{n\rho}$ dollars   (22)

The winning threshold $P_{n\rho}>0$ is reached if $E_{n\rho}<\frac25 n$.

If we knew $\mu$, we could use the best possible prediction scheme $\Theta_\mu$. The error (21) and profit (22) expectations per round in this case are

$e_{\Theta_\mu} = \tfrac13, \qquad \frac{P_{n\Theta_\mu}}{n} = 2 - 5\cdot\tfrac13 = \tfrac13$ dollars   (23)

so we can make money from this game. If we predicted according to the probabilistic scheme $\mu$ (8), we would lose money in the long run: the error probability per round is $e_\mu = 2\cdot\frac13\cdot\frac23 = \frac49 > \frac25$, giving an expected profit of $2-5\cdot\frac49 = -\frac29$ dollars per round.

In the more interesting case where we do not know $\mu$, we can use Solomonoff prediction $\xi$ or its deterministic variant $\Theta_\xi$. From Corollaries 1 and 2 we know that $E_{n\xi}/E_{n\mu}\to 1$ and $E_{n\Theta_\xi}/E_{n\Theta_\mu}\to 1$, so asymptotically the $\xi$ system provides the same profit as the $\mu$ system, and the $\Theta_\xi$ system the same as the $\Theta_\mu$ system. Using the $\xi$ system is therefore a losing strategy, while using the $\Theta_\xi$ system is a winning strategy. Let us estimate the number of rounds we have to play before reaching the winning zone with the $\Theta_\xi$ system. We are in the winning zone if $E_{n\Theta_\xi}<\frac25 n$, which by Theorem 2 is guaranteed once the bound on $E_{n\Theta_\xi}$ in terms of $E_{n\Theta_\mu}=\frac13 n$ and $H_n$ falls below $\frac25 n$. Solving w.r.t. $n$ we get

Using $H_n\le\ln 2\cdot K(\mu)$ and (23), we expect to be in the winning zone for

If the die selection strategy reflected in $\mu$ is not too complicated, the $\Theta_\xi$ prediction system reaches the winning zone after a few thousand rounds. The number of rounds is not really small, because the expected profit per round is one order of magnitude smaller than the return. This leads to a constant of two orders of magnitude in front of $K(\mu)$. Stated otherwise, it is due to the large stochastic noise, which makes it difficult to extract the signal, i.e. the structure of the rule $\mu$. Furthermore, this is only a bound for the turnaround value of $n$; the true expected turnaround might be smaller.
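A Monte Carlo sketch of the dice game confirms the per-round expectations of (23): roughly $+\frac13$ dollars per round for majority-bit prediction and roughly $-\frac29$ for probabilistic matching. The alternating dealer rule below is an invented stand-in for "some simple deterministic rule":

```python
import random

random.seed(0)
STAKE, RETURN, ROUNDS = 3, 5, 100_000

def white_prob(t: int) -> float:
    """Hypothetical dealer rule: die 1 (P(white)=1/3) in even rounds,
    die 2 (P(white)=2/3) in odd rounds."""
    return 1 / 3 if t % 2 == 0 else 2 / 3

profit_det = profit_prob = 0.0
for t in range(ROUNDS):
    p = white_prob(t)
    outcome = 1 if random.random() < p else 0      # 1 = white face shown
    guess_det = 1 if p > 0.5 else 0                # Theta_mu: majority bit
    guess_prob = 1 if random.random() < p else 0   # mu: probability matching
    profit_det += RETURN * (guess_det == outcome) - STAKE
    profit_prob += RETURN * (guess_prob == outcome) - STAKE
```

With these numbers `profit_det / ROUNDS` comes out close to $\frac13$ and `profit_prob / ROUNDS` close to $-\frac29$, so only the deterministic scheme is a winning strategy, as the text asserts.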

However, for every game for which there exists a winning strategy with expected profit growing proportionally to $n$, the $\Theta_\xi$ system is guaranteed to get into the winning zone for sufficiently large $n$. This is not guaranteed for the probabilistic $\xi$-system, due to the factor 2 in the bound of Corollary 1.

Proof of Theorem 2. The method of proof is the same as in the previous section, so we keep it short. With the abbreviations (16) we can write $e_{\Theta_\mu}$ and $e_{\Theta_\rho}$ in the forms

$e_{\Theta_\mu} = \min\{y,\,1-y\}, \qquad e_{\Theta_\rho} = \begin{cases} y & \text{if } \rho(x_1\ldots x_{i-1}\underline{1})<\frac12 \\ 1-y & \text{if } \rho(x_1\ldots x_{i-1}\underline{1})>\frac12 \end{cases}$   (24)

With these abbreviations (and with $z$ now standing for $\rho(x_1\ldots x_{i-1}\underline{1})$), the first relation is equivalent to $\min\{y,1-y\}\le y(1-z)+(1-y)z$, which is true because the minimum of two numbers is always smaller than any of their weighted averages.
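The "minimum is at most any weighted average" step can be checked on a grid, with $y=\mu(\ldots\underline{1})$ and $z=\rho(\ldots\underline{1})$ as in (16) (the grid resolution is arbitrary):

```python
# e_theta_mu = min(y, 1-y); the probabilistic error y*(1-z) + (1-y)*z
# is a weighted average of y and 1-y with weights (1-z) and z,
# hence never smaller than the minimum of the two.
grid = [i / 50 for i in range(51)]
ok = all(min(y, 1 - y) <= y * (1 - z) + (1 - y) * z + 1e-12
         for y in grid for z in grid)
```

Equality is approached when $z$ puts all its weight on the more probable bit, i.e. exactly for the $\Theta_\mu$ predictor.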

The first inequality and the equality of the next relation follow directly from the first relation. To prove the last inequality, we start once again with a linear model

(25)

Inserting the definitions of $e_{\Theta_\xi}$ and $e_{\Theta_\mu}$, using (24), and omitting the sums, we have to find $A$ and $B$ which satisfy

(26)

In Appendix C we will show that the inequality is satisfied for suitable $A$ and $B$. Inserting them into (25) and minimizing the r.h.s. w.r.t. $B$, we get the upper bound

The remaining statement is a direct consequence of the relations already proved. This completes the proof of Theorem 2.

5 Conclusions

We have proved several new error bounds for Solomonoff prediction in terms of informed prediction and in terms of general prediction schemes. Theorem 1 and Corollary 1 summarize the results in the probabilistic case, Theorem 2 and Corollary 2 those in the deterministic case. We have shown that in the probabilistic case the number of errors $E_{n\xi}$ of Solomonoff prediction is asymptotically bounded by twice the number of errors of any other prediction scheme. In the deterministic variant of Solomonoff prediction this factor 2 is absent. It is well suited even for difficult prediction problems, as its error probability converges rapidly to the minimal possible error probability $e_{\Theta_\mu}$.

Acknowledgments:

I thank Ray Solomonoff and Jürgen Schmidhuber for proofreading this work and for numerous discussions.

Appendix A Proof of Inequality (18)

(Footnote 5: The proofs are a bit sketchy. We will be a little sloppy about boundary values and about strict versus non-strict inequalities approached at the boundary. All subtleties have been checked and do not spoil the results.)

With the definition of $f(y,z)$ as the difference between the r.h.s. and the l.h.s. of (18), we have to show $f\ge 0$ for $0\le y\le 1$, $0\le z\le 1$ and suitable $A$ and $B$. We do this by showing that $f\ge 0$ at all extremal values, 'at' the boundaries and at non-analytic points. At the boundaries $f\ge 0$ holds if the constants are chosen appropriately; moreover, $f\ge 0$ holds at the non-analytic point. The extremal condition for $z$ (keeping $y$ fixed) leads to a relation between $y$ and $z$. Inserting it into the definition of $f$ and omitting a positive term, we obtain a reduced expression, so the problem is reduced to showing its non-negativity. The reduced expression is quadratic and symmetric in its argument, with an interior maximum; thus it is sufficient to check the boundary values, which are non-negative for the chosen constants. Putting everything together, we have proved that