# How to lose at Monte Carlo: a simple dynamical system whose typical statistical behavior is non computable

We consider the simplest non-linear discrete dynamical systems, given by the logistic maps f_a(x)=ax(1-x) of the interval [0,1]. We show that there exist real parameters a∈ (0,4) for which almost every orbit of f_a has the same statistical distribution in [0,1], but this limiting distribution is not Turing computable. In particular, the Monte Carlo method cannot be applied to study these dynamical systems.


## 1. Introduction

For all practical purposes, the world around us is not a deterministic one. Even if a simple physical system can be described deterministically, say by the laws of Newtonian mechanics, the differential equations expressing these laws typically cannot be solved explicitly. This means that predicting the exact evolution of the system is impossible. A classical example is the famous 3-body problem, which asks to describe the evolution of a system in which three celestial bodies (the “Earth”, the “Sun”, and the “Moon”) interact with each other via Newton’s force of gravity. Computers are generally not of much help either: of course, a system of ODEs can be solved numerically, but the solution will inevitably come with an error due to round-offs. Commonly, solutions of dynamical systems are very sensitive to such small errors (the phenomenon known as “Chaos”), so the same computation can give wildly different numerical results.

An extreme example of the above difficulties is the art of weather prediction. A realistic weather model will have such a large number of inputs and parameters that simply running a numerical computation will require a massive amount of computing resources; it is, of course, extremely sensitive to errors of computation. A classical case in point is the Lorenz system suggested by the meteorologist Edward Lorenz in 1963 [Lor63]. It has only three variables and is barely non-linear (just enough not to have an explicit solution), and nevertheless it possesses a chaotic attractor [Tuc02] – one of the first such examples in the history of mathematics – so deterministic weather predictions even in such a simplistic model are practically impossible.

Of course, this difficulty is well known to practitioners, and yet weather predictions are somehow made, and sometimes are even accurate. They are made in the language of statistics (e.g. there is a 40% chance of rain tomorrow), and are based on what is broadly known as the Monte Carlo technique, pioneered by Ulam and von Neumann in 1946 [URvN47, MS49, Met87]. Informally speaking, we can throw random darts to select a large number of initial values; run our simulation for the desired duration for each of them; and then statistically average the outcomes. We then expect these averages to reflect the true statistics of our system. To set the stage more formally, let us assume that we have a discrete-time dynamical system

 f: D → D, where D is a bounded domain in ℝ^n,

that we would like to study. Let x̄_1, …, x̄_n ∈ D be randomly chosen points, for some n ∈ ℕ, and consider the probability measure

 (1.1)  μ_{k,n} = (1/(kn)) ∑_{l=1}^{n} ∑_{m=1}^{k} δ_{f^{∘m}(x̄_l)},

where δ_x denotes the delta-mass at the point x. The mapping f can either be given by mathematical formulas, or stand for a computer program we wrote to simulate our dynamical system. The standard postulate is then that, as k → ∞, the probabilities μ_{k,n} converge to a limiting statistical distribution that we can use to make meaningful long-term statistical predictions about our system.
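Concretely, the averaging in (1.1) is straightforward to carry out on a computer. The following sketch is our own illustration (the parameter value a = 3.2 and all names are ours, not from the text): it throws n random darts and iterates each one k times.

```python
import random

def logistic(a):
    """Return the logistic map x -> a*x*(1-x) on [0,1]."""
    return lambda x: a * x * (1.0 - x)

def monte_carlo_samples(f, n=100, k=1000, seed=0):
    """Sample the empirical measure mu_{k,n} of (1.1): choose n random
    initial points ("darts"), iterate each one k times, and return all
    k*n visited points; each carries mass 1/(k*n)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        x = rng.random()              # a random dart in [0,1]
        for _ in range(k):
            x = f(x)                  # the next iterate f^m(x_l)
            samples.append(x)
    return samples

# For a = 3.2 almost every orbit is attracted to a 2-cycle, so the
# empirical measure concentrates near the two points of the cycle.
samples = monte_carlo_samples(logistic(3.2))
```

Averaging any observable over `samples` approximates its expected value under μ_{k,n}; the point of the paper is that for some parameters no algorithm can tell what these averages converge to.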

Let us say that a measure μ on D is a physical measure of f if its basin – that is, the set of initial values x̄ for which the weak limit of μ_{k,1} (with the single dart x̄_1 = x̄) equals μ – has positive Lebesgue measure. This means that the limiting statistics of such points will appear in the averages (1.1) with a non-zero probability. If there is a unique physical measure in our dynamical system, then one random dart in (1.1) will suffice. Of course, there are systems with many physical measures. For instance, Newhouse [New74] showed that a polynomial map in dimension two can have infinitely many attracting basins, on each of which the dynamics will converge to a different stable periodic regime. This in itself, however, is not necessarily an obstacle to the Monte-Carlo method, and indeed, the empirical belief is that it still succeeds in these cases.

Our results are most surprising in view of the above computational statistical paradigm. Namely, we consider the simplest examples of non-linear dynamical systems: quadratic maps of the interval of the form

 f_a(x) = ax(1−x),  a ∈ (0,4]

and find an uncountable set of values of the parameter a for which:

1. there exists a unique physical measure μ_a, and its basin has full Lebesgue measure;

2. the measure μ_a is not computable.

Thus, the Monte-Carlo computational approach fails spectacularly for truly simple maps – not because there are no physical measures, or too many of them, but because the “nice” unique limiting statistics cannot be computed, and thus the averages (1.1) will not converge to anything meaningful in practice.

It is worth drawing a parallel with our recent paper [RY19], in which we studied the computational complexity of topological attractors of the maps f_a. Such attractors capture the limiting deterministic behavior of the orbits. They are always computable, and we found that for almost every parameter a, the time complexity of computing the attractor is polynomial, although there exist attractors with an arbitrarily high computational complexity. In dynamics, both in theory and in practice, it is generally assumed that long-term statistical properties are simpler to analyze than their deterministic counterparts. From the point of view of computational complexity, this appears to be false.

We note that a different notion of statistical intractability in dynamics, based on the complexity of a mathematical description of the set of limiting measures, has been studied by P. Berger [Ber17, BB19]. In the context of Cellular Automata, there have been some recent works on the computational properties of the limiting statistics, see e.g. [HdMS16]. The computational complexity of individual trajectories in Hamiltonian dynamics has been addressed in e.g. [KTZ18]. Long-term unpredictability is generally associated with dynamical systems containing embedded Turing machines (see e.g. the works [Moo91, MK99, KCG94, BGR12, BRS15]). Dynamical properties of Turing machines viewed as dynamical systems have similarly been considered (cf. [Kur97, Jea14]). Yet we are not aware of any studies of the limiting statistics in this context.

From a practical point of view, some immediate questions arise. Our examples are rare in the one-parameter quadratic family f_a. However, there are reasons to expect that in more complex multi-parametric, multi-dimensional families they can become common. Can they be generic in a natural setting? As the results of [BB19] suggest, the answer may already be “yes” for quadratic polynomial maps in dimension two. Furthermore, even in the one-dimensional quadratic family it is natural to ask what the typical computational complexity of the limiting statistics is – even if it is computable in theory, it may not be in practice.

## 2. Preliminaries

### Statistical simulations and computability of probability measures

We give a very brief summary of relevant notions of Computability Theory and Computable Analysis. For a more in-depth introduction, the reader is referred to e.g. [BY08]. As is standard in Computer Science, we formalize the notion of an algorithm as a Turing Machine [Tur36]. We will call a function f: ℕ → ℕ computable (or recursive) if there exists a Turing Machine which, upon input n, outputs f(n). Extending algorithmic notions to functions of real numbers was pioneered by Banach and Mazur [BM37, Maz63], and is now known under the name of Computable Analysis. Let us begin by giving the modern definition of the notion of a computable real number, which goes back to the seminal paper of Turing [Tur36]. By identifying ℚ with ℕ through some effective enumeration, we can assume algorithms can operate on ℚ. Then a real number x is called computable if there is an algorithm which, upon input n, halts and outputs a rational number q such that |q − x| < 2^{−n}. Algebraic numbers and familiar constants such as π, e, or the Feigenbaum constant are computable real numbers. However, the set of all computable real numbers is necessarily countable, as there are only countably many Turing Machines.
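As an illustration (ours, not part of the text), a computable real number is nothing but an algorithm producing 2^{−n}-approximations on demand. For example, √2 can be presented this way by bisecting on the predicate q² < 2, which is exactly decidable for rational q:

```python
from fractions import Fraction

def sqrt2_approx(n):
    """A computable real number, presented as an algorithm: on input n,
    return a rational q with |q - sqrt(2)| < 2**(-n).
    We bisect on the predicate q*q < 2, decidable for rational q, so
    every step is exact rational arithmetic (no round-off)."""
    lo, hi = Fraction(1), Fraction(2)          # sqrt(2) lies in [1, 2]
    while hi - lo >= Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid                           # keep lo^2 < 2 <= hi^2
        else:
            hi = mid
    return lo                                  # within 2**(-n) of sqrt(2)
```

The same pattern works for any real defined by a decidable rational predicate; the countability remark above says that most reals admit no such algorithm at all.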

We now define computability of functions over [0,1]. Recall that for a continuous function f: [0,1] → ℝ, a modulus of continuity is a function m: ℕ → ℕ such that |f(x) − f(y)| ≤ 2^{−n} whenever |x − y| < 2^{−m(n)}. A function f is computable if it has a computable modulus of continuity, and there is an algorithm which, provided with a rational number q which is 2^{−m(n)}-close to x, outputs a rational number r which is 2^{−n}-close to f(x).

Computability of probability measures, say over the interval [0,1], is defined by requiring the ability to compute the expected values of computable functions.

###### Definition 2.1.

Let (f_i)_{i∈ℕ} be any sequence of uniformly computable functions over [0,1]. A probability measure μ over [0,1] is computable if there exists a Turing Machine M which, on input (i, ε) (with ε ∈ ℚ, ε > 0), outputs a rational number M(i, ε) satisfying

 |M(i, ε) − ∫ f_i dμ| < ε.

We note that this definition is compatible with the notion of weak convergence of measures (see Section 3.1), in the sense that a measure is computable if and only if it can be algorithmically approximated (in the weak topology) to an arbitrary degree of accuracy by measures supported on finitely many rational points with rational weights. Moreover, this definition also models well the intuitive notion of statistical sampling, in the sense that a measure μ is computable if and only if there is an algorithm to convert sequences sampled from the uniform distribution into sequences sampled with respect to μ.
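As a toy illustration of Definition 2.1 (ours; the helper names are made up), here is a "machine" witnessing the computability of the simplest measure, Lebesgue measure on [0,1], where a known Lipschitz bound on the test function plays the role of its modulus of continuity:

```python
import math

def lebesgue_machine(f, lip, eps):
    """A toy 'machine' in the sense of Definition 2.1 for Lebesgue
    measure on [0,1]: given a test function f with known Lipschitz
    constant lip and a precision eps > 0, output an approximation of
    the expected value  ∫ f dμ  with error < eps.
    Midpoint Riemann sum over N cells: within each cell of width 1/N,
    |f(x) - f(midpoint)| <= lip/(2N), so the total error is <= lip/(2N)."""
    N = max(1, math.ceil(lip / eps))   # makes lip/(2N) <= eps/2 < eps
    return sum(f((j + 0.5) / N) for j in range(N)) / N

# e.g. the expected value of f(x) = x under Lebesgue measure is 1/2
approx = lebesgue_machine(lambda x: x, lip=1.0, eps=0.01)
```

For a general computable measure the sum would be replaced by queries to whatever finite description of the measure is available; the error bookkeeping is the essential part.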

In this paper, we will be interested in the computability properties of invariant measures of quadratic maps of the form f_a(x) = ax(1−x), with a ∈ (0,4]. As is standard in computing practice, we will assume that the algorithm can read the value of a externally in order to compute the measure. More formally, let us denote by D_m the set of dyadic rational numbers with denominator 2^m. We say that a function ϕ is an oracle for the real number a if for every m ∈ ℕ

 ϕ(m)∈Dm and d(ϕ(m),a)<2−(m−1).

We amend our definitions of computability of a probability measure by allowing oracle Turing Machines M^ϕ, where ϕ is any function as above. On each step of the algorithm, M^ϕ may read the value of ϕ(m) for an arbitrary m ∈ ℕ. This approach allows us to separate the question of the computability of the parameter a from that of the measure.
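A minimal sketch of such an oracle (our illustration; the parameter value is arbitrary): ϕ(m) returns the dyadic rational with denominator 2^m nearest to a, which is well within the required 2^{−(m−1)}.

```python
from fractions import Fraction

def dyadic_oracle(a):
    """An oracle phi for the real parameter a: phi(m) is a dyadic
    rational with denominator 2**m, and |phi(m) - a| < 2**-(m-1).
    Rounding to the nearest multiple of 2**-m gives error <= 2**-(m+1),
    comfortably inside the allowed slack."""
    def phi(m):
        return Fraction(round(a * 2 ** m), 2 ** m)
    return phi

phi = dyadic_oracle(3.839)   # illustrative parameter value
q = phi(10)                  # a dyadic 2**-9-approximation of 3.839
```

An oracle machine M^ϕ queries `phi(m)` for larger and larger m as it runs; the diagonalization in Section 3.3 exploits exactly the fact that any halting run reads only finitely many bits of a.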

### Invariant measures of quadratic polynomials and the statement of the main result.

As before, we denote

 fa(x)=ax(1−x).

For a ∈ (0,4], this quadratic polynomial maps the interval [0,1] to itself. We will view f_a as a discrete dynamical system on [0,1], and will denote by f_a^n the n-th iterate of f_a.

A measure μ is called physical, or Sinai-Ruelle-Bowen (SRB), if

 (2.1)  (1/n) ∑_{k=0}^{n−1} δ_{f^k(x)} → μ

for a set of initial points x of positive Lebesgue measure. It is known that if a physical measure exists for a quadratic map f_a, then it is unique and (2.1) is satisfied for Lebesgue almost every x ∈ [0,1].

Main Theorem. There exist parameters a ∈ (0,4) for which the quadratic map f_a has a physical measure μ_a which is not computable.

## 3. Proof of the Main Theorem

The proof is based on a delicate construction in one-dimensional dynamics described in [HK90], which will allow us to construct maps with physical measures which selectively charge points in a countable set of periodic orbits. To give precise formulations, we will need to introduce some further concepts.

### 3.1. Setting the stage

It will be convenient to recall that weak convergence of measures on [0,1] is compatible with the Wasserstein-Kantorovich distance, defined by:

 W_1(μ, ν) = sup_{f ∈ 1-Lip([0,1])} |∫ f dμ − ∫ f dν|,

where 1-Lip([0,1]) is the space of Lipschitz functions on [0,1] with Lipschitz constant at most one.
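On the real line, W_1 between two empirical measures with the same number of atoms reduces to the average absolute difference of the sorted samples (the L¹ distance between the quantile functions). A small sketch, ours and not from the text:

```python
def w1_empirical(xs, ys):
    """Wasserstein-1 distance between two empirical measures on [0,1]
    given by equal-size sample lists.  In one dimension the optimal
    transport matches sorted samples to sorted samples, so W1 is the
    mean absolute difference after sorting."""
    assert len(xs) == len(ys)
    return sum(abs(p - q) for p, q in zip(sorted(xs), sorted(ys))) / len(xs)

# two unit masses vs. the same masses shifted by 0.1: W1 = 0.1
d = w1_empirical([0.2, 0.6], [0.3, 0.7])
```

This is the form of the distance we would actually evaluate when testing convergence of the measures ν below.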

For x ∈ [0,1] and n ∈ ℕ, we set

 ν_a^n(x) = (1/n) ∑_{k=0}^{n−1} δ_{f_a^k(x)}.

We will make use of the following folklore fact (see e.g. [dMvS93]):

###### Proposition 3.1.

Suppose that for a ∈ (0,4] the map f_a has an attracting periodic orbit of period p:

 x_0 ↦ x_1 ↦ ⋯ ↦ x_{p−1} ↦ x_0 (under f_a),  |(d/dx) f_a^p(x_0)| < 1.

Let

 μ ≡ (1/p) ∑_{k=0}^{p−1} δ_{x_k}.

Then μ is the unique physical measure of f_a (so, in particular, the attracting orbit is unique); and

 W_1(ν_a^n(x), μ) → 0

uniformly on a set of full Lebesgue measure in [0,1].
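Proposition 3.1 is easy to observe numerically. In the sketch below (our illustration; a = 3.2 is an arbitrary parameter with an attracting 2-cycle, and `w1_empirical` is the sorted-samples formula for W_1 on the line), the Birkhoff samples of a typical point come W_1-close to the measure equidistributed on the cycle:

```python
def logistic(a):
    return lambda x: a * x * (1.0 - x)

def birkhoff_samples(f, x, n):
    """The first n orbit points of x: samples of nu_a^n(x), mass 1/n each."""
    orbit = []
    for _ in range(n):
        orbit.append(x)
        x = f(x)
    return orbit

def w1_empirical(xs, ys):
    """W1 of two equal-size empirical measures on the line:
    mean absolute difference of the sorted samples."""
    return sum(abs(p - q) for p, q in zip(sorted(xs), sorted(ys))) / len(xs)

f = logistic(3.2)
y = 0.5
for _ in range(10000):        # settle onto the attracting 2-cycle
    y = f(y)
x0, x1 = y, f(y)              # the two cycle points (up to ordering)

n = 4000
nu_n = birkhoff_samples(f, 0.123, n)        # orbit of a "typical" point
mu = [x0] * (n // 2) + [x1] * (n // 2)      # the cycle measure, as samples
dist = w1_empirical(nu_n, mu)               # small, as Prop. 3.1 predicts
```

The short transient of the orbit contributes O(1/n) to the distance, which is exactly the uniform convergence the proposition asserts.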

For a ∈ (0,4], consider the third iterate g_a ≡ f_a^3. We start by noting that there exists a parameter value c such that the following holds:

 g_c(0.5) ≠ g_c^2(0.5) = g_c^3(0.5).

If we denote β_c = g_c^2(0.5), then g_c(β_c) = β_c, and denoting β_c′ = g_c(0.5) and I_c = [β_c′, β_c], we have

 g_c(I_c) = I_c;

both endpoints of I_c map to β_c. The restriction g_c|_{I_c} maps both halves ([β_c′, 0.5] and [0.5, β_c]) of the interval onto the whole I_c in a monotone fashion (that is, it folds I_c over itself).

For a near c, there exists a continuous branch β_a of the fixed point

 g_a(β_a) = β_a,

and we again set β_a′ to be the other g_a-preimage of β_a (so that β_c′ = g_c(0.5)), and I_a = [β_a′, β_a]. Now, if a > c, the image

 g_a(I_a) ⊋ I_a, with g_a(0.5) < β_a′.

Thus, there is a pair of sub-intervals A_a, B_a inside I_a which are mapped monotonely over I_a by g_a; the endpoints of A_a and B_a are mapped to the endpoints of I_a. See Figure 1 for an illustration.

Assigning the value A to points in A_a and B to points in B_a, we obtain symbolic dynamics on the set of points

 C_a ≡ {x ∈ I_a such that g_a^n(x) ∈ I_a for all n ∈ ℕ}.

If a = c then, of course, C_c = I_c. Otherwise, the following is well-known:

###### Proposition 3.2.

If a > c, then C_a is a Cantor set, and the symbolic dynamics conjugates g_a|_{C_a} to the full shift on the two symbols {A, B}.

In particular, every periodic sequence of A’s and B’s corresponds to a unique periodic orbit in C_a with this symbolic dynamics. These orbits clearly move continuously with a, and can be easily computed, given a and the symbolic sequence, as the unique fixed points of the corresponding monotone branches of iterates of g_a.
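Numerically, the last remark amounts to a one-dimensional root search: once a monotone branch of an iterate is isolated on an interval, its unique fixed point is found by bisection. A schematic sketch (ours; we do not construct the symbolic branches here, and the parameter value is illustrative), applied to the simplest case of a fixed point of f_a itself:

```python
def branch_fixed_point(g, lo, hi, tol=1e-12):
    """Locate the unique fixed point of a continuous branch g on [lo, hi]
    by bisection on the sign of g(x) - x.  This is the numerical content
    of 'periodic orbits are fixed points of monotone branches'."""
    flo = g(lo) - lo
    for _ in range(200):
        mid = (lo + hi) / 2.0
        fmid = g(mid) - mid
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if (flo > 0) == (fmid > 0):
            lo, flo = mid, fmid      # root lies in the upper half
        else:
            hi = mid                 # root lies in the lower half
    return (lo + hi) / 2.0

# Illustration: the fixed point of f_a itself (a period-1 orbit).
a = 3.839
f = lambda x: a * x * (1 - x)
p = branch_fixed_point(f, 0.5, 0.99)   # should equal 1 - 1/a
```

For a periodic orbit with symbolic word of length n one would apply the same routine to the appropriate monotone branch of g_a^n.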

We enumerate all periodic sequences of A’s and B’s as follows. A sequence with a smaller period will precede a sequence with a larger period. Within the sequences of the same period, the ordering will be lexicographic, based on the convention A < B. We let

 {p_{a,n}^1, …, p_{a,n}^{k_n}}

be the periodic orbit of g_a in C_a which corresponds to the n-th symbolic sequence in this ordering (note that the first one is AAA⋯). We denote

 Per_a(n) = ⋃_{j=0}^{2} f_a^j({p_{a,n}^1, …, p_{a,n}^{k_n}}),

which is, clearly, a periodic orbit of f_a. Let us denote

 λ_a(n) = (1/|Per_a(n)|) ∑_{x ∈ Per_a(n)} δ_x.
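The enumeration just described is easy to make effective. A sketch (ours): generate, for each period p = 1, 2, …, the words of length p in lexicographic order with A < B, discarding words whose minimal period is shorter (they repeat an earlier sequence); cyclic shifts of the same word trace out the same periodic orbit.

```python
from itertools import product

def periodic_sequences(max_period):
    """Enumerate periodic sequences of A's and B's as in the text:
    smaller (minimal) period first, lexicographic within each period
    with the convention A < B.  Each sequence is represented by its
    repeating word."""
    out = []
    for p in range(1, max_period + 1):
        for word in product("AB", repeat=p):
            w = "".join(word)
            # keep w only if its minimal period is exactly p
            if all(w != w[:d] * (p // d) for d in range(1, p) if p % d == 0):
                out.append(w)
    return out

seqs = periodic_sequences(3)
# seqs[0] is "A": the first sequence in the ordering is AAA...
```

This effective enumeration is what makes the orbits Per_a(n), and hence the measures λ_a(n), algorithmically accessible once a is given by an oracle.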

### 3.2. Main construction

Our arguments will be based on the results of F. Hofbauer and G. Keller in [HK90]; see also the earlier paper of S. Johnson [Joh87], which uses similar language to ours.

Let us develop some further notation. For a ∈ (0,4] and x ∈ [0,1], we let Ω_a(x) denote the set of weak limits of the sequence ν_a^n(x). Consider the collection of parameters a for which the following holds:

• f_a has a unique physical probability measure μ_a;

• denoting m_a(n) = μ_a(Per_a(n)), we have

 (3.1)  ∑_{n=1}^{∞} m_a(n) = 1.

Thus, all of the charge of the physical measure resides in the periodic orbits lying in the Cantor set C_a.

We will formulate the following direct consequence of the main result (Theorem 5) of [HK90] (note that the set of physical measures constructed in Theorem 5 of [HK90] includes convex combinations of the measures λ_a(n); compare also with Theorem 1 of [HK90]):

###### Theorem 3.3.

There exists an infinite set 𝒮 of parameters, each belonging to the collection just described, such that the following holds:

1. Ω_a(x) = Ω_a(0.5) for Lebesgue almost every x ∈ [0,1];

2. for any sequence of non-negative reals (α_n) with ∑ α_n = 1, the subset of 𝒮 of parameters for which m_a(n) = α_n for all n is dense in 𝒮.

We will outline the idea of the construction of such maps below, but the complete proof of Theorem 3.3 is quite technical and goes beyond the scope of this paper.

We start with the following “simple” example:

Example 1: The set Ω_a(0.5) consists of the single measure

 λ_a(1) = (1/3)(δ_{β_a} + δ_{f_a(β_a)} + δ_{f_a^2(β_a)}),

and ν_a^n(x) → λ_a(1) for almost every x (compare with Theorem 1 of [HK90]).

Consider again Figure 1 as an illustration. We note that there exists an interval J_0 to the right of the fixed point β_a such that the following holds:

• J_0 ⊂ I_a;

• Denote by ψ_a the inverse branch of g_a which fixes β_a. Then the interval J_0 is contained in the domain of definition of ψ_a^n for every n ∈ ℕ. Thus, there is a backward orbit

 J_{−n} ≡ ψ_a^n(J_0) → β_a  (here f_a^{3n}(J_{−n}) = J_0).

Moving the parameter a, we can place the image g_a(0.5) at any point of J_{−n_1}, for an arbitrary n_1. If the value of n_1 is large, then the g_a-orbit of 0.5 will spend a long time in a small neighborhood of β_a, before hitting J_0. Adjusting the position of g_a(0.5), we can ensure that the orbit then falls inside J_{−n_2} for an even larger n_2, so the orbit returns to an even smaller neighborhood of β_a where it will spend an even longer time. Continuing increasing the n_j’s as needed so that the orbit of 0.5 spends most of its time in ever smaller neighborhoods of β_a, we can ensure that the averages ν_a^n(0.5) converge to the uniform combination of delta masses supported on the orbit of β_a, that is, to λ_a(1).

Proceeding in this way, for an arbitrarily large l ∈ ℕ we can find a parameter a and a time m such that:

1. the distance

 W_1(ν_a^m(0.5), λ_a(1)) < 2^{−l};

2. the iterate g_a^m(0.5) lies in J_0;

3. the next iterate g_a^{m+1}(0.5) = 0.5.

Property (3) ensures that the critical point 0.5 is periodic with period m+1 under g_a. Since g_a′(0.5) = 0, we have (g_a^{m+1})′(0.5) = 0, so this is a (super)attracting periodic point. Proposition 3.1 implies that the physical measure μ_a for f_a is supported on the orbit of 0.5, and thus

 W_1(μ_a, λ_a(1)) < 2·2^{−l}.

Again, by Proposition 3.1 and considerations of continuity, there exist n_0 ∈ ℕ and δ > 0 such that for any a′ with |a′ − a| < δ, we have

 W_1(ν_{a′}^n(x), λ_a(1)) < 4·2^{−l}

for every n ≥ n_0 and every x in a set of length at least 1 − 2^{−l}.

Assuming δ is small enough, the image g_{a′}(0.5) again lies slightly to the right of β_{a′}′, and we can repeat the above steps inductively to complete the construction.

As a next step, we construct an asymptotic measure supported on two periodic orbits:

Example 2: the set Ω_a(0.5) consists of the single measure α_1 λ_a(1) + α_2 λ_a(2), for given α_1, α_2 ≥ 0 with α_1 + α_2 = 1.

Let p ∈ Per_a(2) and, as before, denote by s its period. Letting ψ̃_a denote the inverse branch of f_a^s fixing p, we again find a backward orbit

 J′_0 ↦ J′_{−1} ↦ J′_{−2} ↦ ⋯, with J′_{−k} → p,

where

 f_a^s(J′_0) ⋑ [β_a′, β_a]

for a univalent branch of the iterate f_a^s.

Now we can play the same game as in Example 1, alternating between entering the orbit close to the point β_a and the orbit close to p. In this way, we can achieve the desired limiting asymptotics with any values of α_1, α_2.

The above construction can clearly be modified for any countable collection of periodic orbits in C_a, as required for the proof of Theorem 3.3.

### 3.3. Constructing non computable physical measures

###### Definition 3.1.

Let us define a very particular set of parameters as follows: we will say that a satisfies condition (3.2) if

 (3.2)  m_a(2n−1) + m_a(2n) = 2^{−n} for all n ∈ ℕ.

For convenience of reference, let us formulate a corollary of Theorem 3.3:

###### Proposition 3.4.

Suppose a satisfies condition (3.2). Then, for every n, k ∈ ℕ and every θ ∈ [0, 2^{−n}], there exists a′ satisfying condition (3.2) such that:

• |a′ − a| < 2^{−k};

• m_{a′}(2n−1) = θ;

• m_{a′}(t) = m_a(t) for all t ∉ {2n−1, 2n}; and

• m_{a′}(2n) = 2^{−n} − θ.

Let (τ_i) be the smallest collection of functions containing the continuous step-like functions associated with rational intervals, and which is closed under rational linear combinations and scalar multiplication. Note that this is a countable collection of functions that can be enumerated in an effective way.

We construct a parameter a for which the map f_a has a unique physical measure μ such that, for any Turing Machine M^ϕ with an oracle ϕ for a that computes a probability measure, there exist i and ε > 0 such that

 |M^ϕ(i, ε) − ∫ τ_i dμ| > ε.

Our construction can be thought of as a game between a Player and infinitely many opponents, which correspond to the sequence of machines that compute some probability measure. The opponents try to compute μ by asking the Player to provide an oracle for a, while the Player tries to choose the bits of a in such a way that none of the opponents correctly computes μ.

We show that the Player always has a winning strategy: it plays against each machine, one by one, asking the machine to compute the expected value of a particular function to a certain degree of accuracy. The machine then runs for a while, asking the Player to provide more and more bits of a, until it eventually halts and outputs a rational number. Then the Player reveals the next bits of a and shows that the machine’s answer is incompatible with the actual physical measure. The details are as follows.
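The combinatorial core of this strategy is plain diagonalization. In the toy sketch below (ours; the real opponents are oracle machines computing expected values, not bit predictors), each opponent guesses the next bit of the parameter from the bits revealed so far, and the Player simply plays the opposite bit, defeating every opponent infinitely often:

```python
def diagonal_real(machines, n_bits):
    """Toy version of the Player's strategy: given a list of 'opponent'
    machines, each of which predicts the next bit from the bits revealed
    so far, the Player chooses each successive bit to contradict one
    opponent in turn (round-robin).  The resulting bit string disagrees
    with every opponent's prediction infinitely often, mirroring how the
    parameter a is built to defeat every measure-computing machine."""
    bits = []
    for i in range(n_bits):
        opponent = machines[i % len(machines)]
        guess = opponent(bits)      # opponent predicts the next bit
        bits.append(1 - guess)      # Player plays the opposite bit
    return bits

# two simple opponents: "always 0" and "copy the previous bit"
opponents = [lambda b: 0, lambda b: b[-1] if b else 0]
bits = diagonal_real(opponents, 6)
```

In the actual argument the "guess" is the machine's rational answer after finitely many oracle queries, and "playing the opposite bit" is the choice of the masses m_a(2n−1), m_a(2n) via Proposition 3.4.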

We will proceed inductively. Let M_1, M_2, … be some enumeration of all the machines with an oracle for the parameter that compute some probability measure. At step n of the induction, we will have a parameter a_n and a natural number k_n such that:

1. a_n satisfies condition (3.2);

2. there exists a function τ among the (τ_i) such that either

• the output of the machine M_n is larger than ∫ τ dμ_{a_n} plus the requested precision; or

• the output of the machine M_n is smaller than ∫ τ dμ_{a_n} minus the requested precision.

In other words, given an oracle for a_n, the machine M_n cannot correctly approximate the value of ∫ τ dμ_{a_n};

3. m_{a_n}(t) = m_{a_{n−1}}(t) for all t ∉ {2n−1, 2n};

4. |a_n − a_{n−1}| < 2^{−k_{n−1}}.

Base of the induction. We start by letting a_0 be any of the parameters satisfying condition (3.2). We note that m_{a_0}(1) + m_{a_0}(2) = 2^{−1}. It follows that there exists a function τ among the (τ_i) which is equal to 1 on a neighborhood of Per(1) and satisfies

 (3.3)  Supp τ ∩ Per(j) = ∅ for all 1 < j ≤ j_0,

where j_0 may be taken arbitrarily large (note that the mass of the higher periodic points that may fall in an open set containing Per(1) goes to zero as the diameter of the open set goes to zero).

We now let the machine M_1 compute the expected value of τ with a fixed precision ε, giving it a_0 as the parameter. Let k be the last time a bit of a_0 is queried by M_1 during the computation. By Proposition 3.4, for any θ ∈ [0, 2^{−1}] there exists a′ satisfying condition (3.2) such that:

• |a′ − a_0| < 2^{−k};

• m_{a′}(1) = θ;

• m_{a′}(2) = 2^{−1} − θ.

Let q be the rational number output by the machine. There are two possibilities:

1. If q ≤ 2^{−2}, we choose a′ above so as to have m_{a′}(1) = 2^{−1};

2. If q > 2^{−2}, we choose a′ above so as to have m_{a′}(1) = 0 (and therefore m_{a′}(2) = 2^{−1});

We then let a_1 = a′. By (3.3), the expected value ∫ τ dμ_{a_1} is as close to m_{a_1}(1) as we wish, so in either case it differs from q by more than the precision ε (assuming ε was chosen small enough). Note that up to the first k bits, a_0 and a_1 are indistinguishable, and therefore the machine will return the same answer for both parameters. It follows that the machine cannot correctly approximate ∫ τ dμ_{a_1}.

Step n of the induction. Assume a_{n−1} has been defined. Then it holds that

 m_{a_{n−1}}(2n−1) + m_{a_{n−1}}(2n) = 2^{−n},

and there exists a function τ among the (τ_i) which is equal to 1 on a neighborhood of Per(2n−1) and satisfies

 (3.4)  Supp τ ∩ Per(j) = ∅ for all j ≤ j_0 with j ≠ 2n−1,

where j_0 may again be taken arbitrarily large.

Once again, we let the machine M_n compute the expected value of τ with a fixed precision ε, giving it a_{n−1} as the parameter. Let k be the last time a bit of a_{n−1} is queried by M_n during the computation. By Proposition 3.4 again, for any θ ∈ [0, 2^{−n}] there exists a′ satisfying condition (3.2) such that:

• |a′ − a_{n−1}| < 2^{−k};

• m_{a′}(2n−1) = θ;

• m_{a′}(t) = m_{a_{n−1}}(t) for all t ∉ {2n−1, 2n}; and

• m_{a′}(2n) = 2^{−n} − θ.

Let q be the rational number output by the machine. There are two possibilities:

1. If q ≤ 2^{−(n+1)}, we choose a′ above so as to have m_{a′}(2n−1) = 2^{−n};

2. If q > 2^{−(n+1)}, we choose a′ above so as to have m_{a′}(2n−1) = 0 (and therefore m_{a′}(2n) = 2^{−n});

We then let a_n = a′. Since τ satisfies property (3.4), the expected value ∫ τ dμ_{a_n} is as close to m_{a_n}(2n−1) as we wish, so in either case it differs from q by more than the precision ε (assuming ε was chosen small enough). Note that up to the first k bits, a_{n−1} and a_n are indistinguishable, and thus the machine will return the same answer for both parameters. It follows that the machine cannot correctly approximate ∫ τ dμ_{a_n}. Moreover, property (3.4) again and the fact that (by construction) a_n satisfies

 m_{a_{n−1}}(t) = m_{a_n}(t) for all t ∉ {2n−1, 2n},

guarantee that Condition (3) is satisfied as well. We now let a = lim_n a_n (the limit exists by Condition (4)) and claim that a has the required properties. Indeed, Condition (2) ensures that for every n there is a step function τ at which the machine M_n fails to compute correctly the expected value for μ_{a_n}, and Condition (3) guarantees that the same holds for the limiting measure μ_a.

## 4. Conclusion

Ever since the first numerical studies of chaotic dynamics appeared in the early 1960’s (such as the work of Lorenz [Lor63]), it has become commonly accepted among practitioners that computers cannot, in general, be used to make deterministic predictions about future behavior of nonlinear dynamical systems. Instead, the standard practice now is to make statistical predictions. This approach is based on the Monte Carlo method, pioneered by Ulam and von Neumann at the dawn of the computing age. It is universal and powerful – and only requires access to the dynamical system as a black box, which is then subjected to a number of statistical trials. Applications of the Monte Carlo technique are ubiquitous, ranging from weather forecasts to simulating nuclear weapons tests (nuclear weapons design was, of course, the original motivation of its inventors).

Our result raises the disturbing possibility that even for the simplest family of examples of non-linear dynamical systems the Monte Carlo method can fail. Given one of our examples as a black box, no algorithm can find its limiting statistics. How common such examples are in higher-dimensional families of dynamical systems, and whether one is likely to encounter them in practice, remain exciting open questions.

## References

• [BB19] P. Berger and J. Bochi, On emergence and complexity of ergodic decompositions, arXiv preprint arXiv:1901.03300 (2019).
• [Ber17] P. Berger, Unpredictability of dynamical systems and non-typicality of the finiteness of the number of attractors in various topologies, Tr. Mat. Inst. Steklova 297 (2017), 7–37, English version published in Proc. Steklov Inst. Math. 297 (2017), no. 1, 1–27.
• [BGR12] Mark Braverman, Alexander Grigo, and Cristobal Rojas, Noise vs computational intractability in dynamics, Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (New York, NY, USA), ITCS ’12, ACM, 2012, pp. 128–141.
• [BM37] S. Banach and S. Mazur, Sur les fonctions calculables, Ann. Polon. Math. 16 (1937).
• [BRS15] Mark Braverman, Cristobal Rojas, and Jon Schneider, Space-bounded Church-Turing thesis and computational tractability of closed systems, Physical Review Letters 115 (2015), no. 9.
• [BY08] M Braverman and M. Yampolsky, Computability of Julia sets, Algorithms and Computation in Mathematics, vol. 23, Springer, 2008.
• [dMvS93] W. de Melo and S. van Strien, One-dimensional dynamics, Springer-Verlag, 1993.
• [HdMS16] Benjamin Hellouin de Menibus and Mathieu Sablik, Characterisation of sets of limit measures after iteration of a cellular automaton on an initial measure, Ergodic Theory and Dynamical Systems 38 (2016), no. 2, 601–650.
• [HK90] F. Hofbauer and G. Keller, Quadratic maps without asymptotic measure, Comm. Math. Phys. 127 (1990), 319–337.
• [Jea14] E. Jeandel, Computability of the entropy of one-tape Turing machines, 31st International Symposium on Theoretical Aspects of Computer Science (STACS), LIPIcs. Leibniz Int. Proc. Inform., vol. 25, 2014, pp. 421–432.
• [Joh87] S. Johnson, Singular measures without restrictive intervals, Commun. Math. Phys. 110 (1987), 185–190.
• [KCG94] P. Koiran, M. Cosnard, and M. Garzon, Computability with low-dimensional dynamical systems, Theoret. Comput. Sci. 132 (1994), no. 1-2, 113–128.
• [KTZ18] Akitoshi Kawamura, Holger Thies, and Martin Ziegler, Average-case polynomial-time computability of Hamiltonian dynamics, 43rd International Symposium on Mathematical Foundations of Computer Science, MFCS 2018, August 27-31, 2018, Liverpool, UK, 2018, pp. 30:1–30:17.
• [Kur97] Petr Kurka, On topological dynamics of Turing machines, Theoret. Comput. Sci. 174 (1997), 203–216.
• [Lor63] E. N. Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci. 20 (1963), 130–141.
• [Maz63] S. Mazur, Computable Analysis, Rozprawy Matematyczne, vol. 33, Warsaw, 1963.
• [Met87] N. Metropolis, The beginning of the Monte Carlo method, Los Alamos Science Special Issue (1987), 125–130.
• [MK99] C. Moore and P. Koiran, Closed-form analytic maps in one and two dimensions can simulate universal Turing machines, Theoret. Comput. Sci. 210 (1999), no. 1, 217–223.
• [Moo91] C. Moore, Generalized shifts: unpredictability and undecidability in dynamical systems, Nonlinearity 4 (1991), no. 2, 199–230.
• [MS49] N. Metropolis and S. Ulam, The Monte Carlo method, Journal of the American Statistical Association 44 (1949), 335–341.
• [New74] S. Newhouse, Diffeomorphisms with infinitely many sinks, Topology 13 (1974), 9–18.
• [RY19] Cristobal Rojas and Michael Yampolsky, Computational intractability of attractors in the real quadratic family, Advances in Mathematics 349 (2019), 941 – 958.
• [Tuc02] W. Tucker, A rigorous ODE solver and Smale’s 14th problem, Found. Comp. Math. 2 (2002), 53–117.
• [Tur36] A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proceedings, London Mathematical Society (1936), 230–265.
• [URvN47] S. Ulam, R.D. Richtmyer, and J. von Neumann, Statistical methods in neutron diffusion, Los Alamos Scientific Laboratory report LAMS 551 (1947).