1 Introduction and Preliminaries
Broadly speaking, a dynamical system is one that changes over time. Prediction of the future behaviour of dynamical systems is a fundamental concern of science generally. Scientific theories are tested upon the accuracy of their predictions, and establishing invariable properties through the evolution of a system is an important goal. Limits to this predictability are known in science. For instance, chaos theory establishes the existence of systems in which small deficits in the information of the initial states make accurate predictions of future states unattainable. However, in this document we focus on systems for which we have unambiguous, finite (as to size and time) and complete descriptions of initial states and behaviour: computable dynamical systems.
Since their formalization by Church and Turing, the class of computable systems has shown that, even without information deficits (i.e., with complete descriptions), there are future states that cannot be predicted, in particular the state known as the halting state. We will use this result and others from algorithmic information theory to show how predictability imposes limits on the growth of complexity during the evolution of computable systems. In particular, we will show that random (incompressible) times tightly bound the complexity of the associated states.
The relationship between dynamical systems and computability has been studied before by Bournez [11, 10], Blondel, Moore, and by Fredkin, Margolus and Toffoli [22, 30], among others. That emergence is a consequence of incomputability has been proposed by Cooper. Complexity as a source of undecidability has been observed in logic by Calude and Jürgensen. Delvenne, Kůrka and Blondel have proposed robust definitions of computable (effective) dynamical systems and universality, generalizing Turing's halting states, while also setting forth the conditions and implications for universality and decidability and their relationship with chaos. The definitions and general approach used in this paper differ from those in the sources cited above, but are ultimately related.
We will denote by K(s|w) the algorithmic descriptive complexity of the string s with respect to the string w. The dynamical systems we are considering are deterministic, and each state must contain all the information needed to compute successive states. We are assuming an infinity of possible states for non-cyclical systems. Mechanisms and requirements for open-ended evolution in systems with a finite number of states (resource-bounded) have been studied by Adams et al.
1.1 Computable Functions
In a broad sense, an object x is computable if it can be described by a Turing machine; for example, if there exists a Turing machine that produces x as an output. It is clear that any finite string on a finite alphabet is a computable object. We provide below a more formal definition, in the tradition of Turing.
As usual, we can define a one-to-one mapping between the set of all finite binary strings and the natural numbers by the relation induced by a lexicographic order of the form: (ε, 0), (0, 1), (1, 2), (00, 3), (01, 4), (10, 5), … Using this relation we can see all natural numbers (or positive integers) as binary strings and vice versa. Accordingly, all natural numbers are computable.
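This pairing between binary strings and naturals can be sketched in Python. This is an illustrative implementation assuming the standard length-lexicographic order, with the empty string mapped to 0; the function names are ours:

```python
def nat_to_string(n):
    """Map a natural number to its binary string under the order
    (epsilon, 0), (0, 1), (1, 2), (00, 3), (01, 4), ..."""
    return bin(n + 1)[3:]  # binary numeral of n+1 with its leading '1' dropped

def string_to_nat(s):
    """Inverse mapping: prepend '1', read as a binary numeral, subtract 1."""
    return int('1' + s, 2) - 1
```

The trick is that every positive integer written in binary starts with a 1; dropping that leading digit enumerates all binary strings exactly once, in order of increasing length.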
A string p is a valid program for the Turing machine M if, during the execution of M with p as input, all the characters in p are read. We call M(p) the output of the machine, if it stops. A Turing machine is prefix-free if no valid program is a proper prefix of another valid program (though it can be a postfix of one). We call a valid program a self-delimited object. Note that, given the relationship between natural numbers and binary strings, the set of all valid programs is an infinite proper subset of the natural numbers.
Formally, a function f is computable if there exists a Turing machine M such that f(x) = M(x). A Turing machine U is considered universal if there exists a computable function c such that for every Turing machine M there exists a string c(M) = ⟨M⟩ for which M(x) = U(⟨M⟩⟨x⟩), where ⟨M⟩⟨x⟩ is the concatenation of the strings ⟨M⟩ and ⟨x⟩. Given the previous case, ⟨M⟩ and ⟨x⟩ are called a codification or a representation of the function M and the natural number x, respectively. From now on we will denote the codification of M and x by ⟨M⟩ and ⟨x⟩. The codification is unambiguous if it is injective.
For functions with more than one variable, if x is a pair (x_1, x_2), we say that the codification ⟨x⟩ is unambiguous if it is injective and the inverse functions ⟨x⟩ ↦ x_1 and ⟨x⟩ ↦ x_2 are computable. If x is a tuple (x_1, …, x_k), then the codification is unambiguous if each function ⟨x⟩ ↦ x_i is computable.
A sequence of strings s_1, s_2, … is computable if the function i ↦ s_i is computable. A real number is computable if its decimal expansion is a computable sequence. For complex numbers and higher dimensional spaces, we say that they are computable if each of their coordinates is also computable.
Finally, for each of the objects described, we refer to the codification ⟨M⟩ of the associated Turing machine M as the representation of the object for the reference Turing machine U, and we define the computability of further objects by considering their representations. For example, a function f over computable objects is computable if the induced mapping between representations is computable, and we will denote by ⟨f⟩ the representation of the associated Turing machine, calling it the codification of f itself.
1.2 Algorithmic Descriptive Complexity
Given a prefix-free universal Turing machine U with alphabet Σ, the algorithmic descriptive complexity (also known as Kolmogorov complexity and Kolmogorov-Chaitin complexity [25, 15]) of a string s is defined as

K_U(s) = min{ |p| : U(p) = s },

where U is a universal prefix-free Turing machine and |p| is the number of characters of p.
Algorithmic descriptive complexity measures the minimum amount of information needed to fully describe a computable object within the framework of a universal Turing machine U. If U(p) = s then the program p is called a description of s. The first of the smallest descriptions (in alphabetical order) is denoted by s*, and by s̄ we denote a not necessarily minimal description computable over the class of objects. If M is a Turing machine, a program ⟨M⟩ is a description or codification of M for U if for every string x we have it that U(⟨M⟩⟨x⟩) = M(x). In the case of numbers, functions, sequences and other computable objects we consider the descriptive complexity of their smallest descriptions. For example, for a computable function f, K(f) is defined as |f*|, where f* is the first of the minimal descriptions for f.
Of particular importance for this document is the conditional descriptive complexity, which is defined as:

K(s|w) = min{ |p| : U(w* p) = s },

where w* p is the concatenation of w* and p. This measure can be interpreted as the smallest amount of information needed to describe s given a full description of w. We can think of p as a program with input w*.
One of the most important properties of the descriptive complexity measure is its stability: the difference between the descriptive complexity of an object, given two universal Turing machines, is at most constant. Therefore the reference machine is usually omitted in favor of the universal measure K. From now on we will omit the subscript U from the measure.
Given a natural number c, a string s is considered c-random or incompressible if K(s) ≥ |s| − c. This definition would have it that a string is random if it does not have a significantly shorter complete description than the string itself. A simple counting argument shows the existence of random strings. Now, it is easy to verify that every string s has a self-delimited, unambiguous computable codification with strings of the form 1^{|l(s)|} 0 l(s) s (|l(s)| 1s followed by a 0, then the binary string l(s) corresponding to |s|, concatenated with the string itself [28, section 1.4]). Therefore, there exists a natural number c such that if s is c-random then |s| − c ≤ K(s) ≤ |s| + O(log |s|), where O(log |s|) is a positive term. We will say that such strings hold the randomness inequality tightly.
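The self-delimited codification just described (a unary copy of the length of the binary numeral for |s|, a separating zero, the numeral itself, then s) can be sketched as follows; the function names are ours:

```python
def encode(s):
    """Self-delimiting code: |l(s)| ones, a zero, l(s) (binary numeral
    for the length of s), then s itself."""
    l = format(len(s), 'b')
    return '1' * len(l) + '0' + l + s

def decode(stream):
    """Read one self-delimited string from the front of a bit stream;
    return the decoded string and the unread remainder."""
    k = 0
    while stream[k] == '1':       # unary prefix: length of the numeral l
        k += 1
    l = stream[k + 1 : k + 1 + k]  # the k bits of the binary numeral
    n = int(l, 2)                  # |s|
    body = stream[2 * k + 1 : 2 * k + 1 + n]
    return body, stream[2 * k + 1 + n:]
```

Because every codeword announces its own length, no codeword is a proper prefix of another, and the overhead is 2|l(s)| + 1 ≈ 2 log |s| bits, matching the O(log |s|) term in the randomness inequality.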
Let M be a halting Turing machine with description ⟨M⟩ for the reference machine U. A simple argument can show that the halting time of M cannot be a large random number. Let TC be a Turing machine that emulates M while counting the number of steps, returning the execution time upon halting. If t is a large random number, then M cannot stop in time t, otherwise the program ⟨TC⟩⟨M⟩ will give us a short description of t. This argument is summarized by the following inequality:

K(t) ≤ K(M) + O(1),

where t is the number of steps that it took the machine M to reach the halting state, the execution time of the machine.
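A minimal sketch of the counting machine TC, under the (hypothetical) convention that a machine is given by a successor function on states and halts by reaching the state None:

```python
def time_counter(step, state):
    """TC: emulate the machine given by `step`, counting steps until the
    halting state (here None) is reached; return the halting time."""
    t = 0
    while state is not None:
        state = step(state)
        t += 1
    return t

# A toy machine that counts down from its start state and then halts.
countdown = lambda s: s - 1 if s > 0 else None
```

Since TC plus a description of the machine fully determines the output, the halting time inherits the bound K(t) ≤ K(M) + O(1): a machine with a short description cannot halt at a time of high descriptive complexity.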
1.3 Computable Dynamical Systems
Formally, a dynamical system is a rule of evolution in time within a state space; a space that is defined as the set of all possible states of the system. In this paper we will focus on a functional model for dynamical systems with a constant initial state and variables representing the previous state and the time of the system. This model allows us to set halting states for each time on a discrete scale in order to study the impact of the descriptive complexity of time during the evolution of a discrete computable system.
A deterministic discrete space system is defined by an evolution function (or rule) of the form f(x_0, t) = x_t, where x_0 is called the initial state and t is a positive integer called the time variable of the system. The sequence of states x_0, x_1, x_2, … is called the evolution of the system. Given a reference universal Turing machine U, if f is a computable function and x_0 is a computable object, we will say that S = (f, x_0) is a computable dynamical system. An important property of computable dynamical systems is the uniqueness of the successor state, which implies that equal states must evolve equally given the same evolution function. In other words:

x_i = x_j implies x_{i+1} = x_{j+1}.

The converse is not necessarily true.
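A minimal computable dynamical system in this functional form, with a hypothetical successor rule on a finite space; a repeated state forces the whole evolution to repeat, illustrating the uniqueness of the successor state:

```python
def f(x0, t):
    """Evolution function: iterate a fixed successor rule t times from x0."""
    x = x0
    for _ in range(t):
        x = (3 * x + 1) % 10  # hypothetical successor rule
    return x
```

Starting from x_0 = 7 the evolution is 7, 2, 7, 2, …: since x_0 = x_2, every later step repeats, i.e. x_i = x_j forces x_{i+1} = x_{j+1}.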
Now, a complete description of a computable system should contain enough information to compute the state of the system at any time, and hence it must entail the codification of its evolution function f and a description of the initial state x_0, which is denoted by ⟨x_0⟩. As a consequence, if we only describe the system at time t by a codification of x_t, then we would not have enough information to compute the successive states of the system. So we will specify the complete description of a computable system at time t as an unambiguous codification S_t of the ordered pair composed of ⟨f⟩⟨x_0⟩ and t, i.e. S_t = ⟨⟨f⟩⟨x_0⟩, t⟩, with x_0 representing the initial state of the system. It is important to note that, for any computable and unambiguous codification function of the stated pair, we have K(x_t) ≤ |S_t| + O(1), as we can write a program that uses the descriptions for f, x_0 and t to find the parameters and then evaluate f(x_0, t), finally producing x_t.
1.4 Open-Ended Evolution in Computable Dynamical Systems
Informally, open-ended evolution (OEE) has been characterized as “evolutionary dynamics in which new, surprising, and sometimes more complex organisms and interactions continue to appear”. Establishing and defining the properties required for a system to exhibit OEE is considered an open question [7, 34, 35], and OEE has been proposed as a required property of evolutionary systems capable of producing life. This has been implicitly verified by various experiments in silico [29, 1, 27, 5].
One line of thought posits that open-ended evolutionary systems tend to produce families of objects of increasing complexity [6, 5]. Furthermore, for a number of complexity measures, it can be shown that the set of objects belonging to a given level of complexity is finite (for instance, there are finitely many strings s with K(s) ≤ n). Therefore an increase of complexity is a requirement for the continued production of new objects. A related observation, proposed by Chaitin [18, 17], associates evolution with the search for mathematical creativity, which implies an increase of complexity, as more complex mathematical operations are needed in order to solve interesting problems, which are required to drive evolution.
Following the aforementioned lines of thought, we have chosen to characterize OEE in computable dynamical systems as a process that has the property of producing families of objects of increasing complexity. Formally, given a complexity measure C, we say that a computable dynamical system exhibits open-ended evolution with respect to C if for every time t there exists a time t' such that the complexity of the system at time t' is greater than the complexity at time t, i.e. C(x_{t'}) > C(x_t), where a complexity measure is a (not necessarily computable) function that goes from the state space to a positive numeric space.
The existence of such systems is trivial for complexity measures on which any infinite set of natural numbers (not necessarily computable) contains a subset where the measure grows strictly:
Let C be a complexity measure such that any infinite set of natural numbers has a subset where C grows strictly. Then a computable system S produces an infinite number of different states if and only if it exhibits OEE for C.
Let S be a system that does not exhibit OEE, and C a complexity measure as described. Then there exists a time t such that for any other time t' we have C(x_{t'}) ≤ C(x_t), which holds true for any subset of states of the system. It follows that the set of states must be finite. Conversely, if the system exhibits OEE, then there exists an infinite subset of states on which C grows strictly, hence an infinity of different states. ∎
Given the previous lemma, a trivial computable system that simply produces all the strings in order exhibits OEE on a class of complexity measures that includes algorithmic descriptive complexity. However, we intuitively conjecture that such systems have a much simpler behaviour compared to that observed in the natural world and the artificial life systems referenced. To avoid some of these issues we propose a stronger version of OEE.
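The trivial enumerator and the OEE condition can be sketched with string length standing in as the complexity measure (the function names are ours); the search below always terminates because lengths grow without bound:

```python
def enumerate_state(t):
    """Trivial system: the state at time t is the t-th binary string in
    length-lexicographic order."""
    return bin(t + 1)[3:]

def next_more_complex(t, C=len):
    """Witness for the OEE condition: the first t' > t whose state is
    strictly more complex than the state at time t."""
    t2 = t + 1
    while C(enumerate_state(t2)) <= C(enumerate_state(t)):
        t2 += 1
    return t2
```

For every t a witness t' exists, so the enumerator satisfies the letter of the OEE definition while remaining behaviourally trivial, which motivates the stronger notion below.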
A sequence of naturals x_1, x_2, … exhibits strong open-ended evolution (strong OEE) with respect to a complexity measure C if for every index i there exists an index j such that C(x_j) > C(x_i), and the sequence of complexities does not drop significantly, i.e. there exists an n such that n ≤ i < j implies C(x_j) ≥ C(x_i) − f(i), where f is a positive function that does not grow significantly.
It is important to note that while the definition of OEE allows for significant drops in complexity during the evolution of a system, strong OEE requires that the complexity of the system not decrease significantly during its evolution. In particular we will require that the complexity drops as measured by f not grow as fast as the complexity itself, and that they reach a constant level an infinite number of times. Formally, C(x_i) − f(i) should not be upper-bounded for any infinite subsequence of indexes, for the smallest f for which the strong OEE inequality holds.
We will construe the concept of speed of growth of complexity in a comparative way: given two sequences of natural numbers a_1, a_2, … and b_1, b_2, …, the first grows faster than the second if for every infinite subsequence of indexes i_1, i_2, … and natural number m, there exists k such that a_{i_k} > b_{i_k} + m. Conversely, a subsequence of indexes i_1, i_2, … grows faster than a subsequence of indexes j_1, j_2, … if for every natural m there exists an N with k ≥ N such that i_k > j_k + m.
If a complexity measure is sophisticated enough to depend on more than just the size of an object, significant drops in complexity are a feature that can be observed in trivial sequences such as the ones produced by enumeration machines. Whether this is also true for non-trivial sequences is open to debate. However, if we classify random strings as low complexity objects and posit that non-trivial sequences must contain a limited number of random objects, then a non-trivial sequence must observe bounded drops in complexity in order to be capable of showing non-trivial OEE. This is the intuition behind the definition of strong OEE.
Now, in the literature on dynamical systems, random objects are often considered simple ([2, pp. 1]), with complexity being taken to lie between regularity and randomness. Various complexity measures have been proposed that assign low complexity to random or incompressible natural numbers. Two examples of such measures are logical depth and sophistication. Classifying random naturals as low complexity objects is a requirement for the results shown in section 3, Beyond Halting States: Open-Ended Evolution.
2 A Computational Model for Adaptation
Let us start by describing the evolution of an organism or a population by a computable dynamical system. It has been argued that in order for adaptation and survival to be possible, an organism must contain an effective representation of the environment, so that, given a reading of the environment, the organism can choose a behaviour accordingly. The more approximate this representation, the better the adaptation. If the organism is computable, this information can be codified by a computable structure. We will denote this structure by g_t, where t stands for the time corresponding to each of the stages of the evolution of the organism. This information is then processed following a finitely specified unambiguous set of rules that, in finite time, will determine the adapted behaviour of the organism according to the information codified by g_t. We will denote this behaviour (or a theory explaining it) using the program p. An adapted system is one that produces an acceptable approximation of its environment. An environment can also be represented by a computable structure e. In other words, the system is adapted if p, given g_t, produces e. Based on this idea we propose a robust, formal characterization for adaptation:
Let K be the prefix-free descriptive complexity. We say that the system S at the state x_t is ε-adapted to the environment e if:

K(e|S_t) ≤ ε.

The inequality states that the minimal amount of information that is needed to describe e from a complete description of S_t is ε or less. This information is provided in the form of a program p that produces e from the system at time t. We will define such a program as the adapted behaviour of the system. It is not required that p be unique.
The proposed structure for adapted systems is robust, since K(e|S_t) is less than or equal to the number of characters needed to describe any computable method of describing e from the state of the system at time t, whether it be a computable theory for adaptation or a computable model for an organism that tries to predict e. It follows that any computable characterization of adaptation that can be described within ε number of bits meets the definition of ε-adapted, given a suitable choice of ε, the adaptation condition for any given environment. It is important to note that, although inspired by a representationalist approach to adaptation, the proposed characterization of adaptation is not contingent on the organism's containing an actual codification of the environment, since any organism that can produce an adapted behaviour that can be explained effectively (is computable in finite time) is ε-adapted for some ε.
As a simple example, we can think of an organism that must find food located at the coordinates (i, j) on a grid in order to survive. If the information in an organism is codified by a computable structure g (such as DNA), and there is a set of finitely specified, unambiguous rules that govern how this information is used (such as the ones specified by biochemistry and biological theories), codified by a program p, then we say that the organism finds the food if p, given g, produces (i, j). If K((i, j)|⟨p, g⟩) ≤ ε, then we say that the organism is adapted according to a behaviour that can be described within ε characters. The proposed model for adaptation is not limited to such simple interactions. For a start, we can suppose that the organism sees a grid, denoted by G, of size n × n with food at the coordinates (i, j). The environment can be codified as a function e such that e(G) = (i, j), and being ε-adapted implies that the organism defined by the genetic code g, which is interpreted by a theory or behaviour written on ε bits, is capable of finding the food upon seeing G. Similarly, more complex computational structures and interactions imply ε-adaptation.
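The grid example can be sketched as follows; the marker 'F' and the names environment and behaviour are illustrative assumptions, not notation from the text:

```python
FOOD = 'F'

def environment(grid):
    """e: maps the observed grid G to the coordinates (i, j) of the food."""
    return next((i, j) for i, row in enumerate(grid)
                for j, cell in enumerate(row) if cell == FOOD)

def behaviour(g, grid):
    """p: a toy adapted behaviour -- scan the grid for the cell matching
    the organism's genetically encoded food marker g."""
    return next(((i, j) for i, row in enumerate(grid)
                 for j, cell in enumerate(row) if cell == g), None)
```

The organism is adapted to a grid G exactly when behaviour(g, G) equals environment(G); the pair formed by p and g is then a short effective description of the environment's output, in the spirit of ε-adaptation.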
Now, describing an evolutionary system that (eventually) produces an ε-adapted system is trivial via an enumeration machine (the program that produces all the natural numbers in order), as it will eventually produce e itself. Moreover, we require the output of our process to remain adapted. Therefore we propose a stronger condition called convergence:
Given the description S_t of a computable dynamical system, where t is the variable of time, x_0 is an initial state and e is an environment, we say that the system converges towards e with degree ε if there exists t_0 such that t ≥ t_0 implies K(e|S_t) ≤ ε.
For a fixed initial state x_0 and environment e, it is easy to see that the descriptive complexity of a state of the system depends mostly on t: we can describe a program that, given full descriptions of f, x_0 and t, finds x_t. Therefore

K(x_t) ≤ K(t) + c_S,

where the constant term c_S is the length of the program described (together with the fixed descriptions of f and x_0). In other words, as the time grows, time becomes the main driver for the descriptive complexity within the system.
2.1 Irreducibility of Descriptive Time Complexity
In the previous section, it was established that time was the main factor in the descriptive complexity of the states within the evolution of a system. This result is expanded by the time complexity stability theorem (5). This theorem establishes that, within an algorithmic descriptive complexity framework, similarly complex initial states must evolve into similarly complex future states over similarly complex time frames, effectively erasing the difference between the complexity of the state of the system and the complexity of the corresponding time, and establishing absolute limits to the reducibility of future states.
Let t_r be the real execution time of the system at time t. Using our time counting machine TC, it is easy to see that t_r is computable and, given the uniqueness of the successor state, increases strictly with t, and hence is injective. Consequently, it has a computable inverse over its image. Therefore, we have it that (up to a small constant) K(t_r|t) = O(1) and K(t|t_r) = O(1). It follows that |K(t) − K(t_r)| ≤ c_S, where c_S is an integer independent of t (but that can depend on S). In other words, for a fixed system S, the execution time and the system time are equally complex up to a constant. From here on we will not differentiate between the complexity of both times. A generalization of the previous equation is given by the following theorem:
Theorem 5 (Time Complexity Stability).
Let S = (f, x_0) and S' = (f', x_0') be two computable systems, and t and t' the first times where each system reaches the states x and x' respectively. Then there exists c such that K(t) ≤ K(x) + c and K(x) ≤ K(t) + c. Specifically:
There exists a natural number c that depends on S and S', but not on t and t', such that |K(t) − K(t')| ≤ |K(x) − K(x')| + c.
If K(x) ≤ K(x') + O(1) and K(x') ≤ K(x) + O(1), then there exists a constant that does not depend on the states such that |K(t_m) − K(t'_m)| ≤ c, where t_m and t'_m are the minimum times for which the corresponding state is reached.
Let S and S' be two dynamical systems with an infinite number of equally (up to a constant) descriptively complex times t_i and t'_i. For any infinite subsequence of times with strictly growing descriptive complexity, all but finitely many pairs t_i, t'_i such that K(t_i) = K(t'_i) + O(1) comply with the equation: K(x_{t_i}) = K(x'_{t'_i}) + O(1).
First, note that we can describe a program that, given ⟨f⟩, ⟨x_0⟩ and x_t, runs f(x_0, i) for each i until it finds x_t. Therefore

K(t) ≤ K(x_t) + c_S.

Similarly for S'. By the inequality 4 and the hypothesized equalities we obtain
which implies the first part. The second part is a direct consequence.
For the third part, suppose that there exists an infinity of times such that K(x_{t_i}) < K(x'_{t'_i}) − i. Therefore K(t_i) < K(t'_i) − i + c, which implies that the difference |K(t_i) − K(t'_i)| is unbounded, which is a contradiction of the first part. Analogously, the other inequality yields the same contradiction. ∎
The slow growth of time is a possible objection to the assertion that, in the descriptive complexity of systems, time is the dominating parameter for predicting their evolution: the function K(t) grows within an order of log t, which is very slow and often considered insignificant in the information theory literature. However, we have to consider the scale of time we are using. For instance, one second of real time in the system we are modelling may mean an exponential number of discrete time steps for our computable model (for instance, if we are modelling a genetic machine with current computer technology), yielding a potential polynomial growth in their descriptive complexity. However, if this time conversion is computable, then the descriptive complexity of the converted time grows at most by a constant. This is an instance of irreducibility, as there exist infinite sequences of times that cannot be obtained by computable methods. In the upcoming sections we will call such times random times and the sequences containing them will be deemed irreducible.
2.2 Non-Randomness of Decidable Convergence Times
One of the most important issues for science is predicting the future behaviour of dynamical systems. The prediction we will focus on concerns the first state of convergence (definition 4): will a system converge, and how long will it take? In this section we shall show the limit that decidability imposes on the complexity of the first convergent state. A consequence of this is the existence of undecidable adapted states.
Formally, for the convergence of a system with degree ε to be decidable, there must exist an algorithm A such that A(S_t) = 1 if the system is convergent at time t and A(S_t) = 0 otherwise. Moreover, we can describe a machine M_A such that, given full descriptions of A, f and x_0, it runs A with inputs ⟨f⟩ and ⟨x_0⟩ while running over all the possible times t, returning the first time t_ε for which the system converges. Note that M_A(⟨A⟩⟨f⟩⟨x_0⟩) = t_ε. Hence we have a short description of t_ε and therefore t_ε cannot be random: if S is a convergent system then

K(t_ε) ≤ K(A) + K(f) + K(x_0) + O(1),

where t_ε is the first time at which convergence is reached. Note that all the variables are known at the initial state of the system. This result can be summed up by the following lemma:
Let S be a system convergent at time t_ε. If t_ε is considerably more descriptively complex than the system and the environment, i.e. if for every reasonably large natural number n we have it that

K(t_ε) > n + K(f) + K(x_0) + O(1),

then t_ε cannot be found by an algorithm described within n number of characters.
It is a direct consequence of the inequality 7. ∎
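The machine that witnesses the non-randomness of the first convergence time is just an exhaustive search wrapped around the decision algorithm; a minimal sketch, where the argument A is a stand-in for any computable convergence test:

```python
def first_convergence_time(A):
    """M_A: return the first time t for which the decision algorithm A
    reports convergence.  Diverges if the system never converges."""
    t = 0
    while not A(t):
        t += 1
    return t
```

Because this short program together with descriptions of A, f and x_0 outputs the first convergence time, that time inherits the bound of inequality 7 and cannot be a large random number.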
We call such times random convergence times and the state x_{t_ε} of the system a random state. It is important to note that the descriptive complexity of a random state must also be high:
Let S be a convergent system with a complex (random) convergence time t_ε. For every reasonably large n we have it that

K(x_{t_ε}) > n.
Suppose the contrary to be true, i.e. that there exists a small enough c such that K(x_{t_ε}) ≤ c. Let q be the program that, given ⟨f⟩, ⟨x_0⟩ and x*_{t_ε}, runs f(x_0, t) in order for each t and compares the result to x_{t_ε}, returning the first time where the equality is reached. Therefore, given the uniqueness of the successor state (2), q finds t_ε and

K(t_ε) ≤ K(x_{t_ε}) + K(f) + K(x_0) + O(1) ≤ c + K(f) + K(x_0) + O(1),

which gives us a small upper bound on the complexity of the random convergence time t_ε. ∎
In other words, if t_ε has high descriptive complexity, then there does not exist a reasonable algorithm that finds it, even if we have a complete description of the system and its environment. It follows that the descriptive complexity of a computable convergent state cannot be much greater than the descriptive complexity of the system itself.
What a reasonably large n is has been handled so far with ambiguity, as it represents the descriptive complexity of any computable method we may intend to use to find convergence times, which intuitively cannot be arbitrarily large. It is easy to ‘cheat’ on the inequality 7 by including in the description of the program the full description of the convergence time t_ε, which is why we ask for reasonable descriptions.
Another question left to be answered is whether complex convergence times do exist for a given limit n, considering that the limits imposed by the inequality 7 loosen up in direct relation to the descriptive complexity of A, f and x_0.
The next result answers both questions by proving the existence of complex convergence times for a broad characterization of the size of n:
Lemma 8 (Existence of Random Convergence Times).
Let h be a total computable function. For any h there exists a system S such that the convergence times are h(|⟨S⟩|)-random.
Let n and c be two natural numbers such that n ≥ h(|⟨S⟩|) + c. By reduction to the Halting Problem (HP) it is easy to see the existence of n-random convergence times: let M be a Turing machine, and let M' be the Turing machine that emulates M for t steps with input x and returns 1 for every time equal to or greater than the halting time, and 0 otherwise. Let us consider the system S = (M', x).
If the convergence times are not n-random, then there exists a constant c such that we can decide HP by running, for each ⟨M⟩, an algorithm described within n + c characters, which cannot be done, since HP is undecidable. ∎
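The reduction in the proof can be sketched as follows, again representing a machine (hypothetically) by a step function that returns None upon halting; the state of the constructed system is 1 exactly from the halting time onward, so deciding its first convergence time decides halting:

```python
def make_system(step, s0):
    """The system of the proof: the state at time t is 1 if the emulated
    machine (step function, start state s0) has halted within t steps,
    and 0 otherwise."""
    def f(t):
        s = s0
        for _ in range(t):
            if s is None:       # the machine already halted
                return 1
            s = step(s)
        return 1 if s is None else 0
    return f
```

For a machine halting at time h, f(t) = 0 for t < h and f(t) = 1 for t ≥ h, so the first convergent time of the system is exactly the halting time of the emulated machine.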
Let us focus on what the previous lemma is saying: h can be any computable function. It can be a polynomial or exponential function with respect to the length of a given description for f and x_0. It can also be any computable theory that we might propose for setting an upper limit to the size of an algorithm that finds convergence times given descriptions of the system's behaviour, environment and initial state. In other words, for a class of dynamical systems, finding convergence times, and therefore convergent states, is not decidable, even with complete information about the system and its initial state. Finally, by the proof of the lemma, adapted states can be seen as a generalization of halting states.
2.3 Randomness of Convergence in Dynamic Environments
So far we have limited the discussion to fixed environments. However, as observed in the physical world, the environment itself can change over time. We call such environments dynamic environments. In this section we extend the previous results to cover environments that change depending on time as well as on the initial state of the system. We also propose a weaker convergence condition called weak convergence and a necessary (but not sufficient) condition for the computability of convergence times called descriptive differentiability.
We can think of an environment as a dynamic computable system e_t, a moving target that also changes with time and depends on the initial state of the system. In order for the system to be convergent, we propose the same criterion—there must exist t_0 such that t ≥ t_0 implies

K(e_t|S_t) ≤ ε.
A system with a dynamic environment also meets the inequality 7 and lemmas 6 and 8, since we can describe a machine that runs both f and e for the same time t. Given that e_t is a moving target, it is convenient to consider an adaptation period for the new states of e_t:
We say that S converges weakly to e_t if there exists an infinity of times t_i such that K(e_{t_i}|S_{t_i}) ≤ ε.
Let S be a weakly converging system. Any decision algorithm can only decide the first non-random convergence time.
As noted above, these results do not change when dynamic environments are considered. In fact, we can think of static environments as a special case of dynamic environments. However, with different targets of adaptability and convergence, it makes sense to generalize beyond the first convergence time. Also, it should be noted that specifying a convergence index adds additional information that a decision algorithm can potentially use.
Let S be a weakly converging system with an infinity of random times t_i such that i < j implies that K(t_j|t_i) ≥ g(i, j), where g is a (not necessarily computable) function with a range confined to the positive integers. If the function g is unbounded with respect to j, then any decision algorithm A(S, i) = t_i, where t_i is the i-th convergence time, can only decide a finite number of t_i's.
Suppose that A can decide an infinite number of instances. Let us consider two times t_i and t_j. Note that we can describe a program that, by using ⟨f⟩, ⟨x_0⟩ and ⟨A⟩ together with the distance j − i, finds t_j from t_i. The next inequality follows:

K(t_j|t_i) ≤ K(j − i) + c.

Next, note that we can describe another program that, given t_i and using ⟨f⟩, ⟨x_0⟩ and ⟨A⟩, finds i, from which

K(t_j|t_i) ≤ K(j − i) + c',

and g is bounded with respect to j. ∎
We will say that a sequence of times is non-descriptively differentiable if the associated function is not a total function, which, as a consequence of the previous lemma, implies non-computability of the sequence.
3 Beyond Halting States: Open-Ended Evolution
Inequality 7 states that being able to predict or recognize adaptation imposes a limit to the descriptive complexity of the first adapted state. A particular case is the halting state, as shown in the proof of lemma 8. In this section we extend the lemma to continuously evolving systems, showing that computability of adapted times limits the complexity of adapted states beyond the first, imposing a limit to open-ended evolution for three complexity measures: sophistication, coarse sophistication and busy beaver logical depth.
For a system in constant evolution converging to a dynamic environment, lemma 11 imposes a limit on the growth of the descriptive complexity of a system with computable adapted states: if the growth of the descriptive complexity of a sequence of convergent times is unbounded in the sense of definition 12, then all but a finite number of times are undecidable. The converse would be convenient; however, it is not always true. Moreover, the next series of results shows that imposing such a limit would impede strong OEE:
Let be a non-cyclical computable system with initial state , a dynamic environment, and a sequence of times such that for each there exists a total function such that . If the function is computable, then the function is computable.
Assume that is computable. We can describe a program such that, given , , and , runs and for each time , returning if -th is such that , and otherwise. Therefore the sequence of ’s is computable. ∎
The last result can be applied naturally to weakly convergent systems (9): the way each adapted state approaches to is unpredictable; in other words, its behaviour changes unpredictably over different stages. Formally:
Let be a weakly converging system, with adapted states and its respective adapted behaviour. If the mapping is non-descriptively differentiable then the function is not computable.
It is a direct consequence of applying theorem 13 to the definition of weakly converging systems. ∎
While asking for totality might look like an arbitrary limitation at first glance, the reader should recall that in weakly convergent systems the program represents an organism, a theory or another computable system that uses 's information to predict the behaviour of , and if this predictor does not process its environment within a sensible time frame then it is hard to argue that it represents an adapted system or a useful theory.
The intuition behind classifying descriptively differentiable adapted time sequences as less complex is best explained by borrowing ideas developed by Bennett and Koppel within the frameworks of logical depth  and sophistication , respectively. Their argument is that random strings are as simple as very regular strings, since there is no complex underlying structure in their minimal descriptions. The intuition that random objects contain no useful information leads us to the same conclusion. And given theorem 5, the states must retain a high degree of randomness at random times.
Sophistication is a measure of the useful information within a string. The underlying approach, proposed by Koppel, consists of dividing the description of a string into two parts: the program, which represents the underlying structure of the object, and the input, which is the random or structureless component of the object. This function is denoted by , where is a natural number representing the significance level.
The sophistication of a natural number at the significance level , , is defined as:
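For reference, the standard formulation due to Koppel, in the notation of Antunes and Fortnow (the paper's exact variant may differ in details), is:

```latex
\operatorname{soph}_c(x) \;=\; \min\bigl\{\, |p| \;:\; p \text{ is total},\ \exists d\ U(\langle p,d\rangle)=x,\ |p|+|d| \le K(x)+c \,\bigr\}
```

That is, among all two-part descriptions of x whose total length is within c bits of minimal, sophistication is the length of the shortest structural (total-program) part.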
Now, the images of a mapping already have the form , where and represent the structure and the random component, respectively. Random strings should tightly bound this structure, up to a logarithmic error, as proven in the next lemma.
Let be a sequence of different natural numbers and a natural number. If the function is computable then there exists an infinite subsequence where the sophistication is bounded up to a logarithm of a logarithmic term of their indexes.
Let be a computable function. Note that since is computable and the sequence is composed of different naturals, its inverse function can be computed by a program which, given a description of and , finds the first that produces and returns it; therefore and . Now, if is a -random natural where the inequality holds tightly, we have that , which implies that, since is a total function, . Therefore, the sophistication is bounded up to a logarithm of a logarithmic term, for a constant significance level, for an infinite subsequence. ∎
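The "logarithm of a logarithmic term" in this bound is the familiar overhead of self-delimiting descriptions: a prefix-free code for an index i costs about log i + 2 log log i bits. A small Python illustration using the standard Elias delta code (our choice of code for illustration, not one taken from the paper):

```python
def elias_delta(n: int) -> str:
    """Prefix-free (self-delimiting) binary code for a positive integer n."""
    assert n >= 1
    nbits = n.bit_length()        # floor(log2 n) + 1 bits in n
    lbits = nbits.bit_length()    # bits needed for the length field
    # unary padding for the length of the length, the length in binary,
    # then n in binary with its (implicit) leading 1 dropped
    return "0" * (lbits - 1) + format(nbits, "b") + format(n, "b")[1:]


def delta_length(n: int) -> int:
    """Code length: floor(log2 n) + 2*floor(log2(floor(log2 n) + 1)) + 1."""
    return len(elias_delta(n))
```

For example, `delta_length(10**6)` is 28 against log2(10**6) ≈ 19.9: the excess over log n is exactly the doubly logarithmic length field, which is where bounds of this shape come from.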
Small changes in the significance level of sophistication can have a large impact on the sophistication of a given string. Another possible issue is that the constant proposed in lemma 16 could appear to be large at first (but it becomes comparatively smaller as grows). A robust variation of sophistication called coarse sophistication  incorporates the significance level as a penalty. The definition presented here differs slightly from theirs in order to maintain congruence with the chosen prefix-free universal machine and to avoid negative values. This measure is denoted by .
The coarse sophistication of a natural number is defined as:
where is a computable unambiguous codification of .
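The Antunes–Fortnow definition, which the text says it modifies slightly (to match the chosen prefix-free universal machine and avoid negative values), reads:

```latex
\operatorname{csoph}(x) \;=\; \min\bigl\{\, 2|p|+|d| \;:\; p \text{ is total},\ U(\langle p,d\rangle)=x \,\bigr\} \;-\; K(x)
```

Charging the structural part twice makes it pay for its own significance level, which removes the free parameter c of plain sophistication.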
By an argument similar to the one used to prove lemma 16, it is easy to show that coarse sophistication is similarly bounded up to a logarithm of a logarithmic term.
Let be a sequence of different natural numbers and a natural number. If the function is computable, then there exists an infinite subsequence where the coarse sophistication is bounded up to a logarithm of a logarithmic term.
If is computable and is -random, then by the definition of and the inequalities presented in the proof of lemma 16, we have that
Another proposed measure of complexity is Bennett’s logical depth , which measures the minimum computational time required to compute an object from a nearly minimal description. Logical depth works under the assumption that complex or deep natural numbers take a long time to compute from nearly minimal descriptions. Conversely, random or incompressible strings are shallow, since their minimal descriptions must contain the string itself nearly verbatim. For the next result we will use a related measure called busy beaver logical depth, denoted by .
The busy beaver logical depth of the description of a natural , denoted by , is defined as:
where is the halting time of the program and , known as the busy beaver function, is the halting time of the slowest program that can be described within bits .
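Although the busy beaver function is uncomputable, its first values can be recovered by brute force for very small machines. A minimal Python sketch (our illustration, with assumptions not taken from the paper: 2-symbol Turing machines, Radó's step-count variant S(n), and halting detected by a generous step cutoff):

```python
from itertools import product


def simulate(delta, n_states, cutoff):
    """Run a 2-symbol TM from a blank tape; return its halting step count,
    or None if it has not halted within `cutoff` steps."""
    tape, head, state, steps = {}, 0, 0, 0
    while steps < cutoff:
        write, move, nxt = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        state = nxt
        steps += 1
        if state == n_states:      # the extra state index is the halt state
            return steps
    return None


def bb_lower_bound(n_states, cutoff=200):
    """Max halting time over all n-state 2-symbol TMs, using a step cutoff.
    This is a lower bound on Rado's S(n); for n <= 2 a generous cutoff
    makes it exact, while for larger n the method necessarily fails."""
    keys = [(q, s) for q in range(n_states) for s in (0, 1)]
    moves = [(w, m, t) for w in (0, 1) for m in (-1, 1)
             for t in range(n_states + 1)]
    best = 0
    for choice in product(moves, repeat=len(keys)):
        t = simulate(dict(zip(keys, choice)), n_states, cutoff)
        if t is not None and t > best:
            best = t
    return best
```

`bb_lower_bound(2)` recovers Radó's value S(2) = 6; for larger n the cutoff approach breaks down, which is precisely the uncomputability these results rely on.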
Let be a sequence of different natural numbers and a natural number. If the function is computable, then there exists an infinite subsequence where the busy beaver logical depth is bounded up to a logarithm of a logarithmic term of their indexes.
These last results imply that either the complexity of the adapted states of a system (using any of the three complexity measures) grows very slowly for an infinite subsequence of times (becoming increasingly common up to a probability limit of 1) or the subsequence of adapted times is undecidable.
If is a weakly converging system with adaptation times that exhibits strong OEE with respect to and , then the mapping is not computable. Also, there exists a constant such that the result applies to .
We can see the sequence of adapted states as a function . By lemmas 16 and 18 and corollary 20, for the three stated measures of complexity there exists an infinite subsequence where the respective complexity is upper bounded by . It follows that if the complexity grows faster than for an infinite subsequence, then there must exist an infinity of indexes in the bounded subsequence where grows faster than . Therefore there exists an infinity of indexes where is upper bounded. Finally, note that if a computable mapping allowed growth on the order of , then the computable function would grow faster than the stated bound. ∎
Now, in the absence of absolute solutions to the problem of finding adapted states in the presence of strong OEE, one might cast about for a partial solution or approximation that decides most (or at least some) of the adapted states. The following corollary shows that the problem is not even semi-computable: any algorithm one might propose can only decide a bounded number of adapted states.
If is a weakly converging system with adapted states that show strong OEE, then the mapping is not even semi-computable.
Note that for any subsequence of adaptation times , the system must show strong OEE. Therefore, by theorem 21, any subsequence must also not be computable. It follows that there cannot exist an algorithm that produces an infinity of elements of the sequence, since such an algorithm would allow the construction of a computable subsequence of adaptation times. ∎
In short, theorem 21 imposes undecidability on strong OEE and, according to theorem 14, the behaviour and interpretation of the system evolve in an unpredictable way, establishing one path toward emergence: a set of rules for future states that cannot be reduced to an initial set of rules. Recall that for a given weakly converging dynamical system, the sequence of programs represents the behaviour or interpretation of each adapted state . If a system exhibits strong OEE with respect to the complexity measures , or , then by corollary 14 and theorem 21 the sequence of behaviours is uncomputable, and therefore irreducible to any function of the form , even when possessing complete descriptions of the behaviour of the system, its environment and its initial state. In other words, the behaviour of successive adapted states cannot be obtained from the initial set of rules. Furthermore, we conjecture that the results hold for all adequate measures of complexity:
Computability bounds the rate of complexity growth to the order of the slowest-growing infinite subsequence with respect to any adequate complexity measure .
3.1 A System Exhibiting OEE
With the aim of providing mathematical evidence for the adequacy of Darwinian evolution, Chaitin developed a mathematical model that converges to its environment significantly faster than exhaustive search, coming fairly close to an intelligent solution of a mathematical problem that requires maximal creativity [18, 17].
One of the solutions Chaitin proposes is to find digital organisms that approximate the busy beaver function:
which is equivalent (up to a constant) to asking for the largest natural number that can be named within a given number of bits, and for the first bits of Chaitin’s constant, which is defined as , where is the set of all halting Turing machines for the universal machine . We will omit the subscript from in the rest of this text.
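In Chaitin's standard notation, with HP the set of halting programs for the prefix-free universal machine U, the constant is:

```latex
\Omega_U \;=\; \sum_{p \,\in\, HP} 2^{-|p|}
```

Since U is prefix-free, the sum converges by the Kraft inequality, so Ω is a well-defined real in (0, 1).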
Chaitin’s evolutionary system searches non-deterministically through the space of Turing machines using a reference universal machine with the property that all strings are valid programs. This random walk starts with the empty string , and each new state is defined as the output of a Turing machine, called a mutation, with the previous state as an input. These mutations are chosen stochastically according to the universal distribution . If these mutations help to more accurately approximate the digits of , then this program becomes the new state , otherwise we keep searching for new organisms. Chaitin demonstrates that the system approaches efficiently (with quadratic overhead), arguing that this is evidence of the adequacy of Darwinian evolution .
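Chaitin's actual fitness, the digits of Ω, is uncomputable, but the random-walk-with-selection scheme itself is easy to sketch. The following toy Python version uses stand-ins of ours, not Chaitin's: organisms are bit strings, the environment is a fixed target string in place of the uncomputable goal, mutations are single-bit flips, insertions and deletions chosen uniformly rather than by the universal distribution, and a mutant survives only if it strictly improves fitness:

```python
import random


def evolve(target: str, steps: int = 20000, seed: int = 0) -> str:
    """Random-mutation hill climber: a mutation survives only if it brings
    the organism strictly closer to the environment (a fixed bit string
    standing in for the uncomputable goal in Chaitin's model)."""
    rng = random.Random(seed)

    def fitness(s: str) -> int:
        # number of leading bits matching the target
        n = 0
        for a, b in zip(s, target):
            if a != b:
                break
            n += 1
        return n

    org = ""                      # the initial organism is the empty string
    for _ in range(steps):
        m = list(org)             # one random mutation: flip, insert or delete
        kind = rng.choice(("flip", "insert", "delete"))
        if kind == "flip" and m:
            i = rng.randrange(len(m))
            m[i] = "1" if m[i] == "0" else "0"
        elif kind == "insert":
            m.insert(rng.randrange(len(m) + 1), rng.choice("01"))
        elif kind == "delete" and m:
            m.pop(rng.randrange(len(m)))
        cand = "".join(m)
        if fitness(cand) > fitness(org):   # selection: strict improvement only
            org = cand
    return org
```

Under these stand-ins the organism reconstructs the target prefix by prefix; swapping the computable fitness for the digits of Ω is exactly what makes the real system non-computable.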
Given that can be used to compute , a deterministic version of Chaitin’s system is the following:
where is the distance between the programs , which quantifies the number of mutations needed to transform one string into the other, and is a positive integer acting as an accumulator that resets to 1 whenever increases in value and increments by 1 otherwise.
Defining a computable environment or adaptation condition for this system is difficult since the system seeks to approach an uncomputable function () and the evolution rule itself is not computable given the halting problem. The most direct way to define it is or, equivalently, as the first -bits of Chaitin’s constant .
Another way to define the environment is by an encoding of the proposition larger than for each time . Given that we can compute and its relationship with given a description of the latter and a constant amount of information (), we find adaptation at the times where the busy beaver function grows.
It is easy to see that the sequence of programs is precisely what generates the busy beaver sequence . Given that is not a computable function, the evolution of the system, along with the respective adaptation times, is not computable. Furthermore, this sequence is composed of programs that compute, in order, an element of a sequence that exhibits strong OEE with respect to : let be the sequence of all busy beaver values; by definition, if is the first value for which was obtained, , where . It follows that and , otherwise would not be the minimal program.
Computing the system described requires a solution to the halting problem, and the system itself might seem unnatural at first glance. However, we can think of the biosphere as a huge parallel computer that constantly approximates solutions to the adaptation problem by means of survivability, much as has been approximated . And just as we cannot know whether a Turing machine will halt until it does, we may not know whether an organism will keep adapting and surviving in the future, but we can know when it has failed to do so (extinction).
4 Logical Depth and Future Work
Although we conjecture that theorem 21 must also hold for logical depth as defined by Bennett , extending the results to this measure is still work in progress. Encompassing logical depth will require a deeper understanding of the internal structure of the relationship between system and computing time, beyond time complexity stability (5), and might be related to open fundamental problems in computer science and mathematics. For instance, finding a low upper bound on the growth of the logical depth of all computable series of natural numbers would suggest a negative answer to the question of the existence of an efficient way of generating deep strings, which Bennett relates to the problem.
One way to understand conjecture 23 is that the information of the future states of a system is either contained in the initial state (hence their complexity is bounded by that of the initial state) or is undecidable. This should follow given that, for any computable dynamical system, the randomness induced by time cannot be avoided.
Given that we intend to expand upon these questions in the future, it is important to address the fact that the diagonal algorithm that Bennett proposes for generating deep strings appears to contradict our conjecture. The logical depth of a natural at the significance level is defined as:
The algorithm produces strings of length with depth for a significance level , where must be smaller than and must not be as large as (or larger than) , to avoid shallow strings. One possible issue with this algorithm is that the significance level is not computable, and we can expect it to vary greatly with respect to : for large with small (such as ), the significance level is nearly , which suggests that, for a steady significance level with respect to times with large , the growth in complexity might not be stable. This issue, along with an algorithm that consistently enumerates pairs of and s such that for growing ’s, will be explored in future work, and its solution would require a formal definition of adequate complexity measures. The fact that presents a challenge to conjecture 23 suggests an important difference from the three complexity measures used in this article.
We have presented a formal and general mathematical model for adaptation within the framework of computable dynamical systems. This model exhibits universal properties for all computable dynamical systems, of which Turing machines are a subset. Among other results, we have given formal definitions of open-ended evolution (OEE) and strong open-ended evolution and supported the latter on the basis that it allows us to differentiate between trivial and non-trivial systems.
We have also shown that decidability imposes universal limits on the growth of complexity in computable systems, as measured by sophistication, coarse sophistication and busy beaver logical depth. We show that as time dominates the descriptive algorithmic complexity of the states, the complexity of the evolution of a system tightly follows that of natural numbers, implying the existence of non-trivial states but the non-existence of an algorithm for finding these states or any subsequence of them, which makes the computations for harnessing or identifying them undecidable.
Furthermore, as a direct implication of corollary 14 and theorem 21, the undecidability of adapted states and the unpredictability of the behaviour of the system at each state are requirements for a system to exhibit strong open-ended evolution with respect to the complexity measures known as sophistication, coarse sophistication and busy beaver logical depth, providing rigorous proof that undecidability and irreducibility of future behaviour are required for the growth of complexity in the class of computable dynamical systems. We conjecture that these results can be extended to any adequate complexity measure that assigns low complexity to random objects. Finally, we provide an example of a (non-computable) system that exhibits strong OEE and supply arguments for its adequacy as a model of evolution, which we claim supports our characterization of strong OEE.
We would like to thank Carlos Gershenson García for his comments during the development of this project and to acknowledge support from grants CB-2013-01/221341 and PAPIIT IN113013.
-  C. Adami and C. T. Brown. Evolutionary learning in the 2D artificial life system avida. In Proc. Artificial Life IV, pages 377–381. MIT Press, 1994.
-  Christoph Adami. What is complexity? BioEssays, 24(12):1085–1094, 2002.
-  A. Adams, H. Zenil, P.W.C. Davies, and S.I. Walker. Formal definitions of unbounded evolution and innovation reveal universal mechanisms for open-ended evolution in dynamical systems. Scientific Reports (in press), 2016.
-  L. Antunes and L. Fortnow. Sophistication revisited. In ICALP: Annual International Colloquium on Automata, Languages and Programming, 2003.
-  Joshua Evan Auerbach and Josh C. Bongard. Environmental influence on the evolution of morphological complexity in machines. PLoS Computational Biology, 10(1), 2014.
-  Bedau. Four puzzles about life. Artificial Life, 4, 1998.
-  Bedau, McCaskill, Packard, Rasmussen, Adami, Green, Ikegami, Kaneko, and Ray. Open problems in artificial life. Artificial Life, 6, 2000.
-  C. H. Bennett. Logical depth and physical complexity. In R. Herken, editor, The Universal Turing Machine: A Half-Century Survey, pages 227–257. Oxford University Press, 1988.
-  Vincent D. Blondel, Olivier Bournez, Pascal Koiran, and John N. Tsitsiklis. The stability of saturated linear dynamical systems is undecidable. In Horst Reichel and Sophie Tison, editors, Symposium on Theoretical Aspects of Computer Science (STACS), Lille, France, volume 1770 of Lecture Notes in Computer Science, pages 479–490. Springer-Verlag, Feb 2000.
-  Olivier Bournez, Daniel S. Graça, Amaury Pouly, and Ning Zhong. Computability and computational complexity of the evolution of nonlinear dynamical systems. In Springer, editor, Computability in Europe (CIE’2013), Lecture Notes in Computer Science, 2013.
-  Olivier Bournez, Daniel S. Graça, Amaury Pouly, and Ning Zhong. The Nature of Computation. Logic, Algorithms, Applications: 9th Conference on Computability in Europe, CiE 2013, Milan, Italy, July 1-5, 2013. Proceedings, chapter Computability and Computational Complexity of the Evolution of Nonlinear Dynamical Systems, pages 12–21. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013.
-  Cristian S Calude, Michael J Dinneen, Chi-Kou Shu, et al. Computing a glimpse of randomness. Experimental Mathematics, 11(3):361–370, 2002.
-  Cristian S. Calude and Michael Stay. Most programs stop quickly or never halt. CoRR, abs/cs/0610153, 2006.
-  C.S. Calude and H. Jürgensen. Is complexity a source of incompleteness? Advances in Applied Mathematics, 35, 2005.
-  G. J. Chaitin. Algorithmic information theory. In Encyclopaedia of Statistical Sciences, volume 1, pages 38–41. Wiley, 1982.
-  Gregory Chaitin. Life as evolving software. In Hector Zenil, editor, A Computable Universe: Understanding and Exploring Nature as Computation, chapter 16. World Scientific Publishing Company, 2012.
-  Gregory Chaitin. Proving Darwin: Making Biology Mathematical. Vintage, 2013.
-  Gregory J. Chaitin. Evolution of mutating software. Bulletin of the EATCS, 97:157–164, 2009.
-  S. Barry Cooper. Emergence as a computability-theoretic phenomenon. Applied Mathematics and Computation, 215(4):1351–1360, 2009.
-  R. Daley. Busy beaver sets: Characterizations and applications. Information and Computation (formerly Information and Control), 52:52–67, 1982.
-  Jean-Charles Delvenne, Petr Kurka, and Vincent D. Blondel. Decidability and universality in symbolic dynamical systems. Fundam. Inform, 74(4):463–490, 2006.
-  E. Fredkin and T. Toffoli. Conservative logic. International Journal of Theoretical Physics, 21:219–253, 1982.
-  Gardner. Mathematical games: The random number omega bids fair to hold the mysteries of the universe. Scientific American, 241, 1979.
-  Walter Kirchherr, Ming Li, and Paul Vitányi. The miraculous universal distribution. The Mathematical Intelligencer, 19(4):7–15, 1997.
-  Andrey Kolmogorov. Three approaches to the quantitative definition of information. Problems Inform. Transmission, 1:1–7, 1965.
-  M. Koppel. Structure. In R. Herken, editor, The Universal Turing Machine: A Half-Century Survey, pages 435–452. Oxford University Press, 1988.
-  Joel Lehman and Kenneth O. Stanley. Exploiting open-endedness to solve problems through the search for novelty. In Seth Bullock, Jason Noble, Richard A. Watson, and Mark A. Bedau, editors, ALIFE, pages 329–336. MIT Press, 2008.
-  M. Li and P. Vitányi. An introduction to Kolmogorov complexity and its applications. Springer, 2nd edition, 1997.
-  Kristian Lindgren. Evolutionary phenomena in simple dynamics. In Christopher G. Langton, Charles Taylor, J. Doyne Farmer, and Steen Rasmussen, editors, Artificial Life II, pages 295–312. Addison-Wesley, Redwood City, CA, 1992.
-  N. Margolus. Physics-like models of computation. Physica D, 10:81–95, 1984. Discussion of reversible cellular automata illustrated by an implementation of Fredkin’s Billiard-Ball model of computation.
-  J. Meiss. Dynamical systems. Scholarpedia, 2(2):1629, 2007. revision #121407.
-  Christopher Moore. Generalized shifts: Unpredictability and undecidability in dynamical systems. Nonlinearity, 4(2):199–230, 1991.
-  K. Ruiz-Mirazo, J. Peretó, and A. Moreno. A universal definition of life: Autonomy and open-ended evolution. Origins of life and evolution of the biosphere, 34(3):323–346, 2002.
-  L. B. Soros and Kenneth O. Stanley. Identifying necessary conditions for open-ended evolution through the artificial life world of chromaria. In Fourteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE 14). MIT Press, 2014.
-  Russell K. Standish. Open-ended artificial evolution. International Journal of Computational Intelligence and Applications, 3(2):167–175, 2003.
-  Tim Taylor. Requirements for open-ended evolution in natural and artificial systems. CoRR, abs/1507.07403, 2015.
-  A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42:230–265, 1936.
-  Héctor Zenil, Carlos Gershenson, James A. R. Marshall, and David A. Rosenblueth. Life as thermodynamic evidence of algorithmic structure in natural environments. Entropy, 14(11), 2012.