Quantum-inspired identification of complex cellular automata

03/25/2021
by Matthew Ho, et al.
Nanyang Technological University

Elementary cellular automata (ECA) present iconic examples of complex systems. Though described only by one-dimensional strings of binary cells evolving according to nearest-neighbour update rules, certain ECA rules manifest complex dynamics capable of universal computation. Yet, the classification of precisely which rules exhibit complex behaviour remains a significant challenge. Here we approach this question using tools from quantum stochastic modelling, where quantum statistical memory – the memory required to model a stochastic process using a class of quantum machines – can be used to quantify the structure of a stochastic process. By viewing ECA rules as transformations of stochastic patterns, we ask: Does an ECA generate structure, as quantified by the quantum statistical memory, and if so, how quickly? We illustrate how the growth of this measure over time correctly distinguishes simple ECA from complex counterparts. Moreover, it provides a more refined means of quantitatively identifying complex ECA, yielding a spectrum on which we can rank their complexity by the rate at which they generate structure.




A: Sub-tree reconstruction algorithm

Here, inference of the classical statistical complexity $C_\mu$ is achieved through the sub-tree reconstruction algorithm [7]. It works by explicitly building an $\epsilon$-machine of a stochastic process, from which $C_\mu$ may readily be deduced. The steps are detailed below.

1. Constructing a tree structure. The sub-tree construction begins by drawing a blank node to signify the start of the process, with outputs drawn from the alphabet $\mathcal{A} = \{0, 1\}$. A moving window of size $L$ is chosen to parse through the process. Starting from the blank node, successive nodes are created with a directed link for every symbol in each moving window $x_{0:L}$. For any sequence starting from the blank node whose path can be traced with existing directed links and nodes, no new links or nodes are added. New nodes with directed links are added only when the sequence does not have an existing path. This is illustrated in Fig. S1.

For example, suppose the first parsed window is a string of six symbols ($L = 6$), giving rise to six nodes that branch outwards in serial from the initial blank node. If the next window traces the same path for its first five symbols but differs at the sixth, the first five nodes gain no new branches, while the sixth node gains a new branch connecting to a new node with a directed link. Each different element of the alphabet has its individual set of directed links and nodes, allowing a maximum of $|\mathcal{A}| = 2$ branches that originate from the blank node.

Figure S1: The sub-tree reconstruction algorithm, illustrated for a small window size.
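As a concrete sketch of step 1, the tree can be stored as a dictionary of prefix counts, with each key identifying a node by the string that leads to it. This is an illustrative implementation, not the authors' code; `build_subtree` and the binary alphabet are assumptions:

```python
from collections import defaultdict

def build_subtree(sequence, L):
    """Build the sub-tree from all length-L moving windows of a symbol
    sequence.  Each key is the prefix string identifying a node; its count
    records how often that node is visited.  New nodes (keys) appear only
    when a window's path cannot be traced through existing ones."""
    counts = defaultdict(int)
    for i in range(len(sequence) - L + 1):
        window = sequence[i:i + L]
        for depth in range(1, L + 1):
            counts[window[:depth]] += 1  # one directed link per symbol
    return counts

tree = build_subtree("0110100110010110", 3)
# At most two branches (symbols '0' and '1') leave the blank root node.
root_branches = [k for k in tree if len(k) == 1]
```

The per-branch probabilities of step 2 then follow by normalising each node's count by its parent's count.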

2. Assigning probabilities. The probability for each branch from the first node to occur can be determined by the ratio of the number of occurrences of the associated strings to the total number of strings. Correspondingly, this allows each link to be labelled with an output symbol $x$ and its respective transition probability.

3. Sub-tree comparison. Next, starting from the initial node, the tree structure of outputs is compared against all other nodes. Working through all reachable nodes from the initial node, any nodes with identical branch structure up to depth $L$ are given the same label. Because of finite data and finite $L$, a $\chi^2$ test is used to account for statistical artefacts: it merges nodes that have similar-enough tree structures. This step essentially enforces the causal equivalence relation on the nodes.
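The merging test of step 3 can be sketched as a two-sample Pearson chi-squared comparison of the next-symbol counts at two candidate-equivalent nodes. The specific statistic and the critical value 3.841 (the 0.05-significance threshold for one degree of freedom) are illustrative assumptions, not details from the paper:

```python
def chi2_statistic(counts_a, counts_b):
    """Pearson chi-squared statistic for a 2xK contingency table whose
    rows are the next-symbol counts observed at two tree nodes."""
    K = len(counts_a)
    row_totals = [sum(counts_a), sum(counts_b)]
    col_totals = [counts_a[k] + counts_b[k] for k in range(K)]
    grand = sum(row_totals)
    stat = 0.0
    for i, observed in enumerate((counts_a, counts_b)):
        for k in range(K):
            expected = row_totals[i] * col_totals[k] / grand
            stat += (observed[k] - expected) ** 2 / expected
    return stat

def should_merge(counts_a, counts_b, critical=3.841):
    """Merge nodes whose next-symbol distributions are statistically
    indistinguishable.  3.841 is the 0.05 critical value for K = 2
    symbols (dof = K - 1 = 1)."""
    return chi2_statistic(counts_a, counts_b) < critical
```

Nodes whose statistic falls below the critical value are treated as the same causal state.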

4. Constructing the $\epsilon$-machine. It is now possible to analyse each individually-labelled node, with its output symbols and transition probabilities to the next nodes. An edge-emitting hidden Markov model of the process can then be drawn up. This edge-emitting hidden Markov model represents the (inferred) $\epsilon$-machine of the process.

5. Computing the statistical complexity. The hidden Markov model associated with the $\epsilon$-machine has a transition matrix $T^{(x)}_{jk}$ giving the probability of the next output being $x$ given we are in causal state $S_k$, with $S_j$ being the causal state of the updated past. The steady-state of $T = \sum_x T^{(x)}$ (i.e., the eigenvector $\pi$ satisfying $T\pi = \pi$) gives the steady-state probabilities of the causal states. Taking $P(S_j) = \pi_j$, the Shannon entropy of this distribution gives the statistical complexity:

$C_\mu = -\sum_j P(S_j) \log_2 P(S_j).$ (S1)
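Putting steps 1–5 together, a compact (and deliberately simplified) estimate of $C_\mu$ clusters length-L pasts by their empirical next-symbol distributions, standing in for the full sub-tree comparison, and then takes the Shannon entropy of the state-occupation probabilities. The function name and the absolute-difference tolerance (in place of the chi-squared test) are assumptions for illustration:

```python
import math
from collections import Counter

def estimate_Cmu(sequence, L, tol=0.05):
    """Simplified statistical-complexity estimate: pasts whose conditional
    next-symbol distributions agree within `tol` are assigned to the same
    (approximate) causal state; C_mu is the entropy over those states."""
    pasts, nxt = Counter(), Counter()
    for i in range(len(sequence) - L):
        w = sequence[i:i + L]
        pasts[w] += 1
        nxt[(w, sequence[i + L])] += 1

    states, weights = [], []   # representative distributions, occupations
    for w, n in pasts.items():
        dist = tuple(nxt[(w, s)] / pasts[w] for s in "01")
        for j, rep in enumerate(states):
            if max(abs(a - b) for a, b in zip(dist, rep)) < tol:
                weights[j] += n   # merge into an existing causal state
                break
        else:
            states.append(dist)   # open a new causal state
            weights.append(n)

    total = sum(weights)
    return -sum(n / total * math.log2(n / total) for n in weights)
```

A constant string collapses to a single causal state (zero complexity), while a period-two pattern needs two equally occupied states (one bit).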

B: Quantum Models

Quantum models are based on having a set of non-orthogonal memory states $\{|\sigma_j\rangle\}$ in one-to-one correspondence with the causal states $\{S_j\}$. These quantum memory states are constructed to satisfy

$U |\sigma_j\rangle |0\rangle = \sum_x \sqrt{P(x|S_j)}\, |\sigma_{\lambda(x,j)}\rangle |x\rangle$ (S2)

for some suitable unitary operator $U$ [4, 30]. Here, $P(x|S_j)$ is the probability of output $x$ given the past is in causal state $S_j$, and $\lambda(x,j)$ is a deterministic update function that updates the memory state to that corresponding to the causal state of the updated past. Sequential application of $U$ then replicates the desired statistics (see Fig. S2).

Then, $\rho = \sum_j \pi_j |\sigma_j\rangle\langle\sigma_j|$ is the steady-state of the quantum model’s memory, and the quantum statistical memory is given by the von Neumann entropy of this state:

$C_q = -\mathrm{Tr}(\rho \log_2 \rho).$ (S3)
Figure S2: A quantum model consists of a unitary operator $U$ acting on a memory state $|\sigma_j\rangle$ and blank ancilla $|0\rangle$. Measurement of the ancilla produces the output symbol, with the statistics of the modelled process realised through the measurement statistics.
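For intuition, $C_q$ can be computed in closed form for simple processes. The snippet below uses a perturbed coin, a standard two-state example not taken from this paper; because the emitted symbol alone fixes the next causal state there, the memory states can be written directly as $|\sigma_j\rangle = \sum_x \sqrt{P(x|S_j)}\,|x\rangle$:

```python
import numpy as np

p = 0.2                          # perturbation (flip) probability
P = np.array([[1 - p, p],        # P(x | S_0)
              [p, 1 - p]])       # P(x | S_1)
sigma = np.sqrt(P)               # rows are the memory states |sigma_j>

pi = np.array([0.5, 0.5])        # steady-state over the causal states
rho = sum(pi[j] * np.outer(sigma[j], sigma[j]) for j in range(2))

# Quantum statistical memory: von Neumann entropy of the steady-state rho.
eigvals = np.linalg.eigvalsh(rho)
Cq = -sum(lam * np.log2(lam) for lam in eigvals if lam > 1e-12)
Cmu = -np.sum(pi * np.log2(pi))  # classical statistical complexity (1 bit)
```

The non-orthogonal overlap $\langle\sigma_0|\sigma_1\rangle = 2\sqrt{p(1-p)}$ is what pushes $C_q$ below $C_\mu$.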

C: Quantum Inference Protocol

A quantum model can be systematically constructed from the $\epsilon$-machine of a process, and so a quantum model can be inferred from data by first inferring the $\epsilon$-machine. However, the quantum model will then inherit errors associated with the classical inference method, such as erroneous pairing/separation of pasts into causal states (due to, e.g., the $\chi^2$ test in sub-tree reconstruction). For this reason, a quantum-specific inference protocol was recently developed [23] that bypasses the need to first construct an $\epsilon$-machine, thus circumventing some of these errors. Moreover, it offers a means to infer the quantum statistical memory of a quantum model without explicitly constructing said model.

It functions by scanning through the stochastic process in moving windows of size $L + 1$, in order to estimate the probabilities $P(x_{0:L+1})$, from which the marginal and conditional distributions $P(x_{0:L})$ and $P(x_L | x_{0:L})$ can be determined. From these, we construct a set of inferred quantum memory states $\{|\varsigma_{x_{0:L}}\rangle\}$, satisfying

$U |\varsigma_{x_{0:L}}\rangle |0\rangle = \sum_x \sqrt{P(x | x_{0:L})}\, |\varsigma_{x_{1:L} x}\rangle |x\rangle$ (S4)

for some suitable unitary operator $U$. When $L$ is greater than or equal to the Markov order of the process, and the probabilities used are exact, this recovers the same quantum memory states as the exact quantum model of Eq. (S2), where the quantum memory states associated to two different pasts are identical iff the pasts belong to the same causal state. Otherwise, if $L$ is sufficiently long to provide a ‘good enough’ proxy for the Markov order, and the data stream is long enough for accurate estimation of the $(L+1)$-length sequence probabilities, then the quantum model will still be a strong approximation with a similar memory cost. From the steady-state of these inferred quantum memory states, the quantum statistical memory can be inferred [23].

However, the explicit quantum model need not be constructed as part of the inference of the quantum statistical memory. The spectrum of the quantum model steady-state is identical to that of its Gram matrix [24]. For the inferred quantum model, this Gram matrix is given by

$G_{x_{0:L} x'_{0:L}} = \sqrt{P(x_{0:L}) P(x'_{0:L})} \sum_{x''_{0:L}} \sqrt{P(x''_{0:L} | x_{0:L})\, P(x''_{0:L} | x'_{0:L})}.$ (S5)

The associated conditional probabilities can either be estimated by compiling the $P(x_{0:L})$, using $L$ as a proxy for the Markov order, or directly by frequency counting of strings of length $2L$ in the data stream. Then, the quantum inference protocol yields an estimated quantum statistical memory $\hat{C}_q$:

$\hat{C}_q = -\mathrm{Tr}(G \log_2 G).$ (S6)
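This estimation pipeline, from raw symbol string to $\hat{C}_q$ via the Gram matrix, can be sketched with frequency counts of length-$2L$ strings and $L$ as the Markov-order proxy. The function name and implementation details are assumptions for illustration:

```python
import numpy as np
from collections import Counter

def estimate_Cq(sequence, L):
    """Estimate the quantum statistical memory of a binary string: count
    length-2L windows, split each into a past and a future of length L,
    build the Gram matrix of the inferred memory states, and take the
    entropy of its spectrum."""
    joint = Counter(sequence[i:i + 2 * L]
                    for i in range(len(sequence) - 2 * L + 1))
    total = sum(joint.values())
    pasts = sorted({w[:L] for w in joint})
    futures = sorted({w[L:] for w in joint})

    P_past = {w: 0.0 for w in pasts}
    P_joint = {(w, f): 0.0 for w in pasts for f in futures}
    for w, n in joint.items():
        P_past[w[:L]] += n / total
        P_joint[(w[:L], w[L:])] += n / total

    # G_ww' = sqrt(P(w) P(w')) * sum_f sqrt(P(f|w) P(f|w'))
    G = np.zeros((len(pasts), len(pasts)))
    for a, w in enumerate(pasts):
        for b, v in enumerate(pasts):
            overlap = sum(np.sqrt((P_joint[(w, f)] / P_past[w])
                                  * (P_joint[(v, f)] / P_past[v]))
                          for f in futures)
            G[a, b] = np.sqrt(P_past[w] * P_past[v]) * overlap
    lams = np.linalg.eigvalsh(G)
    return -sum(l * np.log2(l) for l in lams if l > 1e-12)
```

A constant string gives zero memory; a period-two string needs one bit, since its two pasts have orthogonal (non-overlapping) futures.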

D: Methodology (Extended)

In this work, we study finite-width analogues of ECA. To avoid boundary effects from the edges of the ECA, to obtain an ECA state of width $N$ for up to $T$ timesteps we generate an extended ECA of width $N + 2T$ with periodic boundary conditions and keep only the centremost $N$ cells; this is equivalent to generating a width-$N$ ECA with open boundaries (see Fig. S3). Note, however, that the choice of boundary condition showed little quantitative effect upon our results.

Figure S3: Generation of finite-width ECA evolution with open boundaries via extended ECA with periodic boundaries.
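The generation scheme above can be sketched as follows: evolve an extended lattice with periodic boundaries and keep only the centremost cells, which the wrap-around seam (whose influence travels at most one cell per step) can never reach within the simulated time. `evolve_eca` is a hypothetical name and the implementation an illustrative sketch:

```python
import numpy as np

def evolve_eca(rule, width, steps, seed=0):
    """Evolve an ECA rule on a (width + 2*steps)-cell periodic lattice from
    a uniformly random initial state, returning only the centremost `width`
    cells of each row; these are boundary-effect-free for `steps` steps."""
    # Wolfram rule lookup: bit k of `rule` is the output for the
    # neighbourhood whose (left, centre, right) cells encode k in binary.
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    pad = steps
    rng = np.random.default_rng(seed)
    row = rng.integers(0, 2, width + 2 * pad).astype(np.uint8)
    history = [row[pad:pad + width].copy()]
    for _ in range(steps):
        # Encode each (left, centre, right) neighbourhood as a 3-bit index.
        idx = 4 * np.roll(row, 1) + 2 * row + np.roll(row, -1)
        row = table[idx]
        history.append(row[pad:pad + width].copy())
    return np.array(history)
```

Each row of the returned array is one timestep; rule 0 blanks the lattice after one step, while rule 204 (the identity rule) leaves every row unchanged.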

The state of an ECA can be interpreted as a stochastic pattern. That is, given an ECA at time $t$, we can interpret its state as a finite string of outcomes from a stochastic process with the same alphabet. We can then apply the tools of computational mechanics to this finite string, inferring the classical statistical complexity $C_\mu$ through the sub-tree reconstruction algorithm, and the quantum statistical memory $C_q$ from the quantum inference protocol. For both inference methods we use the same window size $L$, as little qualitative difference was found using larger $L$ (see Fig. S6). For the sub-tree reconstruction we set the tolerance of the $\chi^2$ test to 0.05.

We apply the inference methods to ECA states of width $N$. For each ECA rule we generate an initial state where each cell is randomly assigned 0 or 1 with equal probability, and then evolve for $T$ steps (in Fig. S6 we analyse select rules for longer times, finding that the qualitative features of interest are already captured at the shorter number of timesteps). Note that this is many orders of magnitude smaller than the time for which a typical finite-width ECA is guaranteed to cycle through already-visited states [17]. We then apply the inference methods to the states at a subsample of timesteps; evaluating at every timestep shows little qualitative difference beyond highlighting the short periodicity of some Class II rules. We repeat five times for each rule, and plot the mean and standard deviation of $C_\mu$ and $C_q$ (see Fig. S4 and Fig. S5).

Figure S4: Evolution of $C_\mu$ and $C_q$ for all Wolfram Class I and II rules. Lines indicate mean values over five different initial random states, and the translucent shading surrounding them the standard deviation.
Figure S5: Evolution of $C_\mu$ and $C_q$ for all Wolfram Class III and IV rules. Rules are placed on a simplicity-complexity spectrum according to the growth of $C_q$. Lines indicate mean values over five different initial random states, and the translucent shading surrounding them the standard deviation.

E: Longer $T$, larger $L$

Here [Fig. S6] we present plots supporting that our choices of $T$ and $L$ appear to be sufficiently large to capture the qualitative features of interest, showing little difference when they are extended.

Figure S6: $C_q$ plots for a selection of rules with longer $T$ and larger $L$.

The exception to this is Rule 110, which appears to plateau at longer times. We believe this to be attributable to the finite width of the ECA studied: as there are a finite number of gliders generated by the initial configuration, over time, as the gliders annihilate, there will be fewer of them to interact and propagate further correlations. This is illustrated in Fig. S7.

Figure S7: Over time, the finite number of gliders present in the initial configuration of a finite-width Rule 110 ECA will disappear due to annihilation with other gliders. At longer times, there are then fewer gliders to propagate longer-range correlations across the ECA state.

F: Rule 18 and kinks

Within the dynamics of Rule 18 (Wolfram Class III), a phenomenon referred to as ‘kinks’ has been discovered [16]. These kinks are identified with the presence of two adjacent black cells in the ECA state, and have been shown capable of undergoing random walks [13] and annihilating when they meet. The kinks can be seen by applying a filter to the dynamics of Rule 18, replacing all cells by white, with the exception of adjacent pairs of black cells (see Fig. S8). The movement and interaction of kinks is reminiscent of a glider system like that of Rules 54 and 110; though, while information may be encoded into these kinks, they are noisy due to their seemingly random motion.

Figure S8: Rule 18 with and without a filter. The filter indicates the location of the kinks.
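The filter itself reduces to masking every cell that is not part of an adjacent pair of black cells. A minimal sketch, assuming a periodic row and `kink_filter` as a hypothetical name:

```python
import numpy as np

def kink_filter(row):
    """Replace every cell by white (0) unless it is a black (1) cell with
    at least one black neighbour, exposing the '11' kinks of Rule 18.
    The row is treated as periodic, so the two ends count as neighbours."""
    row = np.asarray(row)
    left = np.roll(row, 1)
    right = np.roll(row, -1)
    return row & (left | right)
```

Applied to each timestep of a Rule 18 evolution, this leaves only the kinks visible, as in the filtered panel of Fig. S8.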