I Introduction
Complex processes are prevalent throughout the world, taking the form of natural processes such as the weather Lynch (2008); Bauer et al. (2015) and DNA sequences Pavlos et al. (2015), as well as artificial processes like the stock market Preis et al. (2012) and traffic Kerner and Rehborn (1996). We construct models of these processes in order to better understand their structure and predict their behaviour. Within complexity science, the field of computational mechanics Crutchfield and Young (1989); Shalizi (2001); Crutchfield (2011) offers a systematic approach to understanding the intrinsic computation of a process by identifying the causal links between its past and future, and has been used to study a diverse set of dynamics such as deterministic chaos in the logistic map Crutchfield and Young (1989); Crutchfield (2011), cellular automata Hanson and Crutchfield (1997), the dripping faucet experiment Gonçalves et al. (1998), stock markets Park et al. (2007), and neural spike trains Haslinger et al. (2010). A key component of the approach is the so-called ε-machine, which as a valuable byproduct represents the most parsimonious causal model of a process.
In recent decades, the prospect of using quantum effects in information processing has emerged, promising advantages for a range of applications in terms of algorithmic speedups Montanaro (2016), secure communication Bennett and Brassard (2014), and beyond. Stochastic modelling is no exception to this, and a growing body of work has established that when information is encoded into a quantum memory, causal models of a stochastic process can be designed that function whilst retaining less information about the past than is classically possible Gu et al. (2012); Mahoney et al. (2016); Riechers et al. (2016); Thompson et al. (2017); Elliott and Gu (2018); Binder et al. (2018); Elliott et al. (2019a); Liu et al. (2019). This quantum memory advantage can grow unbounded Garner et al. (2017); Aghamohammadi et al. (2017a); Elliott and Gu (2018); Elliott et al. (2019a); Thompson et al. (2018); Elliott et al. (2019b), and has been verified experimentally Palsson et al. (2017); Jouneghani et al. (2017); Ghafari et al. (2019). Like its classical counterpart, the amount of information stored within these quantum models has been suggested as a measure of structural complexity in stochastic dynamics Tan et al. (2014); Suen et al. (2017); Aghamohammadi et al. (2017b); Suen et al. (2018).
Currently, systematic approaches to constructing such quantum models are predicated on having a prior exact statistical description of the process, or knowledge of its ε-machine. As a result, to apply these tools to real-world systems we must first use classical inference protocols to construct an ε-machine Crutchfield and Young (1989); Shalizi and Klinker (2004); Strelioff and Crutchfield (2014), and then use this as a basis to construct a corresponding quantum model. It is desirable to instead have a model inference protocol that goes directly from data to the quantum model, avoiding any extra computational overhead associated with also determining the classical model. In this vein, here we introduce such a protocol for directly inferring the memory structure of a quantum model of a stochastic process – which we show is robust to statistical noise. The protocol is tailored specifically for quantum models, taking advantage of certain of their features that allow some approximations that must be made in classical information processing to be avoided. Fig. 1 provides a schematic of our motivation.
The layout of this article is as follows. In Section II we outline the general framework of stochastic processes and computational mechanics as is relevant here, as well as the more efficient quantum models. Section III provides the core of our results, introducing the inference protocol, showing its robustness to statistical fluctuations, and justifying its accuracy. The efficacy of our inference protocol is then demonstrated in practice with two toy processes in Section IV. Finally, we conclude in Section V, and discuss some future directions.
II Framework
II.1 Stochastic processes
We consider discrete-time stochastic processes, represented by a bi-infinite probabilistic string of outcomes $\ldots x_{-1}x_0x_1\ldots$, where the $x_t$ are the values taken by random variables $X_t$, drawn from an alphabet $\mathcal{A}$; the subscript $t$ represents the timestep. Consecutive strings $X_{t:t'} = X_t X_{t+1}\ldots X_{t'-1}$ are called words, with the left index inclusive and the right exclusive. We consider stationary processes, such that $P(X_{t:t'}) = P(X_{t+\tau:t'+\tau})$ for all integer $\tau$. We partition the process into (semi-infinite) pasts and futures, denoted $\overleftarrow{X} = X_{-\infty:0}$ and $\overrightarrow{X} = X_{0:\infty}$ respectively, where $t=0$ is taken to be the present. The Markov order is an important property of a process that defines an effective history length; a process is said to have Markov order $R$ if $R$ is the smallest value such that $P(X_0|X_{-\infty:0}) = P(X_0|X_{-R:0})$ is satisfied Racca et al. (2007). That is, it is the smallest block length of the most recent past that provides a sufficient statistic for the future. When $R=1$ the process is said to be Markovian.
II.2 Models
Computational mechanics. Computational mechanics Crutchfield and Young (1989); Shalizi (2001); Crutchfield (2011) provides a formal statistical framework for identifying and analysing structure in complex processes. Its modus operandi involves the minimal causal representation of stochastic processes (here, by causal we mean that the representation stores no information about the future of the process that could not be deduced from past observations), which may be determined by a systematic clustering of pasts. Specifically, the causal states of a process are a set of equivalence classes on the pasts, defined according to the relation
$$\overleftarrow{x} \sim_\varepsilon \overleftarrow{x}' \iff P(\overrightarrow{X}|\overleftarrow{X}=\overleftarrow{x}) = P(\overrightarrow{X}|\overleftarrow{X}=\overleftarrow{x}'); \qquad (1)$$
that is, two pasts belong to the same causal state iff they give rise to statistically identical futures. We label the causal states as $\{S_j\}$.
Because of the deterministic assignment of pasts to causal states, it can be seen that transitions between causal states are also deterministic conditional on the output symbol; that is, given a past $\overleftarrow{x}$ belonging to causal state $S_j$, upon emission of the next symbol $x$ the new past $\overleftarrow{x}x$ must belong to causal state $S_{\lambda(x,j)}$, where $\lambda(x,j)$ is a deterministic update function. This deterministic transition structure is sometimes referred to as unifilarity, and allows us to represent the process as a deterministic edge-emitting hidden Markov model (HMM) known as the ε-machine, where the causal states form the hidden states of the model, and the edge emissions the observed symbols Crutchfield and Young (1989); Shalizi (2001). The amount of information stored by the ε-machine can be quantified by the Shannon entropy of the stationary distribution over the causal states:
$$C_\mu = -\sum_j \pi_j \log_2 \pi_j, \qquad (2)$$
where $\pi_j = P(S_j)$. Across all (classical) causal representations of a process, the ε-machine minimises the information cost of its corresponding memory states, and it is in this sense that we refer to it as being minimal (or optimal). Because of this distinguished feature, $C_\mu$ is called the statistical complexity, and is considered a quantifier of structure in the process Crutchfield (2011), in some sense representing how much information about the past is needed to produce the future. This quantity is lower bounded by the mutual information between the past and future Shalizi (2001); in general this bound is not tight, and the difference is referred to as the modelling overhead Mahoney et al. (2011).
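Eq. (2) is straightforward to evaluate given a stationary distribution over causal states; a brief sketch (ours, with an arbitrary illustrative distribution not taken from the text):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a probability vector, as in Eq. (2)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 log 0 is taken as 0
    return float(-np.sum(p * np.log2(p)))

# Hypothetical stationary distribution over three causal states:
pi = [0.5, 0.25, 0.25]
C_mu = shannon_entropy(pi)
print(f"C_mu = {C_mu} bits")  # 0.5*1 + 0.25*2 + 0.25*2 = 1.5 bits
```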
When dealing with raw data, one must estimate the probabilities through inference, which will be subject to unavoidable statistical fluctuations due to the finite amount of data. As such, when applying the equivalence relation Eq. (1) a threshold tolerance must be permitted, where pasts are assigned to the same causal state if their conditional future distributions are ‘close enough’ Crutchfield and Young (1989) (there is no fixed definition of how this tolerance should be implemented, but typically it would be appropriate to use some form of statistical distance between the conditional distributions, with some maximal allowed distance for merging). Adjusting the strictness of this tolerance induces different levels of coarse-graining: if too narrow, the fluctuations will lead to additional spurious causal states that would have been merged given knowledge of the exact distributions; if too loose, pasts with different conditional futures can be merged. Depending on this degree of coarse-graining, the obtained value of the estimated statistical complexity will vary; the statistical complexity is sensitive to statistical fluctuations, and is generally not robust to noise.

Quantum computational mechanics. When considering quantum methods of information processing, the minimality of ε-machines no longer holds; it has been shown that causal quantum models can be found with lower information costs Gu et al. (2012), using non-orthogonal memory states to reduce the modelling overhead. The current state-of-the-art quantum models Liu et al. (2019) are based on unitary interactions between the memory subsystem and a probe ancilla:
$$U|\sigma_j\rangle|0\rangle = \sum_x \sqrt{P(x|j)}\, e^{i\varphi_{xj}}\, |\sigma_{\lambda(x,j)}\rangle|x\rangle, \qquad (3)$$
where the first subspace contains the memory and the second the probe (measured after the interaction to produce the symbol for that timestep), the $|\sigma_j\rangle$ are the quantum memory states (in one-to-one correspondence with the causal states $S_j$), and the phase factors $\varphi_{xj}$ are tunable parameters. Successive applications of $U$ on the quantum memory state and probe at each timestep will yield a string of outputs from the probe measurements that are statistically distributed according to the modelled process (after each timestep, the probe ancilla is either reset, or a fresh ancilla is introduced), as depicted in Fig. 2.
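A single timestep of Eq. (3) can be sketched numerically. The following sketch (ours) encodes a golden-mean-type process whose ε-machine has state A emitting 0 or 1 equiprobably and state B always emitting 1; it assumes orthogonal memory states purely for simplicity, which the general construction does not:

```python
import numpy as np

P = np.array([[0.5, 0.5],   # P(x | A)
              [0.0, 1.0]])  # P(x | B): always emits 1
next_state = {(0, 0): 1, (1, 0): 0, (1, 1): 0}  # unifilar update λ(x, j)

def step(j, dim_m=2, dim_p=2):
    """Joint memory ⊗ probe state after one application of the phaseless U."""
    psi = np.zeros(dim_m * dim_p)
    for x in range(dim_p):
        if P[j, x] > 0:
            # amplitude sqrt(P(x|j)) on |σ_{λ(x,j)}⟩|x⟩, all phases zero
            psi[next_state[(x, j)] * dim_p + x] = np.sqrt(P[j, x])
    return psi

psi = step(0)
# Born-rule probabilities of the probe outcomes reproduce P(x | A):
probs = [float(np.sum(psi.reshape(2, 2)[:, x] ** 2)) for x in range(2)]
print(probs)
```

Measuring the probe and iterating reproduces the process statistics, as described in the text.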
The corresponding memory cost is called the quantum statistical memory, given by the von Neumann entropy of the quantum memory states:
$$C_q = -\mathrm{Tr}\left(\rho \log_2 \rho\right), \qquad (4)$$
where $\rho = \sum_j \pi_j |\sigma_j\rangle\langle\sigma_j|$. The title of quantum statistical complexity is reserved for the minimum of this quantity over all causal quantum models; however, there is as yet no systematic approach to finding this minimal model, and optimising over the phase factors is a cumbersome task. For this reason, we shall here use the best quantum models for which a systematic construction method is known: phaseless unitary quantum models Binder et al. (2018), given by Eq. (3) with all phase factors set to zero. Despite not generally being minimal, the corresponding quantum statistical memory of these models has still been suggested as a quantifier of structure Gu et al. (2012); Tan et al. (2014); Suen et al. (2017); Aghamohammadi et al. (2017b); Suen et al. (2018), often emphasising different features to the classical statistical complexity.
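For a concrete comparison of Eqs. (2) and (4), consider a sketch (ours) for a golden-mean-type process, using a stationary distribution of (2/3, 1/3) over two causal states and a memory-state overlap of $1/\sqrt{2}$ as illustrative values obtained from the overlap recursion of Binder et al. (2018):

```python
import numpy as np

def entropy_bits(spectrum):
    """Shannon/von Neumann entropy (bits) of a probability spectrum."""
    p = np.asarray(spectrum, dtype=float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

pi = np.array([2 / 3, 1 / 3])          # stationary distribution over causal states
c = 1 / np.sqrt(2)                     # overlap <σ_A|σ_B> (illustrative value)
G = np.array([[pi[0], np.sqrt(pi[0] * pi[1]) * c],
              [np.sqrt(pi[0] * pi[1]) * c, pi[1]]])  # Gram matrix of the memory

C_mu = entropy_bits(pi)                    # classical statistical complexity, Eq. (2)
C_q = entropy_bits(np.linalg.eigvalsh(G))  # quantum statistical memory, Eq. (4)
print(f"C_mu = {C_mu:.4f} bits, C_q = {C_q:.4f} bits")
```

The non-orthogonality of the memory states makes $C_q < C_\mu$, the quantum memory advantage discussed above.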
III Inference protocol
We here introduce an inference protocol for the quantum statistical memory $\tilde{C}_q$ (we use tildes to represent estimated quantities) of the phaseless unitary quantum models that can be used to investigate structure in time-series data. The protocol is tailored specifically to take advantage of features of this model, bypassing the need to construct the ε-machine as an intermediate step. It is agnostic to the causal architecture of the process, and requires only an estimate of the Markov order.
The inference protocol is based on a set of postulated quantum memory states given by clustering pasts in which the last $L$ symbols are identical; we thus have a memory state $|\varsigma_{x_{-L:0}}\rangle$ for each of the $|\mathcal{A}|^L$ possible length-$L$ words $x_{-L:0}$. The choice of $L$ should correspond to the estimated Markov order of the process (or at least, the effective Markov order – see below). From the data we then estimate the conditional probabilities $P(x_0|x_{-L:0})$, and implicitly define the quantum memory states to satisfy the interaction
$$U|\varsigma_{x_{-L:0}}\rangle|0\rangle = \sum_{x_0} \sqrt{P(x_0|x_{-L:0})}\, |\varsigma_{x_{-L+1:1}}\rangle|x_0\rangle. \qquad (5)$$
A set of quantum memory states and a corresponding interaction can be found through a recursive expression for the overlaps of the states, employing a reverse Gram–Schmidt procedure Binder et al. (2018). The estimated quantum statistical memory is then given by the von Neumann entropy of the corresponding stationary state of the memory:
$$\tilde{C}_q = -\mathrm{Tr}\left(\tilde{\rho} \log_2 \tilde{\rho}\right), \qquad (6)$$
where $\tilde{\rho} = \sum_{x_{-L:0}} P(x_{-L:0})\, |\varsigma_{x_{-L:0}}\rangle\langle\varsigma_{x_{-L:0}}|$.
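The complete pipeline of Eqs. (5)–(6) – estimate word statistics, build the Gram matrix of the postulated memory states, take the entropy of its spectrum – can be sketched as follows (our illustrative implementation; `estimate_Cq` and the sampled process are our own constructions, and the overlaps are computed from length-$L$ futures as justified in Section III.2):

```python
import numpy as np

def estimate_Cq(seq, L):
    """Estimate the quantum statistical memory of Eq. (6) from a symbol sequence.

    Postulates one memory state per observed length-L word, builds the Gram
    matrix from empirical word statistics, and returns the von Neumann entropy
    of its spectrum."""
    n = len(seq) - 2 * L
    counts = {}
    for i in range(n):
        w, f = tuple(seq[i:i + L]), tuple(seq[i + L:i + 2 * L])
        counts[(w, f)] = counts.get((w, f), 0) + 1
    words = sorted({w for w, _ in counts})
    futures = sorted({f for _, f in counts})
    nw = {w: sum(c for (w2, _), c in counts.items() if w2 == w) for w in words}
    G = np.zeros((len(words), len(words)))
    for a, w in enumerate(words):
        for b, v in enumerate(words):
            # overlap <ς_w|ς_v> = Bhattacharyya coefficient of length-L futures
            overlap = sum(np.sqrt((counts.get((w, f), 0) / nw[w])
                                  * (counts.get((v, f), 0) / nw[v]))
                          for f in futures)
            G[a, b] = np.sqrt((nw[w] / n) * (nw[v] / n)) * overlap
    ev = np.linalg.eigvalsh(G)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# Sample a golden-mean-type process (state A emits 0/1 equiprobably, 0 -> B;
# state B always emits 1 and returns to A):
rng = np.random.default_rng(1)
seq, state = [], 0
for _ in range(100_000):
    if state == 0:
        x = int(rng.random() < 0.5)
        state = 0 if x else 1
    else:
        x, state = 1, 0
    seq.append(x)

Cq = estimate_Cq(seq, L=1)
print(f"estimated Cq = {Cq:.3f} bits")
```

For this process the Markov order is 1, so $L=1$ already suffices; larger $L$ should give compatible estimates thanks to the self-merging property shown below.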
To show that this protocol provides a faithful estimate of the quantum statistical memory, we first prove two properties of the above construction:

A. Self-merging of quantum memory states. We show that when $L$ is at least as large as the Markov order, the overlap of quantum memory states assigned to different pasts in the same causal state is unity when exact probabilities are used.

B. Robustness of quantum statistical memory. We show that the quantum statistical memory of the phaseless unitary quantum model is insensitive to small perturbations in the probabilities.

With these properties we can then discuss the accuracy of the inference protocol:

C. Inference protocol. We indicate how the accuracy of our estimate of the quantum statistical memory scales with the amount of data, and how it converges for sufficiently large data streams.
III.1 Self-merging of quantum memory states
We first show that the blind construction Eq. (5) will automatically adopt the causal architecture of the process (i.e., that pasts belonging to the same causal state are assigned to the same memory state) without explicit need to apply the causal equivalence relation Elliott et al. (2019a), provided that exact probabilities are used, and the chosen $L$ is at least as large as the Markov order of the process. That is, the quantum memory states we construct will correspond to the same states as would be obtained from the phaseless form of the model Eq. (3), but without prior knowledge of how the pasts are clustered into causal states. In turn, this means the blind construction will faithfully replicate the process, with the same quantum statistical memory.
To see this, let $R$ denote the Markov order of the process, and recall that this means $P(X_0|X_{-\infty:0}) = P(X_0|X_{-R:0})$. Since the Markov order can alternatively be expressed as the longest history length needed to determine the causal state, all pasts where the latest $R$ symbols are identical belong to the same causal state Mahoney et al. (2016). We can see that if $L \geq R$, the construction already correctly merges all pasts where the latest $L$ symbols are identical.
By analogy with the corresponding methods for phaseless unitary models Binder et al. (2018), we can express

$$\langle\varsigma_{x_{-L:0}}|\varsigma_{x'_{-L:0}}\rangle = \sum_{x_0} \sqrt{P(x_0|x_{-L:0})\,P(x_0|x'_{-L:0})}\, \langle\varsigma_{x_{-L+1:1}}|\varsigma_{x'_{-L+1:1}}\rangle. \qquad (7)$$

Iteratively applying this relation $\ell$ times, we obtain

$$\langle\varsigma_{x_{-L:0}}|\varsigma_{x'_{-L:0}}\rangle = \sum_{x_{0:\ell}} \sqrt{P(x_{0:\ell}|x_{-L:0})\,P(x_{0:\ell}|x'_{-L:0})}\, \langle\varsigma_{u_\ell}|\varsigma_{u'_\ell}\rangle, \qquad (8)$$

where $u_\ell$ ($u'_\ell$) denotes the last $L$ symbols of the concatenated string $x_{-L:0}x_{0:\ell}$ ($x'_{-L:0}x_{0:\ell}$). If $\ell = L$ then $u_L = u'_L = x_{0:L}$, the final overlap is unity, and we then have

$$\langle\varsigma_{x_{-L:0}}|\varsigma_{x'_{-L:0}}\rangle = \sum_{x_{0:L}} \sqrt{P(x_{0:L}|\overleftarrow{x})\,P(x_{0:L}|\overleftarrow{x}')}, \qquad (9)$$

where (using $L \geq R$) the full pasts $\overleftarrow{x}$ and $\overleftarrow{x}'$ can be taken as any pasts with the correct corresponding last $L$ symbols. We are thus able to conclude that

$$\langle\varsigma_{x_{-L:0}}|\varsigma_{x'_{-L:0}}\rangle = 1 \iff P(\overrightarrow{X}|\overleftarrow{X}=\overleftarrow{x}) = P(\overrightarrow{X}|\overleftarrow{X}=\overleftarrow{x}'), \qquad (10)$$

which can be seen as an instantiation of the causal equivalence relation, i.e., two pasts are mapped to the same memory state iff they have the same conditional future statistics.
III.2 Robustness of quantum statistical memory
We next show that the quantum statistical memory of our construction is robust to small perturbations of the probabilities. Consider the mapping

$$P(x_{-L:L}) \to P(x_{-L:L})\left(1 + \epsilon\,\eta_{x_{-L:L}}\right), \qquad (11)$$

where $\eta_{x_{-L:L}}$ governs the relative changes in the distribution for each string, and $\epsilon$ the strength of the perturbation. We here outline a proof that the perturbation to $\tilde{C}_q$ scales smoothly with $\epsilon$; full details may be found in Appendix A.
The Gram matrix of a quantum state $\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|$ is defined as $G_{jk} = \sqrt{p_j p_k}\,\langle\psi_j|\psi_k\rangle$, and can be shown to have the same spectrum as $\rho$ Jozsa (2000); Horn and Johnson (2012); Riechers et al. (2016). As such, it is possible to define the Gram matrix of our construction as

$$G_{x_{-L:0},\,x'_{-L:0}} = \sqrt{P(x_{-L:0})\,P(x'_{-L:0})}\, \langle\varsigma_{x_{-L:0}}|\varsigma_{x'_{-L:0}}\rangle, \qquad (12)$$
and correspondingly, from its spectrum, calculate $\tilde{C}_q$.
Consider that we have $L \geq R$, such that it is possible to express the overlaps of the quantum memory states as

$$\langle\varsigma_{x_{-L:0}}|\varsigma_{x'_{-L:0}}\rangle = \sum_{x_{0:L}} \sqrt{P(x_{0:L}|x_{-L:0})\,P(x_{0:L}|x'_{-L:0})}. \qquad (13)$$
Note that we need only consider $L$ steps into the future, as this uniquely determines the subsequent memory state, independent of the past. Now replace each of the probability distributions in this expression by their corresponding perturbed forms, which may be obtained from the marginals of Eq. (11). We can then calculate the perturbed form of the corresponding Gram matrix using this expression for the overlaps, and show that its spectrum varies smoothly with $\epsilon$. Since the von Neumann entropy is a continuous function of the spectrum of a state Nielsen and Chuang (2010), we thus find that it too smoothly deforms with $\epsilon$.

Hence, we can conclude that the quantum statistical memory is robust to small perturbations in the probability distributions. Due to the self-merging of our quantum memory states, we can also see that the quantum statistical memory of phaseless unitary quantum models is similarly robust in general. This is in contrast to the classical statistical complexity, which can vary discontinuously with the probabilities – notably, whenever the perturbation triggers a new merging of pasts into a causal state, or conversely, the splitting of a causal state.
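The fact on which this argument rests – that the Gram matrix shares its spectrum with the density matrix – is easy to verify numerically. A quick sketch of ours with random non-orthogonal states:

```python
import numpy as np

rng = np.random.default_rng(2)
# Three random (generally non-orthogonal) pure states in a 4-dimensional space:
psis = rng.normal(size=(3, 4))
psis /= np.linalg.norm(psis, axis=1, keepdims=True)
p = rng.dirichlet(np.ones(3))  # mixing weights

rho = sum(pi * np.outer(v, v) for pi, v in zip(p, psis))  # density matrix (4x4)
G = np.sqrt(np.outer(p, p)) * (psis @ psis.T)             # Gram matrix (3x3)

# The three eigenvalues of G match the nonzero spectrum of rho:
ev_rho = np.sort(np.linalg.eigvalsh(rho))[::-1][:3]
ev_G = np.sort(np.linalg.eigvalsh(G))[::-1]
print(np.allclose(ev_rho, ev_G))
```

This is the familiar fact that $AA^\dagger$ and $A^\dagger A$ share their nonzero spectrum, with $A$ having rows $\sqrt{p_j}\,\psi_j$.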
III.3 Inference protocol
With these two results in hand, we are now in a position to argue the accuracy of our inference protocol. By parsing the data and adopting a frequentist approach to estimate the marginal distribution of words of length $2L$ (for some chosen $L$), we are able to estimate the conditional probabilities needed for Eq. (5). Moreover, we can use these marginals to construct estimates for the probabilities needed to evaluate Eq. (12) (the conditional probabilities can be obtained from the marginals, under the assumption that $L$ is at least as large as the Markov order), and in turn, estimate $\tilde{C}_q$. From the results of the previous two subsections, we can be assured that this should be a faithful estimate, provided that the estimated marginals are close to the exact distributions, and $L \geq R$.
Let us first consider the error in our estimates of the marginals. With any inference from finite data there will of course be statistical fluctuations; this is not problematic for our inference protocol provided that these fluctuations are sufficiently small. These fluctuations can be treated in the same manner as the perturbations of the previous section, but with the perturbation terms now being stochastic variables. In Appendix B we show that the size of the error in the inferred Gram matrix, and hence in the estimated quantum statistical memory, approximately scales as $|\mathcal{A}|^L/\sqrt{N}$, where $N$ is the size of the data stream considered.
The next question is how we determine the choice of $L$. From the above, we see that taking too large an $L$ will lead to untenably large fluctuations. We must therefore cap it at some value for which $|\mathcal{A}|^L \ll \sqrt{N}$; as a rough guideline we suggest $L \lesssim \log_{|\mathcal{A}|}(N)/2$. On the other hand, we require $L$ to be large enough that it matches or exceeds the Markov order of the process, in order to effect the (approximate) self-merging of quantum memory states.
When a process has a large, or even infinite, Markov order, it may not be possible to have an $L$ that satisfies both of these requirements. Nevertheless, while the Markov order tells us how far back into the past memory effects can persist, it does not inform us how strong they are. It is often the case that these long-range historical dependencies have only minor influence on the future, and that the recent past is much more relevant. In such instances, the influence of the distant past can be thought of as a small perturbation to the statistics with respect to the recent past alone, and so from the previous subsection we can expect that it has minimal impact on the requisite quantum statistical memory. We therefore introduce the concept of an effective Markov order $R_{\mathrm{eff}}$, which encapsulates the idea that a sufficiently long string of past observations that is shorter than the Markov order may nevertheless still be ‘good enough’ to capture most of the predictive information contained in the past.
We define the effective Markov order as the smallest length of a string of past observations for which the influence of considering an additional symbol one step further into the past does not exceed some threshold. Specifically, we define $R_{\mathrm{eff}}$ as the smallest integer $\ell$ that satisfies

$$\mathbb{E}\left[D\!\left(P(X_0|x_{-\ell:0}),\,P(X_0|x_{-\ell-1:0})\right)\right] \leq \theta, \qquad (14)$$

where $\theta$ is the parameter defining the threshold (strictly, we have a family of effective Markov orders for the process, parameterised by $\theta$), the expectation value is taken over the distribution of strings $x_{-\ell-1:0}$ of length $\ell+1$, and $D$ is some distance measure between probability distributions; for the purposes of this work we will use the trace distance $D(P,Q) = \frac{1}{2}\sum_x |P(x) - Q(x)|$. We can estimate the effective Markov order from the data, and use this as a guide for choosing a value for $L$ in the inference protocol.
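Estimating the effective Markov order from data in this way can be sketched as follows (our own illustrative implementation; `effective_markov_order` and the sampled chain are ours, with $D$ taken as the trace distance):

```python
import numpy as np

def cond_counts(seq, hlen):
    """Counts of next-symbol occurrences for each length-hlen history."""
    counts = {}
    for i in range(hlen, len(seq)):
        h = tuple(seq[i - hlen:i])
        counts.setdefault(h, {})
        counts[h][seq[i]] = counts[h].get(seq[i], 0) + 1
    return counts

def effective_markov_order(seq, theta, l_max=6):
    """Smallest l for which one further past symbol shifts the predictive
    distribution by less than theta on average, in the spirit of Eq. (14)."""
    alphabet = sorted(set(seq))
    for l in range(l_max):
        short, long_ = cond_counts(seq, l), cond_counts(seq, l + 1)
        total = sum(sum(c.values()) for c in long_.values())
        expected = 0.0
        for h, c in long_.items():
            nh = sum(c.values())
            cs = short[h[1:]]  # drop the extra (oldest) symbol
            ns = sum(cs.values())
            # trace distance between the two empirical predictive distributions
            d = 0.5 * sum(abs(c.get(x, 0) / nh - cs.get(x, 0) / ns)
                          for x in alphabet)
            expected += (nh / total) * d  # weight by history frequency
        if expected <= theta:
            return l
    return None

# Sample an order-1 binary chain (P(1|0) = 0.5, P(1|1) = 0):
rng = np.random.default_rng(3)
seq, s = [], 0
for _ in range(50_000):
    s = int(rng.random() < (0.5 if s == 0 else 0.0))
    seq.append(s)

print(effective_markov_order(seq, theta=0.05))
```

For this chain the estimator recovers $R_{\mathrm{eff}} = 1$, matching the true Markov order; with finite data the expectation never falls exactly to zero, which is precisely why the threshold $\theta$ is needed.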
IV Examples
We now demonstrate the efficacy of our protocol with two toy example processes: the so-called $(R,k)$ golden mean and nemo processes. We use an exact HMM representation of the processes to generate a representative string of outputs, and infer $\tilde{C}_q$ directly from this time-series. We choose the initial state according to the stationary distribution of the HMM, such that the output statistics are representative of the stationary state of the process. We will look at how the estimate for the quantum statistical memory varies both with the amount of data $N$ and the history length $L$, highlighting the range of values that would be considered appropriate given the above considerations regarding the (effective) Markov order and expected size of fluctuations.
IV.1 Golden mean process
We first look at the $(R,k)$ golden mean process family. Here, $R$ and $k$ are tunable parameters that correspond to the Markov order and cryptic order of the process respectively (the cryptic order is a counterpart to the Markov order, describing the minimum length of past observations required to be certain of the present causal state, given that one knows the entire future, i.e., the smallest $k$ satisfying $H[S_0|X_{-k:0},\overrightarrow{X}] = 0$) Crutchfield et al. (2009); Mahoney et al. (2009, 2011, 2016). Here, we will consider a particular $(R,k)$ golden mean process, as represented by its ε-machine in Fig. 3(a).
In Fig. 3(b) we plot the expectation of the trace distance between predictive distributions for histories differing by one additional symbol increasingly far into the past [i.e., between $P(X_0|x_{-\ell:0})$ and $P(X_0|x_{-\ell-1:0})$], from which we can infer an effective Markov order for the process as defined in Eq. (14). The limits of finite data are already visible in this plot, with a clear instability when $\ell$ is too large relative to $N$, due to undersampling of the process statistics. The hollow circle on each plot represents the point beyond which we consider the statistics to be undersampled. Setting a suitable threshold $\theta$, we would assign an effective Markov order that aligns with the true Markov order of the process. Fig. 3(c) displays the estimated $\tilde{C}_q$; we see that at the Markov order the estimate is very close to the exact value of $C_q$, with statistical noise gradually degrading the quality of the estimate at larger $L$ when we have insufficient data. This highlights both the efficacy of the protocol, and the importance of selecting an appropriate value for $L$.
IV.2 Nemo process
As a second example, we consider the nemo process, which can be represented by its ε-machine as in Fig. 4(a). A key feature of this process is that it has infinite Markov order: a contiguous string of zeros of any length cannot be exactly synchronised to a causal state. As such, with this example it is not possible to choose an $L$ that matches the Markov order of the process. Nevertheless, we will show that the effective Markov order can provide a suitable proxy.
Fig. 4(b) shows the expected trace distance for variation in the symbol $\ell$ steps into the past; setting a tolerance $\theta$ we assign an effective Markov order $R_{\mathrm{eff}}$. Examining the estimated quantum statistical memory [Fig. 4(c)], we see that setting $L = R_{\mathrm{eff}}$ does indeed appear to provide an accurate estimate of $C_q$, striking a balance between allowing sufficiently long histories to capture most of the past dependency, while not going so far as to underfit. Note that in this case the smallest data size considered should be regarded as insufficient to provide a good estimate, as the prescribed cap on $L$ is significantly smaller than the effective Markov order – practically, this could be deduced from the trace distance, which never drops below the threshold value.
V Discussion
We have introduced a protocol for estimating the information cost of quantum simulation of stochastic processes. We have shown that both this quantity and our protocol are robust to small statistical perturbations. This provides a means to characterise structure in a process according to the amount of (quantum) resources needed to capture its behaviour, analogous to corresponding classical quantities Crutchfield and Young (1989); Shalizi and Klinker (2004); Strelioff and Crutchfield (2014). Moreover, this provides a key step towards blind construction of quantum models that efficiently replicate the behaviour of such processes.
An essential consideration to be made in this latter vein is the capabilities of current and near-term quantum technologies. Our inference protocol accurately captures the information that must be stored by a quantum model of a process – appropriately indicating the amount of structure in the process – at the expense of indicating a multitude of memory states, typically $|\mathcal{A}|^L$, in excess of the number of causal states. The number of memory states is parameterised by a companion metric, the topological memory; quantum advantages can also be found in terms of this measure Thompson et al. (2018); Ghafari et al. (2019); Liu et al. (2019); Elliott et al. (2019b); Loomis and Crutchfield (2019). Future work will investigate methods of compression in this parameter via truncation of the quantum state space, opening up the possibility of implementing the inferred constructions experimentally. The accuracy of these inferred models can then be explored using recently-developed quantifiers of process distinguishability Yang et al. (2019).
Acknowledgements.
We thank Chew Lock Yue, Suen Whei Yeap, and Suryadi for discussions. This work was funded by the Lee Kuan Yew Endowment Fund (Postdoctoral Fellowship), Singapore Ministry of Education Tier 1 grant RG190/17, grant FQXi-RFP-1809 from the Foundational Questions Institute and Fetzer Franklin Fund (a donor-advised fund of Silicon Valley Community Foundation), Singapore National Research Foundation Fellowship NRF-NRFF2016-02, and NRF-ANR grant NRF2017-NRFANR004 VanQuTe. T.J.E. thanks the Centre for Quantum Technologies for their hospitality.

Appendix A Detailed proof of robustness of quantum statistical memory
Recall from the main text that the Gram matrix of a quantum state $\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|$ is defined as $G_{jk} = \sqrt{p_j p_k}\,\langle\psi_j|\psi_k\rangle$ Jozsa (2000); Horn and Johnson (2012); Riechers et al. (2016). If one considers a purification $|\Psi\rangle = \sum_j \sqrt{p_j}\,|\psi_j\rangle|j\rangle$ of the original state $\rho$, then $\rho$ can be recovered by taking a partial trace over the second subsystem, and $G$ by tracing out the first. The Gram matrix thus has the same spectrum as the original state, and so can be used to calculate functions of this spectrum, such as the entropy. As stated in Eq. (12), for our construction the Gram matrix is given by
$$G_{x_{-L:0},\,x'_{-L:0}} = \sqrt{P(x_{-L:0})\,P(x'_{-L:0})}\, \langle\varsigma_{x_{-L:0}}|\varsigma_{x'_{-L:0}}\rangle. \qquad (15)$$
Using Eq. (13), and writing $w = x_{-L:0}$ and $w' = x'_{-L:0}$ for the pasts and $f = x_{0:L}$ for the futures, this can be expanded as

$$G_{ww'} = \sqrt{P(w)\,P(w')} \sum_f \sqrt{P(f|w)\,P(f|w')} = \sum_f \sqrt{P(wf)\,P(w'f)}, \qquad (16)$$

where $P(wf)$ denotes the probability of the length-$2L$ word $wf$.
We now examine how this changes when the probabilities $P(wf)$ are replaced by their perturbed versions $P(wf)(1 + \epsilon\,\eta_{wf})$. Consider expanding out the square root of the product of two such perturbations:

$$\sqrt{P(wf)\left(1 + \epsilon\eta_{wf}\right)P(w'f)\left(1 + \epsilon\eta_{w'f}\right)} = \sqrt{P(wf)\,P(w'f)}\left[1 + \frac{\epsilon}{2}\left(\eta_{wf} + \eta_{w'f}\right) + O(\epsilon^2)\right]. \qquad (17)$$
Substituting this into Eq. (16), we obtain

$$\tilde{G}_{ww'} = \sum_f \sqrt{P(wf)\,P(w'f)}\left[1 + \frac{\epsilon}{2}\left(\eta_{wf} + \eta_{w'f}\right)\right] + O(\epsilon^2). \qquad (18)$$
Thus, we can write

$$\tilde{G} = G + \epsilon\,G^{(1)} + O(\epsilon^2), \qquad (19)$$

where

$$G^{(1)}_{ww'} = \frac{1}{2}\sum_f \sqrt{P(wf)\,P(w'f)}\left(\eta_{wf} + \eta_{w'f}\right). \qquad (20)$$
From Weyl’s inequality, it then follows that the perturbations to the eigenvalues of $G$ are bounded by the spectral norm of $\epsilon G^{(1)}$ Horn and Johnson (2012). Clearly, this norm scales with $\epsilon$, and so the perturbation to the spectrum of $G$ varies continuously with $\epsilon$. Finally, since the von Neumann entropy of a quantum state is given by the Shannon entropy of its spectrum – and is a continuous function of it Nielsen and Chuang (2010) – the quantum statistical memory is smoothly deformed by the perturbation, and so is robust.

Appendix B Scaling of statistical noise
We now examine the effects of statistical noise in our estimates of word probabilities on $\tilde{C}_q$. These fluctuations can be considered as a (stochastic) perturbation of the form of Eq. (11), allowing us to employ the results above. We will set $\epsilon$ to 1, folding the full scaling of the perturbation with $N$ and $L$ into $\eta$.
Recall that the corrections to the eigenvalues arising from the perturbation are bounded by the spectral norm of $G^{(1)}$ – i.e., its largest eigenvalue – which in turn is bounded by the product of the dimension of the matrix with its largest element. The elements of this matrix are given in Eq. (20); to assess how they scale we will replace the stochastic variables by their standard errors. Since the word probabilities $P(wf)$ are described by binomial distributions (a randomly selected string can be assigned as either being the given word or not), the standard error of the relative perturbation is given by

$$\eta_{wf} = \sqrt{\frac{1 - P(wf)}{N\,P(wf)}}. \qquad (21)$$
Inserting this into Eq. (20), we obtain

$$G^{(1)}_{ww'} = \frac{1}{2\sqrt{N}}\sum_f \left(\sqrt{P(w'f)\left(1 - P(wf)\right)} + \sqrt{P(wf)\left(1 - P(w'f)\right)}\right). \qquad (22)$$
The probability of obtaining a given word of length $2L$ falls off approximately exponentially with $L$. Let us assume all long words are roughly evenly distributed, and take $P(wf) \approx |\mathcal{A}|^{-2L}$. Considering that there are roughly $|\mathcal{A}|^L$ terms in the sum, we have $G^{(1)}_{ww'} \sim 1/\sqrt{N}$. Finally, since the dimension of the matrix scales as $|\mathcal{A}|^L$, the spectral norm (and thus the bound on the size of perturbations to the spectrum of the Gram matrix) scales as $|\mathcal{A}|^L/\sqrt{N}$.
References
 Lynch (2008) P. Lynch, Journal of Computational Physics 227, 3431 (2008).
 Bauer et al. (2015) P. Bauer, A. Thorpe, and G. Brunet, Nature 525, 47 (2015).
 Pavlos et al. (2015) G. P. Pavlos, L. P. Karakatsanis, A. C. Iliopoulos, E. G. Pavlos, M. N. Xenakis, P. Clark, J. Duke, and D. S. Monos, Physica A 438, 188 (2015).
 Preis et al. (2012) T. Preis, D. Y. Kenett, H. E. Stanley, D. Helbing, and E. BenJacob, Scientific Reports 2, 1 (2012).
 Kerner and Rehborn (1996) B. S. Kerner and H. Rehborn, Physical Review E 53, 4275 (1996).
 Crutchfield and Young (1989) J. P. Crutchfield and K. Young, Physical Review Letters 63, 105 (1989).
 Shalizi (2001) C. R. Shalizi, PhD Thesis (2001).
 Crutchfield (2011) J. P. Crutchfield, Nature Physics 8, 17 (2011).
 Hanson and Crutchfield (1997) J. E. Hanson and J. P. Crutchfield, Physica D: Nonlinear Phenomena 103, 169 (1997).
 Gonçalves et al. (1998) W. M. Gonçalves, R. D. Pinto, J. C. Sartorelli, and M. J. De Oliveira, Physica A 257, 385 (1998).
 Park et al. (2007) J. B. Park, J. Won Lee, J. S. Yang, H. H. Jo, and H. T. Moon, Physica A 379, 179 (2007).
 Haslinger et al. (2010) R. Haslinger, K. L. Klinkner, and C. R. Shalizi, Neural Computation 22, 121 (2010).
 Montanaro (2016) A. Montanaro, npj Quantum Information 2, 15023 (2016).
 Bennett and Brassard (2014) C. H. Bennett and G. Brassard, Theoretical Computer Science 560, 7 (2014).
 Gu et al. (2012) M. Gu, K. Wiesner, E. Rieper, and V. Vedral, Nature Communications 3, 762 (2012).
 Mahoney et al. (2016) J. R. Mahoney, C. Aghamohammadi, and J. P. Crutchfield, Scientific Reports 6, 20495 (2016).
 Riechers et al. (2016) P. M. Riechers, J. R. Mahoney, C. Aghamohammadi, and J. P. Crutchfield, Physical Review A 93, 052317 (2016).
 Thompson et al. (2017) J. Thompson, A. J. P. Garner, V. Vedral, and M. Gu, npj Quantum Information 3, 6 (2017).
 Elliott and Gu (2018) T. J. Elliott and M. Gu, npj Quantum Information 4, 18 (2018).
 Binder et al. (2018) F. C. Binder, J. Thompson, and M. Gu, Physical Review Letters 120, 240502 (2018), arXiv:1709.02375 .
 Elliott et al. (2019a) T. J. Elliott, A. J. P. Garner, and M. Gu, New Journal of Physics 21, 013021 (2019a).
 Liu et al. (2019) Q. Liu, T. J. Elliott, F. C. Binder, C. Di Franco, and M. Gu, Physical Review A 99, 062110 (2019).
 Garner et al. (2017) A. J. P. Garner, Q. Liu, J. Thompson, and V. Vedral, New Journal of Physics 19, 103009 (2017).
 Aghamohammadi et al. (2017a) C. Aghamohammadi, J. R. Mahoney, and J. P. Crutchfield, Scientific Reports 7, 6735 (2017a).
 Thompson et al. (2018) J. Thompson, A. J. Garner, J. R. Mahoney, J. P. Crutchfield, V. Vedral, and M. Gu, Physical Review X 8, 031013 (2018).
 Elliott et al. (2019b) T. J. Elliott, C. Yang, F. C. Binder, A. J. P. Garner, J. Thompson, and M. Gu, arXiv preprint arXiv:1909.02817 (2019b).
 Palsson et al. (2017) M. S. Palsson, M. Gu, J. Ho, H. M. Wiseman, and G. J. Pryde, Science Advances 3 (2017), 10.1126/sciadv.1601302.
 Jouneghani et al. (2017) F. G. Jouneghani, M. Gu, J. Ho, J. Thompson, W. Y. Suen, H. M. Wiseman, and G. J. Pryde, (2017), arXiv:1711.03661 .
 Ghafari et al. (2019) F. Ghafari, N. Tischler, J. Thompson, M. Gu, L. K. Shalm, V. B. Verma, S. W. Nam, R. B. Patel, H. M. Wiseman, and G. J. Pryde, Physical Review X 9, 041013 (2019).
 Tan et al. (2014) R. Tan, D. R. Terno, J. Thompson, V. Vedral, and M. Gu, European Physical Journal Plus 129:191 (2014), 10.1140/epjp/i2014-14191-2.
 Suen et al. (2017) W. Y. Suen, J. Thompson, A. J. P. Garner, V. Vedral, and M. Gu, Quantum 1, 25 (2017).
 Aghamohammadi et al. (2017b) C. Aghamohammadi, J. R. Mahoney, and J. P. Crutchfield, Physics Letters A 381, 1223 (2017b).
 Suen et al. (2018) W. Y. Suen, T. J. Elliott, J. Thompson, A. J. P. Garner, J. R. Mahoney, V. Vedral, and M. Gu, (2018), arXiv:1812.09738 .
 Shalizi and Klinker (2004) C. R. Shalizi and K. L. Klinker, in Uncertainty in Artificial Intelligence: Proceedings of the Twentieth Conference (UAI 2004), edited by M. Chickering and J. Y. Halpern (AUAI Press, Arlington, Virginia, 2004) pp. 504–511.
 Strelioff and Crutchfield (2014) C. C. Strelioff and J. P. Crutchfield, Physical Review E 89, 042119 (2014).
 Racca et al. (2007) E. Racca, F. Laio, D. Poggi, and L. Ridolfi, Physical Review E 75, 011126 (2007).
 Mahoney et al. (2011) J. R. Mahoney, C. J. Ellison, R. G. James, and J. P. Crutchfield, Chaos 21, 037112 (2011).
 Jozsa (2000) R. Jozsa, Physical Review A 62, 012301 (2000).
 Horn and Johnson (2012) R. A. Horn and C. R. Johnson, Matrix Analysis, 2nd ed. (Cambridge University Press, 2012).
 Nielsen and Chuang (2010) M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2010).
 Crutchfield et al. (2009) J. P. Crutchfield, C. J. Ellison, and J. R. Mahoney, Physical Review Letters 103, 094101 (2009).
 Mahoney et al. (2009) J. R. Mahoney, C. J. Ellison, and J. P. Crutchfield, Journal of Physics A: Mathematical and Theoretical 42, 362002 (2009).
 Loomis and Crutchfield (2019) S. P. Loomis and J. P. Crutchfield, Journal of Statistical Physics 176, 1317 (2019).
 Yang et al. (2019) C. Yang, F. C. Binder, M. Gu, and T. J. Elliott, arXiv preprint arXiv:1909.08366 (2019).