Extreme Quantum Advantage for Rare-Event Sampling

07/29/2017
by C. Aghamohammadi, et al.
University of California-Davis

We introduce a quantum algorithm for efficient biased sampling of the rare events generated by classical memoryful stochastic processes. We show that this quantum algorithm gives an extreme advantage over known classical biased sampling algorithms in terms of the memory resources required. The quantum memory advantage ranges from polynomial to exponential and when sampling the rare equilibrium configurations of spin systems the quantum advantage diverges.


I Introduction

From earthquakes to financial market crashes, rare events are associated with catastrophe—from decimated social infrastructure and the substantial loss of life to global economic collapse. Though rare, their impact cannot be ignored. Predicting and modeling such rare events is essential to mitigating their effects. However, this is particularly challenging, often requiring huge datasets and massive computational resources, precisely because the events of interest are rare.

Ameliorating much of the challenge, biased or extended sampling [1, 2] is an effective and now widely-used method for efficient generation and analysis of rare events. The underlying idea is simple to state: transform a given distribution to a new one where previously rare events are now typical. This concept was originally proposed in 1961 by Miller to probe the rare events generated by discrete-time, discrete-value Markov stochastic processes [3]. It has since been extended to address non-Markovian processes [4]. The approach was also eventually adapted to continuous-time first-order Markov processes [5, 6, 7]. Today, the statistical analysis of rare events is a highly developed toolkit with broad applications in the sciences and engineering [8]. Given this, it is perhaps not surprising that the idea and its related methods appear under different appellations, depending on the research arena. For example, in large deviation theory it is referred to as the s-ensemble method [9, 10], as exponential tilting [11, 12], or as generating twisted distributions.

In 1977, building on biased sampling, Torrie and Valleau introduced umbrella sampling into Monte Carlo simulation of systems whose energy landscapes have high energy barriers and so suffer particularly from poor sampling [13]. Since then, stimulated by computational problems arising in statistical mechanics, the approach was generalized to Ferrenberg-Swendsen reweighting, later still to weighted histogram analysis [14], and more recently to Wang-Landau sampling [15].

When generating samples for a given stochastic process one can employ two main types of algorithm—Monte Carlo or finite-state machine algorithms. Here, we consider finite-state machine algorithms based on Markov chains (MCs) [16, 17] and hidden Markov models (HMMs) [18, 19, 20]. For example, if the process is Markovian one uses MC generators and, in more general cases, one uses HMM generators.

When evaluating alternative approaches, the key questions concern algorithm speed and memory efficiency. For example, it turns out that HMMs are always at least as memory efficient as MCs, and there are many finite-state HMMs for which the analogous MC is infinite-state [21]. And so, when comparing all HMMs that generate the same process, one is often interested in those that are most memory efficient. For a generic stochastic process, the most memory-efficient classical HMM currently known is the ε-machine of computational mechanics [22]. The memory it requires is called the process' statistical complexity [23].

Today, we have come to appreciate that several important mathematical problems can be solved more efficiently using a quantum computer. Examples include quantum algorithms for integer factorization [24], search [25], eigen-decomposition [26], and solving linear systems [27]. Not long ago and for the first time, Ref. [28] provided a quantum algorithm that can perform stochastic process sample-generation using less memory than the best-known classical algorithms. Recently, using a stochastic process’ higher-order correlations, a new quantum algorithm—the q-machine—substantially improved this efficiency and extended its applicability [29]. More detailed analysis and a derivation of the closed-form quantum advantage of the q-machine is given in a sequel [30]. Notably, the quantum advantage has been verified experimentally for a simple case [31].

The following brings together techniques from large deviation theory, classical algorithms for stochastic process generation, computational complexity theory, and the newly introduced quantum algorithm for stochastic process generation to propose a new, memory-efficient quantum algorithm for the biased sampling problem. We show that there can be an extreme advantage in the quantum algorithm's required memory compared to the best known classical algorithm. Two examples are analyzed here. The first is the simple, but now well-studied, Perturbed Coins Process. The second is a more physical example—a stochastic process that arises from the next-nearest-neighbor Ising spin system in contact with a thermal reservoir.

II Classical Algorithm

The object for which we wish to generate samples is a discrete-time, discrete-value stochastic process [32, 18] $\mathcal{P}$: a probability space $(\mathcal{A}^{\mathbb{Z}}, \Sigma, \mathbb{P})$, where $\mathbb{P}$ is a probability measure over the bi-infinite chain of random variables $X_{-\infty:\infty} = \ldots X_{-1} X_0 X_1 \ldots$, each random variable $X_t$ takes values in a finite, discrete alphabet $\mathcal{A}$, and $\Sigma$ is the $\sigma$-algebra generated by the cylinder sets in $\mathcal{A}^{\mathbb{Z}}$. For simplicity we consider only ergodic stationary processes: that is, $\mathbb{P}$ is invariant under time translation—$\Pr(X_{t:t+\ell}) = \Pr(X_{0:\ell})$ for all $t$ and $\ell$—and is ergodic over successive realizations.

Figure 1: Hidden Markov model generator of a stochastic process with infinite-range statistical dependencies; the HMM requires only six states. To generate the same process via a Markov chain requires an infinite number of states and so infinite memory.

Sampling or generating a given stochastic process refers to producing a finite realization drawn from the process' probability distribution. Generally, generating a process directly from its probability measure $\mathbb{P}$ is impossible due to the vast number of allowed realizations and, as a result, this prosaic approach requires an unbounded amount of memory. Fortunately, there are more compact ways than specifying in full the probability measure on the sequence $\sigma$-algebra. This recalls the earlier remark that HMMs can be arbitrarily more compact than alternative algorithms for the task of generation.

An HMM is specified by a tuple $(\mathcal{S}, \mathcal{A}, \{T^{(x)} : x \in \mathcal{A}\})$. In this, $\mathcal{S}$ is a finite set of states, $\mathcal{A}$ is a finite alphabet, and $\{T^{(x)}\}$ is a set of substochastic symbol-labeled transition matrices whose sum $T = \sum_{x \in \mathcal{A}} T^{(x)}$ is a stochastic matrix.

As an example, consider the HMM state-transition diagram shown in Fig. 1, which has six states, a finite alphabet, and one substochastic matrix per symbol. Each edge is labeled with a transition probability and the symbol emitted during that transition. If the HMM occupies a state with two outgoing edges, then with the first edge's probability it moves to that edge's destination state and emits that edge's symbol; otherwise it takes the second edge, moving to its destination state and emitting its symbol. Following these transition rules in succession generates realizations of the HMM's process.
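To make these transition rules concrete, here is a minimal Python sketch that samples a realization from an HMM given as a dictionary of symbol-labeled substochastic matrices. The two-state example machine, its symbols, and its probabilities are illustrative assumptions, not the six-state machine of Fig. 1 (whose labels do not survive in this copy).

```python
import numpy as np

def sample_hmm(T, n_steps, rng=None, state=None):
    """Generate a length-n_steps symbol sequence from an HMM specified as a
    dict {symbol: substochastic transition matrix}."""
    rng = np.random.default_rng() if rng is None else rng
    symbols = list(T)
    n_states = T[symbols[0]].shape[0]
    state = rng.integers(n_states) if state is None else state
    out = []
    for _ in range(n_steps):
        # Joint probabilities of (symbol, next state) leaving the current state.
        probs = np.array([T[x][state, :] for x in symbols]).ravel()
        k = rng.choice(probs.size, p=probs)
        symbol_index, state = divmod(k, n_states)
        out.append(symbols[symbol_index])
    return out

# Illustrative two-state, two-symbol HMM (all probabilities are made up).
T = {
    "0": np.array([[0.6, 0.0], [0.3, 0.0]]),
    "1": np.array([[0.0, 0.4], [0.0, 0.7]]),
}
print("".join(sample_hmm(T, 30)))
```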

How does this generation method compare to generating realizations of the same process via a finite Markov chain? It turns out that the latter cannot be implemented, since generating a symbol can depend on the infinite history. That is, the process has infinite Markov order. As a result, to generate a realization using a Markov chain one needs an infinite number of Markovian states. In other words, implementing the Markov chain algorithm to generate process samples on a conventional computer requires an infinite amount of memory.

To appreciate the reason behind the process' infinite Markov order, refer to Fig. 1's HMM. There are two state-loops, consisting of the edges colored red (right side of the state-transition diagram) and those colored maroon (left side). Note that if the HMM generates a long run of one symbol around a loop, we will not know the HMM's current state, only that it is one of that loop's three states. This state uncertainty (entropy) is bounded away from zero. The same observation holds for the other loop, its repeated symbol, and the consequent ambiguity among its states. Thus, there exist process realizations from which we cannot determine the future statistics, no matter how many symbols have been seen. This means that the process statistics depend on infinite past sequences—the process has infinite Markov order. To emphasize, implementing an MC algorithm for this process requires infinite memory. The contrast with the finite HMM method is an important lesson: HMMs are strictly more powerful generators, as a class of algorithms, than Markov chain generators.

For any given process $\mathcal{P}$, there are an infinite number of HMMs that generate it. Therefore, one is compelled to ask, Which algorithm requires the least memory for implementation? The best known implementation, and provably the optimal predictor, is the ε-machine [33, 22]. The states of the ε-machine are called causal states; we denote this set $\boldsymbol{\mathcal{S}}$.

The average memory required for the ε-machine to generate process $\mathcal{P}$ is given by the process' statistical complexity $C_\mu$ [23]. To calculate it:

  1. Compute the stationary distribution $\pi$ over causal states: $\pi$ is the left eigenvector of the state-transition matrix $T = \sum_{x \in \mathcal{A}} T^{(x)}$ with eigenvalue $1$, that is, $\pi T = \pi$.

  2. Calculate the states' Shannon entropy: $C_\mu = H[\pi] = -\sum_{\sigma \in \boldsymbol{\mathcal{S}}} \pi_\sigma \log_2 \pi_\sigma$.

Thus, $C_\mu$ measures the (ensemble average) memory required to generate the process.
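As a concrete illustration of this two-step recipe, here is a small Python sketch that computes $C_\mu$ from the dictionary-of-labeled-matrices format used in the earlier sketch; both the format and the function names are assumptions of these sketches.

```python
import numpy as np

def stationary_distribution(T_total):
    """Left eigenvector of a stochastic matrix with eigenvalue 1, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(T_total.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def statistical_complexity(T):
    """C_mu in bits: Shannon entropy of the stationary causal-state distribution."""
    pi = stationary_distribution(sum(T.values()))
    pi = pi[pi > 0]
    return float(-(pi * np.log2(pi)).sum())
```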

Another important, companion measure is $h_\mu$, the process' metric entropy (or Shannon entropy rate) [34]:
$$h_\mu = \lim_{\ell \to \infty} \frac{H[X_{0:\ell}]}{\ell} ,$$
where $H[\cdot]$ is the Shannon entropy of the length-$\ell$ word distribution. Although sometimes confused, it is important to emphasize that $h_\mu$ describes randomness in the realizations, while $C_\mu$ describes the required memory for process generation.
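For a unifilar generator such as an ε-machine, $h_\mu$ equals the stationary average of each state's outgoing-transition entropy; the sketch below computes that quantity in the same dictionary format as above (for a non-unifilar HMM it only upper-bounds $h_\mu$).

```python
import numpy as np

def entropy_rate(T):
    """h_mu in bits/symbol for a unifilar generator: the stationary average of the
    Shannon entropy of each state's outgoing (symbol, next-state) transitions."""
    T_total = sum(T.values())
    vals, vecs = np.linalg.eig(T_total.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    h = 0.0
    for i, p_i in enumerate(pi):
        out = np.concatenate([Tx[i, :] for Tx in T.values()])
        out = out[out > 0]
        h -= p_i * (out * np.log2(out)).sum()
    return float(h)
```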

III Quantum memory advantage

Recently, it was shown that a quantum algorithm for process generation can use less memory than the best known classical algorithm (the ε-machine) [28]. We refer to the ratio of required classical memory to quantum memory as the quantum advantage. Taking into account a process' higher-order correlations, a new quantum algorithm—the q-machine—was introduced that substantially improves the original quantum algorithm and is, to date, the most memory-efficient quantum algorithm known for process generation [29]. Closed-form expressions for the quantum advantage are given in [30].

Importantly, the quantum advantage was recently verified experimentally for the simple Perturbed Coins Process [31]. It has been found that the q-machine sometimes confers an extreme quantum-memory advantage. For example, for generation of ground-state configurations in a Dyson-type spin model with $N$-nearest-neighbor interactions at temperature $T$, the quantum advantage grows without bound with $N$ [35, 36].

One consequence of this quantum advantage arises in model selection [37]. Statistical inference of models for stochastic systems often involves controlling for model size or memory. The following applies this quantum advantage to find gains in the setting of biased sampling of a process’ rare events. In particular, we will develop tools to determine how the memory requirements of classical and quantum algorithms vary over rare-event classes.

IV Quantum algorithm

We define the quantum machine of a stochastic process $\mathcal{P}$ by a tuple $(\mathcal{H}, \mathcal{A}, \{K_x : x \in \mathcal{A}\})$, where $\mathcal{H}$ denotes the Hilbert space in which the quantum states reside, $\mathcal{A}$ is the same alphabet as the given process', and $\{K_x\}$ is a set of Kraus operators we use to specify the measurement protocol for states [38]. (We adopt a particular form for the Kraus operators; in general, they are not unique.) Assume we have the state (or density matrix) $\rho$ in hand. We perform a measurement and, as a result, observe a symbol. The probability of yielding symbol $x$ is:
$$\Pr(x) = \operatorname{Tr}\!\left( K_x \rho K_x^\dagger \right) .$$

After measurement with outcome $x$, the new quantum state is:
$$\rho \mapsto \frac{K_x \rho K_x^\dagger}{\operatorname{Tr}\!\left( K_x \rho K_x^\dagger \right)} .$$

Repeating these measurements generates a stochastic process. The process could be nonergodic, depending on the initial state. Starting the machine in the stationary state $\rho_s$ defined by:
$$\rho_s = \sum_{x \in \mathcal{A}} K_x \rho_s K_x^\dagger$$
and measuring over and over again generates a stationary stochastic process over $\mathcal{A}$. For any given process, $\rho_s$ can be calculated by the method introduced in Ref. [30].
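The repeated-measurement dynamics above can be simulated directly. A minimal sketch, assuming the Kraus operators are supplied as a dictionary of NumPy arrays keyed by symbol; the function names are illustrative.

```python
import numpy as np

def measure_step(rho, kraus, rng):
    """One measurement: choose symbol x with probability Tr(K_x rho K_x^dag),
    then collapse rho to the corresponding post-measurement state."""
    posts = {x: K @ rho @ K.conj().T for x, K in kraus.items()}
    probs = np.array([np.real(np.trace(p)) for p in posts.values()])
    probs = probs / probs.sum()            # guard against rounding error
    x = list(posts)[rng.choice(len(probs), p=probs)]
    rho_next = posts[x] / np.trace(posts[x])
    return x, rho_next

def generate(rho0, kraus, n_steps, seed=0):
    """Repeat the measurement to emit a length-n_steps symbol sequence."""
    rng = np.random.default_rng(seed)
    rho, out = rho0, []
    for _ in range(n_steps):
        x, rho = measure_step(rho, kraus, rng)
        out.append(x)
    return out
```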

Our immediate goal is to design a quantum generator of a given classical process. (Section VI will then take the given process to represent a rare-event class of some other process.) For now, we start with the process' ε-machine. The construction consists of three steps, as follows.

First: Map every causal state $\sigma_i$ to a pure quantum state $|\eta_i\rangle$. Each signal state $|\eta_i\rangle$ encodes the set of length-$R$ sequences that may follow $\sigma_i$, as well as each corresponding conditional probability:
$$|\eta_i\rangle = \sum_{w \in \mathcal{A}^R} \sqrt{\Pr(w \mid \sigma_i)}\; |w\rangle ,$$
where $w$ denotes a length-$R$ sequence and $R$ is the process' Markov order. The resulting Hilbert space is $\mathcal{H}$ with size $|\mathcal{A}|^R$, the number of length-$R$ sequences, with basis elements $\{|w\rangle\}$.

Second: Form a matrix $\Xi$ by assembling the signal states as its columns:
$$\Xi = \big[\; |\eta_1\rangle \;\; |\eta_2\rangle \;\; \cdots \;\; |\eta_{|\boldsymbol{\mathcal{S}}|}\rangle \;\big] .$$
From here on out, we assume all the $|\eta_i\rangle$s are linearly independent. (This holds for general processes except for some special cases, which we discuss elsewhere.) Define new bra states $\langle \widetilde{\eta_i} |$ as the rows of $\Xi$'s left inverse. That is, we design the new bra states such that we obtain the identity:
$$\langle \widetilde{\eta_i} | \eta_j \rangle = \delta_{ij} .$$

Third: Define the Kraus operators $K_x$ via:
$$K_x = \sum_{i} \sqrt{\Pr(x \mid \sigma_i)} \; |\eta_{\lambda(i,x)}\rangle \langle \widetilde{\eta_i}| ,$$
where $\lambda(i, x)$ labels the causal state reached from $\sigma_i$ upon emitting symbol $x$.

Using the quantum generator, the required average memory for generating process $\mathcal{P}$ is $Q_\mu = S(\rho_s)$, where $S(\rho) = -\operatorname{Tr}(\rho \log_2 \rho)$ denotes the von Neumann entropy [38]. References [29, 35] explain why $S(\rho_s)$ is the quantum machine's required memory.
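A small numerical sketch of this memory measure. It assumes, as in the q-machine construction above, that the stationary state is the $\pi$-weighted mixture of the signal states; `signal_states` is an array whose rows are the amplitude vectors of the $|\eta_i\rangle$, and the helper names are illustrative.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

def q_machine_memory(signal_states, pi):
    """Q_mu: von Neumann entropy of the stationary mixture of signal states."""
    rho_s = sum(p * np.outer(v, v.conj()) for p, v in zip(pi, signal_states))
    return von_neumann_entropy(rho_s)
```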

V Typical Realizations

At this point, we have established classical and quantum representations of processes and characterized their respective memory requirements. We now use this set-up to monitor the classical and quantum resources required to generate probability classes of a process' realizations.

The concept of a stochastic process is quite general. Any physical system that exhibits stochastic dynamics in time or space may be thought of as generating a stochastic process. In the spatial setting one considers not time evolution, but rather the spatial “dynamic”. For example, consider a one-dimensional noninteracting Ising spin-½ chain with a classical Hamiltonian, in contact with a thermal reservoir at temperature $T$. After thermalizing, a spin configuration at one instant of time may be thought of as having been generated left-to-right (or equivalently right-to-left). The probability distribution over these spatial-translation-invariant configurations defines a stationary stochastic process—a simple Markov random field.

For a length-$\ell$ configuration, one can ask for the probability of seeing $k$ up spins. The Strong Law of Large Numbers [39] guarantees that for large $\ell$ the ratio $k/\ell$ almost surely converges to the single-spin probability $\Pr(\uparrow)$. That is:
$$\frac{k}{\ell} \;\xrightarrow[\ell \to \infty]{\text{a.s.}}\; \Pr(\uparrow) .$$
Informally, a typical sequence is one that has close to $\ell \Pr(\uparrow)$ up spins. However, this does not preclude seeing other kinds of rare long runs, e.g., all up-spins or all down-spins. It simply means that the latter are rare events.

Now let us formally define the concept of typical realizations and, consequently, rare ones. Consider a given process $\mathcal{P}$ and let $\mathcal{A}^\ell$ denote its set of length-$\ell$ realizations. Then, for an arbitrary $\epsilon > 0$ the process' typical set [40, 41, 42] is defined:

$$A_\epsilon^\ell = \left\{ w \in \mathcal{A}^\ell : \left| -\frac{\log_2 \Pr(w)}{\ell} - h_\mu \right| \le \epsilon \right\} , \qquad (1)$$

where $h_\mu$ is the process' Shannon entropy rate, introduced above.

According to the Shannon-McMillan-Breiman theorem [43, 44, 45], for a given $\epsilon$ and sufficiently large $\ell$:

$$\Pr\!\left( A_\epsilon^\ell \right) > 1 - \epsilon . \qquad (2)$$

There are two important lessons here. First, from Eq. (1) we see that all sequences in the typical set have approximately the same probability. More precisely, the probability of typical sequences decays at the same exponential rate. The following adapts this to use decay rates to identify distinct sets of rare events. Second, coming from Eq. (2), for large $\ell$ the probability of sequences falling outside the typical set is close to zero—these are the sets of rare sequences.

Figure 2: For a given process, the space of all sequences is partitioned into those that are forbidden by the process, sequences in the typical set, and sequences that are neither forbidden nor typical—the atypical or rare sequences.

Another important consequence of the theorem is that sequences generated by a stationary ergodic process fall into one of three partitions; see Fig. 2. The first contains those that are never generated; they fall in the forbidden set. For example, the HMM in Fig. 1 never generates sequences containing certain consecutive symbols. The second partition consists of those in the typical set—the set with probability close to one, as in Eq. (2). And, the last contains sequences in a family of atypical sets—realizations that are rare to different degrees. We now refine this classification by dividing the atypical set into identifiable subsets, each with its own characteristic rarity.

Mirroring the familiar Boltzmann weight in statistical physics [46], in the $\ell \to \infty$ limit we define the subsets of a process $\mathcal{P}$ as:

$$\Lambda_U^\ell = \left\{ w \in \mathcal{A}^\ell : -\frac{\log_2 \Pr(w)}{\ell} = U \right\} . \qquad (3)$$

This partitions $\mathcal{A}^\ell$ into disjoint subsets in which all sequences have the same probability decay rate $U$. Physics vernacular would speak of the sequences having the same energy density $U$. (The decay rate $-\log_2 \Pr(w)/\ell$, considered as a random variable, is sometimes called a self-process [47].) Figure 3 depicts these subsets as “bubbles” of equal energy. Equation (1) says the typical set is that bubble with energy equal to the process' Shannon entropy rate: $U = h_\mu$. All the other bubbles contain rare events, some rarer than others. They exhibit faster or slower probability decay rates.
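To make the decay-rate classification concrete, here is a short sketch that evaluates a word's energy density $U(w) = -\log_2 \Pr(w)/|w|$ under an HMM in the dictionary format of the earlier sketches, with $\Pr(w)$ computed as the stationary-state-weighted product of labeled matrices.

```python
import numpy as np

def word_probability(T, pi, word):
    """Pr(word) under a stationary HMM: pi @ T^(x1) @ ... @ T^(xL), summed over final states."""
    v = np.asarray(pi, dtype=float)
    for x in word:
        v = v @ T[x]
    return float(v.sum())

def energy_density(T, pi, word):
    """U(word) = -log2 Pr(word) / len(word): the word's probability decay rate."""
    return -np.log2(word_probability(T, pi, word)) / len(word)
```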

Employing a process' HMM to generate realizations produces sequences in the typical set with probability close to one and, rarely, atypical sequences. Imagine that one is interested in a particular class of rare sequences, say, those with energy density $U \neq h_\mu$. (One might be concerned about the class of large-magnitude earthquakes or the emergence of major instabilities in the financial markets, for example.) How can one efficiently generate these rare sequences? We now show that there is a new process whose typical set is $\Lambda_U$, and this returns us directly to the challenge of biased sampling.

Figure 3: The space of all sequences partitioned into $\Lambda_U$s—isoenergy or equal probability-decay-rate bubbles—in which all sequences in the same $\Lambda_U$ have the same energy $U$. The typical set is one such bubble with energy equal to the Shannon entropy rate: $U = h_\mu$. Another important class is the forbidden set, whose sequences do not occur at all. The forbidden set can also be interpreted as the subset of sequences with infinite positive energy. By applying the map to the process and changing $\beta$ continuously from $-\infty$ to $\infty$ (excluding $\beta = 0$) one can generate any rare class of interest $\Lambda_U$. $\beta \to \infty$ corresponds to the most probable sequences with the smallest energy density, $\beta = 1$ corresponds to the typical set, and $\beta \to -\infty$ corresponds to the least probable sequences with the largest energy density.

VI Biased Sampling

Consider a finite set of configurations with probabilities specified by a distribution $\{p_i\}$ and an associated set of weighting factors $\{w_i\}$. Consider the procedure of reweighting that introduces a new distribution over configurations where:
$$\widehat{p}_i = \frac{w_i\, p_i}{\sum_j w_j\, p_j} .$$

Given a process $\mathcal{P}$ and its ε-machine, how do we construct a generator of $\mathcal{P}$'s atypical sequences at some energy density $U$ or, as we denoted it, the set $\Lambda_U$? Here, we answer this question by constructing a map from the original process $\mathcal{P}$ to a new one $\mathcal{P}_\beta$. The map is parametrized by $\beta$, which indexes the rare set of interest. (We use $\beta$ for convenience here, but it is related to $U$ by a function introduced shortly.) Both processes $\mathcal{P}$ and $\mathcal{P}_\beta$ are defined on the same measurable sequence space. The measures differ, but their supports (allowed sequences) are the same. For simplicity we refer to the map as the β-map.

Assume we are given $\mathcal{P}$'s ε-machine, with symbol-labeled transition matrices $\{T^{(x)}\}$. We showed that for every probability decay rate or energy density $U$, there exists a particular $\beta$ such that the mapped process $\mathcal{P}_\beta$ typically generates the words in $\Lambda_U^\ell$ for large $\ell$ [48]. The β-map which establishes this is calculated by a construction that relates $\mathcal{P}_\beta$'s generator to the HMM that generates $\mathcal{P}$:

  1. For each $x \in \mathcal{A}$, construct a new matrix $T^{(x)}_\beta$ for which $\big( T^{(x)}_\beta \big)_{ij} = \big( T^{(x)}_{ij} \big)^{\beta}$.

  2. Form the matrix $T_\beta = \sum_{x \in \mathcal{A}} T^{(x)}_\beta$.

  3. Calculate $T_\beta$'s maximum eigenvalue $\lambda_\beta$ and corresponding right eigenvector $r_\beta$.

  4. For each $x \in \mathcal{A}$, construct new matrices $S^{(x)}_\beta$ for which:
$$\big( S^{(x)}_\beta \big)_{ij} = \frac{\big( T^{(x)}_\beta \big)_{ij}\, (r_\beta)_j}{\lambda_\beta\, (r_\beta)_i} .$$
Having constructed the new process $\mathcal{P}_\beta$ by introducing its generator—the HMM with labeled matrices $\{S^{(x)}_\beta\}$—we use the latter to produce some rare set of interest $\Lambda_U$.
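The four-step β-map translates directly into code. A sketch, again in the dictionary-of-labeled-matrices format of the earlier sketches; zero entries are left at zero so the transition topology is preserved, and the function name is an assumption.

```python
import numpy as np

def beta_map(T, beta):
    """Map a generator's labeled matrices {x: T^(x)} to those of the tilted
    (driven) generator that typically produces the beta-indexed rare class.
    Returns the new labeled matrices and the top eigenvalue lambda_beta."""
    # Step 1: tilt each positive entry entrywise; zeros stay zero.
    T_b = {}
    for x, Tx in T.items():
        tilted = np.zeros_like(Tx, dtype=float)
        mask = Tx > 0
        tilted[mask] = Tx[mask] ** beta
        T_b[x] = tilted
    # Step 2: total tilted matrix.
    total = sum(T_b.values())
    # Step 3: maximum eigenvalue and its right eigenvector (made positive).
    vals, vecs = np.linalg.eig(total)
    k = np.argmax(np.real(vals))
    lam = np.real(vals[k])
    r = np.real(vecs[:, k])
    r = r * np.sign(r[np.argmax(np.abs(r))])
    # Step 4: rescale into properly normalized (stochastic) labeled matrices.
    S = {x: Tb * r[None, :] / (lam * r[:, None]) for x, Tb in T_b.items()}
    return S, lam
```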

In the $\ell \to \infty$ limit, within the new process $\mathcal{P}_\beta$ the probability of generating realizations from the set $\Lambda_{U_\beta}^\ell$ converges to one:
$$\lim_{\ell \to \infty} \Pr_{\mathcal{P}_\beta}\!\left( \Lambda_{U_\beta}^\ell \right) = 1 ,$$
where:
$$U_\beta = -\frac{d \log_2 \lambda_\beta}{d \beta} . \qquad (4)$$

In addition, in the same limit the process $\mathcal{P}_\beta$ assigns equal energy densities over all the members of the set $\Lambda_{U_\beta}^\ell$.

For the proof, see Ref. [48].

As a result, for large $\ell$ the process $\mathcal{P}_\beta$ typically generates the set $\Lambda_{U_\beta}^\ell$ with the specified energy $U_\beta$. The process $\mathcal{P}_\beta$ is sometimes called the auxiliary, driven, or effective process [49, 50, 51]. Examining the form of the energy, one sees that there is a one-to-one relationship between $\beta$ and $U$. And so, we can equivalently denote the process $\mathcal{P}_\beta$ by $\mathcal{P}_U$. More formally, every word in $\Lambda_U$, with probability measure one, is in the typical set of process $\mathcal{P}_U$.
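Eq. (4) can also be evaluated numerically without a closed form: a central finite difference of $\log_2 \lambda_\beta$ suffices. A self-contained sketch in the same labeled-matrix format; the step size is an arbitrary choice.

```python
import numpy as np

def log2_top_eigenvalue(T, beta):
    """log2 of the largest eigenvalue of the entrywise beta-tilted total matrix."""
    total = sum(np.where(Tx > 0, Tx, 1.0) ** beta * (Tx > 0) for Tx in T.values())
    return float(np.log2(np.max(np.real(np.linalg.eigvals(total)))))

def energy_of_beta(T, beta, d=1e-5):
    """U(beta) = -d log2(lambda_beta) / d beta, via a central finite difference."""
    return -(log2_top_eigenvalue(T, beta + d) - log2_top_eigenvalue(T, beta - d)) / (2 * d)
```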

The β-map construction guarantees that the HMMs generating $\mathcal{P}$ and $\mathcal{P}_\beta$ have the same states and transition topology. The only difference is in their transition probabilities. The mapped generator is not necessarily an ε-machine—the most memory-efficient classical algorithm that generates the process $\mathcal{P}_\beta$. Typically, though, it is an ε-machine, and there are only finitely many $\beta$s for which it is not. (More detailed development along these lines will appear in a sequel.)

VII Quantum and Classical Costs of Biased Sampling

Having introduced the necessary background to compare classical versus quantum models and to appreciate typical versus rare realizations, we are ready to investigate the quantum advantage when generating a given process’ rare events.

The last section concluded that the memory required by the classical algorithm to generate rare sequences with energy density $U$ is the statistical complexity of the mapped process, $C_\mu(\mathcal{P}_\beta)$, where $\beta$ and $U$ are related via Eq. (4). Similarly, the memory required by the quantum algorithm to generate the rare class with energy density $U$ is $Q_\mu(\mathcal{P}_\beta)$. For simplicity, we denote these two quantities by $C_\mu(\beta)$ and $Q_\mu(\beta)$.

Figure 4: ε-machine generator of the Perturbed Coins Process. Edges are labeled with conditional transition probabilities and emitted symbols; for example, a self-loop label indicates the probability with which that transition is taken and the symbol emitted.
Figure 5: Classical memory $C_\mu(\beta)$ and quantum memory $Q_\mu(\beta)$ versus $\beta$ for biased sampling of the Perturbed Coins Process' rare sequence classes; see Fig. 4 for the generator and the coin biases $p$ and $q$. As the inset shows, for large $\beta$ both classical and quantum memories decay exponentially with $\beta$, but the quantum memory decays faster.

VII.1 Advantage for a Simple Markov Process

Consider the case where we have two biased coins, call them A and B, each with a different bias—$p$ and $q$, respectively—for Heads. When we flip a coin, if the result is Heads, then on the next flip we choose coin A. If the result is Tails, we choose coin B. Flipping the coins over and over again results in a process called the Perturbed Coins Process [28]. Figure 4 shows the process' ε-machine generator, whose two causal states correspond to which coin is flipped next.

One can also produce this process with a quantum generator. Using the construction introduced in Sec. IV with Markov order $R = 1$, the two causal states map to the signal states
$$|\eta_A\rangle = \sqrt{p}\, |H\rangle + \sqrt{1 - p}\, |T\rangle , \qquad |\eta_B\rangle = \sqrt{q}\, |H\rangle + \sqrt{1 - q}\, |T\rangle ,$$
and the Kraus operators $K_H$ and $K_T$ follow from these states and their duals. For the stationary state we have:
$$\rho_s = \pi_A\, |\eta_A\rangle\langle\eta_A| + \pi_B\, |\eta_B\rangle\langle\eta_B| ,$$
where $(\pi_A, \pi_B)$ is the stationary distribution over the classical causal states.
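A minimal end-to-end sketch for this example, with illustrative coin biases (the paper's exact values of $p$ and $q$ are not preserved in this copy) and the Heads/Tails basis labels assumed; it computes the classical and quantum memories directly from the formulas above.

```python
import numpy as np

p, q = 0.6, 0.4   # illustrative biases only

# Stationary distribution over causal states A (choose coin A next) and B.
pi_A = q / (1 - p + q)
pi = np.array([pi_A, 1 - pi_A])

# Classical memory: Shannon entropy of the causal-state distribution.
C_mu = float(-(pi * np.log2(pi)).sum())

# Quantum memory: von Neumann entropy of the stationary mixture of signal states
# |eta_A> = sqrt(p)|H> + sqrt(1-p)|T>,  |eta_B> = sqrt(q)|H> + sqrt(1-q)|T>.
eta = np.array([[np.sqrt(p), np.sqrt(1 - p)],
                [np.sqrt(q), np.sqrt(1 - q)]])
rho_s = sum(w * np.outer(v, v) for w, v in zip(pi, eta))
evals = np.linalg.eigvalsh(rho_s)
evals = evals[evals > 1e-12]
Q_mu = float(-(evals * np.log2(evals)).sum())

print(f"C_mu = {C_mu:.4f} bits, Q_mu = {Q_mu:.4f} bits, ratio = {C_mu / Q_mu:.2f}")
```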

Figure 5 shows the classical and quantum memory costs to generate rare realizations: $C_\mu(\beta)$ and $Q_\mu(\beta)$ versus $\beta$ for different $\beta$-classes. Surprisingly, the two costs exhibit completely different behaviors. More interestingly, as the inset demonstrates, even though both $C_\mu(\beta)$ and $Q_\mu(\beta)$ vanish exponentially fast, in the large-$\beta$ limit the quantum memory $Q_\mu(\beta)$ goes to zero noticeably faster.

We define the quantum advantage of biased sampling as the ratio of classical to quantum memory:
$$\eta(\beta) = \frac{C_\mu(\beta)}{Q_\mu(\beta)} .$$

Figure 6 graphs the quantum advantage and shows how it divides into three distinct scaling regimes. First, for small $|\beta|$ (high temperature) the quantum algorithm exhibits a polynomial advantage. Second, for large positive $\beta$ (low temperature) the quantum algorithm samples the rare classes with an exponential advantage: the advantage grows exponentially as one increases $\beta$, at a rate that is a function of $p$ and $q$. Third, for large negative $\beta$ (the negative low-temperature regime) there is no quantum advantage. Since we are analyzing finite-state processes, this regime appears and is the analog of population inversion. And so, formally there are $\beta$-class events with negative temperature.

Such is the quantum advantage for the Perturbed Coins Process at the parameter values used in Fig. 6. The features exhibited—the different scaling regimes—are generic, though. Moreover, for Perturbed Coins Processes in the complementary parameter regime, the positive and negative low-temperature behaviors switch.

Figure 6: Quantum memory advantage $\eta(\beta)$ for generating the rare realizations of the Perturbed Coins Process when employing its q-machine instead of its (classical) ε-machine. Three different advantages occur: (i) near $\beta = 0$ the advantage is polynomial in $\beta$, (ii) at large positive $\beta$ there is an exponential advantage, and (iii) there is no advantage at large negative $\beta$.

VII.2 Spin System Quantum Advantage

Let us analyze the quantum advantage in a more familiar physics setting. Consider a general one-dimensional ferromagnetic next-nearest-neighbor Ising spin-½ chain [52, 53] defined by the Hamiltonian:

$$H = -J_1 \sum_{i} s_i s_{i+1} - J_2 \sum_{i} s_i s_{i+2} , \qquad (5)$$

in contact with a thermal bath at temperature $T$. The spin at site $i$ takes on values $s_i \in \{+1, -1\}$.

After thermalizing, a spin configuration at one instant of time may be thought of as having been generated left-to-right (or equivalently right-to-left). The probability distribution over these spatial-translation-invariant configurations defines a stationary stochastic process. Reference [54] showed that for any finite coupling and nonzero temperature $T$, this process has Markov order $R = 2$. More to the point, the ε-machine that generates this process has four causal states and those states are in one-to-one correspondence with the set of length-2 spin configurations.

Figure 7: ε-machine that generates the spin configurations occurring in the one-dimensional ferromagnetic next-nearest-neighbor Ising spin chain with the Hamiltonian in Eq. (5).

Figure 7 displays the parametrized ε-machine that generates this family of spin-configuration processes. To simulate the process, the generator need only remember the last two spins generated. This means the ε-machine has four states: $\uparrow\uparrow$, $\uparrow\downarrow$, $\downarrow\uparrow$, and $\downarrow\downarrow$. If the last two observed spins are $\uparrow\downarrow$, for example, then the current state is $\uparrow\downarrow$. We denote the probability of generating spin $s''$, given that the previous two spins were $s s'$, by $\Pr(s'' \mid s s')$. If the generator is in state $s s'$ and generates spin $s''$, then the generator state changes to $s' s''$.

To determine the ε-machine transition probabilities $\Pr(s'' \mid s s')$, we first compute the transfer matrix for the Hamiltonian of Eq. (5) at temperature $T$ and then extract the conditional probabilities, following Ref. [54] and Ref. [35]'s appendix.
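One standard way to do this extraction is to build the transfer matrix over spin pairs and normalize it with its Perron eigenvector, in the same spirit as step 4 of the β-map. The following sketch assumes the Hamiltonian of Eq. (5) with free couplings $J_1$, $J_2$, units where $k_B = 1$, and illustrative parameter values.

```python
import numpy as np
from itertools import product

def nnn_ising_conditionals(J1, J2, temperature):
    """Conditional probabilities Pr(s3 | s1 s2) for the next-nearest-neighbor Ising
    chain, extracted from the pair transfer matrix (k_B = 1 assumed)."""
    beta_T = 1.0 / temperature
    spins = (+1, -1)
    pairs = list(product(spins, spins))                 # generator states (s1, s2)
    index = {pair: i for i, pair in enumerate(pairs)}

    # Pair transfer matrix: V[(s1,s2),(s2,s3)] = exp(beta * (J1*s2*s3 + J2*s1*s3)).
    V = np.zeros((4, 4))
    for (s1, s2), i in index.items():
        for s3 in spins:
            V[i, index[(s2, s3)]] = np.exp(beta_T * (J1 * s2 * s3 + J2 * s1 * s3))

    vals, vecs = np.linalg.eig(V)
    k = np.argmax(np.real(vals))
    lam, r = np.real(vals[k]), np.real(vecs[:, k])
    r = r * np.sign(r[np.argmax(np.abs(r))])

    # Row-normalize with the Perron eigenvector: P[i, j] = V[i, j] r[j] / (lam r[i]).
    P = V * r[None, :] / (lam * r[:, None])
    return pairs, P

pairs, P = nnn_ising_conditionals(J1=1.0, J2=0.3, temperature=2.0)
for pair, row in zip(pairs, P):
    print(pair, np.round(row, 3))
```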

Figure 8: Classical generators of four important rare classes: (top-left) negative zero-temperature limit; (top-right) positive zero-temperature limit; (bottom-left) negative infinite-temperature limit; (bottom-right) positive infinite-temperature limit. Gray edges and states denote transitions and states that are rarely visited.
Figure 9: Quantum advantage for biased sampling of Ising spin configurations: $\eta(U)$ versus decay rate $U$ for biased sampling of equal-energy spin configurations. Vertical lines locate the $U$s corresponding to particular temperatures $T$. Note the extreme advantage indicated by the divergence in $\eta$ at the $U$ corresponding to $\beta \to 0$, the infinite-temperature limit.

What are the classical and quantum memory costs for biased sampling of the rare spin-configuration class with decay rate $U$, as defined in Eq. (3)? First, note that $U$ is not a configuration's actual energy density. If we assume the system is in thermal equilibrium and thus exhibits a Boltzmann distribution over configurations, then $U$ and the physical energy density $\varepsilon$ are related via:
$$U = \frac{1}{\ln 2} \left( \frac{\varepsilon}{k_B T} + \ln \zeta \right) ,$$
where:
$$\zeta = \lim_{\ell \to \infty} Z_\ell^{1/\ell}$$
is the per-spin partition function. This simply tells us that if a stochastic process describes thermalized configurations of a physical system with some given Hamiltonian, then every rare-event bubble in Fig. 3 can be labeled with $U$, with $\beta$, or with $\varepsilon$. Moreover, there is a one-to-one mapping between every such variable pair.

Figure 9 plots $\eta(U)$ versus $U$—the quantum advantage of generating rare configurations with decay rate $U$. To calculate it for a given process $\mathcal{P}$: first, we determine the process' classical generator, its ε-machine, using the method introduced in Ref. [33]. Second, for every $\beta$, using the β-map introduced in Sec. VI, we find the new classical generator of $\mathcal{P}_\beta$. Third, using the construction introduced in Sec. IV, we find its q-machine. Fourth, using the theorem of Sec. VI—Eq. (4)—we find the corresponding $U$ for the chosen $\beta$. Using these results gives $\eta(U) = C_\mu(\beta)/Q_\mu(\beta)$. By varying $\beta$ over $(-\infty, \infty) \setminus \{0\}$ we cover all the energy densities $U$. Practically, to calculate the curve in Fig. 9, we scanned $\beta$ over a finite range.
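A compact numerical sketch of this four-step procedure, tying together the hypothetical helper functions defined in the earlier sketches (`beta_map`, `statistical_complexity`, `stationary_distribution`, `q_machine_memory`, and `energy_of_beta`); it assumes the tilted generator remains unifilar with linearly independent signal states, as discussed above.

```python
import numpy as np
from itertools import product

def signal_states(T, R):
    """Rows are the signal-state amplitudes sqrt(Pr(w | state i)) over length-R words."""
    symbols = list(T)
    n = next(iter(T.values())).shape[0]
    words = list(product(symbols, repeat=R))
    eta = np.zeros((n, len(words)))
    for j, w in enumerate(words):
        M = np.eye(n)
        for x in w:
            M = M @ T[x]
        eta[:, j] = np.sqrt(M.sum(axis=1))     # Pr(w | state i) is the row sum
    return eta

def advantage_curve(T, R, betas):
    """For each beta: (U(beta), C_mu(beta) / Q_mu(beta)), using the earlier sketches."""
    points = []
    for b in betas:
        S, _ = beta_map(T, b)                              # step 2: tilted generator
        C = statistical_complexity(S)                      # classical memory
        pi = stationary_distribution(sum(S.values()))
        Q = q_machine_memory(signal_states(S, R), pi)      # step 3: quantum memory
        points.append((energy_of_beta(T, b), C / Q))       # step 4: U(beta)
    return points
```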

As pointed out earlier, $\beta = 1$ always corresponds to the process itself, and there one obtains its typical sequences. As one sees in Fig. 9, the quantum advantage at the corresponding $U = h_\mu$ is modest. This simply means that, though there is a quantum advantage when generating typical sequences, it is not that notable. However, the figure highlights four other interesting regimes.

First, there is the positive zero-temperature limit ($T \to 0^+$, $\beta \to \infty$), corresponding to the rare class with minimum energy density $U_{\min}$. From Eq. (5) it is easy to see that this rare bubble only has two configurations as members: all up-spins or all down-spins. Let us consider finite but large $\beta$, which corresponds to a rare class with a low energy density close to $U_{\min}$. Figure 8(top-left) shows a general ε-machine for this process. Low color intensity for both edges and states means that the process rarely visits them during generation. This means, in turn, that a typical realization consists of large blocks of all up-spins and all down-spins. These large blocks are joined by small segments.

Second, there is the negative zero-temperature limit ($T \to 0^-$, $\beta \to -\infty$), which corresponds to the rare class with maximum energy density $U_{\max}$. From Eq. (5) it is easy to see that this rare bubble only has one configuration as a member: a periodic repetition of spin down and spin up. Consider finite $\beta$ corresponding to a rare class with a high energy density close to $U_{\max}$. Figure 8(top-right) shows the general ε-machine for the associated process. The typical configuration consists of large blocks tiled with spin-up and spin-down pairs, which are connected by other short segments.

Third, there is the positive infinite-temperature limit ($T \to +\infty$, $\beta \to 0^+$). In this limit we expect to see completely random spin-up/spin-down configurations. Figure 8(bottom-right) shows the ε-machine for this class. The transition probabilities on its edges are $1/2 \pm \delta$, where $\delta$ is a small positive number. As one can see, even though each transition probability is close to one-half, the self-loops are slightly favored.

Fourth and finally, there is the negative infinite-temperature limit ($T \to -\infty$, $\beta \to 0^-$). The generator here, Fig. 8(bottom-left), is similar to that at positive infinite temperature, except that the signs on the edge labels are reversed. This means that the self-loops are slightly less favored.

Generating a rare bubble that corresponds to a negative temperature is sometimes called unphysical sampling, since there exists no physical (positive) temperature at which the system generates this rare class. As a result, the left part of Fig. 9 corresponds to physical sampling and the right part to unphysical sampling. That said, there is no impediment to “unphysical” sampling from a numerical standpoint. In addition, as we noted, negative temperatures correspond physically to population inversion, a well-known phenomenon.

Remarkably, the advantage diverges at the $U$ where $\beta \to 0$—both the positive and negative high-temperature limits. Moreover, the advantage diverges polynomially in both limits and, as a result, there is a polynomial-type advantage. For this specific example one does not find a region with exponential advantage.

VIII Conclusions

We introduced a new quantum algorithm for sampling the rare events of classical stochastic processes. The algorithm often confers a significant memory advantage when compared to the best known classical algorithm. We explored two example systems. In the first, a simple Markov process, we found that one gains either exponential or polynomial advantage. In the second, an Ising chain, we found a polynomial memory advantage for rare classes in both positive and negative high-temperature regimes.

Let us address an important point about the optimality of the classical and quantum algorithms. Consider the integer factorization problem. In this case Shor's algorithm scales polynomially [24], while the best classical algorithm currently known scales superpolynomially [55] with problem size. While neither algorithm has been proven optimal, many believe that the separation in scaling is real [56]. Similarly, proving optimality for a rare-event sampling algorithm is challenging in both classical and quantum settings. However, with minor restrictions, one can show that the current quantum algorithm is almost always more efficient than the classical one [29].

Acknowledgments

The authors thank Leonardo Duenas-Osorio for stimulating discussions on risk estimation in networked infrastructure. JPC thanks the Santa Fe Institute for its hospitality during visits as an External Faculty member. This material is based upon work supported by, or in part by, the John Templeton Foundation grant 52095, the Foundational Questions Institute grant FQXi-RFP-1609, and the U. S. Army Research Laboratory and the U. S. Army Research Office under contracts W911NF-13-1-0390 and W911NF-13-1-0340.

References

  • [1] A. R. Leach. Molecular modelling: Principles and applications. Pearson Education, Boston, Massachusetts, 2001.
  • [2] D. Frenkel and B. Smit. Understanding Molecular Simulation: From Algorithms to Applications. Academic Press, New York, second edition, 2007.
  • [3] H. D. Miller. A convexity property in the theory of random variables defined on a finite Markov chain. Ann. Math. Stat., 32(4):1260–1270, 1961.
  • [4] K. Young and J. P. Crutchfield. Fluctuation spectroscopy. Chaos, Solitons, and Fractals, 4:5 – 39, 1994.
  • [5] V. Lecomte, C. Appert-Rolland, and F. van Wijland. Chaotic properties of systems with Markov dynamics. Phys. Rev. Lett., 95(1):010601, 2005.
  • [6] V. Lecomte, C. Appert-Rolland, and F. Van Wijland. Thermodynamic formalism for systems with markov dynamics. J. Stat. Physics, 127(1):51–106, 2007.
  • [7] R. Chetrite and H. Touchette. Nonequilibrium microcanonical and canonical ensembles and their equivalence. Phys. Rev. Lett., 111(12):120601, 2013.
  • [8] S. R. S. Varadhan. Large deviations and applications. SIAM, Philadelphia, Pennsylvannia, 1984.
  • [9] J. P. Garrahan, R. L. Jack, V. Lecomte, E. Pitard, K. van Duijvendijk, and F. van Wijland. First-order dynamical phase transition in models of glasses: an approach based on ensembles of histories. J. Phys. A: Math. Theo., 42(7):075007, 2009.
  • [10] L. O. Hedges, R. L. Jack, J. P. Garrahan, and D. Chandler. Dynamic order-disorder in atomistic models of structural glass formers. Science, 323(5919):1309–1313, 2009.
  • [11] J. Van Campenhout and T. M. Cover. Maximum entropy and conditional probability. IEEE Trans. Info. Th., 27(4):483–489, 1981.
  • [12] I. Csiszár. Sanov property, generalized I-projection and a conditional limit theorem. Ann. Prob., 12(3):768–793, 1984.
  • [13] G. M. Torrie and J. P. Valleau. Nonphysical sampling distributions in Monte Carlo free-energy estimation: Umbrella sampling. J. Comp. Physics, 23(2):187–199, 1977.
  • [14] S. Kumar, J. M. Rosenberg, D. Bouzida, R. H. Swendsen, and P. A. Kollman. The weighted histogram analysis method for free energy calculations on biomolecules. I. The method. J. Comp. Chemistry, 13(8):1011–1021, 1992.
  • [15] F. Wang and D. P. Landau. Efficient, multiple-range random walk algorithm to calculate the density of states. Phys. Rev. Let., 86(10):2050, 2001.
  • [16] D. A. Levin, Y. Peres, and E. L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, Providence, Rhode Island, 2009.
  • [17] J. R. Norris. Markov Chains, volume 2. Cambridge University Press, Cambridge, United Kingdom, 1998.
  • [18] D. R. Upper. Theory and Algorithms for Hidden Markov Models and Generalized Hidden Markov Models. PhD thesis, University of California, Berkeley, 1997. Published by University Microfilms Intl, Ann Arbor, Michigan.
  • [19] L. R. Rabiner and B. H. Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, January, 1986.
  • [20] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77(2):257–286, 1989.
  • [21] J. P. Crutchfield and D. P. Feldman. Regularities unseen, randomness observed: Levels of entropy convergence. CHAOS, 13(1):25–54, 2003.
  • [22] J. P. Crutchfield. Between order and chaos. Nature Physics, 8(1):17–24, 2012.
  • [23] J. P. Crutchfield and K. Young. Inferring statistical complexity. Phys. Rev. Let., 63:105–108, 1989.
  • [24] P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Review, 41(2):303–332, 1999.
  • [25] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 212–219, 1996.
  • [26] D. S. Abrams and S. Lloyd. Quantum algorithm providing exponential speed increase for finding eigenvalues and eigenvectors. Phys. Rev. Let., 83(24):5162, 1999.
  • [27] A. W. Harrow, A. Hassidim, and S. Lloyd. Quantum algorithm for linear systems of equations. Phys. Rev. Let., 103(15):150502, 2009.
  • [28] M. Gu, K. Wiesner, E. Rieper, and V. Vedral. Quantum mechanics can reduce the complexity of classical models. Nature Comm., 3:762, 2012.
  • [29] J. R. Mahoney, C. Aghamohammadi, and J. P. Crutchfield. Occam’s quantum strop: Synchronizing and compressing classical cryptic processes via a quantum channel. Sci. Reports, 6, 2016.
  • [30] P. M. Riechers, J. R. Mahoney, C. Aghamohammadi, and J. P. Crutchfield. Minimized state complexity of quantum-encoded cryptic processes. Phys. Rev. A, 93(5):052317, 2016.
  • [31] M. S. Palsson, M. Gu, J. Ho, H. M. Wiseman, and G. J. Pryde. Experimental quantum processing enhancement in modelling stochastic processes. arXiv:1602.05683, 2016.
  • [32] N. F. Travers. Exponential bounds for convergence of entropy rate approximations in hidden markov models satisfying a path-mergeability condition. Stochastic Proc. Appln., 124(12):4149–4170, 2014.
  • [33] C. R. Shalizi and J. P. Crutchfield. Computational mechanics: Pattern and prediction, structure and simplicity. J. Stat. Phys., 104:817–879, 2001.
  • [34] G. Han and B. Marcus. Analyticity of entropy rate of hidden Markov chains. IEEE Trans. Info. Th., 52(12):5251–5266, 2006.
  • [35] C. Aghamohammadi, J. R. Mahoney, and J. P. Crutchfield. Extreme quantum advantage when simulating strongly coupled classical systems. Sci. Reports, 7(6735):1–11, 2017.
  • [36] A. J. P. Garner, Q. Liu, J. Thompson, V. Vedral, and M. Gu. Unbounded memory advantage in stochastic simulation using quantum mechanics. arXiv:1609.04408, 2016.
  • [37] C. Aghamohammadi, J. R. Mahoney, and J. P. Crutchfield. The ambiguity of simplicity in quantum and classical simulation. Phys. Lett. A, 381(14):1223–1227, 2017.
  • [38] J. Preskill. Lecture notes for physics 229: Quantum information and computation, volume 16. California Institute of Technology, Pasadena, California, 1998.
  • [39] R. Durrett. Probability: theory and examples. Cambridge University Press, Cambridge, United Kingdom, 2010.
  • [40] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, second edition, 2006.
  • [41] S. Kullback. Information Theory and Statistics. Dover, New York, 1968.
  • [42] R. W. Yeung. Information Theory and Network Coding. Springer, New York, 2008.
  • [43] C. E. Shannon. A mathematical theory of communication. Bell Sys. Tech. J., 27:379–423, 623–656, 1948.
  • [44] B. McMillan. The basic theorems of information theory. Ann. Math. Stat., 24:196–219, 1953.
  • [45] L. Breiman. The individual ergodic theorem of information theory. Ann. Math. Statistics, 28(3):809–811, 1957.
  • [46] L. Boltzmann. Lectures on gas theory. University of California Press, Berkeley, California, 1964.
  • [47] H. Touchette. The large deviation approach to statistical mechanics. Physics Reports, 478:1–69, 2009.
  • [48] C. Aghamohammadi and J. P. Crutchfield. Minimum memory for generating rare events. Phys. Rev. E, 95(3):032101, 2017.
  • [49] R. L. Jack and P. Sollich. Large deviations and ensembles of trajectories in stochastic models. Prog. Theo. Physics Suppl., 184:304–317, 2010.
  • [50] J. P. Garrahan and I. Lesanovsky. Thermodynamics of quantum jump trajectories. Phys. Rev. Lett., 104(16):160601, 2010.
  • [51] R. Chetrite and H. Touchette. Nonequilibrium Markov processes conditioned on large deviations. In Annales Henri Poincaré, volume 16, pages 2005–2057. Springer, 2015.
  • [52] R. J. Baxter. Exactly solved models in statistical mechanics. Academic Press, New York, New York, 2007.
  • [53] A. Aghamohammadi, C. Aghamohammadi, and M. Khorrami. Externally driven one-dimensional Ising model. J. Stat. Mech., 2012(02):P02004, 2012.
  • [54] D. P. Feldman and J. P. Crutchfield. Discovering non-critical organization: Statistical mechanical, information theoretic, and computational views of patterns in simple one-dimensional spin systems. Santa Fe Institute, 1998. Santa Fe Institute Paper 98-04-026.
  • [55] C. Pomerance. Fast, rigorous factorization and discrete logarithm algorithms. In Discrete Algo. Complexity, Proceedings of the Japan-US Joint Seminar, June 4-6, 1986, Kyoto, Japan, pages 119–143. Academic Press, New York, New York, 1987.
  • [56] A. Bouland. Establishing quantum advantage. XRDS: Crossroads, The ACM Magazine for Students, 23(1):40–44, 2016.