Detecting quantum speedup by quantum walk with convolutional neural networks

Quantum walks are at the heart of modern quantum technologies. They make it possible to deal with quantum transport phenomena and are an advanced tool for constructing novel quantum algorithms. Quantum walks on graphs are fundamentally different from their classical random walk analogs; in particular, they spread faster than classical walks on certain graphs, enabling quantum algorithmic applications and quantum-enhanced energy transfer in these cases. However, little is known about possible speedups on arbitrary graphs without explicit symmetries. For such graphs one would need to simulate both the classical and the quantum walk dynamics to check whether a speedup occurs, which can take a long computational time. Here we present a new approach to the quantum speedup problem, based on a machine learning algorithm that detects the quantum advantage by just looking at a graph. The convolutional neural network, which we designed specifically to learn from graphs, observes simulated examples and learns complex features of graphs that lead to a quantum speedup, making it possible to identify graphs that exhibit a quantum speedup without performing any quantum walk or random walk simulations. Our findings pave the way to an automated design of novel large-scale quantum circuits utilizing quantum walk based algorithms, and to simulating high-efficiency energy transfer in biophotonics and material science.


Results

Quantum and classical random walk processes have different dynamics, which leads to a difference in how fast particles traverse graphs from an initial vertex to a target vertex. This difference depends not only on the nature of the particles, but also on the graph on which the particles walk. Importantly, the graph is specified not only by the way its vertices are connected, but also by the positions of the initial and the target vertices. It is known, e.g., that quantum particles on line graphs reach distant target vertices quadratically faster ambainis2001one . But if the initial and the target vertices are not far from each other, it is not easy to determine which particle is faster. To give an instructive example, let us consider line graphs, as random walks on lines are among the simplest and most extensively studied stochastic processes rajeev1995randomized . In the case of three vertices, there are three inequivalent graphs, shown in the first row of Fig. 1(a). Complementary to these graphs, two additional rows of graphs are depicted: their modifications corresponding to the physical implementation of quantum and classical walks, respectively. In the classical case, the target vertex is connected to the neighboring vertices by directed edges. In the quantum case, a sink vertex connected to the target vertex is used to measure the quantum particle, while the rest of the graph is unchanged. The measurement process hence changes the dynamics of the quantum system.

Figure 1: Quantum and classical walks on line graphs. (a) Inequivalent line graphs with three vertices are depicted in three different colors (blue, green, and gray). The graphs in the second and third rows are modifications of the graphs in the first row that take into account different aspects of the physical implementation of quantum and classical walks, respectively. (b) The quantum (solid) and the classical (dashed) walk dynamics on the three different line graphs. The black line marks the probability threshold $p_{th}$ at which a particle is considered to be detected.

Figure 2: The machine learning approach that is used for detecting the quantum speedup. (a) The process of training CQCNN. (b) The process of testing CQCNN. (c) A simplified scheme of the CQCNN architecture.

Figure 1(b) presents the results of quantum (solid lines) and classical (dashed lines) random walk simulations for all three graphs (blue, green, and gray). We can see that in two cases the classical walker is faster than the quantum one (green and gray), and the quantum particle is faster in one case (blue). From this toy example it is clear that the quantum transport speedup is present only when the initial and the target vertices are on opposite sides of the graph, whereas the classical particle is faster if these two vertices are directly connected to each other.

We next describe how the neural network, CQCNN, can learn this for larger graphs, and show the results of the learning processes. The learning setup that we use in this paper is depicted in Fig. 2. Fig. 2(a) shows schematically how CQCNN is trained using examples of graphs. At each step, CQCNN takes a graph as input in the form of an adjacency matrix and outputs a prediction about the class this graph belongs to (quantum or classical). Given the correct label, the loss value is computed. Fig. 2(b) depicts the testing procedure. The difference from the training process is that CQCNN does not receive any feedback on its predictions; during testing, the network is not modified. The neural network architecture is shown in Fig. 2(c). CQCNN has a standard layout with convolutional and fully connected layers, and two output neurons that specify the two possible output classes.
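The protocol in Fig. 2(a)-(b) can be sketched in a few lines of Python. This is not the authors' code: we use PyTorch with a trivial linear model as a stand-in for CQCNN, only to make the train/test distinction concrete (training steps receive the correct label and update the weights; test steps record predictions without modifying the network).

```python
import torch
import torch.nn as nn

# Stand-in for CQCNN: any module mapping an adjacency matrix to two outputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(5 * 5, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(adjacency: torch.Tensor, label: torch.Tensor) -> float:
    """Fig. 2(a): predict, compare with the correct label, update weights."""
    model.train()
    optimizer.zero_grad()
    logits = model(adjacency)      # two output neurons: classical / quantum
    loss = loss_fn(logits, label)  # feedback on the prediction
    loss.backward()
    optimizer.step()
    return loss.item()

def test_step(adjacency: torch.Tensor) -> int:
    """Fig. 2(b): no feedback, the network is not modified."""
    model.eval()
    with torch.no_grad():
        return int(model(adjacency).argmax(dim=1))
```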

Detecting quantum speedup for line graphs

We apply the described machine learning methodology to different sets of graphs. In order to understand how our approach works in a systematic way, we first analyze the neural network performance on line graphs. We take the simplest design of CQCNN in Fig. 2 and apply it to line graphs of several sizes. CQCNN was trained with a single batch of examples per epoch. The results of these simulations are shown in Fig. 3. Eight lines of four different colors in Fig. 3(a) demonstrate the results of training the neural network on line graphs; each color corresponds to a specific graph size. For the simulations we used datasets with all possible line graph labelings: one part is used to train CQCNN (dashed lines), and the rest is used to test (solid lines) its generalization capabilities. The performance of CQCNN on the training graphs is measured by the cross-entropy loss function. The loss on a test example is defined relative to the correct class $c$ (classical or quantum, $c = 0$ or $c = 1$) of this example:

$$L = -\frac{1}{f_c}\,\ln\frac{e^{z_c}}{e^{z_0} + e^{z_1}}, \qquad (1)$$

where $f_c$ is the total fraction of examples from class $c$ in the dataset, and $z_0$, $z_1$ are the values of the output neurons. In Fig. 3(a) one can see that CQCNN learns to represent the training graphs, as the loss defined by Eq. (1) goes down (dashed curves). Most importantly, CQCNN constructs a function that generalizes from seen graphs to unseen graphs, as the classification accuracy (the fraction of correct predictions) goes up (solid curves).
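In code, the loss of Eq. (1) for a single example reads as follows (a NumPy sketch; the softmax conversion of the output-neuron values into class probabilities and all names are ours):

```python
import numpy as np

def example_loss(z: np.ndarray, c: int, f: np.ndarray) -> float:
    """Eq. (1): class-weighted cross entropy for one example.

    z -- values of the two output neurons (z_0, z_1),
    c -- correct class (0 = classical, 1 = quantum),
    f -- fractions of the two classes in the dataset."""
    p = np.exp(z - z.max())   # softmax, shifted for numerical stability
    p /= p.sum()
    return -np.log(p[c]) / f[c]

# A confident, correct "quantum" prediction on a balanced dataset:
print(example_loss(np.array([-2.0, 2.0]), c=1, f=np.array([0.5, 0.5])))
```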

Figure 3: (a) Learning performance of CQCNN. The dataset consists of line graphs of several sizes with the corresponding classical and quantum labels. The results are averaged over independent CQCNNs. (b) The average values of CQCNN weights. Blue and red bars correspond to weights that connect feature maps to the “quantum” class and the “classical” class, respectively. The results are averaged over independent CQCNNs; the mean squared deviation is shown as a vertical line on each bar. The zeroth component of the feature vector is the bias. The first feature for each vertex is the number of edges this vertex has. The second feature is the total number of neighboring edges of all edges leading to the vertex. The third feature is one if the vertex is connected to the initial vertex by an edge, and zero otherwise. The fourth feature does the same relative to the target vertex.

Our results in Fig. 3(a) demonstrate that it is possible for CQCNN to learn a function that maps graphs to their quantum walk properties. In order to understand the predictive capacity of CQCNN, we analyze the weights of the fully connected layer of the simple CQCNN employed for this classification problem. These weights are visualized in Fig. 3(b) for different numbers of vertices. They correspond to the feature vector, which is divided into parts, each corresponding to a specific vertex of the graph (the components of the feature vector are zero for vertices that are not present in smaller graphs). By looking at the weights of CQCNN, we observe that the designed neural network learned several properties of the quantum speedup on these line graphs. First, we observe that the contributions to the quantum (blue) and classical (red) classes are symmetric: whatever is a positive indication of the quantum class is a negative indication of the classical class. Second, the weights are different for different vertices, and this difference explains the classification outcome, as we describe next. A graph shows no quantum speedup if the initial vertex is connected to the target vertex (the third feature of the target vertex and, equivalently, the fourth feature of the initial vertex). A speedup is also discouraged if the target vertex is well connected to the rest of the graph (the first and second features of the target vertex). And although the weights of the remaining features do not strongly define their role, the more connected the other vertices are, the better for the quantum speedup.

Figure 4: (a) Learning performance of CQCNN. The dataset consists of random graphs of several sizes with the corresponding classical and quantum labels. CQCNN was trained over several epochs with mini-batches of examples, and was tested on random graphs of each size. (b)-(c) Examples of two graphs from the test set that were correctly classified by the neural network. On graph (b) the classical particle is faster, whereas on graph (c) the quantum particle is faster. The initial and the target vertices are marked in yellow and red, respectively.

The landscape of weights changes as the graph size grows (increasing $n$ in Fig. 3(b)), but not drastically. The described correlations hold for all studied graph sizes. In addition to this consistency, we see that the deviation of the weights from their average is quite small: all CQCNNs learned very similar weights. By looking at the vertices other than the initial and the target vertices, we observe that their weights are almost identical: all these vertices contribute identically to the classification. Indeed, as it turns out, the dynamics of the particles is invariant under relabeling of the vertices, apart from the initial and the target vertices. Hence CQCNN autonomously realized that many graph examples are isomorphic.

Learning all these graph properties helps the network to correctly classify graphs of the same size that were not seen previously. CQCNN can go a step further and apply the learned data representation to graphs of larger sizes. This can be seen in Fig. 3(c), where the training is done on smaller line graphs but the testing is done on line graphs with more vertices. The classification accuracy on the larger graph sizes is significantly better than a random guess. Note that the generalization performance is not perfect, as we observed that for different graph sizes there are always new cases that cannot be derived from the smaller graphs.

Detecting quantum speedup for random graphs

CQCNN was shown to be able to classify line graphs. Next, we estimate how well the presented methodology works on other graphs. In general, the more symmetries a graph has, the better we expect CQCNN's performance to be, as there are more ways to learn graph properties from examples. For this reason, random graphs should be among the most challenging sets for our method. Especially for random graphs, we do not expect training examples to generalize well to test examples, as the two sets can be largely independent. Even given enough training examples, we expect there will always be graphs that do not share common properties with any other graph.

We simulated CQCNN's learning process for random graphs, each sampled uniformly from the set of all possible graphs with a given number of vertices and edges, with the number of edges chosen uniformly at random. The learning performance results are shown in Fig. 4. In our simulations we observe that the loss after training is close to zero for all these random graphs. In Fig. 4(a) we see that both recall and precision (recall quantifies the fraction of examples of a particular class that are predicted correctly, whereas precision is the fraction of predictions of a particular class that turn out to be correct) are well above the random-guess baseline for both the “classical” and the “quantum” parts of the set. Overall, we see that our method makes it possible to classify random graphs correctly much better than a random guess (a random guess identifies the “classical” or “quantum” class correctly in half of the cases) without performing any quantum walk dynamics simulations. Examples of correctly classified graphs are shown in Fig. 4(b)-(c).
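Recall and precision, as defined above, can be computed with a few lines of NumPy (names are ours):

```python
import numpy as np

def recall_precision(y_true: np.ndarray, y_pred: np.ndarray, cls: int):
    """Recall: fraction of class-`cls` examples that are predicted correctly.
    Precision: fraction of `cls` predictions that are actually correct."""
    true_positives = np.sum((y_true == cls) & (y_pred == cls))
    recall = true_positives / np.sum(y_true == cls)
    precision = true_positives / np.sum(y_pred == cls)
    return recall, precision

y_true = np.array([0, 0, 1, 1, 1])   # 0 = classical, 1 = quantum
y_pred = np.array([0, 1, 1, 1, 0])
print(recall_precision(y_true, y_pred, cls=1))   # (0.666..., 0.666...)
```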

Discussion

Recently, the speedup problem has been discussed extensively in the framework of quantum computation aimed at accelerating the solution of familiar optimization problems using quantum hardware Kechedzhi2016 ; Albash2018 . However, detecting a quantum speedup in such hardware is a complex problem that depends on many physical parameters, including the size and topology of the system Alodjants2017 ; Lewenstein2017 ; Hamze2014 ; Smolin2014 . In this paper we proposed a new machine learning method to detect a speedup of quantum transport. This method is based on training a discriminative classifier, namely a specially designed convolutional neural network (CQCNN). We generated the training examples, each consisting of an adjacency matrix and a corresponding label (“classical” or “quantum”), by simulating the random walk dynamics of classical and quantum particles. The generated examples were used to train CQCNN with a stochastic gradient descent algorithm.

By training CQCNNs we demonstrated in Fig. 3 that the neural network is able to learn to classify the quantum speedup and to match the results obtained by our simulations. First, CQCNN learns to approximate the given examples very well by representing the quantum and classical properties of graphs in its weights: CQCNN compresses a large number of adjacency matrices (the training set consisted of a substantial fraction of the total number of line graphs, see Fig. 3(b)) into a comparatively small number of real parameters. Second, CQCNN automatically learns which graph features are important for the quantum speedup. We identified that for line graphs these correlations correspond to well-explainable graph properties. Additionally, the neural network learns that many graphs are isomorphic, with no indication of over-fitting on adjacency matrix features. Third, we demonstrated the good generalization capacity of the constructed CNN. The neural network correctly classified not only previously unseen graphs of the same size, but also graphs of sizes that were never used to train the network. For line graphs, the average accuracy was high on graphs of the sizes seen in training and remained well above a random guess for larger graph sizes. We believe that this performance is strong, as test examples do not necessarily share any structural similarities with training examples.

Finally, the presented approach was applied to random graphs. Although the space of possible labeled graphs is enormous (see Ref. slone1964online ), we showed that with a comparatively small number of randomly generated training examples it is possible to improve significantly over a random guess.

The presented machine learning methodology can be used to find novel large-scale graph topologies and circuits that exhibit maximal quantum speedup. At the same time, our results might be specifically important in material science and biophotonics for a deeper understanding and design of novel materials with unique quantum transport properties.

Methods

In this section we give additional details on the machine learning methodology and on the simulation methods.

Quantum walks on graphs

In the following, we describe the quantum walk dynamics on graphs and give more details on the simulations performed in this paper.

We consider adjacency matrices $A$ that describe undirected connected graphs with $n$ vertices, on which classical and quantum walks are simulated. A graph is specified by a set of vertices $V$ and a set of edges $E$; each edge is described by a pair of vertices $(i, j)$. As the graphs that we consider are undirected, all adjacency matrices are symmetric: $A_{ij} = A_{ji}$. Without loss of generality, we label the vertices $1$ and $n$ as the “initial” and the “target” vertices. Given an adjacency matrix $A$, we simulate classical and quantum continuous-time walks up to a time that depends on the probability of detecting the particle. The results of the simulations are the classical and quantum time dependencies of the probability of detecting a particle in the target vertex $n$. From these two dynamics we obtain the time at which each particle is detected in $n$ with threshold probability $p_{th}$. Given the two time values, we can detect whether there is a quantum advantage in using a quantum particle for reaching $n$ on the given graph.

The classical continuous-time random walk (CTRW) is simulated by solving the following differential equation

$$\frac{d\vec{p}(t)}{dt} = \left(T - \mathbb{1}\right)\vec{p}(t), \qquad (2)$$

where $\vec{p}(t)$ is the vector of probabilities of detecting the classical particle in the vertices of the graph, and $\mathbb{1}$ is the identity matrix of size $n \times n$. The transition matrix $T$ contains the probabilities for a particle to jump between vertices. As we would like to “catch” the particle in the target vertex $n$, the edges that lead to $n$ are made directed. This modification is implemented by introducing a new adjacency matrix $A^{(c)}$, which is equal to $A$ apart from the $n$-th column: $A^{(c)}_{in} = 0$ for $i \neq n$, and $A^{(c)}_{nn} = 1$. The transition matrix $T$ is obtained from $A^{(c)}$ by dividing all entries in the $j$-th column of $A^{(c)}$ by the in-degree of the vertex $j$, for all $j$. This modification of $A$ effectively makes the underlying graph directed, such that a classical particle cannot escape the target vertex once it is there.

The solution of the differential equation in Eq. (2) is

$$\vec{p}(t) = e^{(T - \mathbb{1})t}\,\vec{p}(0), \qquad (3)$$

where $\vec{p}(0)$ is the probability vector corresponding to a classical particle initially located in vertex $1$. The dynamics in Eq. (3) is known as the node-centric CTRW aldous2002reversible ; masuda2017random . Node-centric CTRWs have the property that a particle leaves any vertex at the same rate. In the considered case the trajectories are statistically the same as those of the discrete-time random walk (DTRW), hence the dynamics of Eq. (2) can be viewed as a “continuization” of the DTRW dynamics.
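A minimal NumPy/SciPy sketch of the CTRW of Eqs. (2)-(3), under the assumptions spelled out above (column-stochastic $T$, target column of the adjacency matrix replaced by a self-loop); function and variable names are ours:

```python
import numpy as np
from scipy.linalg import expm

def ctrw_detection(adj: np.ndarray, times: np.ndarray,
                   initial: int = 0, target: int = -1) -> np.ndarray:
    """Detection probability p_c(t) at the target vertex, Eqs. (2)-(3)."""
    n = adj.shape[0]
    target = target % n
    a_c = adj.astype(float).copy()
    a_c[:, target] = 0.0
    a_c[target, target] = 1.0   # absorbing target: the particle cannot escape
    t_mat = a_c / a_c.sum(axis=0, keepdims=True)   # column-stochastic T
    p0 = np.zeros(n)
    p0[initial] = 1.0          # classical particle starts in the initial vertex
    generator = t_mat - np.eye(n)                  # right-hand side of Eq. (2)
    return np.array([(expm(generator * t) @ p0)[target] for t in times])
```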

The continuous-time quantum walk (CTQW) dynamics is simulated by solving the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation

$$\frac{d\rho(t)}{dt} = -\frac{i}{\hbar}\left[H, \rho(t)\right] + \gamma\left(L\rho(t)L^{\dagger} - \frac{1}{2}\left\{L^{\dagger}L, \rho(t)\right\}\right), \qquad (4)$$

with the Hamiltonian $H = \hbar A^{(q)}$. Here $A^{(q)}$ is an adjacency matrix of size $(n+1) \times (n+1)$ that is equal to $A$ apart from an added $(n+1)$-th row and column of zeros. The new matrix corresponds to a graph with an additional “sink” vertex $n+1$. This sink vertex serves as an auxiliary vertex where the quantum particle is kept captured once it ends up there. The only way the particle can end up there is by decaying from the target vertex $n$; this process is mathematically taken care of by the operator $L = |n+1\rangle\langle n|$. Physically, $L$ introduces incoherence into the unitary CTQW dynamics described by $H$ by moving the quantum particle from $n$ to $n+1$ with rate $\gamma$. In general, the rate $\gamma$ dramatically influences the CTQW dynamics: if $\gamma = 0$, the dynamics is coherent and we will never observe the particle in the sink; if the value of $\gamma$ is very large, we might also never observe the particle there (an effect known as the quantum Zeno effect: the target vertex is measured too frequently, so the particle never appears there). Because there is no universally best value of $\gamma$ for all graphs, we use one fixed value of $\gamma$ throughout the paper.

We solve the GKSL equation numerically with the particle initially located in vertex $1$, and observe the dynamics of the sink population $p_q(t) = \rho_{n+1,n+1}(t)$, which is the probability of having detected the particle by time $t$. The function $p_q(t)$ is a positive and increasing function of time. Note that, in contrast to the CTRW case, in the CTQW the probability of detecting the particle does not necessarily go to one with time.
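A plain NumPy/SciPy sketch of Eq. (4) (with $\hbar = 1$; names are ours): the density matrix is propagated by integrating the GKSL right-hand side directly, and the sink population is read off the last diagonal element.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ctqw_detection(adj: np.ndarray, times: np.ndarray,
                   initial: int = 0, target: int = -1,
                   gamma: float = 1.0) -> np.ndarray:
    """Sink population p_q(t) from the GKSL equation (4), with hbar = 1."""
    n = adj.shape[0]
    target = target % n
    h = np.zeros((n + 1, n + 1))
    h[:n, :n] = adj             # H = A_q: extra all-zero sink row and column
    l_op = np.zeros((n + 1, n + 1))
    l_op[n, target] = 1.0       # L = |sink><target|, decay at rate gamma
    ldl = l_op.T @ l_op

    def gksl_rhs(_t: float, rho_flat: np.ndarray) -> np.ndarray:
        rho = rho_flat.reshape(n + 1, n + 1)
        drho = -1j * (h @ rho - rho @ h)
        drho += gamma * (l_op @ rho @ l_op.T - 0.5 * (ldl @ rho + rho @ ldl))
        return drho.ravel()

    rho0 = np.zeros((n + 1, n + 1), dtype=complex)
    rho0[initial, initial] = 1.0  # quantum particle starts in the initial vertex
    sol = solve_ivp(gksl_rhs, (times[0], times[-1]), rho0.ravel(), t_eval=times)
    return sol.y.reshape(n + 1, n + 1, -1)[n, n].real
```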

We then compare $p_q(t)$ and $p_c(t)$ against the threshold probability $p_{th}$. The time at which $p_q(t) = p_{th}$ or $p_c(t) = p_{th}$ is called the hitting time of the quantum or the classical particle, respectively.
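Putting the two simulations together gives the labeling rule used to generate examples. The sketch below reuses ctrw_detection and ctqw_detection from the two sketches above; the threshold p_th and the time grid are free parameters here, since their values are not fixed by this description.

```python
import numpy as np

def hitting_time(p: np.ndarray, times: np.ndarray, p_th: float) -> float:
    """First time at which the detection probability reaches the threshold."""
    above = np.nonzero(p >= p_th)[0]
    return times[above[0]] if above.size else np.inf

def label_graph(adj: np.ndarray, p_th: float, t_max: float = 50.0) -> str:
    """'quantum' if the quantum hitting time is smaller, else 'classical'."""
    times = np.linspace(0.0, t_max, 2001)
    t_q = hitting_time(ctqw_detection(adj, times), times, p_th)
    t_c = hitting_time(ctrw_detection(adj, times), times, p_th)
    return "quantum" if t_q < t_c else "classical"

# The blue three-vertex line graph of Fig. 1 (initial and target on opposite
# ends); p_th = 0.3 is an illustrative threshold, not the paper's value.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(label_graph(path, p_th=0.3))
```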

Generation of training and test examples

Here we explain how the instances for training and testing the convolutional neural network were generated, using line graphs as an example. Given a fixed number of vertices $n$, one can construct $n!$ line graphs with different labelings of the vertices. One-half of these graphs have the same adjacency matrices as the other half, because they are each other's mirror images.

In the simplest example that we study here, line graphs with three vertices, only the following inequivalent graphs are possible: the graphs with the edge sets $\{(1,2),(2,3)\}$, $\{(1,3),(2,3)\}$, and $\{(1,2),(1,3)\}$. These three graphs are shown in Fig. 1, together with the graphs that correspond to their modifications for the CTQW and the CTRW, respectively. The results of the CTQW and the CTRW simulations are the detection probability functions $p_q(t)$ and $p_c(t)$, shown as solid and dashed curves in Fig. 1. From these simulations one can see that for the graph (blue) with edges $\{(1,2),(2,3)\}$ a quantum particle can be detected with probability $p_{th}$ faster than a classical particle. For the two other graphs (green and gray), in which the initial and the target vertices are directly connected, a classical particle reaches the “target” vertex faster.
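The enumeration of labeled line graphs can be sketched as follows; mirror-image labelings produce identical adjacency matrices, which is why only half of the n! labelings are distinct (the sketch confirms the three inequivalent graphs for n = 3):

```python
import numpy as np
from itertools import permutations

def line_graph_labelings(n: int) -> list:
    """All distinct adjacency matrices of a labeled path on n vertices."""
    seen, graphs = set(), []
    for order in permutations(range(n)):
        adj = np.zeros((n, n), dtype=int)
        for a, b in zip(order, order[1:]):   # consecutive vertices on the path
            adj[a, b] = adj[b, a] = 1
        key = adj.tobytes()
        if key not in seen:                  # mirror images coincide here
            seen.add(key)
            graphs.append(adj)
    return graphs

print(len(line_graph_labelings(3)))   # 3 inequivalent labelings
print(len(line_graph_labelings(4)))   # 4!/2 = 12
```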

Convolutional neural network architecture

In this section we describe in detail how the convolutional neural network used in this paper is constructed.

We use a specifically designed convolutional neural network, CQCNN, to learn from different graphs. The architecture of this neural network is shown in Fig. 2(c). CQCNN consists of a two-dimensional input layer that takes one graph represented by an adjacency matrix $A$. This layer is connected to several convolutional layers, the number of which depends on the number of vertices of the input graph. The first convolutional layer consists of six filters (or feature detectors) that define three different ways of processing the input graph. These three ways are marked by different colors (green, red, blue) in Fig. 2(c). The weights and types of the filters determine which specific features are detected. The first type of filter detects how well the initial vertex is connected to the rest of the graph. The second type of convolution detects the same, but for the target vertex. The third filter type looks at connectivities within the graph and detects how well each vertex is connected to the other vertices. These three filter types are applied in several layers, together with identity filters that propagate the extracted features further. These layers are followed by a filter that deletes the symmetric parts of all matrices. This is done to eliminate redundant information, as all matrices are still symmetric after being processed by these fixed filters. At the next layer we apply filters of fixed size with variable parameters in order to find relations between different edges. The last layer of filters summarizes all the information about the edges in the vertex descriptions, thereby decreasing the number of neuron values to a polynomially smaller number in the next layer. The extracted features are then flattened and connected to two fully connected layers of neurons. Neurons in the first fully connected layer have a rectified linear unit (ReLU) activation function, which helps to construct a nonlinear function, and the last layer maps the learned features to the “classical” or “quantum” label (two output neurons in Fig. 2(c)).

CQCNN makes a choice between the classical and quantum classes based on the values of the two output neurons $z_0$ and $z_1$. The predicted class is defined as the index of the neuron with the largest output value:

$$\text{class} = \underset{i \in \{0, 1\}}{\arg\max}\, z_i. \qquad (5)$$

The network learns by a stochastic gradient descent algorithm that minimizes the cross-entropy loss function of Eq. (1).
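As a structural illustration only, here is a deliberately simplified PyTorch stand-in for CQCNN: the paper's fixed, graph-specific filters (ETE, ETV, symmetric-part removal) are replaced by generic learned convolutions, so this sketch reproduces the layout of Fig. 2(c) (convolutions, two fully connected layers, two output neurons, argmax of Eq. (5)), not the actual CQCNN filters.

```python
import torch
import torch.nn as nn

class SimplifiedCQCNN(nn.Module):
    """Generic-convolution stand-in for CQCNN: adjacency matrix in,
    two output neurons ("classical", "quantum") out."""
    def __init__(self, n_vertices: int, channels: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classify = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * n_vertices ** 2, 32), nn.ReLU(),  # FC + ReLU
            nn.Linear(32, 2),                    # two output neurons z_0, z_1
        )

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        return self.classify(self.features(adj.unsqueeze(1)))

model = SimplifiedCQCNN(n_vertices=5)
upper = torch.triu(torch.randint(0, 2, (5, 5)), diagonal=1)
adj = (upper + upper.T).float().unsqueeze(0)     # random symmetric test input
print(model(adj).argmax(dim=1))                  # Eq. (5): predicted class
```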

The filters that we constructed in the described neural network architecture are essential to the success of learning. First, the edge-to-edge (ETE) filter allows the network to see how many neighboring edges each edge has. The process of obtaining a feature map from an input “image” using the edge-to-edge filter is shown in Fig. 5(a). Given an input matrix $A$, the ETE filter outputs the matrix $E$ with components:

$$E_{ij} = A_{ij}\left(\sum_{k} A_{ik} + \sum_{k} A_{kj} - 2\right). \qquad (6)$$
Figure 5: (a) The working principle of the edge-to-edge filter. An example of processing the adjacency matrix of a line graph with four vertices is shown. The feature map shows that there are four edges with one neighboring edge, and two edges with two neighboring edges. (b) The working principle of the edge-to-vertex filter. An example of processing the adjacency matrix (already without its symmetric part) of a line graph with four vertices is shown. The feature map shows that there are two vertices with one neighboring edge, and two vertices with two neighboring edges.

The second important filter is the edge-to-vertex (ETV) filter. This filter summarizes the information about the edges in the vertices. The filtering procedure takes an input matrix $B$ and outputs a vector $V$ with components:

$$V_{i} = \sum_{j}\left(B_{ij} + B_{ji}\right). \qquad (7)$$

The working principle of this filter is visualized in Fig. 5(b).
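Both filters are simple enough to implement directly. The following NumPy sketch, with our naming, reproduces the counts quoted in the caption of Fig. 5 for a four-vertex line graph:

```python
import numpy as np

def ete_filter(a: np.ndarray) -> np.ndarray:
    """Edge-to-edge filter, Eq. (6): entry (i, j) counts the neighboring
    edges of the edge (i, j), and is zero where there is no edge."""
    row_deg = a.sum(axis=1, keepdims=True)   # degree of vertex i
    col_deg = a.sum(axis=0, keepdims=True)   # degree of vertex j
    return a * (row_deg + col_deg - 2)

def etv_filter(b: np.ndarray) -> np.ndarray:
    """Edge-to-vertex filter, Eq. (7): sums edge information into vertices."""
    return b.sum(axis=1) + b.sum(axis=0)

# Line graph with four vertices, as in Fig. 5:
a = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
print(ete_filter(a))             # four entries equal to 1, two equal to 2
print(etv_filter(np.triu(a)))    # vertex degrees: [1. 2. 2. 1.]
```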

Acknowledgment

This work was financially supported by the Government of the Russian Federation (Grant 08-08) and by RFBR Grant No. 19-52-52012 MHT-a.

References

  • (1) A. Montanaro, “Quantum algorithms: an overview,” npj Quantum Inf., vol. 2, p. 15023, 2016.
  • (2) T. F. Rønnow, Z. Wang, J. Job, S. Boixo, S. V. Isakov, D. Wecker, J. M. Martinis, D. A. Lidar, and M. Troyer, “Defining and detecting quantum speedup,” Science, vol. 345, no. 6195, pp. 420–424, 2014.
  • (3) V. Dunjko, Y. Ge, and J. I. Cirac, “Computational speedups using small quantum devices,” Phys. Rev. Lett., vol. 121, p. 250501, 2018.
  • (4) S. Boixo, S. V. Isakov, V. N. Smelyanskiy, R. Babbush, N. Ding, Z. Jiang, M. J. Bremner, J. M. Martinis, and H. Neven, “Characterizing quantum supremacy in near-term devices,” Nat. Phys., vol. 14, pp. 595–600, 2018.
  • (5) R. Motwani and P. Raghavan, Randomized Algorithms, ch. 6. New York, USA: Cambridge University Press, 1995.
  • (6) F. Wang and D. P. Landau, “Efficient, multiple-range random walk algorithm to calculate the density of states,” Phys. Rev. Lett., vol. 86, pp. 2050–2053, 2001.
  • (7) M. Szummer and T. Jaakkola, “Partially labeled classification with Markov random walks,” in Adv. Neural. Inf. Process. Syst. (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), vol. 14, pp. 945–952, MIT Press, 2002.
  • (8) L. Grady, “Random walks for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 11, pp. 1768–1783, 2006.
  • (9) C. Gkantsidis, M. Mihail, and A. Saberi, “Random walks in peer-to-peer networks: Algorithms and evaluation,” Perform. Eval., vol. 63, no. 3, pp. 241 – 263, 2006.
  • (10) M. Kac, “Random walk and the theory of Brownian motion,” Am. Math. Mon., vol. 54, no. 7P1, pp. 369–391, 1947.
  • (11) T. Sottinen, “Fractional Brownian motion, random walks and binary market models,” Financ. Stoch., vol. 5, no. 3, pp. 343–355, 2001.
  • (12) F. Bartumeus, M. G. E. da Luz, G. M. Viswanathan, and J. Catalan, “Animal search strategies: A quantitative random-walk analysis,” Ecology, vol. 86, no. 11, pp. 3078–3087, 2005.
  • (13) D. Brockmann, L. Hufnagel, and T. Geisel, “The scaling laws of human travel,” Nature, vol. 439, no. 7075, p. 462, 2006.
  • (14) E. A. Codling, M. J. Plank, and S. Benhamou, “Random walk models in biology,” J. Royal Soc. Interface, vol. 5, no. 25, pp. 813–834, 2008.
  • (15) Y. Aharonov, L. Davidovich, and N. Zagury, “Quantum random walks,” Phys. Rev. A, vol. 48, pp. 1687–1690, 1993.
  • (16) J. Kempe, “Quantum random walks: An introductory overview,” Contemp. Phys., vol. 44, no. 4, pp. 307–327, 2003.
  • (17) S. E. Venegas-Andraca, Quantum walks for computer scientists, vol. 1 of Synthesis Lectures on Quantum Computing. Morgan & Claypool Publishers, 2008.
  • (18) S. E. Venegas-Andraca, “Quantum walks: a comprehensive review,” Quantum Inf. Process., vol. 11, pp. 1015–1106, 2012.
  • (19) A. Ambainis, “Quantum walks and their algorithmic applications,” Int. J. Quantum Inf., vol. 1, no. 4, pp. 507–518, 2003.
  • (20) A. Ambainis, “Quantum walk algorithm for element distinctness,” SIAM J. Comput., vol. 37, no. 1, pp. 210–239, 2007.
  • (21) A. M. Childs, “Universal computation by quantum walk,” Phys. Rev. Lett., vol. 102, p. 180501, 2009.
  • (22) F. Magniez, A. Nayak, J. Roland, and M. Santha, “Search via quantum walk,” SIAM J. Comput., vol. 40, no. 1, pp. 142–164, 2011.
  • (23) A. M. Childs, D. Gosset, and Z. Webb, “Universal computation by multiparticle quantum walk,” Science, vol. 339, no. 6121, pp. 791–794, 2013.
  • (24) R. Portugal, Quantum walks and search algorithms. Springer, 2013.
  • (25) A. A. Melnikov and L. E. Fedichkin, “Quantum walks of interacting fermions on a cycle graph,” Sci. Rep., vol. 6, p. 34226, 2016.
  • (26) S. Chakraborty, L. Novo, A. Ambainis, and Y. Omar, “Spatial search by quantum walk is optimal for almost all graphs,” Phys. Rev. Lett., vol. 116, p. 100501, 2016.
  • (27) H. J. Briegel and G. De las Cuevas, “Projective simulation for artificial intelligence,” Sci. Rep., vol. 2, p. 400, 2012.
  • (28) G. Paparo, V. Dunjko, A. Makmal, M. Miguel, and H. J. Briegel, “Quantum speedup for active learning agents,” Phys. Rev. X, vol. 4, no. 3, p. 031002, 2014.
  • (29) G. S. Engel, T. R. Calhoun, E. L. Read, T.-K. Ahn, T. Mančal, Y.-C. Cheng, R. E. Blankenship, and G. R. Fleming, “Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems,” Nature, vol. 446, no. 7137, p. 782, 2007.
  • (30) M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik, “Environment-assisted quantum walks in photosynthetic energy transfer,” J. Chem. Phys., vol. 129, no. 17, p. 174106, 2008.
  • (31) T. Scholak, F. de Melo, T. Wellens, F. Mintert, and A. Buchleitner, “Efficient and coherent excitation transfer across disordered molecular networks,” Phys. Rev. E, vol. 83, p. 021912, 2011.
  • (32) D. Manzano, M. Tiersch, A. Asadian, and H. J. Briegel, “Quantum transport efficiency and fourier’s law,” Phys. Rev. E, vol. 86, p. 061118, 2012.
  • (33) A. Asadian, D. Manzano, M. Tiersch, and H. J. Briegel, “Heat transport through lattices of quantum harmonic oscillators in arbitrary dimensions,” Phys. Rev. E, vol. 87, p. 012109, 2013.
  • (34) G. F. Lawler, “Expected hitting times for a random walk on a connected graph,” Discrete Math., vol. 61, no. 1, pp. 85 – 92, 1986.
  • (35) L. Lovász, “Random walks on graphs: A survey,” in Combinatorics: Paul Erdõs is Eighty, Bolyai Soc. Math. Stud., vol. 2, pp. 353–397, János Bolyai Math. Soc., Budapest, 1996.
  • (36) A. Ambainis, E. Bach, A. Nayak, A. Vishwanath, and J. Watrous, “One-dimensional quantum walks,” in Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, STOC ’01, (New York, NY, USA), pp. 37–49, ACM, 2001.
  • (37) D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani, “Quantum walks on graphs,” in Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, STOC ’01, (New York, NY, USA), pp. 50–59, ACM, 2001.
  • (38) D. Solenov and L. Fedichkin, “Continuous-time quantum walks on a cycle graph,” Phys. Rev. A, vol. 73, p. 012313, 2006.
  • (39) L. Fedichkin, D. Solenov, and C. Tamon, “Mixing and decoherence in continuous-time quantum walks on cycles,” Quantum Inf. Comput., vol. 6, no. 3, pp. 263–276, 2006.
  • (40) J. Kempe, “Discrete quantum walks hit exponentially faster,” Probab. Theory Relat. Fields, vol. 133, no. 2, pp. 215–235, 2005.
  • (41) H. Krovi and T. A. Brun, “Hitting time for quantum walks on the hypercube,” Phys. Rev. A, vol. 73, p. 032341, 2006.
  • (42) R. Santos and R. Portugal, “Quantum hitting time on the complete graph,” Int. J. Quantum Inf., vol. 08, no. 05, pp. 881–894, 2010.
  • (43) A. M. Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann, and D. A. Spielman, “Exponential algorithmic speedup by a quantum walk,” in Proceedings of the 35th Annual ACM Symposium on Theory of Computing, STOC ’03, (New York, NY, USA), pp. 59–68, ACM, 2003.
  • (44) A. Makmal, M. Zhu, D. Manzano, M. Tiersch, and H. J. Briegel, “Quantum walks on embedded hypercubes,” Phys. Rev. A, vol. 90, p. 022314, 2014.
  • (45) A. Makmal, M. Tiersch, C. Ganahl, and H. J. Briegel, “Quantum walks on embedded hypercubes: Nonsymmetric and nonlocal cases,” Phys. Rev. A, vol. 93, p. 022322, 2016.
  • (46) N. J. A. Sloane, “The on-line encyclopedia of integer sequences, Sequence A001187.”
  • (47) J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, “Quantum machine learning,” Nature, vol. 549, no. 7671, p. 195, 2017.
  • (48) V. Dunjko and H. J. Briegel, “Machine learning & artificial intelligence in the quantum domain: a review of recent progress,” Rep. Prog. Phys., vol. 81, no. 7, p. 074001, 2018.
  • (49) G. Carleo and M. Troyer, “Solving the quantum many-body problem with artificial neural networks,” Science, vol. 355, no. 6325, pp. 602–606, 2017.
  • (50) J. Carrasquilla and R. G. Melko, “Machine learning phases of matter,” Nat. Phys., vol. 13, no. 5, p. 431, 2017.
  • (51) K. Ch’ng, J. Carrasquilla, R. G. Melko, and E. Khatami, “Machine learning phases of strongly correlated fermions,” Phys. Rev. X, vol. 7, p. 031038, 2017.
  • (52) A. A. Melnikov, H. Poulsen Nautrup, M. Krenn, V. Dunjko, M. Tiersch, A. Zeilinger, and H. J. Briegel, “Active learning machine learns to create new quantum experiments,” Proc. Natl. Acad. Sci. U.S.A., vol. 115, no. 6, pp. 1221–1226, 2018.
  • (53) M. Bukov, A. G. R. Day, D. Sels, P. Weinberg, A. Polkovnikov, and P. Mehta, “Reinforcement learning in different phases of quantum control,” Phys. Rev. X, vol. 8, p. 031086, 2018.
  • (54) T. Fösel, P. Tighineanu, T. Weiss, and F. Marquardt, “Reinforcement learning with neural networks for quantum feedback,” Phys. Rev. X, vol. 8, p. 031084, 2018.
  • (55) L. O’Driscoll, R. Nichols, and P. Knott, “A hybrid machine-learning algorithm for designing quantum experiments,” arXiv:1812.03183, 2018.
  • (56) H. Poulsen Nautrup, N. Delfosse, V. Dunjko, H. J. Briegel, and N. Friis, “Optimizing quantum error correction codes with reinforcement learning,” arXiv:1812.08451, 2018.
  • (57) R. Iten, T. Metger, H. Wilming, L. del Rio, and R. Renner, “Discovering physical concepts with neural networks,” arXiv:1807.10300, 2018.
  • (58) Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  • (59) Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015.
  • (60) A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Adv. Neural Inf. Process. Syst. (F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, eds.), vol. 25, pp. 1097–1105, Curran Associates, Inc., 2012.
  • (61) P. Y. Simard, D. Steinkraus, and J. C. Platt, “Best practices for convolutional neural networks applied to visual document analysis,” in Proceedings of the 7th International Conference on Document Analysis and Recognition, vol. 02, p. 958, 2003.
  • (62) S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural-network approach,” IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 98–113, 1997.
  • (63) A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1725–1732, 2014.
  • (64) J. Kawahara, C. J. Brown, S. P. Miller, B. G. Booth, V. Chau, R. E. Grunau, J. G. Zwicker, and G. Hamarneh, “BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment,” NeuroImage, vol. 146, pp. 1038 – 1049, 2017.
  • (65) V. M. Leli, S. Osat, T. Tlyachev, and J. D. Biamonte, “Deep learning super-diffusion in multiplex networks,” arXiv:1811.04104, 2018.
  • (66) F. Flamini, N. Spagnolo, and F. Sciarrino, “Photonic quantum information processing: a review,” Rep. Prog. Phys., vol. 82, no. 1, p. 016001, 2018.
  • (67) A. Aspuru-Guzik and P. Walther, “Photonic quantum simulators,” Nat. Phys., vol. 8, pp. 285–291, 2012.
  • (68) M. Gräfe, R. Heilmann, M. Lebugle, D. Guzman-Silva, A. Perez-Leija, and A. Szameit, “Integrated photonic quantum walks,” J. Opt., vol. 18, p. 103002, 2018.
  • (69) X. Qiang, X. Zhou, J. Wang, C. Wilkes, T. Loke, S. O’Gara, L. Kling, G. D. Marshall, R. Santagati, T. C. Ralph, J. B. Wang, M. G. Thompson, J. L. O’Brien, and J. C. F. Matthews, “Large-scale silicon quantum photonics implementing arbitrary two-qubit processing,” Nat. Photonics, vol. 12, pp. 534–539, 2018.
  • (70) K. Kechedzhi and V. N. Smelyanskiy, “Open-system quantum annealing in mean-field models with exponential degeneracy,” Phys. Rev. X, vol. 6, p. 021028, 2016.
  • (71) T. Albash and D. A. Lidar, “Adiabatic quantum computing,” Rev. Mod. Phys., vol. 90, p. 015002, 2018.
  • (72) M. E. Lebedev, D. A. Dolinina, K.-B. Hong, T.-C. Lu, A. V. Kavokin, and A. P. Alodjants, “Exciton-polariton josephson junctions at finite temperatures,” Sci. Rep., vol. 7, p. 9515, 2017.
  • (73) T. Grass and M. Lewenstein, “Hybrid annealing using a quantum simulator coupled to a classical computer,” Phys. Rev. A, vol. 95, p. 052309, 2017.
  • (74) H. G. Katzgraber, F. Hamze, and R. S. Andrist, “Glassy chimeras could be blind to quantum speedup: Designing better benchmarks for quantum annealing machines,” Phys. Rev. X, vol. 4, p. 021008, 2014.
  • (75) J. A. Smolin and G. Smith, “Classical signature of quantum annealing,” Front. Phys., vol. 2, p. 52, 2014.
  • (76) D. Aldous and J. Fill, “Reversible Markov chains and random walks on graphs,” 2002. Unfinished monograph, recompiled 2014.
  • (77) N. Masuda, M. A. Porter, and R. Lambiotte, “Random walks and diffusion on networks,” Phys. Rep., vol. 716-717, pp. 1–58, 2017.