1 Introduction
1.1 Preliminaries
Quantum computing is an emerging computational paradigm that leverages the quantum mechanical phenomena of superposition and entanglement to create states that scale exponentially with the number of qubits (quantum bits). Quantum algorithms have been proposed that offer considerable speedups for a wide variety of algebraic and number-theoretic problems, such as factoring large numbers [Shor] and matrix multiplication [buhrman]. In 2013, Peruzzo et al. proposed a variational quantum eigenvalue solver targeted at finding the ground state of a Hamiltonian, specifically that of a quantum chemical system [Peruzzo]. In this report, we extend this work to analyze the spectra of the matrices associated with graphs, which are mathematical structures that denote relationships (via edges) between objects (via vertices).

1.2 The Adjacency and the Laplacian
For a graph G with vertex set V = {v_1, …, v_n}, the adjacency matrix A is an n × n matrix such that A_ij = 1 when there is an edge from vertex i to vertex j, and A_ij = 0 when there is no edge. The Laplacian matrix L is an n × n matrix such that L_ij = −1 when there is an edge from vertex i to vertex j, L_ij = 0 when there is no edge, and L_ii = deg(v_i), where v_i is the i-th vertex in V. If the graph is directed, deg(v_i) may correspond to the indegree or outdegree of v_i; we will explore both. [biggs]
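These definitions translate directly into code. Below is a minimal numpy sketch (the function names are ours, chosen for illustration); it also checks numerically that the rows of L sum to 0, so 0 is always a Laplacian eigenvalue:

```python
import numpy as np

def adjacency(n, edges, directed=False):
    """Build the n x n adjacency matrix from an edge list."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = 1
        if not directed:
            A[j, i] = 1
    return A

def laplacian(A, degree="out"):
    """L = D - A, where D holds out-degrees (row sums) or in-degrees (column sums)."""
    deg = A.sum(axis=1) if degree == "out" else A.sum(axis=0)
    return np.diag(deg) - A

# Undirected path graph on 4 vertices: 0-1-2-3
A = adjacency(4, [(0, 1), (1, 2), (2, 3)])
L = laplacian(A)
lap_eigs = np.sort(np.linalg.eigvalsh(L))  # smallest is 0, since rows of L sum to 0
```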
We will be working with directed and undirected graphs without self-loops. A scalar λ is an eigenvalue of a matrix M if Mv = λv for some nonzero vector v. We order the eigenvalues of an adjacency matrix as λ_1 ≥ λ_2 ≥ … ≥ λ_n, and the eigenvalues of a Laplacian matrix as λ_1 ≤ λ_2 ≤ … ≤ λ_n. We quickly note that the smallest Laplacian eigenvalue is always 0 (each row of L sums to 0), so for the Laplacian we will be more interested in finding λ_n.

1.3 Variational Quantum Eigensolver (VQE)
The Variational Quantum Eigensolver (VQE) algorithm combines the ability of quantum computers to efficiently compute expectation values with a classical optimization routine in order to approximate ground-state energies of quantum systems. VQE allows us to find the smallest eigenvalue (and a corresponding eigenvector) of a matrix H. It is based on the variational principle, which states that for any Hamiltonian H and any normalized state |ψ⟩,

⟨ψ|H|ψ⟩ ≥ λ_min(H),

with equality when |ψ⟩ is a ground state of H.

There are two subroutines involved in VQE. The quantum subroutine has two steps: first, we prepare an ansatz |ψ(θ)⟩, a quantum state parameterized by a vector θ. Then, we measure the expectation value ⟨ψ(θ)|H|ψ(θ)⟩.
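As a toy illustration of this loop, here is a classical statevector simulation (a sketch under our own simplifications, not the hardware procedure used in our experiments): a one-parameter RY ansatz, an expectation-value evaluation, and the Nelder-Mead minimization described next, applied to the single-qubit Hamiltonian H = Z, whose smallest eigenvalue is −1:

```python
import numpy as np
from scipy.optimize import minimize

# Hamiltonian: the single-qubit Pauli Z, with smallest eigenvalue -1.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def ansatz(theta):
    # RY(theta) applied to |0>: [cos(theta/2), sin(theta/2)]
    return np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])

def expectation(theta):
    psi = ansatz(theta)
    return psi @ Z @ psi  # <psi(theta)| H |psi(theta)>

# Classical outer loop: Nelder-Mead over the ansatz parameter.
res = minimize(expectation, x0=[0.1], method="Nelder-Mead")
# res.fun approaches the minimum eigenvalue of H
```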
The classical subroutine is as follows: we use a classical nonlinear optimizer, such as the Nelder-Mead method [nm], to minimize the expectation value by varying the ansatz parameters θ, iterating until convergence. It has been shown that each iteration of VQE scales polynomially with respect to system size, in contrast to the exponential scaling of the current best-known classical algorithm for the same task. Further, the update step of the Nelder-Mead method scales linearly (or, at worst, polynomially) in the number of parameters included in the minimization [fletcher]. Let M be the number of terms comprising the Hamiltonian H = Σ_i h_i P_i, where each h_i is a constant and each P_i is an n-fold tensor product of Pauli operators acting on the system; let p be the desired precision; and let h be a bound such that |h_i| ≤ h for all i. The authors of [Peruzzo] estimate the total cost per iteration to be O(kMh²/p²) for some small constant k, which is determined by the encoding of the quantum state and the classical minimization method (in our case, Nelder-Mead). In contrast, computing the expectation value ⟨H⟩ = ⟨ψ|H|ψ⟩ by classical methods requires a number of floating point operations that grows exponentially in n, which demonstrates a superpolynomial quantum speedup over the classical alternative. Note that we are not claiming a superpolynomial speedup over every method for finding spectra, as the authors of [fast_LA] and others have shown that polynomial-time algorithms for this exist. We are rather claiming that the method described is more efficient on a quantum processor, and has practical applications, as argued by the authors of [Peruzzo].

1.4 Spectral Graph Theory Applications
We can explore multiple problems in spectral graph theory by representing the adjacency and Laplacian matrices of graphs as Pauli operators, and then leveraging the VQE algorithm to find the minimum (or maximum) eigenvalues and respective eigenvectors of these matrices. Note that given the largest eigenvalue λ and a corresponding normalized eigenvector v of a symmetric matrix M corresponding to the adjacency or Laplacian of an undirected graph, the largest eigenvalue of M − λvvᵀ is the second-largest eigenvalue of M, since v now has eigenvalue 0. Using this fact we can iteratively decompose the spectra of these matrices. This, in turn, will reveal interesting properties of the graphs. Here are a few applications, though the focus of our paper is on the process of gathering the eigenvalues:

- If we assign a value to each vertex and take the sum of the squares of the differences between neighbors' values, then the eigenvector corresponding to the smallest nonzero Laplacian eigenvalue minimizes this sum of squared differences. Likewise, the eigenvector for the maximum eigenvalue maximizes the discrepancy between neighbors' values (see https://web.stanford.edu/class/cs168/l/l11.pdf).

- The eigenvectors corresponding to small eigenvalues are, in some sense, trying to find good partitions of a graph. These low eigenvectors assign numbers to vertices such that neighbors have similar values. Additionally, since the eigenvectors are mutually orthogonal, each one finds a "different" or "new" such partition.

- Many problems can be modeled as k-coloring a graph (assigning one of k colors to each vertex, where no two neighboring vertices share a color). Finding a k-coloring, or even deciding whether one exists, is NP-hard in general. One natural heuristic is to embed the graph onto the eigenvectors corresponding to the highest eigenvalues. As one would expect, points that are close together in these embeddings tend not to be neighbors in the original graph.

- For a connected graph, λ_n = −λ_1 if and only if the graph is bipartite. [sgt]
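Both the deflation trick above (subtracting λvvᵀ) and the partitioning application can be illustrated numerically. The sketch below (our own toy example, not part of our experiments) uses a graph made of two triangles joined by a single bridge edge: the eigenvector of the smallest nonzero Laplacian eigenvalue (the Fiedler vector) splits the graph exactly at the bridge, and deflating the adjacency matrix exposes its second-largest eigenvalue:

```python
import numpy as np

# Two triangles joined by one edge: an obvious 2-way partition.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Partitioning: sign pattern of the Fiedler vector separates the triangles.
w, V = np.linalg.eigh(L)       # eigenvalues ascending
part = V[:, 1] > 0             # sign-based bisection

# Deflation: subtract the top eigenpair of the symmetric adjacency matrix,
# so its second-largest eigenvalue becomes the new maximum.
aw, aV = np.linalg.eigh(A)
v = aV[:, -1]                  # eigenvector of the largest eigenvalue
A_deflated = A - aw[-1] * np.outer(v, v)
second_largest = np.linalg.eigvalsh(A_deflated).max()
```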
2 Proposed Method
2.1 Pauli Representation
We are given an adjacency or Laplacian matrix of a graph, and want to represent it as a sum of tensor products of Pauli matrices. To do this, we recursively break the matrix into submatrices, and can consequently represent any 2^n × 2^n matrix as a Pauli sum/product. This allows us to represent any graph on at most 2^n vertices, where some of the vertices can be used for padding. The procedure is as follows.
We may represent any 2 × 2 matrix as a linear combination of the constructor matrices

S⁰⁰ = (I + Z)/2, S⁰¹ = (X + iY)/2, S¹⁰ = (X − iY)/2, S¹¹ = (I − Z)/2,

where Sⁱʲ has a single 1 entry at position (i, j) and 0 everywhere else.
We may then represent any 2^n × 2^n adjacency matrix A = [[A₀₀, A₀₁], [A₁₀, A₁₁]] as Pauli operators by representing its 2^(n−1) × 2^(n−1) submatrices A₀₀, A₀₁, A₁₀, and A₁₁ as Pauli operators. Note that

A = S⁰⁰ ⊗ A₀₀ + S⁰¹ ⊗ A₀₁ + S¹⁰ ⊗ A₁₀ + S¹¹ ⊗ A₁₁,

where we construct A₀₀, A₀₁, A₁₀, and A₁₁ recursively; the base case is a 2 × 2 matrix written as a linear combination of the Sⁱʲ.
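The recursive block construction can be cross-checked with an equivalent direct decomposition: the n-fold Pauli tensor products form an orthogonal basis under the trace inner product, so the coefficient of each Pauli string P in a 2^n × 2^n matrix M is Tr(P·M)/2^n. A minimal numpy sketch (the function name is ours; this is an alternative to, not our implementation of, the recursion above):

```python
import numpy as np
from itertools import product

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def pauli_decompose(M):
    """Write a 2^n x 2^n matrix M as sum_P c_P * P over n-fold Pauli
    tensor products, using c_P = Tr(P M) / 2^n."""
    n = int(np.log2(M.shape[0]))
    coeffs = {}
    for labels in product("IXYZ", repeat=n):
        P = np.array([[1.0]], dtype=complex)
        for l in labels:
            P = np.kron(P, PAULIS[l])
        c = np.trace(P @ M) / 2**n
        if abs(c) > 1e-12:
            coeffs["".join(labels)] = c
    return coeffs

# The adjacency matrix of a single edge is exactly the Pauli X.
d = pauli_decompose(np.array([[0, 1], [1, 0]], dtype=complex))
```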
To represent a Laplacian matrix as a sum of Pauli operators, let P_A be the Pauli operator representing the adjacency matrix A and P_D be the Pauli operator representing the degree matrix D = Σᵢ deg(vᵢ) Eᵢᵢ, where Eᵢᵢ is the matrix which is 1 at position (i, i) and 0 everywhere else. Then

P_L = P_D − P_A,

since L = D − A. Here deg(vᵢ) may be specified to be the indegree or outdegree of vᵢ in the case of a directed graph.
For each matrix type, we may represent a graph on fewer than 2^n vertices by embedding its matrix into a 2^n × 2^n matrix and padding the unused rows and columns with zeros; this does not affect the existing spectrum, only adding zero eigenvalues. The resulting n-fold tensored Pauli operator construction is then simplified using the Pauli algebra rules.
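The padding step can be sketched as follows (the function name is ours); the padded matrix has the same eigenvalues as the original, plus extra zeros:

```python
import numpy as np

def pad_to_power_of_two(M):
    """Embed M into the next 2^k x 2^k matrix, zero-filling the new
    rows and columns. Original eigenvalues are preserved; the padding
    only contributes additional zero eigenvalues."""
    n = M.shape[0]
    k = 1 << (n - 1).bit_length()   # next power of two >= n
    P = np.zeros((k, k))
    P[:n, :n] = M
    return P

# Path graph on 3 vertices, padded to a 4 x 4 matrix.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
Ap = pad_to_power_of_two(A)
```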
2.2 Eigenvalue Estimation
Once we've represented our matrix as a sum of Pauli operators, we can utilize the Variational Quantum Eigensolver to determine the minimum eigenvalue of the system. For the purposes of this experiment, we opted to use the layered ansatz proposed by [kandala]. Finding improved ansatzes that perform well specifically for matrices related to graphs is left as future work. The ansatz can be represented as follows. In Figure 1, we concatenate parameterized layers of gates some number of times to produce an ansatz of a given depth d.

Now, let n be the number of qubits our ansatz is applied to. We apply RX followed by RZ to every qubit, then apply CNOT(qᵢ, qᵢ₊₁) for i = 0, …, n − 2. This applies CNOT gates to entangle all of our qubits after rotations parameterized by θ. This is seen in Figure 2.
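The layer structure just described — RX and RZ rotations on each qubit followed by a chain of CNOTs — can be sketched as a classical statevector simulation (our own illustration, assuming a simulator rather than the pyQuil programs used in our experiments):

```python
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def one_qubit(U, q, n):
    """Lift a single-qubit gate U on qubit q to the full n-qubit space."""
    full = np.array([[1.0]], dtype=complex)
    for i in range(n):
        full = np.kron(full, U if i == q else np.eye(2))
    return full

def cnot(c, t, n):
    """CNOT with control c and target t as a 2^n x 2^n permutation matrix."""
    dim = 2**n
    U = np.zeros((dim, dim))
    for b in range(dim):
        bits = [(b >> (n - 1 - i)) & 1 for i in range(n)]
        if bits[c]:
            bits[t] ^= 1
        U[sum(bit << (n - 1 - i) for i, bit in enumerate(bits)), b] = 1
    return U

def layered_ansatz(params, n, depth):
    """RX/RZ on every qubit followed by a CNOT chain, repeated `depth` times.
    Expects len(params) == 2 * n * depth."""
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1
    idx = 0
    for _ in range(depth):
        for q in range(n):
            psi = one_qubit(rz(params[idx + 1]) @ rx(params[idx]), q, n) @ psi
            idx += 2
        for q in range(n - 1):
            psi = cnot(q, q + 1, n) @ psi
    return psi
```

With all parameters zero the rotations are identities, so the circuit leaves |0…0⟩ unchanged; with arbitrary parameters the output remains a normalized state.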
The number of layers in the ansatz is a hyperparameter: the more layers that are present, the more computationally intensive the procedure is, but also generally the more accurate the estimation is. The Variational Quantum Eigensolver takes as input the Pauli representation of our matrix, acting on n qubits, as each tensor operation in our Hamiltonian construction step multiplies the number of rows and columns by 2. With this chosen ansatz, we opted to use the Nelder-Mead method referenced in Section 1.3.

3 Experimental Results and Analyses
To evaluate our approach, we chose to implement the algorithms in pyQuil, a library for generating Quil programs to be executed on the Rigetti Forest platform. Quil is a quantum instruction set architecture that first introduced a shared quantum/classical memory model. [quil] We first tested our algorithms on the Quantum Virtual Machine (QVM) before running them on Rigetti's Quantum Cloud Service (QCS) platform, on lattices of varying topologies. For a comprehensive evaluation, we analyzed runtime and calculation error (taken as the absolute value of the difference between our output and the output of numpy's linalg.eig function) across different choices of ansatz parameters, graph densities, and matrix types on graphs of between 4 and 64 vertices. In Figures 3 and 4, we see an example of a directed graph on 8 vertices together with its (indegree) Laplacian and the corresponding Pauli operator term.
3.1 Gate Complexity
A corollary of what was shown by Stuart Hadfield in [hadfield] is that if a function is efficiently representable as a Hamiltonian H, then the size of H is poly(n). Since our H is a sum of n-fold tensor products of Pauli operators whose number of terms grows polynomially in n, it is efficiently representable. Moreover, Hadfield showed that the number of gates needed to represent such a Hamiltonian scales with the number of terms and the maximum locality of any term (the number of qubits the term acts on), so we conjecture that the gate complexity of our system is poly(n) as well.
To test this, we randomly generated adjacency matrices of undirected graphs of fixed density with 4, 8, 16, 32, 64, 128, and 256 vertices, and selected the largest resulting Pauli representations of each size to gather "worst-average case" values of the number of gates with respect to the input size. We then plotted these values and fitted a curve to test our conjecture.
The results in Figure 5 seem to support our conjecture, as we were able to fit a quadratic curve to the data.
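The curve fit itself is a standard least-squares polynomial fit. The sketch below uses synthetic placeholder gate counts (illustrative only, not our measured data) to show the procedure with numpy:

```python
import numpy as np

# Illustrative only: fit a quadratic to (size, gate-count) pairs.
# These counts are synthetic stand-ins, not measured results.
sizes = np.array([4, 8, 16, 32, 64, 128, 256], dtype=float)
counts = 3.0 * sizes**2 + 5.0 * sizes + 7.0      # placeholder data

coeffs = np.polyfit(sizes, counts, deg=2)        # [quadratic, linear, constant]
fitted = np.polyval(coeffs, sizes)
residual = np.max(np.abs(fitted - counts))       # goodness of fit
```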
3.2 Graph Densities
Before we tested our eigenvalue estimation algorithms on QCS with respect to any other parameter, we wanted to determine the worst-case density of a graph. We tested the accuracy and runtime of our algorithm on 36 densities spaced uniformly between 0 and 1. The other parameters are as follows:
Matrix Type: Undirected Adjacency
Number of Vertices: 8
Number of Trials per Test: 3
Number of Ansatz Layers: 3
Lattice: Aspen-4-3Q-A
As we expected, we see in Figure 6 that the algorithm performed very quickly and with low error on "trivial" graphs, those that were fully connected or empty. The runtime peaks for "half-connected" graphs, where the density parameter is 0.5. As a result, we chose this value as the density parameter for most of our other tests. To reduce variance, we ran this test with more trials on densities of 0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1:
Matrix Type: Undirected Adjacency
Number of Vertices: 4
Number of Trials per Test: 20
Number of Ansatz Layers: 5
Lattice: Aspen-4-2Q-A
Figure 7 supports our previous conclusion about the relationship between density and runtime.
3.3 Ansatz Layers
We next wanted to determine the effect that the number of layers of our ansatz had on the runtime and accuracy of our algorithm. We chose to test ansatzes of 1, 2, 3, 4, 5, 7, 10, 15, and 20 layers.
Matrix Type: Undirected Adjacency
Number of Vertices: 4
Number of Trials per Test: 20
Density: 0.5
Lattice: Aspen-4-2Q-A
As we suspected, the error rate seen in Figure 8 decreased as the number of ansatz layers increased, but the computation time also increased. We noted a point of diminishing returns in accuracy at around 3 layers for 4-vertex graph matrices, so we chose to use 3-layer ansatzes in subsequent experiments.
3.4 Graph and Matrix Types
Next, we wanted to test the effectiveness of our algorithm with respect to different graph and matrix types. In particular, we wanted to fix the other variables and determine the runtime difference between the adjacency matrix of an undirected graph, the adjacency matrix of a directed graph, the Laplacian matrix of an undirected graph, the Laplacian matrix of a directed graph (with outdegree), and the Laplacian matrix of a directed graph (with indegree). In these tests, we found the maximum eigenvalue of each respective matrix, since the minimum eigenvalue of the Laplacian is always 0.
Number of Vertices: 8
Number of Trials per Test: 5
Density: 0.5
Number of Ansatz Layers: 3
Lattice: Aspen-4-3Q-A
We weren't sure how the asymmetry of directed graph matrices would affect the algorithm's performance, but the tests in Figures 9 and 10 showed that matrices of undirected graphs were much easier to compute quickly and accurately than their directed counterparts. Additionally, the algorithm seems to find calculating the spectra of adjacency matrices slightly easier than calculating the spectra of Laplacian matrices.
3.5 Input Size
For our final test, we ran our algorithm on randomly generated matrices of graphs on 4, 5, 8, 9, 16, 32, and 64 vertices. The numbers of trials, chosen to balance variance against computational intensity, were 10, 5, 5, 2, 2, 1, and 1, respectively.
Matrix Type: Undirected Adjacency
Density: 0.5
Number of Ansatz Layers: 3
Lattice: Aspen-4-6Q-A
From the plots in Figures 11 and 12, we can see that the error rate, given the ansatz, grows with the input size, and the runtime appears to grow polynomially in the number of vertices. Note also that since we pad matrices with zeros to the next power of 2, the difference in runtime between a graph on 5 vertices and a graph on 8 vertices, as well as between a graph on 9 vertices and a graph on 16 vertices, is very small. Indeed, when we curve-fit the mean runtime with respect to the number of vertices (here, we plot powers of 2 for the number of vertices), we find that a quadratic fits the curve well:
Figure 13 seems to support the earlier theoretical claim about the per-iteration runtime, with the convergence of Nelder-Mead being linear. We also ran this experiment on a classical machine using the quantum virtual machine, and there we observed an exponential curve fit for the runtime plot: a quadratic does not fit the curve, which seems to grow exponentially in the input size, as shown in Figure 14.
4 Conclusion
In this paper, we've presented an algorithm that represents the adjacency or Laplacian matrix of a graph as a Pauli operator and applies the Variational Quantum Eigensolver (VQE) algorithm to determine the spectra of these graphs. We've discussed theoretical results regarding the runtime of this procedure and have compared these with results gathered by testing our algorithm on a quantum computer (via Rigetti's QCS). We've also observed and analyzed how our algorithm's runtime and accuracy change with respect to graph density, number of ansatz layers, graph and matrix types, and number of vertices in the graph.
We've identified several avenues for future work. First, and perhaps most important, is the discovery of an ansatz that is particularly well suited for graph spectral matrix inputs. The current ansatz is not designed to scale, as the authors of [kandala] state. Previous ansatzes have been designed with deep experience in the systems they're meant for (e.g., quantum chemistry), so more work is needed here. One approach that we would like to try soon is to automate the ansatz search by either brute-force search or a statistical learning procedure. This could greatly improve the usefulness and efficacy of the variational quantum eigensolver. Second is the comparison with algorithms for determining the entire spectra of matrices. Algorithms presented in [abrams] use the Quantum Fast Fourier Transform (QFFT) to provide an exponential speedup for finding eigenvalues and eigenvectors. While we've shown that our algorithm can iteratively determine all of the eigenvalues and eigenvectors of an adjacency or Laplacian matrix, it remains to be seen which is faster in practice. Third is theoretical work on the growth of the error rate of our algorithm with respect to the input size, as well as work on the worst- and average-case gate complexity with respect to input size. Finally, we'd like to investigate how this algorithmic approach can be applied to probabilistic methods in graph theory.
Acknowledgements.
The authors acknowledge Rigetti Computing for providing the quantum computing credit that was used for experiments in this work. They would also like to thank Will Zeng, Aaron Sidford, Jacob Fox, Erik Bates, and Nick Steele for their thoughts and input on the project, as well as Dan Boneh, Will Zeng, Duliger Ibeling, and Jonathan Braatz for advising them in this project.

References
Appendix
For completeness, we simulate each of the experiments run on QCS on the quantum virtual machine (QVM). These experiments may be run on a classical computer; in this case, they were run on a macOS machine with a 2.9 GHz i9 processor and 32 GB of 2400 MHz DDR4 SDRAM, without multithreading.
Densities
This experiment was carried out on 36 densities spaced uniformly between 0 and 1. The other parameters are as follows:
Matrix Type: Undirected Adjacency
Number of Vertices: 8
Number of Trials per Test: 3
Number of Ansatz Layers: 3
The results are seen in Figure 15.
We then repeated the experiment with densities 0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1 and a greater number of trials.
Matrix Type: Undirected Adjacency
Number of Vertices: 4
Number of Trials per Test: 20
Number of Ansatz Layers: 5
The results are seen in Figure 16.
Ansatz Layers
Next, we tested ansatzes of 1, 2, 3, 4, 5, 7, 10, 15, and 20 layers with the following parameters.
Matrix Type: Undirected Adjacency
Number of Vertices: 4
Number of Trials per Test: 20
Density: 0.5
The results are seen in Figure 17.
Graph and Matrix Types
Next, we experimented with the 5 aforementioned types of graphs and matrices using the following parameters.
Number of Vertices: 8
Number of Trials per Test: 5
Density: 0.5
Number of Ansatz Layers: 3
Finally, we studied the behavior of our algorithm on the QVM with respect to the number of vertices. Interestingly, the results could not be fitted well by any degree-2 polynomial, and indeed exhibited exponential behavior. We did not anticipate the exponential factor kicking in at such a small number of vertices, but this experiment seems to provide evidence of it, which would back up our claim that our algorithm sees a superpolynomial quantum speedup over the classical method.
We tested this on 4, 5, 8, 9, 16, 32, and 64 vertices with the following parameters:
Matrix Type: Undirected Adjacency
Density: 0.5
Number of Ansatz Layers: 3
The results are seen in Figure 20.
We see similar jumps in runtime and error from 4 to 5 vertices and from 8 to 9 vertices as well, since a 5 × 5 matrix is padded with zeros to an 8 × 8 matrix, and likewise a 9 × 9 matrix is padded with zeros to a 16 × 16 matrix.