1 Introduction
In classical graph theory and signal processing, sparsifying dense matrices and then running algorithms on the sparsified versions to reduce the computational load is a key idea, and one which has received significant attention over the past several years. For instance, in the case where the matrix is a graph adjacency matrix, one of the primary motivations for spectral sparsification of the graph Laplacian in classical computer science is its utility for attacking cut problems [2, 3], and as a result the spectral properties of the sparsified graph's Laplacian are frequently treated as the measure of interest to retain while reducing the number of edges in the graph. Sparsification has also found use in designing preconditioners for linear system solvers [1, 4, 5], and in studying constraint satisfaction problems [6].
Recently, the idea of transferring the wealth of results on the benefits of sparsity from classical signal processing to quantum algorithms, in particular with regard to Hamiltonian simulation, has been investigated in [7]; their results for analog simulation indicate that degree reduction and edge dilution do not work in the quantum setting for general graphs. On the other hand, digital quantum simulation may also benefit from sparsification, and to the best of our knowledge this idea is yet to be explored. For Hamiltonian simulation, we are ultimately interested in approximating the evolution under a Hermitian matrix $H$. That is, given any state $|\psi\rangle$, we want to approximate the state $e^{-iHt}|\psi\rangle$, which can be done by approximating $e^{-iHt}$ itself in spectral norm. We should then study and bound

(1)  $\sup_{|\psi\rangle} \left\| \left( e^{-iHt} - e^{-i\tilde{H}t} \right) |\psi\rangle \right\|,$

where $\tilde{H}$ is a spectral sparsifier for $H$. This quantity is simply the spectral norm of the difference operator $e^{-iHt} - e^{-i\tilde{H}t}$, but the form above reminds us that we also need to keep track of the way the eigenspaces change in the process of sparsification.
In this paper we address three issues with transferring the classical sparsity results to the quantum setting directly. Firstly, the classical sparsification techniques typically yield matrices that are sparse overall, whereas quantum algorithms for Hamiltonian simulation in the query model, in which the input Hamiltonian is accessed via a unitary oracle that computes matrix entries in place up to fixed precision, typically require the interaction graph of the Hamiltonian matrix to be row-sparse and locally computable. Secondly, it is necessary that the adjacency matrix, rather than the Laplacian, is convergent. Finally, it is necessary that any residual error in the sparsified matrix does not cause a catastrophic breakdown in the accuracy of the Hamiltonian simulation.
We show how each of these problems can be overcome, although we do still make some assumptions, namely that the Hamiltonian is real, that each row of the Hamiltonian is sampled sufficiently frequently in the sparsification method, and that the sparsified Hamiltonian commutes with the actual Hamiltonian. In fact, we do not believe that any of these assumptions is necessary; rather, they are simply features of our proofs, and we expect future analysis to reveal that the results hold when they are not made. Nevertheless, even with these assumptions, the analysis in this paper serves the purpose of better connecting classical sparsification with Hamiltonian simulation methods that assume row-sparse matrices, and shows the likely way forward to achieving a fully general and unrestricted result.
A secondary purpose of this paper concerns the verification of row sparsity, which is an important question in its own right, and one which ostensibly lends itself to being sped up by a quantum algorithm. We show this is indeed the case by proposing two quantum algorithms that can decide whether a given matrix is row-sparse with fewer operations than are required classically. Whilst conceptualised from quite different starting points, both of these quantum algorithms require $O(n^{3/2})$ operations, compared to $O(n^2)$ classically (for an $n \times n$ matrix), and this coincidence of computational complexity raises two intriguing possibilities: it may be possible to combine the two algorithms in some way to achieve the sparsity verification in fewer operations; or conversely, it may be that $O(n^{3/2})$ is a lower bound.
The remainder of the paper is organised as follows: in Section 2 we give precise details of the problem we are going to solve, including analysis of the overall benefits that it brings to Hamiltonian simulation; in Section 3 we give our main results relating the classical sparsification algorithm to the problem of Hamiltonian simulation; in Section 4 we propose two quantum sparsification verification algorithms; and finally in Section 5 we include a wide-ranging discussion covering, amongst other things, the physical meaning of a row-sparse Hamiltonian.
Contributions

From the literature we note that sparsification can reduce the computational load in several linear algebraic problems whilst maintaining accuracy. However, there is a discrepancy between classical sparsification algorithms, which typically achieve edge sparsification or dilution, and quantum algorithms for sparse Hamiltonian simulation, which require row sparsity (degree reduction). To bridge this divide, we provide a necessary and sufficient condition for (general) sparsity to imply row sparsity.

The classical sparsification method upon which we base our analysis [1] provides a bound in terms of the Laplacian, whereas we require one in terms of the adjacency matrix. We therefore prove that the accuracy condition proved for the Laplacian implies that the adjacency matrix is also well-approximated by the sparsified adjacency matrix, when the above row-sparsity condition is met.

We then show that this condition of the adjacency matrix being well-approximated is sufficient for Hamiltonian simulation with the sparsified matrix to well-approximate the actual case.

We are also interested in the verification of sparsity, and to this end we propose two quantum algorithms that can verify whether or not a matrix is row-sparse in $O(n^{3/2})$ time, for an $n \times n$ matrix, which represents an improvement on the $O(n^2)$ time required classically.
2 Setup and problem statement
We base our exposition on the method of spectral sparsification using effective resistance sampling developed in [1]. Given a graph $G = (V, E, W)$ with $n$ vertices, $m$ edges, and an $m \times m$ diagonal matrix $W$ of edge weights, spectral sparsification generates another graph $\tilde{G} = (V, \tilde{E}, \tilde{W})$ such that

(2)  $|\tilde{E}| = O(n \log n / \epsilon^2), \qquad (1-\epsilon)\, L \preceq \tilde{L} \preceq (1+\epsilon)\, L,$

where $L$ represents the Laplacian matrix, and the second condition holds in the usual partial order on positive semidefinite matrices. The runtime of this classical algorithm is $\tilde{O}(m/\epsilon^2)$.
The simplest process of this kind can be described as sampling from the edge set according to some probability mass function (pmf) and hence populating $\tilde{E}$. If for each edge $e$ the probability of picking $e$ is $p_e$, we can marginalise out the edges and consider the pmf induced on the vertices, defining $q_v = \sum_{e \ni v} p_e$, which in matrix form is $q = |B|^{T} p$, where $B$ denotes the edge–vertex incidence matrix of the graph.
In this article, we show that the sparsifier generated by a spectral sparsification algorithm which uses sampling methods has, under certain conditions and with high probability, the additional property that it is row-sparse, i.e., that the maximum degree of the sparsifier grows only polylogarithmically in $n$.
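As a concrete illustration of the sampling process described above, the following sketch (in Python with NumPy) samples edges with probability proportional to weight times effective resistance and reweights them so that the Laplacian is preserved in expectation. The function name and the dense pseudoinverse computation are our own illustrative choices for a small demo, not the implementation of [1]:

```python
import numpy as np

def effective_resistance_sparsify(W, num_samples, seed=0):
    # Illustrative sketch of effective-resistance sampling (not the code of [1]).
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                # graph Laplacian L = D - A
    Lpinv = np.linalg.pinv(L)                     # dense pseudoinverse: fine for a small demo
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if W[i, j] > 0]
    # effective resistance of edge (i, j): (e_i - e_j)^T L^+ (e_i - e_j)
    R = np.array([Lpinv[i, i] + Lpinv[j, j] - 2 * Lpinv[i, j] for i, j in edges])
    w = np.array([W[i, j] for i, j in edges])
    p = w * R / np.sum(w * R)                     # edge-sampling pmf, p_e proportional to w_e R_e
    Wt = np.zeros_like(W, dtype=float)
    for e in rng.choice(len(edges), size=num_samples, p=p):
        i, j = edges[e]
        Wt[i, j] += w[e] / (num_samples * p[e])   # reweight so the sparsified Laplacian is unbiased
        Wt[j, i] = Wt[i, j]
    return Wt

# complete graph on 8 vertices with unit weights
n = 8
W = np.ones((n, n)) - np.eye(n)
Wt = effective_resistance_sparsify(W, num_samples=60)
kept_edges = np.count_nonzero(np.triu(Wt, 1))
```

Note that the number of distinct edges kept is at most the number of samples drawn, which is how the edge count of the sparsifier is controlled.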
3 Relating sparsification to efficient Hamiltonian simulation
In this section we prove three results that allow us to make the connection between classical sparsification and efficient Hamiltonian simulation in the query complexity model: firstly, we prove a necessary and sufficient condition for sparsity to imply row sparsity; secondly, we show that the asymptotic convergence of the Laplacian proven by [1] is sufficient for the asymptotic convergence of the adjacency matrices that we require; and finally we show that the propagation of error from the sparsification process into the sparsified Hamiltonian in the sense described in the sparsification bounds does not lead to a significant increase in error in the Hamiltonian simulation.
3.1 A necessary and sufficient condition for row sparsity
A matrix is row-sparse if the number of nonzero entries in each row grows at most polylogarithmically in the size of the matrix. Clearly row-sparsity implies sparsity, but the converse does not hold in general: for example, a star graph has a sparse adjacency matrix, but every off-diagonal element of the row corresponding to the centre of the star is one, and the matrix is therefore obviously not row-sparse.
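The star graph example can be made concrete in a few lines of NumPy: the adjacency matrix has only $n-1$ edges (sparse overall), yet its centre row has $n-1$ nonzero entries.

```python
import numpy as np

n = 16
A = np.zeros((n, n), dtype=int)
A[0, 1:] = 1          # the centre vertex is joined to every other vertex
A[1:, 0] = 1

num_edges = int(A.sum()) // 2                        # n - 1 edges: sparse overall
row_nonzeros = [int(np.count_nonzero(A[i])) for i in range(n)]
max_row_nonzeros = max(row_nonzeros)                 # n - 1: the centre row is dense
```
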
In the effective resistance Hamiltonian sparsification method, each edge of the adjacency graph corresponding to the Hamiltonian is chosen with a certain probability, and a defined number of i.i.d. samples is drawn (a number that grows at most as $\tilde{O}(n)$) to construct the sparsified Hamiltonian. Marginalising over the edges incident to each node as described in Section 2, to get a vertex selection probability distribution (note that, as each edge is connected to two vertices, vertex selection is not mutually exclusive, so these probabilities will sum to more than one – a property later used in upper-bounding), it is obvious that a necessary condition for row sparsity is that no vertex is selected with probability growing faster than $\mathrm{polylog}(n)/n$. We now show that this necessary condition is also sufficient, in both obvious senses of asymptotic statistical row sparsity:
As $n \to \infty$, the probability that any row has a number of nonzero elements greater than $\log^{c} n$ tends to zero, for some constant $c$.

As $n \to \infty$, the expected maximum number of nonzero elements (across the rows of the matrix) grows as $O(\log^{c} n)$ for some constant $c$.
Hereafter these are termed the first and second properties of asymptotic row sparsity.
Proposition 3.1.
For a Hamiltonian sparsified by the effective resistance method, if no row is selected with probability greater than $\mathrm{polylog}(n)/n$, then it satisfies the first row sparsity property.
Proof.
Let the total number of samples drawn be $N = \tilde{O}(n)$, and let $p_{\max} \le \mathrm{polylog}(n)/n$ be the largest row selection probability. The expected number of times that the row with maximum occupation is selected can then be upper-bounded:

(3)  $\mu = N p_{\max} \le \tilde{O}(n) \cdot \frac{\mathrm{polylog}(n)}{n} = \mathrm{polylog}(n).$

Let $X_i$ be the number of nonzero entries in the $i$-th row, and let $X$ be the number of nonzero entries in the row with greatest selection probability. By the Chernoff bound [8], for $\delta \ge 1$:

(4)  $\Pr\left[X \ge (1+\delta)\mu\right] \le e^{-\delta\mu/3};$

letting $(1+\delta)\mu = \log^{c} n$ with $c$ large enough that $\delta \ge 1$ (this requirement is included to ensure that the condition on $\delta$ is met for sufficiently large $n$), and noticing that all the other nodes are less likely to be chosen, the union bound can be invoked:

(5)  $\Pr\left[\max_i X_i \ge \log^{c} n\right] \le n\, e^{-\delta\mu/3} = n \exp\!\left(-\tfrac{1}{3}\left(\log^{c} n - \mu\right)\right),$

where this upper bound also uses the fact that all other rows are chosen with probability less than the row with maximum selection probability. Since $\mu = \mathrm{polylog}(n)$, choosing $c$ large enough that $\log^{c} n \ge 2\mu$ gives:

(6)  $\Pr\left[\max_i X_i \ge \log^{c} n\right] \le n\, e^{-\log^{c} n / 6} \to 0$

as $n \to \infty$. ∎
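The concentration behaviour underlying Proposition 3.1 can be sanity-checked numerically (a Monte Carlo illustration under our assumptions, not part of the proof): with $N = n \log_2 n$ samples and every row selected with probability $1/n \le \log_2(n)/n$, the empirical maximum row occupancy stays small as $n$ grows.

```python
import numpy as np

def max_row_occupancy(n, trials=50, seed=1):
    # draw N = n*log2(n) i.i.d. row selections with a uniform pmf (so every
    # row probability is <= log2(n)/n) and record the worst-case row count
    rng = np.random.default_rng(seed)
    N = int(n * np.log2(n))
    worst = 0
    for _ in range(trials):
        counts = rng.multinomial(N, np.full(n, 1.0 / n))
        worst = max(worst, int(counts.max()))
    return worst

occ_small = max_row_occupancy(64)    # expected per-row count: log2(64) = 6
occ_large = max_row_occupancy(1024)  # expected per-row count: log2(1024) = 10
```

Although $n$ grows by a factor of 16 here, the maximum occupancy grows only modestly, consistent with polylogarithmic row sparsity.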
Proposition 3.2.
For a Hamiltonian sparsified by the effective resistance method, if no row is selected with probability greater than $\mathrm{polylog}(n)/n$, then it satisfies the second row sparsity property.
Proof.
The proof is similar to that of Proposition 3.1, and the same symbols are used. The proposition concerns the expectation of the number of nonzero entries in the row with the most nonzero entries (note that this need not be the row with highest selection probability). This can be upper-bounded:

(7)  $\mathbb{E}\left[\max_i X_i\right] \le \log^{c} n + N \cdot \Pr\left[\max_i X_i > \log^{c} n\right],$

where the second term uses the fact that, if the maximally occupied row is chosen more than $\log^{c} n$ times, it can still only be chosen a total of $N$ times, as this is the number of samples. The first term on the RHS of Eq. (7) is clearly $\mathrm{polylog}(n)$; letting $\zeta$ equal the second term on the RHS of Eq. (7), i.e.

(8)  $\zeta = N \cdot \Pr\left[\max_i X_i > \log^{c} n\right],$

the Chernoff and union bounds used in Proposition 3.1 show that this probability decays faster than any inverse polynomial in $n$, so clearly $\zeta \to 0$ as $n \to \infty$. It follows that $\mathbb{E}\left[\max_i X_i\right] = O(\log^{c} n)$. ∎
3.2 Laplacian convergence implies adjacency matrix convergence when row sparse
From [1], we have that a weighted graph with Laplacian $L$ is sparsified to a graph with Laplacian $\tilde{L}$, such that the following condition is met:

(9)  $(1-\epsilon)\, x^{T} L x \;\le\; x^{T} \tilde{L} x \;\le\; (1+\epsilon)\, x^{T} L x$

for all vectors $x \in \mathbb{R}^{n}$ and some $\epsilon > 0$. We wish to express a similar condition for the adjacency matrix, $A = D - L$, where $D$ is a diagonal matrix in which each element is the sum of the elements in the corresponding row of the adjacency matrix. Our method to do this relies on the property $\mathbb{E}[\tilde{L}_{ii}] = L_{ii}$ for all $i$ – that is, the diagonal elements of the sparsified Laplacian are expected to be the same as those of the actual Laplacian (this can be seen in the analysis in [1]). Additionally, we must assume that each row is expected to be selected at least $\mathrm{polylog}(n)$ times. To an extent it is valid to criticise such a condition as unnecessarily restrictive; however, we do expect the condition in Theorem 3.3 (or another very similar one) to hold even if this were not the case, albeit requiring a very different proof. The inclusion of this condition is therefore for reasons of exposition – it enables us to use a similar proof to the others in this section, and suffices to demonstrate the principle. Moreover, in Section 5 we discuss the physicality of such a restriction. With these restrictions in place, we can give the result as a theorem:
Theorem 3.3.
(10)  $\left\| \tilde{A} - A \right\| \le \epsilon' \left\| A \right\|,$ with probability tending to one as $n \to \infty$,
for some $\epsilon' = O(\epsilon)$.
Proof.
We start by substituting suitable test vectors $x = e_i \pm e_j$ into Eq. (9), and rearranging:
(11) 
where we define the deviation of the sparsified quantities from their expectations, which we address using the Chernoff bound. Addressing the upper tail, we have that:
(12) 
Using the union bound, we have that
(13) 
Likewise, for the lower tail we have that:
(14) 
and again using the union bound:
(15) 
So we can see that this condition must hold in order for the closeness to hold asymptotically – i.e., as $n \to \infty$, the probability that we have a 'good' sparsifier tends to one. This dominates the earlier condition, and so we can choose $\epsilon'$ accordingly to complete the proof. ∎
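A small numeric illustration (not a substitute for the proof) of the decomposition underlying Theorem 3.3: since $A = D - L$ and $\tilde{A} = \tilde{D} - \tilde{L}$, the triangle inequality gives $\|\tilde{A} - A\| \le \|\tilde{D} - D\| + \|\tilde{L} - L\|$, so spectral closeness of the Laplacians together with control of the degrees controls the adjacency error. The entrywise perturbation below is a hypothetical stand-in for a sparsifier:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                      # random symmetric adjacency matrix
D = np.diag(A.sum(axis=1))
L = D - A

eps = 0.05                                          # entrywise perturbation strength
P = eps * rng.uniform(-1.0, 1.0, size=(n, n))
P = np.triu(P, 1); P = P + P.T
At = A * (1.0 + P)                                  # toy "sparsifier": perturb existing edge weights
Dt = np.diag(At.sum(axis=1))
Lt = Dt - At

lap_err = np.linalg.norm(Lt - L, 2)                 # spectral-norm error in the Laplacian
deg_err = np.linalg.norm(Dt - D, 2)                 # spectral-norm error in the degrees
adj_err = np.linalg.norm(At - A, 2)                 # spectral-norm error in the adjacency matrix
```
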
3.3 Error propagation: from spectral sparsification to Hamiltonian simulation
It remains to be shown that the fact that the adjacency matrix is well approximated by its sparsified version implies that the Hamiltonian simulation will also be well approximated when the sparsified version is used. To do so, we express Eq. (10) in slightly different (but equivalent) terms, namely that we are given a spectral approximation $\tilde{H}$ of a Hamiltonian $H$, satisfying

(16)  $\left\| \tilde{H} - H \right\| \le \epsilon \left\| H \right\|.$
Consider the following quantity:

(17)  $\left\| \left( e^{-iHt} - e^{-i\tilde{H}t} \right) |\psi\rangle \right\|,$

where we write $\tilde{H} = H + h$ with $h = \tilde{H} - H$ for convenience. Let us first take the simple case when $H$ and $\tilde{H}$ commute; then $e^{-i\tilde{H}t} = e^{-iHt} e^{-iht}$, and so $e^{-iHt} - e^{-i\tilde{H}t} = e^{-iHt}\left( I - e^{-iht} \right)$. Now we can write

(18)  $\left\| e^{-iHt}\left( I - e^{-iht} \right)|\psi\rangle \right\| = \left\| \left( I - e^{-iht} \right)|\psi\rangle \right\| \le \|h\|\, t \le \epsilon\, \|H\|\, t,$

where Eq. (16) gives $\|h\| \le \epsilon \|H\|$. In the last line we have bounded the error by the maximum value the energy can take, given by the spectral norm of $H$.
When $H$ and $\tilde{H}$ do not commute, the first-order Suzuki–Trotter formula suggests that $e^{-i\tilde{H}t} \approx e^{-iHt} e^{-iht}$ up to an error of order $t^{2}\,\|[H, h]\|$. Thus the above analysis still holds for small times $t$. However, for longer evolution times, a more detailed error analysis is required, and we are currently investigating this.
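The commuting-case bound $\|(e^{-iHt} - e^{-i\tilde{H}t})\| \le \epsilon \|H\| t$ can be checked numerically. The "sparsifier" below is a hypothetical stand-in constructed to commute with $H$ exactly: it shares the eigenvectors of $H$ and rescales each eigenvalue by a factor in $[1-\epsilon, 1+\epsilon]$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
H = rng.normal(size=(n, n))
H = (H + H.T) / 2.0                           # real symmetric "Hamiltonian"

eps = 0.01
vals, vecs = np.linalg.eigh(H)
# commuting perturbation: same eigenvectors, eigenvalues rescaled by at most (1 + eps)
vals_t = vals * (1.0 + eps * rng.uniform(-1.0, 1.0, size=n))
Ht = (vecs * vals_t) @ vecs.T

def evolve(M, t):
    # e^{-iMt} via the eigendecomposition of a symmetric matrix
    w, V = np.linalg.eigh(M)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

t = 1.0
sim_err = np.linalg.norm(evolve(H, t) - evolve(Ht, t), 2)
bound = eps * np.linalg.norm(H, 2) * t        # the bound eps * ||H|| * t
```

Because the two evolutions are simultaneously diagonalisable, the error is exactly the worst per-eigenvalue phase discrepancy, which is what the bound controls.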
Runtime overhead:
From the above analysis of error propagation, it is clear that choosing

(19)  $\epsilon = \frac{\delta}{\|H\|\, t}$

ensures that the simulation error is at most $\delta$, to first order. Putting this back into the runtime expression for the spectral sparsification algorithm of [1], we can estimate the (one-time) classical runtime overhead required in order to use sparse Hamiltonian simulation for time $t$, which is given by

(20)  $\tilde{O}\!\left( \frac{m\, \|H\|^{2}\, t^{2}}{\delta^{2}} \right).$
The presence of the spectral norm is to be expected, as it sets the energy scale for the problem; evidently this method is only useful if $\|H\|$ grows slowly with $n$. We expect this to be true for several systems of physical significance, e.g. molecular Hamiltonians, which typically have polynomially many terms in a tensor product Pauli basis (expecting the coupling constants also to scale polynomially for most common molecules).
4 Sparsity testing
Testing if an input function or vector is sparse is a problem that has recently received some attention in the context of big data and machine learning algorithms [11, 12]. Could a quantum algorithm for sparsity testing offer any advantages? Given an oracle that computes matrix entries in place, we could use a comparison oracle to flag an ancillary register as '1' wherever there is a zero entry, and then use quantum amplitude estimation on the ancilla to estimate the number of zero entries. We demonstrate two quantum algorithms below for testing row sparsity of an input matrix.

4.1 Sparsity testing using quantum amplitude estimation
Given a matrix $H \in \mathbb{R}^{n \times n}$ (we make this assumption of being real for simplicity of exposition, and to be consistent with the previous analysis, but the following should easily generalise to complex numbers) that we can access via a unitary quantum oracle $O_H$ that computes its entries in place (to some fixed precision), i.e.

(21)  $O_H\, |i\rangle\, |j\rangle\, |z\rangle = |i\rangle\, |j\rangle\, |z \oplus H_{ij}\rangle,$

where $i, j$ are the row and column indices, and the third register contains the matrix entry to $b$ bits of precision (so $z \in \{0,1\}^{b}$).
Another oracle that we will use is the comparator

(22)  $C\, |x\rangle\, |y\rangle\, |0\rangle = |x\rangle\, |y\rangle\, |x \le y\rangle,$

which can be implemented efficiently using quantum adder circuits [13].
Let us use the oracle $O_H$ to prepare a superposition over the entries of a chosen row $i$:

(23)  $\frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} |i\rangle\, |j\rangle\, |H_{ij}\rangle.$
Then we can adjoin two ancillary registers and, using the comparator oracle, flag with a '1' in the final ancilla those columns $j$ for which $|H_{ij}|$ exceeds a small threshold – that is, those $j$ in the support of row $i$, whose size we denote $s_i$. Here we have assumed that the data register contains the magnitude of $H_{ij}$, so that we can just check whether it is less than a small threshold in order to check whether it is close to zero – the magnitude can be obtained easily by taking advantage of the signed fixed-point representation of $H_{ij}$ (e.g. by simply neglecting the sign bit). Now note that the amplitude of the $|1\rangle$ subspace of the resulting state is proportional to the sparsity of row $i$:
(24)  $\left\| \Pi\, |\psi_i\rangle \right\| = \sqrt{\frac{s_i}{n}},$

where $\Pi$ is a projector onto the flag subspace and $|\psi_i\rangle$ is the flagged state above. This amplitude can be estimated to additive precision $\nu$ using $O(1/\nu)$ queries to $O_H$, using the method of quantum amplitude estimation [14], which would give us a quantity $\tilde{a}_i$ satisfying

$\left| \tilde{a}_i - \sqrt{s_i/n} \right| \le \nu,$

whence we see that choosing $\nu = O\!\left(1/\sqrt{s_i n}\right)$ gives us an additive approximation of $s_i$ to constant precision, using $O\!\left(\sqrt{s_i n}\right)$ queries. Following the same procedure to estimate the sparsities of all $n$ rows, the overall sparsity (which for us is the maximum number of nonzeros in any row or column) can be ascertained in $\tilde{O}\!\left(n^{3/2}\right)$ queries; we can leave the $s_i$ factor out of this consideration since for a row-sparse matrix it is at most polylogarithmic in $n$.
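Classically, we can verify the amplitude relation of Eq. (24) for a toy row: in a uniform superposition over the $n$ columns, the squared amplitude of the flagged subspace equals $s_i/n$. The names and threshold below are illustrative choices, not part of the algorithm specification:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
row = np.zeros(n)
support = rng.choice(n, size=6, replace=False)
row[support] = rng.normal(size=6)                 # a row with s_i = 6 nonzero entries

threshold = 1e-9                                  # "close to zero" cut-off
flags = np.abs(row) > threshold                   # flag value per column
s_i = int(flags.sum())

# uniform superposition over columns: squared amplitude of the flagged subspace
flagged_prob = s_i / n                            # equals s_i / n by construction
amplitude = np.sqrt(flagged_prob)                 # the quantity amplitude estimation targets
```
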
4.2 Sparsity testing using quantum maximum finding
We still use oracle access as in Eq. (21), and we assume the data register has enough qubits to store the sum of the entries in any row. We start by putting the rows in superposition:
(25)  $\frac{1}{\sqrt{n}} \sum_{i=0}^{n-1} |i\rangle\, |0\rangle\, |0\rangle.$
Now we iterate over $n$ calls to the oracle ($j$ is initially 0):

(26)  $|i\rangle\, |j\rangle\, |z\rangle \;\mapsto\; |i\rangle\, |j+1\rangle\, |z + H_{ij}\rangle,$

incrementing $j$ on each iteration, until $j = n-1$, after which we have the state:

(27)  $\frac{1}{\sqrt{n}} \sum_{i=0}^{n-1} |i\rangle\, \Big|\textstyle\sum_{j} H_{ij}\Big\rangle,$

in which the column register, now in a fixed state, can be dispensed with. Therefore in $O(n)$ operations we have created a superposition of the sums of each of the rows, indexed accordingly. Quantum maximum finding methods [15] can make use of this state, preparing it $O(\sqrt{n})$ times, to find the maximum in a further $O(n\sqrt{n})$ operations. Thus we have a quantum algorithm that takes $O(n^{3/2})$ oracle queries and a comparable number of additional quantum arithmetic operations. By contrast, a classical algorithm to check for row sparsity would have to sum over all rows ($O(n^2)$ operations) and then classically find the maximum ($O(n)$ operations); note that it may be possible to do this slightly faster, but it would still be necessary to check a number of elements growing linearly with $n$ for each row, and to check all of the rows.
We remark that the above algorithm that uses quantum maximum finding appears to rely on $H$ being a binary adjacency matrix. When an upper bound on the magnitudes of the entries is available, this limitation can be overcome by normalising the matrix entries by that bound.
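For comparison, the classical baseline against which Section 4 measures the speedup is a straightforward $O(n^2)$ scan; the reference implementation below (our own, not from the paper) also checks that, for a 0/1 matrix, the row-sum trick of Section 4.2 agrees with a direct nonzero count.

```python
import numpy as np

def classical_max_row_nonzeros(A):
    # O(n^2): count nonzeros in every row and take the maximum
    return max(int(np.count_nonzero(A[i])) for i in range(A.shape[0]))

def binary_row_sum_max(A):
    # the row-sum trick of Section 4.2: for a 0/1 matrix the row sum
    # equals the number of nonzero entries in that row
    return int(A.sum(axis=1).max())

n = 8
A = np.eye(n)
A[0, :] = 1                                       # make row 0 dense
assert classical_max_row_nonzeros(A) == binary_row_sum_max(A)
max_nnz = classical_max_row_nonzeros(A)
```
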
5 Discussion
In this paper we have shown how classical sparsifying techniques can be used as a preprocessing step to obtain a row-sparse, row-computable input matrix that can then be used with efficient quantum algorithms for sparse Hamiltonian simulation. The one-time classical overhead in runtime may be justified by the fact that the sparsified output matrix may be used for multiple applications (e.g. for simulating the time evolution of several different states), each of which can be performed efficiently on a quantum computer.
Usually, a quantum algorithm requires every problem instance to be row-sparse. What we have so far, using spectral sparsification, is a guarantee that as $n$ grows, the sparsified output is also row-sparse with high probability. Therefore it is also necessary that the simulation has some sort of checking mechanism, such that the simulation is halted if too many iterations have been required (i.e., because the sparsified Hamiltonian that was actually generated was not, in fact, row-sparse), and restarted with a fresh sparsified Hamiltonian. This should be easy to include in any implementation, and the first and second properties of row sparsity are sufficient in this case to guarantee good overall performance (that is, as $n \to \infty$ the probability of needing to start again vanishes).
On a more general note, it is interesting to consider Hamiltonian sparsification in the context of a suite of simulation algorithms. For example, we have identified that star graphs are sparse, but not row-sparse – thus we can see that physical systems that are dominated by a few components may yield sparsified Hamiltonians that are essentially a superposition of a number of star graphs. Thus, whilst the techniques presented in this paper will not apply, it may be possible to use other techniques such as low-rank approximations. Conversely, for physical systems in which a number of components barely make any impact on the whole (i.e., they have few and/or low-weight edges to other vertices), it is likely to be safe simply to neglect these components. Informally, this can be seen as a justification for assuming that each row is sampled sufficiently many times, as in Section 3.2.
Open problems
Finally, it is worth discussing the deficiencies of this paper. As identified in the introduction, we make three assumptions in the analysis: that the Hamiltonian is real, that its rows are all sampled sufficiently many times, and that it commutes with its sparsifier. The first two of these essentially restrict the physical application of our work, and it would therefore be beneficial to show that the same results hold when these assumptions are removed, as well as tightening the various bounds where possible. The third assumption, however, is more fundamental – it is important to understand whether a sparsified Hamiltonian commutes with the actual Hamiltonian in general, and if not, whether the discrepancy can be shown to be insignificant when the full simulation is analysed (for example by using Trotter formulas or the BCH expansion to quantify errors). However, such a question is of more general relevance than simply plugging a gap in our analysis: the condition given in Eq. (16) seems to be an eminently reasonable general measure of approximation accuracy, which may be used for myriad Hamiltonian approximation methods, and it is therefore important to show that it does indeed lead to accurate Hamiltonian simulation.
References
 [1] Daniel A. Spielman and Nikhil Srivastava. Graph Sparsification by Effective Resistances. SIAM Journal on Computing, 40(6):1913–1926, jan 2011.
 [2] Joshua Batson, Daniel A. Spielman, Nikhil Srivastava, and Shang-Hua Teng. Spectral sparsification of graphs. Communications of the ACM, 56(8):87, aug 2013.
 [3] Joshua Batson, Daniel A. Spielman, and Nikhil Srivastava. Twice-Ramanujan sparsifiers. SIAM Review, 56(2):315–334, jan 2014.
 [4] Daniel A. Spielman and Shang Hua Teng. Spectral sparsification of graphs. SIAM Journal on Computing, 40(4):981–1025, 2011.
 [5] Daniel A. Spielman and Shang Hua Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM Journal on Matrix Analysis and Applications, 35(3):835–885, 2014.
 [6] Irit Dinur. The PCP theorem by gap amplification. Proceedings of the Annual ACM Symposium on Theory of Computing, 2006(3):241–250, 2006.
 [7] Dorit Aharonov and Leo Zhou. Hamiltonian Sparsification and Gap-Simulation. In Avrim Blum, editor, 10th Innovations in Theoretical Computer Science Conference (ITCS 2019), volume 124 of Leibniz International Proceedings in Informatics (LIPIcs), pages 2:1–2:21, Dagstuhl, Germany, 2018. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
 [8] Herman Chernoff. A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Statist., 23(4):493–507, 12 1952.
 [9] Guang Hao Low and Isaac L Chuang. Optimal Hamiltonian Simulation by Quantum Signal Processing. Physical Review Letters, 118(1):010501, 1 2017.
 [10] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. 6 2018.
 [11] Siddharth Barman, Arnab Bhattacharyya, and Suprovat Ghoshal. Testing sparsity over known and unknown bases. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 491–500, Stockholmsmässan, Stockholm, Sweden, 10–15 Jul 2018. PMLR.
 [12] Parikshit Gopalan, Ryan O’Donnell, Rocco A. Servedio, Amir Shpilka, and Karl Wimmer. Testing fourier dimensionality and sparsity. SIAM J. Comput., 40(4):1075–1100, July 2011.
 [13] Craig Gidney. Halving the cost of quantum addition. (1):4–7, 9 2017.
 [14] Gilles Brassard, Peter Hoyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification and estimation. Quantum Computation and Information, Contemporary Mathematics, 305:53–74, 2002.
 [15] Christoph Durr and Peter Hoyer. A quantum algorithm for finding the minimum, 1996.