Randomized Linear Algebra Approaches to Estimate the Von Neumann Entropy of Density Matrices

01/03/2018
by   Eugenia-Maria Kontopoulou, et al.
Purdue University

The von Neumann entropy, named after John von Neumann, is the extension of classical entropy concepts to the field of quantum mechanics and, from a numerical perspective, can be computed by first computing all the eigenvalues of a density matrix, an operation that could be prohibitively expensive for large-scale density matrices. We present and analyze two randomized algorithms to approximate the von Neumann entropy of density matrices: our algorithms leverage recent developments in the Randomized Numerical Linear Algebra (RandNLA) literature, such as randomized trace estimators, provable bounds for the power method, and the use of Taylor series and Chebyshev polynomials to approximate matrix functions. Both algorithms come with provable accuracy guarantees, and our experimental evaluations support our theoretical findings, showing considerable speedups with small loss in accuracy.


1 Introduction

Entropy is a fundamental quantity in many areas of science and engineering. The von Neumann entropy, named after John von Neumann, is the extension of classical entropy concepts to the field of quantum mechanics, and its foundations can be traced to von Neumann's work on Mathematische Grundlagen der Quantenmechanik (originally published in German in 1932 and published in English under the title Mathematical Foundations of Quantum Mechanics in 1955). In that work, von Neumann introduced the notion of the density matrix, which facilitated the extension of the tools of classical statistical mechanics to the quantum domain in order to develop a theory of quantum measurements.

From a mathematical perspective (see Section 1.1 for details) the density matrix $\mathbf{R}$ is a symmetric positive semidefinite matrix in $\mathbb{R}^{n \times n}$ with unit trace. Let $p_i$, $i = 1, \ldots, n$, be the eigenvalues of $\mathbf{R}$ in decreasing order; then, the entropy of $\mathbf{R}$ is defined as follows (note that $\mathbf{R}$ is symmetric positive semidefinite and thus all its eigenvalues are non-negative; if $p_i$ is equal to zero we set $p_i \ln p_i$ to zero as well):

$$\mathcal{H}(\mathbf{R}) = -\sum_{i=1}^{n} p_i \ln p_i. \qquad (1)$$

The above definition is a proper extension of both the Gibbs entropy and the Shannon entropy to the quantum case and implies an obvious algorithm to compute $\mathcal{H}(\mathbf{R})$ by first computing the eigendecomposition of $\mathbf{R}$; known algorithms for this task necessitate $\mathcal{O}(n^3)$ time [8]. Clearly, as $n$ grows, such running times are impractical. For example, [21] describes an entangled two-photon state generated by spontaneous parametric down-conversion, which can result in a density matrix of extremely large dimension $n$.
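To fix ideas, the following minimal sketch (in Python/NumPy; an illustration, not part of the original paper) shows the exact, eigendecomposition-based computation of $\mathcal{H}(\mathbf{R})$ that the randomized algorithms below are designed to avoid.

```python
import numpy as np

def exact_von_neumann_entropy(R):
    """Exact H(R) = -sum_i p_i ln p_i via a full eigendecomposition.

    R is assumed to be symmetric positive semidefinite with unit trace;
    this costs O(n^3) time and is the baseline the randomized
    algorithms below try to beat.
    """
    p = np.linalg.eigvalsh(R)          # all eigenvalues of R
    p = p[p > 0]                       # by convention, 0 * ln 0 = 0
    return float(-np.sum(p * np.log(p)))

# Tiny usage example: a random low-rank density matrix.
rng = np.random.default_rng(0)
n, k = 200, 5
psi, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal pure states
prob = rng.random(k); prob /= prob.sum()              # probabilities summing to one
R = (psi * prob) @ psi.T
print(exact_von_neumann_entropy(R))
```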

Motivated by the above discussion, we seek numerical algorithms that approximate the von Neumann entropy of large density matrices, i.e., symmetric positive semidefinite matrices with unit trace, faster than the trivial approach. Our algorithms build upon recent developments in the field of Randomized Numerical Linear Algebra (RandNLA), an interdisciplinary research area that exploits randomization as a computational resource to develop improved algorithms for large-scale linear algebra problems. Indeed, our work here lies at the intersection of RandNLA and information theory, delivering novel randomized linear algebra algorithms and related quality-of-approximation results for a fundamental information-theoretic metric.

1.1 Background

We will focus on finite-dimensional function (state) spaces. In this setting, the density matrix $\mathbf{R} \in \mathbb{R}^{n \times n}$ represents the statistical mixture of $k$ pure states and has the form

$$\mathbf{R} = \sum_{i=1}^{k} p_i \, \boldsymbol{\psi}_i \boldsymbol{\psi}_i^\top. \qquad (2)$$

The vectors $\boldsymbol{\psi}_i \in \mathbb{R}^{n}$, for $i = 1, \ldots, k$, represent the pure states and can be assumed to be pairwise orthogonal and normal, while the $p_i$'s correspond to the probability of each state and satisfy $p_i > 0$ and $\sum_{i=1}^{k} p_i = 1$. From a linear algebraic perspective, eqn. (2) can be rewritten as

$$\mathbf{R} = \boldsymbol{\Psi} \boldsymbol{\Sigma}_p \boldsymbol{\Psi}^\top, \qquad (3)$$

where $\boldsymbol{\Psi} \in \mathbb{R}^{n \times k}$ is the matrix whose columns are the vectors $\boldsymbol{\psi}_i$ and $\boldsymbol{\Sigma}_p \in \mathbb{R}^{k \times k}$ is a diagonal matrix whose entries are the (positive) $p_i$'s. Given our assumptions on the $\boldsymbol{\psi}_i$'s, $\boldsymbol{\Psi}^\top \boldsymbol{\Psi} = \mathbf{I}_k$; also, $\mathbf{R}$ is symmetric positive semidefinite with its eigenvalues equal to the $p_i$'s and corresponding left/right singular vectors equal to the $\boldsymbol{\psi}_i$'s. Notice that eqn. (3) essentially reveals the (thin) Singular Value Decomposition (SVD) [8] of $\mathbf{R}$. The von Neumann entropy of $\mathbf{R}$, denoted by $\mathcal{H}(\mathbf{R})$, is equal to (see also eqn. (1))

$$\mathcal{H}(\mathbf{R}) = -\sum_{i=1}^{k} p_i \ln p_i = -\operatorname{tr}\left(\mathbf{R} \ln \mathbf{R}\right). \qquad (4)$$

The second equality follows from the definition of matrix functions [11]. More precisely, we overload notation and consider the full SVD of $\mathbf{R}$, namely $\mathbf{R} = \mathbf{U} \boldsymbol{\Sigma}_p \mathbf{U}^\top$, where $\mathbf{U} \in \mathbb{R}^{n \times n}$ is an orthogonal matrix whose top $k$ columns correspond to the pure states and whose bottom $n - k$ columns are chosen so that $\mathbf{U}\mathbf{U}^\top = \mathbf{U}^\top\mathbf{U} = \mathbf{I}_n$. Here $\boldsymbol{\Sigma}_p \in \mathbb{R}^{n \times n}$ is a diagonal matrix whose bottom $n - k$ diagonal entries are set to zero. Let $h(x) = x \ln x$ for any $x > 0$ and let $h(0) = 0$. Then, using the cyclical property of the trace and the definition of $h$,

$$\mathcal{H}(\mathbf{R}) = -\operatorname{tr}\left(h(\mathbf{R})\right) = -\operatorname{tr}\left(\mathbf{U}\, h(\boldsymbol{\Sigma}_p)\, \mathbf{U}^\top\right) = -\sum_{i=1}^{k} p_i \ln p_i. \qquad (5)$$

1.2 Our contributions

We present and analyze three randomized algorithms to approximate the von Neumann entropy of density matrices. The first two algorithms (Sections 2 and 3) leverage two different polynomial approximations of the matrix function underlying the entropy: the first approximation uses a Taylor series expansion while the second approximation uses Chebyshev polynomials. Both algorithms return, with high probability, relative-error approximations to the true entropy of the input density matrix, under certain assumptions. More specifically, in both cases, we need to assume that the input density matrix has no zero eigenvalues, or, equivalently, that the probabilities $p_i$, $i = 1, \ldots, n$, corresponding to the underlying pure states are non-zero. The running time of both algorithms is proportional to the sparsity of the input density matrix and depends (see Theorems 1 and 3 for precise statements) on, roughly, the ratio of the largest to the smallest probability (recall that the smallest probability is assumed to be non-zero), as well as the desired accuracy.

The third algorithm (Section 4) is fundamentally different, if not orthogonal, to the previous two approaches. It leverages the power of random projections [6, 22] to approximate numerical linear algebra quantities, such as the eigenvalues of a matrix. Assuming that the density matrix has exactly $k$ non-zero eigenvalues, i.e., there are $k$ pure states with non-zero probabilities $p_i$, $i = 1, \ldots, k$, the proposed algorithm returns, with high probability, relative error approximations to all $k$ probabilities $p_i$. This, in turn, implies an additive-relative error approximation to the entropy of the density matrix, which, under a mild assumption on the true entropy of the density matrix, becomes a relative error approximation (see Theorem 5 for a precise statement). The running time of the algorithm is again proportional to the sparsity of the density matrix and depends on the target accuracy, but, unlike the previous two algorithms, does not depend on any function of the $p_i$'s.

From a technical perspective, the theoretical analysis of the first two algorithms combines polynomial approximations to matrix functions, using either Taylor series or Chebyshev polynomials, with randomized trace estimators. A provably accurate variant of the power method is used to estimate the largest probability $p_1$. If this estimate is significantly smaller than one, it can improve the running times of the proposed algorithms (see the discussion after Theorem 1). The third algorithm leverages a powerful, multiplicative matrix perturbation result that first appeared in [5]. Our work in Section 4 is a novel application of this inequality to derive bounds for RandNLA algorithms.

Finally, in Section 5, we present a detailed evaluation of our algorithms on synthetic density matrices of various sizes, most of which were generated using Matlab's QETLAB toolbox [13]. For some of the larger matrices used in our evaluations, the exact computation of the entropy takes hours, whereas our algorithms return approximations with very small relative errors in only a few minutes.

1.3 Prior work

The first non-trivial algorithm to approximate the von Neumann entropy of a density matrix appeared in [21]. Their approach is essentially the same as our approach in Section 3. Indeed, our algorithm in Section 3 was inspired by their approach. However, our analysis is somewhat different, leveraging a provably accurate variant of the power method as well as provably accurate trace estimators to derive a relative error approximation to the entropy of a density matrix, under appropriate assumptions. A detailed, technical comparison between our results in Section 3 and the work of [21] is delegated to Section 3.3.

Independently and in parallel with our work, [15] presented a multipoint interpolation algorithm (building upon [10]) to compute a relative error approximation for the entropy of a real matrix with bounded condition number. The running time of the algorithm of Theorem 35 of [15] does not depend on the condition number of the input matrix (i.e., the ratio of the largest to the smallest probability), which is a clear advantage in the case of ill-conditioned matrices. However, the dependency of the algorithm of Theorem 35 of [15] on terms involving $\mathrm{nnz}(\mathbf{R})$ (the number of non-zero elements of the matrix $\mathbf{R}$) and the accuracy parameter could blow up the running time of the proposed algorithm for reasonably conditioned matrices.

We also note the recent work in [3], which used Taylor approximations to matrix functions to estimate the log determinant of symmetric positive definite matrices (see also Section 1.2 of [3] for an overview of prior work on approximating matrix functions via Taylor series). The work of [9] used a Chebyshev polynomial approximation to estimate the log determinant of a matrix and is reminiscent of our approach in Section 3 and, of course, of the work of [21].

We conclude the section by noting that our algorithms will use two tools (described, for the sake of completeness, in the Appendix) that appeared in prior work. The first tool is the power method, with a provable analysis that first appeared in [19]. The second tool is a provably accurate trace estimation algorithm for symmetric positive semidefinite matrices that appeared in [2].
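For concreteness, the following sketch (in Python/NumPy) illustrates simplified versions of these two tools: a basic power method for the largest eigenvalue of a symmetric positive semidefinite matrix and a Gaussian trace estimator that only requires matrix-vector products. The iteration and probe counts here are placeholders; the provable variants of [19] and [2] (Algorithm 6 and the trace estimator of Lemma 10) set these parameters to obtain the guarantees used in our proofs.

```python
import numpy as np

def power_method_estimate(R, num_iters=100, rng=None):
    """Rough estimate of the largest eigenvalue p_1 of a symmetric PSD matrix R.

    Simplified stand-in for Algorithm 6: iterate from a random Gaussian
    start and return the final Rayleigh quotient.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(R.shape[0])
    for _ in range(num_iters):
        x = R @ x
        x /= np.linalg.norm(x)
    return float(x @ (R @ x))

def gaussian_trace_estimate(matvec, n, num_probes, rng=None):
    """Estimate tr(A) of a symmetric PSD matrix A available only via x -> A x.

    Averages the quadratic forms g^T A g over i.i.d. Gaussian probes g,
    since E[g^T A g] = tr(A).
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(num_probes):
        g = rng.standard_normal(n)
        total += g @ matvec(g)
    return total / num_probes
```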

2 An approach via Taylor series

Our first approach to approximate the von Neumann entropy of a density matrix uses a Taylor series expansion to approximate the logarithm of a matrix, combined with a relative-error trace estimator for symmetric positive semi-definite matrices and the power method to upper bound the largest singular value of a matrix.

2.1 Algorithm and Main Theorem

Our main result is an analysis of Algorithm 1 (see below) that guarantees a relative error approximation to the entropy of the density matrix $\mathbf{R}$, under the assumption that $\mathbf{R}$ has $n$ pure states with probabilities $p_i \geq \ell > 0$ for all $i = 1, \ldots, n$.

1:  INPUT: $\mathbf{R} \in \mathbb{R}^{n \times n}$, accuracy parameter $\epsilon > 0$, failure probability $\delta$, and integer $m > 0$.
2:  Estimate the largest probability $p_1$ using Algorithm 6 (see Appendix), with parameters chosen as in its analysis.
3:  Set $u$, an upper bound on all the probabilities $p_i$, using the estimate of the previous step.
4:  Set $s$, the number of trace-estimation queries, as a function of $\epsilon$ and $\delta$.
5:  Let $\mathbf{g}_1, \ldots, \mathbf{g}_s$ be i.i.d. random Gaussian vectors.
6:  OUTPUT: return the estimate $\widehat{\mathcal{H}}(\mathbf{R})$ formed from the truncated Taylor expansion and the Gaussian trace estimates.
Algorithm 1 A Taylor series approach to estimate the entropy.
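The following sketch (in Python/NumPy; a simplified illustration of the structure of Algorithm 1, with $u$, $m$, and $s$ taken as inputs rather than set as prescribed by Theorem 1) combines a truncated Taylor expansion of the matrix logarithm (see Lemma 2 below) with Gaussian trace estimation.

```python
import numpy as np

def taylor_entropy_estimate(R, u, m, s, rng=None):
    """Sketch of a Taylor-series entropy estimator.

    Uses H(R) = ln(1/u) + sum_{k>=1} tr(R (I - R/u)^k) / k, valid for any
    u with p_1 <= u <= 1, truncates the series at m terms, and replaces
    each trace with a Gaussian trace estimate over s probe vectors.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = R.shape[0]
    acc = 0.0
    for _ in range(s):
        g = rng.standard_normal(n)
        v = g.copy()
        for k in range(1, m + 1):
            v = v - (R @ v) / u          # v = (I - R/u)^k @ g
            acc += (g @ (R @ v)) / k     # probe for tr(R (I - R/u)^k)
    return float(np.log(1.0 / u) + acc / s)
```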

The following theorem is our main quality-of-approximation result for Algorithm 1.

Theorem 1.

Let $\mathbf{R}$ be a density matrix such that all probabilities $p_i$, $i = 1, \ldots, n$, satisfy $p_i \geq \ell > 0$. Let $u$ be computed as in Algorithm 1 and let $\widehat{\mathcal{H}}(\mathbf{R})$ be the output of Algorithm 1 on inputs $\mathbf{R}$, $\epsilon$, $\delta$, and $m$. Then, with high probability, $\widehat{\mathcal{H}}(\mathbf{R})$ is a relative error approximation to $\mathcal{H}(\mathbf{R})$, provided the number of Taylor terms $m$ and the number of Gaussian vectors $s$ are set as functions of $\epsilon$, $\delta$, $\ell$, and $u$. The running time of the algorithm is proportional to $m \cdot s \cdot \mathrm{nnz}(\mathbf{R})$, plus the cost of Algorithm 6.

A few remarks are necessary to better understand the above theorem. First, $\ell$ could be set to the smallest of the probabilities corresponding to the pure states of the density matrix $\mathbf{R}$. Second, it should be obvious that $u$ in Algorithm 1 could simply be set to one, in which case we could avoid calling Algorithm 6 to estimate $p_1$ and compute $u$. However, if $p_1$ is small, then $u$ could be significantly smaller than one, thus reducing the running time of Algorithm 1, which depends on the ratio $u/\ell$. Third, ideally, if the exact largest and smallest probabilities were used instead of $u$ and $\ell$, respectively, the running time of the algorithm would scale with the ratio of the largest to the smallest probability.

2.2 Proof of Theorem 1

We now prove Theorem 1, which analyzes the performance of Algorithm 1. Our first lemma presents a simple expression for $\mathcal{H}(\mathbf{R})$ using a Taylor series expansion.

Lemma 2.

Let $\mathbf{R}$ be a symmetric positive definite matrix with unit trace and whose eigenvalues lie in the interval $[\ell, u]$, for some $0 < \ell \leq u \leq 1$. Then,
$$\mathcal{H}(\mathbf{R}) = \ln u^{-1} + \sum_{k=1}^{\infty} \frac{\operatorname{tr}\left(\mathbf{R}\left(\mathbf{I}_n - \mathbf{R}/u\right)^{k}\right)}{k}.$$

Proof.

From the definition of the von Neumann entropy and a Taylor expansion,
$$\mathcal{H}(\mathbf{R}) = -\operatorname{tr}\left(\mathbf{R}\ln\mathbf{R}\right) = -\operatorname{tr}\left(\mathbf{R}\ln\left(u \cdot \tfrac{\mathbf{R}}{u}\right)\right) = \ln u^{-1}\operatorname{tr}(\mathbf{R}) - \operatorname{tr}\left(\mathbf{R}\ln\tfrac{\mathbf{R}}{u}\right) = \ln u^{-1} + \sum_{k=1}^{\infty} \frac{\operatorname{tr}\left(\mathbf{R}\left(\mathbf{I}_n - \mathbf{R}/u\right)^{k}\right)}{k}.$$
The last two equalities follow since $\mathbf{R}$ has unit trace and from a Taylor expansion: indeed, $\ln \mathbf{A} = -\sum_{k=1}^{\infty}\left(\mathbf{I}_n - \mathbf{A}\right)^{k}/k$ for a symmetric positive definite matrix $\mathbf{A}$ whose eigenvalues all lie in the interval $(0, 1]$. We note that the eigenvalues of $\mathbf{I}_n - \mathbf{R}/u$ are in the interval $[0, 1 - \ell/u]$, whose upper bound is strictly less than one since, by our assumptions, $\ell > 0$. ∎
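As a quick numerical sanity check of the identity in Lemma 2, the following snippet (Python/NumPy; the matrix, the upper bound $u$, and the truncation level are arbitrary illustrative choices) compares the exact entropy of a small diagonal density matrix with a truncated version of the series.

```python
import numpy as np

# Sanity check of the series in Lemma 2 on a tiny diagonal density matrix.
p = np.array([0.5, 0.3, 0.2])            # probabilities (eigenvalues), unit trace
R = np.diag(p)
u = 0.6                                   # any upper bound on the largest eigenvalue
exact = -np.sum(p * np.log(p))
truncated = np.log(1.0 / u) + sum(
    np.trace(R @ np.linalg.matrix_power(np.eye(3) - R / u, k)) / k
    for k in range(1, 200)
)
print(exact, truncated)                   # the two values agree to many digits
```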

We now proceed to prove Theorem 1. We will condition our analysis on Algorithm 6 being successful, which happens with probability at least $1 - \delta$. In this case, $u$ is an upper bound for all probabilities $p_i$. We start by splitting $\left|\widehat{\mathcal{H}}(\mathbf{R}) - \mathcal{H}(\mathbf{R})\right|$, via the triangle inequality, into two terms: the error incurred by the randomized trace estimation and the error incurred by truncating the Taylor series of Lemma 2 after $m$ terms. We now bound the two terms separately. For the first term, the idea is to apply Lemma 10 to the symmetric positive semidefinite matrix given by the first $m$ terms of the expansion. Hence, with probability at least $1 - \delta$:

(7)

A subtle point in applying Lemma 10 is that the matrix must be symmetric positive semidefinite. To prove this, let the SVD of $\mathbf{R}$ be $\mathbf{R} = \mathbf{U}\boldsymbol{\Sigma}_p\mathbf{U}^\top$, where all three matrices are in $\mathbb{R}^{n \times n}$ and the diagonal entries of $\boldsymbol{\Sigma}_p$ are in the interval $[\ell, u]$. Then, it is easy to see that $\mathbf{I}_n - \mathbf{R}/u = \mathbf{U}\left(\mathbf{I}_n - \boldsymbol{\Sigma}_p/u\right)\mathbf{U}^\top$ and $\left(\mathbf{I}_n - \mathbf{R}/u\right)^{k} = \mathbf{U}\left(\mathbf{I}_n - \boldsymbol{\Sigma}_p/u\right)^{k}\mathbf{U}^\top$, where the diagonal entries of $\mathbf{I}_n - \boldsymbol{\Sigma}_p/u$ are non-negative, since the largest entry in $\boldsymbol{\Sigma}_p$ is upper bounded by $u$. This proves that $\left(\mathbf{I}_n - \mathbf{R}/u\right)^{k}$ is symmetric positive semidefinite for any $k$, a fact which will be useful throughout the proof. Writing the matrix of interest in terms of these factors shows that it is symmetric positive semidefinite. Additionally, since it is symmetric positive semidefinite, its trace is non-negative, which proves the second inequality in eqn. (7) as well.

We proceed to bound the second term as follows:

(8)
(9)
(10)

To prove eqn. (8), we used von Neumann's trace inequality: for any two matrices $\mathbf{A}$ and $\mathbf{B}$, $\operatorname{tr}(\mathbf{A}\mathbf{B}) \leq \sum_{i} \sigma_i(\mathbf{A})\,\sigma_i(\mathbf{B})$, where $\sigma_i(\mathbf{A})$ (respectively $\sigma_i(\mathbf{B})$) denotes the $i$-th singular value of $\mathbf{A}$ (respectively $\mathbf{B}$). Since $\sigma_1(\mathbf{B}) = \|\mathbf{B}\|_2$ (its largest singular value), this implies that $\operatorname{tr}(\mathbf{A}\mathbf{B}) \leq \|\mathbf{B}\|_2 \sum_i \sigma_i(\mathbf{A})$; if $\mathbf{A}$ is symmetric positive semidefinite, then $\sum_i \sigma_i(\mathbf{A}) = \operatorname{tr}(\mathbf{A})$. Eqn. (8) now follows since the matrix involved is symmetric positive semidefinite (this can be proven using an argument similar to the one used to prove eqn. (7)). To prove eqn. (9), we used a standard scalar inequality that holds for any positive integer $k$. Finally, to prove eqn. (10), we used the fact that the smallest entry in $\boldsymbol{\Sigma}_p$ is at least $\ell$ by our assumptions. We also removed unnecessary absolute values since the quantity involved is non-negative for any positive integer $k$.

Combining the bounds for the two terms gives

We have already proven in Lemma 2 that

where the last inequality follows from our assumptions. Collecting our results, we get

Setting $m$ and the number of Gaussian vectors $s$ as prescribed in the statement of the theorem, and using the above bounds, guarantees the claimed relative error approximation and concludes the proof of the theorem. We note that the failure probability of the algorithm is at most the sum of the failure probabilities of the power method and the trace estimation algorithm.

Finally, we discuss the running time of Algorithm 1, which is dominated by the computation of the $s$ quadratic forms, each involving $\mathcal{O}(m)$ matrix-vector products with $\mathbf{R}$, i.e., $\mathcal{O}(m \cdot s \cdot \mathrm{nnz}(\mathbf{R}))$ time. Substituting the values of $m$ and $s$, and accounting for the running time of Algorithm 6, yields the final running time of the algorithm.

3 An approach via Chebyshev polynomials

Our second approach is to use a Chebyshev polynomial-based approximation scheme to estimate the entropy of a density matrix. Our approach follows the work of [21], but our analysis uses the trace estimators of [2] as well as Algorithm 6 and its analysis. Importantly, we present conditions under which the proposed approach is competitive with the approach of Section 2.

3.1 Algorithm and Main Theorem

The proposed algorithm leverages the fact that the von Neumann entropy of a density matrix is equal to the (negative) trace of the matrix function $\mathbf{R}\ln\mathbf{R}$ and approximates this matrix function by a sum of Chebyshev polynomials; then, the trace of the resulting matrix is estimated using the trace estimator of [2].

Let $f_m(x) = \sum_{w=0}^{m} a_w T_w\!\left(\frac{2x}{u} - 1\right)$, with coefficients $a_0$, $a_1$, and $a_w$ for $w \geq 2$ as specified in Lemma 4 below, and with $T_w(\cdot)$ denoting the Chebyshev polynomial of the first kind for any integer $w \geq 0$. Algorithm 2 computes $u$ (an upper bound estimate for the largest probability $p_1$ of the density matrix $\mathbf{R}$) and then computes $f_m(\mathbf{R})$ and estimates its trace. We note that this computation can be done efficiently using Clenshaw's algorithm; see Appendix C for the well-known approach.
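As an illustration of the Clenshaw recurrence mentioned above, the following sketch (in Python/NumPy; an illustration rather than the exact Appendix C routine) evaluates $\left(\sum_{w=0}^{m} a_w T_w(\mathbf{A})\right)\mathbf{g}$ using only matrix-vector products with $\mathbf{A}$. For Algorithm 2, $\mathbf{A}$ would be the shifted and rescaled matrix $2\mathbf{R}/u - \mathbf{I}_n$, whose eigenvalues lie in $[-1, 1]$, and the quadratic forms $\mathbf{g}_i^\top f_m(\mathbf{R})\mathbf{g}_i$ follow by a dot product with the returned vector.

```python
import numpy as np

def clenshaw_chebyshev_matvec(A_matvec, coeffs, g):
    """Evaluate (sum_w coeffs[w] * T_w(A)) @ g via Clenshaw's recurrence.

    A_matvec computes x -> A x for a symmetric matrix A whose eigenvalues
    lie in [-1, 1]; only about len(coeffs) matrix-vector products are used.
    """
    b1 = np.zeros_like(g)
    b2 = np.zeros_like(g)
    for a_w in coeffs[:0:-1]:                 # w = m, m - 1, ..., 1
        b1, b2 = a_w * g + 2.0 * A_matvec(b1) - b2, b1
    return coeffs[0] * g + A_matvec(b1) - b2
```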

1:  INPUT: $\mathbf{R} \in \mathbb{R}^{n \times n}$, accuracy parameter $\epsilon > 0$, failure probability $\delta$, and integer $m > 0$.
2:  Estimate the largest probability $p_1$ using Algorithm 6 (see Appendix), with parameters chosen as in its analysis.
3:  Set $u$, an upper bound on all the probabilities $p_i$, using the estimate of the previous step.
4:  Set $s$, the number of trace-estimation queries, as a function of $\epsilon$ and $\delta$.
5:  Let $\mathbf{g}_1, \ldots, \mathbf{g}_s$ be i.i.d. random Gaussian vectors.
6:  OUTPUT: the estimate $\widehat{\mathcal{H}}(\mathbf{R})$ formed from the Chebyshev approximation $f_m(\mathbf{R})$ and the Gaussian trace estimates.
Algorithm 2 A Chebyshev polynomial-based approach to estimate the entropy.

Our main result is an analysis of Algorithm 2 that guarantees a relative error approximation to the entropy of the density matrix $\mathbf{R}$, under the assumption that $\mathbf{R}$ has $n$ pure states with probabilities $p_i \geq \ell > 0$ for all $i = 1, \ldots, n$. The following theorem is our main quality-of-approximation result for Algorithm 2.

Theorem 3.

Let $\mathbf{R}$ be a density matrix such that all probabilities $p_i$, $i = 1, \ldots, n$, satisfy $p_i \geq \ell > 0$. Let $u$ be computed as in Algorithm 1 and let $\widehat{\mathcal{H}}(\mathbf{R})$ be the output of Algorithm 2 on inputs $\mathbf{R}$, $\epsilon$, $\delta$, and $m$. Then, with high probability, $\widehat{\mathcal{H}}(\mathbf{R})$ is a relative error approximation to $\mathcal{H}(\mathbf{R})$, provided the degree $m$ and the number of Gaussian vectors $s$ are set as functions of $\epsilon$, $\delta$, $\ell$, and $u$. The running time of the algorithm is proportional to $m \cdot s \cdot \mathrm{nnz}(\mathbf{R})$, plus the cost of Algorithm 6.

The similarities between Theorems 1 and 3 are obvious: the same assumptions and directly comparable accuracy guarantees. The only difference is in the running times: the Taylor series approach has a milder dependency on the accuracy parameter, while the Chebyshev-based approximation has a milder dependency on the ratio $u/\ell$, which controls the behavior of the probabilities $p_i$. However, for small values of the accuracy parameter the comparison between the two running times depends on the interplay of these quantities. Thus, the Chebyshev-based approximation has a milder dependency on the ratio $u/\ell$, but it is not necessarily faster overall when compared to the Taylor-series approach. We also note that the discussion following Theorem 1 is again applicable here.

3.2 Proof of Theorem 3

We will condition our analysis on Algorithm 6 being successful, which happens with probability at least $1 - \delta$. In this case, $u$ is an upper bound for all probabilities $p_i$. We now recall (from Section 1.1) the definition of the function $h(x) = x \ln x$ for any real $x > 0$, with $h(0) = 0$. Let $\mathbf{R} = \mathbf{U}\boldsymbol{\Sigma}_p\mathbf{U}^\top$ be the density matrix, where both $\mathbf{U}$ and $\boldsymbol{\Sigma}_p$ are matrices in $\mathbb{R}^{n \times n}$. Notice that the diagonal entries of $\boldsymbol{\Sigma}_p$ are the $p_i$'s and they satisfy $p_i \leq u$ for all $i$.

Using the definitions of matrix functions from [11], we can now define $h(\mathbf{R}) = \mathbf{U}\, h(\boldsymbol{\Sigma}_p)\, \mathbf{U}^\top$, where $h(\boldsymbol{\Sigma}_p)$ is a diagonal matrix in $\mathbb{R}^{n \times n}$ with entries equal to $h(p_i)$ for all $i$. We now restate Proposition 3.1 from [21] in the context of our work, using our notation.

Lemma 4.

The function $h(x) = x \ln x$ in the interval $[0, u]$ can be approximated by $f_m(x)$ as defined above, where the coefficients $a_0$, $a_1$, and $a_w$ for $w \geq 2$ are explicit functions of $u$ and $w$. For any $m \geq 1$, the approximation error $\left|f_m(x) - h(x)\right|$ is bounded, uniformly for all $x \in [0, u]$, by a quantity that decreases with $m$ (see [21] for the precise statement).

In the above, $T_w(y) = \cos\left(w \arccos(y)\right)$ for any integer $w \geq 0$ and $y \in [-1, 1]$. Notice that the map $x \mapsto 2x/u - 1$ essentially maps the interval $[0, u]$, which is the interval of interest for the function $h(x)$, to $[-1, 1]$, which is the interval over which Chebyshev polynomials are commonly defined. The above lemma exploits the fact that the Chebyshev polynomials form an orthonormal basis for the space of functions over the interval $[-1, 1]$.

We now move on to approximate the entropy using the function $f_m$. First,

$$\operatorname{tr}\left(f_m(\mathbf{R})\right) = \sum_{i=1}^{n} f_m(p_i). \qquad (11)$$

Recall from Section 1.1 that $\mathcal{H}(\mathbf{R}) = -\sum_{i=1}^{n} h(p_i)$. We can now bound the difference between $\mathcal{H}(\mathbf{R})$ and $-\operatorname{tr}\left(f_m(\mathbf{R})\right)$. Indeed,

(12)

The last inequality follows by the final bound in Lemma 4, since all the $p_i$'s are in the interval $[0, u]$.

Recall that we also assumed that all the $p_i$'s are lower-bounded by $\ell$ and thus

(13)

We note that the upper bound on the $p_i$'s follows since the smallest $p_i$ is at least $\ell$ and thus the largest $p_i$ cannot exceed $1 - (n - 1)\ell$. We note that we cannot use the upper bound $u$ in the above formula, since $u$ could be equal to one; $p_1$ is always strictly less than one but it cannot be a priori computed (and thus cannot be used in Algorithm 2), since it is not a priori known.

We can now restate the bound of eqn. (12) as follows:

(14)

where the last inequality follows by setting

(15)

Next, we argue that the matrix $-f_m(\mathbf{R})$ is symmetric positive semidefinite (under our assumptions) and thus one can apply Lemma 10 to estimate its trace. We note that $f_m(\mathbf{R}) = \mathbf{U}\, f_m(\boldsymbol{\Sigma}_p)\, \mathbf{U}^\top$, which trivially proves the symmetry of $f_m(\mathbf{R})$ and also shows that its eigenvalues are equal to $f_m(p_i)$ for all $i$. We now bound

where the inequalities follow from Lemma 4 and our choice for $m$ from eqn. (15). This inequality holds for all $i$ and implies that

using our upper ($1 - (n - 1)\ell$) and lower ($\ell$) bounds on the $p_i$'s. This proves that the $-f_m(p_i)$'s are non-negative for all $i$ and thus $-f_m(\mathbf{R})$ is a symmetric positive semidefinite matrix; it follows that its trace is also non-negative.

We can now apply the trace estimator of Lemma 10 to get

(16)

For the above bound to hold, we need to set

(17)

We now conclude as follows:

The first inequality follows by adding and subtracting the same quantity and using the sub-additivity of the absolute value; the second inequality follows by eqns. (14) and (16); the third inequality follows again by eqn. (14); and the last inequality follows by using the lower bound on $\mathcal{H}(\mathbf{R})$ from eqn. (13).

We note that the failure probability of the algorithm is at most the sum of the failure probabilities of the power method and the trace estimation algorithm. Finally, we discuss the running time of Algorithm 2, which is dominated by the $s$ quadratic forms, each involving $\mathcal{O}(m)$ matrix-vector products with $\mathbf{R}$ via Clenshaw's recurrence, i.e., $\mathcal{O}(m \cdot s \cdot \mathrm{nnz}(\mathbf{R}))$ time. Using the values for $m$ and $s$ from eqns. (15) and (17), and accounting for the running time of Algorithm 6, we obtain the final running time of the algorithm.

3.3 A comparison with the results of [21]

The work of [21] culminates in the error bounds described in its Theorem 4.3 (and the ensuing discussion). In our parlance, [21] first derives the error bound of eqn. (12). It is worth emphasizing that the bound of eqn. (12) holds even if the $p_i$'s are not necessarily strictly positive, as assumed by Theorem 3: the bound holds even if some of the $p_i$'s are equal to zero.

Unfortunately, without imposing a lower bound assumption on the $p_i$'s it is difficult to get a meaningful error bound and an efficient algorithm. Indeed, the error implied by eqn. (12) (without any assumption on the $p_i$'s) necessitates setting $m$ to a very large value (up to a logarithmic factor, as we will discuss shortly). To understand this, note that the entropy of the density matrix ranges between zero and $\ln k$, where $k$ is the rank of the matrix $\mathbf{R}$, i.e., the number of non-zero $p_i$'s. Clearly, $k \leq n$ and thus $\ln n$ is an upper bound for $\mathcal{H}(\mathbf{R})$. Notice that if $\mathcal{H}(\mathbf{R})$ is smaller than the error of eqn. (12), the error bound of eqn. (12) does not even guarantee that the resulting approximation will be positive, which is, of course, meaningless as an approximation to the entropy.

In order to guarantee a relative error bound of the form $\epsilon \cdot \mathcal{H}(\mathbf{R})$ via eqn. (12), we need to set $m$ to be at least

(18)

which even for “large” values of $\mathcal{H}(\mathbf{R})$ (i.e., values close to the upper bound $\ln n$) still implies a large value of $m$. Even with such a large value for $m$, we are still not done: we need an efficient trace estimation procedure for the matrix $f_m(\mathbf{R})$. While this matrix is always symmetric, it is not necessarily positive or negative semidefinite (unless additional assumptions are imposed on the $p_i$'s, as we did in Theorem 3). Unfortunately, we are not aware of any provably accurate, relative error approximation algorithms for the trace of general symmetric matrices: the results of [2, 18] only apply to symmetric positive (or negative) semidefinite matrices. The work of [21] does provide an analysis of a trace estimator for general symmetric matrices (pioneered by Hutchinson in [12]). However, in our notation, in order to achieve a relative error bound, the final error bound of [21] (see eqns. (19) and (20) in [21]) necessitates setting $m$ to the value of eqn. (18). However, in that case, $s$ (the number of random vectors to be generated in order to estimate the trace of a general symmetric matrix) grows prohibitively large, up to logarithmic factors (see eqn. (20) in [21]); with that many random vectors, the running time of just the trace estimation algorithm blows up and could easily exceed the trivial $\mathcal{O}(n^3)$ running time to exactly compute $\mathcal{H}(\mathbf{R})$.

4 An approach via random projection matrices

Finally, we focus on perhaps the most interesting special case: the setting where at most $k$ (out of $n$, with $k \ll n$) of the probabilities $p_i$ of the density matrix of eqn. (2) are non-zero. In this setting, we prove that elegant random-projection-based techniques achieve relative error approximations to all probabilities $p_i$, $i = 1, \ldots, k$. The running time of the proposed approach depends on the particular random projection that is used and can be made to depend on the sparsity of the input matrix.

4.1 Algorithm and Main Theorem

The proposed algorithm uses a random projection matrix $\boldsymbol{\Pi}$ to create a “sketch” of $\mathbf{R}$ in order to approximate the $p_i$'s.

Algorithm 3 Approximating the entropy via random projection matrices
1:  INPUT: Integer $n$ (dimension of matrix $\mathbf{R}$) and integer $k$ (with the rank of $\mathbf{R}$ at most $k$, see eqn. (2)).
2:  Construct the random projection matrix $\boldsymbol{\Pi} \in \mathbb{R}^{n \times s}$ (see Section 4.2 for details on $\boldsymbol{\Pi}$ and $s$).
3:  Compute $\mathbf{R}\boldsymbol{\Pi}$.
4:  Compute and return the (at most) $k$ non-zero singular values of $\mathbf{R}\boldsymbol{\Pi}$, denoted by $\tilde{p}_i$, $i = 1, \ldots, k$.
5:  OUTPUT: $\tilde{p}_i$, $i = 1, \ldots, k$, and $\widehat{\mathcal{H}}(\mathbf{R}) = -\sum_{i=1}^{k} \tilde{p}_i \ln \tilde{p}_i$.

In words, Algorithm 3 creates a sketch of the input matrix $\mathbf{R}$ by post-multiplying it by a random projection matrix; this is a well-known approach from the RandNLA literature (see [6] for details). Assuming that $\mathbf{R}$ has rank at most $k$, which is equivalent to assuming that at most $k$ of the probabilities $p_i$ in eqn. (2) are non-zero (i.e., the system underlying the density matrix has at most $k$ pure states), then the rank of $\mathbf{R}\boldsymbol{\Pi}$ is also at most $k$. In this setting, Algorithm 3 returns the non-zero singular values of $\mathbf{R}\boldsymbol{\Pi}$ as approximations to the $p_i$, $i = 1, \ldots, k$.
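The following sketch (in Python/NumPy) illustrates Algorithm 3, using a Gaussian random projection as a simple stand-in for the structured constructions of Section 4.2; the number of columns $s$ of the projection is taken as an input.

```python
import numpy as np

def entropy_via_random_projection(R, k, s, rng=None):
    """Sketch of Algorithm 3 with a Gaussian projection standing in for
    the structured transforms of Section 4.2.

    Post-multiplies R by an n x s random projection, takes the top-k
    singular values of the sketch as approximations of the p_i's, and
    returns them together with the plug-in entropy estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = R.shape[0]
    Pi = rng.standard_normal((n, s)) / np.sqrt(s)   # random projection matrix
    sketch = R @ Pi                                  # n x s sketch of R
    p_tilde = np.linalg.svd(sketch, compute_uv=False)[:k]
    p_tilde = p_tilde[p_tilde > 0]                   # by convention 0 ln 0 = 0
    return p_tilde, float(-np.sum(p_tilde * np.log(p_tilde)))
```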

The following theorem is our main quality-of-approximation result for Algorithm 3.

Theorem 5.

Let $\mathbf{R}$ be a density matrix with at most $k$ non-zero probabilities and let $\epsilon$ be an accuracy parameter. Then, with probability at least 0.9, the outputs $\tilde{p}_i$ of Algorithm 3 are relative error approximations to the probabilities, i.e., $\left|\tilde{p}_i - p_i\right| \leq \epsilon\, p_i$ for all $i = 1, \ldots, k$. Additionally, $\widehat{\mathcal{H}}(\mathbf{R})$ is an additive-relative error approximation to $\mathcal{H}(\mathbf{R})$. Algorithm 3 (combined with Algorithm 5 below) runs in time proportional to $\mathrm{nnz}(\mathbf{R})$, plus lower-order terms that depend on $k$ and $\epsilon$.

Comparing the above result with Theorems 1 and 3, we note that the above theorem does not necessitate imposing any constraints on the probabilities $p_i$, $i = 1, \ldots, k$. Instead, it suffices to have at most $k$ non-zero probabilities. The final result is an additive-relative error approximation to the entropy of $\mathbf{R}$ (as opposed to the relative error approximations of Theorems 1 and 3); under a mild assumption on $\mathcal{H}(\mathbf{R})$ (recall that $\mathcal{H}(\mathbf{R})$ ranges between zero and $\ln k$), the above bound becomes a true relative error approximation.

4.2 Two constructions for the random projection matrix

We now discuss two constructions for the matrix $\boldsymbol{\Pi}$ and we cite two bounds regarding these constructions from prior work that will be useful in our analysis. The first construction is the subsampled Randomized Hadamard Transform, a simplification of the Fast Johnson-Lindenstrauss Transform of [1]; see [7, 20] for details. We do note that even though it appears that Algorithm 5 is always better than Algorithm 4 (at least in terms of their respective theoretical running times), both algorithms are worth evaluating experimentally: in particular, prior work [17] has reported that Algorithm 4 often outperforms Algorithm 5 in terms of empirical accuracy and running time when the input matrix is dense, as is often the case in our setting. Therefore, we choose to present results (theoretical and empirical) for both well-known constructions of $\boldsymbol{\Pi}$ (Algorithms 4 and 5).

Algorithm 4 The subsampled Randomized Hadamard Transform
1:  INPUT: integers $n$ and $s$ with $s \ll n$.
2:  Let $\mathbf{S}$ be an empty matrix.
3:  For $t = 1, \ldots, s$ (i.i.d. trials with replacement) select uniformly at random an integer $j$ from $\{1, \ldots, n\}$.
4:  If $j$ is selected, then append the column vector $\sqrt{n/s}\;\mathbf{e}_j$ to $\mathbf{S}$, where $\mathbf{e}_j \in \mathbb{R}^{n}$ is the $j$-th canonical vector.
5:  Let $\mathbf{H} \in \mathbb{R}^{n \times n}$ be the normalized Hadamard transform matrix.
6:  Let $\mathbf{D} \in \mathbb{R}^{n \times n}$ be a diagonal matrix whose entries are set to $+1$ or $-1$ with equal probability.
7:  OUTPUT: $\boldsymbol{\Pi} = \mathbf{D}\mathbf{H}\mathbf{S}$.
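For illustration, the following sketch (in Python/NumPy/SciPy) forms the transform of Algorithm 4 as a dense matrix, assuming $n$ is a power of two and using the common $\sqrt{n/s}$ rescaling of the sampled columns; a practical implementation would instead apply the Hadamard transform in $\mathcal{O}(n \ln n)$ time without materializing $\mathbf{H}$.

```python
import numpy as np
from scipy.linalg import hadamard

def srht_matrix(n, s, rng=None):
    """Dense n x s subsampled randomized Hadamard transform (illustrative).

    Pi = D @ H @ S, with D a diagonal matrix of random signs, H the
    normalized n x n Hadamard matrix (n must be a power of two), and S a
    column-sampling matrix rescaled by sqrt(n / s).
    """
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=n)           # diagonal of D
    H = hadamard(n) / np.sqrt(n)                      # normalized Hadamard matrix
    cols = rng.integers(0, n, size=s)                 # s columns sampled with replacement
    S = np.zeros((n, s))
    S[cols, np.arange(s)] = np.sqrt(n / s)            # rescaled sampling matrix
    return (signs[:, None] * H) @ S
```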

The following result has appeared in [7, 20, 22].

Lemma 6.

Let $\boldsymbol{\Psi} \in \mathbb{R}^{n \times k}$ be such that $\boldsymbol{\Psi}^\top\boldsymbol{\Psi} = \mathbf{I}_k$ and let $\boldsymbol{\Pi}$ be constructed by Algorithm 4. Then, with probability at least 0.9,
$$\left\|\boldsymbol{\Psi}^\top\boldsymbol{\Pi}\boldsymbol{\Pi}^\top\boldsymbol{\Psi} - \mathbf{I}_k\right\|_2 \leq \epsilon,$$
by setting $s$ (the number of columns of $\boldsymbol{\Pi}$) to a sufficiently large value that depends on $k$, $\ln n$, and $1/\epsilon^2$ (see [7, 20, 22] for the precise statement).

Our second construction is the input sparsity transform of [4]. This major breakthrough was further analyzed in [14, 16] and we present the following result from [14, Appendix A1].

Algorithm 5 An input-sparsity transform
1:  INPUT: integers $n$ and $s$ with $s \ll n$.
2:  Let $\boldsymbol{\Phi}$ be an empty matrix.
3:  For $i = 1, \ldots, n$ (i.i.d. trials with replacement) select uniformly at random an integer $j$ from $\{1, \ldots, s\}$.
4:  If $j$ is selected, then append the row vector $\mathbf{e}_j^\top$ to $\boldsymbol{\Phi}$, where $\mathbf{e}_j \in \mathbb{R}^{s}$ is the $j$-th canonical vector.
5:  Let $\mathbf{D} \in \mathbb{R}^{n \times n}$ be a diagonal matrix whose entries are set to $+1$ or $-1$ with equal probability.
6:  OUTPUT: $\boldsymbol{\Pi} = \mathbf{D}\boldsymbol{\Phi} \in \mathbb{R}^{n \times s}$.
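Similarly, the following sketch (in Python/SciPy) builds the input-sparsity transform of Algorithm 5 as a sparse matrix with one non-zero per row, so that the product $\mathbf{R}\boldsymbol{\Pi}$ can be formed in time proportional to $\mathrm{nnz}(\mathbf{R})$.

```python
import numpy as np
from scipy.sparse import csr_matrix

def sparse_embedding_matrix(n, s, rng=None):
    """Illustrative n x s input-sparsity (count-sketch style) transform.

    Each of the n rows has a single non-zero entry: a random sign placed in
    a uniformly random column, so R @ Pi can be formed in O(nnz(R)) time.
    """
    rng = np.random.default_rng() if rng is None else rng
    cols = rng.integers(0, s, size=n)                 # hash each row to one of s columns
    signs = rng.choice([-1.0, 1.0], size=n)           # random signs
    return csr_matrix((signs, (np.arange(n), cols)), shape=(n, s))
```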
Lemma 7.

Let $\boldsymbol{\Psi} \in \mathbb{R}^{n \times k}$ be such that $\boldsymbol{\Psi}^\top\boldsymbol{\Psi} = \mathbf{I}_k$ and let $\boldsymbol{\Pi}$ be constructed by Algorithm 5. Then, with probability at least 0.9,
$$\left\|\boldsymbol{\Psi}^\top\boldsymbol{\Pi}\boldsymbol{\Pi}^\top\boldsymbol{\Psi} - \mathbf{I}_k\right\|_2 \leq \epsilon,$$
by setting $s$ (the number of columns of $\boldsymbol{\Pi}$) to a sufficiently large value that depends polynomially on $k$ and $1/\epsilon$ (see [14, Appendix A1] for the precise statement).