Time-dependent Hamiltonian simulation with L^1-norm scaling

06/17/2019 · by Dominic W. Berry, et al.

The difficulty of simulating quantum dynamics depends on the norm of the Hamiltonian. When the Hamiltonian varies with time, the simulation complexity should only depend on this quantity instantaneously. We develop quantum simulation algorithms that exploit this intuition. For the case of sparse Hamiltonian simulation, the gate complexity scales with the $L^1$ norm $\int_0^t \mathrm{d}\tau\, \|H(\tau)\|_{\max}$, whereas the best previous results scale with $t \max_{\tau\in[0,t]} \|H(\tau)\|_{\max}$. We also show analogous results for Hamiltonians that are linear combinations of unitaries. Our approaches thus provide an improvement over previous simulation algorithms that can be substantial when the Hamiltonian varies significantly. We introduce two new techniques: a classical sampler of time-dependent Hamiltonians and a rescaling principle for the Schrödinger equation. The rescaled Dyson-series algorithm is nearly optimal with respect to all parameters of interest, whereas the sampling-based approach is easier to realize for near-term simulation. By leveraging the $L^1$-norm information, we obtain polynomial speedups for semi-classical simulations of scattering processes in quantum chemistry.


1 Introduction

Simulating the Hamiltonian dynamics of a quantum system is one of the most promising applications of a quantum computer. The apparent classical intractability of simulating quantum dynamics led Feynman [25] and others to propose the idea of quantum computation. Quantum computers can simulate various physical systems, including condensed matter physics [3], quantum field theory [29], and quantum chemistry [2, 14, 47]. The study of quantum simulation has also led to the discovery of new quantum algorithms, such as algorithms for linear systems [28], differential equations [9], semidefinite optimization [11], formula evaluation [22], quantum walk [15], and ground-state and thermal-state preparation [42, 20].

Let $H(t)$ be a Hamiltonian defined for $t \in [0,T]$. The problem of Hamiltonian simulation is to approximate the evolution $\exp_{\mathcal{T}}\big({-i}\int_0^T H(t)\,\mathrm{d}t\big)$ using a quantum circuit comprised of elementary quantum gates, where $\exp_{\mathcal{T}}$ denotes the time-ordered matrix exponential. If the Hamiltonian does not depend on time, the evolution operator can be represented in closed form as $e^{-iHT}$. Then the problem can be greatly simplified and it has been thoroughly studied by previous works on quantum simulation [32, 1, 4, 8, 5, 7, 34, 36, 33, 10, 13, 30, 37, 17, 18, 19].

Simulating a general time-dependent Hamiltonian $H(t)$ naturally subsumes the time-independent case, and can be applied to devising quantum control schemes [39], describing quantum chemical reactions [12], and implementing adiabatic quantum algorithms [23]. However, the problem becomes considerably harder and there are fewer quantum algorithms available. Wiebe, Berry, Høyer, and Sanders designed a time-dependent Hamiltonian simulation algorithm based on higher-order product formulas [49]. They assume that $H(t)$ is smooth up to a certain order and they give an example in which a desired approximation cannot be achieved due to the non-differentiability of the Hamiltonian. The smoothness assumption is relaxed in subsequent work by Poulin, Qarry, Somma, and Verstraete [41] based on techniques of Hamiltonian averaging and Monte Carlo estimation. The fractional-query algorithm of Berry, Childs, Cleve, Kothari, and Somma can also simulate time-dependent Hamiltonians [6], with an exponentially improved dependence on precision and only logarithmic dependence on the derivative of the Hamiltonian. A related quantum algorithm for time-dependent Hamiltonian simulation was suggested by Berry, Childs, Cleve, Kothari, and Somma based on the truncated Dyson series [7], which is analyzed explicitly in [30, 37].

In this paper, we study time-dependent Hamiltonian simulation based on a simple intuition: the difficulty of simulating a quantum system should depend on the integrated norm of the Hamiltonian. To elaborate, first consider the special case of simulating a time-independent Hamiltonian $H$. The complexity of such a simulation depends on $\|H\|\,t$ [16], where $\|\cdot\|$ is a matrix norm that quantifies the size of the Hamiltonian. It is common to express the complexity in terms of the spectral norm $\|H\|$ (i.e., the Schatten $\infty$-norm), which quantifies the maximum energy of $H$.

In the general case where the Hamiltonian $H(\tau)$ is time dependent, we expect a quantum simulation algorithm to depend on the Hamiltonian locally in time, and therefore to have complexity that scales with the integrated spectral norm $\int_0^t \mathrm{d}\tau\, \|H(\tau)\|$. This is the $L^1$ norm of $\|H(\tau)\|$ when viewed as a function of $\tau$, so we say such an algorithm has $L^1$-norm scaling. Surprisingly, the existing analysis of simulation algorithms fails to achieve this complexity; rather, their gate complexity scales with the worst-case cost $t \max_{\tau\in[0,t]} \|H(\tau)\|$. It is therefore reasonable to question whether our intuition is correct, or if there exist faster time-dependent Hamiltonian simulation algorithms that can exploit this intuition.¹
¹For the Dyson-series approach, Low and Wiebe claimed that the worst-case scaling may be avoided by a proper segmentation of the time interval [37, Section VI.A]. However, it is unclear how their analysis can be formalized to give an algorithm with complexity that scales with the $L^1$ norm. In Section 4, we propose a rescaling principle for the Schrödinger equation and develop a rescaled Dyson-series algorithm with $L^1$-norm scaling.

Our work answers this question by providing multiple faster quantum algorithms for time-dependent Hamiltonian simulation. These algorithms have gate complexity that scales with the $L^1$ norm $\int_0^t \mathrm{d}\tau\, \|H(\tau)\|$, in contrast to the best previous scaling of $t \max_{\tau\in[0,t]} \|H(\tau)\|$. As the norm inequality $\int_0^t \mathrm{d}\tau\, \|H(\tau)\| \le t \max_{\tau\in[0,t]} \|H(\tau)\|$ always holds but is not saturated in general, these algorithms provide strict speedups over existing algorithms. We further analyze an application to simulating scattering processes in quantum chemistry, showing that our improvement can be favorable in practice.

We introduce notation and terminology and state our assumptions in Section 2. Following standard assumptions about quantum simulation, we consider two different input models of Hamiltonians. The first is the sparse matrix (SM) model, commonly used for analyzing Hamiltonian simulation in general, in which the Hamiltonian is assumed to be sparse and access to the locations and values of the nonzero matrix elements is provided by oracles. We quantify the complexity of a simulation algorithm by the number of queries and additional gates it uses. The second model, favorable for practical applications such as condensed matter physics and quantum chemistry simulation, assumes that the Hamiltonian can be explicitly decomposed as a linear combination of unitaries (LCU), where the coefficients are efficiently computable on a classical computer and the summands can be exponentiated and controlled on a quantum computer. We ignore the cost of implementing the coefficient oracle and focus mainly on the gate complexity. Quantum simulation algorithms can sometimes work more efficiently in other input models, but we study these two models since they are versatile and provide a fair comparison of the gate complexity.

Reference [6] claims that the fractional-query algorithm can simulate time-dependent Hamiltonians with max-norm scaling. However, it is not hard to see that its query complexity in fact scales with the $L^1$ norm. While we do not show how to achieve this scaling for the gate complexity, our analysis is simple and suggests that such a result might be possible. We analyze the query complexity of the fractional-query algorithm in Section 2.5.

We develop two new techniques to simulate time-dependent Hamiltonians with $L^1$-norm scaling. Our first technique is a classical sampling protocol for time-dependent Hamiltonians. In this protocol, we randomly sample a time $\tau \in [0,t]$ and evolve under the time-independent Hamiltonian $H(\tau)$, where the probability distribution is designed to favor those $\tau$ with large $\|H(\tau)\|$. Campbell introduced a discrete sampling scheme for time-independent Hamiltonian simulation [13] and our protocol can be viewed as its continuous analog, which we call “continuous qDRIFT”. We show that continuous qDRIFT is universal, in the sense that any Hamiltonian simulable by [13] can be simulated by continuous qDRIFT with the same complexity. In addition, we shave off a multiplicative factor in the analysis of [13] by explicitly evaluating the rate of change of the evolution with respect to scaling the Hamiltonian. Continuous qDRIFT and its analysis are detailed in Section 3. Our algorithm is also similar in spirit to the approach of Poulin et al. [40] based on Hamiltonian averaging and Monte Carlo estimation, although their algorithm does not have $L^1$-norm scaling. We discuss the relationship between these two approaches in Appendix A.

We also present a general principle for rescaling the Schrödinger equation in Section 4. In the rescaled Schrödinger equation, the time-dependent Hamiltonian has the same norm at all times, so the norm inequality holds with equality. Using this principle, we show that the simulation algorithm based on the truncated Dyson series [7, 37, 30] can also be improved to have $L^1$-norm scaling.

To illustrate how our results might be applied, we identify a specific problem in quantum chemistry for which our $L^1$-norm improvement is advantageous: semi-classical scattering of molecules in a chemical reaction. For such a simulation, $\|H(t)\|$ changes dramatically throughout the evolution, so the $L^1$ norm $\int_0^t \mathrm{d}\tau\, \|H(\tau)\|$ can be significantly smaller than the worst case $t \max_{\tau\in[0,t]} \|H(\tau)\|$. Detailed analysis shows that algorithms with $L^1$-norm scaling offer a polynomial speedup over previous results, as discussed in Section 5.

Finally, we conclude in Section 6 with a brief discussion of the results and some open questions.

2 Preliminaries

2.1 Time-dependent Hamiltonian evolution

Let $H(t)$ be a time-dependent Hamiltonian defined for $t \in [0,T]$. By default, we assume that $H(t)$ is continuously differentiable and nonzero everywhere, and we defer the discussion of pathological cases to Section 6. If the Hamiltonian is time independent, the evolution is given in closed form by the matrix exponential $e^{-iHt}$. However, there exists no such closed-form expression for a general $H(t)$ and we instead represent the evolution by $\exp_{\mathcal{T}}\big({-i}\int_0^t H(\tau)\,\mathrm{d}\tau\big)$, where $\exp_{\mathcal{T}}$ denotes the time-ordered matrix exponential. We have
\[
\exp_{\mathcal{T}}\left(-i\int_0^t H(\tau)\,\mathrm{d}\tau\right) = \lim_{r\to\infty} \prod_{j=r-1}^{0} \exp\left(-i\,\frac{t}{r}\,H\Big(\frac{jt}{r}\Big)\right), \tag{1}
\]
where the product is ordered with later times on the left.
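The limit in (1) also suggests a direct numerical approximation: slice $[0,t]$ into $r$ short intervals and multiply the short-time exponentials in time order. The following minimal NumPy/SciPy sketch illustrates this; the single-qubit Hamiltonian and the function names are our own illustrative choices, not part of the paper.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices for a single-qubit example Hamiltonian (illustrative choice).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def H(tau):
    """Example time-dependent Hamiltonian H(tau) = cos(tau) X + sin(tau) Z."""
    return np.cos(tau) * X + np.sin(tau) * Z

def time_ordered_exp(H, t, r=1000):
    """Approximate exp_T(-i int_0^t H(tau) dtau) by the ordered product in (1)."""
    dt = t / r
    U = np.eye(2, dtype=complex)
    for j in range(r):
        U = expm(-1j * dt * H(j * dt)) @ U  # later times act on the left
    return U

U = time_ordered_exp(H, t=1.0)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # unitarity check: True
```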

If $\widetilde{H}(t)$ is another time-dependent Hamiltonian, the evolutions generated by $H(t)$ and $\widetilde{H}(t)$ have distance bounded by the following lemma.

Lemma 1 ($L^1$-norm distance bound of time-ordered evolutions [44, Appendix B]).

Let $H(\tau)$ and $\widetilde{H}(\tau)$ be time-dependent Hamiltonians defined on the interval $[t_1, t_2]$. Then,
\[
\left\| \exp_{\mathcal{T}}\left(-i\int_{t_1}^{t_2} H(\tau)\,\mathrm{d}\tau\right) - \exp_{\mathcal{T}}\left(-i\int_{t_1}^{t_2} \widetilde{H}(\tau)\,\mathrm{d}\tau\right) \right\| \le \int_{t_1}^{t_2} \mathrm{d}\tau\, \big\|H(\tau) - \widetilde{H}(\tau)\big\|. \tag{2}
\]
Here, $\|\cdot\|$ denotes the spectral norm.
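Lemma 1 is easy to check numerically. The sketch below compares the evolutions of two toy single-qubit Hamiltonians against the integrated distance on the right-hand side of (2); both Hamiltonians are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H1 = lambda tau: np.cos(tau) * X + np.sin(tau) * Z  # H(tau)
H2 = lambda tau: np.cos(tau) * X                    # a perturbation of H(tau)

def evolve(H, t, r=2000):
    """Time-ordered evolution via short-time slicing, as in (1)."""
    dt = t / r
    U = np.eye(2, dtype=complex)
    for j in range(r):
        U = expm(-1j * dt * H(j * dt)) @ U
    return U

t, n = 1.0, 2000
lhs = np.linalg.norm(evolve(H1, t) - evolve(H2, t), 2)  # spectral-norm distance
taus = np.linspace(0, t, n, endpoint=False) + t / (2 * n)
rhs = t * np.mean([np.linalg.norm(H1(s) - H2(s), 2) for s in taus])  # midpoint rule
print(lhs, rhs, lhs <= rhs + 1e-6)  # the bound (2) holds
```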

We will abbreviate the evolution operator $\exp_{\mathcal{T}}\big({-i}\int_{t_1}^{t_2} H(\tau)\,\mathrm{d}\tau\big)$ as $E(t_2, t_1)$, writing $E(t) := E(t, 0)$, when there is no ambiguity. In the special case where $H$ is time independent, the evolution only depends on the time duration so we denote $E(t_2, t_1) = E(t_2 - t_1) = e^{-iH(t_2 - t_1)}$. The evolution operator satisfies the differential equation
\[
\frac{\mathrm{d}}{\mathrm{d}t_2}\, E(t_2, t_1) = -iH(t_2)\, E(t_2, t_1), \qquad E(t_1, t_1) = I. \tag{3}
\]

We may further obtain an integral representation of $E(t_2, t_1)$. To this end, we apply the fundamental theorem of calculus to the Schrödinger equation and obtain
\[
E(t_2, t_1) - I = -i\int_{t_1}^{t_2} \mathrm{d}\tau\, H(\tau)\, E(\tau, t_1). \tag{4}
\]

Equivalently, $E(t_2, t_1)$ satisfies the integral equation
\[
E(t_2, t_1) = I - i\int_{t_1}^{t_2} \mathrm{d}\tau\, H(\tau)\, E(\tau, t_1). \tag{5}
\]

For any $t_1 \le t_2 \le t_3$, the evolution operator satisfies the multiplicative property
\[
E(t_3, t_1) = E(t_3, t_2)\, E(t_2, t_1). \tag{6}
\]

The operator $E(t_2, t_1)$ with $t_2 < t_1$ is understood as the inverse evolution operator
\[
E(t_2, t_1) = E(t_1, t_2)^{-1} = E(t_1, t_2)^{\dagger}. \tag{7}
\]

For a thorough mathematical treatment of time-dependent Hamiltonian evolution, we refer the reader to [21]. Finally, the quantum channel corresponding to the unitary evolution $E(t_2, t_1)$ is denoted as $\mathcal{E}(t_2, t_1)$ and is defined by
\[
\mathcal{E}(t_2, t_1)(\rho) = E(t_2, t_1)\, \rho\, E(t_2, t_1)^{\dagger}. \tag{8}
\]
For time-independent Hamiltonians, we denote $\mathcal{E}(t_2, t_1) = \mathcal{E}(t_2 - t_1)$.

2.2 Notation for norms

We introduce norm notation for vectors, matrices, operator-valued functions, and linear maps on the space of matrices.

Let $v = (v_1, \dots, v_n)$ be an $n$-dimensional vector. We use $\|v\|_p$ to represent the vector $p$-norm of $v$. Thus,
\[
\|v\|_p = \bigg(\sum_{j=1}^{n} |v_j|^p\bigg)^{1/p} \ \ (1 \le p < \infty), \qquad \|v\|_\infty = \max_{1 \le j \le n} |v_j|. \tag{9}
\]

For a matrix $A$, we define $\|A\|_p$ to be the Schatten $p$-norm of $A$ [46, 50]. We have
\[
\|A\|_p = \big(\mathrm{Tr}\,|A|^p\big)^{1/p} \ \ (1 \le p < \infty), \qquad \|A\|_\infty = \lim_{p\to\infty} \|A\|_p, \tag{10}
\]
where $|A| = \sqrt{A^{\dagger}A}$. The Schatten $\infty$-norm is the spectral norm, which we abbreviate as $\|A\| := \|A\|_\infty$.

Finally, if $f \colon [0, t] \to \mathbb{C}$ is a continuous function, we use $\|f\|_p$ to mean the $L^p$ norm of the function $f$. Thus,
\[
\|f\|_p = \bigg(\int_0^t \mathrm{d}\tau\, |f(\tau)|^p\bigg)^{1/p} \ \ (1 \le p < \infty), \qquad \|f\|_\infty = \max_{\tau \in [0, t]} |f(\tau)|. \tag{11}
\]

We combine these norms to obtain norms for vector-valued and operator-valued functions. Let $f \colon [0, t] \to \mathbb{C}^n$ be a continuous vector-valued function, with the $j$th coordinate at time $\tau$ denoted $f_j(\tau)$. We use $\|f\|_{p,q}$ to mean that we take the vector $q$-norm for every $\tau$ and compute the $L^p$ norm of the resulting scalar function. For example,
\[
\|f\|_{1,2} = \int_0^t \mathrm{d}\tau\, \bigg(\sum_{j=1}^{n} |f_j(\tau)|^2\bigg)^{1/2}. \tag{12}
\]

Note that $\|f(\tau)\|_q$ is continuous as a function of $\tau$, so $\|f\|_{p,q}$ is well defined and is indeed a norm for vector-valued functions. Similarly, we also define $\|H\|_{p,q}$ for a continuous operator-valued function $H(\tau)$ by taking the Schatten $q$-norm for every $\tau$ and computing the $L^p$ norm of the resulting scalar function. In rare cases, we will also encounter time-dependent linear combinations of operators of the form $H(\tau) = \sum_{\ell=1}^{L} \alpha_\ell(\tau) H_\ell(\tau)$, and we write $\|H\|_{p,q,r}$ to mean that we take the Schatten $r$-norm of each summand, and apply the vector $q$-norm and the $L^p$ norm to the resulting vector-valued function. For example,
\[
\|H\|_{1,1,\infty} = \int_0^t \mathrm{d}\tau\, \sum_{\ell=1}^{L} \big\|\alpha_\ell(\tau)\, H_\ell(\tau)\big\|. \tag{13}
\]

We also define $\|A\|_{\max}$ as the largest matrix element of $A$ in absolute value,
\[
\|A\|_{\max} = \max_{j,k} |A_{jk}|. \tag{14}
\]

The norm $\|\cdot\|_{\max}$ is a vector norm of the matrix elements of $A$ but does not satisfy the submultiplicative property of a matrix norm. For a $d$-sparse matrix $A$, it relates to the spectral norm by the inequality [16, Lemma 1]
\[
\|A\|_{\max} \le \|A\| \le d\, \|A\|_{\max}. \tag{15}
\]

If $H(\tau)$ is a continuous operator-valued function, we interpret $\|H\|_{p,\max}$ in a similar way as above. Therefore,
\[
\|H\|_{1,\max} = \int_0^t \mathrm{d}\tau\, \|H(\tau)\|_{\max}, \qquad \|H\|_{\infty,\max} = \max_{\tau \in [0,t]} \|H(\tau)\|_{\max}. \tag{16}
\]
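To make the distinction between the $L^1$ and the worst-case quantities concrete, the sketch below numerically compares $\int_0^t \mathrm{d}\tau\,\|H(\tau)\|$ with $t\max_\tau\|H(\tau)\|$ for a Hamiltonian whose norm decays in time; the Hamiltonian is an illustrative choice of ours.

```python
import numpy as np
from scipy.integrate import quad

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def H(tau):
    # Norm decays with tau, e.g. a scattering-type interaction dying off.
    return np.exp(-tau) * X + 0.1 * np.sin(tau) * Z

spec = lambda tau: np.linalg.norm(H(tau), 2)  # spectral norm at time tau

t = 10.0
l1_norm, _ = quad(spec, 0, t, limit=200)                   # int_0^t ||H(tau)|| dtau
worst = t * max(spec(s) for s in np.linspace(0, t, 2001))  # t * max_tau ||H(tau)||
print(l1_norm, worst)  # the L^1 norm is several times smaller here
```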

Finally, we define a norm for linear maps on the space of matrices. Let $\mathcal{N}$ be a linear map on the space of matrices on $\mathbb{C}^{d}$. The diamond norm of $\mathcal{N}$ is
\[
\|\mathcal{N}\|_\diamond = \max_{\|A\|_1 \le 1} \big\| (\mathcal{N} \otimes \mathbb{1})(A) \big\|_1, \tag{17}
\]
where the maximization is taken over all matrices $A$ on $\mathbb{C}^{d} \otimes \mathbb{C}^{d}$ satisfying $\|A\|_1 \le 1$. Below is a useful bound on the diamond-norm distance between two unitary channels.

Lemma 2 (Diamond-norm distance between unitary channels [8, Lemma 7]).

Let $U$ and $V$ be unitary matrices, with associated quantum channels $\mathcal{U}(\rho) = U\rho U^{\dagger}$ and $\mathcal{V}(\rho) = V\rho V^{\dagger}$. Then,
\[
\|\mathcal{U} - \mathcal{V}\|_\diamond \le 2\, \|U - V\|. \tag{18}
\]

The sampling-based algorithm (Section 3) produces a quantum channel close to $\mathcal{E}(t)$, and its error is naturally quantified by the diamond-norm distance. Other simulation algorithms such as the Dyson-series approach (Section 4) produce operators that are close to the unitary $E(t)$, and we quantify their error in terms of the spectral norm. For a fair comparison one may instead describe all simulation algorithms using quantum channels and use the diamond-norm distance as the unified error metric. By Lemma 2, we lose at most a factor of $2$ in this conversion.

2.3 Hamiltonian input models

Quantum simulation algorithms may have different performance depending on the choice of the input model of Hamiltonians. In this section, we describe two input models that are extensively used in previous works: the sparse matrix (SM) model and the linear-combination-of-unitaries (LCU) model. We also discuss other input models that will be used in later sections.

Let $H(t)$ be a time-dependent Hamiltonian defined for $t \in [0, T]$. In the SM model, we assume that $H(t)$ is $d$-sparse in the sense that the number of nonzero matrix elements within each row and column throughout the entire interval is at most $d$. We assume that the locations of the nonzero matrix elements are time independent. Access to the Hamiltonian is given through the oracles
\[
O_{\mathrm{val}}\,|t, j, k, z\rangle = |t, j, k, z \oplus H_{jk}(t)\rangle, \qquad O_{\mathrm{loc}}\,|j, s\rangle = |j, \mathrm{col}(j, s)\rangle. \tag{19}
\]
Here, $\mathrm{col}(j, s)$ returns the column index of the $s$th element in the $j$th row that may be nonzero over the entire time interval $[0, T]$. We quantify the complexity of a quantum simulation algorithm by the number of queries it makes to $O_{\mathrm{val}}$ and $O_{\mathrm{loc}}$, together with the number of additional elementary gates it requires. Such a model includes many realistic physical systems and is well motivated from a theoretical perspective [28].

As the following lemma shows, a $d$-sparse time-independent Hamiltonian can be efficiently decomposed as a sum of $1$-sparse terms.

Lemma 3 (Decomposition of sparse Hamiltonians [6, Lemmas 4.3 and 4.4]).

Let $H$ be a time-independent $d$-sparse Hamiltonian accessed through the oracles $O_{\mathrm{val}}$ and $O_{\mathrm{loc}}$. Then

  1. there exists a decomposition $H = \sum_{j=1}^{d^2} H_j$, where each $H_j$ is $1$-sparse with $\|H_j\|_{\max} \le \|H\|_{\max}$, and a query to any $H_j$ can be simulated with $O(1)$ queries to $O_{\mathrm{val}}, O_{\mathrm{loc}}$; and

  2. for any $\gamma > 0$, there exists an approximate decomposition² $\widetilde{H} = \sum_{j=1}^{m} H_j$ with $\|H - \widetilde{H}\|_{\max} \le \gamma$, where $m = O\big(d^2\, \|H\|_{\max}/\gamma\big)$, each $H_j$ is $1$-sparse with eigenvalues $\pm\gamma$, and a query to any $H_j$ can be simulated with $O(1)$ queries to $O_{\mathrm{val}}, O_{\mathrm{loc}}$.

²Reference [6] uses [6, Lemma 4.3] and the triangle inequality to show that $\|H - \widetilde{H}\|_{\max} \le d^2\gamma$. However, this bound can be tightened to $\gamma$, since the max-norm distance depends on the largest error from rounding off the $1$-sparse matrices.

For the LCU model, we suppose that the Hamiltonian admits a decomposition
\[
H(t) = \sum_{\ell=1}^{L} \alpha_\ell(t)\, H_\ell, \tag{20}
\]
where the coefficients $\alpha_\ell(t)$ are continuously differentiable and nonzero everywhere, and the matrices $H_\ell$ are both unitary and Hermitian. We assume that the coefficients $\alpha_\ell(t)$ can be efficiently computed by a classical oracle, and we ignore the classical cost of implementing such an oracle. We further assume that each $H_\ell$ can be implemented with gate complexity $g$, and each $e^{-i\theta H_\ell}$ for an arbitrarily large $\theta$ can be performed with $O(g)$ gates. Such a setting is common in the simulation of condensed matter physics and quantum chemistry. We quantify the complexity of a simulation algorithm by the number of elementary gates it uses.
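As a concrete instance of the LCU input model (20), the sketch below builds a two-qubit Hamiltonian from Pauli terms, which are unitary and Hermitian as required; the particular coefficients are our own illustrative choices.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# H(t) = sum_l alpha_l(t) H_l with unitary, Hermitian summands H_l.
terms = [np.kron(X, X), np.kron(Z, I2), np.kron(I2, Z)]
alphas = [lambda t: 1.0 + 0.5 * np.cos(t),   # alpha_1(t)
          lambda t: np.exp(-t),              # alpha_2(t)
          lambda t: 0.2]                     # alpha_3(t)

def H(t):
    return sum(a(t) * P for a, P in zip(alphas, terms))

# Since each H_l is unitary, the instantaneous coefficient 1-norm
# sum_l |alpha_l(t)| upper bounds the spectral norm ||H(t)||.
t0 = 0.7
assert np.linalg.norm(H(t0), 2) <= sum(abs(a(t0)) for a in alphas) + 1e-9
```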

Quantum simulation algorithms can sometimes work in other input models. For example, the continuous qDRIFT protocol introduced in Section 3 requires only that the Hamiltonians have the form
\[
H(t) = \sum_{\ell=1}^{L} H_\ell(t), \tag{21}
\]
where the Hermitian-valued functions $H_\ell(t)$ are continuous, nonzero everywhere, and can be efficiently exponentiated on a quantum computer. We call this the linear-combination (LC) model. On the other hand, the Dyson-series algorithm can be described in terms of the Select operation
\[
\mathrm{Select}(H) = \sum_{\ell=1}^{L} |\ell\rangle\langle\ell| \otimes H_\ell, \tag{22}
\]
irrespective of how this operation is implemented [30]. We consider the SM and LCU models for all the time-dependent simulation algorithms so that we can give a fair comparison of their complexity.

2.4 Simulation algorithms with $L^1$-norm scaling

We now explain the meaning of $L^1$-norm scaling in the SM and the LCU models. Let $H(t)$ be a time-dependent Hamiltonian defined for $t \in [0, T]$. We say that an algorithm in the SM model simulates $H(t)$ with $L^1$-norm scaling if, given any continuously differentiable upper bound $\Lambda(t) \ge \|H(t)\|_{\max}$, the algorithm has query complexity and gate complexity that scale with $\int_0^T \mathrm{d}t\, \Lambda(t)$ up to logarithmic factors. The norm bound $\Lambda(t)$, together with other auxiliary information, must be accessed by the quantum simulation algorithm; we assume such quantities can be computed efficiently.

In the LCU model, we are given a time-dependent Hamiltonian with the decomposition $H(t) = \sum_{\ell=1}^{L} \alpha_\ell(t)\, H_\ell$. We say that an algorithm has $L^1$-norm scaling if, for any continuously differentiable vector-valued function $\Lambda(t) = (\Lambda_1(t), \dots, \Lambda_L(t))$ with $\Lambda_\ell(t) \ge |\alpha_\ell(t)|$, the algorithm has query and gate complexity that scale with $\int_0^T \mathrm{d}t\, \|\Lambda(t)\|_1$ up to logarithmic factors.

For better readability, we express the complexity of simulation algorithms in terms of the norm of the original Hamiltonian, such as $\|H\|_{1,\max}$ and $\|\alpha\|_{1,1}$, instead of the upper bounds $\Lambda$. We also suppress logarithmic factors using the $\widetilde{O}$ notation when the complexity expression becomes too complicated. Table 1 compares the results of this paper with previous results on simulating time-dependent Hamiltonians.

Algorithms | SM | LCU
Fractional-query [6] | $\widetilde{O}\big(d^2\, \|H\|_{1,\max}\big)$ (query complexity; Section 2.5) | N/A
Dyson-series [7, 37, 30] | $\widetilde{O}\big(d\, t\, \|H\|_{\infty,\max}\big)$ | $\widetilde{O}\big(L\, t\, \|\alpha\|_{\infty,1}\big)$
Continuous qDRIFT (Section 3.3) | $O\big((d^2\, \|H\|_{1,\max})^2/\epsilon\big)$ | $O\big(\|\alpha\|_{1,1}^2/\epsilon\big)$
Rescaled Dyson-series (Section 4.2) | $\widetilde{O}\big(d\, \|H\|_{1,\max}\big)$ | $\widetilde{O}\big(L\, \|\alpha\|_{1,1}\big)$

Table 1: Complexity comparison of previous algorithms (top two) and the algorithms introduced in this paper (bottom two) for simulating time-dependent Hamiltonians. Logarithmic factors are suppressed by the $\widetilde{O}$ notation and the (non-query) gate complexities are compared. The product-formula algorithm of [49] is omitted as its gate complexity scales polynomially with high-order derivatives and is not directly comparable to other algorithms in the table. The Monte Carlo estimation algorithm [41] does not have an explicit complexity analysis and its implementation is streamlined by continuous qDRIFT (see Appendix A). The fractional-query algorithm [6] does not have an explicit implementation for Hamiltonians in the LCU model, and its implementation in the SM model is streamlined by the Dyson-series approach [7, 37, 30].

Our goal is to develop simulation algorithms that scale with the $L^1$-norm with respect to the time variable $\tau$, for both query complexity and gate complexity. We start by reexamining the fractional-query approach. It was mentioned in [6] that this approach can simulate time-dependent Hamiltonians with max-norm scaling, but we find that its query complexity in fact scales with the $L^1$ norm. We give this improved analysis in the next section.

2.5 Query complexity with $L^1$-norm scaling

We begin by reviewing the result of [6] for simulating time-independent Hamiltonians. We assume that the Hamiltonian is given by a linear combination of unitaries $H = \sum_{\ell=1}^{L} \alpha_\ell H_\ell$ with nonnegative coefficients $\alpha_\ell \ge 0$. Here, the $H_\ell$ are both unitary and Hermitian, so they are reflections and their eigenvalues are $\pm 1$.

We say that a quantum operation is a fractional-query algorithm if it is of the form
\[
V_m\, Q^{t_m}\, V_{m-1} \cdots V_1\, Q^{t_1}\, V_0, \tag{23}
\]
where $Q$ is unitary with eigenvalues $\pm 1$, the $V_j$ are unitary operations, and $t_1, \dots, t_m \ge 0$. Here, we regard $Q$ as the oracle and the $V_j$ as non-query operations, so this algorithm has fractional-query complexity $\sum_{j=1}^{m} t_j$. A quantum algorithm that makes $m$ (discrete) queries to $Q$ is a fractional-query algorithm with $t_1 = \cdots = t_m = 1$. Conversely, any fractional-query algorithm can be efficiently simulated in the discrete query model. In particular, an algorithm with fractional-query complexity $T_{\mathrm{fq}}$ can be simulated with error at most $\epsilon$ using $O\big( T_{\mathrm{fq}}\, \frac{\log(T_{\mathrm{fq}}/\epsilon)}{\log\log(T_{\mathrm{fq}}/\epsilon)} \big)$ discrete queries [6, Lemma 3.8].

To apply the fractional-query approach, we approximate the evolution under $H = \sum_{\ell=1}^{L} \alpha_\ell H_\ell$ for time $t$ using the first-order product formula
\[
e^{-iHt} \approx \Big( e^{-i\alpha_1 H_1 t/r} \cdots e^{-i\alpha_L H_L t/r} \Big)^{r}. \tag{24}
\]

Observe that the $H_\ell$ are unitary operations with eigenvalues $\pm 1$, so each $e^{-i\alpha_\ell H_\ell t/r}$ is (up to a phase) a fractional power of $H_\ell$, and (24) can be viewed as a fractional-query algorithm with query complexity $O\big(t \sum_{\ell=1}^{L} \alpha_\ell\big)$, provided that we can make fractional queries to multiple oracles $H_1, \dots, H_L$. This can be realized by a standard fractional-query algorithm accessing the single oracle
\[
Q = \sum_{\ell=1}^{L} |\ell\rangle\langle\ell| \otimes H_\ell \tag{25}
\]
with the same query complexity [6, Theorem 4.1].

To simulate with accuracy $\epsilon$, we set $r = O\big(t^2 (\sum_{\ell} \alpha_\ell)^2/\epsilon\big)$ to ensure that
\[
\Big\| e^{-iHt} - \Big( e^{-i\alpha_1 H_1 t/r} \cdots e^{-i\alpha_L H_L t/r} \Big)^{r} \Big\| \le \epsilon. \tag{26}
\]

We now convert this multi-oracle algorithm to a single-oracle algorithm with the same fractional-query complexity and, with precision $\epsilon$, implement it in the discrete query model. Altogether, this approach makes
\[
O\bigg( t \sum_{\ell=1}^{L} \alpha_\ell \cdot \frac{\log\big(t \sum_{\ell} \alpha_\ell/\epsilon\big)}{\log\log\big(t \sum_{\ell} \alpha_\ell/\epsilon\big)} \bigg) \tag{27}
\]
queries to the operation $Q$.
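The first-order formula (24) is simple to realize numerically. The sketch below checks that its spectral-norm error decays roughly as $1/r$ for a toy two-term Hamiltonian; the operators and coefficients are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# H = sum_l alpha_l H_l with reflections H_l (eigenvalues +/- 1).
alphas = [0.8, 0.3]
Hs = [X, Z]
H = sum(a * P for a, P in zip(alphas, Hs))

def trotter1(t, r):
    """The first-order product formula of (24): r repetitions of the
    ordered product of the L term exponentials."""
    step = np.eye(2, dtype=complex)
    for a, P in zip(alphas, Hs):
        step = expm(-1j * a * P * t / r) @ step
    return np.linalg.matrix_power(step, r)

t = 2.0
for r in (10, 100, 1000):
    print(r, np.linalg.norm(expm(-1j * H * t) - trotter1(t, r), 2))
# the error decreases roughly as 1/r
```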

As mentioned in [6], the fractional-query approach can also be used to simulate time-dependent Hamiltonians by replacing (24) with a product-formula decomposition of the time-ordered evolution. However, [6] only gives a brief discussion of this issue and the claimed complexity has only max-norm scaling. We now give an improved analysis of this algorithm for the SM model, showing that its query complexity achieves $L^1$-norm scaling.

Theorem 4 (Fractional-query algorithm with $L^1$-norm scaling (SM)).

A $d$-sparse time-dependent Hamiltonian $H(\tau)$ acting on $n$ qubits can be simulated for time $t$ with accuracy $\epsilon$ using
\[
O\bigg( d^2\, \|H\|_{1,\max}\, \frac{\log\big(d^2 \|H\|_{1,\max}/\epsilon\big)}{\log\log\big(d^2 \|H\|_{1,\max}/\epsilon\big)} \bigg) \tag{28}
\]
queries to the oracles $O_{\mathrm{val}}$, $O_{\mathrm{loc}}$.

Proof.

For readability, we assume that the exact values of $\|H(\tau)\|_{\max}$, $\|\mathrm{d}H(\tau)/\mathrm{d}\tau\|$, and $\|\mathrm{d}H(\tau)/\mathrm{d}\tau\|_{\max}$ are the norm upper bounds provided to the algorithm. We first decompose $\exp_{\mathcal{T}}\big({-i}\int_0^t H(\tau)\,\mathrm{d}\tau\big)$ into a product of $r$ evolutions of time-independent Hamiltonians $H(jt/r)$, each evolving for time $t/r$. By Lemma 1, we have
\[
\bigg\| \exp_{\mathcal{T}}\bigg({-i}\int_{jt/r}^{(j+1)t/r} H(\tau)\,\mathrm{d}\tau\bigg) - e^{-iH(jt/r)\, t/r} \bigg\| \le \int_{jt/r}^{(j+1)t/r} \mathrm{d}\tau\, \|H(\tau) - H(jt/r)\| \le \frac{t^2}{2r^2}\, \max_{\tau\in[0,t]} \bigg\| \frac{\mathrm{d}H(\tau)}{\mathrm{d}\tau} \bigg\|, \tag{29}
\]
which implies
\[
\bigg\| \exp_{\mathcal{T}}\bigg({-i}\int_0^{t} H(\tau)\,\mathrm{d}\tau\bigg) - \prod_{j=r-1}^{0} e^{-iH(jt/r)\, t/r} \bigg\| \le \frac{t^2}{2r}\, \max_{\tau\in[0,t]} \bigg\| \frac{\mathrm{d}H(\tau)}{\mathrm{d}\tau} \bigg\|. \tag{30}
\]
To approximate with precision $O(\epsilon)$, it suffices to choose
\[
r = O\bigg( \frac{t^2 \max_{\tau\in[0,t]} \|\mathrm{d}H(\tau)/\mathrm{d}\tau\|}{\epsilon} \bigg). \tag{31}
\]

We then decompose the evolution under each time-independent sparse Hamiltonian $H(jt/r)$ for time $t/r$ with precision $O(\epsilon/r)$. By Lemma 3, $H(jt/r)$ can be decomposed into a sum of $m_j$ terms $\widetilde{H}(jt/r) = \sum_{l=1}^{m_j} H_{j,l}$ such that $\|H(jt/r) - \widetilde{H}(jt/r)\|_{\max} \le \gamma$. Furthermore, each $H_{j,l}$ is $1$-sparse Hermitian with eigenvalues $\pm\gamma$ and can be accessed using $O(1)$ queries to $O_{\mathrm{val}}, O_{\mathrm{loc}}$. We choose $\gamma$ so that
\[
\frac{t}{r}\, \big\| H(jt/r) - \widetilde{H}(jt/r) \big\| \le \frac{t}{r}\, d\, \gamma = O\Big( \frac{\epsilon}{r} \Big). \tag{32}
\]
This implies $m_j = O\big( d^2\, \|H(jt/r)\|_{\max}/\gamma \big) = O\big( d^3\, t\, \|H(jt/r)\|_{\max}/\epsilon \big)$ and the fractional query complexity is
\[
m_j\, \gamma\, \frac{t}{r} = O\Big( d^2\, \|H(jt/r)\|_{\max}\, \frac{t}{r} \Big). \tag{33}
\]

We apply the first-order product formula to obtain
\[
\bigg\| e^{-i\widetilde{H}(jt/r)\, t/r} - \prod_{l=1}^{m_j} e^{-iH_{j,l}\, t/r} \bigg\| = O\Big( \big( m_j\, \gamma\, t/r \big)^2 \Big) = O\bigg( \Big( d^2\, \|H(jt/r)\|_{\max}\, \frac{t}{r} \Big)^2 \bigg). \tag{34}
\]
Therefore, it is possible to choose $r$ as
\[
r = O\bigg( \frac{\big( d^2\, t \max_{\tau\in[0,t]} \|H(\tau)\|_{\max} \big)^2}{\epsilon} \bigg) \tag{35}
\]
such that the error of the first-order product-formula decomposition in each segment is at most
\[
O\Big( \frac{\epsilon}{r} \Big). \tag{36}
\]

By choosing $r$ as the maximum of (31) and (35), we ensure that the error in each of the $r$ time steps is $O(\epsilon/r)$, so the total error is $O(\epsilon)$. Altogether, we find a fractional-query algorithm with total query complexity
\[
\sum_{j=0}^{r-1} O\Big( d^2\, \|H(jt/r)\|_{\max}\, \frac{t}{r} \Big) \tag{37}
\]
and error
\[
O(\epsilon). \tag{38}
\]

We now convert this multi-oracle algorithm to a single-oracle algorithm with the same fractional-query complexity and, with precision $\epsilon$, implement it in the discrete query model. Altogether, we have made
\[
O\bigg( d^2 \sum_{j=0}^{r-1} \frac{t}{r}\, \|H(jt/r)\|_{\max}\, \frac{\log\big( d^2 \sum_{j} \frac{t}{r} \|H(jt/r)\|_{\max}/\epsilon \big)}{\log\log\big( d^2 \sum_{j} \frac{t}{r} \|H(jt/r)\|_{\max}/\epsilon \big)} \bigg) \tag{39}
\]
discrete queries.

We now show how the query complexity of this approach achieves $L^1$-norm scaling. The intuition is that the total query complexity should be close to $d^2\, \|H\|_{1,\max} = d^2 \int_0^t \mathrm{d}\tau\, \|H(\tau)\|_{\max}$ when $r$ is sufficiently large. Specifically,
\[
\lim_{r\to\infty} \sum_{j=0}^{r-1} \frac{t}{r}\, \|H(jt/r)\|_{\max} = \int_0^t \mathrm{d}\tau\, \|H(\tau)\|_{\max}. \tag{40}
\]
To achieve an additive error of $\delta$ in this Riemann-sum approximation, it suffices to choose
\[
r = O\bigg( \frac{t^2 \max_{\tau\in[0,t]} \|\mathrm{d}H(\tau)/\mathrm{d}\tau\|_{\max}}{\delta} \bigg). \tag{41}
\]
Since $\sum_{j=0}^{r-1} \frac{t}{r}\, \|H(jt/r)\|_{\max}$ can be made arbitrarily close to $\|H\|_{1,\max}$, we have the total query complexity of
\[
O\bigg( d^2\, \|H\|_{1,\max}\, \frac{\log\big( d^2 \|H\|_{1,\max}/\epsilon \big)}{\log\log\big( d^2 \|H\|_{1,\max}/\epsilon \big)} \bigg) \tag{42}
\]
as claimed. ∎

The above analysis shows that the fractional-query algorithm can simulate time-dependent Hamiltonians with query complexity that scales with the $L^1$ norm. However, further analysis would be required to give a similar bound for the gate complexity. In particular, the time indexing needs to change between each of the $r$ segments and, as such, an explicit implementation with the desired gate complexity is highly nontrivial. Instead, we develop other quantum algorithms that achieve $L^1$-norm scaling for not only the query complexity but also the gate complexity. We employ two main techniques: the continuous qDRIFT sampling protocol (Section 3) and a rescaling principle for the Schrödinger equation (Section 4).

3 Continuous qDRIFT

In this section, we introduce our first technique to achieve $L^1$-norm scaling of the gate complexity for time-dependent Hamiltonian simulation: a classical sampling protocol for time-dependent Hamiltonians. Recently, Campbell proposed a discrete sampling protocol for simulating time-independent Hamiltonians, which he called “qDRIFT” [13]. This approach inspires the design of our protocol, which we call “continuous qDRIFT”. We analyze the performance of this approach in Section 3.1. We show in Section 3.2 that continuous qDRIFT is universal, in the sense that any time-independent Hamiltonian simulable by the algorithm of [13] can be simulated by our protocol. We then discuss the simulation complexity in both the SM and the LCU models in Section 3.3.

The continuous qDRIFT protocol also has similarities with the approach of Poulin et al. [40] based on Hamiltonian averaging and Monte Carlo sampling, although their approach does not have $L^1$-norm scaling. We give a detailed comparison between these two approaches in Appendix A.

3.1 A classical sampler of time-dependent Hamiltonians

Let $H(\tau)$ be a time-dependent Hamiltonian defined for $\tau \in [0, t]$. For this section only, we relax our requirements on the Hamiltonians: we assume that $H(\tau)$ is nonzero everywhere and is continuous except on a finite number of points. We further suppose that each $H(\tau)$ can be directly exponentiated on a quantum computer. The ideal evolution under $H$ for time $t$ is given by $E(t) = \exp_{\mathcal{T}}\big({-i}\int_0^t H(\tau)\,\mathrm{d}\tau\big)$ and the corresponding quantum channel is
\[
\mathcal{E}(t)(\rho) = E(t)\, \rho\, E(t)^{\dagger}. \tag{43}
\]

The high-level idea of the sampling algorithm is to approximate the ideal channel by a mixed unitary channel
\[
\mathcal{U}(t)(\rho) = \int_0^t \mathrm{d}\tau\; p(\tau)\; e^{-iH(\tau)/p(\tau)}\, \rho\, e^{iH(\tau)/p(\tau)}, \tag{44}
\]
where $p(\tau)$ is a probability density function defined for $\tau \in [0, t]$. This channel can be realized by a classical sampling protocol. With a proper choice of $p(\tau)$, this channel approximates the ideal channel and can thus be used for quantum simulation.

We begin with a full definition of $p(\tau)$. Inspired by [13], we choose $p(\tau)$ to be biased toward those $\tau$ with large $\|H(\tau)\|$. A natural choice is
\[
p(\tau) = \frac{\|H(\tau)\|}{\int_0^t \mathrm{d}s\, \|H(s)\|}. \tag{45}
\]
Note that $\mathcal{U}(t)$ is a valid quantum channel (in particular, $p(\tau)$ can never be zero, since $H(\tau)$ is nonzero everywhere). Furthermore, it can be implemented with unit cost: for any input state $\rho$, we randomly sample a value $\tau \in [0, t]$ according to $p(\tau)$ and perform $e^{-iH(\tau)/p(\tau)}$. Note also that $p(\tau)$ in the exponential implicitly depends on $t$. Indeed, $p(\tau)$ includes an integral over time, so it decreases with the total evolution time $t$. We call this classical sampling protocol and the channel it implements “continuous qDRIFT”.
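A minimal classical sketch of one round of continuous qDRIFT follows, discretizing the density (45) on a grid; the example Hamiltonian, grid size, and function names are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def H(tau):
    """Illustrative time-dependent Hamiltonian."""
    return np.exp(-tau) * X + 0.3 * Z

def continuous_qdrift_sample(H, t, grid=4000, rng=np.random.default_rng(0)):
    """Sample tau ~ p(tau) = ||H(tau)|| / int_0^t ||H(s)|| ds on a grid and
    return the continuous-qDRIFT unitary exp(-i H(tau) / p(tau))."""
    taus = np.linspace(0, t, grid, endpoint=False) + t / (2 * grid)
    norms = np.array([np.linalg.norm(H(s), 2) for s in taus])
    l1 = norms.mean() * t                  # int_0^t ||H(s)|| ds, midpoint rule
    tau = rng.choice(taus, p=norms / norms.sum())
    p_tau = np.linalg.norm(H(tau), 2) / l1
    return expm(-1j * H(tau) / p_tau)

# Averaging U rho U^dagger over many samples approximates the channel in (44).
samples = [continuous_qdrift_sample(H, t=0.1) for _ in range(10)]
```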

This protocol assumes that the spectral norm $\|H(\tau)\|$ is known a priori and that we can efficiently sample from the distribution $p(\tau)$. In practice, it is often easier to obtain a spectral-norm upper bound $\Lambda(\tau) \ge \|H(\tau)\|$. Such an upper bound can also be used to implement continuous qDRIFT, provided that it has only finitely many discontinuities. Specifically, we define
\[
q(\tau) = \frac{\Lambda(\tau)}{\int_0^t \mathrm{d}s\, \Lambda(s)}, \tag{46}
\]
so $q(\tau)$ is a probability density function. Using $q(\tau)$ to implement continuous qDRIFT, we obtain the channel
\[
\mathcal{U}_{\Lambda}(t)(\rho) = \int_0^t \mathrm{d}\tau\; q(\tau)\; e^{-iH(\tau)/q(\tau)}\, \rho\, e^{iH(\tau)/q(\tau)}, \tag{47}
\]
whose analysis is similar to that presented below. For readability, we assume that we can efficiently sample from $p(\tau)$ and we analyze the channel $\mathcal{U}(t)$.

We show that continuous qDRIFT approximates the ideal channel with error that depends on the $L^1$-norm.

Theorem 5 ($L^1$-norm error bound for continuous qDRIFT, short-time version).

Let $H(\tau)$ be a time-dependent Hamiltonian defined for $\tau \in [0, t]$; assume it is continuous except on a finite number of points and nonzero everywhere. Define $E(t) = \exp_{\mathcal{T}}\big({-i}\int_0^t H(\tau)\,\mathrm{d}\tau\big)$ and let $\mathcal{E}(t)$ be the corresponding quantum channel. Let $\mathcal{U}(t)$ be the continuous qDRIFT channel
\[
\mathcal{U}(t)(\rho) = \int_0^t \mathrm{d}\tau\; p(\tau)\; e^{-iH(\tau)/p(\tau)}\, \rho\, e^{iH(\tau)/p(\tau)}, \tag{48}
\]
where $p(\tau) = \|H(\tau)\| \big/ \int_0^t \mathrm{d}s\, \|H(s)\|$. Then
\[
\big\| \mathcal{E}(t) - \mathcal{U}(t) \big\|_{\diamond} = O\bigg( \Big( \int_0^t \mathrm{d}\tau\, \|H(\tau)\| \Big)^2 \bigg). \tag{49}
\]

To prove this theorem, we need a formula that computes the rate at which the evolution operator changes when the Hamiltonian is scaled. To illustrate the idea, consider the degenerate case where the Hamiltonian $H$ is time independent. Then the evolution under the rescaled Hamiltonian $sH$ for time $t$ is given by $e^{-isHt}$. A direct calculation shows that
\[
\frac{\mathrm{d}}{\mathrm{d}s}\, e^{-isHt} = -iHt\, e^{-isHt}, \tag{50}
\]
so the rate is $-iHt\, e^{-isHt}$ in the time-independent case. This calculation becomes significantly more complicated for a time-dependent Hamiltonian. The following lemma gives an explicit formula for
\[
\frac{\mathrm{d}}{\mathrm{d}s}\, \exp_{\mathcal{T}}\bigg({-is}\int_0^t H(\tau)\,\mathrm{d}\tau\bigg). \tag{51}
\]

We sketch the proof of this formula for completeness, but refer the reader to [21, p. 35] for mathematical justifications that are beyond the scope of this paper.

Lemma 6 (Hamiltonian scaling).

Let $H(\tau)$ be a time-dependent Hamiltonian defined for $\tau \in [0, t]$ and assume it has finitely many discontinuities. Denote by $E_s(t_2, t_1) := \exp_{\mathcal{T}}\big({-is}\int_{t_1}^{t_2} H(\tau)\,\mathrm{d}\tau\big)$ the evolution under the rescaled Hamiltonian $sH(\tau)$, and write $E_s(t) := E_s(t, 0)$. Then,
\[
\frac{\mathrm{d}}{\mathrm{d}s}\, E_s(t) = -i \int_0^t \mathrm{d}\tau\; E_s(t, \tau)\, H(\tau)\, E_s(\tau, 0). \tag{52}
\]
Proof sketch.

We first consider the special case where $H(\tau)$ is continuous in $\tau$. We invoke the variation-of-parameters formula [31, Theorem 4.9] to construct the claimed integral representation for $\frac{\mathrm{d}}{\mathrm{d}s} E_s(t)$. To this end, we need to find a differential equation satisfied by $\frac{\mathrm{d}}{\mathrm{d}s} E_s(t)$ and the corresponding initial condition $\frac{\mathrm{d}}{\mathrm{d}s} E_s(0)$. We differentiate the Schrödinger equation with respect to $s$ to get
\[
\frac{\mathrm{d}}{\mathrm{d}t}\, \frac{\mathrm{d}}{\mathrm{d}s} E_s(t) = -isH(t)\, \frac{\mathrm{d}}{\mathrm{d}s} E_s(t) - iH(t)\, E_s(t). \tag{53}
\]

Invoking the variation-of-parameters formula, we find an integral representation
\[
\frac{\mathrm{d}}{\mathrm{d}s} E_s(t) = E_s(t, 0)\, \frac{\mathrm{d}}{\mathrm{d}s} E_s(0) - i \int_0^t \mathrm{d}\tau\; E_s(t, \tau)\, H(\tau)\, E_s(\tau, 0). \tag{54}
\]
It thus remains to find the initial condition $\frac{\mathrm{d}}{\mathrm{d}s} E_s(0)$.

We start from the Schrödinger equation and apply the fundamental theorem of calculus with initial condition $E_s(0) = I$, obtaining the integral representation
\[
E_s(t) = I - is \int_0^t \mathrm{d}\tau\, H(\tau)\, E_s(\tau). \tag{55}
\]
Differentiating this equation with respect to $s$ gives
\[
\frac{\mathrm{d}}{\mathrm{d}s} E_s(t) = -i \int_0^t \mathrm{d}\tau\, H(\tau)\, E_s(\tau) - is \int_0^t \mathrm{d}\tau\, H(\tau)\, \frac{\mathrm{d}}{\mathrm{d}s} E_s(\tau), \tag{56}
\]
which implies
\[
\frac{\mathrm{d}}{\mathrm{d}s} E_s(0) = 0. \tag{57}
\]
Combining (54) and (57) establishes the claimed integral representation for $\frac{\mathrm{d}}{\mathrm{d}s} E_s(t)$.

Now consider the case where $H(\tau)$ is piecewise continuous with one discontinuity at $\tau^{\ast} \in (0, t)$. We use the multiplicative property to break the evolution at $\tau^{\ast}$, so that each subevolution is generated by a continuous Hamiltonian. We have
\[
\frac{\mathrm{d}}{\mathrm{d}s} E_s(t, 0) = \frac{\mathrm{d}}{\mathrm{d}s}\big[ E_s(t, \tau^{\ast})\, E_s(\tau^{\ast}, 0) \big] = \Big( \frac{\mathrm{d}}{\mathrm{d}s} E_s(t, \tau^{\ast}) \Big) E_s(\tau^{\ast}, 0) + E_s(t, \tau^{\ast}) \Big( \frac{\mathrm{d}}{\mathrm{d}s} E_s(\tau^{\ast}, 0) \Big), \tag{58}
\]
and applying the continuous case to each factor reproduces (52). The general case of finitely many discontinuities follows by induction. ∎
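As a sanity check on (52), the sketch below compares a centered finite difference in $s$ against the integral formula for a toy single-qubit Hamiltonian; the discretizations and names are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda tau: np.cos(tau) * X + np.sin(tau) * Z

def E(s, t2, t1, r=400):
    """Evolution under the rescaled Hamiltonian s*H from t1 to t2."""
    dt = (t2 - t1) / r
    U = np.eye(2, dtype=complex)
    for j in range(r):
        U = expm(-1j * dt * s * H(t1 + j * dt)) @ U
    return U

s, t, ds = 1.0, 1.0, 1e-5
fd = (E(s + ds, t, 0) - E(s - ds, t, 0)) / (2 * ds)  # finite difference in s

# Right-hand side of (52): -i int_0^t E_s(t,tau) H(tau) E_s(tau,0) dtau.
n = 200
taus = np.linspace(0, t, n, endpoint=False) + t / (2 * n)
rhs = -1j * (t / n) * sum(E(s, t, u, r=100) @ H(u) @ E(s, u, 0, r=100)
                          for u in taus)
print(np.linalg.norm(fd - rhs, 2))  # small residual, consistent with (52)
```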

Note that our argument implicitly assumes the existence of the derivatives involved and that we can interchange the order of differentiation and integration.