I Block encoding
When discussing the number of queries to an oracle we do not distinguish between the oracle and its controlled version. The asymptotic notations $\mathcal{O}$, $\Omega$ are used for the limit $\kappa\to\infty$ and $\epsilon\to 0$. We use $\widetilde{\mathcal{O}}$ to mean $\mathcal{O}$ multiplied by a polylogarithmic factor. Sometimes we do not distinguish between the different ways of measuring the error, e.g. in terms of fidelity or the 2-norm distance of density matrices, since the query complexity is logarithmic in the error under either definition. Floating-point arithmetic is assumed to be exact for conciseness. If floating-point error is taken into account, this only leads to a logarithmic multiplicative overhead in the number of primitive gates, and a logarithmic additive overhead in the number of qubits needed.
The technique of block-encoding has recently been discussed extensively GilyenSuLowEtAl2019; LowChuang2019. Here we discuss how to construct the block-encodings used in eigenstate filtering, as well as those used in the QLSP, in particular in the Hamiltonian simulation of the AQC. We first introduce a simple technique that we need to use repeatedly.
Given , a block-encoding of where , we want to construct a block-encoding of for some . This is in fact a special case of the linear combination of unitaries (LCU) technique introduced in ChildsKothariSomma2017. Let
and . Since , we have
where . Therefore Fig. S1 gives a block-encoding of .
Hence we may construct a block-encoding of . We remark that here we do not need the phase shift gate since . This is at the same time a block-encoding of .
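As a concrete numerical illustration of the one-ancilla LCU circuit described above, the following sketch (the helper name `lcu_block_encoding` is ours, not the paper's) prepares the ancilla with nonnegative weights $c_0$, $c_1$, applies the select operator, and unprepares, so that the top-left block of the resulting unitary is $(c_0 U_0 + c_1 U_1)/(c_0+c_1)$. Taking $U_1 = I$ block-encodes the shifted operator; since both weights are positive, no extra phase gate is needed, matching the remark above.

```python
import numpy as np

def lcu_block_encoding(U0, U1, c0, c1):
    """One-ancilla LCU: returns a unitary whose top-left block equals
    (c0*U0 + c1*U1) / (c0 + c1).  Helper name is ours, not the paper's."""
    n = U0.shape[0]
    a = np.sqrt(np.array([c0, c1]) / (c0 + c1))
    # Prepare unitary V with first column (sqrt(c0), sqrt(c1)) / sqrt(c0 + c1)
    V = np.array([[a[0], -a[1]],
                  [a[1],  a[0]]])
    select = np.block([[U0, np.zeros((n, n))],
                       [np.zeros((n, n)), U1]])
    return np.kron(V.conj().T, np.eye(n)) @ select @ np.kron(V, np.eye(n))

# Averaging a unitary with the identity block-encodes (U + I)/2:
X = np.array([[0., 1.], [1., 0.]])
W = lcu_block_encoding(X, np.eye(2), 1.0, 1.0)
```

The same helper also realizes the $(I + R)/2$ construction used for the reflection operator below.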
Now we construct a block-encoding of with . Let
be the reflection operator about the hyperplane orthogonal to
. Then is the reflection about the hyperplane orthogonal to . Note that . Therefore we can use the technique illustrated in Fig. S1 to construct a block-encoding of . Here . Since , we naturally obtain a block-encoding of . We denote this block-encoding as .
For the block-encoding of , first note that
From the block-encoding of , we can construct the block-encoding of controlled by replacing all gates with their controlled counterparts. The block matrix in the middle is . For a sparse matrix , we have a block-encoding of , and therefore we obtain a block-encoding of . Then we can use the result for the product of block-encoded matrices (GilyenSuLowEtAl2019, Lemma 30) to obtain a block-encoding of , denoted as .
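The product rule (GilyenSuLowEtAl2019, Lemma 30) can be checked numerically. In the sketch below (helper names are ours; one-qubit ancillas and exact encodings are assumed for simplicity), two Hermitian contractions are block-encoded via the standard unitary dilation, and composing the two encodings, each acting on its own ancilla, block-encodes the matrix product:

```python
import numpy as np

def dilation(A):
    """(1,1,0)-block-encoding of a Hermitian contraction A via the standard
    unitary dilation [[A, S], [S, -A]] with S = sqrt(I - A^2)."""
    w, V = np.linalg.eigh(A)
    S = (V * np.sqrt(1.0 - w**2)) @ V.conj().T
    return np.block([[A, S], [S, -A]])

def product_encoding(UA, UB, n):
    """Compose block-encodings on separate ancillas (register order:
    ancilla_A, ancilla_B, system); the top-left n x n block becomes A @ B."""
    TA = UA.reshape(2, n, 2, n)
    # Lift UA to act on (ancilla_A, system) with identity on ancilla_B.
    liftA = np.einsum('ipjq,kl->ikpjlq', TA, np.eye(2)).reshape(4 * n, 4 * n)
    liftB = np.kron(np.eye(2), UB)          # identity on ancilla_A
    return liftA @ liftB

n = 2
A = np.diag([0.3, -0.5])
B = np.array([[0.1, 0.2], [0.2, -0.4]])
W = product_encoding(dilation(A), dilation(B), n)
```

The two ancilla qubits add, in line with the $(\alpha\beta)$-normalization and ancilla accounting of Lemma 30 (here both normalizations are 1).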
II Gate-based implementation of time-optimal adiabatic quantum computing
Consider the adiabatic evolution
where for , with and defined in (2). It is proved in AnLin2019 that the gap between and the rest of the eigenvalues of is bounded from below by . With this bound it is proved that, in order to obtain an approximate solution of the QLSP for a positive definite , we need to run the evolution for time using the optimal scheduling (AnLin2019, Theorem 1).
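For intuition, the optimal schedule can be computed numerically. The sketch below is ours, not the paper's: it integrates an AQC(p)-type schedule in which $s(f)$ is proportional to the integral of $\Delta_*(f)^{-p}$, assuming the positive-definite gap lower bound $\Delta_*(f) = 1 - f + f/\kappa$ of AnLin2019; the normalization fixes $s(1)=1$. The evolution slows down precisely where the gap bound is small.

```python
import numpy as np

def aqc_schedule(kappa, p=1.5, grid=100001):
    """Numerical AQC(p)-type schedule: s(f) proportional to the integral of
    Delta_*(f)^(-p), with the gap bound Delta_*(f) = 1 - f + f/kappa
    (assumption following AnLin2019; the discretization is ours)."""
    f = np.linspace(0.0, 1.0, grid)
    w = (1.0 - f + f / kappa) ** (-p)      # slow down where the gap is small
    s = np.concatenate([[0.0], np.cumsum((w[1:] + w[:-1]) / 2.0)])
    return s / s[-1], f                     # normalized so that s(1) = 1

s, f = aqc_schedule(kappa=100.0)
```

Inverting $s(f)$ by interpolation gives the schedule $f(s)$; for large $\kappa$ most of the evolution time is spent near $f=1$.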
In order to carry out the AQC efficiently using a gate-based implementation, we use the recently developed time-dependent Hamiltonian simulation method based on the truncated Dyson series, introduced in LowWiebe2018. In Hamiltonian simulation, several types of input models for the Hamiltonian are in use. Hamiltonians can be input as a linear combination of unitaries BerryChildsCleveEtAl2015, through their sparsity structure AharonovTaShma2003; LowChuang2017, or through a block-encoding LowChuang2019; LowWiebe2018. For a time-dependent Hamiltonian, Low and Wiebe designed an input model based on block-encoding, named HAM-T (LowWiebe2018, Definition 2), which is a block-encoding of the Hamiltonian evaluated at the discretized time steps.
In the gate-based implementation of the time-optimal AQC, we construct HAM-T as in Fig. S2. We need to use the block-encodings and introduced in the previous section. We denote by and the numbers of ancilla qubits used in the two block-encodings. We know that and . Our construction of HAM-T satisfies
(S1) 
for any .
In this unitary HAM-T we also need the unitary
(S2) 
to compute the scheduling function needed in the time-optimal AQC, and the unitaries
(S3)  
where . Here is used for preparing the linear combination . Without , the circuit would be a block-encoding of , but with it becomes a block-encoding, so that the normalizing factor is time-independent, as is required by the input model of LowWiebe2018.
For the AQC with a positive definite matrix we have and . For the indefinite case we have and .
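The time-independent normalization can be illustrated numerically. The sketch below is our own construction, not the exact circuit of Fig. S2: $H_0$ and $H_1$ are Hermitian contractions with exact one-qubit block-encodings, and a third select branch, whose unitary ($\sigma_x$ on the encoding ancilla) block-encodes the zero matrix, absorbs the leftover ancilla weight. The top-left block is then $((1-f)H_0 + f H_1)/2$ with a normalizing factor independent of $f$.

```python
import numpy as np

def dilation(H):
    """Exact one-qubit block-encoding of a Hermitian contraction H."""
    w, V = np.linalg.eigh(H)
    S = (V * np.sqrt(1.0 - w**2)) @ V.conj().T
    return np.block([[H, S], [S, -H]])

def prepare(v):
    """Real unitary whose first column is the unit vector v (QR completion)."""
    M = np.eye(len(v)); M[:, 0] = v
    Q, R = np.linalg.qr(M)
    return Q * np.sign(np.diag(R))          # fix column signs so Q[:, 0] = v

def ham_t_slice(H0, H1, f):
    """Block-encode ((1-f)*H0 + f*H1)/2 with an f-independent factor of 2."""
    n = H0.shape[0]
    U = [dilation(H0), dilation(H1),
         np.kron(np.array([[0., 1.], [1., 0.]]), np.eye(n)),  # encodes 0
         np.eye(2 * n)]                                        # unused branch
    v = np.sqrt(np.array([(1.0 - f) / 2, f / 2, 0.5, 0.0]))
    V = prepare(v)
    sel = np.zeros((8 * n, 8 * n))
    for i in range(4):
        sel[i * 2 * n:(i + 1) * 2 * n, i * 2 * n:(i + 1) * 2 * n] = U[i]
    return np.kron(V.T, np.eye(2 * n)) @ sel @ np.kron(V, np.eye(2 * n))

H0 = np.diag([0.6, -0.2])
H1 = np.array([[0.0, 0.5], [0.5, 0.1]])
```

Without the third branch, the normalization would be $(1-f)+f$-weighted and hence time-dependent; the dump branch keeps it constant, as required by the HAM-T input model.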
Following Corollary 4 of LowWiebe2018, we may analyze the different components of the cost of the Hamiltonian simulation of the AQC. For the time evolution from to , HAM-T is a block-encoding of . With the scheduling function given in AnLin2019 we have and . We choose , and by Theorem 1 of AnLin2019 we have . We only need to simulate up to constant precision, and therefore we can set . The costs are then
- Queries to HAM-T: ;
- Qubits: ;
- Primitive gates: .
III The matrix dilation method
In order to extend the time-optimal AQC method to Hermitian indefinite matrices, we follow (AnLin2019, Theorem 2), where and are given by
(S4)  
Here and . The dimension of the dilated matrices is . The lower bound for the gap of then becomes SubasiSommaOrsucci2019. The initial state is , and the goal is to obtain . After running the AQC we can remove the second qubit by measuring it in the basis and accepting the outcome corresponding to . The resulting query complexity remains unchanged. We remark that the matrix dilation here is only needed for the AQC. The eigenstate filtering procedure can still be applied to the original matrix of dimension .
For a general matrix, we may first consider an extended linear system. Define the adjoint QLSP as , and consider an extended QLSP in dimension , where
Here is a Hermitian matrix of dimension with condition number and , and solves the extended QLSP. Therefore the time-optimal AQC can be applied to the Hermitian matrix to prepare approximations of and simultaneously. The dimension of the corresponding dilated matrices is . Again, the matrix dilation method of Eq. (S4) is not needed for the eigenstate filtering step.
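The extension can be sanity-checked numerically. The sketch below uses the standard Hermitian extension $\begin{pmatrix} 0 & A \\ A^\dagger & 0\end{pmatrix}$ with the (assumed, unnormalized) right-hand side $(b, b)$, so that the two halves of the solution solve the adjoint and the original systems; the exact normalizations and equation numbering of the text are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Hermitian extension of a general (non-Hermitian) A.  With right-hand side
# (b, b), the top half of the solution solves the adjoint QLSP A^dag y = b
# and the bottom half solves the original QLSP A x = b.
frak_A = np.block([[np.zeros((N, N)), A],
                   [A.conj().T, np.zeros((N, N))]])
frak_y = np.linalg.solve(frak_A, np.concatenate([b, b]))
```

Since the eigenvalues of the extended matrix come in pairs $\pm\sigma_i(A)$, its condition number equals that of $A$, consistent with the claim in the text.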
IV Optimal Chebyshev filtering polynomial
Lemma 5.
For any satisfying for all , for all .
Proof.
We prove by contradiction. If there exists such that for all and there exists such that , then letting , we want to show has at least distinct zeros.
First note that there exist such that , and . Therefore there exist such that , and . In other words, maps each and to , and the mapping is bijective for each interval. Because , there exists for each such that . Therefore and give us distinct zeros. Another zero can be found at as . Therefore there are distinct zeros.
However, is of degree at most . This shows . This is clearly impossible since . ∎
Therefore any , solves the minimax problem
This implies (i) of Lemma 2. To prove (ii), we need to use the following lemma:
Lemma 6.
Let $T_\ell$ be the $\ell$-th Chebyshev polynomial; then
for .
Proof.
The Chebyshev polynomial can be rewritten as for . Let , then . The choice of does not change the value of , so we choose . Since for , we have . Thus . ∎
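A quantitative consequence of this argument can be checked numerically. Our hedged reading of the bound is the standard Chebyshev growth estimate $T_\ell(1+\delta) \ge \tfrac{1}{2}(1+\sqrt{2\delta})^\ell$, which follows from $T_\ell(\cosh t) = \cosh(\ell t) \ge e^{\ell t}/2$ and $\mathrm{arccosh}(1+\delta) \ge \log(1+\sqrt{2\delta})$; the exact constants in the lemma may differ.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Verify T_l(1 + delta) >= (1 + sqrt(2*delta))**l / 2 on a grid of degrees
# and shifts; this is an assumed quantitative form of the growth bound.
ok = all(
    Chebyshev.basis(l)(1.0 + d) >= 0.5 * (1.0 + np.sqrt(2.0 * d)) ** l
    for l in range(1, 25)
    for d in np.linspace(1e-4, 1.0, 40)
)
```

The exponential growth in $\ell\sqrt{\delta}$ is what makes the Chebyshev filter converge with $\ell$ scaling only like the inverse gap, rather than its square.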
We use this lemma to prove (ii). Since , when , we have . Thus by the above lemma we have . Since for , we obtain the inequality in (ii). (iii) follows directly from the monotonicity of Chebyshev polynomials outside the interval $[-1,1]$.
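As an illustration of properties (i)-(iii), the following sketch constructs the shifted-and-rescaled Chebyshev filter $R(x) = T_\ell(g(x))/T_\ell(g(0))$ with $g(x) = -1 + 2(x^2-\Delta^2)/(1-\Delta^2)$ (our notation, which we believe matches the Lemma 2 polynomial up to rescaling) and checks that $R(0)=1$ while $|R|$ stays uniformly small on $\Delta \le |x| \le 1$:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def cheb_filter(ell, Delta):
    """R(x) = T_ell(g(x)) / T_ell(g(0)), where g maps [Delta, 1] onto [-1, 1]
    (so |T_ell| <= 1 there) and pushes x = 0 outside [-1, 1], where T_ell
    grows fast.  Notation is ours."""
    T = Chebyshev.basis(ell)
    g = lambda x: -1.0 + 2.0 * (x**2 - Delta**2) / (1.0 - Delta**2)
    return lambda x: T(g(x)) / T(g(0.0))

ell, Delta = 16, 0.1
R = cheb_filter(ell, Delta)
xs = np.linspace(Delta, 1.0, 2001)
```

Since $R$ is even, the same suppression holds on $[-1,-\Delta]$; the attained value $1/T_\ell(1+2\Delta^2/(1-\Delta^2))$ is exactly the minimax value identified by Lemma 5.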