A Quantum IP Predictor-Corrector Algorithm for Linear Programming

02/18/2019, by P. A. M. Casares et al., Universidad Complutense de Madrid

We introduce a new quantum optimization algorithm for Linear Programming (LP) problems based on Interior Point (IP) Predictor-Corrector (PC) methods, whose worst-case time complexity is O(√(n) L s^3 k ϵ^-1 ϵ_s^-1). This represents a quantum speed-up in the number n of variables in the cost function with respect to the comparable classical Interior Point (IP) algorithms, which behave as O((n+m)√(nk) L s^3 ϵ^-1 ϵ_s^-1) or O(√(n)(n+m)L) depending on the technique employed, where m is the number of constraints and the rest of the variables are defined in the introduction. The average time complexity of our algorithm is O(√(n) s^3 k ϵ^-1 ϵ_s^-1), which matches the behaviour in n of quantum Semidefinite Programming (SDP) algorithms based on multiplicative weight methods when restricted to LP problems, and greatly improves on their dependence on the precision ϵ^-1. Unlike the quantum SDP algorithms, the quantum PC algorithm does not depend on size parameters of the primal and dual LP problems (R, r), and it outputs a feasible and optimal solution whenever it exists.


I Introduction

Linear Programming (LP) problems are among the most basic optimization problems Nering-Tucker ; Padberg ; Murty . Applications abound on both the personal and professional fronts: improving a project delivery, scheduling tasks, analyzing supply chain operations, optimizing shelf space, and designing better strategies and logistics in general. LP is also used in Machine Learning, where Supervised Learning works on the basics of linear programming: a system is trained to fit a mathematical model of an objective (cost) function from labeled input data, so that it can later predict values for unknown test data Russell-Norvig ; MRT . More specifically, linear programming is a method to achieve the best outcome, such as maximum profit or lowest cost, in a mathematical model whose requirements are represented by linear relationships known as constraints. Semidefinite Programming (SDP) is an extension of LP in which the objective or cost function is formulated with a non-diagonal matrix and the constraints contain more general inequalities Vandernberghe-Boyd ; Todd ; Laurent-Rendl ; deKlerk .
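To fix ideas, the following minimal sketch (illustrative only; the data A, b, c are arbitrary toy values, and scipy's linprog is used merely to display the problem format of (2) below) solves a small LP instance:

import numpy as np
from scipy.optimize import linprog

# Toy LP: min c^T x subject to A x = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 0.5, 1.0]])           # m = 2 constraints, n = 3 variables
b = np.array([4.0, 3.0])
c = np.array([1.0, 2.0, 0.5])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")
print(res.x, res.fun)                     # optimal point and optimal cost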

We are in the era of small quantum computers with reduced computational capabilities due to noisy physical qubits Preskill ; Ions ; SCQ1 ; SCQ2 . The challenge of surpassing the power of current and foreseeable classical computers is attracting a lot of attention in academia and in technological companies Preskill2 ; Aaronson-Arkhipov . This motivates the endeavour of searching for new quantum algorithms beyond the standard ones that spurred the field of quantum computation in the mid 90s (Shor, Grover, etc.) Shor ; Grover ; Nielsen-Chuang ; GMD . Only recently have SDP problems been given a quantization procedure, by Brandão and Svore, providing us with the first quantum advantage for these optimization problems Brandao-Svore ; QuantumSDP1 ; QuantumSDP2 ; QuantumSDP3 ; QuantumSDP4 .

The development of methods to solve LP problems has a long tradition, starting with the Simplex Method Murty . Interior Point (IP) methods stand out among the large variety of available methods Potra-Wright . In turn, IP methods represent a whole class of solving strategies for LP optimization. Among them, the Predictor-Corrector method Predictor-CorrectorI ; Predictor-Corrector is arguably one of the best procedures to achieve an extremely well-behaved solution. Here we present a quantum algorithm that relies on the quantization of a Predictor-Corrector method. One important feature of our quantum IP algorithm is that it is a hybrid algorithm: partially classical, partially quantum. This feature has become very common: a similar situation occurs with the Brandão-Svore algorithm in SDP, with the Quantum Eigen-Solver for quantum chemistry Aspuru ; Mezzacapo ; Solano ; Innsbruck ; ReviewQCh , and with many others. The core of this quantum IP algorithm relies on the quantization of a crucial step of the Predictor-Corrector method by means of the HHL quantum algorithm for solving linear systems of equations HHL ; more precisely, by means of a Quantum Linear System Algorithm (QLSA) QLSAchilds that has improved features with respect to the original HHL algorithm. In order to apply the QLSA in the context of LP, we have to address several caveats, since its straightforward application is doomed to failure.

The quantum IP algorithm we propose benefits from several fundamental properties inherited from the classical Predictor-Corrector algorithm, and has a better performance than other classical IP algorithms. In particular Predictor-Corrector :

  1. The Predictor-Corrector method can solve the LP problem without assuming the existence of feasible or optimal solutions.

  2. If LP has a solution, the loop of this interior point algorithm approaches feasibility and optimality at the same time, and if the problem is infeasible or unbounded the algorithm detects infeasibility for either the primal or the dual problem.

  3. The algorithm can start from any point near the center of the positive orthant, and does not use any big-M penalty or lower bound (except, in our case, in the first iteration, as we will see).

The notions of feasible and optimal solutions, etc., are defined in Sec. II, where a self-contained review of the Predictor-Corrector method is presented.

The time complexity of the proposed quantum IP algorithm is O(√(n) L s^3 k ϵ^-1 ϵ_s^-1), where n is the number of variables of the cost function, m is the number of constraints, L is the size of the encoded data (see eq. (1)), s is the sparsity of the matrix of constraints, k is a preconditioning parameter (an upper bound on the condition number of the preconditioned matrix) to be specified later, ϵ is the precision of the algorithm, and ϵ_s is a precision coming from a sparsification procedure that we will explain afterwards. This represents a quantum speed-up in n with respect to the best comparable classical IP algorithms: preconditioned conjugate gradient descent, with efficiency O((n+m)√(nk) L s^3 ϵ^-1 ϵ_s^-1), or the customary Cholesky decomposition, with efficiency O(√(n)(n+m)L) numerical_recipies ; Potra-Wright ; Parallel_cholesky . The corresponding space complexities are compared in Table I.

On the other hand, the average time complexity of the quantum IP algorithm is O(√(n) s^3 k ϵ^-1 ϵ_s^-1), which equals the efficiency of the quantum SDP algorithm of Brandao-Svore when restricted to LP problems. A precise comparison among different classical and quantum algorithms can be found in Table I.

It is worth mentioning that our quantization approach to LP problems is radically different from the method of Brandão and Svore, and this comes with several benefits. Namely, the problem of quantizing linear programming using multiplicative weight methods Arora-Kale , as in Brandao-Svore , is that they yield an efficiency depending on size parameters R and r of the primal and dual problems. In fact, these might depend on the size of the cost function, so the real time complexity of the algorithm remains hidden. Moreover, and generically, unless otherwise specified, these parameters cannot be computed beforehand, but only after running the algorithm. Thus, the real efficiency of the quantum algorithm is masked by overhead prefactors behaving badly in R and r. This situation is clearly not satisfactory, and quantization methods with a clean quantum advantage, like the ones proposed here, are desirable. The quantum IP algorithm also has a better behaviour in the precision ϵ, as compared to the strong power-like behaviour of even the most recent improvement of the Brandão-Svore algorithm QuantumSDP3 (although we should also take into account the precision ϵ_s coming from the sparsification process). The space complexities of the classical and quantum IP algorithms are likewise compared in Table I.

Next, we present a more detailed description of our main results and the structure of the paper.

I.1 Results

This article combines the Predictor-Corrector algorithm Predictor-Corrector with the Quantum Linear System Algorithm (QLSA) QLSAchilds , with the aim of obtaining a hybrid (quantum-classical) interior point algorithm for linear programming that runs on average as O(√(n) s^3 k ϵ^-1 ϵ_s^-1), which is optimal in n, as explained in Brandao-Svore . The major price to pay is in space complexity, in the sense of classical parallelism.

The main constraint in using this algorithm is that the matrix A and the vectors b and c from (2) and (3) must be sparse. This is because one way to control the condition number of the Quantum Linear System Algorithm is to use a preconditioner (the Sparse Approximate Inverse SPAI ), whose complexity is cubic in the sparsity parameter s (the maximum number of non-zero entries per row).

The dependence on the other parameters is comparatively rather good. We obtain a factor O(ϵ^-1 ϵ_s^-1) coming from the sparsification procedure and from the Amplitude Estimation algorithm Amplitude_estimation used in the readout procedure of the QLSA. The sparsification may also lead to numerical precision problems in the first iteration of the algorithm, due to some terms of size O(ϵ_s). The dependence on ϵ is worse than in comparable classical algorithms, where one would get O(log ϵ^-1), but other quantum algorithms do not reach this bound either, and are even worse.

Our algorithm inherits the nice properties of the Predictor-Corrector algorithm, since we have successfully implemented the QLSA at the core of solving the various linear systems of equations appearing in this classical Interior Point algorithm.

I.2 Structure of the paper

The paper has two main sections. The first reviews the Predictor-Corrector algorithm from Predictor-Corrector . It is itself divided into subsections where we explain how to initialize the algorithm, the main loop, and the termination.

In the second section we explain the changes we carry out in order to use the QLSA from QLSAchilds . In particular, we start with two subsections discussing the condition number and the sparsity. Then we focus on how to prepare the initial quantum states for the QLSA, following SPAI-QLSA , and how to read out the results using Amplitude Estimation Amplitude_estimation . Finally, we explain the QLSA itself, comment on the possibility of quantizing the termination of the algorithm, and devote one subsection to the complexity of the overall algorithm and its comparison with other alternatives.

We recommend that the reader look at the algorithm boxes, where we describe the precise process step by step. There are also informative figures depicting the flow of the algorithm. Since they can help in understanding the overall process, we recommend studying them before going into the details of each subsection.

II The Predictor-Corrector algorithm

In this section we review the Predictor-Corrector algorithm of Mizuno, Todd and Ye for solving Linear Programming problems Predictor-Corrector . As stated in the original article, we will see that it performs O(√(n) L) iterations of the main loop in the worst-case scenario, where n is the number of variables and L the length of the encoding of the input data:

(1) L := Σ_{i,j} ⌈log_2(|a_ij| + 1) + 1⌉ + Σ_i ⌈log_2(|b_i| + 1) + 1⌉ + Σ_j ⌈log_2(|c_j| + 1) + 1⌉.

Note that the smallest value L can take is nm + n + m. However, in the average case the number of iterations will not depend on L, but will rather be O(√(n)) Average_performance .

II.1 Initialization

The linear programming problem we want to solve is denoted as (LP): Given A ∈ R^{m×n}, b ∈ R^m and c ∈ R^n, find x ∈ R^n such that:

(2a) min_x c^T x
(2b) subject to: Ax = b, x ≥ 0.

If the matrix A fulfills some weak requirements (the duality gap being 0), its dual problem has the same solution. The dual problem is (LD): find y ∈ R^m such that

(3a) max_y b^T y
(3b) subject to: A^T y ≤ c.

It is common to define a slack (dual) variable associated with the constraint (3b). However, in order to avoid confusion with the sparsity s of the matrix, we will call it ζ (zeta), in similarity with the notation Z sometimes used in semidefinite programming.

To solve the previous problem, we set up another problem, artificial or auxiliary, which is homogeneous (in the sense that there is a single non-zero constraint) and self-dual (its dual problem is itself). This allows us to apply primal-dual interior-point algorithms without doubling the dimension of the linear system solved at each iteration. Therefore, given any x^0 > 0, ζ^0 > 0, and y^0, formulate (HLP):

(4a) min ((x^0)^T ζ^0 + 1) θ
such that (x ≥ 0, τ ≥ 0):
(4b) Ax − bτ + b̄θ = 0
(4c) −A^T y + cτ − c̄θ ≥ 0
(4d) b^T y − c^T x + z̄θ ≥ 0
(4e) −b̄^T y + c̄^T x − z̄τ = −((x^0)^T ζ^0 + 1)

with

(5) b̄ := b − Ax^0, c̄ := c − A^T y^0 − ζ^0, z̄ := c^T x^0 + 1 − b^T y^0.

The constraint (4e) is used to impose self-duality. It is also important to remark that b̄, c̄ and z̄ indicate the infeasibility of the initial primal point, the infeasibility of the initial dual point, and the dual gap, respectively.

Recall also that we use slack variables to convert inequality constraints into equality constraints. Those slack variables indicate the amount by which the original constraint deviates from an equality. As we have two inequality constraints, we define slack variables ζ for (4c) and κ for (4d):

(6) ζ := cτ − A^T y − c̄θ ≥ 0
(7) κ := b^T y − c^T x + z̄θ ≥ 0

This implies that we can rewrite (4e) as

(8) x^T ζ + τκ = ((x^0)^T ζ^0 + 1) θ.

Once we have defined these variables, Theorem 2 of Predictor-Corrector indicates that

  1. The dual (HLD) of (HLP) is again (HLP).

  2. We can check that a feasible solution to this new system is:

    (9) (y, x, τ, θ, ζ, κ) = (y^0, x^0, 1, 1, ζ^0, 1).
  3. There is an optimal solution (y*, x*, τ*, θ* = 0, ζ*, κ*) such that

    (10) x* + ζ* > 0, τ* + κ* > 0,

    which we call strictly self-complementary. Self-complementary indicates that it solves (HLP), and strictly indicates that the inequalities are fulfilled strictly (with > rather than ≥).

Therefore, we can choose any point fulfilling (9) as a feasible starting point for our algorithm. A particularly simple one (sketched in code below) would be

(11) x^0 = ζ^0 = e = (1, …, 1)^T, y^0 = 0, so that (y, x, τ, θ, ζ, κ) = (0, e, 1, 1, e, 1).
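A minimal sketch of this initialization, assuming the all-ones starting point of (11):

import numpy as np

def initial_point(n, m):
    # Feasible interior start for (HLP): x^0 = zeta^0 = e, y^0 = 0,
    # tau = theta = kappa = 1 (the dual slack is called zeta here to avoid
    # clashing with the sparsity s).
    x = np.ones(n)
    y = np.zeros(m)
    zeta = np.ones(n)
    tau, theta, kappa = 1.0, 1.0, 1.0
    return y, x, tau, theta, zeta, kappa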

II.2 Main loop

In this section we explain how to set up an iterative method that allows us to get close to the optimal point, following a path along the interior of the feasible region. The original references are Predictor-CorrectorI ; Predictor-Corrector . To define that path, theorem 5 of Predictor-Corrector indicates that

  • For each μ > 0, there is a unique point (y, x, τ, θ, ζ, κ) ∈ F^0, where F^0 is the set of strictly feasible points of (HLP), such that

    (12) (Xζ, τκ) = μe,

    where X := diag(x) and e is the all-ones vector. We will later see that we also use the analogous notation Z := diag(ζ).

  • Let (d_y, d_x, d_τ, d_θ, d_ζ, d_κ) be a point in the null space of the constraint matrix of (HLP), with variables (y, x, τ, θ) and slacks (ζ, κ), i.e.

    (13a) A d_x − b d_τ + b̄ d_θ = 0
    (13b) −A^T d_y + c d_τ − c̄ d_θ − d_ζ = 0
    (13c) b^T d_y − c^T d_x + z̄ d_θ − d_κ = 0
    (13d) −b̄^T d_y + c̄^T d_x − z̄ d_τ = 0

    Then

    (14) d_x^T d_ζ + d_τ d_κ = 0.

This theorem defines the following path in (HLP)

(15) C := {(y, x, τ, θ, ζ, κ) ∈ F^0 : (Xζ, τκ) = μ(x, ζ, τ, κ) e}, with μ(x, ζ, τ, κ) := (x^T ζ + τκ)/(n + 1),

and its neighbourhood

(16) N(β) := {(y, x, τ, θ, ζ, κ) ∈ F^0 : ‖(Xζ, τκ) − μe‖ ≤ βμ}.

In consequence, the algorithm goes as follows: starting from an interior feasible point, consider the following system of equations for the directions (d_y, d_x, d_τ, d_θ) and the slack updates (d_ζ, d_κ):

(17) (d_y, d_x, d_τ, d_θ, d_ζ, d_κ) lies in the null space (13) of the constraint matrix of (HLP),
(18) X d_ζ + Z d_x = γμe − Xζ, τ d_κ + κ d_τ = γμ − τκ,

which can be written as M u = f, i.e.

(19) M u = f, with u := (d_y, d_x, d_τ, d_θ, d_ζ, d_κ)^T and f := (0, 0, 0, 0, γμe − Xζ, γμ − τκ)^T, where M stacks the constraint rows (13a)-(13d) on top of the linearized centrality rows (18); its upper-left block, in the variables (y, x, τ, θ), is antisymmetric.

perform the following steps iteratively:

Predictor step: Solve (19) with γ = 0 for the previously calculated point (y, x, τ, θ, ζ, κ) ∈ N(1/4). Then find the biggest step length δ such that

(20) (y, x, τ, θ, ζ, κ) + δ (d_y, d_x, d_τ, d_θ, d_ζ, d_κ)

is in N(1/2), and update the values accordingly.

Corrector step: Solve (19) with γ = 1 for the point resulting from the predictor step, and set

(21) (y, x, τ, θ, ζ, κ) ← (y, x, τ, θ, ζ, κ) + (d_y, d_x, d_τ, d_θ, d_ζ, d_κ),

which will be back in N(1/4). A toy sketch of the whole loop follows.
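To make the structure of the loop concrete, here is a self-contained toy sketch of a predictor-corrector iteration, written for the standard-form pair (2)-(3) rather than for (HLP), so the Newton system below is a simplified stand-in for (19); the 0.9 step-size margin and the iteration cap are ad-hoc illustrative choices:

import numpy as np

def newton_direction(A, x, y, z, gamma):
    # Newton system for the standard-form pair: A dx = 0, A^T dy + dz = 0,
    # Z dx + X dz = gamma*mu*e - X Z e (cf. (17)-(19) for the (HLP) version).
    m, n = A.shape
    mu = (x @ z) / n
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(z),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([np.zeros(m), np.zeros(n), gamma * mu * np.ones(n) - x * z])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]

def max_step(v, dv):
    # Largest step in (0, 1] keeping v + step*dv strictly positive (0.9 margin).
    neg = dv < 0
    return 1.0 if not neg.any() else min(1.0, 0.9 * np.min(-v[neg] / dv[neg]))

def predictor_corrector(A, b, c, x, y, z, tol=1e-9, max_iter=100):
    for _ in range(max_iter):
        if x @ z < tol:
            break
        dx, dy, dz = newton_direction(A, x, y, z, gamma=0.0)   # predictor
        a = min(max_step(x, dx), max_step(z, dz))
        x, y, z = x + a * dx, y + a * dy, z + a * dz
        dx, dy, dz = newton_direction(A, x, y, z, gamma=1.0)   # corrector
        a = min(max_step(x, dx), max_step(z, dz))
        x, y, z = x + a * dx, y + a * dy, z + a * dz
    return x, y, z

# Toy usage: min x1 + 2*x2 s.t. x1 + x2 = 2, x >= 0 (optimum at x = (2, 0)).
A, b, c = np.array([[1.0, 1.0]]), np.array([2.0]), np.array([1.0, 2.0])
x, y, z = predictor_corrector(A, b, c, np.ones(2), np.zeros(1), c.copy())
print(x)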

II.3 Termination

The loop from the previous section will run until one of the following two criteria is fulfilled: for small numbers ϵ_1, ϵ_2 > 0, either

(22) (x/τ)^T (ζ/τ) ≤ ϵ_1

or

(23) τ ≤ ϵ_2.

We will have to iterate up to O(√(n) L) times, with ϵ_1, ϵ_2 = 2^{−O(L)}.

If the termination is due to condition (23), then we know that there is no solution within the prescribed bounds. We will then consider, following Predictor-Corrector , that either (LP) or (LD) is infeasible or unbounded.

However, if termination is due to (22), denote by σ the index set σ := {i : x_i* ≥ ζ_i*}. Let B denote the columns of M whose index is in σ, and C the rest.

Case 1: If τ* ≥ κ*, solve for

(24a)
such that
(24b)

Case 2: If τ* < κ*, we solve for x* and y* from

(25a)
such that
(25b)

The result of either of these two calculations will be the output of our algorithm, and the estimate of the solution of the (HLP) problem. In particular, x* and y* will be the values calculated in the corresponding least-squares projection, and ζ* will be recovered from them.

III The quantum algorithm

The aim of this section is to explain how the Quantum Linear System Algorithm (QLSA) can help us run this algorithm efficiently, in the same spirit in which FEM applied it to the Finite Element Method. This is due to the fact that solving (19) is clearly the most computationally expensive part of each step. We will use the following result (algorithm):

Theorem 1 QLSAchilds : Let M be an N × N Hermitian matrix (if the matrix is not Hermitian, it can be included as a submatrix of a Hermitian one) with condition number k (the ratio between the highest and smallest eigenvalues, for positive definite matrices), and with sparsity s (at most s nonzero entries in each row). The usual notation for the condition number is κ, but since we want to avoid confusion with the slack variable κ from the Predictor-Corrector algorithm, we will call the former k.

Let f be an N-dimensional unit vector, and assume that there is an oracle which produces the state |f⟩, and another which, taking (r, i) as input, outputs the location and value of the i'th nonzero entry in row r of M. Let

(26) |x⟩ := M^{-1}|f⟩ / ‖M^{-1}|f⟩‖.

Then, QLSAchilds constructs an algorithm relying on Hamiltonian simulation that outputs the state |x⟩ up to accuracy ϵ, with bounded probability of failure, and makes

(27) O(k √(log(k/ϵ)))

uses of the oracle for f and

(28) O(k s polylog(k s/ϵ))

of the oracle for M; and has overall time complexity

(29) O(k s polylog(k s/ϵ) log N).

In our case the variable N is the size of the matrix of (19), that is, N = O(n + m), so the time complexity of running their proposed algorithm is O(k s polylog(k s/ϵ) log n), considering that m = O(n).

Using the QLSA is not straightforward, so let us first look into the parameters involved: the condition number and the sparsity. The sparsity will play an important role in the Hamiltonian simulation required by the QLSA, while the condition number will appear when we try to implement a linear combination of unitaries that approximates M^{-1}, as we shall see.

III.1 The condition number k

We have seen that the QLSA is linear in k. We also know, from Ambainis , that this dependence is optimal. Therefore, it is important to check that k does not depend (except perhaps polylogarithmically) on n.

The main problem, however, is that we should be able to control k at each of the iterations, in order to be able to calculate the overall complexity beforehand. This is not easy, since the matrix of (19) is updated each time we solve the system. For that reason we will use the Sparse Approximate Inverse (SPAI) algorithm SPAI , as suggested in SPAI-QLSA , to precondition the system and control k. Other preconditioning methods not discussed here might also be possible.

Let us follow SPAI . The procedure consists in finding a matrix N that minimizes, in the Frobenius norm,

(30) ‖MN − 1‖_F.

This problem can be separated into independent least squares problems, one per column n_k of N,

(31) min_{n̂_k} ‖M̂ n̂_k − ê_k‖_2,

to be solved in parallel, where the hat means that columns and rows with all zeros have been removed, and ê_k is the corresponding unit vector (a Kronecker delta between the row index and k).

The key of the algorithm lies in choosing the sparsity pattern of N. SPAI is a recursive algorithm itself, so in the first place this can be done by taking N with the same sparsity pattern as M. In subsequent applications of the preconditioning algorithm, once we have found the best pattern for N, we can reuse that pattern in the rest of the iterations of the main loop of the Predictor-Corrector algorithm, which will make it run faster.

This means solving O(n + m) independent least squares problems in parallel, with an iteration procedure repeated a constant number of times, with s_r and s_c the row and column sparsity respectively. Note that it is customary to refer to the row sparsity as the sparsity, but due to the (anti)symmetry of the matrix M, s_r = s_c = s. Each least squares problem can be solved in O(s^3) operations, with O(s^2) calls to the oracle of the matrix M, plus the additional overhead coming from the iteration procedure of SPAI.
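A dense toy sketch of this column-by-column construction (the fixed sparsity pattern and the dense linear algebra are simplifying assumptions; real SPAI works on the reduced matrices of (31) and may enlarge the pattern adaptively):

import numpy as np

def spai_sketch(M):
    # One independent least-squares problem per column of N, restricted to
    # the sparsity pattern of M itself; the problems are parallelizable.
    dim = M.shape[0]
    N = np.zeros_like(M)
    pattern = M != 0
    for col in range(dim):
        J = np.nonzero(pattern[:, col])[0]               # allowed nonzeros of n_col
        I = np.nonzero((M[:, J] != 0).any(axis=1))[0]    # rows those columns touch
        e_hat = (I == col).astype(float)                 # reduced unit vector
        n_hat, *_ = np.linalg.lstsq(M[np.ix_(I, J)], e_hat, rcond=None)
        N[J, col] = n_hat
    return N

M = np.eye(4) + np.diag([0.5, 0.5, 0.5], k=1)            # sparse toy matrix
N = spai_sketch(M)
print(np.linalg.cond(M @ N), "vs", np.linalg.cond(M))    # preconditioned vs original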

Now define the residues

(32) r_k := M n_k − e_k.

Making every r_k = 0 can be done by inverting the matrix, so that N = M^{-1}. The classical complexity of inverting the matrix is prohibitive (it is another way of obtaining that complexity bound), so we can fix a more modest objective of attaining

(33) ‖r_k‖_2 ≤ ϵ_r < 1 for all k.

If so, we have a theorem SPAI indicating that if δ := √(N) ϵ_r < 1, then the condition number is bounded as

(34) k(MN) ≤ (1 + δ)/(1 − δ).

Since in QLSAchilds the complexity dependence on s and k is the same (linear), we want to find out which ϵ_r is necessary in order to have

(35) k(MN) = O(1).

That means

(36) ϵ_r = O(N^{-1/2}),

so if we attain it, we can take k = O(1).

The preconditioning will be carried out classically. Overall, the time complexity of SPAI is O(s^3) and the space complexity is O(n + m), since we need to solve the O(n + m) systems of equations in parallel.
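The following toy check illustrates the bound (34) numerically (sizes and noise level are arbitrary illustrative choices):

import numpy as np

# Column residuals of size eps_r give ||MN - 1||_F <= sqrt(dim)*eps_r =: delta;
# for delta < 1 the singular values of MN lie in [1 - delta, 1 + delta],
# which bounds the condition number k(MN).
rng = np.random.default_rng(2)
dim = 40
M = np.eye(dim) + 0.05 * rng.standard_normal((dim, dim))
N = np.linalg.inv(M) + 1e-4 * rng.standard_normal((dim, dim))  # imperfect inverse

R = M @ N - np.eye(dim)
eps_r = np.linalg.norm(R, axis=0).max()      # worst column residual, as in (33)
delta = np.sqrt(dim) * eps_r
assert delta < 1
print(np.linalg.cond(M @ N), "<=", (1 + delta) / (1 - delta))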

III.2 The sparsity s

In the previous section we have seen that the sparsity plays an important role: in the QLSA, the time complexity is linear in this parameter, whereas in the preconditioning it is cubic.

Luckily, the sparsity of the system does not vary when we update (19). However, we need to ensure that from the start it is a sparse system. Obviously, A needs to be sparse; otherwise we might be able to employ the QLSA for dense matrices DenseHHL , but not the one that we have been using, the QLSA for sparse systems QLSAchilds . For the time being, let us suppose that A is in fact sparse. What else do we need? Since b̄, c̄ and z̄ also appear in the matrix M, it is important that those vectors are sparse too.

Let us assume then that both b and c are sparse. Otherwise we might want to sparsify our matrix, using for example the sparsifier given in arora_sparsifier , but it would be of little help, since the resulting sparsity would still be larger than what we require, so it would not be a good procedure. If b̄ is not sparse, we can make b̄ = 0 by solving the linear system b = A x^0 for the variable x^0 (see (5)). This trick will not be available for c̄ though, because ζ^0 must be strictly positive, so we will abandon this procedure in favour of a more powerful one.

The strategy we use is setting x^0 = ζ^0 = ϵ_s e. We can take y^0 = 0 to make things simpler. The idea is dropping all those ϵ_s factors (because they are small) in b̄ and c̄ (check (5)). In particular, b̄ = b − ϵ_s Ae and c̄ = c − ϵ_s e. Let us now calculate the error we incur when we effectively make b̄ → b and c̄ → c.

First recall that we define

(37) ‖ΔM‖ := max_{‖v‖ = 1} ‖ΔM v‖_2, with ΔM := M_s − M,

where M_s is the sparsified matrix. The difference in this case between the sparsified matrix and the original matrix is a set of terms of size ϵ_s in one column, and those same terms in the corresponding row. It can be calculated that expression (37) for our matrix will be maximized by a vector with all (relevant) entries equal except the last one, which can take any value.

It is then easy to see that ‖ΔM‖ = O(ϵ_s). In brief, we can choose x^0 = ζ^0 = ϵ_s e and y^0 = 0, and sparsify our matrix setting b̄ → b and c̄ → c. We have seen that then ‖M_s − M‖ = O(ϵ_s).

Let us finally see that this causes an error up to O(ϵ_s) when solving the linear system of equations. Suppose that for any f, we have an x such that Mx = f. If instead of M we use M_s = M + ΔM, then, by (37),

(38) M_s x = f + ΔM x, with ‖ΔM x‖ ≤ ϵ_s ‖x‖.

Applying M_s^{-1},

(39) x = M_s^{-1} f + M_s^{-1} ΔM x = x_s + M_s^{-1} ΔM x,

and we obtain

(40) ‖x_s − x‖ ≤ ‖M_s^{-1}‖ ‖ΔM‖ ‖x‖ = O(ϵ_s ‖x‖),

where the subindex s will be used to indicate that the error is associated with the sparsifying procedure. Therefore, we can see that the complexity in the precision of calculating the inverse is also bounded by O(ϵ_s^{-1}).
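A toy numerical check of the chain (38)-(40), with a random perturbation playing the role of the sparsification error (sizes are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n = 50
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
eps_s = 1e-4

dM = rng.standard_normal((n, n))
dM *= eps_s / np.linalg.norm(dM, 2)       # ||dM||_2 = eps_s, as in (37)
Ms = M + dM

x = np.linalg.solve(M, f)                 # exact solution
x_s = np.linalg.solve(Ms, f)              # solution of the sparsified system
bound = np.linalg.norm(np.linalg.inv(Ms), 2) * eps_s * np.linalg.norm(x)
print(np.linalg.norm(x_s - x), "<=", bound)   # the bound (40) holds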

Using the previously explained procedure may lead to numerical representation problems in the first iteration, due to the small ϵ_s terms, but they will be solved once the first iteration has been completed. We must nevertheless take into account that this problem does not affect only the quantum algorithm, since any classical algorithm will have a dependence on ϵ_s^{-1} too, and the preconditioning will need it as well.

Finally, it might be possible to bypass the sparsity problem using DenseHHL , although we have not explored that option here, and it would worsen the dependence on n of the overall algorithm. It would also require a different preconditioning procedure. Using our procedure requires A, b and c to be sparse.

III.3 Quantum state preparation

If we want to use the QLSA QLSAchilds , we need to be able to prepare the quantum state |f⟩ and, after the procedure, read out the solution. We will follow the procedure of SPAI-QLSA to carry out the former, and we explain it in this section.

The main problem with preparing a quantum state is that no efficient procedure for arbitrary quantum state preparation is known SPAI-QLSA . However, it is possible to prepare a quantum state of the form

(41) |ψ_f⟩ = (1/√(N)) Σ_i |i⟩ (√(1 − |f_i|²)|0⟩ + f_i|1⟩),

where the component with the ancilla in |1⟩ is a linear combination proportional to |f⟩, with |f_i| ≤ 1.

For preparing it, we will use algorithm 1 from SPAI-QLSA , which has constant query complexity, and the only condition it imposes is that we need an oracle that efficiently computes |f_i| for each i, and similarly another one computing the phase φ_i of f_i, in superposition.

1:procedure Quantum state preparation
2:     Direct preparation of an arbitrary quantum state is not usually possible. Instead, consider creating the state
(42) |ψ_f⟩ = (1/√(N)) Σ_{i=1}^{N} |i⟩ (√(1 − |f_i|²)|0⟩ + f_i|1⟩).
This can be done as follows SPAI-QLSA .
3:     Initialize
(43) (1/√(N)) Σ_{i=1}^{N} |i⟩ |0⟩ |0⟩ |0⟩_a.
4:     Query the oracle that calculates the amplitude and phase of the entries of the vector f,
(44) f_i = |f_i| e^{iφ_i},
controlled by the first register. Then
(45) (1/√(N)) Σ_i |i⟩ ||f_i|⟩ |φ_i⟩ |0⟩_a.
5:     Apply a controlled phase to |0⟩_a, controlled by |φ_i⟩.
6:     Apply a rotation to the same ancilla qubit, controlled by ||f_i|⟩. The state is now
(46) (1/√(N)) Σ_i |i⟩ ||f_i|⟩ |φ_i⟩ (√(1 − |f_i|²)|0⟩ + f_i|1⟩),
where f_i = e^{iφ_i} |f_i|.
7:     Uncompute registers 2 and 3 with the inverse oracles. The result is
(47) |ψ_f⟩ = (1/√(N)) Σ_i |i⟩ (√(1 − |f_i|²)|0⟩ + f_i|1⟩).
This means that
(48) (1 ⊗ ⟨1|)|ψ_f⟩ = (1/√(N)) Σ_i f_i |i⟩ ∝ |f⟩.
Algorithm 1 State preparation.

Once we have this state, we could in principle prepare several copies and postselect on the ancilla being measured in the state |1⟩. However, this last step will not be necessary: we will instead carry out the computations on the full state, and we will see how to take care of the ancilla being in the right state during the readout procedure.
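The identity (47)-(48) can be checked numerically with a classical sketch of the amplitudes (no quantum library involved; f is random illustrative data):

import numpy as np

rng = np.random.default_rng(1)
N = 8
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
f /= np.max(np.abs(f))                    # ensure |f_i| <= 1, as the oracle assumes

anc0 = np.sqrt(1 - np.abs(f) ** 2) / np.sqrt(N)    # amplitudes on |i>|0>
anc1 = f / np.sqrt(N)                              # amplitudes on |i>|1>
print(np.sum(np.abs(anc0)**2 + np.abs(anc1)**2))   # = 1: a valid state
good = anc1 / np.linalg.norm(anc1)                 # postselect ancilla = |1>
print(np.allclose(good, f / np.linalg.norm(f)))    # recovers |f> exactly
print(np.linalg.norm(anc1) ** 2)                   # success probability ||f||^2/N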

Figure 1: Quantum state preparation procedure. This procedure allows one to create the state |ψ_f⟩ of (42).

The explicit process is explained in algorithm 1, and depicted in figure 1.

III.4 Readout of the solution of the QLSA: Amplitude Estimation

In the same way that we need a procedure to prepare the quantum state that feeds into the QLSA, we need some way to read out the information in |x⟩, defined as in equation (26). For this, first remember that we are only interested in the amplitudes of those terms of the result that have the preparation ancilla in the state |1⟩, and the ancillas of the QLSA in the state |0⟩, as defined in algorithm 3.

We could in principle use a result from SPAI-QLSA that explains how to calculate the inner product of the solution with any vector. However, in our case we will read out a single entry of the solution vector. As the procedure to calculate the inner product involves performing Amplitude Estimation several times, it is simpler and faster to use Amplitude Estimation Amplitude_estimation directly to estimate the absolute value of the amplitude of each component of the solution vector. The procedure is explicitly specified in algorithm 2, and a circuit representing Amplitude Estimation is indicated in figure 2. The sign of the amplitudes is discussed afterwards.

1:procedure State readout: Amplitude estimation
2:     The procedure of amplitude amplification Amplitude_estimation is a generalization of Grover's search algorithm Grover to the case where an algorithm A produces a state A|0⟩ = √(1−a)|ψ_0⟩ + √(a)|ψ_1⟩, and we wish to amplify the 'good' component |ψ_1⟩. We do that by recursively applying to A|0⟩ the operator
(49) Q := −A S_0 A^{-1} S_χ,
where S_0 and S_χ flip the sign of |0⟩ and of the good states, respectively, and
(50) a = sin²(θ_a),
so that after j applications of Q the good-state amplitude is sin((2j+1)θ_a). Note that the period of this amplitude is therefore approximately π/θ_a. The procedure of amplitude estimation consists in estimating θ_a using the Fourier transform. Recall that, in our case, A will be the concatenation of algorithms 1 and 3. They are unitary (have an inverse) since no measurements are performed at any step. Finally, S_χ is defined according to the states of the ancillas we wish to measure, and the element of the basis we want to obtain.
3:     Initialize two registers of equal size, in the state |0⟩ ⊗ A|0⟩.
4:     Apply the Fourier transform to the first register (if we are working with power-of-2 registers, one can perform Hadamard gates instead):
(51) |0⟩ ↦ (1/√(M)) Σ_{y=0}^{M−1} |y⟩.
5:     Apply the following action:
(52) Λ(Q): |y⟩|ψ⟩ ↦ |y⟩ Q^y |ψ⟩.
6:     Apply the inverse Fourier transform to the first register (or a Hadamard transform if the registers have size a power of 2).
7:     Measure the first register, and let y be the result.
8:     Output ã := sin²(π y/M).
Algorithm 2 State readout: Amplitude estimation.
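As a sanity check of the readout accuracy, the following sketch evaluates the estimate of step 8 at the most likely measurement outcome, showing the O(1/M) (hence O(ϵ^-1)) behaviour; the amplitude value is arbitrary:

import numpy as np

def ae_estimate(a, M):
    theta = np.arcsin(np.sqrt(a))        # a = sin^2(theta), theta in [0, pi/2]
    y = round(M * theta / np.pi)         # most likely measurement outcome
    return np.sin(np.pi * y / M) ** 2

a = 0.3172
for M in [2**6, 2**10, 2**14]:
    print(M, abs(ae_estimate(a, M) - a))  # error shrinks roughly like 1/M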

The reader might be surprised, since in order to perform the Predictor-Corrector algorithm we need the full solution vector, not just one of its components. The solution to this problem consists in classically parallelizing the entire procedure, so that the time complexity remains the same, whereas the space complexity scales with the dimension of the system, the same as for the preconditioning procedure. Put another way: in parallel, we will solve the same system of equations O(n + m) times (specifically, as many times as the dimension of (19)) and read out one element of the solution vector from each copy of the solution.

The only negative side of using this procedure is that Amplitude Estimation has a time complexity of O(ϵ^-1), instead of the O(log ϵ^-1) we would have wished for. Unfortunately, we are not aware of any procedure that could allow us to read out the state faster, and in principle this procedure for Amplitude Estimation is optimal Amplitude_estimation . Since Amplitude Estimation requires O(ϵ^-1) iterations, and in each of those we get a sparsification error (i.e., from making b̄ → b and c̄ → c) proportional to ϵ_s, the total complexity of the procedure is O(ϵ^-1 ϵ_s^-1).

Figure 2: Circuit representation of Amplitude Estimation.

There is one more thing we should do: find out the sign of each entry of the solution vector, since amplitude estimation only estimates the absolute value of the amplitude. We propose the following method: we check the relative sign of the amplitudes against each other, and later on calculate the global sign correction classically, checking whether one entry fulfills the original system (19) or is off by a sign (this is the same procedure used to correctly scale the solution vector).

To derive the relative sign between the amplitudes of any two entries of the solution vector x (x_i and x_j, for instance), we can encode the state (|i⟩ + |j⟩)/√2, with the needed normalization constant, using algorithm 1. Then we can calculate, with the procedure explained in SPAI-QLSA , the quantities |⟨x|(|i⟩ + |j⟩)/√2|², which will be (|x_i| + |x_j|)²/2 when the relative sign is the same, or (|x_i| − |x_j|)²/2 if the relative signs are opposite. One can establish the relative sign of all the entries of the solution vector (and therefore the solution up to a global sign) with the previous procedure in classical parallelism, which amounts to a total space overhead that we already had, and the same time complexity we already incurred when calculating the amplitudes.
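A classical toy sketch of this sign test (the probe state and the threshold comparison are illustrative assumptions; on the quantum side, the overlap would itself be obtained via Amplitude Estimation):

import numpy as np

x = np.array([0.6, -0.8])                     # normalized solution, signs unknown
for i, j in [(0, 1)]:
    probe = np.zeros_like(x)
    probe[[i, j]] = 1 / np.sqrt(2)            # the state (|i> + |j>)/sqrt(2)
    overlap_sq = (x @ probe) ** 2             # the measurable quantity
    same = (abs(x[i]) + abs(x[j])) ** 2 / 2   # expected value if signs agree
    diff = (abs(x[i]) - abs(x[j])) ** 2 / 2   # expected value if signs differ
    print("signs agree" if abs(overlap_sq - same) < abs(overlap_sq - diff)
          else "signs differ")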

III.5 Quantum Linear System Algorithm (QLSA)

Let us now explain the heart of our construction: the QLSA of QLSAchilds . In particular, we will use the Fourier version since, even if it has a slightly worse scaling in polylogarithmic terms, its Hamiltonian simulation procedure can be used as a black box and is more generally applicable. Needless to say, it can be substituted by the Chebyshev polynomial approach whenever useful.

1:procedure QLSA
2:     Here we explain the algorithm of QLSAchilds using the Fourier approach, which is more generally applicable since it uses Hamiltonian simulation as an oracle subroutine.
3:     Fourier decomposition of 1/x:
4:     Define ϵ, fixed, positive and arbitrarily small.
5:     Define Δ_y := Θ(ϵ/√(log(k/ϵ))) and Δ_z := Θ((k√(log(k/ϵ)))^{-1}), for large J := Θ((k/ϵ) log(k/ϵ)) and K := Θ(k log(k/ϵ)).
6:     Define y_j := jΔ_y for j = 0, …, J−1.
7:     Define z_l := lΔ_z for l = −K, …, K, and the domain D_k := [−1, −1/k] ∪ [1/k, 1].
8:     Define, for M a sparse Hermitian operator,
(53) U_{j,l} := e^{−i M y_j z_l}.
9:     Define
(54) h(M) := (i/√(2π)) Σ_{j=0}^{J−1} Δ_y Σ_{l=−K}^{K} Δ_z z_l e^{−z_l²/2} U_{j,l}.
10:     One may prove (Lemma 11 of QLSAchilds ) that h(x) is ϵ-close to 1/x in the domain D_k.
11:     Implementing the combination of unitaries (Lemmas 6, 7, and Corollary 10 of QLSAchilds ):
12:     Define α_{j,l} := (Δ_y Δ_z/√(2π)) |z_l| e^{−z_l²/2}, the coefficients of the combination (54), up to phases that are absorbed into the unitaries.
13:     Define the operator U such that U|j, l⟩|ψ⟩ = |j, l⟩ U_{j,l}|ψ⟩.
14:     Define W := (B† ⊗ 1) U (B ⊗ 1), where B is such that
(55) B|0⟩ = (1/√(‖α‖_1)) Σ_{j,l} √(α_{j,l}) |j, l⟩.
Whenever each U_{j,l} is unitary, W implements the linear combination Σ_{j,l} α_{j,l} U_{j,l} on the |0⟩-ancilla branch.
15:     Define P_0 := (|0⟩⟨0|) ⊗ 1.
16:     When we apply W we get
(56) W|0⟩|ψ⟩ = (1/‖α‖_1)|0⟩ h(M)|ψ⟩ + |Ψ^⊥⟩,
such that P_0|Ψ^⊥⟩ = 0. Additionally, the first term (the one we are interested in) is ϵ-close to (1/‖α‖_1)|0⟩ M^{-1}|ψ⟩.
Algorithm 3 QLSA.
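A numerical sketch of the Fourier decomposition behind steps 3-10 follows (the grid parameters below are ad-hoc illustrative choices, not the Θ(·) values of QLSAchilds ):

import numpy as np

def h(x, dy=0.05, Y=60.0, dz=0.05, Z=6.0):
    # Discretized version of
    #   1/x = (i/sqrt(2*pi)) Int_0^inf dy Int_R dz z exp(-z^2/2) exp(-i*x*y*z),
    # cf. (53)-(54); exact in the continuum limit for x != 0.
    y = np.arange(0.0, Y, dy)
    z = np.arange(-Z, Z, dz)
    inner = (z * np.exp(-z**2 / 2) * np.exp(-1j * x * np.outer(y, z))).sum(axis=1) * dz
    return (1j / np.sqrt(2 * np.pi)) * inner.sum() * dy

for x in [0.1, 0.5, -0.25, 1.0]:
    print(x, np.round(h(x), 4), 1 / x)   # h(x) tracks 1/x on its domain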

The standard procedure at the end of the QLSA explained here would be to perform amplitude amplification on the amplitude appearing in front of the term we are interested in in (56) of algorithm 3. Note, from steps 4 and 5, that this would multiply the complexity by the cost of preparing the state B|0⟩ in step 14 of algorithm 3, which is much bigger than that of implementing U in step 13 QLSAchilds . However, in order to avoid having to take this additional factor into account, and without making the algorithm more complicated with the trick of Variable-Time Amplitude Amplification Ambainis , we pass on the result without amplification, as is done in SPAI-QLSA with the original HHL algorithm HHL . Note also that we cannot use Variable-Time Amplitude Amplification because it contains a step involving measurements, incompatible with Amplitude Estimation. In brief: there is no need to perform amplitude amplification, nor a way to do it efficiently (Variable-Time Amplitude Amplification would not be unitary, as required).

We can see that in algorithm 3, in contrast to what happens in SPAI-QLSA , the final complexity in the precision is O(ϵ^-1) instead of O(ϵ^-2), since the procedure of QLSAchilds allows us to avoid Phase Estimation. This does not take into account the error due to the sparsification, which adds the additional factor O(ϵ_s^-1).

Taking into account all the points previously explained, we describe the QLSA step-by-step in algorithm 3.

III.6 On quantizing the termination

If there exists a feasible and optimal solution, we have seen that the loop should terminate with either procedure (24) or (25). However, it is unclear how to carry out this minimization quantumly. What we know is that it can be efficiently calculated classically, or substituted by any more modern and efficient method, if found.

The cost of carrying out this termination by classical procedures should not be too big. In fact, according to ye_termination , the overall cost is around that of one iteration of the main loop.

However, we can also propose a quantum method to finish. It would consist in using a small Grover subroutine Grover to find all solutions of (24b) or (25b) in a small neighbourhood of the latest calculated point. After that, without reading out the state, one could apply GroverMin to find the one with the smallest distance to the calculated point, as in (24a) or (25a). In any case this should be no problem, and should be computable efficiently.

III.7 Complexity

Algorithms for LP | Worst-case time complexity | Space complexity | Average time complexity
Multiplicative weights QuantumSDP3 | | |
Pred-Corr. Predictor-Corrector + Grad. des. conjugate_gradient | O((n+m)√(nk) L s^3 ϵ^-1 ϵ_s^-1) | |
Pred-Corr. Predictor-Corrector + Cholesky numerical_recipies | O(√(n)(n+m)L) | |
Pred-Corr. Predictor-Corrector + QLSA QLSAchilds | O(√(n) L s^3 k ϵ^-1 ϵ_s^-1) | | O(√(n) s^3 k ϵ^-1 ϵ_s^-1)
Table 1: Comparison of the complexity of different algorithms that can be used for solving LP problems. It includes only leading-order terms. QLSA stands for Quantum Linear System Algorithm, although it is not directly comparable to usual linear system algorithms, for a series of reasons explained in Aaronson-read_the_fine_print . 'Grad. des.' indicates Preconditioned Conjugate Gradient Descent, and 'Cholesky' stands for Cholesky decomposition. Note that the Multiplicative weights approach can be applied to more general Semidefinite Programming problems, and that the sparsity it reports equals 1 in Linear Programming, since Linear Programming is the particular case of Semidefinite Programming in which all matrices are diagonal. Additionally, take into account that the sparsity contribution to the complexity in the QLSA and Preconditioned Conjugate Gradient Descent cases comes mainly from the preconditioning, which we have chosen to be the Sparse Approximate Inverse SPAI for comparison purposes. ϵ_s is the error coming from the sparsification. Finally, the average time complexity is the same as the worst case, except for the fact that it takes into account the independence of the number of iterations with respect to L Average_performance .

In this section we compare the complexity of our algorithm against other algorithms that can be used to solve Linear Programming problems. In particular, we compare against the same Predictor-Corrector algorithm using the fastest classical Linear System Algorithm (conjugate gradient descent conjugate_gradient ), and against the recent algorithm proposed by Brandão and Svore Brandao-Svore for solving Semidefinite Programming problems, a more general class of problems than those studied here (Linear Programming).

Firstly, we must take into account that, as we are using the Predictor-Corrector algorithm Predictor-Corrector , by construction there are O(√(n) L) iterations of the main loop, and therefore the final time complexity will carry such a factor. For sparse problems (as those we are considering), we should also take into account the complexity of solving two Linear Systems of Equations per iteration. The QLSA we are using is QLSAchilds , with complexity O(s k polylog(s k/ϵ)). Note that we are not using Amplitude Amplification to lower the dependence on k, but this is because we will not be needing it, since we do not amplify the amplitude at the end. In contrast, the fastest comparable classical Linear System Algorithm is the conjugate gradient method conjugate_gradient , which in our case has time complexity O((n+m) s √(k)) or O((n+m) s k), depending on whether the matrix is positive definite or not (an additional factor ϵ_s^-1 comes from the sparsification).

But we also have to take into account other procedures. Those are: the preconditioning using SPAI, which requires O(n+m) space complexity and O(s^3) time complexity; since this is a classical procedure needed to control k, it will be the same for the classical algorithm if we substitute conjugate gradient descent for the QLSA. The preparation of quantum states has constant query complexity, but the readout procedure (Amplitude Estimation, algorithm 2) has O(n+m) space complexity (from each solution we can only read out one amplitude) and O(ϵ^-1) time complexity.
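As a bookkeeping sketch, with the symbols of the abstract and all polylogarithmic factors dropped, the worst-case cost composes as:

import sympy as sp

# O(sqrt(n) L) main-loop iterations, each paying the linear-system solve plus
# the O(1/(eps*eps_s)) Amplitude Estimation readout (leading order only).
n, L, s, k, eps, eps_s = sp.symbols("n L s k epsilon epsilon_s", positive=True)
iterations = sp.sqrt(n) * L
per_iteration = s**3 * k / (eps * eps_s)
print(sp.simplify(iterations * per_iteration))  # -> L*k*s**3*sqrt(n)/(epsilon*epsilon_s)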

Let us therefore compare the complexity of different ways of solving Linear Programming. To leading order, we have table 1.

To summarize all the components of our quantum Predictor-Corrector algorithm and the interrelations among them, we show a diagram in Fig. 3, in the form of a flow chart of actions from the initialization to the termination of the quantum algorithm, providing the solution to the given (LP) and (LD) problems in (2) and (3).

Figure 3: Flow chart of the algorithm

IV Overall structure of the algorithm

IV.1 Initialization

The initialization procedure consists in preparing the matrix M and the state f.

1:procedure Initialization
2:     Problem: Solve the following dual problems
(57) (LP): min c^T x subject to Ax = b, x ≥ 0,
and
(58) (LD): max b^T y subject to A^T y ≤ c.
3:     Input: Sparse matrix A, and sparse vectors b and c.
4:     Output: Dual solutions x* and (y*, ζ*), or a signal that the problem is infeasible.
5:     Initialization: We want to form the matrix of (19).
6:     Set x^0 = ζ^0 = ϵ_s e, and y^0 = 0.
7:     Define b̄ := b − ϵ_s Ae and c̄ := c − ϵ_s e.
8:     Calculate classically z̄ = ϵ_s c^T e + 1.
9:     Set b̄ → b and c̄ → c (sparsification).
10:     Set (y, x, τ, θ, ζ, κ) = (y^0, x^0, 1, 1, ζ^0, 1).
Algorithm 4 Quantum interior point algorithm initialization.

IV.2 Termination

For the termination, we propose one possible way of using Grover's algorithm to run the termination explained in Predictor-Corrector . Any other classical termination is also possible.

1:procedure Termination
2:     In this section we propose a termination technique using Grover algorithm Grover and GroverMin to find the optimal solution. We suppose the search space is small enough to allow for this ‘brute force’ search without affecting the complexity class of the main loop. This technique can be nevertheless substituted by any other efficient classical termination.
3:     if termination of algorithm 6 was due to criterion then
4:         (2) or (3) do not have feasible solutions such that