I Introduction
Linear Programming (LP) problems are among the most basic optimization problems NeringTucker ; Padberg ; Murty . Applications abound on both personal and professional fronts: improving a project delivery, scheduling of tasks, analyzing supply chain operations, shelf space optimization, and designing better strategies, logistics and schedules in general. LP is also used in Machine Learning, where parts of Supervised Learning can be formulated as linear programming: a system is trained to fit a mathematical model of an objective (cost) function from labeled input data, and can later predict values for unseen test data
RussellNorvig ; MRT . More specifically, linear programming is a method to achieve the best outcome, such as maximum profit or lowest cost, in a mathematical model whose requirements are represented by linear relationships known as constraints. Semidefinite Programming (SDP) is an extension of LP in which the objective or cost function is formulated with a non-diagonal matrix and the constraints contain more general inequalities VandernbergheBoyd ; Todd ; LaurentRendl ; deKlerk .

We are in the era of small quantum computers with reduced computational capabilities due to noisy physical qubits
Preskill ; Ions ; SCQ1 ; SCQ2 . The challenge of surpassing the power of current and foreseeable classical computers is attracting a lot of attention in academia and in technological companies Preskill2 ; AaronsonArkhipov . This motivates the endeavour of searching for new quantum algorithms beyond the standard ones that spurred the field of quantum computation in the mid-90s (Shor, Grover, etc.) Shor ; Grover ; NielsenChuang ; GMD . Only recently have SDP problems been given a quantization procedure, by Brandão and Svore, providing us with the first quantum advantage for these optimization problems BrandaoSvore ; QuantumSDP1 ; QuantumSDP2 ; QuantumSDP3 ; QuantumSDP4 .

The development of methods to solve LP problems has a long tradition, starting with the Simplex Method Murty . Interior Point (IP) methods stand out among the large variety of available methods PotraWright . In turn, IP methods represent a whole class of solving strategies for LP optimization. Among them, the Predictor-Corrector Method PredictorCorrectorI ; PredictorCorrector is arguably one of the best procedures to achieve an extremely well-behaved solution. Here we present a quantum algorithm that relies on a quantization of the Predictor-Corrector Method. One important feature of our quantum IP algorithm is that it is a hybrid algorithm: partially classical, partially quantum. This feature has become very common; a similar situation occurs with the Brandão-Svore algorithm in SDP, with the Quantum Eigensolver for quantum chemistry Aspuru ; Mezzacapo ; Solano ; Innsbruck ; ReviewQCh , and with many others. The core of this quantum IP algorithm is the quantization of a crucial step of the Predictor-Corrector method by means of the HHL quantum algorithm for solving linear systems of equations HHL ; more precisely, by means of a Quantum Linear System Algorithm (QLSA) QLSAchilds that has improved features with respect to the original HHL algorithm.
In order to apply the QLSA in the context of LP, we have to address several caveats, since a straightforward application of it is doomed to failure.
The quantum IP algorithm we propose benefits from several fundamental properties inherited from the classical Predictor-Corrector algorithm, and has a better performance than other classical IP algorithms. In particular PredictorCorrector :

The PredictorCorrector method can solve the LP problem without assuming the existence of feasible or optimal solutions.

If the LP problem has a solution, the loop of this interior point algorithm approaches feasibility and optimality simultaneously; and if the problem is infeasible or unbounded, the algorithm detects infeasibility for either the primal or the dual problem.

The algorithm can start from any point near the center of the positive orthant, and does not use any big penalty or lower bound (except in our case in the first iteration, as we will see).
The notions of feasible and optimal solutions, etc., are defined in Sec. II, where a self-contained review of the Predictor-Corrector method is presented.
The time complexity of the proposed quantum IP algorithm is and the space complexity is , where is the number of variables of the cost function, is the number of constraints, is the size of the encoded data (see Eq. (1)), is the sparsity of the matrix of constraints, is a preconditioning parameter to be specified later, is the precision of the algorithm, and is a precision coming from a sparsification procedure that we will explain afterwards. This represents a quantum speedup in with respect to the best comparable classical IP algorithm (preconditioned conjugate gradient descent), with efficiency , or , and space complexity , if we use the customary Cholesky decomposition numerical_recipies ; PotraWright ; Parallel_cholesky .
On the other hand, the average time complexity of the quantum IP algorithm is , which equals the efficiency of the quantum SDP algorithm of BrandãoSvore when restricted to LP problems. A precise comparison among different classical and quantum algorithms can be found in Table I.
It is worth mentioning that our quantization approach to LP problems is radically different from the method of Brandão and Svore, and this comes with several benefits. Namely, the problem with quantizing linear programming using multiplicative weight methods AroraKale as in BrandãoSvore is that they yield an efficiency depending on parameters and of the primal and dual problems. In fact, these might depend on the sizes of the cost function, so the real time complexity of the algorithm remains hidden. Moreover, generically and unless otherwise specified, these parameters cannot be computed beforehand, but only after running the algorithm. Thus, the real efficiency of the quantum algorithm is masked by overhead prefactors behaving badly in R and r. This situation is clearly not satisfactory, and quantization methods with a clean quantum advantage, like the ones proposed here, are desirable. The quantum IP algorithm has a better behaviour in the precision as compared to the strong power-like behaviour of the most recent improvement of the Brandão-Svore algorithm QuantumSDP3 (although we should also take into account a precision coming from the sparsification process). On the contrary, the space complexity of the latter is , whereas both the classical and quantum IP algorithms have space complexity , or if the classical algorithm uses Cholesky decomposition.
Next, we present a more detailed description of our main results and the structure of the paper.
i.1 Results
This article combines the Predictor-Corrector algorithm PredictorCorrector with the Quantum Linear System Algorithm (QLSA) QLSAchilds with the aim of obtaining a hybrid (quantum-classical) interior point algorithm for linear programming, which runs on average as , which is optimal as explained in BrandaoSvore . The major price to pay is a space complexity, in the sense of classical parallelism.
The main constraint in using this algorithm is that the matrix and the vectors and from (2) and (3) must be sparse. This is because one way to control the condition number parameter of the Quantum Linear System Algorithm is using a preconditioner (a Sparse Approximate Inverse SPAI ), which has complexity , for the sparsity parameter (the number of nonzero terms). The dependence on other parameters is comparatively rather good. We obtain
coming from the sparsification procedure and the Amplitude Estimation algorithm
Amplitude_estimation for the readout procedure of the QLSA. The sparsification may also lead to numerical precision problems in the first iteration of the algorithm, due to some terms of size . The dependence on is worse than in comparable classical algorithms, where one would get , but other quantum algorithms do not reach this bound either, and are in fact worse.

Our algorithm inherits the nice properties of the Predictor-Corrector algorithm, since we have successfully implemented the QLSA at the core of solving the various linear systems of equations appearing in this classical Interior Point algorithm.
i.2 Structure of the paper.
The paper has two main sections. The first reviews the Predictor-Corrector algorithm from PredictorCorrector . It is divided into subsections where we explain how to initialize and terminate the algorithm, as well as the main loop.
In the second section we explain the changes we carry out in order to use the QLSA from QLSAchilds . In particular, we start with two subsections discussing the condition number and the sparsity. Then we focus on how to prepare the initial quantum states for the QLSA, following SPAIQLSA , and how to read out the results using Amplitude Estimation Amplitude_estimation . Finally, we explain the QLSA itself, comment on the possibility of quantizing the termination of the algorithm, and devote one subsection to the complexity of the overall algorithm and its comparison with the alternatives.
We recommend that the reader consult the algorithm listings, where we describe the precise process step by step, as well as the figures depicting the flow of the algorithm. Since they help in understanding the overall process, it is best to study them before going into the details of each subsection.
II The Predictor-Corrector algorithm
In this section we review the Predictor-Corrector algorithm of Mizuno, Todd and Ye for solving Linear Programming problems PredictorCorrector . As stated in the original article, we will see that it performs iterations of the main loop in the worst-case scenario, where is the number of variables and the length of the encoding of the input data:
(1) 
Note that the smallest value can take is . However, in the average case the number of iterations will not depend on , but rather Average_performance .
ii.1 Initialization
The linear programming problem we want to solve is denoted as (LP): Given , and , find such that:
(2a)  
(2b) 
If the matrix A fulfills some weak requirements (the duality gap being 0), its dual problem has the same solution. The dual problem is (LD): find such that
(3a)  
(3b) 
It is common to define the slack (dual) variable for the constraint (3). However, in order to avoid confusion with the sparsity of , we will call it (zeta), by analogy with the notation sometimes used in semidefinite programming ().
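The primal-dual pair (2)-(3) and its optimality conditions can be checked numerically. The following minimal sketch (our own toy instance, not from the text) verifies primal and dual feasibility, a zero duality gap, and complementary slackness with NumPy:

```python
import numpy as np

# Toy instance of (LP)/(LD): minimize c^T x s.t. A x = b, x >= 0,
# and its dual: maximize b^T y s.t. A^T y + zeta = c, zeta >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])

x = np.array([1.0, 0.0])   # primal optimum: put all weight on the cheaper variable
y = np.array([1.0])        # dual optimum
zeta = c - A.T @ y         # dual slack (called zeta to avoid clashing with sparsity)

assert np.allclose(A @ x, b) and (x >= 0).all()   # primal feasibility
assert (zeta >= -1e-12).all()                     # dual feasibility
print(c @ x - b @ y)       # duality gap: 0.0 at optimality
print(x * zeta)            # complementary slackness: [0. 0.]
```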
To solve the previous problem, we set up an auxiliary problem that is artificial, homogeneous (in the sense that there is a single nonzero constraint), and self-dual (its dual problem is itself). This allows us to apply primal-dual interior-point algorithms without doubling the dimension of the linear system solved at each iteration. Therefore, given any , , and , we formulate (HLP):
(4a)  
such that ():  
(4b)  
(4c)  
(4d)  
(4e) 
with
(5) 
The constraint (4e) is used to impose self-duality. It is also important to remark that , and (which in PredictorCorrector is denoted by ) indicate the infeasibility of the initial primal and dual points, and the duality gap, respectively.
Recall also that we use slack variables to convert inequality constraints into equality constraints. Those slack variables indicate the amount by which the original constraint deviates from an equality. As we have two inequality constraints, we define slack variables for (4c) and for (4d):
(6)  
(7) 
This implies that we can rewrite (4e) as
(8) 
Once we have defined these variables, Theorem 2 of PredictorCorrector indicates that

The dual (HLD) of (HLP) is again (HLP).

We can check that a feasible solution to this new system is:
(9) 
There is an optimal solution such that
(10) which we call strictly self-complementary. Self-complementary indicates that it solves (HLP), and strictly indicates that the inequalities are fulfilled strictly (with rather than ).
Therefore, we can choose any point fulfilling (9) as a feasible starting point for our algorithm. A particularly simple one would be
(11) 
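The feasibility of such a starting point can be verified numerically. The sketch below is our own: the residual names b_bar, c_bar and z_bar are assumptions following the usual self-dual-embedding construction, and the toy data are random. It checks that the four homogeneous constraints (4b)-(4e) vanish at the all-ones interior point (x = e, y = 0, zeta = e, tau = kappa = theta = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

e = np.ones(n)
x, y, zeta = e.copy(), np.zeros(m), e.copy()
tau = kappa = theta = 1.0

b_bar = b - A @ e        # primal infeasibility of the starting point
c_bar = c - e            # dual infeasibility (A^T y + zeta with y = 0, zeta = e)
z_bar = c @ e + 1.0      # initial duality-gap measure

# The four homogeneous constraints all vanish at this point:
r1 = A @ x - b * tau + b_bar * theta
r2 = -A.T @ y + c * tau - c_bar * theta - zeta
r3 = b @ y - c @ x + z_bar * theta - kappa
r4 = -b_bar @ y + c_bar @ x - z_bar * tau + (n + 1.0)
print(np.max(np.abs(r1)), np.max(np.abs(r2)), abs(r3), abs(r4))  # all ~0
```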
ii.2 Main loop
In this section we explain how to set up an iterative method that allows us to get close to the optimal point, following a path along the interior of the feasible region. The original references are PredictorCorrectorI ; PredictorCorrector . To define that path, Theorem 5 of PredictorCorrector states that

For each , there is a unique point , where is the set of feasible points of (HLP), such that
(12) where . We will later see that we use the analogous notation .

Let be a point in the null space of the constraint matrix of (HLP) with variables and , i.e.
(13a) (13b) (13c) (13d) Then
(14)
This theorem defines the following path in (HLP)
(15) 
and its neighbourhood
(16) 
In consequence, the algorithm goes as follows: starting from an interior feasible point and given the following system of equations for variables and :
(17) 
(18) 
which can be written as , i.e.
(19) 
perform the following steps iteratively:
Predictor step: Solve (19) with for the previously calculated point . Then find the largest step length such that
(20) 
is in , and update the values accordingly.
Corrector step: Solve (19) again for the updated point, this time with the centering term switched on, and take a unit step to bring the iterate back towards the central path, inside the smaller neighbourhood.
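The two steps above can be sketched classically. The following minimal Python stand-in is our own didactic example on a standard-form LP (not the full (HLP) system): it solves a Newton system analogous to (19) once with the centering parameter off (predictor) and once with it on (corrector), applying a positivity-preserving step length:

```python
import numpy as np

# One predictor + corrector Newton iteration for min c^T x s.t. A x = b, x >= 0.
def newton_step(A, b, c, x, y, z, gamma):
    m, n = A.shape
    mu = x @ z / n
    # Assemble the KKT-style system analogous to (19): M u = f.
    M = np.block([
        [np.zeros((n, n)), A.T,               np.eye(n)],
        [A,                np.zeros((m, m)),  np.zeros((m, n))],
        [np.diag(z),       np.zeros((n, m)),  np.diag(x)],
    ])
    f = np.concatenate([c - A.T @ y - z, b - A @ x, gamma * mu - x * z])
    u = np.linalg.solve(M, f)
    return u[:n], u[n:n + m], u[n + m:]

def max_step(v, dv):
    # Largest alpha in (0, 1] keeping v + alpha*dv strictly positive.
    neg = dv < 0
    return min(1.0, 0.9 * np.min(-v[neg] / dv[neg])) if neg.any() else 1.0

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([2.0]); c = np.array([1.0, 2.0, 3.0])
x = np.array([1.0, 0.5, 0.5]); y = np.zeros(1); z = c - A.T @ y  # interior start

for gamma in (0.0, 1.0):                     # predictor, then corrector
    dx, dy, dz = newton_step(A, b, c, x, y, z, gamma)
    a = min(max_step(x, dx), max_step(z, dz))
    x, y, z = x + a * dx, y + a * dy, z + a * dz

print(x @ z / len(x))  # duality measure mu shrinks toward 0
```

The predictor (gamma = 0) drives the duality measure down, while the corrector (gamma = 1) recenters without increasing it.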
ii.3 Termination
The loop from the previous section will run over until one of the following two criteria is fulfilled: for small numbers, either
(22) 
or
(23) 
We will have to iterate up to times, with .
If the termination is due to condition (23), then we know that there is no solution fulfilling . Following PredictorCorrector , we will then consider that either (LP) or (LD) is infeasible or unbounded.
However, if termination is due to (22), denote by the index set . Let also B denote the columns of M whose index is in , and C the rest.
Case 1: If solve for
(24a)  
such that  
(24b) 
Case 2: If and we solve for and from
(25a)  
such that  
(25b) 
The result of either of these two calculations will be the output of our algorithm, and the estimate of the solution of the (HLP) problem. In particular, will be the calculated in the least-squares projection, together with , and will be the calculated, again, in the least-squares projection.
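The projections (24)-(25) amount to finding the closest point on an affine subspace. A minimal sketch of the Case 1 projection (our own toy data, with the standard closed-form least-squares correction):

```python
import numpy as np

# Project the final iterate onto {x : B x = b}, moving it as little as possible,
# where B collects the columns of M indexed by sigma = {i : x_i >= zeta_i}.
def project_affine(B, b, x0):
    # Minimizer of ||x - x0|| subject to B x = b.
    return x0 + B.T @ np.linalg.solve(B @ B.T, b - B @ x0)

B = np.array([[1.0, 1.0]])      # toy constraint block
b = np.array([1.0])
x0 = np.array([0.55, 0.52])     # near-optimal iterate, slightly infeasible

x = project_affine(B, b, x0)
print(B @ x)                    # feasible: ~[1.]
print(np.linalg.norm(x - x0))   # smallest correction achieving feasibility
```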
III The quantum algorithm
The aim of this section is to explain how the Quantum Linear System Algorithm (QLSA) can help us run this algorithm efficiently, in the same spirit as, for example, FEM does for the Finite Element Method. This is due to the fact that solving (19) is clearly the most computationally expensive part of each step. We will use the following result (algorithm):
Theorem 1 QLSAchilds : Let M be a Hermitian matrix (if the matrix is not Hermitian, it can be embedded as a submatrix of a Hermitian one) with condition number
(that is, the ratio between the largest and smallest eigenvalues, for positive definite matrices), and let M have sparsity
(at most nonzero entries in each row). The usual notation for the condition number is , but since we want to avoid confusion with the slack variable from the Predictor-Corrector algorithm, we will call it . Let be an N-dimensional unit vector, and assume that there is an oracle which produces the state , and another which, taking as input, outputs the location and value of the i'th nonzero entry in row of . Let
(26) 
Then, QLSAchilds constructs an algorithm, relying on Hamiltonian simulation, that outputs the state up to accuracy , with bounded probability of failure, and makes
(27) 
uses of and
(28) 
of ; and has overall time complexity
(29) 
In our case the variable is the size of the matrix of (19), that is , so the time complexity of running their proposed algorithm is , considering that .
Using the QLSA is not straightforward, so let us look first into the parameters involved: the condition number and the sparsity. The sparsity will play an important role on the Hamiltonian simulation necessary in the QLSA, while the condition number will appear when we try to implement a linear combination that calculates as we shall see.
iii.1 The condition number .
We have seen that the QLSA is linear in . We also know, from Ambainis , that this dependence is optimal. Therefore it is important to check that does not depend (except maybe polylogarithmically) on .
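To see concretely why controlling the condition number matters, the following small illustration (our own, not from the text) shows how a normal-equations matrix of the type underlying systems like (19) degrades as complementary pairs approach zero along the central path:

```python
import numpy as np

# As mu -> 0, the scaling matrix D^2 = diag(x_i / zeta_i) mixes huge and tiny
# entries, so the condition number of A D^2 A^T blows up (like 1/mu^2 here).
A = np.eye(2)
conds = []
for mu in (1e-1, 1e-2, 1e-3):
    x = np.array([1.0, mu])
    zeta = np.array([mu, 1.0])      # complementary pair x_i * zeta_i = mu
    D2 = np.diag(x / zeta)
    conds.append(np.linalg.cond(A @ D2 @ A.T))
    print(mu, conds[-1])
```

This is why a preconditioner is needed at every iteration: without it, the linear dependence of the QLSA on the condition number would destroy the speedup as the iterates converge.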
The main problem, however, is that we should be able to control on each of the iterations, in order to be able to calculate the overall complexity beforehand. This is not easy since the matrix of (19) will be updated each time we solve the system. For that reason we will use the Sparse Approximate Inverse (SPAI) algorithm SPAI as suggested in SPAIQLSA , to precondition the system and control . Other preconditioning methods not discussed here might also be possible.
Let us follow SPAI . The procedure consists in finding a matrix that minimizes in the Frobenius norm.
(30) 
This problem can be separated into least squares problems
(31) 
to be solved in parallel, where the hat means that columns and rows with all zeros have been removed, and is a Kronecker delta between the row index and .
The key to the algorithm is the choice of the sparsity pattern of . SPAI is itself a recursive algorithm, so initially this can be done by taking with the same sparsity pattern as . In subsequent applications of the preconditioning algorithm, once we have found the best pattern for , we can reuse that pattern in the rest of the iterations of the main loop of the Predictor-Corrector algorithm, which will make it run faster.
This means solving independent least squares problems in parallel, iterating the procedure times, with and the row and column sparsity respectively. Note that it is customary to refer to the row sparsity simply as sparsity, but due to the (anti)symmetry of the matrix , . This can be done in operations with calls to the oracle of the matrix , , with the additional factor coming from the iteration procedure of SPAI.
Now define the residues
(32) 
Making can be achieved by inverting the matrix , so that . The classical complexity of inverting a matrix is (this is another way of obtaining that complexity bound); alternatively, we can set the more modest objective of attaining
(33) 
If so, we have a theorem SPAI indicating that if , then the condition number is bounded as
(34) 
Since in QLSAchilds the complexity dependence on and is the same, we want to find out which is necessary in order to have
(35) 
That means
(36) 
so if then we can take .
The preconditioning will be carried out classically. Overall the time complexity of SPAI is and the space complexity is , since we need to solve systems of equations in parallel.
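A minimal classical sketch of one SPAI sweep may help fix ideas. It is our own simplification (fixed sparsity pattern taken from the matrix itself, no recursive pattern refinement), showing the independent per-column least squares problems (31):

```python
import numpy as np

# For each column k, solve the small restricted least squares problem
# min || M_hat m - e_hat_k || over the allowed sparsity pattern; the columns
# are independent, which is what the classical parallelization exploits.
def spai_column(M, k, pattern):
    J = np.flatnonzero(pattern[:, k])                 # allowed nonzeros of column k
    I = np.flatnonzero(np.abs(M[:, J]).sum(axis=1))   # rows touched by those columns
    M_hat = M[np.ix_(I, J)]
    e_hat = (I == k).astype(float)                    # restricted unit vector e_k
    m_hat, *_ = np.linalg.lstsq(M_hat, e_hat, rcond=None)
    col = np.zeros(M.shape[0]); col[J] = m_hat
    return col

M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
pattern = M != 0                                      # reuse M's own pattern
N = np.column_stack([spai_column(M, k, pattern) for k in range(3)])
print(np.linalg.norm(M @ N - np.eye(3)))              # small residual: N ~ M^{-1}
print(np.linalg.cond(M @ N) <= np.linalg.cond(M))     # preconditioned system is better
```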
iii.2 The sparsity
In the previous section we have seen that the sparsity plays an important role: in the QLSA the time complexity is linear in this parameter, whereas in the preconditioning it is cubic.
Luckily, the sparsity of the system does not vary when we update (19). However, we need to ensure that it is a sparse system from the start. Obviously, needs to be sparse; otherwise we might be able to employ the QLSA for dense matrices DenseHHL , but not the one we have been using, the QLSA for sparse systems QLSAchilds . For the time being, let us suppose that is in fact sparse. What else do we need? Since , , and also appear in the matrix , it is important that those vectors are sparse too.
Let us assume then that both and are sparse. Otherwise we might want to sparsify our matrix, using for example the sparsifier given in arora_sparsifier , but it would be of little help, since it would take the sparsity to , for . The problem is that we want , so it would not be a good procedure. If , we can make by solving the linear system of equations for the variable (see (5)). This trick is not available for , though, because , so we abandon this procedure in favour of a more powerful one.
The strategy we use is setting . We can take to make things simpler. The idea is to drop all those factors (because they are small) in and (cf. (5)). In particular, and . Let us now calculate the error we incur when we effectively set and .
First recall that we define
(37) 
The difference between the sparsified matrix and the original matrix consists of terms in one column and the same terms in the corresponding row. It can be calculated that expression (37) for our matrix is maximized for a vector with all (relevant) entries equal except the last one, which can take any value. is defined such that there are of those entries in the vector, and nonzero entries concentrated in one column and one row of .
It is then easy to see that . In brief, we can choose , , and sparsify our matrix setting and . We have seen that then .
Let us finally see that this causes an error of at most when solving the linear system of equations. Suppose that for any we have a such that . If instead of we use , then, by (37),
(38) 
Applying ,
(39) 
and we obtain
(40) 
where the subindex indicates that the error is associated with the sparsification procedure. Therefore, we can see that the complexity in the precision of calculating the inverse is also bounded by .
Using the previously explained procedure may lead to numerical representation problems in the first iteration, but these will disappear once . We must nevertheless take into account that this problem does not affect only the quantum algorithm, since any classical algorithm will have dependence , and the preconditioning will need as well.
Finally, it might be possible to bypass the sparsity problem using DenseHHL , although we have not explored this here, and it would worsen the dependence on of the overall algorithm. It would also require a different preconditioning procedure. Using our procedure requires , and to be sparse.
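The perturbation bound above can be checked numerically. In the sketch below (our own toy instance; eps_s plays the role of the dropped theta-scaled entries) the shift in the solution of the linear system stays below the standard operator-norm bound:

```python
import numpy as np

# Dropping a nearly-dense row/column of tiny entries perturbs M by Delta with
# ||Delta|| ~ eps_s; since M (x_full - x_sparse) = -Delta x_full, the solution
# moves by at most ||M^{-1}|| * ||Delta|| * ||x_full||.
rng = np.random.default_rng(1)
n, eps_s = 6, 1e-6
M = rng.standard_normal((n, n)) * 0.1 + np.eye(n)   # well-conditioned toy system
Delta = np.zeros((n, n))
Delta[0, :] = Delta[:, 0] = eps_s                   # the entries we sparsify away
f = rng.standard_normal(n)

x_full = np.linalg.solve(M + Delta, f)              # original (dense) system
x_sparse = np.linalg.solve(M, f)                    # sparsified system
err = np.linalg.norm(x_full - x_sparse)
bound = (np.linalg.norm(np.linalg.inv(M), 2)
         * np.linalg.norm(Delta, 2) * np.linalg.norm(x_full))
print(err <= bound)   # True: the error is controlled by eps_s
```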
iii.3 Quantum state preparation
If we want to use the QLSA QLSAchilds , we need to be able to prepare the quantum state and, after the procedure, read out the solution. In this section we explain how to carry out the former, following the procedure of SPAIQLSA .
The main problem with preparing a quantum state is that no procedure for efficiently preparing arbitrary quantum states is known SPAIQLSA . However, it is possible to prepare a quantum state of the form
(41) 
where is a notation for a linear combination such that .
To prepare it, we will use algorithm 1 from SPAIQLSA , which has query complexity ; the only condition is that we need an oracle that efficiently performs for each , and similarly another one performing for each , in superposition.
(42) 
(43) 
(44) 
(45) 
(46) 
(47) 
(48) 
Once we have this state we could in principle prepare several copies and postselect on the ancilla being measured in state . However, this last step will not be necessary: we will instead carry out the computations in the state and take care of the ancilla being in the right state during the readout procedure.
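A classical emulation of this preparation step may be helpful. The sketch is our own, and the exact amplitude layout of (41) in SPAIQLSA may differ in detail; it illustrates how each component of the input vector is carried by the ancilla-|0> branch, with the rest of the weight parked next to the |1> flag:

```python
import numpy as np

f = np.array([0.3, -0.5, 0.2, 0.4])
f = f / np.linalg.norm(f)          # the QLSA works with unit vectors
N = len(f)

# State on registers |i>|ancilla>: amplitude f_i / sqrt(N) on |i>|0>,
# sqrt(1 - f_i^2) / sqrt(N) on |i>|1>  (one common form of such encodings).
state = np.zeros((N, 2))
state[:, 0] = f / np.sqrt(N)
state[:, 1] = np.sqrt(1.0 - f**2) / np.sqrt(N)

print(np.isclose(np.linalg.norm(state), 1.0))   # a valid (normalized) state
branch = state[:, 0]
print(branch / np.linalg.norm(branch))          # ancilla-|0> branch is proportional to f
```

Postselecting on the ancilla being |0> would collapse onto |f>, which is exactly the step the text defers to the readout stage.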
iii.4 Readout of the solution of QLSA: Amplitude Estimation
In the same way that we need a procedure to prepare the quantum state that feeds into the QLSA, we need some way to read out the information in , defined as in equation (26). For this, first remember that we are only interested in the amplitude of those terms of the result that have the preparation ancilla in state and the ancillas of the QLSA in state , where is defined in step 6 of algorithm 3.
We could in principle use a result from SPAIQLSA that explains how to calculate the inner product of the solution with any vector. However, in our case we read out a single entry of the solution vector. As the procedure to calculate the inner product involves performing Amplitude Estimation several times, it is simpler and faster to use Amplitude Estimation Amplitude_estimation directly to estimate the absolute value of the amplitude of each component of the solution vector. The procedure is explicitly specified in algorithm 2, and a circuit representing Amplitude Estimation is shown in figure 2. The sign of the amplitudes is discussed afterwards.
(49) 
(50) 
using the Fourier transform. Recall that in our case,
will be the concatenation of algorithms 1 and 3. These are unitary (they have an inverse) since no measurements are performed at any step. Finally, we can see that is defined according to the states of the ancillas we wish to measure, and the element of the basis we want to obtain.
(51) 
(52) 
The reader might be surprised, since in order to perform the Predictor-Corrector algorithm we need the full solution , not just one element of it. The solution to this problem consists in classically parallelizing the entire procedure, so that the time complexity remains while the space complexity scales to , the same as for the preconditioning procedure. Put another way, we solve the same system of equations times (specifically ) in parallel, and read out one element of the solution vector from each copy of the solution.
The only drawback of this procedure is that Amplitude Estimation has a time complexity of instead of the we would have wished for. Unfortunately, we are not aware of any procedure that would allow us to read out the state faster, and in principle this procedure for Amplitude Estimation is optimal Amplitude_estimation . Since Amplitude Estimation requires iterations, and in each of them we incur a sparsification error (i.e., from setting and ) proportional to due to the sparsification of , the total complexity of the procedure is .
There is one more thing we should do: find out the sign of each entry of the vector, since Amplitude Estimation only estimates the absolute value of the amplitude. We propose the following method: we check the relative sign of every pair of amplitudes, and later calculate the global sign correction classically by checking whether one entry fulfills or is off by a sign (this is the same procedure used to correctly scale the solution vector).
To derive the relative sign between the amplitudes of any two entries ( and , for instance) of the solution vector , we can encode the states , with the needed normalization constant, using algorithm 1. Then we can calculate, with the procedure explained in SPAIQLSA , the quantities , which will be for and for when the relative sign is the same, or vice versa when the relative signs are opposite. One can establish the relative sign of all the entries of the solution vector (and therefore the solution up to a global sign) with the previous procedure in classical parallelism, which amounts to a space overhead we already had, and the same time complexity we already incurred when calculating the amplitudes.
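The relative-sign test can be emulated classically. In this sketch (our own; in the quantum version the overlap would come from the inner-product routine of SPAIQLSA and the magnitudes from Amplitude Estimation), comparing the measured overlap against the two sign hypotheses recovers the relative sign:

```python
import numpy as np

# Compare |<(e_i + e_j)/sqrt(2), x>|^2 with its two predicted values:
# (|x_i| + |x_j|)^2 / 2 if the signs agree, (|x_i| - |x_j|)^2 / 2 if not.
def same_sign(x, i, j):
    plus = np.zeros_like(x); plus[i] = plus[j] = 1.0 / np.sqrt(2.0)
    overlap = (plus @ x) ** 2                      # what the overlap routine measures
    s_same = (abs(x[i]) + abs(x[j])) ** 2 / 2.0    # hypothesis: signs agree
    s_diff = (abs(x[i]) - abs(x[j])) ** 2 / 2.0    # hypothesis: signs differ
    return abs(overlap - s_same) < abs(overlap - s_diff)

x = np.array([0.5, -0.3, 0.4, -0.2])
x = x / np.linalg.norm(x)
print(same_sign(x, 0, 2), same_sign(x, 0, 1))  # True False
```

Repeating the test for all pairs against a reference entry fixes the solution up to a global sign, which is then resolved classically as described above.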
iii.5 Quantum Linear System Algorithm (QLSA)
Let us now explain the heart of our construction: the QLSA of QLSAchilds . In particular, we will use the Fourier version, since even if it has slightly worse polylogarithmic scaling, its Hamiltonian simulation procedure can be used as a black box and is more generally applicable. Needless to say, it can be substituted by the Chebyshev polynomial approach whenever useful.
(53) 
(54) 
(55) 
(56) 
The standard procedure at the end of the QLSA explained here would be to perform Amplitude Amplification on the amplitude of the term we are interested in, in step 12 of algorithm 3. Note, from steps 4 and 5, that the complexity would be . This complexity would be multiplied by that of preparing in step 14 of algorithm 3 (), which is much bigger than that of preparing in step 13 of the same algorithm QLSAchilds . However, in order to avoid this additional factor without complicating the algorithm with the trick of Variable-time Amplitude Amplification Ambainis , we pass on the result without amplification, as is done in SPAIQLSA with the original HHL algorithm HHL . Note also that we cannot use Variable-time Amplitude Amplification because it contains a step involving measurements, incompatible with Amplitude Estimation. In brief: there is neither a need to perform Amplitude Amplification nor a way to do it efficiently (Variable-time Amplitude Amplification would not be unitary, as required).
We can see that in algorithm 3, in contrast to what happens in SPAIQLSA , the final complexity is instead of , since the procedure of QLSAchilds allows us to avoid Phase Estimation. This does not take into account the error due to the sparsification, which adds an additional .
Taking into account all the points previously explained, we describe the QLSA stepbystep in algorithm 3.
iii.6 On quantizing the termination.
If there exists a feasible and optimal solution, we have seen that the loop should terminate with either procedure (24) or (25). However, it is unclear how best to carry out this minimization. What we do know is that it can be calculated efficiently, or substituted by any more modern and efficient method, if one is found.
The cost of carrying out this termination by classical procedures should not be too large. In fact, according to ye_termination , the overall cost is around that of one iteration of the main loop.
However, we can also propose a quantum method to finish. It would consist in using a small Grover subroutine Grover to find all solutions of (24b) or (25b) in a small neighbourhood of the latest calculated point. After that, without reading out the state, one could apply GroverMin to find the one with the smallest distance to the calculated point, as in (24a) or (25a). In any case this should pose no problem, and should be computable efficiently.
iii.7 Complexity
Algorithms for LP | Worst-case time complexity | Space complexity | Average time complexity
Multiplicative weights QuantumSDP3 | | – | 
Pred-Corr. PredictorCorrector + conjugate gradient descent conjugate_gradient | | | 
Pred-Corr. PredictorCorrector + Cholesky numerical_recipies | | | 
Pred-Corr. PredictorCorrector + QLSA QLSAchilds | | | 
In this section we compare the complexity of our algorithm against other algorithms that can be used to solve Linear Programming problems. In particular, we compare against the same Predictor-Corrector algorithm but using the fastest classical linear system algorithm (conjugate gradient descent conjugate_gradient ), and against the recent algorithm proposed by Brandão and Svore BrandaoSvore for solving Semidefinite Programming problems, a more general class of problems than those studied here (Linear Programming).
Firstly, we must take into account that, as we are using the Predictor-Corrector algorithm PredictorCorrector , by construction there are iterations of the main loop, and therefore the final time complexity will carry such a factor. For sparse problems (such as those we are considering), we should also take into account the complexity of solving two Linear Systems of Equations. The QLSA we are using is QLSAchilds , with complexity . Note that we are not using Amplitude Amplification to lower the complexity from to : since we do not amplify the amplitude at the end, we will not be needing it, so we only incur time complexity. In contrast, the fastest comparable Classical Linear System Algorithm is the conjugate gradient method conjugate_gradient , which in our case has time complexity or , depending on whether the matrix is positive definite or not ( comes from the sparsification).
We also have to take into account other procedures: the preconditioning using SPAI requires space complexity and time complexity. Since this is a classical procedure needed to control , it would equally be required if we substituted conjugate gradient descent for the QLSA, for example. The preparation of quantum states has time complexity , whereas the readout procedure (Amplitude Estimation, algorithm 2) has space complexity (from each solution we can only read out one amplitude) and time complexity.
Let us therefore compare the complexity of different ways of solving Linear Programming. To leading order, we have table 1.
To summarize all the components in our quantum PredictorCorrector algorithm and the interrelations among them, we show a diagram in Fig. 3 in the form of a flow chart of actions from the initialization to the termination of the quantum algorithm providing the solution to the given (LP) and (LD) problems in (2) and (3).
IV Overall structure of the algorithm
iv.1 Initialization
The initialization procedure consists in preparing the matrix M, and the state f.
(57) 
(58) 
iv.2 Termination
In the termination we propose one possible way of using Grover's algorithm to run the termination explained in PredictorCorrector . Any other classical termination is also possible.