Online optimization is a field of optimization theory that deals with optimization problems in which the algorithm has no knowledge of the future [k2016]. An online algorithm reads an input piece by piece and must immediately return an answer piece by piece, even though the answer may depend on future pieces of the input. The goal is to return an answer that minimizes an objective function (the cost of the output). The most standard method to measure the effectiveness of an online algorithm is the competitive ratio [st85, kmrs86]. The competitive ratio is the approximation ratio achieved by the algorithm, that is, the worst-case ratio between the cost of the solution found by the algorithm and the cost of an optimal solution.
In the general setting, online algorithms have unlimited computational power. Nevertheless, many papers consider online algorithms under different restrictions: some restrict memory [bk2009, gk2015, blm2015, kkm2018, aakv2018, gs93, h95], others restrict time complexity [fnn2006, rbm2013].
In this paper, we focus on efficient online algorithms in terms of time complexity. We consider the k-server problem on trees. Chrobak and Larmore [cl91] proposed a k-competitive algorithm for this problem, which has the optimal competitive ratio. The existing implementation of their algorithm has O(n) time complexity for each query, where n is the number of nodes in the tree. For general graphs, there exists a time-efficient algorithm for the k-server problem [rbm2013] that uses min-cost-max-flow algorithms. However, it is too slow for the special case of a tree. In the case of a tree, a faster algorithm with dedicated preprocessing and query procedures is known [ky2020].
We propose a new time-efficient implementation of the algorithm from [cl91]. It has O(n log n) time complexity for preprocessing and O(k² + k log n) for processing a query. It is based on fast algorithms for computing the Lowest Common Ancestor (LCA) [bv93, bfc2000] and the binary lifting technique [bf2004]. Compared to [ky2020], the idea of our algorithm is simpler; it has less efficient preprocessing and more efficient processing of a query when k is small compared to n.
We revisit the problem of finding the first marked element in a collection of n objects. It is well known that it can be solved in O(√n) expected time given quantum oracle access to the input, and even in O(√p) expected time, where p is the position of the first marked element [ll2015, Theorem 10]. However, this algorithm has a small probability of taking Ω(n) time because of the properties of the Dürr-Høyer minimum finding algorithm [dh96] on which [ll2015] is based. We improve upon the state of the art in two ways: we give a worst-case O(√n) time algorithm that works even in the presence of two-sided bounded errors in the input. We also provide an O(√p) expected time algorithm in the case where the input has one-sided errors only. Compared to the algorithm of [ll2015], ours thus guarantees its O(√n) bound in the worst case. The technique that we propose is interesting in itself: it can also be used for boosting the success probability of binary search for a function with errors.
We also consider the k-server problem in the case where the description of the tree is given by a string path. The string path of a node in a rooted tree is a sequence of length h, where h is the height of the node, describing the path from the root to the node. It is possible to access a node by its path and to get the path of a node. Such a way of representing trees is useful, for example, as a path to a file in file systems. We leverage our classical algorithm for the k-server problem, and we improve a quantum search algorithm to obtain a quantum algorithm with running time O(k√n log k) for processing a query, without preprocessing. In the case of o(√n) queries, the total runtime of the quantum algorithm is smaller than that of the classical one.
2.1 Online algorithms
An online minimization problem consists of a set of inputs and a cost function. Each input I = (x_1, ..., x_n) is a sequence of requests, where n is the length of the input I. Furthermore, a set of feasible outputs (or solutions) is associated with each I; an output is a sequence of answers O = (y_1, ..., y_n). The cost function cost(I, O) assigns a positive real value to an input I and an output O. An optimal solution for I is a feasible output O_opt(I) with minimal cost.
Let us define an online algorithm for this problem. A deterministic online algorithm A computes the output sequence A(I) = (y_1, ..., y_n) such that y_i is computed based on x_1, ..., x_i only. We say that A is c-competitive if there exists a constant α ≥ 0 such that, for every n and for any input I of size n, we have cost(I, A(I)) ≤ c · cost(I, O_opt(I)) + α. The minimal c that satisfies the previous condition is called the competitive ratio of A.
2.2 Rooted Trees
Let us consider a rooted tree G = (V, E), where V is the set of nodes (vertices) and E is the set of edges. Let n = |V| be the number of nodes, or equivalently the size of the tree. We denote by r the root of the tree. A path P = (v_1, ..., v_m) is a sequence of nodes that are connected by edges, i.e. (v_i, v_{i+1}) ∈ E for all i ∈ {1, ..., m − 1}, such that there are no duplicates among v_1, ..., v_m. Here m − 1 is the length of the path. The distance dist(v, u) between two nodes v and u is the length of the (unique) path between them. For each node v other than the root, we can define the parent node Parent(v): the neighbour of v on the path from v to the root. Additionally, we can define the set of children Children(v) = {u : Parent(u) = v}.
Lowest Common Ancestor (LCA).
Given two nodes u and v of a rooted tree, their Lowest Common Ancestor LCA(u, v) is the node w such that w is an ancestor of both u and v, and w is the closest one to u and v among all such ancestors. The following result is well known.
Lemma 1 ([bv93, bfc2000])
There is an algorithm for the LCA problem with the following properties:
The time complexity of the preprocessing step is O(n).
The time complexity of computing the LCA of two vertices is O(1).
We call LCAPreprocessing the subroutine that does the preprocessing for the algorithm and LCA(u, v) the subroutine that computes the LCA of two nodes u and v.
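For intuition, the following sketch shows the classic Euler-tour-plus-sparse-table approach to LCA. It achieves O(n log n) preprocessing (not the O(n) of the lemma, which requires the more involved machinery of [bfc2000]) but already answers queries in O(1); the class name and input format are ours.

```python
# Illustrative LCA via Euler tour + sparse table: O(n log n) preprocessing,
# O(1) queries.  The tree is given as children lists; nodes are 0..n-1.

class LCA:
    def __init__(self, n, root, children):
        self.first = [0] * n          # first occurrence of each node in the tour
        self.tour = []                # Euler tour: (depth, node) pairs
        stack = [(root, 0, iter(children[root]))]
        self.tour.append((0, root))
        self.first[root] = 0
        while stack:                  # iterative DFS building the Euler tour
            v, d, it = stack[-1]
            child = next(it, None)
            if child is None:
                stack.pop()
                if stack:             # re-enter the parent in the tour
                    self.tour.append((stack[-1][1], stack[-1][0]))
                continue
            self.first[child] = len(self.tour)
            self.tour.append((d + 1, child))
            stack.append((child, d + 1, iter(children[child])))
        # sparse table over the tour for range-minimum (by depth) queries
        m = len(self.tour)
        self.table = [self.tour[:]]
        k = 1
        while (1 << k) <= m:
            prev = self.table[-1]
            self.table.append([min(prev[i], prev[i + (1 << (k - 1))])
                               for i in range(m - (1 << k) + 1)])
            k += 1

    def query(self, u, v):
        l, r = sorted((self.first[u], self.first[v]))
        k = (r - l + 1).bit_length() - 1
        return min(self.table[k][l], self.table[k][r - (1 << k) + 1])[1]
```

The minimum-depth entry between the first occurrences of u and v in the Euler tour is exactly their LCA, which is why a range-minimum structure suffices.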
Binary Lifting Technique.
This technique from [bf2004] allows us to obtain the vertex that is at distance d above a given vertex v with O(log n) time complexity. There are two procedures:
BLPreprocessing prepares the required data structures. The time complexity is O(n log n).
MoveUp(v, d) returns the vertex on the path from v to the root at distance d from v. The time complexity is O(log n).
The technique is well documented in the literature. We present an implementation in Appendix 0.A for completeness.
2.3 k-server Problem on Trees
Let G be a rooted tree, and assume that we are given k servers that can move among the nodes of G. At each time slot, a query q ∈ V appears. We have to “serve” this query, that is, to choose one of the servers and move it to q. The other servers are also allowed to move. The cost function is the total distance by which we move the servers. In other words, if before the request the servers are at positions v_1, ..., v_k, and after the request they are at v'_1, ..., v'_k, then q ∈ {v'_1, ..., v'_k} and the cost of the move is Σ_i dist(v_i, v'_i). The problem is to design a strategy that minimizes the total cost of serving a sequence of queries given online.
2.4 Quantum query model
We use the standard form of the quantum query model. Let f(x_1, ..., x_n) be an n-variable function we wish to compute on an input x ∈ {0, 1}^n. We are given oracle access to the input x, i.e. it is realized by a specific unitary transformation, usually defined as |i⟩|z⟩|w⟩ → |i⟩|z ⊕ x_i⟩|w⟩, where the |i⟩ register indicates the index of the variable we are querying, |z⟩ is the output register, and |w⟩ is some auxiliary work-space. An algorithm in the query model consists of alternating applications of arbitrary unitaries independent of the input and the query unitary, and a measurement in the end. The smallest number of queries for an algorithm that outputs f(x) with probability at least 2/3 on all x is called the quantum query complexity of the function f and is denoted by Q(f). We refer the readers to [nc2010, a2017, aazksw2019part1] for more details on quantum computing.
In the quantum algorithms in this article, to avoid any ambiguity with the queries from the k-server problem's definition, we refer to the quantum query complexity as the quantum time complexity. However, the two notions usually differ. For instance, in our algorithms, we use some modifications of Grover's search algorithm (see the next section), whose time complexity differs from its query complexity by a logarithmic factor.
Grover’s algorithm for quantum search
Definition 1 (Search problem)
Suppose we have a set of N objects named {1, 2, ..., N}, of which some are targets. Suppose O is an oracle that identifies the targets. The goal of a search problem is to find a target by making queries to the oracle O.
In search problems, one will try to minimize the number of queries to the oracle. In the classical setting, one needs Ω(N) queries to solve such a problem. Grover, on the other hand, constructed a quantum algorithm that solves the search problem with only O(√N) queries [g96], provided that there is a unique target. When the number of targets is unknown, Brassard et al. designed a modified Grover algorithm that solves the search problem with O(√N) queries [bbht98], which is of the same order as the query complexity of the Grover search.
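For intuition, here is a small statevector simulation of Grover iterations in pure Python (illustrative, not part of the algorithms in this paper): the oracle flips the sign of the target amplitude, and the diffusion step reflects all amplitudes about their mean.

```python
# Statevector simulation of Grover's search for a unique target among
# n_items elements.  Names are ours; this is a pedagogical sketch.
import math

def grover_probabilities(n_items, target, iterations):
    # start in the uniform superposition
    amp = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        # oracle: flip the sign of the target amplitude
        amp[target] = -amp[target]
        # diffusion: reflect every amplitude about the mean amplitude
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]
    return [a * a for a in amp]

probs = grover_probabilities(16, target=3, iterations=3)
```

With N = 16 and a unique target, ⌊(π/4)√N⌋ = 3 iterations already put more than 90% of the measurement probability on the target.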
3 A Fast Online Algorithm for the k-server Problem on Trees with Preprocessing
We first describe Chrobak-Larmore's k-competitive algorithm for the k-server problem on trees from [cl91]. Assume that we have a query on a vertex q, and the servers are on the vertices v_1, ..., v_k. We say that a server i is active if there are no other servers on the path from v_i to q. In each phase, we move every active server one step towards the vertex q. After each phase, the set of active servers can change. We repeat this phase (moving the active servers) until one of the servers reaches the queried vertex q.
The naive implementation of this algorithm has O(n) time complexity for each query. First, we run a depth-first search with time labels [cormen2001], whose result allows us to check in constant time whether a vertex u is an ancestor of a vertex v. After that, we can move each active server towards the queried vertex, step by step. Together, all active servers cannot visit more than O(n) vertices.
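The phase process of Chrobak-Larmore's algorithm can be simulated directly. The sketch below (our own helper names; the tree is given as an adjacency list) serves one query naively, returning the final positions and the total cost; it is useful as a reference when testing faster implementations.

```python
# Naive simulation of Chrobak-Larmore phases on a small tree (illustrative).
from collections import deque

def serve(adj, servers, q):
    """Move servers per Chrobak-Larmore phases until one reaches q.
    Returns the final server positions and the total cost."""
    # next_hop[v] = neighbour of v on the shortest path from v to q
    next_hop = {q: q}
    queue = deque([q])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in next_hop:
                next_hop[u] = v
                queue.append(u)

    def path(v):                      # vertices strictly after v, up to q
        p = []
        while v != q:
            v = next_hop[v]
            p.append(v)
        return p

    servers = list(servers)
    cost = 0
    while q not in servers:
        occupied = set(servers)
        active = [i for i, v in enumerate(servers)
                  if not (occupied & set(path(v)))]
        for i in active:              # each active server takes one step toward q
            servers[i] = next_hop[servers[i]]
            cost += 1
    return servers, cost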
In the following, we present an efficient implementation of Chrobak-Larmore's algorithm with preprocessing. The preprocessing part is done once and has O(n log n) time complexity (Theorem 3.1). The query processing part is done for each query and has O(k² + k log n) time complexity (Theorem 3.2).
3.1 Preprocessing
We do the following steps for the preprocessing:
We do the required preprocessing for the LCA algorithm discussed in Section 2.2.
We do the required preprocessing for the binary lifting technique discussed in Section 2.2.
Additionally, for each vertex v we compute the distance from the root to v, i.e. dist(r, v). This can be done using a depth-first search algorithm [cormen2001].
The algorithm for the preprocessing is the following (Algorithm 2).
Theorem 3.1
Algorithm 2 for the preprocessing has time complexity O(n log n).
The time complexity of the preprocessing phase is O(n) for LCA, O(n log n) for the binary lifting technique, and O(n) for computing the distances dist(r, v). Therefore, the total time complexity is O(n log n).
3.2 Query Processing
Step 1. We sort all the servers by their distance to the node q. The distance dist(v, q) between a node v and the node q can be computed in the following way. Let l be the lowest common ancestor of v and q; then dist(v, q) = dist(r, v) + dist(r, q) − 2 · dist(r, l). Using the preprocessing, this quantity can be computed in constant time. We denote by Sort this sorting procedure. In the following steps, we assume that dist(v_i, q) ≤ dist(v_{i+1}, q) for all i.
Step 2. The first server processes the query: we move it to the node q.
Step 3. For i ∈ {2, ..., k} we consider the i-th server. It will become inactive when some other server with a smaller index arrives on the path between v_i and q. Section 3.3 contains the different cases that can happen and how to compute the distance δ_i traveled by the i-th server before it becomes inactive. We then move the i-th server δ_i steps towards the query q. The new position of the i-th server is the vertex at distance δ_i from v_i on the path towards q, computed by the Move procedure of Section 3.4.
3.3 Distance to inactive state
When processing a query, all servers except one will eventually become inactive. The crucial part of the optimization is to compute quickly when a server becomes inactive. For the purpose of computing this time, we claim that we can pretend that servers “never go inactive”. Formally, let q be a query, i a server, and j another server with a smaller index. We know that i will become inactive because it is not the closest to the target. However, it is possible that this particular server j is not the one that will render i inactive. Nevertheless, we can pretend that j will never become inactive and compute the distance that i will travel before going inactive because of j; call this distance δ_{i,j} (the index i is fixed in this reasoning). We claim the following:
For any query q and any server i > 1 (i.e. a server that will become inactive), the distance travelled by i before it becomes inactive is equal to δ_i = min_{j < i} δ_{i,j}.
Let j be one of the servers that renders i inactive; then δ_i = δ_{i,j}, because j will not become inactive before it makes i inactive, hence for the purpose of computing δ_{i,j} it makes no difference whether j eventually becomes inactive or not. Therefore, we only need to prove that no other δ_{i,j'} is strictly smaller. Assume for contradiction that δ_{i,j'} < δ_{i,j} for some j', and pick j' so that δ_{i,j'} is minimum among all candidates (and in case of equality, pick the smallest possible index j'). Then there exists a vertex u such that u is on the paths from both v_i and v_{j'} to q, and j' would reach u at the moment when i has travelled δ_{i,j'}. Now we claim that j' must become inactive before it reaches u. Indeed, if not, it would reach u and make i inactive after i travels a distance δ_{i,j'} < δ_i, which is impossible by definition of δ_i. Therefore j' is rendered inactive before reaching u by another server j'' arriving at some vertex w on the path from v_{j'} to u. In particular, we must have dist(v_{j''}, w) ≤ dist(v_{j'}, w) and dist(v_{j'}, w) < dist(v_{j'}, u). But now observe that if we pretend that j'' never goes inactive, it will reach u after travelling a distance dist(v_{j''}, w) + dist(w, u) ≤ dist(v_{j'}, u), hence δ_{i,j''} ≤ δ_{i,j'}. But we chose j' so that δ_{i,j'} is minimal, so we must have δ_{i,j''} = δ_{i,j'} and therefore j' < j'' (we sort by index in case of a tie). Going back to the computation, we see that δ_{i,j''} = δ_{i,j'} implies that dist(v_{j''}, w) = dist(v_{j'}, w), i.e. j' and j'' reach w at the same time. But when two servers reach the same vertex simultaneously, the one with the greater index goes inactive, i.e. j'' would go inactive because of j'. This is a contradiction because we assumed that j'' is the one making j' inactive.
We have now reduced the problem to the following question: given a server i and another server j with a smaller index, compute δ_{i,j}, the distance until i becomes inactive because of j, pretending that j never goes inactive. There are several cases to consider, depicted in Figure 1, depending on the relative positions of q, v_i and v_j in the tree. Let u be the vertex where the paths from v_i to q and from v_j to q intersect for the first time; then δ_{i,j} = dist(v_j, u), since j arrives on the remaining path of i exactly when it reaches u. The vertex u can be found as follows. Let l_i = LCA(v_i, q) and l_j = LCA(v_j, q); both lie on the path from the root to q, so one is an ancestor of the other:
if l_i = l_j, then u = LCA(v_i, v_j); in particular, if v_j is an ancestor of v_i, then u = v_j and δ_{i,j} = 0;
if l_j is a proper ancestor of l_i, then u = l_i, because the path of j descends from l_j to q through l_i;
if l_i is a proper ancestor of l_j, then u = l_j, symmetrically.
Note that the case where q is an ancestor of both v_i and v_j is covered by the first item (with l_i = l_j = q), and the case where q is an ancestor of v_i but not of v_j is covered by the second item (with l_i = q, hence u = q and δ_{i,j} = dist(v_j, q)). In this case distinction, exactly one case applies, and each case requires only O(1) LCA computations.
The time complexity of computing δ_i = min_{j < i} δ_{i,j} is O(k).
Since a vertex u is an ancestor of a vertex v if and only if LCA(u, v) = u, we can check this condition in O(1) due to the results from Section 2.2. It follows that we can compute δ_{i,j} for a fixed j in O(1), and there are at most k − 1 other servers to consider.
3.4 How to move a server
We now consider the following problem: given a server on a vertex v and a distance δ ≤ dist(v, q), efficiently compute the new position of the server after moving it δ steps towards q. We use the binary lifting technique for this procedure.
Let l = LCA(v, q). If δ ≤ dist(v, l), then the resulting node is on the path between v and l, and we can thus invoke MoveUp(v, δ) from Section 2.2. Otherwise, we should first move the server to l, and then move it δ − dist(v, l) steps down towards q. Moving δ − dist(v, l) steps down from l is the same as moving dist(l, q) − (δ − dist(v, l)) steps up from q, i.e. invoking MoveUp(q, dist(l, q) − δ + dist(v, l)). The algorithm is presented in Algorithm 5.
The time complexity of the algorithm Move is O(log n).
3.5 Complexity of the Query Processing
Theorem 3.2
The time complexity of the query processing phase is O(k² + k log n). Indeed, Sort takes O(k log k) time, computing the distances δ_i for all servers takes O(k²), and the k invocations of Move take O(k log n).
4 Binary Search for a Function with Errors
Consider a search space S = {1, ..., n} and a subset M ⊆ S of marked elements. Define the indicator function f by f(l, r) = 1 if M ∩ {l, ..., r} is nonempty, and f(l, r) = 0 otherwise.
In other words, f(l, r) indicates whether there is a marked element from M in the interval {l, ..., r}. Now assume that we do not know f but have access to a two-sided probabilistic approximation f̃ of f. Formally, there is a probability p > 1/2 such that, for any l ≤ r, Pr[f̃(l, r) = f(l, r)] ≥ p.
Intuitively, f̃ behaves like f with probability at least p. However, sometimes it makes mistakes and returns a completely wrong answer. Note that f̃ has a two-sided error: it can return 0 even if the interval contains a marked element, but more importantly, it can also return 1 even though the interval does not contain any marked element. We further assume that a call to f̃(l, r) takes time T(r − l + 1), where T is some nondecreasing function. Typically, we assume that T(m) = o(m), i.e. f̃ is strictly better than a linear search.
We now consider the problem of finding the first marked element in S, with probability at least, say, 1/2. A trivial algorithm is to perform a linear search: query f̃(1, 1), f̃(2, 2), and so on, until one of the calls returns 1. If f̃ had no errors, we could instead perform a binary search over S. This does not work well in the presence of errors, because the decisions made during the search are irreversible and errors accumulate quickly. Our observation is that if we modify the binary search to boost the success probability of certain calls to f̃, we can still solve the problem in time O(T(n)).
The idea is inspired by [abikkpssv2020]. For reasons that will become clear in the proof, we need to boost the success probability of some of the calls. We do so by repeating them several times and taking the majority: by this we mean that we take the most common answer, and return an error in the case of a tie.
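A classical sketch of this boosted binary search follows (names are ours; we use 2i + 1 repetitions at the i-th step so that the majority is always well defined, and we assume at least one marked element exists):

```python
# Binary search with a noisy interval oracle, boosting the i-th test by
# majority vote over 2*i + 1 repetitions (illustrative sketch).
import random

def find_first_marked(n, noisy_f):
    lo, hi = 1, n
    i = 0
    while lo < hi:
        i += 1
        mid = (lo + hi) // 2
        # majority vote over 2*i + 1 repetitions of the noisy test
        votes = sum(noisy_f(lo, mid) for _ in range(2 * i + 1))
        if 2 * votes > 2 * i + 1:
            hi = mid              # left half (probably) contains a marked element
        else:
            lo = mid + 1
    return lo

def make_oracle(marked, eps, rng):
    """Interval oracle that errs (two-sidedly) with probability eps."""
    def noisy_f(l, r):
        truth = any(l <= m <= r for m in marked)
        return (not truth) if rng.random() < eps else truth
    return noisy_f
```

Later iterations work on shorter segments, so the extra repetitions are cheap, while the geometrically decaying failure probabilities sum to a constant below 1/2.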
Proposition 1
Assume that T satisfies T(2m) ≥ cT(m) for some c > 1 and every m ≥ 1. Then, with probability more than 1/2, Algorithm 6 returns the position of the first marked element, or reports that none exists. The running time is O(T(n)).
The condition T(2m) ≥ cT(m) for some c > 1 and every m ≥ 1 is clearly satisfied by any function of the form T(m) = Θ(m^α) with α > 0.
The correctness of the algorithm, when there are no errors, is clear. We need to argue about the complexity and error probability.
At the i-th iteration of the loop, the algorithm considers a segment of length at most n/2^{i−1}. The complexity of f̃ on such a segment is at most T(n/2^{i−1}), but we repeat the call i times, so the total complexity of the iteration is O(i · T(n/2^{i−1})). The number of iterations is at most ⌈log₂ n⌉. Hence, using T(n/2^{i−1}) ≤ T(n)/c^{i−1}, the total complexity is Σ_{i=1}^{⌈log₂ n⌉} O(i · T(n/2^{i−1})) = O(T(n) · Σ_{i≥1} i/c^{i−1}) = O(T(n)), since the series Σ_{i≥1} i/c^{i−1} converges for c > 1.
Finally, we need to analyze the success probability of the algorithm: at the i-th iteration, the algorithm runs the test i times, and each run has a constant probability of failure ε. Hence, for the algorithm to fail at iteration i, at least half of the i runs must fail: this happens with probability at most 2^i · ε^{i/2} = ρ^i, where ρ = 2√ε. Hence the probability that the algorithm fails is bounded by Σ_{i≥1} ρ^i = ρ/(1 − ρ).
By taking ε small enough (say ε ≤ 1/49, so that ρ ≤ 2/7 and ρ/(1 − ρ) ≤ 2/5), which is always possible by repeating the calls to f̃ a constant number of times to boost the probability, we can ensure that the algorithm fails less than half of the time.
4.2 Application to Quantum Search
A particularly useful application of the previous section is quantum search, especially when f̃ is a Grover-like search. Indeed, Grover's search can decide in O(√m) time whether a marked element exists in an array of size m, with a constant probability of error.
More precisely, assume that we have a function g : {1, ..., n} → {0, 1} and the task is to find the minimal i such that g(i) = 1. If we let f̃(l, r) be the result of Grover's search for a marked element of g over {l, ..., r}, then f̃ has complexity T(m) = O(√m) and fails with constant probability. Hence we can apply Proposition 1 and obtain an algorithm to find the first marked element with complexity O(√n) and constant probability of error. In fact, note that we are not making use of Proposition 1 to its full strength, because f̃ really has a one-sided error: it will never return 1 if there is no marked element. We will make use of this observation later. We note that, contrary to some existing results (e.g. [ll2015, Theorem 10]), our algorithm always runs in time O(√n), and not only in expected time.
Proposition 2
There exists a quantum algorithm that finds the first marked element in an array of size n in time O(√n) and error probability less than 1/2. Note that O(√n) is a worst-case time bound, not an average one.
As observed above, we are not really using Proposition 1 to its full strength because Grover's search has a one-sided error. This suggests that there is room for improvement. Suppose that we now only have access to a two-sided probabilistic approximation g̃ of g. In other words, g̃ can now make mistakes: it can return 1 for an unmarked element or 0 for a marked element with some small probability. Formally, Pr[g̃(i) = g(i)] ≥ 1 − ε for every i, for some probability ε < 1/2. We cannot apply Grover's search directly in this case (it is known that Grover's search does not behave well in the presence of two-sided errors), but some variants have been developed that can handle bounded errors [PMR03]. Using this result, we can build a two-sided error function f̃ with a high probability of success and time complexity T(m) = O(√m). Applying Proposition 1 again, we obtain the following improvement:
Proposition 3
There exists a quantum algorithm FindFirst that finds the first marked element in an array of size n in time O(√n) and error probability less than 1/2, even when the oracle access to the array has a two-sided error. Note that O(√n) is a worst-case time bound, not an average one.
In practice, however, especially in quantum computing, g̃ rarely has a two-sided error. For instance, Grover's search has a one-sided error only. If we assume that g̃ has a one-sided error only, we can obtain a slightly better version of Proposition 3. Formally, we assume that g̃(i) = 0 whenever g(i) = 0, and that Pr[g̃(i) = 1] ≥ 1 − ε whenever g(i) = 1.
For space reasons, we defer the proof to Appendix 0.B.
Proposition 4 (Appendix 0.B)
There exists a quantum algorithm that finds the first marked element in an array of size n in expected time O(√p) and with error probability less than 1/2, where p is the position of the first marked element (or p = n if no element is marked). Furthermore, it works even when the oracle access to the array has a one-sided error. Additionally, it has a worst-case complexity of O(√n) in all cases.
5 The Fast Quantum Implementation of the Online Algorithm for the k-server Problem on Trees
We consider a special way of storing a rooted tree. Assume that for each vertex v we have access to a sequence path(v) = (u_1, ..., u_h). Here path(v) is the path from the root (the vertex u_1 = r) to the vertex v = u_h. Such a way of describing a tree is not uncommon, for example, when the tree represents a file system. A file path “c:/Users/MyUser/Documents/newdoc.txt” is exactly such a path in the file system tree. Here “c”, “Users”, “MyUser”, “Documents” are ancestors of “newdoc.txt”, “c” is the root and “newdoc.txt” is the node itself. Another example of a similar representation is the embedding of a binary tree in an array, where a node with index i has two children with indices 2i and 2i + 1, and the parent node has index ⌊i/2⌋. Here a path is encoded by the index i, which is really just a list of bits.
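For the binary-tree example, the string-path oracle is trivial to implement, since the bits of a node's (1-based) heap index spell out the root-to-node path; the function name below is ours:

```python
# In the 1-based heap layout of a binary tree, the binary representation of
# a node's index encodes the root-to-node path: after the leading 1 (the
# root), bit 0 means "left child", bit 1 means "right child".

def heap_path(i):
    """Return the list of ancestors of node i from the root down to i."""
    bits = bin(i)[2:]              # binary representation, e.g. 13 -> '1101'
    node = 1
    path = [1]
    for b in bits[1:]:             # skip the leading 1 (the root itself)
        node = 2 * node + int(b)   # 0 -> left child 2*node, 1 -> right child
        path.append(node)
    return path
```

Note that consecutive entries of the returned list are parent/child pairs, as required by the string-path representation.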
We assume that we have access to the following two oracles, each answering in O(1):
given a vertex v, a (classical) oracle that returns the length h_v of the string path path(v);
given a vertex v and an index j, a quantum oracle that returns the j-th vertex of the sequence path(v).
We can solve the k-server problem on trees using the same algorithm as in Section 3, with the following modifications:
The function LCA(u, v) is computed via LCP: its result is the last vertex of the longest common prefix of the two sequences path(u) and path(v).
MoveUp(v, d) is the vertex u_{h−d}, where (u_1, ..., u_h) is the sequence path(v) for the vertex v;
We can check in O(1) whether u is an ancestor of v: it is so if and only if h_u ≤ h_v and the h_u-th element of path(v) is u, where h_u is the length of path(u) and h_v is the length of path(v). Note that the invocations of LCA in Algorithms 3 and 5 are often of exactly this form. The exception is the distance function, which uses LCA as a subroutine (in particular in Sort). The complexity of Sort is thus driven by the complexity of LCA, i.e. of LCP in our case.
By doing so, we do not need any preprocessing. We now replace the LCP function by a quantum subroutine QLCP, presented in Section 5.1, and keep everything else as is. This subroutine runs in O(√n log k) time with error probability at most 1/(10k). This allows us to obtain the following result.
There is a quantum algorithm for processing a query in time O(k√n log k) and with a constant probability of error (at most 1/5). This algorithm does not require any preprocessing.
The complexity of Move is the complexity of LCA, that is, of QLCP in our implementation, plus the complexity of MoveUp. The former is O(√n log k) by Lemma 5 (invoked with error parameter ε = 1/(10k)), and the latter is O(1) by the oracle. Therefore, the total running time of Move is O(√n log k).
The complexity of Query is O(k) invocations of QLCP: the distance computations used by Sort and the calls to Move. Additionally, Sort performs O(k log k) comparisons on the precomputed distances. Hence, the total complexity is O(k√n log k + k log k) = O(k√n log k).
We invoke QLCP at most 2k times, each call having error probability at most 1/(10k), so the success probability is at least (1 − 1/(10k))^{2k} ≥ 4/5. Therefore, the error probability is at most 1/5. Note that we do not need any preprocessing.
5.1 Quantum Algorithm for Longest Common Prefix of Two Sequences
Let us consider the Longest Common Prefix (LCP) problem. Given two sequences α = (α_1, ..., α_n) and β = (β_1, ..., β_n), the problem is to find the index t such that α_j = β_j for all j < t and α_t ≠ β_t, where we set t = n + 1 if the two sequences are equal (the length of the common prefix is then t − 1).
Let us consider a function g such that g(j) = 1 iff α_j ≠ β_j. Then t is exactly the minimal argument such that g(t) = 1. The LCP problem is thus equivalent to the problem of finding the first marked element from Section 4.2. Therefore, the algorithm for LCP is the following.
Lemma 5
Algorithm 7 finds the LCP of two sequences of length n in time O(√n · log(1/ε)) and with probability of error at most ε.
The correctness of the algorithm follows from the definition of g. The complexity of FindFirst is O(√n) by Proposition 2. The total running time is O(√n · log(1/ε)) because of the O(log(1/ε)) repetitions used to reduce the error probability to ε.
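Classically, the reduction behind Algorithm 7 reads as follows (a sketch with our own names; linear_find_first is a stand-in for the quantum FindFirst subroutine):

```python
# LCP via "first marked element": mark position j when the sequences differ
# there; the LCP length equals the position of the first marked element.

def lcp_length(a, b, find_first):
    n = min(len(a), len(b))
    marked = lambda j: a[j] != b[j]      # g(j) = 1 iff a and b differ at j
    return find_first(n, marked)         # index of first difference, or n

def linear_find_first(n, marked):        # classical stand-in for FindFirst
    return next((j for j in range(n) if marked(j)), n)
```

Swapping linear_find_first for a sublinear first-marked-element search immediately yields a sublinear LCP algorithm, which is exactly what QLCP does.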
Appendix 0.A Implementation of Binary Lifting
The BLPreprocessing subroutine prepares an array up that stores the data for the MoveUp subroutine. For a vertex v and an integer j, the cell up[v][j] stores the vertex on the path from v to the root at distance 2^j from v. We construct the array using dynamic programming and obtain the following formulas: up[v][0] = Parent(v) and up[v][j] = up[up[v][j − 1]][j − 1].
Let us show that the formulas are correct. Let u = up[v][j − 1] and w = up[u][j − 1]. Then dist(v, u) = dist(u, w) = 2^{j − 1}, and therefore dist(v, w) = 2^{j − 1} + 2^{j − 1} = 2^j.
The algorithm is presented in Algorithm 8.
The MoveUp(v, d) subroutine returns the vertex on the path from v to the root at distance d from v. First, we find the maximal j such that 2^j ≤ d. Then, we move to the vertex up[v][j] and reduce d by 2^j. We repeat this action until d = 0. The total number of steps is at most ⌈log₂ d⌉ + 1 = O(log n).
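For completeness, the two subroutines can be sketched as follows (the array name up matches the description above; the parent-array input format is ours). This MoveUp consumes the bits of d from the lowest upward, which is equivalent to repeatedly taking the largest possible jump:

```python
# Binary lifting: O(n log n) preprocessing, O(log n) per MoveUp call.
# Nodes are 0..n-1; parent[v] is the parent of v (parent[root] is ignored).

def bl_preprocessing(parent, root):
    n = len(parent)
    levels = max(1, n.bit_length())        # jumps of size 1, 2, 4, ... cover n
    up = [[root] * levels for _ in range(n)]
    for v in range(n):
        up[v][0] = parent[v] if v != root else root
    for j in range(1, levels):
        for v in range(n):
            # a jump of 2**j is two consecutive jumps of 2**(j-1)
            up[v][j] = up[up[v][j - 1]][j - 1]
    return up

def move_up(up, v, d):
    j = 0
    while d:
        if d & 1:                          # take the jump of size 2**j
            v = up[v][j]
        d >>= 1
        j += 1
    return v
```

The dynamic-programming recurrence mirrors the correctness argument above: composing two jumps of 2^{j−1} yields a jump of 2^j.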
Appendix 0.B A quantum algorithm for finding the first marked vertex
In this section, let FindFirst denote the algorithm from Proposition 3, and let GroverTwoSided denote the variant of Grover's algorithm of [PMR03] that works with two-sided error oracles. Recall that we assume that g̃ has a one-sided error, i.e. it may return 0 instead of 1 with small probability, but not the other way around. Consider the following algorithm:
We now show that this algorithm satisfies the requirements of Proposition 4. To simplify the proof, we assume that the array always contains a marked element; this is without loss of generality because we can add an extra object at the end that is always marked. Furthermore, we assume that n is a power of 2; this is again without loss of generality because we can add dummy objects at the end, at the cost of at most doubling the array size.
Recall that g̃ has a one-sided error, and the same applies to GroverTwoSided in this case. Therefore, the test in the loop can only fail if there actually is a marked element in the interval; in other words, the loop never stops while the interval contains no marked element. Of course, the problem is that the loop can continue even though there is a marked element in the current interval. Let δ be the probability that this happens (i.e. that GroverTwoSided fails to find an existing marked element); we know that δ is a constant smaller than 1 by [PMR03, Theorem 10]. Let p be the position of the first marked element and let i_0 be the minimal integer such that p ≤ 2^{i_0}. Let 2^I be the length of the interval after the loop; it is a random variable and always a power of 2. By the above reasoning, it is always the case that 2^I ≥ p. Furthermore, for any i ≥ i_0, the probability that I = i is at most δ^{i − i_0}, since this requires GroverTwoSided to fail on the intervals of lengths 2^{i_0}, ..., 2^{i−1}. The call to FindFirst takes time O(√(2^I)) by Proposition 3. Hence the expected time complexity of this algorithm is
O(Σ_{i ≥ i_0} δ^{i − i_0} · √(2^i)) = O(√(2^{i_0}) · Σ_{j ≥ 0} (√2 · δ)^j) = O(√(2^{i_0})) = O(√p),
where we assume that δ ≤ 1/2 is small enough (so that √2 · δ < 1). This is always possible by repeating the calls to GroverTwoSided a constant number of times to reduce the failure probability δ. Finally, we note that the only way this algorithm can fail is if the (unique) call to FindFirst fails, and this only happens with constant probability.