1 Introduction
The exact string matching problem is to decide if a pattern string P of length m appears as a substring of a text string T of length n. In the classical models of computation, this problem can be solved in O(n + m) time [7]. Different quantum algorithms for this basic problem have been developed [10, 11, 13], resulting in different solutions, the best of which finds a match in Õ(√n + √m) time [10] with high probability. These assume the pattern and text are stored in quantum registers, thus requiring Ω(n + m) qubits to function. Moreover, these approaches implicitly assume that strings of the size of the pattern fit into a (quantum) memory word; an assumption that, if dropped, would introduce a linear dependence on m in the time complexity. In the classical models of computation, an analogy for the first storage assumption is to assume the text has been preprocessed for subsequent queries. For example, one can build a Burrows-Wheeler transform based index structure for the text in O(n) time [4], assuming σ ≤ n, where σ is the alphabet size. Then, one can query the pattern from the index in O(m) time [4, Theorem 6.2]. In this light, quantum models can offer only limited benefit over the classical models for exact string matching.
Motivated by this difficulty in improving linear-time solvable problems using quantum approaches, let us consider problems known to be solved in quadratic time. The approximate string matching problem is one such problem: decide if a pattern string P is within edit distance k from a substring of a text string T, where edit distance is the number of single-symbol insertions, deletions, and substitutions needed to convert one string into another. This problem can be solved using bit-parallelism in O(⌈m/w⌉ n) time [8], under the Random Access Memory (RAM) model with computer word size w. A reasonable assumption is that w = Θ(log n), so that this model reflects the capacity of classical computers. Thus, when m = Ω(w), this bit-parallel algorithm for approximate string matching takes Θ(nm/log n) time for all k. It is believed that this quadratic bound cannot be significantly improved, as there is a matching conditional lower bound saying that if approximate pattern matching could be solved in O((nm)^{1−ε}) time for some ε > 0, then the Orthogonal Vectors Hypothesis (OVH) and therefore the Strong Exponential Time Hypothesis (SETH) would not hold
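To make the bit-parallel technique concrete, the following is a minimal classical sketch in the style of Myers' algorithm, reporting end positions of substrings of T within edit distance k of P. The function name and bit-layout choices are ours, and we assume m fits into one machine word; this is an illustration of the technique, not the exact formulation in [8].

```python
def approx_search(P, T, k):
    """Myers-style bit-parallel search: returns the 0-based end positions j such
    that some substring of T ending at j is within edit distance k of P."""
    m = len(P)
    mask = (1 << m) - 1
    Peq = {}                          # Peq[c] has bit i set iff P[i] == c
    for i, c in enumerate(P):
        Peq[c] = Peq.get(c, 0) | (1 << i)
    Pv, Mv, score = mask, 0, m        # vertical deltas, score of bottom cell
    occ = []
    for j, c in enumerate(T):
        Eq = Peq.get(c, 0)
        Xv = Eq | Mv
        Xh = (((Eq & Pv) + Pv) ^ Pv) | Eq
        Ph = Mv | (~(Xh | Pv) & mask)
        Mh = Pv & Xh
        if Ph & (1 << (m - 1)):       # bottom-row score goes up by one...
            score += 1
        elif Mh & (1 << (m - 1)):     # ...or down by one
            score -= 1
        Ph = (Ph << 1) & mask
        Mh = (Mh << 1) & mask
        Pv = Mh | (~(Xv | Ph) & mask)
        Mv = Ph & Xv
        if score <= k:                # edit distance of best match ending at j
            occ.append(j)
    return occ
```

Each text character costs O(1) word operations, giving O(⌈m/w⌉ n) time in general.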
[2]. As these hypotheses concern classical models of computation, it is natural to ask if the quadratic barrier could be broken using quantum computation. In this quest for breaking the quadratic barrier, we study another problem with a bit-parallel solution and a conditional lower bound. Consider exact pattern matching on a graph, that is, consider deciding if a pattern string P equals the spelling of a labeled path in a graph G = (V, E), where V is the set of nodes and E is the set of edges. Here we assume the nodes of the graph are labeled by ℓ : V → Σ, and a path v_1, v_2, …, v_m, with (v_i, v_{i+1}) ∈ E for 1 ≤ i < m, spells string ℓ(v_1)ℓ(v_2)⋯ℓ(v_m). There is an OVH lower bound conditionally refuting an O(|E|^{1−ε} m) or O(|E| m^{1−ε}) time solution [6]. This conditional lower bound holds even if graph G is a level DAG [6]: for every two nodes u and v, it holds that every path from u to v has the same length. On DAGs, this string matching on labeled graphs (SMLG) problem can be solved in O(|V| + ⌈m/w⌉|E|) time [12], so the status of this problem is identical to that of approximate pattern matching on strings. However, the simplicity of the bit-parallel solution for SMLG on level DAGs enables a connection to quantum computation. We consider a specific model of quantum computation, the Quantum Random Access Memory (QuRAM) model, in which we have access to "quantum arrays", and we assume that integer values like m or |V| fit into a (quantum) memory word. Under this model, we turn the bit-parallel solution into a quantum algorithm that solves SMLG on level DAGs with high probability in O(|V| + |E| + m) time, breaking through the classical quadratic conditional lower bound. As far as we know, this is the first time the quadratic barrier has been broken using quantum computation for a problem that admits a classical conditional lower bound based on SETH and OVH. Moreover, the bit-parallel strategy allows us to use a limited number of qubits; that is, our qubit space complexity on level DAGs is O(L + m), where L is the number of nodes of the level with the highest number of nodes.
We remark that L is always less than or equal to the width of the graph, that is, the minimum number of not necessarily disjoint paths needed to cover all the nodes.
As mentioned above, in some previous works [10, 13, 11] (and references in [13]), algorithms have been proposed to solve string matching in plain text in the QuRAM model, under the assumption that the entire text, or at least text substrings of the same length as the pattern, fit into a memory word. We find this assumption too restrictive, as even the classical RAM model does not adopt it, since in such a model of computation many operations would become trivial. Instead, our algorithm works without the need for such an assumption, and has the same time complexity as the previous solutions if they are all compared under our QuRAM model of computation. This is because, in our QuRAM model, an additional term linear in m would appear in the time complexities of these previous algorithms.
The paper is structured as follows. We revisit exact pattern matching and derive a simple quantum algorithm for it, in order to introduce the quantum machinery. Then we give a brute-force quantum algorithm for SMLG, which we later improve on level DAGs. This improvement is based on extending the Shift-And algorithm [3], whose quantum version we first study for exact pattern matching, and finally extend to level DAGs. We conclude with a discussion of connections to other related problems.
In what follows, we assume the reader is familiar with the basic notions in quantum computing as covered in textbooks [9].
2 String Matching in Plain Text
A quantum computer with access to QuRAM can solve the problem of finding an exact match for a pattern string P in a text string T in O(m + √n) time, with high probability. We explain a simple solution to this problem. We assume to have T and P stored in quantum registers, a one-qubit register b initialized to |1⟩, and an index register i initialized to |0⟩. We prepare quantum register i in an equally balanced superposition spanning all the text positions, that is, (1/√n) Σ_{j=0}^{n−1} |j⟩ (assuming n to be a power of 2 for simplicity). Each individual state |j⟩ in the superposition represents a computation starting at position j in the text. In each of these computations, we scan P and try to match each character p_s with t_{j+s−1}, storing the intermediate results of such comparisons in register b. More precisely, at iteration s, 1 ≤ s ≤ m, we compute a logic AND between the outcome of the comparison t_{j+s−1} = p_s and the value in b. This procedure is correct because at the first step s = 1 we simply compute t_j = p_1 and store the result in b. At iteration s > 1, we assume by induction that register b stores a 1 if t_j ⋯ t_{j+s−2} = p_1 ⋯ p_{s−1}, thus computing the AND with t_{j+s−1} = p_s tells us if we are extending a match or not. At the end, we can run Grover's search algorithm to retrieve the superposition states where b = |1⟩, and then measure register i to locate the position of a match. We illustrate this procedure in Algorithm 1.
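The behaviour of this procedure can be emulated classically by keeping one boolean per term of the superposition. The sketch below (names ours) mirrors the procedure of Algorithm 1, with b[j] playing the role of register b in term |j⟩; classically we simply list the surviving starting positions, where the quantum algorithm would instead run Grover's search and measure the index register.

```python
def exact_match_starts(T, P):
    """Classical emulation of the superposition: b[j] is the substate of register b
    in the term of the superposition where the index register holds j."""
    n, m = len(T), len(P)
    b = [1] * n                    # register b initialized to |1> in every term
    for s in range(m):             # iteration s compares p_{s+1} in every term at once
        for j in range(n):
            b[j] &= 1 if j + s < n and T[j + s] == P[s] else 0
    # Grover's search would amplify the terms with b = 1; here we just list them
    return [j for j in range(n) if b[j] == 1]
```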
We demonstrate how Algorithm 1 works by simulating it on an example text T and pattern string P. In the following, we depict one term of the superposition per row, omitting the amplitudes and the plus signs, as the amplitudes are all the same and the plus signs do not add important information.
3 String Matching in Labeled Graphs
3.1 Quantum Brute-force Algorithm for SMLG
In SMLG we are given a pattern string P with characters in alphabet Σ and a node-labeled graph G = (V, E), with labeling function ℓ : V → Σ. We are asked to find a path (or, actually, a walk) v_1, …, v_m in G such that ℓ(v_1) ⋅ ℓ(v_2) ⋯ ℓ(v_m) = P, where ⋅ denotes string concatenation.
Generalizing the idea we presented for plain text, we provide a brute-force quantum algorithm solving SMLG running in O(m + √(|V|^m)) time, or just O(√(|V|^m)) if P can be stored in a single quantum register. The idea is to list all possible paths of length m in the graph and then mark those that are actual matches for P. Consider m qubit registers a_1, …, a_m, each consisting of ⌈log |V|⌉ qubits, plus one additional qubit b. We set b to |1⟩. We set every a_s in an equally balanced superposition, so that every state in such a superposition represents a different node in V. Thus, we have all the nodes in V listed m times. For s = 1, …, m − 1, we access the nodes represented by a_s and a_{s+1}, and we check that an edge exists between these two nodes and that ℓ(a_s) = p_s, updating the value of b accordingly as b ← b ∧ ((a_s, a_{s+1}) ∈ E) ∧ (L[a_s] = p_s), where L is an array in our QuRAM that implements the labeling function ℓ. Then, for s = m, we handle the last comparison, updating b as b ← b ∧ (L[a_m] = p_m). Notice that every time we perform the check we are accessing the QuRAM using a_s and a_{s+1} as indexes. Since these two registers are in a superposition, we are actually checking every pair of nodes at the same time. Thus, what the whole algorithm is doing is accessing every possible tuple of nodes (v_1, …, v_m), checking that it is an actual walk in the graph, and checking that the labels match the pattern.
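Classically, the brute force amounts to checking every tuple of m nodes. The following sketch (function and argument names ours) makes that explicit, with the loop over tuples standing in for the superposition over the registers a_1, …, a_m.

```python
from itertools import product

def smlg_bruteforce(labels, edges, P):
    """labels: node -> character; edges: set of directed pairs (u, v); P: pattern.
    Returns True iff some walk v_1 ... v_m spells P."""
    m = len(P)
    for tup in product(labels, repeat=m):         # one iteration per superposition term
        b = all((tup[s], tup[s + 1]) in edges     # edge check, via the "QuRAM" arrays
                and labels[tup[s]] == P[s]        # label check for position s
                for s in range(m - 1))
        if b and labels[tup[m - 1]] == P[m - 1]:  # last label comparison
            return True
    return False
```

The |V|^m iterations of this classical loop are exactly the terms the quantum algorithm processes simultaneously.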
We now show an example on a small graph G and pattern string P.
The superposition is indicated with the tensor product notation, which would otherwise take too much space, but it indeed represents every possible tuple of nodes. Assume that we can check in constant time whether an edge between two nodes exists. For instance, we could assume to have the adjacency matrix of G stored in our QuRAM, which we can access to check the existence of an edge. For nodes a_s and a_{s+1}, our algorithm computes the edge check and stores the result in b, an operation that we can perform in O(m) time overall by scanning the registers a_1, …, a_m from left to right. In this way, the states of the superposition representing an actual path in the graph are marked with b = |1⟩. Then, we check whether the pattern matches these paths, that is, we compute the label comparisons L[a_s] = p_s, and we store the result in b. This operation also takes O(m) time for the same reason as above. At this point, the states of the superposition marked with b = |1⟩ represent a path in G with labels matching P. Thus, we can retrieve these states with Grover's algorithm (or, to better put it, we can amplify the amplitudes of the states with b = |1⟩, and then perform a measurement). The overall time complexity is then O(m + √(|V|^m)). However, the space complexity is O(|V|²) qubits, because we are assuming to have access to the entire adjacency matrix of the graph.

3.2 The Classical Shift-And Algorithm
We first introduce the classic Shift-And algorithm for matching a pattern against a text, and generalize it to work on graphs. Then, we show how the bit-vector data structure of that algorithm can be represented as a superposition over a logarithmic number of qubits. This approach allows us to achieve better performance than the brute-force algorithm for special types of graphs.
In the Shift-And algorithm, we use a bit vector v of the same length m as pattern P to represent which of its prefixes match the text during the computation. Assuming an integer alphabet Σ = {1, …, σ}, we also initialize a bidimensional array M of size σ × m so that M[c][i] = 1 if and only if p_i = c, and M[c][i] = 0 otherwise. The algorithm starts by initializing vector v to zero and array M as specified above. Then, we scan the whole text performing the next four operations for each t_j, j = 1, …, n:

1. v ← v | 0^{m−1}1;

2. v ← v & M[t_j];

3. if v[m] = 1, return "match found";

4. v ← v ≪ 1.
Operation 1 sets the least significant bit of v to 1, which is needed to test t_j against p_1. Operation 2 computes a bitwise AND between v and the column of M corresponding to character t_j. Remember that M[t_j][i] = 1 means p_i = t_j, thus this operation leaves each bit v[i] set to 1 if and only if it was already set to 1 before this step and the i-th character of the pattern matches the current character of the text. At this point, if bit v[m] is set to 1 we have found a match for P, and Operation 3 will return "match found". For the other positions, if bit v[i] is set to 1, then we know that prefix p_1 ⋯ p_i matches t_{j−i+1} ⋯ t_j, and Operation 4 shifts the bits in v by one position, so that in the next iteration we will check whether p_1 ⋯ p_{i+1} matches t_{j−i+1} ⋯ t_{j+1}. We illustrate a simulation of the algorithm in Figure 1.
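The four operations above can be implemented compactly as follows (a sketch with names of our choosing; bit i of v, 0-based, corresponds to prefix p_1 ⋯ p_{i+1}):

```python
def shift_and(T, P):
    """Classic Shift-And: returns the 0-based end positions of occurrences of P in T."""
    m = len(P)
    M = {}                              # M[c] has bit i set iff P[i] == c
    for i, c in enumerate(P):
        M[c] = M.get(c, 0) | (1 << i)
    v, occ = 0, []
    for j, c in enumerate(T):
        v |= 1                          # Operation 1: a match may start at position j
        v &= M.get(c, 0)                # Operation 2: keep prefixes consistent with c
        if v & (1 << (m - 1)):          # Operation 3: the full pattern matched
            occ.append(j)
        v = (v << 1) & ((1 << m) - 1)   # Operation 4: extend surviving prefixes
    return occ
```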
In a labeled DAG G = (V, E), each node u has a single-character label ℓ(u). We generalize the Shift-And algorithm to labeled DAGs by computing a different bit vector v_u for each node u, initializing them to zero. Consider a DFS visit of DAG G that processes a node only after all its in-neighbors (that is, a topological order). When visiting node u, each bit vector v_w of an in-neighbor w of u represents a set of prefixes of P matching a path in the graph ending in w. Thus, we merge all of this information together by taking the bitwise OR of the bit vectors of all the in-neighbors of u, that is, we replace Operation 1 with v_u ← v_{w_1} | ⋯ | v_{w_d} | 0^{m−1}1, where w_1, …, w_d are the in-neighbors of u. Operations 2, 3, and 4 are performed as before. An example of the state of the data structures at a generic iteration is shown in Figure 2, and the body of the iteration now is:

1. v_u ← v_{w_1} | ⋯ | v_{w_d} | 0^{m−1}1;

2. v_u ← v_u & M[ℓ(u)];

3. if v_u[m] = 1, return "match found";

4. v_u ← v_u ≪ 1.
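The DAG version can be sketched as follows (names ours), processing nodes in topological order so that the in-neighbours' vectors are ready when read:

```python
def shift_and_dag(labels, edges, P, topo):
    """Shift-And on a labeled DAG. labels: node -> char; edges: list of (u, v);
    topo: all nodes in topological order. Returns True iff some path spells P."""
    m = len(P)
    M = {}                                   # M[c] has bit i set iff P[i] == c
    for i, c in enumerate(P):
        M[c] = M.get(c, 0) | (1 << i)
    preds = {u: [] for u in topo}
    for u, v in edges:
        preds[v].append(u)
    vec = {}                                 # already-shifted vector of each node
    for u in topo:
        x = 1                                # Operation 1: OR of in-neighbours, plus LSB
        for w in preds[u]:
            x |= vec[w]
        x &= M.get(labels[u], 0)             # Operation 2
        if x & (1 << (m - 1)):               # Operation 3
            return True
        vec[u] = (x << 1) & ((1 << m) - 1)   # Operation 4
    return False
```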
3.3 Quantum Bit-Parallel Algorithm for Level DAGs
We make the classic techniques work in a quantum setting for a special class of DAGs, which we call level DAGs. A level DAG is a DAG in which, for every two nodes u and v, every path from u to v has the same length. Given any node s with no in-neighbours, we call level (or depth) of a node u the length of any path from s to u; the level (or depth) of the DAG is then the maximum level of any of its nodes. The DAG from Figure 2 is not a level DAG due to the horizontal edges, but without those it would be a level DAG, as in Figure 3.
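Whether a DAG is a level DAG can be checked in linear time: assign to each node the length of a longest path from a source, and verify that every edge advances the level by exactly one, which is equivalent to the levels being well defined. A sketch (names ours):

```python
from collections import deque

def is_level_dag(nodes, edges):
    """True iff every edge of the DAG goes from some level d to level d + 1,
    i.e. node levels (path lengths from source nodes) are well defined."""
    succs = {u: [] for u in nodes}
    indeg = {u: 0 for u in nodes}
    for u, v in edges:
        succs[u].append(v)
        indeg[v] += 1
    q = deque(u for u in nodes if indeg[u] == 0)    # sources sit at level 0
    level = {u: 0 for u in nodes}
    seen = 0
    while q:                                        # Kahn's topological order
        u = q.popleft()
        seen += 1
        for v in succs[u]:
            level[v] = max(level[v], level[u] + 1)  # longest-path level
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    if seen != len(nodes):
        raise ValueError("input graph is not a DAG")
    return all(level[v] == level[u] + 1 for u, v in edges)
```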
Our approach aims to represent each bit vector with a single qubit set up in a proper superposition, and to realize the bitwise operations as parallel operations across such a superposition. To this end, we use a quantum register r of ⌈log m⌉ qubits to generate the superposition, one quantum register q_u consisting of one qubit for each node u, ancilla qubits a_1 and a_2, and an additional one-qubit register b. Moreover, we assume to have access to a quantum random access memory (QuRAM).
Assume all the quantum registers to be initialized to |0⟩. The algorithm starts by setting quantum register r in a balanced superposition, by applying the Hadamard gate on each one of its qubits. In the following, we use the notation q_u(i) to refer to the substate of register q_u that appears in the i-th term of the superposition summation; that is, q_u(i) is the value of q_u in the term where r holds value i. Figure 4 shows a schematic representation of the resulting superposition.
The rest of the algorithm maintains almost the same overall structure, with the exception of one necessary adaptation that we will discuss later. The main change is that the four operations listed above have to be converted into operations on the quantum registers, to make them work across the superposition. This translation from bit-parallelism to superposition parallelism is the core of our technique, and we now describe how to apply it to each operation.
Operation 1 can be broken down into two simpler operations: computing the bitwise OR and adding 0^{m−1}1. If we denote by q_u the qubit of the current node u, adding 0^{m−1}1 means setting q_u(0) to 1, because q_u(0) is always guaranteed to be 0 for any node that we have not visited yet. This operation can be implemented by applying a NOT gate on q_u, using r as control register and value 0 as the guard, an operation which we write as q_u ← NOT_{r=0}(q_u). This has the effect of flipping q_u(0), thus setting it to 1, while leaving q_u(i), i ≠ 0, unchanged. The bitwise OR can be implemented as a series of updates q_u ← q_u ∨ q_w, one for each in-neighbor w of u, because in the superposition this has the effect of a termwise OR.
Operation 2 can similarly be implemented as q_u ← q_u ∧ M[ℓ(u)][r], a termwise AND that uses the QuRAM to simultaneously access all the entries M[ℓ(u)][i] and compute q_u(i) ← q_u(i) ∧ M[ℓ(u)][i], for every i.
Operation 3 is now converted into updating the last substate of ancilla qubit b as b(m − 1) ← b(m − 1) ∨ q_u(m − 1), leaving b(i), i ≠ m − 1, unchanged. We implement this operation by performing a_1 ← NOT_{r=m−1}(a_1), then computing a_2 ← a_1 ∧ q_u, and finally b ← b ∨ a_2. We then reset ancilla qubits a_1 and a_2 to |0⟩. The need for this procedure will become clear later, and whether an occurrence was found or not will be reported by the Grover's search phase at the end of the algorithm.
Operation 4 is the crucial one. Our goal is to cyclically shift the values of the substates across the superposition, meaning that this shift operation should perform the update q_u(i + 1) ← q_u(i) on the values of q_u, where i and i + 1 are intended modulo m. We observe that, if we consider register r and any other register q_u, this shift is equivalent to relabeling the terms of the superposition. This, in turn, can be implemented as the update r ← r + 1 (mod m), since the state of r is a balanced superposition of every value between 0 and m − 1, thus after the increment r simply stores a balanced superposition of the same values, with the substates of q_u shifted by one position.
As mentioned above, we have to change the overall structure of the algorithm to handle the fact that each time we perform the new shift operation, we are actually shifting all the vectors, not just the current one. This poses a problem when we process a node at a certain level and another node of the same level has already been processed. To overcome this situation, we visit the graph one level at a time, and we update all the vectors of that level before performing the shift. Algorithm 2 shows the entire procedure.
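The level-wise procedure can be emulated classically by storing, for each node, the m substates of its qubit, and by implementing the shift as a single global rotation offset incremented once per level. The index bookkeeping below is our own reconstruction of this idea (names ours): physical slot p of a node's substate list holds logical bit (p + off) mod m.

```python
def level_dag_match(labels, edges, P, levels):
    """Classical emulation of the quantum bit-parallel algorithm on a level DAG.
    levels: list of lists of nodes, level d at index d. q[u][p] is the substate of
    node u's qubit in slot p; off emulates incrementing register r."""
    m = len(P)
    preds = {u: [] for lv in levels for u in lv}
    for u, v in edges:
        preds[v].append(u)
    q = {u: [0] * m for lv in levels for u in lv}
    found = False
    off = 0
    for lv in levels:                         # one level at a time
        for u in lv:
            for p in range(m):                # Operation 1: termwise OR of in-neighbours
                q[u][p] = max((q[w][p] for w in preds[u]), default=0)
            q[u][-off % m] = 1                # ... plus setting logical bit 0
            for p in range(m):                # Operation 2: AND with column M[label(u)]
                if P[(p + off) % m] != labels[u]:
                    q[u][p] = 0
            if q[u][(m - 1 - off) % m] == 1:  # Operation 3: logical bit m-1 marks b
                found = True
        off = (off + 1) % m                   # Operation 4: one shift for ALL vectors
    return found                              # Grover's search reads the mark off b
```

Note how the shift costs O(1) per level, regardless of how many nodes the level has.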
As the last step of the algorithm, we run Grover's search using as oracle the identity function returning the value of register b. Thus, if there is at least one i with b(i) = 1, the measurement at the end of the search will return 1 with high probability. Otherwise, if b(i) = 0 holds for every i, then the measurement of Grover's search will yield 0.
To prove the correctness of Algorithm 2, we formalise the key properties in the following lemmas. We start by ensuring that the shift operation provides the desired result.
Lemma 1.
Operation r ← r + 1 (mod m) transforms state
(1/√m) Σ_{i=0}^{m−1} |i⟩ |q(i)⟩
into state
(1/√m) Σ_{i=0}^{m−1} |i⟩ |q((i − 1) mod m)⟩
for a register q of any number of qubits.
Proof.
We observe that the state of r is a balanced superposition of every value between 0 and m − 1, that is, r stores (1/√m) Σ_{i=0}^{m−1} |i⟩, thus the joint state of r and q is (1/√m) Σ_{i=0}^{m−1} |i⟩ |q(i)⟩. When we perform r ← r + 1 (mod m), the effect is that of performing |i⟩ → |(i + 1) mod m⟩ for every i. After such an operation, the term in which r holds (i + 1) mod m still holds q(i), for every i, thus the state has been transformed into (1/√m) Σ_{i=0}^{m−1} |i⟩ |q((i − 1) mod m)⟩. ∎
Then, we need to guarantee two invariant conditions that have to hold after executing the inner loop (line 2).
Lemma 2.
After running the inner loop in Algorithm 2 i times (and before performing r ← r + 1), for every node u processed so far, the substate of q_u corresponding to prefix p_1 ⋯ p_j is set to 1 if and only if there exists a path in G ending at u and matching p_1 ⋯ p_j.
Proof.
We proceed by induction on the number of times that we run the inner loop.
Base case, i = 0. In this case, the vectors processed so far are those of the nodes with indegree zero, initialized by the first loop, while the inner loop has never run. For each such node u, the initialization loop initially sets the substate corresponding to p_1 to 1 (Operation 1), and then resets it to 0 if and only if ℓ(u) does not match p_1 (Operation 2). Thus, the only substates set to 1 are the ones matching p_1, while all the others are correctly left to 0.
Inductive case, i > 0. After running the inner loop i times, we have to perform r ← r + 1 before running the loop for the (i + 1)-th time. Assuming the inductive hypothesis and using Lemma 1, operation r ← r + 1 shifts every vector, so that for every already processed node w the substate of q_w corresponding to prefix p_1 ⋯ p_j is set to 1 if and only if there is a match for p_1 ⋯ p_{j−1} in G ending at w. Then we run the loop for the (i + 1)-th time, processing some node u. Notice that in any previous iteration of the loop it could never be the case that q_u(0) = 1, since we update q_u only when processing u, thus q_u(0) is currently set to 0. At this point, we set q_u(0) to 1 with Operation 1, together with the bitwise OR of the vectors of the in-neighbors of u, and then we perform the bitwise AND with M[ℓ(u)], after which the substate of q_u corresponding to prefix p_1 ⋯ p_j is 1 if and only if there is a match for p_1 ⋯ p_{j−1} in G ending at some in-neighbor w of u and ℓ(u) matches p_j. This means that this substate is 1 if and only if a match for p_1 ⋯ p_{j−1} ending at w can be extended to a match for p_1 ⋯ p_j ending at u using edge (w, u). ∎
Lemma 3.
After running the inner loop in Algorithm 2 i times (and before performing r ← r + 1), there exists at least one j such that b(j) = 1 if and only if there exists at least one node u processed so far such that P has a match in G ending at u.
Proof.
Base case, i = 0. In this case, the nodes processed so far are those with indegree zero, while the inner loop has never run. Since we are visiting only single-node paths and we are assuming that pattern P has length at least two, there can be no match for P ending at these nodes. Correctly, every b(j) = 0.
Inductive case, i > 0. Right before running the inner loop for the (i + 1)-th time, every processed node w is such that the substates of q_w are set as described in Lemma 2. This follows from the same reasoning as in the proof of Lemma 2 and from the inductive hypothesis. While running the inner loop for the (i + 1)-th time, we observe that during an iteration for a single node u the statement of Lemma 2 restricted to u already holds after the bitwise AND. This means that q_u(m − 1) = 1 if and only if P has a match ending at u. At this point we set up register a_1 with operation a_1 ← NOT_{r=m−1}(a_1), and then we run a_2 ← a_1 ∧ q_u. This way, a_2(m − 1) is set to q_u(m − 1) since a_1(m − 1) = 1, while a_2(j) = 0 for j ≠ m − 1 since a_1(j) = 0. Operation b ← b ∨ a_2 leaves b(j), j ≠ m − 1, unchanged, while b(m − 1) is set to 1 either if it was already 1 or if q_u(m − 1) = 1. Then, a_1 and a_2 are reset to |0⟩. If b(m − 1) was already 1, then the inductive hypothesis guarantees that at some earlier iteration a path matching P was already found. If we turned b(m − 1) to 1 in this iteration, we know that this happened if and only if q_u(m − 1) = 1, which means that there is a path ending at u that matches p_1 ⋯ p_m (that is, the full pattern P). Either way, we are guaranteed that b(m − 1) = 1 if and only if there is a path matching P that ends at some node processed so far. ∎
The correctness of the algorithm follows from the previous lemmas combined with a few additional observations.
Theorem 1.
Algorithm 2 solves the SMLG problem on level DAGs, returning the correct answer with high probability.
Proof.
After running the inner loop of Algorithm 2 for the last time, we also exit the outer loop, having visited all the nodes. Thanks to Lemma 3, we know that there is at least one substate of register b set to 1 if and only if there is a node u such that P has a match in G ending at u; moreover, at this point the condition ranges over any node in the graph. Thus, if P has no match in G, every substate of b is set to 0, the measurement will output a 0, and the algorithm will return "no match". If P has at least one match in G, there is at least one substate of b set to 1, which will be amplified by Grover's search. In turn, the algorithm will measure a 1 and return "match found" with high probability. ∎
Finally, the time complexity of our algorithm is linear in the size of the graph.
Theorem 2.
The time complexity of Algorithm 2 is O(|V| + |E| + m) in the QuRAM model, and the space complexity is O(|V| + m) qubits.
Proof.
The operation that we perform to update q_u in the inner loop processes every edge once, and for each edge we perform a constant number of qubit operations. The nested loops scan every node once and, for each node, we perform a constant number of quantum operations. We assume that values as large as m and |V| fit in a quantum memory word (as would be the case in a classical setting). Since we are scanning each node and edge once, and each time we do a constant number of operations, the time complexity is O(|V| + |E| + m). Matrix M occupies O(mσ) qubits, where σ is the size of the alphabet, and we use O(|V|) qubits for the node registers. Since we assume a constant alphabet, the space complexity is O(|V| + m). ∎
We can improve Algorithm 2 to run in the same time complexity but using only O(L + m) qubits, where L is the number of nodes of the level with the highest number of nodes. Notice that L ≤ W always, where W, the width of the DAG, is the minimum number of not necessarily disjoint paths needed to cover all the nodes.
Theorem 3.
The problem of SMLG on a level DAG G = (V, E) and pattern P can be solved in O(|V| + |E| + m) time using O(L + m) qubits.
Proof.
We can modify Algorithm 2 to satisfy the statement of the theorem. Instead of using a qubit for each node u, we use 2L qubits, q_1, …, q_L and q'_1, …, q'_L. This suffices because, by definition of L, each level contains at most L nodes. We also use classical arrays A and A', both of size |V|, which map every node to the qubit representing its bit vector (we use two arrays for symmetry with the qubits but, since they will store the same values, one would be enough). We were already implicitly using a similar data structure in the unmodified Algorithm 2, but in that case we could assume to initialize it once at the beginning and never change it later. Here, we need to handle these arrays explicitly.
At each iteration of the first loop (line 2), we visit each node u of the first level and we set A[u] to a distinct index j, which means that qubit q_j represents the bit vector for node u. We then perform the same operations of this loop as before, replacing q_u with q_{A[u]}.
At each iteration of the inner loop (line 2), we set A'[u] to a distinct index j of the current level, and then we perform the same operations as before, replacing q_u with q'_{A'[u]} and each in-neighbor register q_w with q_{A[w]}. After the execution of this entire loop, right before or right after r ← r + 1, we switch the roles of q_1, …, q_L and q'_1, …, q'_L by setting A[u] ← A'[u] for each node u of the current level.
Notice that the positions of A and A' that we access at each iteration are those of the current node, initialized at the beginning of the same iteration, and those of the in-neighbors of said node, which have been initialized at some previous iteration. Thus, the positions of A and A' are always correctly initialized when we access them. Moreover, the switch of roles preserves the following invariant: after performing this switch d times, thus having run the inner loop on the first d levels, qubits q_1, …, q_L store the bit vectors of the nodes of level d. This holds true after the initialization loop, and it can be seen to hold by an induction argument on d similar to the ones in the previous lemmas. We conclude that, thanks to this invariant, the algorithm is correct.
The new update operation takes at most O(L) time for each level, thus O(|V|) time in total, not affecting the overall time complexity. Instead, the number of qubits that we are using is 2L plus a constant number of ancilla qubits, plus registers r and b; thus, we use O(L + m) qubits in total. The O(m) term is needed because data structure M requires O(mσ) space to be stored, where σ is the size of the alphabet, which we assume to be constant. ∎
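In a classical emulation where each qubit is represented by its m substates and the shift by a global rotation offset, the qubit-reuse scheme of this proof can be sketched as follows: two banks of L slots each, with the mapping arrays A and A' realized as Python dicts (all names ours).

```python
def level_dag_match_2L(labels, edges, P, levels):
    """Level-DAG matching with two reusable banks of L slots: bank_prev holds the
    (shifted) vectors of the previous level, bank_cur those of the current one."""
    m = len(P)
    L = max(len(lv) for lv in levels)
    preds = {u: [] for lv in levels for u in lv}
    for u, v in edges:
        preds[v].append(u)
    bank_prev = [[0] * m for _ in range(L)]
    bank_cur = [[0] * m for _ in range(L)]
    A = {}                                     # node -> slot of the previous level
    found = False
    off = 0                                    # global rotation offset (register r)
    for lv in levels:
        A2 = {u: j for j, u in enumerate(lv)}  # node -> slot of the current level
        for u in lv:
            q = bank_cur[A2[u]]
            for p in range(m):                 # OR of in-neighbours, read from bank_prev
                q[p] = max((bank_prev[A[w]][p] for w in preds[u]), default=0)
            q[-off % m] = 1                    # set logical bit 0
            for p in range(m):                 # AND with the match-matrix column
                if P[(p + off) % m] != labels[u]:
                    q[p] = 0
            if q[(m - 1 - off) % m] == 1:      # logical bit m-1: full match found
                found = True
        off = (off + 1) % m                    # global shift, once per level
        bank_prev, bank_cur = bank_cur, bank_prev  # switch the roles of the banks
        A = A2
    return found
```

Only 2L slots are ever live at once, mirroring the O(L) qubit bound of the theorem.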
4 Discussion
Although we studied level DAGs here just to give an example of breaking the quadratic barrier, such DAGs have connections to earlier literature. Namely, degenerate strings [1] are a special case of level DAGs, so the algorithms developed here apply directly to them. We leave it as an open question whether similar solutions can be derived for arbitrary DAGs, or e.g. for elastic degenerate strings [5]. Or, more generally, can bit-parallel algorithms with more complex dependencies, as in the case of approximate pattern matching, be turned into quantum algorithms?
Acknowledgements
We would like to thank Sabrina Maniscalco for giving useful feedback on technical details of the quantum computing framework.