Fast Prefix Search in Little Space, with Applications

04/12/2018 ∙ by Djamal Belazzougui, et al.

It has been shown in the indexing literature that there is an essential difference between prefix/range searches on the one hand, and predecessor/rank searches on the other hand, in that the former provably allows faster query resolution. Traditionally, prefix search is solved by data structures that are also dictionaries---they actually contain the strings in S. For very large collections stored in slow-access memory, we propose much more compact data structures that support weak prefix searches---they return the ranks of matching strings provided that some string in S starts with the given prefix. In fact, we show that our most space-efficient data structure is asymptotically space-optimal. Previously, data structures such as String B-trees (and more complicated cache-oblivious string data structures) have implicitly supported weak prefix queries, but they all have query time that grows logarithmically with the size of the string collection. In contrast, our data structures are simple, naturally cache-efficient, and have query time that depends only on the length of the prefix, all the way down to constant query time for strings that fit in one machine word. We give several applications of weak prefix searches, including exact prefix counting and approximate counting of tuples matching conjunctive prefix conditions.




1 Introduction

In this paper we are interested in the following problem (hereafter referred to as prefix search): given a collection of strings, find all the strings that start with a given prefix. In particular, we will be interested in the space/time tradeoffs needed to do prefix search in a static context (i.e., when the collection does not change over time).

There is a large literature on the indexing of string collections. We refer to Ferragina et al. [14, 4] for state-of-the-art results, with emphasis on the cache-oblivious model. Roughly speaking, results can be divided into two categories based on the power of the queries allowed. As shown by Pǎtraşcu and Thorup [19], any data structure for bit strings that supports predecessor (or rank) queries must either use super-linear space, or spend time Ω(log |p|) on a query for a prefix p. On the other hand, it is known that prefix queries, and more generally range queries, can be answered in constant time using linear space [1].

Another distinction is between data structures where the query time grows with the number of strings in the collection (typically comparison-based), and those where the query time depends only on the length of the query string (typically some kind of trie); obviously, one can also combine the two in a single data structure. In this paper we fill a gap in the literature by considering data structures for weak prefix search, a relaxation of prefix search, with query time depending only on the length of the query string. In a weak prefix search we have the guarantee that the input p is a prefix of some string in the set, and we are only requested to output the ranks (in lexicographic order) of the strings that have p as a prefix. Weak prefix searches have previously been implicitly supported by a number of string indexes, most notably the String B-tree [13] and its descendants. In this paper we also present a number of new applications, outlined at the end of the introduction.

Our first result is that weak prefix search can be performed by accessing a data structure that uses just O(n log ℓ) bits, where ℓ is the average string length. This is much less than the nℓ bits used for the strings themselves. We also show that this is the minimum possible space usage for any such data structure, regardless of query time. We investigate different time/space tradeoffs: at one end of this spectrum we have constant-time queries (for prefixes that fit in a constant number of machine words), and still asymptotically vanishing space usage for the index; at the other end, space is optimal and the query time grows logarithmically with the length of the prefix. Precise statements can be found in the technical overview below.

Motivation for smaller indexes.

Traditionally, algorithmic complexity is studied in the so-called RAM model. However, in recent years a discrepancy has been observed between that model and the reality of modern computing hardware. In particular, the RAM model assumes that the cost of memory access is uniform; however, current architectures, including distributed ones, have strongly non-uniform access costs, and this trend seems likely to continue (see, e.g., [17] for recent work in this direction). Modern computer memory is organised as a hierarchy in which each level is both faster and smaller than the subsequent one. As a consequence, we expect that reducing the size of a data structure will yield faster query resolution. Our aim in reducing the space occupied by the data structure is to improve the chance that it will fit into the faster levels of the hierarchy. This can have a significant impact on performance, e.g., in cases where the plain storage of the strings does not fit in main memory. For databases containing very long keys this is likely to happen (e.g., static repositories of URLs, which are of utmost importance in the design of search engines, can contain strings as long as one kilobyte). In such cases, a reduction of the space usage from O(nℓ) to O(n log ℓ) bits can be significant.

By studying the weak version of prefix search, we are able to separate clearly the space used by the original data from the space that is necessary to store an index. Gál and Miltersen [15] classify structures as systematic and non-systematic, depending on whether the original data is stored verbatim or not. Our indices provide a result without using the original data, and in this sense our structures for weak prefix search are non-systematic. Observe, however, that since those structures give no guarantee on the result for strings that are not prefixes of some string of the set, standard information-theoretic lower bounds (based on the possibility of reconstructing the original set of strings from the data structure) do not apply.

Technical overview.

For simplicity we consider strings over a binary alphabet, but our methods generalise to larger alphabets (the interested reader can refer to Appendix H for a discussion of this point). Our main result is that weak prefix search needs just O(|p|/w + log |p|) time and O(n log ℓ) space in addition to the original data, where ℓ is the average length of the strings, p is the query string, and w is the machine word size. For strings of fixed length w, this reduces to query time O(log w) and space O(n log w), and we show that the latter is optimal regardless of query time. Throughout the paper we strive to state all space results in terms of ℓ, and all time results in terms of the length of the actual query string p, as in a realistic setting (e.g., the term dictionary of a search engine) string lengths might vary wildly, and queries might be issued that are significantly shorter than the average (let alone maximum) string length. Actually, the data structure size depends on the hollow trie size of the set S, a data-aware measure related to the trie size [16] that is much more precise than the bound O(n log ℓ).

Building on ideas from [1], we then give an O(1 + |p|/w)-time solution (i.e., constant time for prefixes of length O(w)) whose index uses space sublinear in the nℓ bits occupied by the strings themselves. This structure shows that weak prefix search is possible in constant time using sublinear space. This data structure uses O(1 + |p|/B) I/Os in the cache-oblivious model.

Comparison to related results.

If we study the same problem in the I/O model or in the cache-oblivious model, the nearest competitor is the String B-tree [13] and its cache-oblivious version [4]. In the static case, the String B-tree can be tuned to use O(n log ℓ) bits by carefully encoding the string pointers, and it has very good search performance, with O(|p|/B + log_B n) I/Os per query (supporting all query types discussed in this paper). However, a search for p inside the String B-tree may involve Ω(log n) RAM operations, so it may be too expensive for intensive computations. (The String B-tree can actually be tuned to reduce the number of RAM operations, but only at the price of a worse I/O cost.) Our first method, which also achieves the smallest possible space usage of O(n log ℓ) bits, uses O(|p|/w + log |p|) RAM operations and O(|p|/B + log |p|) I/Os instead. The number of RAM operations is a strict improvement over String B-trees, while the I/O bound is better for large enough sets. Our second method uses slightly more space but features O(1 + |p|/w) RAM operations and O(1 + |p|/B) I/Os.

In [14], the authors discuss very succinct static data structures for the same purposes (on a generic alphabet), decreasing the space to a lower bound that, in the binary case, is the trie size. The search time is logarithmic in the number of strings. As in the previous case, we improve on the number of RAM operations, and on the number of I/Os for large enough sets.

The first cache-oblivious dictionary supporting prefix search was devised by Brodal et al. [5], achieving O(|p|/B + log_B n) I/Os. We note that the result in [5] is optimal in a comparison-based model, where we have a lower bound of Ω(log_B n) I/Os per query. By contrast, our result, like those in [4, 14], assumes an integer alphabet, for which no such lower bound holds.

Implicit in the paper of Alstrup et al. [1] on range queries is a linear-space structure for constant-time weak prefix search on fixed-length bit strings. Our constant-time data structure, instead, uses sublinear space and allows for variable-length strings.


Data structures that allow weak prefix search can be used to solve the non-weak version of the problem, provided that the original data is stored (typically, in some slow-access memory): a single probe is sufficient to determine if the result set is empty; if not, access to the string set is needed just to retrieve the strings that match the query. We also show how to solve range queries with two additional probes to the original data (with respect to the output size), improving the results in [1]. We also present applications of our data structures to other important problems, viz., prefix counting. Finally, we show that our results extend to the cache-oblivious model, where we provide an alternative to the results in [5, 4, 14] that removes the dependence on the data set size for prefix searches and range queries.

Our contributions.

The main contribution of this paper is the identification of the weak prefix search problem, and the proposal of an optimal solution based on techniques developed in [2]. The optimality (in space or time) of the solution is also a central result of this research. The second interesting contribution is the description of range locators for variable-length strings; they are an essential building block in our weak prefix search algorithms, and can be used whenever it is necessary to recover, in little space, the range of leaves under a node of a trie.

2 Notation and tools

In the following sections, we will use the toy set of strings shown in Figure 1 to display examples of our constructions. In this section, we introduce some terminology and notation adopted throughout the rest of the paper. We use von Neumann's definition and notation for natural numbers: n = { 0, 1, …, n − 1 }, so 2 = { 0, 1 } and 2* is the set of all binary strings.

Weak prefix search. Given a prefix-free set of strings S ⊆ 2*, the weak prefix search problem requires, given a prefix p of some string in S, to return the range of strings of S having p as a prefix; this set is returned as the interval of integers that are the ranks (in lexicographic order) of the strings of S having p as a prefix.
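For intuition, on a plainly stored sorted collection the answer to a weak prefix search can be computed with two binary searches; the following Python sketch (illustrative only, not the succinct structures developed below) returns the rank interval for a prefix p, assuming the weak-search promise that p is indeed a prefix of some string in the set:

```python
import bisect

def weak_prefix_search(sorted_strings, p):
    """Return (lo, hi), the inclusive rank interval of the strings
    prefixed by p. Correct only under the weak-search promise that
    some string in the (sorted, prefix-free) collection starts with p."""
    lo = bisect.bisect_left(sorted_strings, p)
    # Every string prefixed by p is < p + (symbol above the alphabet);
    # for the binary alphabet {'0','1'} the sentinel '2' works.
    hi = bisect.bisect_left(sorted_strings, p + "2") - 1
    return lo, hi

S = sorted(["001001010", "0010011010010", "00100110101"])
```

On the toy set of Figure 1, `weak_prefix_search(S, "0010")` covers all three strings, while a query that prefixes no string returns a meaningless interval, exactly as the problem definition allows.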

Model and assumptions. Our model of computation is a unit-cost word RAM with word size w. We assume that n ≤ 2^{cw} for some constant c, so that constant-time static data structures depending on n can be used.
We also consider bounds in the cache-oblivious model. In this model, the machine has a two-level memory hierarchy, where the fast level has an unknown size of M bits (a parameter that is actually not used in this paper) and the slow level, of unspecified size, is where our data structures reside. We assume that the fast level plays the role of a cache for the slow level, with an optimal replacement strategy, and that transfers between the two levels are done in blocks of an unknown size of B bits, with B ≤ M. The cost of an algorithm is the total number of block transfers between the two levels.
Compacted tries. Consider the compacted trie built for a prefix-free set of strings S ⊆ 2*. For a given node α of the trie, we define (see Figure 1):

Figure 1: The compacted trie for the set S = { 001001010, 0010011010010, 00100110101 }, and the related names.

Figure 2: The data making up a z-fast prefix trie based on the trie above, and the associated range locator: a function maps handles to extents (the corresponding hollow z-fast prefix trie just returns the lengths of the extents), and the range locator table stores, for each extent, the extent with a zero or one appended and trailing zeroes removed.
  • e_α, the extent of node α, is the longest common prefix of the strings represented by the leaves that are descendants of α (this was called the “string represented by α” in [2]);

  • c_α, the compacted path of node α, is the string stored at α;

  • n_α, the name of node α, is the string e_α deprived of its suffix c_α (this was called the “path leading to α” in [2]);

  • given a string x, we let exit(x) be the exit node of x, that is, the only node α such that n_α is a prefix of x and either e_α = x or e_α is not a prefix of x;

  • the skip interval associated to α is [0, |e_α|] for the root, and (|n_α| − 1, |e_α|] for all other nodes.

We note the following property, proved in Appendix B:

Theorem 1

The average length of the extents of internal nodes is at most the average string length minus one.

Data-aware measures. Consider the compacted trie on a set S ⊆ 2*. We define the trie measure of S [16] as

T(S) = Σ_α (|c_α| + 1),

where the summation ranges over all nodes α of the trie. For the purpose of this paper, we will also use the hollow trie measure

HT(S) = Σ_α (⌈log(|c_α| + 1)⌉ + 1),

where α now ranges over internal nodes only. Since the trie has at most 2n − 1 nodes and the compacted path lengths sum to at most the total string length nℓ, by concavity of the logarithm we have HT(S) = O(n log ℓ). (A compacted trie is made hollow by replacing the compacted path at each node by its length and then discarding all its leaves; a recursive definition of hollow tries appears in [3].)

Storing functions. The problem of statically storing an r-bit function from a given set of n keys has recently received renewed attention [10, 7, 20]. For the purposes of this paper, we simply recall that these methods allow us to store an r-bit function on n keys using rn + O(n) bits, with O(|x|/w) access time for a query string x. Practical implementations are described in [3]. In some cases, we will store a compressed function using a minimal perfect hash function (O(n) bits) followed by a compressed data representation (e.g., an Elias–Fano compressed list [3]). In that case, storing the natural numbers x_0, x_1, …, x_{n−1} requires, up to lower-order terms, Σ_i ⌈log(x_i + 1)⌉ + O(n) bits.
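As a concrete illustration of the kind of compressed list mentioned above, here is a minimal Python sketch of Elias–Fano encoding of a monotone sequence; the split into low and high bit halves is the standard construction, but the class name and API are our own, and a real implementation would answer `get` with constant-time select rather than a scan:

```python
class EliasFano:
    """Elias-Fano encoding of a non-decreasing sequence of naturals:
    low bits stored verbatim, high bits in unary, using roughly
    n * (2 + log(u/n)) bits for n values bounded by u."""

    def __init__(self, values):
        n, u = len(values), max(values) + 1
        self.l = max((u // n).bit_length() - 1, 0)  # low-bit count ~ log(u/n)
        self.lows = [v & ((1 << self.l) - 1) for v in values]
        # High parts in unary: one 1 per value, preceded by as many 0s
        # as the increase of the high part.
        self.high_bits = []
        prev = 0
        for v in values:
            h = v >> self.l
            self.high_bits.extend([0] * (h - prev) + [1])
            prev = h

    def get(self, i):
        # Select the (i+1)-th one in high_bits; its position minus i
        # is the high part of the i-th value.
        ones = -1
        for pos, b in enumerate(self.high_bits):
            ones += b
            if ones == i:
                return ((pos - i) << self.l) | self.lows[i]
        raise IndexError(i)
```

The decoding identity `position_of_ith_one - i = high_part` is what makes the unary upper half work.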

Relative dictionaries. A relative dictionary stores a set E relative to some set S ⊇ E. That is, the relative dictionary answers questions about membership in E, but its answers are required to be correct only if the query string is in S. It is possible to store such a dictionary in O(|E| log(|S|/|E|)) bits of space with O(|x|/w) access time for a query string x [2].

Rank and select. We will use two basic building blocks of several succinct data structures—rank and select. Given a bit array (or bit string) b, whose positions are numbered starting from 0, rank(p) is the number of ones up to position p, exclusive, whereas select(r) is the position of the r-th one in b, with ones numbered starting from 0. It is well known that these operations can be performed in constant time on a string of n bits using o(n) additional bits, see [18, 8, 6, 21].
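A minimal (non-succinct) Python sketch of these two primitives, useful mainly to pin down the conventions above; a real succinct implementation would use an o(n)-bit directory instead of a full prefix-sum array:

```python
class BitVector:
    def __init__(self, bits):
        self.bits = bits
        # prefix[p] = number of ones strictly before position p
        self.prefix = [0]
        for b in bits:
            self.prefix.append(self.prefix[-1] + b)

    def rank(self, p):
        """Number of ones in positions [0, p)."""
        return self.prefix[p]

    def select(self, r):
        """Position of the r-th one, counting from r = 0."""
        for pos, b in enumerate(self.bits):
            if b and r == 0:
                return pos
            r -= b
        raise IndexError("fewer ones than requested")
```

Note the duality used throughout the paper: `select(rank(p))` is the first one at or after any position p that holds a one.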

3 From prefixes to exit nodes

We break the weak prefix search problem into two subproblems. Our first goal is to go from a given prefix p of some string in S to its exit node.

3.1 Hollow z-fast prefix tries

We start by describing an improvement of the z-fast trie, a data structure first defined in [2]. The main idea behind a z-fast trie is that, instead of representing explicitly a binary tree structure containing the compacted paths of the trie, we store a function that maps a certain prefix of each extent (the handle) to the extent itself. This mapping (which can be stored in linear space) is sufficient to navigate the trie and obtain, given a string x, the name of the exit node of x and the exit behaviour (left, right, or possibly equality for leaves). The interesting point about the z-fast trie is that it provides such a name in time O(|x|/w + log |x|), and that it leads easily to a probabilistically relaxed version, or even to blind/hollow variants.

To make the paper self-contained, we recall the main definitions from [2]. The 2-fattest number in a nonempty interval of positive integers is the number in the interval whose binary representation has the largest number of trailing zeros. Consider the compacted trie on S, one of its nodes α, its skip interval, and the 2-fattest number f in it (note the change with respect to [2]); if the interval is empty, which can happen only at the root, we set f = 0. The handle of α is e_α[0 . . f), where x[0 . . f) denotes the first f bits of x. A (deterministic) z-fast trie is a dictionary mapping each handle to the corresponding extent. In Figure 2, the part of the mapping with non-⊥ output is the z-fast trie built on the trie of Figure 1.
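The 2-fattest number of an interval can be computed with a couple of word operations; the following helper (our own, not code from the paper) uses the standard bit trick on a half-open interval (a, b]:

```python
def two_fattest(a, b):
    """Return the 2-fattest number in (a, b], i.e., the unique element
    with the maximum number of trailing zeroes, assuming 0 <= a < b."""
    # a and b first differ at bit k = msb(a ^ b); clearing bits 0..k-1
    # of b yields a multiple of 2**k that still lies in (a, b], while
    # no multiple of 2**(k+1) can fit in the interval.
    k = (a ^ b).bit_length() - 1
    return b & ~((1 << k) - 1)
```

Probing a dictionary only at 2-fattest positions is what lets the fat binary search of Algorithm 1 converge in a logarithmic number of steps.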

We now introduce a more powerful structure, the (deterministic) z-fast prefix trie. Consider again a node α of the compacted trie on S, with notation as above. The pseudohandles of α are the strings e_α[0 . . f′), where f′ ranges among the 2-fattest numbers of the intervals whose left endpoint is that of the skip interval of α and whose right endpoint is smaller than the handle length. Essentially, pseudohandles play the same rôle as handles for every prefix of the handle that extends the node name. We note immediately that there are at most logarithmically many pseudohandles associated with α, so the overall number of handles and pseudohandles is O(n log ℓ). It is now easy to define a z-fast prefix trie: the dictionary providing the map from handles to extents is enlarged to pseudohandles, which are mapped to the special value ⊥.

We are actually interested in a hollow version of a z-fast prefix trie; more precisely, a version implemented by a function G that maps handles of internal nodes to the length of their extents, and handles of leaves and pseudohandles to ⊥. The function G (see again Figure 2) can be stored in a very small amount of space; nonetheless, we will still be able to compute the name of the exit node of any string p that is a prefix of some string in S using Algorithm 1, whose correctness is proved in Appendix D.

Algorithm 1 performs a fat binary search: it maintains an interval of candidate extent lengths, repeatedly queries the function G at the prefix of p whose length is the 2-fattest number of the interval, and narrows the interval according to the answer, until the name of the exit node is determined.
Figure 3: Given a nonempty string p that is the prefix of at least one string in the set S, this algorithm returns the name of exit(p).
Algorithm 2 locates the range by removing the trailing zeroes of the node name and of its length-preserving successor, hashing the resulting strings, and ranking the positions so obtained in the bit vector of the range locator (Section 4).
Figure 4: Given the name x of a node in a trie containing n strings, this algorithm computes the interval containing precisely the (ranks of the) strings prefixed by x (i.e., the strings in the subtree whose name is x).

3.2 Space and time

The space needed for a hollow z-fast prefix trie depends on the components chosen for its implementation. The most trivial bound uses a function mapping handles and pseudohandles to one bit each, which makes it possible to recognise handles of internal nodes (O(n log ℓ) bits), and a function mapping handles to extent lengths (O(n log L) bits, where L is the maximum string length).

These results, however, can be significantly improved. First of all, we can store the handles of internal nodes in a relative dictionary. The dictionary will store the n − 1 handles of internal nodes out of the O(n log ℓ) handles and pseudohandles, using O(n log log ℓ) bits. Then, the mapping from handles to extent lengths can actually be recast into a mapping from each handle h_α to the difference |e_α| − |h_α|, which is at most |c_α|. By storing this data using a compressed function we will thus use space

Σ_α (⌈log(|c_α| + 1)⌉ + O(1)) = HT(S) + O(n),

where α ranges over internal nodes.

Algorithm 1 cannot iterate more than log |p| times; at each step, we query constant-time data structures using a prefix of p: using incremental hashing [9, Section 5], we can preprocess p in time O(|p|/w) (and O(|p|/B) I/Os) so that hashing any prefix of p requires constant time afterwards. We conclude that Algorithm 1 requires time O(|p|/w + log |p|).
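The incremental-hashing step can be illustrated with a simple polynomial rolling hash, a character-wise stand-in for the word-wise hashing of [9] (the modulus and base below are arbitrary choices of ours): after one linear pass over p, the hash of any prefix is available in constant time.

```python
M = (1 << 61) - 1   # a Mersenne prime modulus (arbitrary choice)
B = 131             # base (arbitrary choice)

def prefix_hashes(p):
    """One pass over p; h[k] is the hash of the prefix p[0:k]."""
    h = [0] * (len(p) + 1)
    for i, ch in enumerate(p):
        h[i + 1] = (h[i] * B + ord(ch) + 1) % M
    return h

def hash_prefix(h, k):
    """Constant-time hash of p[0:k] after preprocessing."""
    return h[k]
```

Each of the O(log |p|) dictionary probes of Algorithm 1 then costs O(1), so hashing contributes only the O(|p|/w)-style preprocessing term to the total.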

3.3 Faster, faster, faster…

We now describe a data structure mapping prefixes to exit nodes, inspired by the techniques used in [1], that answers in time O(1 + |p|/w) using space sublinear in the nℓ bits of the string collection, thus providing a different space/time tradeoff. The basic idea is as follows: for each node α of the compacted trie on the set S, consider the set of prefixes of e_α whose length lies in the skip interval of α and either is a multiple of w or is smaller than the first such multiple in the interval.

We store a function mapping each prefix defined above to the length of the name of the corresponding node, and, additionally, a mapping from each node name to the length of its extent (in both cases, the values can be stored relative to the length of the key, so that compressed functions can be used).

To retrieve the exit node of a string p that is a prefix of some string in S, we consider the string p₀ = p[0 . . w⌊|p|/w⌋) (i.e., the longest prefix of p whose length is a multiple of w). Then, we check whether |p| is at most the length of the extent of the exit node of p₀ (i.e., whether p is a prefix of the extent of the exit node of p₀). If this is the case, then clearly p has the same exit node as p₀. Otherwise, the first map, queried with p itself, provides directly the length of the name of the exit node of p, which is thus determined. All operations are completed in time O(1 + |p|/w).

The proof of the space bound for this structure is deferred to Appendix C.

4 Range location

Our next problem is determining the range (of lexicographic ranks) of the leaves that appear under a certain node of a trie. Actually, this problem is pretty common in static data structures, and usually it is solved by associating with each node a pair of integers of ⌈log n⌉ bits each. However, this means that the structure has, in the worst case, a Θ(n log n)-bit dependence on the number of strings.

To work around this issue, we propose to use a range locator—an abstraction of a component used in [2]. Here we redefine range locators from scratch, and improve their space usage so that it depends on the average string length, rather than on the maximum string length.

A range locator takes as input the name of a node, and returns the range of ranks of the leaves that appear under that node. For instance, in our toy example the answer to 0010011 (the name of the right child of the root) would be the interval [1 . . 2]. To build a range locator, we need to introduce monotone minimal perfect hashing.

Given a set of strings T, a monotone minimal perfect hash function [2] is a bijection T → |T| that preserves lexicographic ordering. This means that each string of T is mapped to its rank in T (but strings not in T give random results). We use the following results from [3] (results in [3] are actually stated for prefix-free sets, but it is trivial to make a set of strings prefix-free at the cost of doubling the average length):

Theorem 2

Let T be a set of n strings of average length ℓ and maximum length L, and let x be a string. Then, there are monotone minimal perfect hashing functions on T that:

  1. use space O(n log L) and answer in time O(|x|/w);

  2. use space O(n log log L) and answer in time O(|x|/w + log |x|).
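As a toy stand-in for these structures (with none of their space guarantees), the contract of a monotone minimal perfect hash function can be emulated with a plain dictionary; the whole point of the results above is achieving the same interface in far less space, at the price of arbitrary answers on non-keys:

```python
def build_mmphf(strings):
    """Toy monotone minimal perfect hash: maps each key to its
    lexicographic rank. A real mmphf gives the same answers on keys
    in much less space, returning arbitrary values on non-keys."""
    ranks = {s: i for i, s in enumerate(sorted(strings))}
    return ranks.get  # queries outside the key set return None here

h = build_mmphf(["001001010", "0010011010010", "00100110101"])
```

Any use of such a function below relies only on answers for strings that are guaranteed to be in the key set.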

We show how a reduction can relieve us from the dependence on the maximum length L; this is essential to our goals, as we want to depend just on the average length:

Theorem 3

There is a monotone minimal perfect hashing function on T using space O(n log log ℓ) that answers in time O(|x|/w + log |x|) on a query string x.

Proof 1

We divide T into the set T₀ of strings shorter than a fixed threshold, and the remaining “long” strings T₁. Setting up an n-bit vector that records the elements of T₁ within the sorted set T, endowed with select-one and select-zero structures (O(n) bits), we can reduce the problem to hashing T₀ and T₁ monotonically and separately. We note, however, that using Theorem 2, T₀ can be hashed within the required space bound, as the maximum length in T₀ is bounded by the threshold, whereas T₁ can be hashed explicitly, using a function mapping each long string to its rank; since the total length of the strings in T is nℓ, the set T₁ necessarily contains few strings, so this function requires only O(n) bits. Overall, we obtain the required bounds.
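The rank-merging step of this proof can be sketched concretely; here the two per-part hash functions are plain dictionaries (hypothetical stand-ins for the structures of Theorem 2), and the bit vector recovers global ranks from part-local ranks via select:

```python
def split_mmphf(strings, threshold):
    """Monotone hashing by length split, as in the proof above: mark
    long strings in sorted order with a bit vector, hash each part
    separately, and map a part-local rank back to a global rank."""
    ordered = sorted(strings)
    long_bit = [1 if len(s) >= threshold else 0 for s in ordered]
    short = {s: i for i, s in enumerate(t for t in ordered if len(t) < threshold)}
    long_ = {s: i for i, s in enumerate(t for t in ordered if len(t) >= threshold)}
    # select0(r) / select1(r): global position of the r-th short/long key.
    select0 = [p for p, b in enumerate(long_bit) if b == 0]
    select1 = [p for p, b in enumerate(long_bit) if b == 1]

    def rank(s):
        return select1[long_[s]] if len(s) >= threshold else select0[short[s]]

    return rank
```

The bit vector costs O(n) bits, so the dominant term is whichever sub-structure hashes the short strings.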

We now describe in detail our range locator, using the notation of Section 2. Given a string x, let c(x) be x with all its trailing zeroes removed. We build a set of strings E as follows: for each extent e of an internal node, we add to E the strings c(e0) and c(e1), and, if e1 is not made entirely of ones, we also add to E the string c((e1)⁺), where y⁺ denotes the successor of length |y| of y in lexicographic order (numerically, y⁺ is y + 1). We build a monotone minimal perfect hashing function h on E, noting the following easily proven fact:

Proposition 1

The average length of the strings in E is at most ℓ.

The second component of the range locator is a bit vector b of length |E|, in which the bits corresponding to the names of leaves are set to one. The vector is endowed with a ranking structure (see Figure 2).

It is now immediate that, given a node name x, by hashing c(x) and ranking the bit position thus obtained in b, we obtain the left extreme of the range of leaves under the node. Moreover, performing the same operations on c(x⁺), we obtain the right extreme. All these strings are in E by construction, except for the case of a node name of the form 11⋯1; however, in that case the right extreme is just the number of leaves n (see Algorithm 2 for the details).
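The two string operations used here are easy to get wrong at the boundaries; here is a small sketch of both (our own scaffolding, for illustration), together with a brute-force check of the left and right extremes on the toy set of Figure 1, computed directly on the sorted leaves:

```python
import bisect

def c(x):
    """x with all its trailing zeroes removed."""
    return x.rstrip("0")

def succ(x):
    """The successor of x of length |x| in lexicographic order
    (numerically, x + 1); undefined when x is made of all ones."""
    v = int(x, 2) + 1
    assert v < 1 << len(x), "x has no successor of its own length"
    return format(v, "0%db" % len(x))

def leaf_range(sorted_leaves, x):
    """Brute-force range location: the leaves under the node named x
    are exactly the strings in [x, succ(x)) lexicographically."""
    lo = bisect.bisect_left(sorted_leaves, x)
    hi = bisect.bisect_left(sorted_leaves, succ(x)) - 1
    return lo, hi
```

The succinct range locator returns the same interval, but via one hash and one rank per extreme instead of binary searches over the stored strings.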

A range locator uses at most 3n + o(n) bits for b and its rank and select structures. Thus, space usage is dominated by the monotone hashing component. Using the structures described above, we obtain:

Theorem 4

There are structures implementing range location in time O(|x|/w) using O(n log L) bits of space, and in time O(|x|/w + log |x|) using O(n log log ℓ) bits of space.

We remark that other combinations of monotone minimal perfect hashing and succinct data structures can lead to similar results. For instance, we could store the trie structure using a standard preorder balanced-parentheses representation, use hashing to retrieve the lexicographic rank of a node name, select the corresponding open parenthesis, find in constant time the matching closed parenthesis, and obtain in this way the number of leaves under the node. Among several such asymptotically equivalent solutions, we believe ours is the most practical.

5 Putting It All Together

In this section we gather the main results about prefix search:

Theorem 5

There are structures implementing weak prefix search in space HT(S) + O(n log log ℓ) with query time O(|p|/w + log |p|), and in space sublinear in nℓ with query time O(1 + |p|/w).

Proof 2

The first structure uses a hollow z-fast prefix trie followed by the range locator based on Theorem 3: the first component provides the name of the exit node of p; given that name, the range locator returns the correct range. For the second structure, we use the structure defined in Section 3.3 followed by the first, faster range locator of Theorem 4.

Actually, the space of the second structure described in Theorem 5 can be reduced further, in exchange for a constant-factor increase in query time, as shown in Appendix E:

Theorem 6

For any constant c, there is a structure implementing weak prefix search with query time O(c + |p|/w) whose space usage decreases as c grows (see Appendix E for the precise bound).

We note that all our time bounds can be translated into I/O bounds in the cache-oblivious model if we replace the O(|p|/w) terms by O(|p|/B). The O(|p|/w) term appears in two places:

  • The precalculation phase, in which a vector of O(|p|/w) hash words on the prefix p is computed; it is later used to compute all the hash functions on prefixes of p.

  • The range location phase, in which we need to compute c(x) and c(x⁺), where x is a prefix of p, and subsequently compute the hash vectors on both strings.

Observe that the above operations can be carried out using arithmetic operations only, without any additional I/O (we can use k-wise independent hashing, involving only multiplications and additions, to compute the hash vectors, and only basic arithmetic operations to compute c(x) and c(x⁺)), except for writing the result of the computation, which occupies O(|p|/w) words of space and thus takes O(|p|/B) I/Os. Thus both phases need only O(|p|/B) I/Os, corresponding to the time needed to read the pattern and to write the result.

6 A space lower bound

In this section we show that the space usage achieved by the weak prefix search data structure described in Theorem 5 is optimal up to a constant factor. In fact, we show a matching lower bound for the easier problem of prefix counting (i.e., counting how many strings start with a given prefix), and consider the more general case where the answer is only required to be correct up to an additive error less than δ. We note that any data structure supporting exact prefix counting can be used to achieve δ-approximate prefix counting, by building the data structure for the subset that contains every δ-th element of S in sorted order. The proof is in Appendix F.

Theorem 7

Consider a data structure (possibly randomised) indexing a set S of n strings with average length ℓ, supporting δ-approximate prefix count queries: given a prefix of some key in S, the structure returns the number of elements of S that have this prefix with an additive error of less than δ, where δ ≥ 1. The data structure may return any number when given a string that is not a prefix of a key in S. Then the expected space usage on a worst-case set is Ω((n/δ) log ℓ) bits. In particular, if no error is allowed, the expected space usage is Ω(n log ℓ) bits.

Note that the trivial information-theoretic lower bound does not apply, as it is impossible to reconstruct S from the data structure.

It is interesting to note the connections with the lower and upper bounds presented in [14]. That paper shows a lower bound on the number of bits necessary to represent a set of strings that, in the binary case, reduces to the trie measure T(S), and provides a matching data structure. Theorem 5 provides a hollow data structure that is sized following the naturally associated measure, the hollow trie measure HT(S). Thus, Theorems 5 and 7 can be seen as the hollow version of the results presented in [14]. Improving the lower bound of Theorem 7 to HT(S) is an interesting open problem.

7 Applications

In this section we highlight some applications of weak prefix search. In several cases, we have to access the original data, so we are actually using weak prefix search as a component of a systematic (in the sense of [15]) data structure. However, our space bounds consider only the indexing data structure. Note that the pointers needed to access a set of strings occupying nℓ bits overall require, in principle, O(n log(nℓ)) bits of space to be represented; this space can be larger than that of some of the data structures themselves. Most applications can be turned into cache-oblivious data structures, but this discussion is postponed to the Appendix for the sake of space.

In general, we think that the space used to store and access the original data should not be counted in the space used by weak/blind/hollow structures, as the same data can be shared by many different structures. There is a standard technique, however, that can be used to circumvent this problem: by using O(nℓ) bits to store the set S, we can round up the space used by each string to the nearest power of two. As a result, pointers need just O(n(log n + log log ℓ)) bits to be represented.

7.1 Prefix search and counting in minimal probes

The structures for weak prefix search described in Section 5 can be adapted to solve the prefix search problem within the same bounds, provided that the actual data is available, although typically in some slow-access memory. Given a prefix p, we get an interval [i . . j]. If there exists some string in the data set prefixed by p, then it must be at one of the positions in [i . . j], and all strings in that interval are actually prefixed by p. So we have reduced the search to two alternatives: either all (and only) the strings at positions in [i . . j] are prefixed by p, or the table contains no string prefixed by p. This implies the two following results:

  • We can report all the strings prefixed by a given prefix p using an optimal number of probes. If the number of strings prefixed by p is t, then we will probe exactly t positions in the table. If no string is prefixed by p, then we will probe a single position in the table.

  • We can count the number of strings prefixed by a given prefix p in just one probe: it suffices to probe the table at any position in the interval [i . . j]; if the returned string is prefixed by p, we can conclude that the number of strings prefixed by p is j − i + 1; otherwise, we conclude that no string is prefixed by p.
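Both reductions above can be sketched in a few lines; here the weak-search interval is simply passed in (a hypothetical stand-in for the index), and the point is that the stored table itself is probed exactly as often as stated:

```python
def prefix_count(table, weak_interval, p):
    """One probe: trust the weak interval (i, j) iff the probed
    string is actually prefixed by p."""
    i, j = weak_interval
    return j - i + 1 if table[i].startswith(p) else 0

def prefix_report(table, weak_interval, p):
    """Optimal probing: either every position in (i, j) is a hit,
    or the very first probe disproves the prefix."""
    i, j = weak_interval
    if not table[i].startswith(p):
        return []          # one probe, empty answer
    return [table[k] for k in range(i, j + 1)]
```

Note that the single verification probe is what turns the unreliable weak answer into a reliable one.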

7.2 Range emptiness and search with two additional probes

The structures for weak prefix search described in Section 5 can also be used for range emptiness and search within the same bounds, again if the actual data is available. In the first case, given two strings and we ask whether any string in the interval belongs to ; in the second case we must report all such strings.

Consider the longest common prefix of the two strings (which can be computed in time ). There are two sub-cases:

  • The case ( is actually a prefix of ). We are looking for all strings prefixed by which are lexicographically smaller than . We perform a prefix query for , getting . Then we can report all the elements in by scanning the strings at positions in the interval until we encounter a string that is not in the interval. Clearly the number of probed positions is .

  • The case . We perform a prefix query for , getting , and another query for , getting . Now it is immediate that if is not empty, then it is necessarily made up of a suffix of the first interval and a prefix of the second. We can now report using at most additional probes: we start from the end of the first interval and scan backwards until we find an element not in the range; then, we start from the beginning of the second interval and scan forwards until we find an element not in the range.

We report all elements thus found: clearly, we make at most two additional probes. In particular, we can report whether is empty in at most two probes. These results improve the space bounds of the index described in [1], provide a new index using just bits, and give bounds in terms of the average length.
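The two scans above can be sketched as follows. Again a sorted list plus `bisect` stands in for the weak prefix search index, and all names (`weak_interval`, `range_report`) are ours for illustration only:

```python
from bisect import bisect_left, bisect_right

def weak_interval(table, p):
    """Stand-in for the weak prefix search index: the inclusive interval of
    ranks of strings starting with p (a real weak index returns this without
    storing the strings, and only meaningfully when some string matches)."""
    i = bisect_left(table, p)
    j = bisect_right(table, p + "\x7f") - 1  # '\x7f' sorts after demo chars
    return i, j

def range_report(table, lo, hi):
    """Report all strings in [lo, hi], wasting at most two extra probes."""
    # Longest common prefix of the two endpoints.
    k = 0
    while k < min(len(lo), len(hi)) and lo[k] == hi[k]:
        k += 1
    out = []
    if k == len(lo):                       # lo is a prefix of hi
        i, j = weak_interval(table, lo)
        t = i
        while t <= j and table[t] <= hi:   # forward scan over the interval
            out.append(table[t]); t += 1
    else:
        i0, j0 = weak_interval(table, lo[:k + 1])
        i1, j1 = weak_interval(table, hi[:k + 1])
        t = j0
        while t >= i0 and table[t] >= lo:  # suffix of the first interval
            out.append(table[t]); t -= 1
        out.reverse()
        t = i1
        while t <= j1 and table[t] <= hi:  # prefix of the second interval
            out.append(table[t]); t += 1
    return out
```

Each scan stops at the first element outside the range, so at most one probe per scan is "wasted", matching the two-additional-probes claim.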


  • [1] Stephen Alstrup, Gerth Brodal, and Theis Rauhe. Optimal static range reporting in one dimension. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, pages 476–482. ACM, 2001.
  • [2] Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. Monotone minimal perfect hashing: Searching a sorted table with accesses. In Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 785–794. ACM Press, 2009.
  • [3] Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. Theory and practise of monotone minimal perfect hashing. In Proceedings of the Eleventh Workshop on Algorithm Engineering and Experiments, ALENEX 2009, New York, New York, USA, January 3, 2009, pages 132–144, 2009.
  • [4] Michael A. Bender, Martin Farach-Colton, and Bradley C. Kuszmaul. Cache-oblivious string B-trees. In Proceedings of the 25th ACM Symposium on Principles of Database Systems, pages 233–242, New York, NY, USA, 2006. ACM.
  • [5] Gerth Stølting Brodal and Rolf Fagerberg. Cache-oblivious string dictionaries. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2006, Miami, Florida, USA, January 22-26, 2006, pages 581–590, 2006.
  • [6] Andrej Brodnik and J. Ian Munro. Membership in constant time and almost-minimum space. SIAM Journal on Computing, 28(5):1627–1640, 1999.
  • [7] Denis Xavier Charles and Kumar Chellapilla. Bloomier filters: A second look. In Algorithms - ESA 2008, 16th Annual European Symposium, Karlsruhe, Germany, September 15-17, 2008. Proceedings, pages 259–270, 2008.
  • [8] David R. Clark and J. Ian Munro. Efficient suffix trees on secondary storage (extended abstract). In Proceedings of the Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, 28-30 January 1996, Atlanta, Georgia., pages 383–391, 1996.
  • [9] Martin Dietzfelbinger, Joseph Gil, Yossi Matias, and Nicholas Pippenger. Polynomial hash functions are reliable (extended abstract). In Automata, Languages and Programming, 19th International Colloquium, ICALP92, Vienna, Austria, July 13-17, 1992, Proceedings, pages 235–246, 1992.
  • [10] Martin Dietzfelbinger and Rasmus Pagh. Succinct data structures for retrieval and approximate membership (extended abstract). In Proceedings of 35th International Colloquium on Automata, Languages and Programming (ICALP), volume 5125 of Lecture Notes in Computer Science, pages 385–396. Springer, 2008.
  • [11] Peter Elias. Efficient storage and retrieval by content and address of static files. J. Assoc. Comput. Mach., 21(2):246–260, 1974.
  • [12] Peter Elias. Universal codeword sets and representations of the integers. IEEE Transactions on Information Theory, 21:194–203, 1975.
  • [13] Paolo Ferragina and Roberto Grossi. The string B-tree: a new data structure for string search in external memory and its applications. Journal of the ACM, 46(2):236–280, 1999.
  • [14] Paolo Ferragina, Roberto Grossi, Ankur Gupta, Rahul Shah, and Jeffrey Scott Vitter. On searching compressed string collections cache-obliviously. In Proceedings of the 27th ACM symposium on Principles of Database Systems, pages 181–190, 2008.
  • [15] A. Gál and P.B. Miltersen. The cell probe complexity of succinct data structures. Theoret. Comput. Sci., 379(3):405–417, 2007.
  • [16] Ankur Gupta, Wing-Kai Hon, Rahul Shah, and Jeffrey Scott Vitter. Compressed data structures: Dictionaries and data-aware measures. Theor. Comput. Sci., 387(3):313–331, 2007.
  • [17] Nikos Hardavellas, Michael Ferdman, Babak Falsafi, and Anastasia Ailamaki. Reactive NUCA: near-optimal block placement and replication in distributed caches. In Stephen W. Keckler and Luiz André Barroso, editors, ISCA, pages 184–195. ACM, 2009.
  • [18] G. Jacobson. Space-efficient static trees and graphs. In Proceedings of the 30th Annual Symposium on Foundations of Computer Science, pages 549–554, 1989.
  • [19] Mihai Pǎtraşcu and Mikkel Thorup. Randomization does not help searching predecessors. In Proc. 18th Symposium on Discrete Algorithms (SODA), pages 555–564, 2007.
  • [20] Ely Porat. An optimal bloom filter replacement based on matrix solving. In Computer Science - Theory and Applications, Fourth International Computer Science Symposium in Russia, CSR 2009, Novosibirsk, Russia, August 18-23, 2009. Proceedings, pages 263–273, 2009.
  • [21] Rajeev Raman, Venkatesh Raman, and Srinivasa Rao Satti. Succinct indexable dictionaries with applications to encoding k-ary trees, prefix sums and multisets. ACM Trans. Algorithms, 3(4):43, 2007.

Appendix A Conclusions

We have presented two data structures for prefix search that provide different space/time tradeoffs. In one case (Theorem 5), we prove a lower bound showing that the structure is space-optimal. In the other case (Theorem 6), the structure is time-optimal. It is also interesting to note that the space usage of the time-optimal data structure can be made arbitrarily close to the lower bound. Our structures are based on range locators, a general building block for static data structures, and on structures that are able to map prefixes to the names of the associated exit nodes. In particular, we discuss a variant of the z-fast trie, the z-fast prefix trie, that is suitable for prefix searches. Our variant carries over the good properties of the z-fast trie (truly linear space and logarithmic access time) and significantly widens its usefulness by making it able to retrieve the name of the exit node of a prefix. We have shown several applications in which sublinear indices very quickly access data kept in slow-access memory, improving some results in the literature.

Appendix B Proof of Theorem 1

Let be the sum of the lengths of the extents of internal nodes, and the sum of the lengths of the strings in the trie. We show equivalently that . This is obviously true if . Otherwise, let be the length of the compacted path at the root, and let , be the number of leaves in the left and right subtrie; correspondingly, let the sum of lengths of the extents of each subtrie, and the sum of the lengths of the strings in each subtrie, stripped of their first bits. Assuming by induction , we have to prove

which can be easily checked to be always true under the assumption above.
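To complement the induction, the quantities involved can be computed explicitly on small examples. The following sketch (our own illustration; `extents` is not from the paper) builds the compacted binary trie of a prefix-free set of binary strings and returns the extents of its internal nodes, so the sum of their lengths can be compared with the total length of the strings:

```python
def extents(strings):
    """Extents of the internal nodes of the compacted binary trie built on a
    prefix-free set of strings over the alphabet {'0', '1'}."""
    def build(S, depth):
        # S: subset of the strings sharing a common prefix of length `depth`.
        if len(S) <= 1:
            return []  # a leaf has no internal extent to report
        # Extend the common prefix as far as all strings agree.
        d = depth
        while len({s[d] for s in S}) == 1:
            d += 1
        ext = S[0][:d]  # extent of this internal node
        left = [s for s in S if s[d] == "0"]
        right = [s for s in S if s[d] == "1"]
        return [ext] + build(left, d + 1) + build(right, d + 1)
    return build(sorted(strings), 0)
```

On any such set, the sum of the internal extent lengths is bounded by the sum of the string lengths, which is the kind of inequality the induction above establishes.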

Appendix C Proof of the space bound claimed in Section 3.3

First of all, it can easily be proved that the domain of is in size. Each contributes at most prefixes whose lengths are in interval . It also contributes at most prefixes whose lengths are of the form , where . Overall the total number of prefixes is no more than:

The sum of the lengths of the skip intervals of all nodes of the trie is no larger than the sum of the lengths of the strings:

From that we have:

Summing up, the total number of prefixes is less than . Since the output size of the function is bounded by , where is the maximum string length, we would obtain the space bound . To prove the strict bound , we need to further refine the structure so that the “cutting step” is larger in the deeper regions of the trie.

Let be the subset of strings of of length less than , and the set of remaining strings. We will change the step after depth . Let and let . We will say that a node is deep if its extent has length at least . We will split into a function with output size that maps prefixes shorter than (short prefixes), and a function with output size that maps the remaining long prefixes. For every node with skip interval , we consider three cases:

  1. If (a non-deep node), we will store the prefixes of that have lengths either of the form , where , or in , where is the minimum such that . Those prefixes are short, so they will be mapped using .

  2. If (a deep node with non-deep parent), we store the following prefixes of :

    1. Prefixes of lengths , where , or of lengths in , where is the minimum such that . Those prefixes are short, so they will be mapped using .

    2. Prefixes of lengths , where . Those prefixes are long, so they will be mapped using .

  3. If (a deep node with a deep parent), we will store all prefixes that have lengths either of the form , where , or in , where is the minimum such that . Those prefixes are long, so they will be mapped using .

The function is now defined by combining and in the obvious way. To retrieve the exit node of a string that is a prefix of some string in , we have two cases: if , we consider the string , otherwise we consider the string . Then, we check whether (i.e., whether is a prefix of the extent of the exit node of ). If this is the case, then clearly has the same exit node as (i.e., ). Otherwise, the map gives the name of the exit node of : .

The space bound holds immediately for , as we already showed that prefixes (long and short) are overall , and has output size .

To bound the size of , we first bound the number of deep nodes. Clearly a deep node is either a leaf or has two deep children. If a deep node is a leaf, then its extent has length at least , so it represents a string from . Hence, the deep nodes constitute a forest of complete binary trees in which the number of leaves is the number of strings in . As the number of strings in is at most , we can conclude that the total number of nodes in the forest (i.e., the number of deep nodes) is at most . For each deep node we have two kinds of long prefixes:

  1. Prefixes that have lengths of the form . Those prefixes can only be prefixes of long strings, and for each long string , we can have at most such prefixes. As the total length of all strings in is at most , we conclude that the total number of such prefixes is at most .

  2. Prefixes that have lengths in or in for a node , where is the minimum such that . We can have at most prefix per node: since we have at most nodes, the number of prefixes of that form is .

As we have a total of long prefixes, and the output size of is , we can conclude that total space used for is bounded above by .

Finally, we remark that, by implementing by means of a compressed function, we need just bits of space.

Appendix D Correctness proof of Algorithm 1

The correctness of Algorithm 1 is expressed by the following lemma.

Lemma 1

Let , where , are the extents of the nodes of the trie that are prefixes of , ordered by increasing length. Let be the interval maintained by the algorithm. Before and after each iteration the following invariants are satisfied:

  1. there exists at most a single such that ;

  2. for some , and ;

  3. during the algorithm, is only queried with strings that are either the handle of an ancestor of , or a pseudohandle of ; so, is well-defined on all such strings;

  4. ;

We will use the following property of 2-fattest numbers, proved in [2]:

Lemma 2

Given an interval of strictly positive integers:

  1. Let be the largest number such that there exists an integer satisfying . Then is unique, and the number is the 2-fattest number in .

  2. If , there exists at most a single value such that .

  3. If is such that does not contain any value of the form , then and the interval may contain at most a single value of the form .
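For concreteness, the 2-fattest number of an interval of strictly positive integers, i.e., the unique element divisible by the largest power of two, can be computed with a couple of word operations. A minimal sketch (the function name is ours):

```python
def two_fattest(a, b):
    """The 2-fattest number in the interval [a, b], for 1 <= a <= b: the
    unique element divisible by the largest power of two.  Clearing the
    bits of b below the highest bit in which a-1 and b differ yields it."""
    assert 1 <= a <= b
    k = ((a - 1) ^ b).bit_length() - 1  # position of highest differing bit
    return b & (-1 << k)
```

For example, the 2-fattest number in [5, 9] is 8, and in [6, 7] it is 6; the z-fast trie queries the dictionary only at such positions.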

Now, the correctness proof of Lemma 1 follows. (1) Initially, when we have , and this interval contains at most a single value of the form , that is . Now after some iteration suppose that we have at most a single such that . We have two cases:

  • There is no such that . Then, the interval remains unchanged and, by Lemma 2 (3), it will contain at most a single value of the form .

  • There is a single such that . The interval may be updated in two ways: either we set the interval to for some , or we set the interval to . In both cases, the new interval will no longer contain . By Lemma 2 (3), the new interval will contain at most a single value of the form .

(2) The fact that for some is true at the beginning, and when is reassigned it remains true: indeed, if this means that is the handle of a node found on the path to , and ; but since is a prefix of some string, and the latter is for some . This fact also implies , since the ’s have decreasing lengths.

(3) By (2), is always the length of the extent of some , whereas at the beginning, and then it can only decrease; so is a union of some skip intervals of the ancestors of and of an initial part of the skip interval of the node itself. Hence, its 2-fattest number is either the handle of some of the ancestors (possibly, of itself) or a pseudohandle of (this can only happen if is not larger than the 2-fattest number of the skip interval of ).

(4) The property is true at the beginning. Then, is reduced only in two cases: either is the 2-fattest number of the skip interval of (in this case, is assigned ); or we are querying with a pseudohandle of or with the handle of a leaf (in this case, is assigned the value ). In both cases, is reduced to , which is still not smaller than the extent of the parent of .

Appendix E Proof of Theorem 6

Rather than describing the proof from scratch, we describe the changes necessary to the proof given in Appendix C.

The main idea is that of setting , , and letting and . In this way, since clearly the prefixes of the form and are and . The problem is that now the prefixes at the start of each node (i.e., those of length , , …) are too many.

To obviate this problem, we record significantly fewer prefixes. More precisely, we record sequences of prefixes of increasing lengths:

  • For non-deep nodes, we first store prefixes of lengths , , … until we hit a multiple of , say . Then we record prefixes of lengths , , … until we hit a multiple of , and so on, until we hit a multiple of . We finally terminate by recording prefixes of lengths that are multiples of .

  • We work analogously with and for deep nodes whose parents are also deep. That is, we store all prefixes of lengths , , … until we hit a length of the form , then record all prefixes of lengths , … until we hit a length of the form , and so on, until we record a length of the form . We finally store all prefixes of lengths , ….

  • For deep nodes with non-deep parents, we do the following two things:

    • We first record short prefixes. We record all prefixes of lengths , , … until we hit either a multiple of or length . If we have hit length , we stop recording short prefixes. Otherwise, we continue in the same way with prefixes of lengths that are multiples of for increasing , each time terminating the step if we hit a multiple of , or halting the recording of short prefixes altogether if we hit length .

    • Secondly, we record long prefixes, that is, all prefixes of lengths of the form

Clearly each node contributes at most short prefixes (respectively, long prefixes). In the first case, there are obviously short prefixes. In the second case, since the number of deep nodes is at most , there are at most long prefixes. Overall, requires bits, whereas requires bits.

The query procedure is modified as follows: for , , …, , if is short we consider the string and check whether . If this happens, we know that is the name of the exit node of and we return it. Note that if is a prefix of some string in , this must happen at some point (eventually at ). If