1 Introduction
The dynamic ordered set problem with integer keys is to represent a set S ⊆ {0, …, u−1}, with |S| = n, such that the following operations are supported: Member(x) determines whether x ∈ S; Insert(x)/Delete(x) inserts/deletes x in/from S; Predecessor(x)/Successor(x) returns the next smaller/larger element from S; Minimum/Maximum returns the smallest/largest element from S. This is among the most studied problems in Computer Science (see the introduction to parts III and V of the book by Cormen et al. (2009)). Many solutions to this problem are known to require an optimal amount of time per operation within polynomial space. For example, under the comparison-based model that allows only two keys to be compared in O(1) time, it is well-known that any self-balancing search tree data structure, such as AVL or Red-Black, solves the problem optimally in O(log n) worst-case time and O(n) words of space. (Unless otherwise specified, all logarithms are binary throughout the article.)
However, working with integer keys makes it possible to beat the O(log n) time bound with a RAM model having word size w = Θ(log u) bits Pǎtraşcu and Thorup (2014); van Emde Boas (1975); Willard (1983); Fredman and Willard (1993). In this scenario, classical solutions include the van Emde Boas tree van Emde Boas (1975, 1977); van Emde Boas et al. (1977), the y-fast trie Willard (1983) and the fusion tree Fredman and Willard (1993) — historically the first data structure that broke the barrier of O(log n), by exhibiting an improved running time of O(log n / log w).
In this work, we are interested in preserving the asymptotic time optimality for the operations under compressed space. A simple information-theoretic argument Pagh (2001) shows that one needs at least B = ⌈log₂ (u choose n)⌉ ≈ n log₂(u/n) + n log₂ e bits to represent S (e is the base of the natural logarithm), because there are (u choose n) possible ways of selecting n integers out of u. The meaning of this bound is that any solution solving the problem in optimal time but taking polynomial space, i.e., n^Θ(1) bits, is asymptotically larger than necessary.
Interestingly, the Elias-Fano representation Elias (1974); Fano (1971) of the ordered set S uses EF(S) = n⌈log₂(u/n)⌉ + 2n bits, which is at most n log₂(u/n) + 3n bits. Since B ≈ n log₂(u/n) + n log₂ e, we conclude that Elias-Fano is at most 2n bits away from the information-theoretic minimum Grossi et al. (2009). We describe Elias-Fano in Section 2.1.
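As a quick numeric check of these two bounds (a sketch; the sizes u and n below are arbitrary choices):

```python
import math

def info_theoretic_min(u, n):
    # B = ceil(log2(C(u, n))): minimum number of bits needed to distinguish
    # all n-subsets of {0, ..., u-1}
    return math.ceil(math.log2(math.comb(u, n)))

def elias_fano_bits(u, n):
    # EF = n * ceil(log2(u/n)) + 2n bits: low parts plus high-part bitmap
    return n * math.ceil(math.log2(u / n)) + 2 * n

u, n = 100_000, 1_000
B = info_theoretic_min(u, n)       # about n*log2(u/n) + n*log2(e) bits
EF = elias_fano_bits(u, n)
assert B <= EF <= B + 2 * n        # at most 2 extra bits per element
```

The assertion reflects the redundancy claim above: Elias-Fano never exceeds the information-theoretic minimum by more than 2n bits.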
Given the total order of S, it is natural to extend the problem by also considering the operation Access that, given an index i, returns the i-th smallest element from S. (This operation is also known as Select.) It should also be noted that, for any key x, the operation Member(x) can be implemented by running Predecessor(x) and checking whether the returned value is equal to x or not. Furthermore, it is well-known that Predecessor and Successor have the same complexities and are solved similarly, thus we only discuss Predecessor. Lastly, returning the smallest/largest integer from S can be trivially done by storing these elements explicitly in O(w) bits (which is negligible compared to the space needed to represent S) and updating them as needed upon insertions/deletions. For these reasons, the problem we consider in this article is formalized as follows.
Problem 1.
Dynamic ordered set with random access — Given a nonnegative integer u, represent an ordered set S ⊆ {0, …, u−1}, with |S| = n, such that the following operations are supported for any x ∈ {0, …, u−1} and 0 ≤ i < n:

Access(i) returns the i-th smallest element from S,

Insert(x) sets S = S ∪ {x},

Delete(x) sets S = S \ {x},

Predecessor(x) returns max{y ∈ S : y ≤ x}.
Our contribution
In this article we describe a solution to Problem 1 whose space in bits is expressed in terms of EF(S) — the cost of representing S with Elias-Fano — and achieves optimal running times. We consider a unit-cost RAM model with word size w = Θ(log u) bits, allowing multiplication. We study the asymptotic behaviour of the data structures, therefore we also assume, without loss of generality, that n is larger than a sufficiently big constant Pagh (2001).
For the important and practical case where the integers come from a polynomial universe of size u = n^Θ(1), we give a solution that uses EF(S) + o(n) bits, thus introducing a sublinear redundancy with respect to EF(S), and supports: Access in O(log n / log log n) time; Insert, Delete and Predecessor in O(log log n) time. The time bound for random access under updates matches a lower bound given by Fredman and Saks (1989) for dynamic selection. Dynamic predecessor search, instead, matches a lower bound given by Pǎtraşcu and Thorup (2006). Our result significantly improves the space of the best known solution by Pǎtraşcu and Thorup (2014) which takes optimal time but polynomial space, i.e., Θ(n w) bits.
2 Preliminaries
In this section we illustrate the context of our work, whose discussion is articulated in three parts. We first describe the static Elias-Fano representation because it is a key ingredient of our solutions. Then we discuss the results concerning the static predecessor and dynamic ordered set (and related) problems, stressing what lower bounds apply to these problems. Recall that we use a RAM model with word size w = Θ(log u) bits.
2.1 Static Elias-Fano representation
Lemma 1.

The Elias-Fano representation of an ordered set S ⊆ {0, …, u−1}, with |S| = n, takes EF(S) ≤ n⌈log₂(u/n)⌉ + 2n bits.

Space complexity
Let S(i) indicate the i-th smallest element of S. We write each S(i) in binary using ⌈log₂ u⌉ bits. The binary representation of each integer is then split into two parts: a low part consisting of the rightmost ℓ = ⌈log₂(u/n)⌉ bits that we call low bits and a high part consisting of the remaining ⌈log₂ u⌉ − ℓ bits that we similarly call high bits. Let us call l_i and h_i the values of the low and high bits of S(i) respectively. The integers L = [l_0, …, l_{n−1}] are written explicitly in n⌈log₂(u/n)⌉ bits and they represent the encoding of the low parts. Concerning the high bits, we represent them in negated unary using a bitmap H of at most 2n bits as follows. We start from a 0-valued bitmap and set the bit in position h_i + i, for i = 0, …, n−1. It is easy to see that the k-th negated unary value of H, say H(k), indicates that H(k) integers of S have high bits equal to k. For example, if H is 1110, 1110, 10, 10, 110, 0, 10, 10 (as in Table 1), we have H(1) = 3, so we know that there are 3 integers in S having high bits equal to 1.
Summing up the costs of high and low parts, we derive that Elias-Fano takes at most n⌈log₂(u/n)⌉ + 2n bits. Although we can opt for an arbitrary split into high and low parts, ranging from 0 to ⌈log₂ u⌉ low bits, it can be shown that ℓ = ⌈log₂(u/n)⌉ minimizes the overall space of the encoding Elias (1974). As explained in Section 1, the space of Elias-Fano is related to the information-theoretic minimum: it is at most 2n bits redundant.
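The construction just described can be sketched as follows (Python used only for illustration; we assume u = 63 for the running example of Table 1, so that ℓ = ⌈log₂(63/12)⌉ = 3):

```python
import math

def ef_encode(values, u):
    """Sketch of Elias-Fano encoding of a sorted list of integers in [0, u):
    split each value into high/low parts, store the low parts verbatim and
    the high parts as a negated-unary bitmap H."""
    n = len(values)
    lo_bits = max(1, math.ceil(math.log2(u / n)))       # l = ceil(log2(u/n))
    lows = [v & ((1 << lo_bits) - 1) for v in values]
    highs = [v >> lo_bits for v in values]
    H = []
    # for each possible high value h, in increasing order, write as many 1s
    # as there are elements whose high part equals h, followed by a 0
    for h in range((u >> lo_bits) + 1):
        H.extend([1] * sum(1 for x in highs if x == h))
        H.append(0)
    return H, lows, lo_bits

S = [3, 4, 7, 13, 14, 15, 21, 25, 36, 38, 54, 62]       # running example
H, lows, l = ef_encode(S, 63)
# first negated-unary run is 1110: three integers (3, 4, 7) have high part 0
```

The bitmap H produced for the example matches the H row of Table 1, and the low parts match its L row.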
S     3    4    7    13   14   15   21   25   36   38   54   62
high  000  000  000  001  001  001  010  011  100  100  110  111
low   011  100  111  101  110  111  101  001  100  110  110  110
H     1110  1110  10  10  110  0  10  10
L     011.100.111  101.110.111  101  001  100.110  110  110
Example
Table 1 shows a graphical example for the sorted set S = {3, 4, 7, 13, 14, 15, 21, 25, 36, 38, 54, 62}. The missing high bits embody the representation of the fact that, using ⌈log₂ u⌉ − ℓ bits to represent the high part of an integer, we have at most 2^{⌈log₂ u⌉ − ℓ} distinct high parts, but not all of them need to be present. In Table 1, we have ⌈log₂ u⌉ − ℓ = 3 and we can form up to 2³ = 8 distinct high parts. Notice that, for example, no integer has high part equal to 101, which is, therefore, a “missing” high part.
Random access
A remarkable property of Elias-Fano is that it can be indexed to support Access in O(1) worst-case time. The operation is implemented by building an auxiliary data structure on top of H that answers Select₁ queries. The answer to a query Select₁(H, i) over a bitmap H is the position of the i-th bit set to 1. This auxiliary data structure is succinct in the sense that it is negligibly small in asymptotic terms, compared to EF(S), requiring only o(n) additional bits (Mäkinen and Navarro, 2007; Vigna, 2013), hence bringing the total space of the encoding to EF(S) + o(n) bits. For a given i, we proceed as follows. The low bits l_i are trivially retrieved from L, as they are stored contiguously in the positions [iℓ, (i+1)ℓ). The retrieval of the high bits is, instead, more complicated. Since we write in negated unary how many integers share the same high part, we have a bit set for every integer of S and a 0 for every distinct high part. Therefore, to retrieve h_i, we need to know how many 0s are present before the (i+1)-th 1 of H. This quantity is evaluated in O(1) as h_i = Select₁(H, i) − i. Lastly, re-linking the high and low bits together is as simple as: Access(i) = (h_i << ℓ) | l_i, where << indicates the left shift operator and | is the bitwise OR.
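A minimal sketch of Access on the running example of Table 1, with a scanning Select₁ standing in for the constant-time succinct index:

```python
# Elias-Fano of S = {3,4,7,13,14,15,21,25,36,38,54,62} (Table 1), 3 low bits
H = [1,1,1,0, 1,1,1,0, 1,0, 1,0, 1,1,0, 0, 1,0, 1,0]   # negated-unary highs
LOW = [3, 4, 7, 5, 6, 7, 5, 1, 4, 6, 6, 6]             # low parts
L_BITS = 3

def select1(bits, i):
    """Position of the (i+1)-th set bit; done by scanning here, but O(1)
    with the o(n)-bit auxiliary index discussed in the text."""
    ones = -1
    for pos, b in enumerate(bits):
        ones += b
        if ones == i:
            return pos
    raise IndexError(i)

def access(i):
    # the number of 0s preceding the (i+1)-th 1 is exactly the high part h_i,
    # and it equals select1(H, i) - i
    high = select1(H, i) - i
    return (high << L_BITS) | LOW[i]
```

For example, access(3) recovers 13: its 1 sits at position 4 of H, so the high part is 4 − 3 = 1 and the value is (1 << 3) | 5.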
Predecessor search
The query Predecessor(x) is supported in O(1 + log(u/n)) time as follows. Let h_x be the high bits of x and let Select₀(H, k) denote the position of the (k+1)-th 0 in H. Then, for h_x > 0, i = Select₀(H, h_x − 1) − h_x + 1 indicates that there are i integers in S whose high bits are less than h_x; the corner case h_x = 0 is handled by setting i = 0. On the other hand, j = Select₀(H, h_x) − h_x gives us the position at which the elements having high bits larger than h_x start. These two preliminary operations take O(1). Now we can conclude the search in the range S[i, j), having skipped a potentially large range of elements that, otherwise, would have required to be compared with x. The range S[i, j) may contain up to O(u/n) integers that we search with binary search. The O(1 + log(u/n)) time bound follows. In particular, it could be that S[i, j) is empty: in this case S[i − 1] is the element to return, if i > 0.
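The search just described can be sketched as follows, again on the running example of Table 1, with a scanning Select₀ standing in for the constant-time primitive (the code assumes 0 ≤ x < 63 and the convention Predecessor(x) = max{y ∈ S : y ≤ x}):

```python
from bisect import bisect_right

H = [1,1,1,0, 1,1,1,0, 1,0, 1,0, 1,1,0, 0, 1,0, 1,0]   # Table 1, negated unary
LOW = [3, 4, 7, 5, 6, 7, 5, 1, 4, 6, 6, 6]             # low parts, 3 bits each
L_BITS = 3

def select0(bits, k):
    """Position of the (k+1)-th 0; scanning here, O(1) with a succinct index."""
    zeros = 0
    for pos, b in enumerate(bits):
        if b == 0:
            if zeros == k:
                return pos
            zeros += 1
    return len(bits)

def access(i):
    ones = 0
    for pos, b in enumerate(H):
        if b == 1:
            if ones == i:
                return ((pos - i) << L_BITS) | LOW[i]   # re-link high and low
            ones += 1

def predecessor(x):
    """max{y in S : y <= x}, or None."""
    hx = x >> L_BITS
    # number of elements with high part < hx and <= hx, respectively:
    start = select0(H, hx - 1) - (hx - 1) if hx > 0 else 0
    end = select0(H, hx) - hx
    # binary search among the (at most u/2^l) candidates sharing high part hx;
    # within this run, comparing low parts suffices
    k = bisect_right(LOW, x & ((1 << L_BITS) - 1), start, end)
    return access(k - 1) if k > 0 else None
```

For instance, predecessor(30) jumps directly to the (single-element) run of high part 3 and returns 25, while predecessor(2) correctly reports that no element is small enough.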
Partitioning the representation
In this article we will make extensive use of the following property of Elias-Fano.
Property 2.

Given an ordered set S ⊆ {0, …, u−1}, with |S| = n, let EF(S[i, j)) indicate the Elias-Fano representation of the segment S[i, j), for any 0 ≤ i < j ≤ n. Then, given an index 0 < k < n, we have that EF(S[0, k)) + EF(S′[k, n)) ≤ EF(S), where S′[k, n) is the segment obtained from S[k, n) by subtracting S(k − 1) from each of its elements.
The property tells us that splitting the Elias-Fano encoding of S does not increase its space of representation. This is possible because each segment can be encoded with a reduced universe, by subtracting from each integer the last value of the preceding segment (the first segment is left as is). Informally, we say that a segment is “remapped” relatively to its own universe. The property can be easily extended to work with an arbitrary number of splits. Let us now prove it.
Proof.

We know that EF(S) takes n⌈log₂(u/n)⌉ + 2n bits, where ℓ = ⌈log₂(u/n)⌉ is the number of low bits. Similarly, EF(S[0, k)) and EF(S′[k, n)), whose reduced universes are u₁ and u₂ with u₁ + u₂ ≤ u, are minimized by choosing ℓ₁ = ⌈log₂(u₁/k)⌉ and ℓ₂ = ⌈log₂(u₂/(n − k))⌉. Any other choice of ℓ₁ and ℓ₂ yields a larger cost, in particular the choice ℓ₁ = ℓ₂ = ℓ, which costs kℓ + (n − k)ℓ = nℓ bits for the low parts and at most 2n bits for the high parts. Therefore EF(S[0, k)) + EF(S′[k, n)) ≤ EF(S). ∎
An important consideration to make is that Property 2 needs the knowledge of the pivoting element S(k − 1) to work, which can be stored in ⌈log₂ u⌉ ≤ w bits. This means that for small values of n it can happen that the space reduction does not exceed these additional O(w) bits. Since we do not deal with such small values of n, we always assume that this is not the case.
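A small numeric illustration of Property 2 on the running example (the split point k = 8 is an arbitrary choice):

```python
import math

def ef_bits(u, n):
    """Elias-Fano space (in bits) for n integers in [0, u)."""
    return n * math.ceil(math.log2(u / n)) + 2 * n

S, u, k = [3, 4, 7, 13, 14, 15, 21, 25, 36, 38, 54, 62], 63, 8
pivot = S[k - 1]                        # 25, stored separately in O(w) bits
left = ef_bits(pivot + 1, k)            # S[0, k) lives in universe [0, pivot]
right = ef_bits(u - pivot, len(S) - k)  # remapped S[k, n) lives in [0, u - pivot)
# splitting (and remapping) never increases the space of representation:
assert left + right <= ef_bits(u, len(S))
```

Here the unsplit encoding takes 60 bits, while the two segments take 32 + 24 = 56 bits, not counting the O(w) bits for the pivot.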
2.2 The static predecessor problem
Simple solutions
There are two simple solutions to the static predecessor problem. The first uses an array P[0, u) where we store the answers to all possible queries, i.e., P[x] = Predecessor(x) for any x ∈ {0, …, u−1}; thus the problem is solved in O(1) worst-case time and O(u log u) bits. The second solution stores S as a sorted array and answers the queries using binary search, therefore taking O(n log u) bits and O(log n) worst-case time. Both solutions are unsatisfactory: the first one because of its space; the second one because of its time.
Lower bounds
Ajtai (1988) proved the first time lower bound for polynomial space, i.e., n^{O(1)} memory words, claiming that for every word size w there exists a value of n that gives Ω(√(log w)) query time. Miltersen (1994) elaborated on Ajtai's result and also showed that for every n there exists a word size w that gives Ω(∛(log n)) query time.
For the dense case of u = n log^{O(1)} n, Pagh (2001) gave a static data structure taking B + o(n) bits and answering membership and predecessor queries in O(1) worst-case time. (We consider larger universes in this article.)
Beame and Fich (1999, 2002) proved two strong bounds for any cell-probe data structure using polynomial space. They proved that there exists a value of n that requires Ω(log w / log log w) query time and that there exists a word size w that requires Ω(√(log n / log log n)) query time. They also gave a static data structure achieving O(min{log w / log log w, √(log n / log log n)}) query time,
which is, therefore, optimal.
Building on a long line of research, Pǎtraşcu and Thorup (2006, 2007) finally proved the following optimal (up to constant factors) space/time tradeoff.
Theorem 3.

(Pǎtraşcu and Thorup (2006, 2007)) For a static data structure using s words of w bits each on a set of n integers from a universe of size u, the optimal query time of predecessor search is, up to constant factors, the minimum of four branches, the first two being log_w n and log((log u − log n) / log(sw/n)).
This lower bound holds for cell-probe, RAM, trans-dichotomous RAM, external memory and communication game models. The first branch of the tradeoff indicates that, whenever one integer fits in one memory word, fusion trees Fredman and Willard (1993) are optimal as they have O(log_w n) query time. The second branch holds for polynomial universes, i.e., when u = n^γ, for any γ = Θ(1). In such important case the optimal query time is Θ(log log u), therefore y-fast tries Willard (1983) and van Emde Boas trees van Emde Boas (1975, 1977); van Emde Boas et al. (1977) are optimal with query time O(log log u). The last two bounds of the tradeoff, instead, treat the case of superpolynomial universes and are out of scope for this work.
For example, given a space budget of n polylog(n) words, we have that y-fast tries and van Emde Boas trees are optimal if w = 2^{O(√(log n))} and fusion trees are optimal if w = 2^{Ω(√(log n))}.
Predecessor queries in succinct space
We are now interested in determining the optimal running time of Predecessor given the Elias-Fano space bound of EF(S) + o(n) bits from Lemma 1, knowing that the time for dynamic predecessor with logarithmic update time cannot be better than that of static predecessor (allowing polynomial space) Pǎtraşcu and Thorup (2014).
We make the following observation.
Observation 1.

Given any linear-space data structure supporting Predecessor in t(u, n) worst-case time, an ordered set S ⊆ {0, …, u−1} with |S| = n can be represented in EF(S) + O((n/b) w) bits such that Access is supported in O(1) and Predecessor in O(t(u, n/b) + log b) worst-case time, where b = log^{1+ε} u, for any constant ε > 0.
We represent S with Elias-Fano and (logically) divide it into blocks of b integers each (the last block may contain fewer integers). We can solve Predecessor queries in a block in O(log b) time by applying binary search, given that each access is performed in O(1) time. The first element of each block (and its position in S) is also stored in the linear-space data structure solving Predecessor in t(u, n/b) time. The space of such data structure is O((n/b) w) bits.
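A sketch of this two-level scheme, with a plain Python list standing in both for the Elias-Fano encoding (which grants O(1) Access) and for the linear-space predecessor structure over the block heads (block size b = 4 is an arbitrary choice):

```python
from bisect import bisect_right

class TwoLevelPredecessor:
    """Sample every b-th element into an auxiliary predecessor structure
    (a sorted list here, standing in for e.g. a y-fast trie), then finish
    with binary search inside one block of b elements, each fetched via
    O(1) Access on the Elias-Fano encoding (plain list indexing here)."""
    def __init__(self, S, b):
        self.S, self.b = S, b
        self.samples = S[::b]                 # first element of each block
    def predecessor(self, x):
        """max{y in S : y <= x}, or None."""
        j = bisect_right(self.samples, x) - 1   # block that may contain x
        if j < 0:
            return None                          # x smaller than every element
        lo = j * self.b
        hi = min(lo + self.b, len(self.S))
        i = bisect_right(self.S, x, lo, hi) - 1  # binary search in the block
        return self.S[i]

T = TwoLevelPredecessor([3, 4, 7, 13, 14, 15, 21, 25, 36, 38, 54, 62], 4)
```

The auxiliary structure holds only n/b keys, which is where the O((n/b) w)-bit redundancy in Observation 1 comes from.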
Corollary 4.

An ordered set S ⊆ {0, …, u−1}, with |S| = n and u = n^Θ(1), can be represented in EF(S) + o(n) bits such that Access is supported in O(1) and Predecessor in optimal O(log log u) worst-case time.
The linear-space data structure in Observation 1 is chosen to be a y-fast trie, whose query time is optimal for polynomial universes (second branch of Theorem 3). The space of the y-fast trie is O((n/b) w) = o(n) bits.
Let T(u, n) = O(min{1 + log(u/n), log log u}). The O(log log u) bound only depends on u, whereas the plain Elias-Fano bound of O(1 + log(u/n)) depends on both u and n, thus neither of the two bounds is always the best. In fact, we have that 1 + log(u/n) < log log u whenever u/n < (log u)/2. From this last condition we derive that the plain Elias-Fano bound is less than O(log log u) when n > 2u/log u. When, instead, n ≤ 2u/log u, the O(log log u) query time is optimal and exponentially better than Elias-Fano. Therefore, T(u, n) is an accurate characterization of the Predecessor time bound with EF(S) + o(n) bits.
However, for the rest of the discussion, we assume that u/n is sufficiently large so that T(u, n) = O(log log u), that is n ≤ 2u/log u.
2.3 Dynamic problems
Ordered set problem
As far as the Access operation is not supported, the following results hold. The van Emde Boas tree van Emde Boas (1975, 1977); van Emde Boas et al. (1977) is a recursive data structure that maintains S in O(u) words of space and O(log log u) worst-case time per operation. Willard (1983) improved the space bound to O(n w) bits with the y-fast trie. (The bound for Insert/Delete becomes amortized rather than worst-case.) When polynomial universes are considered, Pǎtraşcu and Thorup (2006) proved that van Emde Boas trees and y-fast tries have an optimal query time for the dynamic predecessor problem too, that is O(log log u) worst-case.
Fredman and Willard (1993) showed how to solve the dynamic predecessor problem in O(log_w n) time and O(n) space with the fusion tree. This data structure is a tree with branching factor w^Θ(1) that stores in each internal node a fusion node, a small data structure able to answer predecessor queries in O(1) for sets of up to w^O(1) integers.
Extending their result to the dynamic predecessor problem, Beame and Fich (1999, 2002) proved that any cell-probe data structure using (log n)^{O(1)} bits per memory cell and (log n)^{O(1)} worst-case time for insertions requires Ω(√(log n / log log n)) worst-case query time. They also proved that, under a RAM model, the dynamic predecessor problem can be solved in O(√(log n / log log n)) time per operation, using linear space. This bound was matched by Andersson and Thorup (2007) with the so-called exponential search tree. This data structure has an optimal bound of O(√(log n / log log n)) worst-case time for searching and updating S, using polynomial space.
Set problems with random access
The lower bound for the problem changes when the Access operation is considered, because this operation is related to the partial sums problem, that is: given an integer array A[0, n), support Sum(i), returning the sum of the first i integers; Update(i, Δ), which sets A[i] to A[i] + Δ; and Search(x), which returns the smallest index i such that Sum(i) ≥ x. Fredman and Saks (1989) proved a bound of Ω(log n / log log n) amortized time per operation for this problem (see also the extended version of the article by Pǎtraşcu and Thorup (2014) — Section 5). Therefore, this is the lower bound that applies to our problem as well. Bille et al. (2018) extended the problem as to also support dynamic changes to the array.
Fredman and Saks (1989) also proved that Ω(log n / log log n) amortized time is necessary for the list representation problem, that is to support Access, Insert and Delete. However, this problem is slightly different from the one tackled here, because one can specify the position at which a key is inserted. Likewise, the Delete operation specifies the position of the key, rather than the key itself. Raman, Raman, and Rao (2001) also addressed the list representation problem (referred to as the dynamic array problem) and provided two solutions. The first solution is given by the following lemma.
Lemma 5.

(Raman, Raman, and Rao (2001)) A dynamic array containing n elements can be implemented to support Access in O(1), Insert and Delete in O(n^ε) time, using O(n^{1−ε}) pointers, where ε is any fixed positive constant.
The second solution supports all three operations in O(log n / log log n) amortized time. Both solutions take o(n) bits of redundancy (besides the space needed to store the array) and the time bounds are optimal.
By showing how to construct and update a fusion node of w^O(1) keys in constant time, Pǎtraşcu and Thorup (2014) “dynamized” the fusion tree and obtained the following result.
Lemma 6.

(Pǎtraşcu and Thorup (2014)) An ordered set S ⊆ {0, …, u−1}, with |S| = n, can be represented in O(n w) bits, supporting Insert, Delete, Rank, Select and Predecessor in O(log n / log w) time per operation.
3 Succinct Dynamic Ordered Sets with Random Access
In this section we illustrate our main result for polynomial universes: a solution to Problem 1 that uses EF(S) + o(n) bits and supports all operations in optimal time. From Section 2.3, we recall that a lower bound of Ω(log n / log log n) applies to the Access operation under updates; Predecessor search needs, instead, Ω(log log u) time as explained in Section 2.2.
Theorem 7.

An ordered set S ⊆ {0, …, u−1}, with |S| = n and u = n^Θ(1), can be represented in EF(S) + o(n) bits such that Access is supported in O(log n / log log n) time, and Insert, Delete and Predecessor in O(log log n) time.
We first show how to handle small sets of integers efficiently in Section 3.1. Then we use this solution to give the final construction in Section 3.2.
3.1 Handling small sets
In this section, we give a solution to Problem 1 working for a small set of integers.
The following lemma is useful.
Lemma 8.

(Jansson et al. (2012)) Given a collection of m blocks, each of size at most b bits, we can store it with a sublinear number of additional bits of redundancy so as to support Address in O(1) time and Realloc in O(b/w) time.
We say that the data structure of Lemma 8 has parameters (m, b). The operation Address(i) returns a pointer to where the i-th block is stored in memory; the operation Realloc(i, b′) changes the length of the i-th block to b′ bits.
Now we show the following theorem.
Theorem 9.

A small ordered set S ⊆ {0, …, u−1}, with |S| = n and n = log^O(1) u, can be represented with EF(S) + o(n) bits, supporting Access, Insert, Delete and Predecessor in O(log log u) time.
Memory management
We divide the ordered elements of S into blocks of b consecutive integers each and represent each block with Elias-Fano. We have Θ(n/b) blocks. Physically, the high and low parts of the Elias-Fano representations are stored using two different data structures.
The high parts of all blocks are stored using the data structure of Lemma 8, with parameters (Θ(n/b), O(b)). For this choice of parameters, we support both Address and Realloc in O(1) time and pay a redundancy of o(n) bits. This allows us to manipulate the high part of a block in O(1) time upon Access, Insert and Delete.
The low parts are stored in a collection of dynamic arrays, each being an instance of the data structure of Lemma 5. We maintain an array of pointers to such data structures, taking o(n) bits. Each array stores O(b) integers and supports Access in O(1), Insert and Delete in O(b^ε) time, by a suitable choice of ε in Lemma 5. The redundancy to maintain the arrays is o(n) bits.
Indexing
The blocks are indexed with a B-ary tree T, for a suitable branching factor B. Since n = log^O(1) u, it follows that the height of the tree is constant. The tree operates as a B-tree where internal nodes store up to B children. In particular, each node stores B counters, telling how many integers are present in the leaves that descend from each child. These counters are kept in prefix-sum fashion to enable binary search. Such counters take O(B log n) bits, which fit in (less than) a machine word. This allows us to update the counters of a node in O(1) time upon insertions/deletions.
Each leaf node also stores two offsets per block, each taking O(log n) bits. The first offset is the position, in the array of pointers, of the pointer to the dynamic array storing the low parts of the Elias-Fano representation of the block. The second offset tells where the low parts of the block are stored inside the dynamic array. Thus the overhead per block for the offsets is O(log n) bits. As usual, each internal node also stores a pointer per child, thus maintaining the tree topology imposes an additional overhead of O(w) bits per block. Since the overhead per block is O(w) bits, it follows that the total space of T is o(n) bits for b = ω(w).
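The constant-time update of the packed prefix-sum counters mentioned above can be illustrated as follows (the field width and number of counters are arbitrary choices that fit a 64-bit word; a precomputed mask turns the update into a single addition):

```python
# B-tree node counters kept in prefix-sum form and packed into one machine
# word: 7 counters of 8 bits each (56 bits total)
F, C = 8, 7

def pack(counts):
    w = 0
    for i, v in enumerate(counts):
        w |= v << (i * F)
    return w

def unpack(w):
    return [(w >> (i * F)) & ((1 << F) - 1) for i in range(C)]

# masks[j] has a 1 at the lowest bit of every field with index >= j
masks = [sum(1 << (i * F) for i in range(j, C)) for j in range(C)]

def insert_into_child(w, j):
    # inserting into the subtree of child j increments every prefix sum
    # from j onward; one word addition updates all of them at once
    return w + masks[j]
```

This works as long as no counter overflows its field, which is guaranteed when the counters fit in O(log n) bits each, as in the text.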
Operations
To support Access(i), we navigate the tree, spending O(log B) time per level by binary searching the counters, which is O(log B) overall since the height is constant. The proper block is therefore identified in O(log B) time and the wanted integer is returned in O(1) time from it, knowing the local offset of the integer inside the block as calculated during the traversal.
To support Insert(x), we need to identify the proper block where to insert the new integer. (The Delete operation is symmetric.) Again, we use binary search on each level of the tree, but searching among the last values of the indexed blocks. We can retrieve the last value of a block in O(1) time, having the pointer to the block and its size information from the counters. This is trivial at the leaves. In the internal nodes, instead, if the upper bound of the k-th child is needed for comparison, we access the block storing such value by following the pointer to the rightmost block indexed in the subtree rooted in the k-th child. Accessing the rightmost block takes O(1) time. Having located the proper block, we insert the new integer in O(b^ε) time, as explained before. Updating the counters in each node of the tree along the root-to-leaf path takes O(1) time per node, as they fit in one machine word. If a split or merge of a block happens, it is handled as in a B-tree and solved in a constant number of O(b)-time operations.
During a Predecessor(x) search we identify the proper block in O(log B) time, as explained for Insert, and return the predecessor by binary searching the block's values. The total time of the search is O(log B + log b).
Space complexity
We now analyze the space taken by the Elias-Fano representations of the blocks. Our goal is to show that such space can be bounded by EF(S), that is the space of encoding the whole set S with Elias-Fano. Since the universe of representation of a block could be as large as u, storing the lower bounds of the blocks in order to use reduced universes — as for Property 2 — would require O((n/b) w) bits of redundancy. This is excessive because if the data structure is replicated every n integers to represent a larger dynamic set, then these lower bounds would cost a number of bits that is not sublinear in the size of the set. We show that this extra space can be avoided, observing that the number of bits used to represent the low parts of Elias-Fano remains the same for a sufficiently long sequence of updates.
From Section 2.1 recall that Elias-Fano represents each low part with ℓ = ⌈log₂(u/n)⌉ bits. Now, suppose that the low parts of the blocks are encoded using a suboptimal value ℓ′ instead of ℓ. After a sufficiently long sequence of updates, ℓ′ is set to ℓ by rebuilding the blocks. It is easy to see that Ω(n) updates are required to let ℓ become ℓ ± 1, because ⌈log₂(u/n)⌉ changes by 1 whenever its argument doubles (halves), that is whenever n halves (doubles). Therefore we can rebuild the blocks every Θ(n) updates, which guarantees an O(1) amortized cost per update. Storing the current value of ℓ′ adds a global redundancy of O(w) bits, which is negligible.
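A small experiment illustrating the amortization argument (the sizes are arbitrary; n₀ is the set size right after a rebuild, chosen so that u/n₀ is a power of two):

```python
import math

def low_bits(u, n):
    # number of bits used for each Elias-Fano low part
    return math.ceil(math.log2(u / n))

u, n0 = 1 << 20, 1024        # right after a rebuild: l = ceil(log2(u/n0)) = 10
l = low_bits(u, n0)
k = 0
while low_bits(u, n0 + k) == l:   # count insertions until l changes
    k += 1
# the argument u/n must halve, i.e. n must double, before l decreases,
# so the structure absorbs n0 insertions before the next rebuild
```

Since a rebuild costs time linear in the set size, spreading it over the Θ(n) intervening updates yields the claimed O(1) amortized cost.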
3.2 Final construction
Now we prove the final result – Theorem 7 – whose key ingredient is the data structure given in Theorem 9.
Lower level
We divide the ordered elements of S into blocks of polylogarithmic size and represent them using the tree data structure of Theorem 9. Therefore, we have a forest of such data structures.
Upper level
The first element of each block is (also) stored in the data structure of Lemma 6, that is a dynamic fusion tree with out-degree w^Θ(1), and in a y-fast trie. Let us call these data structures F and Y respectively. The i-th leaf of both F and Y holds a pointer to the data structure representing the i-th block.
Space and time complexity
The lower level costs EF(S) + o(n) bits. The total cost of the upper level is o(n) bits, because F and Y only store one element per block. Since each block is remapped relatively to its universe, Property 2 guarantees that the space of representation of the blocks is at most EF(S) bits. The space bound claimed in Theorem 7 follows.
A total running time of O(log n / log log n) for Access follows because the F data structure operates in this time. For Insert, Delete and Predecessor, we use the Y data structure, thus attaining O(log log u) = O(log log n) time. (The bound for Insert and Delete is amortized rather than worst-case.)
4 Append-only
In this section we extend the result given in Corollary 4 to the case where the integers are inserted in sorted order using an Append operation. In this case, we obtain an append-only representation.
Theorem 10.

An ordered set S ⊆ {0, …, u−1}, with |S| = n and u = n^Θ(1), can be represented in EF(S) + o(n) bits such that Append and Access are supported in O(1) time, and Predecessor in O(log log u) time.
Data structure and space analysis
We maintain an array A of size b where the integers are appended uncompressed. The array is periodically encoded with Elias-Fano in O(b) time and overwritten. Each compressed representation of the buffer is appended to another array of blocks encoded with Elias-Fano. More precisely, when A is full we encode with Elias-Fano its corresponding differential buffer, i.e., the buffer whose values are A[i] − A[0], for 0 ≤ i < b. Each time the array A is compressed, we append to another array V the pair (base, low), i.e., the buffer lower bound value (base) and the number of bits (low) needed to encode the average gap of the Elias-Fano representation of the block.
As discussed for Corollary 4, we store the buffer lower bounds in a y-fast trie. More precisely, it stores a buffer lower bound and the index of the Elias-Fano-encoded block to which the lower bound belongs. The space of this data structure is o(n) bits. Besides the space of the y-fast trie and that of the Elias-Fano-encoded blocks, the redundancy of the data structure is due to: (1) O(b w) bits for the array A and its (current) size; (2) O((n/b) w) bits for the pointers to the Elias-Fano-encoded blocks; (3) O((n/b) w) bits for the array V; and it sums up to o(n) bits for a suitable choice of b.
Lastly, Property 2 guarantees that the space taken by the blocks encoded with Elias-Fano can be safely upper bounded by EF(S), so that the overall space of the data structure is at most EF(S) + o(n) bits.
Operations
The operations are supported as follows. Since we compress the array A each time it fills up (taking O(b) time), Append is performed in O(1) amortized time. Appending b new integers to the buffer accumulates a credit that (largely) pays the cost of appending a value to the y-fast trie. To Access the i-th integer, we retrieve the element in position i mod b from the compressed block of index ⌊i/b⌋. This is done in O(1) worst-case time, since we know how many low bits are required to perform Access by reading the corresponding pair in V. We finally return the integer after adding back the base value. To solve Predecessor(x), we first resolve a query in the y-fast trie to identify the index j of the compressed block where the predecessor is located. This takes O(log log u) worst-case time. We return the final result by binary searching the block of index j in O(log b) worst-case time.
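The whole append-only scheme can be sketched as follows, with plain Python lists standing in for the Elias-Fano-encoded differential blocks and for the y-fast trie over the block lower bounds (buffer size b = 4 is an arbitrary choice):

```python
from bisect import bisect_right

class AppendOnlySet:
    """Sorted integers arrive in an uncompressed buffer of size b; a full
    buffer is frozen into a differential block (standing in for its Elias-Fano
    encoding) and its lower bound is kept in a sorted list (standing in for
    the y-fast trie)."""
    def __init__(self, b):
        self.b, self.buf = b, []
        self.bases, self.blocks = [], []
    def append(self, x):
        self.buf.append(x)
        if len(self.buf) == self.b:        # "compress" the full buffer
            base = self.buf[0]
            self.bases.append(base)
            self.blocks.append([v - base for v in self.buf])
            self.buf = []
    def access(self, i):
        j, r = divmod(i, self.b)
        if j < len(self.blocks):
            return self.bases[j] + self.blocks[j][r]
        return self.buf[r]                 # i falls in the open buffer
    def predecessor(self, x):
        """max{y <= x}, or None."""
        if self.buf and self.buf[0] <= x:  # answer lies in the open buffer
            return self.buf[bisect_right(self.buf, x) - 1]
        j = bisect_right(self.bases, x) - 1   # candidate block by lower bound
        if j < 0:
            return None
        block, base = self.blocks[j], self.bases[j]
        return base + block[bisect_right(block, x - base) - 1]

A = AppendOnlySet(4)
for v in [3, 4, 7, 13, 14, 15, 21, 25, 36, 38, 54, 62]:
    A.append(v)
```

The two-step Predecessor (lower-bound lookup, then binary search inside one block) mirrors the O(log log u + log b) bound discussed above.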
5 Conclusions
In this paper we have shown that Elias-Fano can be used to obtain a succinct dynamic data structure with optimal update and query time, solving the dynamic ordered set with random access problem. Our main result holds for polynomial universes and is a data structure using the same asymptotic space of Elias-Fano — EF(S) + o(n) bits, where EF(S) ≤ n⌈log₂(u/n)⌉ + 2n — and supporting Access in O(log n / log log n) time, and Insert, Delete and Predecessor in O(log log n) time. All time bounds are optimal. Note that the space of the solution can be rewritten in terms of the information-theoretic minimum B, since EF(S) ≤ B + 2n bits.
An interesting open problem is: can the space be improved to B + o(n) bits while preserving the operational bounds?

Another question is: can the result be extended to non-polynomial universes?
In this case, the lower bound for dynamic predecessor search is Ω(log n / log w), which corresponds to the first branch of the time/space tradeoff in Theorem 3, as well as the one for Access, Insert and Delete Pǎtraşcu and Thorup (2014). It seems that a different solution than the one described here has to be found, since the data structure of Theorem 7 supports all operations in O(log log u) time when non-polynomial universes are considered. Therefore, we give the following corollary that matches the asymptotic time bounds of y-fast tries and van Emde Boas trees (albeit suboptimal) but in almost optimally compressed space.
Corollary 11.

An ordered set S ⊆ {0, …, u−1}, with |S| = n, can be represented in EF(S) + o(n) bits such that Access, Insert, Delete and Predecessor are all supported in O(log log u) time.
References
M. Ajtai (1988). A lower bound for finding predecessors in Yao's cell probe model. Combinatorica 8(3), pp. 235–247.
A. Andersson and M. Thorup (2007). Dynamic ordered sets with exponential search trees. Journal of the ACM (JACM) 54(3), Article 13.
P. Beame and F. E. Fich (1999). Optimal bounds for the predecessor problem. In Proceedings of the 31st Annual Symposium on Theory of Computing (STOC), pp. 295–304.
P. Beame and F. E. Fich (2002). Optimal bounds for the predecessor problem and related problems. Journal of Computer and System Sciences (JCSS) 65(1), pp. 38–72.
P. Bille, P. H. Cording, I. L. Gørtz, F. R. Skjoldjensen, H. W. Vildhøj, and S. Vind (2018). Dynamic relative compression, dynamic partial sums, and substring concatenation. Algorithmica 80(11), pp. 3207–3224.
T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein (2009). Introduction to Algorithms (3rd edition). MIT Press.
P. Elias (1974). Efficient storage and retrieval by content and address of static files. Journal of the ACM (JACM) 21(2), pp. 246–260.
R. M. Fano (1971). On the number of bits required to implement an associative memory. Memorandum 61, Computer Structures Group, MIT, Cambridge, MA.
M. L. Fredman and M. E. Saks (1989). The cell probe complexity of dynamic data structures. In Proceedings of the 21st Annual Symposium on Theory of Computing (STOC), pp. 345–354.
M. L. Fredman and D. E. Willard (1993). Surpassing the information theoretic bound with fusion trees. Journal of Computer and System Sciences (JCSS) 47(3), pp. 424–436.
R. Grossi, A. Orlandi, R. Raman, and S. S. Rao (2009). More haste, less waste: lowering the redundancy in fully indexable dictionaries. In Proceedings of the 26th International Symposium on Theoretical Aspects of Computer Science (STACS), pp. 517–528.
J. Jansson, K. Sadakane, and W.-K. Sung (2012). CRAM: compressed random access memory. In Proceedings of the 39th International Colloquium on Automata, Languages, and Programming (ICALP), pp. 510–521.
V. Mäkinen and G. Navarro (2007). Rank and select revisited and extended. Theoretical Computer Science (TCS) 387(3), pp. 332–347.
P. B. Miltersen (1994). Lower bounds for union-split-find related problems on random access machines. In Proceedings of the 26th Annual Symposium on Theory of Computing (STOC), pp. 625–634.
R. Pagh (2001). Low redundancy in static dictionaries with constant query time. SIAM Journal on Computing 31(2), pp. 353–363.
M. Pǎtraşcu and M. Thorup (2006). Time-space trade-offs for predecessor search. In Proceedings of the 38th Annual Symposium on Theory of Computing (STOC), pp. 232–240.
M. Pǎtraşcu and M. Thorup (2007). Randomization does not help searching predecessors. In Proceedings of the 18th Annual Symposium on Discrete Algorithms (SODA), pp. 555–564.
M. Pǎtraşcu and M. Thorup (2014). Dynamic integer sets with optimal rank, select, and predecessor search. In Proceedings of the 55th Annual Symposium on Foundations of Computer Science (FOCS), pp. 166–175.
G. E. Pibiri and R. Venturini (2017). Dynamic Elias-Fano representation. In Proceedings of the 28th Annual Symposium on Combinatorial Pattern Matching (CPM), pp. 30:1–30:14.
R. Raman, V. Raman, and S. S. Rao (2001). Succinct dynamic data structures. In Proceedings of the 7th International Workshop on Algorithms and Data Structures (WADS), pp. 426–437.
P. van Emde Boas, R. Kaas, and E. Zijlstra (1977). Design and implementation of an efficient priority queue. Mathematical Systems Theory (MST) 10, pp. 99–127.
P. van Emde Boas (1975). Preserving order in a forest in less than logarithmic time. In Proceedings of the 16th Annual Symposium on Foundations of Computer Science (FOCS), pp. 75–84.
P. van Emde Boas (1977). Preserving order in a forest in less than logarithmic time and linear space. Information Processing Letters (IPL) 6(3), pp. 80–82.
S. Vigna (2013). Quasi-succinct indices. In Proceedings of the 6th International Conference on Web Search and Data Mining (WSDM), pp. 83–92.
D. E. Willard (1983). Log-logarithmic worst-case range queries are possible in space Θ(n). Information Processing Letters (IPL) 17(2), pp. 81–84.