1 Introduction
We consider the dynamic dictionary problem for multisets. The special case in which every element of the universe can appear at most once is a fundamental problem in data structures and has been well studied [ANS10, PPR05, RR03, DadHPP06]. In the case of multisets, elements can have arbitrary multiplicities, and we are given an upper bound $n$ on the total cardinality of the multiset (i.e., including multiplicities) at any point in time. The goal is to design a data structure that supports multiplicity queries and allows insertions and deletions to the multiset (i.e., the dynamic setting).
A related problem is that of supporting approximate membership and multiplicity queries. The classic approximate setting allows one-sided errors in the form of false positives: given an error parameter $\varepsilon$, the probability of returning a "yes" on an element not in the set must be upper bounded by $\varepsilon$. Such data structures are known as filters. For multisets, the corresponding data structure is known as a counting filter. A counting filter returns a count that is at least the multiplicity of the element in the multiset and overcounts with probability bounded by $\varepsilon$. Counting filters have received significant attention over the years due to their applicability in practice [FCAB00, CM03, BMP06]. One of the main applications of dictionaries for multisets is precisely in designing counting filters. Namely, Carter et al. [CFG78] showed that by hashing each element into a random fingerprint, one can reduce a counting filter to a dictionary for multisets by storing the fingerprints in the dictionary.
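To make the reduction of Carter et al. concrete, here is a minimal Python sketch. The class name, the memoized random hash, and the plain Python dict standing in for a space-efficient multiset dictionary are all illustrative assumptions, not the construction of this paper.

```python
import random

class CountingFilter:
    """Counting filter via the fingerprinting reduction: hash each element
    to a fingerprint in a range of size ~ n/eps and store the fingerprints
    in a dictionary for multisets (a plain dict stands in for it here)."""

    def __init__(self, n, eps, seed=0):
        self.range = max(1, int(n / eps))  # fingerprint range, size ~ n/eps
        self.rng = random.Random(seed)
        self.fp = {}        # lazily memoized random fingerprints
        self.multiset = {}  # stand-in for the multiset dictionary

    def _fingerprint(self, x):
        if x not in self.fp:
            self.fp[x] = self.rng.randrange(self.range)
        return self.fp[x]

    def insert(self, x):
        f = self._fingerprint(x)
        self.multiset[f] = self.multiset.get(f, 0) + 1

    def delete(self, x):
        self.multiset[self._fingerprint(x)] -= 1

    def count(self, x):
        # Never undercounts: fingerprint collisions can only inflate counts.
        return self.multiset.get(self._fingerprint(x), 0)
```

Note the one-sided error: a collision between two distinct elements merges their counts, so the returned count is always at least the true multiplicity.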
For the design of both dictionaries and filters, the performance measures of interest are the space the data structure takes and the time it takes to perform the operations. For dictionaries, we would like to get close to the lower bound of $\log\binom{u}{n} \approx n\log(u/n)$ bits, where $u$ is the size of the universe.^{1}All logarithms are base $2$ unless otherwise stated; $\ln$ is used to denote the natural logarithm.^{2}This approximation holds when $u$ is significantly larger than $n$. In the case of filters, the lower bound is at least $n\log(1/\varepsilon)$ bits [LP10]. A data structure is space-efficient if the total number of bits it requires is within a factor of $1+o(1)$ of the lower bound, where the $o(1)$ term converges to zero as $n$ tends to infinity. The goal is to design data structures that are space-efficient with high probability.^{3}By with high probability (whp), we mean with probability at least $1 - 1/\mathrm{poly}(n)$. The constant in the exponent can be controlled by the designer and only affects the $o(1)$ term in the space of the dictionary or the filter. We would like to support queries, insertions and deletions in constant time in the word RAM model. The constant time guarantees should be in the worst case with high probability (see [BM01, KM07, ANS09, ANS10] for a discussion on the shortcomings of expected or amortized performance in practical scenarios). We assume that each memory access can read/write a word of $w$ contiguous bits.
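For intuition, the dictionary lower bound can be expanded with a standard binomial estimate (a sketch, stated for $2n \le u$):

```latex
\log\binom{u}{n} \;=\; n\log\frac{u}{n} \;+\; \Theta(n)\,,
\qquad\text{which follows from}\qquad
\left(\frac{u}{n}\right)^{n} \;\le\; \binom{u}{n} \;\le\; \left(\frac{e\,u}{n}\right)^{n}.
```

The additive $\Theta(n)$ term is negligible relative to $n\log(u/n)$ only when $u/n \to \infty$, which is why space bounds in the dense regime are typically stated with an explicit $O(n)$ additive term.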
The current best known dynamic dictionary for multisets was designed by Pagh, Pagh, and Rao [PPR05] based on the dictionary for sets of Raman and Rao [RR03]. The dictionary is space-efficient and supports membership queries in constant time in the worst case. Insertions and deletions take amortized expected constant time, and multiplicity queries take $O(\log n)$ time in the worst case. In the case of sets, the state-of-the-art dynamic dictionary of Arbitman, Naor and Segev [ANS10] achieves the "best of both worlds": it is space-efficient and supports all operations in constant time whp. Arbitman et al. [ANS10] leave it as an open problem whether a similar result can be achieved for multisets.
Recently, progress on this problem was achieved by Bercea and Even [BE20], who designed a constant-time dynamic space-efficient dictionary for random multisets. In a random multiset, each element is sampled independently and uniformly at random from the universe. In this paper, we build upon their work and present the first space-efficient dynamic dictionary for (arbitrary) multisets with constant time operations in the worst case with high probability, resolving the question of Arbitman et al. [ANS10] in the positive. We also obtain a counting filter with similar guarantees.
1.1 Results
In the following theorem, we assume that the size of the universe is polynomial in $n$.^{4}This is justified by mapping $\mathcal{U}$ to a universe of size $\mathrm{poly}(n)$ using pairwise independent hash functions [DadHPP06]. Overflow refers to the event that the space allocated in advance for the dictionary does not suffice.
Theorem 1 (dynamic multiset dictionary).
There exists a dynamic dictionary that maintains a multiset of cardinality at most $n$ from the universe $\mathcal{U}$ with the following guarantees: (1) For every polynomial (in $n$) sequence of operations (multiplicity query, insertion, deletion), the dictionary does not overflow whp. (2) If the dictionary does not overflow, then every operation can be completed in constant time. (3) The required space is $(1+o(1)) \cdot \log\binom{u}{n} + O(n)$ bits.
Our dictionary construction considers a natural separation into a sparse case and a dense case, based on the size of the universe relative to $n$. The sparse case, defined when $\log\log n = o(\log(u/n))$, presents a more straightforward challenge for dictionary design because the dictionary construction can afford to store $O(\log\log n)$ additional bits per element without sacrificing space-efficiency. In this case, the dictionary for multisets is based on a simple observation. Namely, elements with multiplicity at most $\log n$ can be stored in a space-efficient dictionary for sets by attaching to each element a fixed-length counter of $O(\log\log n)$ bits (see Section 3).
The majority of the paper is focused on designing a dictionary for multisets in the dense case, in which $\log(u/n) = O(\log\log n)$.^{5}This case is especially relevant in the approximate membership setting, in which $u = n/\varepsilon$ due to the reduction of Carter et al. [CFG78]. In this setting, the dense case arises in applications in which $n$ is large and the error probability $\varepsilon$ is a constant. Following [BE20], we hash distinct elements into a first level that consists of small space-efficient "bin dictionaries" of fixed capacity. The first level only stores elements of multiplicity strictly smaller than $\log^2 n$. We employ variable-length counters to encode multiplicities and store them in a separate structure called a "counter dictionary". We allocate one counter dictionary per bin dictionary. The capacity of a counter dictionary is an upper bound on the total length of the counters it stores and is linear in the capacity of the associated bin dictionary.
Elements that do not fit in the first level are stored in a secondary data structure called the spare. The spare is small enough that it can allocate $O(\log n)$-bit counters for the elements it stores. To bound the number of elements that are stored in the spare, we cast the process of hashing counters into counter dictionaries as a weighted balls-into-bins experiment in which balls have logarithmic weights (see Sec. 4.4).
As a corollary of Thm. 1, we obtain a counting filter with the following guarantees.^{6}Note that we allow $\varepsilon$ to be as small as $n/u$ (below this threshold, simply use a dictionary).
Corollary 2 (dynamic counting filter).
There exists a dynamic counting filter for multisets of cardinality at most $n$ from a universe $\mathcal{U}$ such that the following hold: (1) For every polynomial (in $n$) sequence of operations (multiplicity query, insertion, deletion), the filter does not overflow whp. (2) If the filter does not overflow, then every operation can be completed in constant time. (3) The required space is $(1+o(1)) \cdot n\log(1/\varepsilon) + O(n)$ bits. (4) For every count query, the probability of overcounting is bounded by $\varepsilon$.
1.2 Related Work
The dictionary for multisets of Pagh et al. [PPR05] is space-efficient and supports membership queries in constant time in the worst case. Insertions and deletions take amortized expected constant time, and multiplicity queries take $O(\log m)$ time for a multiplicity of $m$. Multiplicities are represented "implicitly" by a binary counter whose operations (query, increment, decrement) are simulated as queries and updates to dictionaries on sets.^{7}To be more exact, for each bit of the counter, the construction in Pagh et al. [PPR05] allocates a dictionary on sets such that the value of the bit can be retrieved by performing a lookup in the dictionary. Updating a bit of the counter is done by inserting or deleting elements in the associated dictionary. Increments and decrements to the counter take an amortized constant number of bit probes (and hence dictionary operations), but decoding the multiplicity takes $O(\log n)$ time in the worst case. We are not aware of any other dictionary constructions for multisets.^{8}Data structures for predecessor and successor queries such as [PT14] can support multisets, but they do not meet the required performance guarantees for the special case of (just) supporting multiplicity queries.
Dynamic dictionaries for sets have been extensively studied [DadH90, DDMM05, DadHPP06, RR03, FPSS05, Pan05, DW07, ANS09, ANS10]. The dynamic dictionary for sets of Arbitman et al. [ANS10] is space-efficient and supports operations in constant time whp. Their construction cannot be generalized in a straightforward manner to handle multisets. Specifically, their dictionary maintains a spare that may hold $\Theta(n/\log n)$ elements and hence cannot store counters of length $\Theta(\log n)$ per element. In contrast, the spare in our construction is guaranteed to store at most $O(n/\log^2 n)$ elements whp.
In terms of counting filters, several constructions do not come with worst case guarantees for storing arbitrary multisets [FCAB00, BMP06]. The only previous counting filter with worst case guarantees we are aware of is the widely cited Spectral Bloom filter of Cohen and Matias [CM03]. The construction is a generalization of the Bloom filter and hence requires $O(\log(1/\varepsilon))$ memory accesses per operation. The space usage is similar to that of a Bloom filter and depends on the sum of the logarithms of the multiplicities. Consequently, when the multiset is a set, the leading constant is $\log e \approx 1.44$, and hence Spectral Bloom filters are not space-efficient in general.
1.3 Paper Organization
Preliminaries are in Sec. 2. The construction for the sparse case can be found in Sec. 3 and the one for the dense case is described and analyzed in Sec. 4. Section 5 describes how our analysis works without the assumption of access to truly random hash functions. Corollary 2 is proved in Sec. 6. Appendix A reviews standard implementation techniques.
2 Preliminaries
2.1 Notation and Definitions
For $k \in \mathbb{N}$, let $[k]$ denote the set $\{0, 1, \dots, k-1\}$. For a binary string $a$, let $|a|$ denote the length of $a$ in bits. We often abuse notation and regard elements in $[u]$ as binary strings of length $\log u$. Let $\mathcal{U} = [u]$ denote the universe of all possible elements.
Definition 3 (multiset).
A multiset $\mathcal{M}$ over $\mathcal{U}$ is a function $\mathcal{M} : \mathcal{U} \to \mathbb{N}$. We refer to $\mathcal{M}(x)$ as the multiplicity of $x$.
The cardinality of a multiset $\mathcal{M}$ is denoted by $|\mathcal{M}|$ and defined by $|\mathcal{M}| \triangleq \sum_{x \in \mathcal{U}} \mathcal{M}(x)$. The support of the multiset is denoted by $\sigma(\mathcal{M})$ and is defined by $\sigma(\mathcal{M}) \triangleq \{x \in \mathcal{U} \mid \mathcal{M}(x) > 0\}$.
Operations over Dynamic Multisets. We consider the following operations: $\mathrm{count}(x)$, $\mathrm{insert}(x)$, and $\mathrm{delete}(x)$. Let $\mathcal{M}_t$ denote the multiset after $t$ operations. A dynamic multiset is specified by a sequence $\{op_t\}_t$ of insert and delete operations as follows: $\mathcal{M}_t(x) = \mathcal{M}_{t-1}(x) + 1$ if $op_t = \mathrm{insert}(x)$, and $\mathcal{M}_t(x) = \mathcal{M}_{t-1}(x) - 1$ if $op_t = \mathrm{delete}(x)$.^{9}We require that $op_t = \mathrm{delete}(x)$ only if $\mathcal{M}_{t-1}(x) > 0$, i.e., if $x$ is not in the multiset, then a delete operation does not make its multiplicity negative.
We say that a dynamic multiset has cardinality at most $n$ if $|\mathcal{M}_t| \le n$, for every $t$.
Dynamic Dictionary for Multisets. A dynamic dictionary for multisets maintains a dynamic multiset $\mathcal{M}_t$. The response to a query $\mathrm{count}(x)$ at time $t$ is simply $\mathcal{M}_t(x)$.
Dynamic Counting Filter. A dynamic counting filter maintains a dynamic multiset $\mathcal{M}_t$ and is parameterized by an error parameter $\varepsilon \in (0,1)$. Let $\hat{\mathcal{M}}_t(x)$ denote the response to a query $\mathrm{count}(x)$ at time $t$. We require that the output satisfy the following conditions:
(1) $\hat{\mathcal{M}}_t(x) \ge \mathcal{M}_t(x)$ (always), and
(2) $\Pr[\hat{\mathcal{M}}_t(x) > \mathcal{M}_t(x)] \le \varepsilon$.
Namely, $\hat{\mathcal{M}}_t(x)$ is an approximation of $\mathcal{M}_t(x)$ with a one-sided error.
Definition 4 (overcounting).
Let $\mathrm{overcount}_t(x)$ denote the event that $\hat{\mathcal{M}}_t(x) > \mathcal{M}_t(x)$ (i.e., the count query at time $t$ overcounts on $x$).
Note that overcounting generalizes false positive events in filters over sets. Indeed, a false positive event occurs if $\mathcal{M}_t(x) = 0$ and $\hat{\mathcal{M}}_t(x) > 0$.^{10}The probability space is induced only by the random choices (i.e., choice of hash functions) that the filter makes. Note also that if $x \neq y$, then the events $\mathrm{overcount}_t(x)$ and $\mathrm{overcount}_t(y)$ need not be independent.
2.2 The Model
Memory Access Model. We assume that the data structures are implemented in the RAM model in which the basic unit of one memory access is a word. Let $w$ denote the memory word length in bits. We assume that $w = \Theta(\log u)$. See Appendix A for a discussion of how the computations we perform over words are implemented in constant time.
Success Probability. We prove that overflow occurs with probability at most $1/\mathrm{poly}(n)$ and that one can control the degree of the polynomial (the degree of the polynomial only affects the $o(1)$ term in the size bound). The probability of an overflow depends only on the random choices that the dictionary makes.
Hash Functions. Our dictionary uses the succinct hash functions of Arbitman et al. [ANS10] which have a small representation and can be evaluated in constant time. For simplicity, we first analyze the data structure assuming fully random hash functions (Sec. 4.4). In Sec. 5, we prove that the same arguments hold when we use succinct hash functions. The filter reduction additionally employs pairwise independent hash functions.
3 Dictionary for Multisets via Dictionary+Retrieval (Sparse Case)
In this section, we show how to design a multiset dictionary using any dictionary on sets that supports attaching satellite data of $r$ bits per element. Such a dictionary with satellite data supports the operations: query, insert, delete, retrieve, and update. A retrieve operation for $x$ returns the satellite data of $x$. An update operation for $x$ with new satellite data $a$ stores $a$ as the new satellite data of $x$. The reduction incurs a penalty of $O(r)$ extra bits per element. Hence, a space-efficient multiset dictionary is obtained from a space-efficient dictionary only if $r = o(\log(u/n))$.
Let $\mathrm{Dict}(n', r)$ denote a dynamic dictionary for sets of cardinality at most $n'$, where $r$ bits of satellite data are attached to each element. Let $\mathrm{MSDict}(n)$ denote a dynamic dictionary for multisets of cardinality at most $n$.
The reduction is summarized in the following observation.
Observation 5.
One can implement $\mathrm{MSDict}(n)$ using two dynamic dictionaries: $\mathrm{Dict}(n, O(\log\log n))$ and $\mathrm{Dict}(n/\log n, O(\log n))$. Each operation over $\mathrm{MSDict}(n)$ can be performed using a constant number of operations over these two dictionaries.
Proof Sketch.
An element is light if its multiplicity is at most $\log n$; otherwise, it is heavy. Dictionary $\mathrm{Dict}(n, O(\log\log n))$ is used for storing the light elements, whereas dictionary $\mathrm{Dict}(n/\log n, O(\log n))$ is used for storing the heavy elements (there are at most $n/\log n$ heavy elements). The satellite data in both dictionaries is a binary counter of the multiplicity. ∎
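The light/heavy split in the proof sketch can be illustrated with a toy implementation. The class name, the plain Python dicts standing in for the two set dictionaries with satellite data, and the explicit threshold parameter are illustrative assumptions.

```python
class MSDict:
    """Toy sketch of the two-dictionary reduction: light elements
    (multiplicity <= light_max) live in d_light; heavier elements are
    promoted to d_heavy. The stored value plays the role of the
    satellite-data counter."""

    def __init__(self, light_max):
        self.light_max = light_max
        self.d_light = {}   # element -> multiplicity counter
        self.d_heavy = {}

    def count(self, x):
        return self.d_light.get(x, 0) + self.d_heavy.get(x, 0)

    def insert(self, x):
        c = self.count(x) + 1
        if c <= self.light_max:
            self.d_light[x] = c
        else:
            self.d_light.pop(x, None)   # promote to the heavy dictionary
            self.d_heavy[x] = c

    def delete(self, x):
        c = self.count(x) - 1
        self.d_light.pop(x, None)
        self.d_heavy.pop(x, None)
        if c > 0:
            if c <= self.light_max:
                self.d_light[x] = c     # demote back to the light dictionary
            else:
                self.d_heavy[x] = c
```

Each operation touches each of the two underlying dictionaries a constant number of times, matching Observation 5.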
Claim 6.
If $\log\log n = o(\log(u/n))$, then there exists a dynamic multiset dictionary that is space-efficient and supports operations in constant time in the worst case whp.
Proof.
A space-efficient implementation of $\mathrm{Dict}(n', r)$ (for $r = O(\log n)$) with constant time per operation can be obtained from the dictionary of Arbitman et al. [ANS10] (see also [BE20]). The space of such a dictionary is $(1+o(1)) \cdot n' \cdot (\log(u/n') + r)$ bits. Instantiating this space for the two dictionaries from Observation 5 yields a multiset dictionary with space $(1+o(1)) \cdot n \cdot (\log(u/n) + O(\log\log n))$ bits. In the sparse case $\log\log n = o(\log(u/n))$, and hence the obtained $\mathrm{MSDict}(n)$ is space-efficient. ∎
This completes the proof of Theorem 1 for the sparse case.
Remark. An alternative solution stores the multiplicities in an array, separately from a dictionary that stores the support of the multiset. Let $k$ denote the cardinality of the support of the multiset. Let $h$ be a dynamic perfect hashing of the support into $[O(k)]$ that requires $o(k \log k)$ bits and supports operations in constant time (such as the one in [DadHPP06]). Store the (variable-length) binary counter for $x$ at index $h(x)$ in the array. The array can be implemented in space that is linear in the total length of the counters and supports query and update operations in constant time [BB08].
4 Dictionary for Multisets (Dense Case)
In this section, we prove Theorem 1 for the case in which $\log(u/n) = O(\log\log n)$, which we call the dense case. We refer to this dictionary construction as the MSDictionary (Multiset Dictionary) in the dense case.
The MSDictionary construction follows the same general structure as in [ANS10, DadHPP06, BE20]. Specifically, it consists of two levels of dictionaries. The first level is designed to store the majority of the elements (Sec. 4.2). An element is stored in the first level provided that its multiplicity is smaller than $\log^2 n$ and there is enough capacity. Otherwise, the element is stored in the second level, which is called the spare (Sec. 4.3).
The first level of the MSDictionary consists of $m$ bin dictionaries $\{BD_j\}_{j \in [m]}$ together with $m$ counter dictionaries $\{CD_j\}_{j \in [m]}$. Each bin dictionary can store at most $B$ distinct elements, where $B \triangleq (1+\delta) \cdot b$ for some $\delta = o(1)$ and $b \triangleq n/m$ denotes the mean occupancy of each bin dictionary. We say that a bin dictionary is full if it stores $B$ elements in it.
Each counter dictionary stores variable-length binary counters. Each counter represents the multiplicity of an element in the associated bin dictionary. Each counter dictionary can store counters whose total length in bits is at most $L$, where $L = \Theta(b)$. We say that a counter dictionary is full if the total length of the counters stored in it is at least $L$ bits.
Elements with high multiplicity, or whose bin dictionary or counter dictionary is full, are stored in the spare, as formulated in the following invariant:
Invariant 7.
An element $x$ such that $\mathcal{M}_t(x) > 0$ is stored in the spare at time $t$ if: (1) $\mathcal{M}_t(x) \ge \log^2 n$, (2) the bin dictionary corresponding to $x$ is full, or (3) the counter dictionary corresponding to $x$ is full.
We denote the upper bound on the cardinality of the support of the multiset stored in the spare by $s$ (the value of $s$ is specified later). We say that the spare overflows when more than $s$ distinct elements are stored in it.
4.1 Hash Functions
We employ a permutation $\pi : [u] \to [u]$. We define $\pi_1(x)$ to be the leftmost $\log m$ bits of the binary representation of $\pi(x)$ and $\pi_2(x)$ to be the remaining bits of $\pi(x)$. An element $x$ is hashed to the bin dictionary of index $\pi_1(x)$. Hence, storing $x$ in the first level of the dictionary amounts to storing $\pi_2(x)$ in $BD_j$, where $j = \pi_1(x)$, and storing the counter of $x$'s multiplicity in $CD_j$. (This reduction in the universe size is often called "quotienting" [Knu73, Pag01, PPR05, DadHPP06].)
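The quotienting step can be sketched as follows. The function names and bit widths are illustrative; a real implementation applies the split to the permuted value $\pi(x)$, and the inverse shows why storing only the quotient loses no information.

```python
def split(pi_x, u_bits, m_bits):
    """Quotienting: split the permuted key pi(x) into a bin index
    (the leftmost m_bits) and a quotient (the remaining bits). Only
    the quotient is stored inside the bin."""
    q_bits = u_bits - m_bits
    bin_index = pi_x >> q_bits              # pi_1(x): leftmost bits
    quotient = pi_x & ((1 << q_bits) - 1)   # pi_2(x): remaining bits
    return bin_index, quotient

def unsplit(bin_index, quotient, u_bits, m_bits):
    """Inverse: the bin index together with the stored quotient
    recovers pi(x) exactly."""
    return (bin_index << (u_bits - m_bits)) | quotient
```

Since $\pi$ is a permutation, recovering $\pi(x)$ recovers $x$ itself, so membership within a bin can be decided on quotients alone.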
4.2 The First Level of the Dictionary
We follow the same parametrization as in [BE20]. Namely, we set the average occupancy of a bin dictionary to be $b = \Theta(\log n / \log\log n)$ and set $m \triangleq n/b$.
Bin Dictionaries. Each bin dictionary $BD_j$ (for $j \in [m]$) is a deterministic dictionary for sets of cardinality at most $B$ that supports queries, insertions and deletions. The implementation of a bin dictionary using global lookup tables [ANS10] or Elias-Fano encoding [Eli74, BE20] is briefly reviewed in Appendix A. We remark that each $BD_j$ is space-efficient, meaning it requires $B \cdot \log(u/n) + O(B)$ bits. Moreover, each $BD_j$ fits in a constant number of words and performs queries, insertions and deletions in constant time.
Counter Dictionaries. Each counter dictionary $CD_j$ stores a vector of the multiplicities of the elements stored in the corresponding bin dictionary $BD_j$. The order of the multiplicities stored in $CD_j$ is the same order in which the corresponding elements are stored in $BD_j$. Multiplicities in $CD_j$ are stored by variable-length counters. We employ a trivial $2$-bit alphabet to encode "$0$", "$1$" and an "end-of-counter" symbol for encoding the multiplicities. Hence, the length of a counter for multiplicity $\mathcal{M}(x)$ is $\lceil \log(\mathcal{M}(x)+1) \rceil$ bits and its encoding is $O(\log(\mathcal{M}(x)+1))$ bits long. The contents of $CD_j$ is simply a concatenation of the encodings of the counters. We allocate $O(L)$ bits per $CD_j$.^{11}Note, however, that we define a $CD_j$ to be full if the sum of the counter lengths is at least $L$ (even if we did not use all its space). The justification for this definition is to simplify the analysis. The $CD_j$ supports the operations of multiplicity query, increment and decrement. These operations are carried out naturally in constant time because each $CD_j$ fits in a word. We note that an increment may cause the $CD_j$ to become full, in which case $x$ is deleted from the bin dictionary and is inserted into the spare together with its updated counter. Similarly, a decrement may zero the counter, in which case $x$ is deleted from the bin dictionary (and hence its multiplicity is also deleted from the counter dictionary).
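A toy encoding of a counter dictionary's contents, assuming the 2-bit alphabet described above. The concrete symbol table and the linear-scan decoder are illustrative; the actual structure decodes within a machine word in constant time.

```python
# 2-bit alphabet: "00" encodes bit 0, "01" encodes bit 1,
# "10" is the end-of-counter symbol.
SYM = {'0': '00', '1': '01', 'END': '10'}

def encode_counters(mults):
    """Concatenate variable-length binary counters, one per
    multiplicity, each terminated by the end-of-counter symbol."""
    out = []
    for m in mults:
        assert m >= 1
        out.extend(SYM[bit] for bit in bin(m)[2:])
        out.append(SYM['END'])
    return ''.join(out)

def decode_counter(encoding, i):
    """Recover the i-th multiplicity by scanning 2-bit symbols and
    cutting at the end-of-counter markers."""
    counters, bits = [], ''
    for j in range(0, len(encoding), 2):
        sym = encoding[j:j + 2]
        if sym == SYM['END']:
            counters.append(int(bits, 2))
            bits = ''
        else:
            bits += '0' if sym == SYM['0'] else '1'
    return counters[i]
```

Note that a counter for multiplicity $m$ occupies $2(\lfloor\log m\rfloor + 2)$ encoded bits, i.e., linear in the counter length, which is what the capacity bound $L$ charges for.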
4.3 The Spare
The spare is a high-performance, space-inefficient dictionary for multisets. It stores at most $s$ distinct elements. Each element stored in the spare can have a multiplicity as high as $n$. It supports all the operations of the dictionary in constant time. In addition, the spare also moves elements back to the first level if their insertion no longer violates Invariant 7.
We propose to implement the spare using the dynamic dictionary of Arbitman et al. [ANS09], in which we append $\lceil \log(n+1) \rceil$-bit counters to each element. We briefly review the construction here. The dictionary is a de-amortized construction of the cuckoo hash table of Pagh and Rodler [PR01]. Namely, each element is assigned two locations in an array. If, upon insertion, both locations are occupied, then space for the new element is made by "relocating" an element occupying one of the two locations. Long chains of relocations are "postponed" by employing a queue of pending insertions. Thus, each operation is guaranteed to perform in constant time in the worst case. The space that the dictionary occupies is $O(s \log u)$ bits. The counters increase the space of the spare by $O(s \log n)$ bits.
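The queue-based de-amortization can be sketched as follows. This toy version (the hash functions, table size, and relocation cap) is illustrative only, and it omits the counters and the lazy move-back logic described next.

```python
from collections import deque

class SpareSketch:
    """Toy de-amortized cuckoo hashing: each key has two candidate
    slots; at most MAX_KICKS relocations are performed per operation,
    and any remaining work waits in a queue of pending insertions."""
    MAX_KICKS = 4

    def __init__(self, size):
        self.table = [None] * size
        self.size = size
        self.queue = deque()

    def _slots(self, key):
        h = hash(key)
        return h % self.size, (h // self.size + 1) % self.size

    def insert(self, key, value):
        self.queue.append((key, value))
        self._work()

    def _work(self):
        kicks = 0
        while self.queue and kicks <= self.MAX_KICKS:
            key, value = self.queue.popleft()
            s0, s1 = self._slots(key)
            for s in (s0, s1):
                if self.table[s] is None or self.table[s][0] == key:
                    self.table[s] = (key, value)
                    break
            else:
                evicted = self.table[s1]       # relocate the occupant
                self.table[s1] = (key, value)
                self.queue.append(evicted)
                kicks += 1

    def lookup(self, key):
        for s in self._slots(key):
            entry = self.table[s]
            if entry is not None and entry[0] == key:
                return entry[1]
        for k, v in self.queue:                # pending items still answer
            if k == key:
                return v
        return None
```

Lookups also scan the (constant-size, whp) queue, so pending insertions remain visible, which is the key to worst-case constant-time operations.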
The construction in [ANS09] is used as a spare in the space-efficient dynamic filter in [ANS10]. We use it in a similar manner to maintain Invariant 7 in a "lazy" fashion. Namely, if an element $x$ residing in the spare is no longer in violation of Invariant 7 (for instance, due to a deletion in its bin dictionary), we do not immediately move $x$ from the spare back to its bin dictionary. Instead, we "delay" such an operation until $x$ is examined during a chain of relocations. Specifically, during an insertion to the spare, for each evicted element, one checks if this element is still in violation of Invariant 7. If it is not, then it is deleted from the spare and inserted into the first level. This increases the time it takes to perform an insertion to the spare only by a constant. Moreover, it does not affect the overflow probability of the spare.
4.4 Overflow Analysis
The event of an overflow occurs if more than $s$ distinct elements are stored in the spare. In this section, we prove that overflow does not occur whp with respect to truly random hash functions. In Sec. 5, we discuss how this analysis can be modified when we employ succinct hash functions.
The analysis proceeds in two stages. First, we consider the incremental setting (in which elements of the multiset are inserted one-by-one and there are no deletions). We prove that overflow does not occur whp if $s = \Theta(n/\log^2 n)$. The proof for the dynamic setting (deletions and insertions) is based on Invariant 7. Namely, Invariant 7 reduces the dynamic setting to an incremental setting. Formally, the probability of overflow at time $t$ (after a sequence of deletions and insertions) equals the probability of an overflow had the elements of $\mathcal{M}_t$ been inserted one-by-one (no deletions). Hence, overflow does not occur whp over a polynomial number of operations in the dynamic setting by applying a union bound.
Recall that each component of the first level of the dictionary has capacity parameters: each bin dictionary has an upper bound of $B$ on the number of distinct elements it stores, and each counter dictionary has an upper bound of $L$ on the total length of the counters it stores. Additionally, the first level only stores elements whose multiplicity is strictly smaller than $\log^2 n$. According to Invariant 7, if the insertion of some element $x$ exceeds these bounds, then $x$ is moved to the spare.
We bound the number of elements that go to the spare due to failing each of the conditions of Invariant 7 separately. The number of elements whose multiplicity is at least $\log^2 n$ is at most $n/\log^2 n$. The number of distinct elements that are stored in the spare because their bin dictionary is full is at most $O(n/\log^2 n)$ whp. The proof of this bound can be derived by modifying the proof of Claim 8 (see also [ANS10]). We focus on the number of distinct elements whose counter dictionary is full.
Claim 8.
The number of distinct elements whose corresponding counter dictionary is full is at most $O(n/\log^2 n)$ whp.
Proof.
Recall that there are $m$ counter dictionaries and that each stores the multiplicities of at most $B$ distinct elements of multiplicity strictly smaller than $\log^2 n$. In a full counter dictionary, the sum of the counter lengths is at least $L$. We start by bounding the probability that the total length of the counters in a counter dictionary is at least $L$.
Formally, consider a multiset of cardinality $n$ consisting of $k$ distinct elements $x_1, \dots, x_k$ with multiplicities $m_1, \dots, m_k$ (note that $\sum_i m_i \le n$). The length of the counter for multiplicity $m_i$ is $w_i \triangleq \lceil \log(m_i+1) \rceil$ (we refer to this quantity as the weight of $x_i$). For $j \in [m]$, let $\mathcal{M}_j$ denote the sub-multiset consisting of the elements $x_i$ such that $\pi_1(x_i) = j$. Let $E_j$ denote the event that the weight of $\mathcal{M}_j$ is at least $L$, namely $\sum_{i : \pi_1(x_i) = j} w_i \ge L$. We begin by bounding the probability of event $E_j$ occurring.
For $i \in [k]$, define the random variable $W_i \triangleq w_i \cdot X_i$, where $X_i = 1$ if $\pi_1(x_i) = j$ and $X_i = 0$ otherwise. Since the values $\pi(x_i)$ were sampled at random without replacement (i.e., obtained from a random permutation), the random variables $\{W_i\}_i$ are negatively associated. Let $\mu \triangleq E[\sum_i W_i]$ denote the expected weight per counter dictionary. Clearly, $\mu = O(b)$. We now scale the RVs so that they are in the range $[0,1]$. Since the multiplicities of elements in the first level are strictly smaller than $\log^2 n$, we have that $w_i \le w_{\max} = O(\log\log n)$ (we omit the ceiling to improve readability). We then define $W_i' \triangleq W_i / w_{\max}$ and $\mu' \triangleq \mu / w_{\max}$. Then, by Chernoff's bound (which applies to sums of negatively associated RVs in $[0,1]$), $\Pr[E_j] \le \Pr[\sum_i W_i' \ge L/w_{\max}] \le 2^{-\Omega(L/w_{\max})}$. Let $Y_j$ denote the indicator variable for event $E_j$. Then $E[\sum_j Y_j] \le m \cdot 2^{-\Omega(L/w_{\max})} = O(m/\log^2 n)$. Moreover, the RVs $\{Y_j\}_j$ are negatively associated (more weight in bin $j$ implies less weight in bin $j'$). By Chernoff's bound, $\sum_j Y_j = O(m/\log^2 n)$ whp.
Whp, every bin is assigned at most $O(b)$ elements. We conclude that the number of elements that are stored in the spare due to events $\{E_j\}_j$ is at most $O(m/\log^2 n) \cdot O(b) = O(n/\log^2 n)$ whp. ∎
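For reference, the form of Chernoff's bound invoked twice in the proof above, valid for sums of negatively associated random variables $V_1, \dots, V_N \in [0,1]$ with $\mu = E[\sum_i V_i]$ (a standard multiplicative bound; constants are not optimized):

```latex
\Pr\!\left[\sum_{i=1}^{N} V_i \;\ge\; (1+\gamma)\,\mu\right]
\;\le\;
\exp\!\left(-\frac{\gamma^2 \mu}{2+\gamma}\right),
\qquad \gamma > 0\,.
```

It is applied once to the scaled weights $\{W_i'\}$ within a fixed counter dictionary, and once to the indicators $\{Y_j\}$ across counter dictionaries.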
4.5 Space Analysis
Each bin dictionary takes $B \cdot \log(u/n) + O(B)$ bits, where $B = (1+o(1)) \cdot b$ and $b = n/m$. Each counter dictionary occupies $O(b)$ bits. Therefore, the first level of the MSDictionary takes $m \cdot (B \log(u/n) + O(b)) = (1+o(1)) \cdot n \log(u/n) + O(n)$ bits. The spare takes $O(s \cdot \log u) = O(n/\log n)$ bits, since $s = O(n/\log^2 n)$. Therefore, the space the whole dictionary takes is $(1+o(1)) \cdot n \log(u/n) + O(n)$ bits. This completes the proof of Theorem 1 for the dense case.
5 Succinct Hash Functions
In this section, we discuss how to replace the assumption of truly random permutations with succinct hash functions (i.e., hash functions whose representation requires $o(n)$ bits) that have constant evaluation time in the RAM model.
We follow the construction in [ANS10], which we describe as follows. Partition the universe into $P$ parts using a one-round Feistel permutation (described below) such that the cardinality of the multiset in each part is at most $(1+o(1)) \cdot n/P$ whp. The permutation uses highly independent hash functions [Sie04, DR09]. Apply the dictionary construction separately in each part with an upper bound of $(1+o(1)) \cdot n/P$ on the cardinality of the multiset. Within each part, the dictionary employs a $k$-wise $\delta$-dependent permutation. A collection $\Pi$ of permutations is $k$-wise $\delta$-dependent if for any $k$ distinct elements $x_1, \dots, x_k$, the distribution on $(\pi(x_1), \dots, \pi(x_k))$ induced by sampling $\pi \in \Pi$ is $\delta$-close in statistical distance to the distribution induced by a truly random permutation. Arbitman et al. [ANS10] show how one can obtain succinct $k$-wise $\delta$-dependent permutations that can be evaluated in constant time by combining the constructions in [NR99, KNR09]. Setting $k$ and $\delta$ appropriately ensures that the bound on the size of the spare holds whp in each part and hence, by a union bound, in all parts simultaneously.
To complete the proof, we need to prove that the partitioning is "balanced" whp also with respect to multisets. (Recall that the cardinality of a multiset equals the sum of the multiplicities of the elements in the support of the multiset.) Formally, we prove that the pseudorandom partition induces in each part a multiset of cardinality at most $(1+o(1)) \cdot n/P$ whp. As "heavy" elements of multiplicity at least $\log^2 n$ are stored in the spare, we may assume that all multiplicities are less than $\log^2 n$.
We first describe how the partitioning is achieved in [ANS10]. The binary representation of $x$ is partitioned into the leftmost $\log P$ bits, denoted by $x_1$, and the remaining bits, denoted by $x_2$. A highly independent hash function $f$ is then sampled, with $|f(x_2)| = |x_1|$. The permutation is defined as $\pi(x) \triangleq (x_1 \oplus f(x_2)) \circ x_2$, where $\circ$ denotes concatenation.
Note that this induces a view of the universe as a two-dimensional table with $u/P$ rows (corresponding to each $x_2$ value) and $P$ columns (corresponding to each $x_1 \oplus f(x_2)$ value). Indeed, each cell of the table contains at most one element (i.e., if $x$ and $y$ satisfy $x_2 = y_2$ and $x_1 \oplus f(x_2) = y_1 \oplus f(y_2)$, then $x = y$). We define a part of the input multiset as consisting of all the elements of the input multiset that belong to the same column. The index of the part that is assigned to $x$ is $x_1 \oplus f(x_2)$. The corresponding part stores $x_2$.
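A sketch of the one-round Feistel partitioning, with an explicit check that it defines a permutation. The hash `f` is a stand-in for the highly independent function used in the analysis, and the parameter names are illustrative.

```python
def feistel_part(x, u_bits, p_bits, f):
    """Split x into x1 (leftmost p_bits) and x2 (the rest); the part
    index is x1 XOR f(x2), and the part stores only x2."""
    x2_bits = u_bits - p_bits
    x1 = x >> x2_bits
    x2 = x & ((1 << x2_bits) - 1)
    part = x1 ^ (f(x2) % (1 << p_bits))
    return part, x2

def is_permutation(u_bits, p_bits, f):
    """pi(x) = (x1 xor f(x2)) . x2 is a bijection for ANY f, since x1
    can be recovered from (part, x2). Verify on a small universe."""
    seen = {feistel_part(x, u_bits, p_bits, f) for x in range(1 << u_bits)}
    return len(seen) == 1 << u_bits
```

Bijectivity holds regardless of the choice of `f`, which is why a one-round Feistel construction can turn a (non-invertible) hash function into a permutation.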
The following observation follows from [ANS10, Claim 5.4] and the fact that the maximum multiplicity of each element stored in the first level is strictly less than $\log^2 n$.
Observation 9.
The cardinality of every part of the multiset is at most $(1+o(1)) \cdot n/P$ whp.
Proof.
Fix a part $p$ and, for each row $r \in [u/P]$, let $\mathcal{M}_r$ denote the multiset of all elements with the $x_2$ value equal to $r$ (i.e., the multisets $\mathcal{M}_r$ consist of all the elements in row $r$). Each multiset $\mathcal{M}_r$ contributes at most one distinct element to the multiset of part $p$. Define $Z_r$ to be the random variable that denotes the multiplicity of the element from $\mathcal{M}_r$ that is mapped to part $p$ (and $Z_r = 0$ if no such element exists). Then $E[Z_r] = |\mathcal{M}_r| / P$. Now define $Z \triangleq \sum_r Z_r$ to be the random variable that denotes the cardinality of the multiset that is mapped into part $p$. By linearity of expectation, $E[Z] \le n/P$. The random variables $\{Z_r\}_r$ are $k$-wise independent, since each variable is determined by a different row in the table (and hence, each depends on a different value $f(r)$). We scale the RVs by $\log^2 n$ and then apply Chernoff's bound for $k$-wise independent RVs [SSS95] to obtain that $Z \le (1+o(1)) \cdot n/P$ whp.
The claim follows. ∎
6 The Counting Filter
To obtain a counting filter from our dictionary for multisets, use a pairwise independent hash function $h_f : \mathcal{U} \to [n/\varepsilon]$ to map each element to a fingerprint [CFG78]. Let $\mathcal{M}^f$ denote the multiset over $[n/\varepsilon]$ induced by a multiset $\mathcal{M}$ over $\mathcal{U}$, defined by $\mathcal{M}^f(y) \triangleq \sum_{x : h_f(x) = y} \mathcal{M}(x)$. A multiset dictionary for $\mathcal{M}^f$ constitutes a counting filter in which the probability of an overcount is at most $\varepsilon$. The counting filter requires $(1+o(1)) \cdot n \log(1/\varepsilon) + O(n)$ bits and performs all operations in constant time. This completes the proof of Corollary 2.
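The overcount bound in this reduction follows from a union bound over the support, using the pairwise independence of $h_f$ (a sketch; recall that $|\sigma(\mathcal{M})| \le n$ and the fingerprint range has size $n/\varepsilon$):

```latex
\Pr[\mathrm{overcount}(x)]
\;\le\;
\Pr\big[\exists\, y \in \sigma(\mathcal{M}) \setminus \{x\} :\; h_f(y) = h_f(x)\big]
\;\le\;
\sum_{y \in \sigma(\mathcal{M}) \setminus \{x\}} \frac{\varepsilon}{n}
\;\le\;
\varepsilon\,.
```

Indeed, a count query on $x$ overcounts only if some other element of the support collides with $x$ under $h_f$.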
References
 [ANS09] Yuriy Arbitman, Moni Naor, and Gil Segev. De-amortized cuckoo hashing: Provable worst-case performance and experimental results. In International Colloquium on Automata, Languages, and Programming, pages 107–118. Springer, 2009.
 [ANS10] Yuriy Arbitman, Moni Naor, and Gil Segev. Backyard cuckoo hashing: Constant worstcase operations with a succinct representation. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 787–796. IEEE, 2010.
 [BB08] Daniel K Blandford and Guy E Blelloch. Compact dictionaries for variable-length keys and data with applications. ACM Transactions on Algorithms (TALG), 4(2):1–25, 2008.
 [BE20] Ioana Oriana Bercea and Guy Even. A dynamic space-efficient filter with constant time operations. CoRR, abs/2005.01098, 2020. To appear in SWAT 2020.
 [BM01] Andrei Broder and Michael Mitzenmacher. Using multiple hash functions to improve ip lookups. In Proceedings IEEE INFOCOM 2001. Conference on Computer Communications. Twentieth Annual Joint Conference of the IEEE Computer and Communications Society (Cat. No. 01CH37213), volume 3, pages 1454–1463. IEEE, 2001.
 [BMP06] Flavio Bonomi, Michael Mitzenmacher, Rina Panigrahy, Sushil Singh, and George Varghese. An improved construction for counting Bloom filters. In European Symposium on Algorithms, pages 684–695. Springer, 2006.

 [CFG78] Larry Carter, Robert Floyd, John Gill, George Markowsky, and Mark Wegman. Exact and approximate membership testers. In Proceedings of the Tenth Annual ACM Symposium on Theory of Computing, pages 59–65. ACM, 1978.
 [CM03] Saar Cohen and Yossi Matias. Spectral Bloom filters. In Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data, pages 241–252, 2003.
 [DadH90] Martin Dietzfelbinger and Friedhelm Meyer auf der Heide. A new universal class of hash functions and dynamic hashing in real time. In International Colloquium on Automata, Languages, and Programming, pages 6–19. Springer, 1990.
 [DadHPP06] Erik D Demaine, Friedhelm Meyer auf der Heide, Rasmus Pagh, and Mihai Pătraşcu. De dictionariis dynamicis pauco spatio utentibus. In Latin American Symposium on Theoretical Informatics, pages 349–361. Springer, 2006.
 [DDMM05] Ketan Dalal, Luc Devroye, Ebrahim Malalla, and Erin McLeish. Twoway chaining with reassignment. SIAM Journal on Computing, 35(2):327–340, 2005.
 [DR09] Martin Dietzfelbinger and Michael Rink. Applications of a splitting trick. In International Colloquium on Automata, Languages, and Programming, pages 354–365. Springer, 2009.
 [DW07] Martin Dietzfelbinger and Christoph Weidling. Balanced allocation and dictionaries with tightly packed constant size bins. Theoretical Computer Science, 380(12):47–68, 2007.
 [Eli74] Peter Elias. Efficient storage and retrieval by content and address of static files. Journal of the ACM (JACM), 21(2):246–260, 1974.
 [FCAB00] Li Fan, Pei Cao, Jussara Almeida, and Andrei Z Broder. Summary cache: a scalable wide-area web cache sharing protocol. IEEE/ACM Transactions on Networking, 8(3):281–293, 2000.
 [FPSS05] Dimitris Fotakis, Rasmus Pagh, Peter Sanders, and Paul Spirakis. Space efficient hash tables with worst case constant access time. Theory of Computing Systems, 38(2):229–248, 2005.
 [KM07] Adam Kirsch and Michael Mitzenmacher. Using a queue to de-amortize cuckoo hashing in hardware. In Proceedings of the Forty-Fifth Annual Allerton Conference on Communication, Control, and Computing, volume 75, 2007.
 [KNR09] Eyal Kaplan, Moni Naor, and Omer Reingold. Derandomized constructions of k-wise (almost) independent permutations. Algorithmica, 55(1):113–133, 2009.
 [Knu73] Donald E Knuth. The Art of Computer Programming, vol. 3: Sorting and Searching. Reading, MA: Addison-Wesley, 1973.
 [LP10] Shachar Lovett and Ely Porat. A lower bound for dynamic approximate membership data structures. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 797–804. IEEE, 2010.
 [NR99] Moni Naor and Omer Reingold. On the construction of pseudorandom permutations: Luby–Rackoff revisited. Journal of Cryptology, 12(1):29–66, 1999.
 [Pag01] Rasmus Pagh. Low redundancy in static dictionaries with constant query time. SIAM Journal on Computing, 31(2):353–363, 2001.
 [Pan05] Rina Panigrahy. Efficient hashing with lookups in two memory accesses. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 830–839. Society for Industrial and Applied Mathematics, 2005.
 [PPR05] Anna Pagh, Rasmus Pagh, and S. Srinivasa Rao. An optimal Bloom filter replacement. In SODA, pages 823–829. SIAM, 2005.
 [PR01] Rasmus Pagh and Flemming Friche Rodler. Cuckoo hashing. In European Symposium on Algorithms, pages 121–133. Springer, 2001.
 [PT14] Mihai Pătraşcu and Mikkel Thorup. Dynamic integer sets with optimal rank, select, and predecessor search. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 166–175. IEEE, 2014.
 [RR03] Rajeev Raman and Satti Srinivasa Rao. Succinct dynamic dictionaries and trees. In International Colloquium on Automata, Languages, and Programming, pages 357–368. Springer, 2003.
 [Sie04] Alan Siegel. On universal classes of extremely random constant-time hash functions. SIAM Journal on Computing, 33(3):505–543, 2004.
 [SSS95] Jeanette P Schmidt, Alan Siegel, and Aravind Srinivasan. Chernoff–Hoeffding bounds for applications with limited independence. SIAM Journal on Discrete Mathematics, 8(2):223–250, 1995.
Appendix A Implementation of the First Level of the Dictionary
In this section, we discuss two implementations of the first level of the dictionary that meet the specifications from Sec. 4.2: using global lookup tables, as suggested in [ANS10], or an Elias–Fano encoding [Eli74]. We briefly review them here (for details, see [BE20]).
A.1 Global Lookup Tables
In this implementation, all bin and counter dictionaries employ common global lookup tables. Hence, it is sufficient to show that the size of the tables is . Each bin dictionary stores at most distinct elements from a universe of size . Therefore, the total number of states of a bin dictionary is . Each operation on the bin dictionary (query, insert, delete) is implemented as a function from to . Namely, given the current state of the dictionary and an element , each function returns an updated state (in the case of an insert or delete) or a bit (in the case of a membership query). The global lookup tables explicitly represent these functions and can be built in advance. Operations are therefore supported in constant time.
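The mechanism above can be sketched as follows. This is a toy illustration, not the paper's construction: the universe size and bin capacity below are hypothetical small parameters, and a state is encoded as an index into an enumeration of all subsets of size at most the capacity. Each operation then reduces to a single table lookup.

```python
from itertools import combinations

U = 4  # hypothetical toy universe size
B = 2  # hypothetical max number of distinct elements per bin dictionary

# Enumerate all states: subsets of the universe of size at most B.
STATES = [frozenset(s) for k in range(B + 1) for s in combinations(range(U), k)]
STATE_ID = {s: i for i, s in enumerate(STATES)}

# One global table per operation, indexed by (state, element),
# built in advance by explicitly evaluating every transition.
QUERY, INSERT, DELETE = {}, {}, {}
for sid, s in enumerate(STATES):
    for x in range(U):
        QUERY[sid, x] = x in s
        t = s | {x}
        # Insertions that would overflow the bin leave the state unchanged
        # in this sketch; the real structure handles overflow separately.
        INSERT[sid, x] = STATE_ID[t] if len(t) <= B else sid
        DELETE[sid, x] = STATE_ID[s - {x}]

# Constant-time operations: each is a single table lookup.
sid = STATE_ID[frozenset()]
sid = INSERT[sid, 3]
sid = INSERT[sid, 1]
assert QUERY[sid, 3] and not QUERY[sid, 0]
sid = DELETE[sid, 3]
assert not QUERY[sid, 3]
```

Since every bin dictionary shares these tables, the table-building cost is paid once, and the per-bin state is just a short bit string (here, a state index).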
Moreover, each table requires at most bits. Recall that we are in the sparse case defined as the case in which the size of the universe is small relative to . Specifically, we have that , hence . Since and , one can show that, under these parametrizations, and the total number of bits each table takes is .
Similarly, we can build a lookup table that encodes the lexicographic order of the elements in each state of a . Each operation on the counter dictionaries is implemented by further indexing the lookup tables with an index that denotes the position of in the .
A.2 Elias–Fano Encoding
In this section, we briefly discuss the Elias–Fano encoding proposed in [BE20]. A bin dictionary implemented using this encoding is referred to as a “pocket dictionary”. The idea is to represent each element in the universe as a pair , where (the quotient) and (the remainder). A header encodes in unary the number of elements that have the same quotient. The body is the concatenation of remainders. The space required is bits, which meets the required space bound since . Similarly, a counter dictionary can be implemented by storing the counters consecutively using an “end-of-string” symbol. We use a ternary alphabet for this encoding, which requires at most bits to encode each .
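The quotient/remainder split can be illustrated with a small sketch. The parameters below (number of quotients, bits per remainder) are hypothetical toy values, and the header is built as a string for readability; the actual pocket dictionary packs header and body into machine words.

```python
Q = 4       # hypothetical number of distinct quotients
R_BITS = 3  # hypothetical number of bits per remainder

def encode(elems):
    """Encode a sorted list of elements from [0, Q * 2**R_BITS) as (header, body)."""
    counts = [0] * Q
    body = []
    for x in sorted(elems):
        q, r = divmod(x, 1 << R_BITS)  # split element into (quotient, remainder)
        counts[q] += 1
        body.append(r)
    # Unary header: counts[q] ones followed by a terminating zero, per quotient.
    header = "".join("1" * c + "0" for c in counts)
    return header, body

def decode(header, body):
    """Recover the sorted element list from the unary header and remainder body."""
    elems, q, i = [], 0, 0
    for bit in header:
        if bit == "1":
            elems.append((q << R_BITS) | body[i])
            i += 1
        else:
            q += 1  # a zero ends the run for the current quotient
    return elems

xs = [3, 5, 12, 13, 30]
h, b = encode(xs)
assert decode(h, b) == xs
```

Note that the header always contains exactly one zero per quotient plus one one per stored element, which is where the stated space bound comes from.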
Both the bin dictionaries and the counter dictionaries fit in words. Operations on both require rank and select instructions. See [BE20] for a discussion of how these operations can be executed in constant time if the RAM model can evaluate, in constant time, instructions represented as Boolean circuits with depth and gates.
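For concreteness, rank and select over a single machine word can be sketched as below. This is an illustrative Python version, not constant time as written: on real hardware, rank reduces to a popcount of a masked word, and select can use bit-manipulation instructions; see [BE20] for the RAM-model assumptions.

```python
def rank(word, i):
    """Number of set bits in positions 0..i-1 of word (popcount of a prefix)."""
    return bin(word & ((1 << i) - 1)).count("1")

def select(word, k):
    """Position of the k-th set bit of word (k >= 1), or -1 if there is none."""
    pos = 0
    while word:
        if word & 1:
            k -= 1
            if k == 0:
                return pos
        word >>= 1
        pos += 1
    return -1

w = 0b10110010  # set bits at positions 1, 4, 5, 7
assert rank(w, 4) == 1   # only position 1 lies below position 4
assert select(w, 2) == 4  # the second set bit is at position 4
```

In the pocket dictionary, select on the unary header locates the run of remainders for a given quotient, and rank converts between header positions and body offsets.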