1 Introduction
A common task in computational geometry, but also in many other fields of application, is the storage and efficient retrieval of segments (or, more abstractly, intervals). The question of which data structure to use is usually guided by the nature of the retrieval operations, and by whether the data structure must be dynamic, i.e., support updates. One very common retrieval operation is the stabbing query, which can be formulated as follows: Given a set I of intervals on the real line and a query point q, report all intervals in I that contain q.
For the static case, a segment tree is the data structure of choice for this task. It supports stabbing queries in O(log n + k) time, with n being the number of stored intervals and k the number of reported intervals. Segment trees were originally introduced by Bentley [bentley1977algorithms]. While the segment tree is a static data structure, i.e., it is built once and would have to be rebuilt from scratch to change the contained intervals, van Kreveld and Overmars present a dynamic version [kreveld1993union], called the dynamic segment tree (DST).
Dynamic segment trees are applied in many fields. Solving problems from computational geometry certainly is the most frequent application, for example for route planning based on geometry [edelkamp2005geometric] or labeling rotating maps [gemsa2016evaluation]. However, DSTs are also useful in other fields, for example internet routing [chang2007dynamic] or scheduling algorithms [barth2019shaving].
In this paper, we present an adaption of dynamic segment trees, so-called zipping segment trees. Our main contribution is replacing the usual red-black-tree base of dynamic segment trees with zip trees, a novel form of balanced binary search trees introduced recently by Tarjan et al. [tarjan2019zip]. On a conceptual level, basing dynamic segment trees on zip trees yields an elegant and simple variant of dynamic segment trees. Only few additions to the zip tree’s rebalancing methods are necessary. On a practical level, we can show that zipping segment trees outperform dynamic segment trees based on red-black trees in our experimental setting.
2 Preliminaries
A concept we need for zip trees is the two spines of a (sub)tree. We also talk about the spines of a node, by which we mean the spines of the tree rooted in the respective node. The left spine of a subtree is the path from the tree’s root to its predecessor (compared to the root, in tree order). Note that if the root (call it r) is not the overall smallest node, the left spine exits the root to the left and then always follows the right child, i.e., it visits r, r.left, r.left.right, r.left.right.right, and so on. Conversely, the right spine is the path from the root node to the successor of the root node. Note that this definition differs from the definition of a spine by Tarjan et al. [tarjan2019zip].
2.1 UnionCopy Data Structure
Dynamic segment trees in general carry annotations of sets of intervals at their vertices or edges. These set annotations must be stored and updated somehow. To achieve the run times in [kreveld1993union], van Kreveld and Overmars introduce the union-copy data structure to manage such sets. Sketching this data structure would be out of scope for this paper. It is constructed by intricately nesting two different types of union-find data structures: a textbook union-find data structure using union-by-rank and path compression (see for example Seidel and Sharir [seidel2005top]) and the data structure by La Poutré [lapoutre1989new].
For this paper, we just assume this union-copy data structure to manage sets of items. It offers the following operations (the data structure presented by van Kreveld and Overmars provides more operations, but the ones mentioned here are sufficient for this paper):
createSet(): Creates a new empty set in O(1).
deleteSet(S): Deletes the set S in O(k · F(n)), where k is the number of elements in the set and F(n) is the time the find operation takes in one of the chosen union-find structures.
copySet(A): Creates a new set that is a copy of A in O(1).
unionSets(A, B): Creates a new set that contains all items that are in A or B, in O(1).
createItem(X): Creates a new set containing only the (new) item X in O(1).
deleteItem(X): Deletes X from all sets in O(k · F(n)), where k is the number of sets X is in.
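To make the interface concrete, the following is a naive stand-in that mimics these operations by materializing every set explicitly. It illustrates only the semantics; the real union-copy structure shares representations internally to achieve the bounds stated above, which this sketch does not. All names are illustrative.

```cpp
#include <cassert>
#include <map>
#include <set>

// Naive stand-in for the union-copy interface: every set is materialized as
// a std::set of item ids. This only illustrates the semantics -- the real
// union-copy structure shares representations internally to achieve the
// stated time bounds, which this sketch does not.
struct NaiveUnionCopy {
    using SetId = int;
    using Item = int;
    std::map<SetId, std::set<Item>> sets;
    SetId next_set = 0;

    SetId createSet() { sets[next_set] = {}; return next_set++; }
    void deleteSet(SetId s) { sets.erase(s); }
    SetId copySet(SetId a) { sets[next_set] = sets[a]; return next_set++; }
    SetId unionSets(SetId a, SetId b) {
        std::set<Item> u = sets[a];
        u.insert(sets[b].begin(), sets[b].end());
        sets[next_set] = u;
        return next_set++;
    }
    SetId createItem(Item x) { sets[next_set] = {x}; return next_set++; }
    void deleteItem(Item x) {  // removes x from *all* sets
        for (auto& entry : sets) entry.second.erase(x);
    }
};
```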
2.2 Dynamic Segment Trees
This section recapitulates the workings of dynamic segment trees as presented by van Kreveld and Overmars [kreveld1993union] and outlines some extensions. Before we describe the dynamic segment tree, we briefly describe a classic static segment tree and the segment tree property. For a more thorough description, see de Berg et al. [deberg2008computational, 10.2]. Segment trees store a set I of n intervals. Let x_1 < x_2 < … < x_{2n} be the ordered sequence of interval end points in I. For the sake of clarity and ease of presentation, we assume that all interval borders are distinct, i.e., x_i ≠ x_j for i ≠ j. We also assume all intervals to be closed. See Section A in the appendix for the straightforward way of dealing with equal interval borders as well as arbitrary combinations of open and closed interval borders.
In the first step, we forget whether an x_i is a start or an end of an interval. The intervals (−∞, x_1), [x_1, x_1], (x_1, x_2), [x_2, x_2], …, [x_{2n}, x_{2n}], (x_{2n}, ∞) are called the elementary intervals of I. To create a segment tree, we create a leaf node for every elementary interval. On top of these leaves, we create a binary tree. The exact method of creating the binary tree is not important, but it should adhere to some balancing guarantee to provide asymptotically logarithmic depths of all leaves.
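As a small illustration, the elementary intervals can be enumerated directly from the sorted endpoint sequence. Integer endpoints and the string rendering below are illustrative choices of ours.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Enumerates the elementary intervals for a sorted endpoint sequence
// x_1 < x_2 < ... : open intervals between consecutive endpoints alternate
// with degenerate closed point intervals.
std::vector<std::string> elementary_intervals(const std::vector<int>& xs) {
    std::vector<std::string> out;
    std::string prev = "-inf";
    for (int x : xs) {
        std::string s = std::to_string(x);
        out.push_back("(" + prev + "," + s + ")");  // open gap interval
        out.push_back("[" + s + "," + s + "]");     // degenerate point interval
        prev = s;
    }
    out.push_back("(" + prev + ",inf)");
    return out;
}
```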
Such a segment tree is outlined in Figure 1. The lower box indicates the three stored intervals and their end points x_1, …, x_6. The middle box contains a visualization of the elementary intervals, where the green intervals are the point intervals [x_i, x_i] (note that while of course they should have no area, we have drawn them “fat” to make them visible) while the blue intervals are the open intervals between consecutive end points.
We associate each inner node v with the union of all the intervals corresponding to the leaves in the subtree below v. In Figure 1, that means that the larger green inner node is associated with the union of the elementary intervals of the two leaves beneath it. Recall that a segment tree should support fast stabbing queries, i.e., for any query point q, it should report which intervals contain q. To this end, we annotate the nodes of the tree with sets of intervals. For any interval s, we annotate s at every node v such that the associated interval of v is completely contained in s, but the associated interval of v’s parent is not. In Figure 1, the annotations for one of the stored intervals are shown. For example, consider the larger green node: its associated interval is completely contained in that stored interval, while the associated interval of its parent is not. Thus, the large green node carries the annotation.
A segment tree constructed in such a way is semi-dynamic. Segments cannot be removed, and new segments can be inserted only if their end points are already end points of intervals in I. To provide a fully dynamic data structure with the same properties, van Kreveld and Overmars present the dynamic segment tree [kreveld1993union]. It relaxes the property that intervals are always annotated on the topmost nodes the associated intervals of which are still completely contained in the respective interval. Instead, they propose the weak segment tree property: For any point q and any interval s that contains q, the search path of q in the segment tree contains exactly one node that is annotated with s. For any q and any interval s that does not contain q, no node on the search path of q is annotated with s. Thus, collecting all annotations along the search path of q yields the desired result, all intervals that contain q. It is easy to see that this property holds for segment trees: For any interval s that contains q, some node on the search path for q must be the first node the associated interval of which is fully contained in s. This node carries an annotation for s.
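The query procedure implied by the weak segment tree property can be sketched as follows. The node layout and names are a simplification of ours, with annotations stored at nodes and segments identified by strings.

```cpp
#include <cassert>
#include <set>
#include <string>

// A stabbing query on an annotated search tree: under the weak segment tree
// property, the union of the annotations of all nodes on the search path of
// q is exactly the set of intervals containing q.
struct AnnNode {
    double key;                        // an interval border
    std::set<std::string> annotation;  // intervals annotated at this node
    AnnNode* left = nullptr;
    AnnNode* right = nullptr;
};

std::set<std::string> stab(const AnnNode* root, double q) {
    std::set<std::string> result;
    // Walk the search path of q, collecting every annotation on the way.
    for (const AnnNode* n = root; n != nullptr; n = (q < n->key) ? n->left : n->right)
        result.insert(n->annotation.begin(), n->annotation.end());
    return result;
}
```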
Dynamic segment trees also remove the distinction between leaf nodes and inner nodes. In a dynamic segment tree, every node represents an interval border. To insert a new interval, we insert two nodes representing its borders into the tree, adding annotations as necessary. To delete an interval, we remove its associated nodes. If the dynamic segment tree is based on a classic red-black tree, both operations require rotations to rebalance. Performing such a rotation without adapting the annotations would cause the weak segment tree property to be violated. Also, the nodes removed when deleting an interval might have carried annotations, which also potentially violates the weak segment tree property.
We must thus fix the weak segment tree property during rotations. We must also make sure that any deleted node does not carry any annotations, and we must specify how we add annotations when inserting new intervals.
3 Zipping Segment Trees
In Section 2.2 we have described a variant of the dynamic segment trees introduced by van Kreveld and Overmars [kreveld1993union]. These are built on top of a balancing binary search tree, for which van Kreveld and Overmars suggested using red-black trees. The presented technique is able to uphold the weak segment tree property during the red-black tree’s operations: node rotations, node swaps, leaf deletion and deletion of vertices of degree one. These are comparatively many operations that must be adapted to dynamic segment trees. Also, each operation incurs a run time cost for the repairing of the weak segment tree property.
Thus it stands to reason to look at different underlying trees which reduce the number of necessary balancing operations. One such data structure is the zip tree, introduced by Tarjan et al. [tarjan2019zip]. Instead of inserting nodes at the bottom of the tree and then rotating the tree as necessary to achieve balance, these trees determine the height of the node to be inserted before inserting it in a randomized fashion by drawing a rank. The zip tree then forms a treap, a combination of a search tree and a heap: While the key of a node’s left (resp. right) child must always be smaller (resp. larger) than the node’s own key, the ranks of both children must be smaller than or equal to the node’s rank. Thus, any search path always sees nodes’ ranks in a monotonically decreasing sequence. The ranks are chosen randomly in such a way that we expect the result to be a balanced tree. In a balanced binary tree, half of the nodes will be leaves. Thus, we assign rank 0 with probability 1/2. A fourth of the nodes in a balanced binary tree are in the second-to-bottom layer, thus we assign rank 1 with probability 1/4. In general, we assign rank k with probability 2^(−(k+1)), i.e., the ranks follow a geometric distribution with mean 1. With this, Tarjan et al. show that the expected length of search paths is in O(log n), thus the tree is expected to be balanced.

Zip trees do not insert nodes at the bottom or swap nodes down into a leaf before deletion. If nodes are to be inserted into or removed from the middle of a tree, other operations than rotations are necessary. For zip trees, these operations are zipping and unzipping. In the remainder of this section, we examine these two operations of zip trees separately and explain how to adapt them to preserve the weak segment tree property. For a more thorough description of the zip tree procedures, we refer the reader to [tarjan2019zip].
3.1 Insertion and Unzipping
Figure 2 illustrates the unzipping operation that is used when inserting a node. Note that we use the numbers 1 through 6 as nodes’ names as well as their ranks in this example. The triangles represent further subtrees. The node to be inserted is 6, the fat blue path is its search path (i.e., its key is smaller than the keys of 5, 4 and 2, but larger than the keys of 3 and 1). Since 6 has the largest rank in this example, the new node needs to become the new root. To this end, we unzip the search path, splitting it into the parts that are — in terms of nodes’ keys — larger than 6 and the parts that are smaller than 6. In other words: We group the nodes on the search path by whether we exited them to the left (a larger node) or to the right (a smaller node). Algorithm 1, when ignoring the highlighted parts, provides pseudocode for the unzipping operation.
We remove all edges on the search path (Step 1 in Algorithm 1). The result is depicted in the two gray boxes in Figure 2(b): several disconnected parts that are either larger or smaller than the newly inserted node. Taking the new node 6 as the new root, we now reassemble these parts below it. The smaller parts go into the left subtree of 6, strung together as each others’ right children (Step 3 in Algorithm 1). Note that all nodes in the “smaller” set must have an empty right subtree, because that is where the original search path exited them — just as nodes in the “larger” set have empty left subtrees. The larger parts go into the right subtree of 6, strung together as each others’ left children. This concludes the unzipping operation, yielding the result shown in Figure 2(c). With careful implementation, the whole operation can be performed during a single traversal of the search path.
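For reference, plain zip-tree insertion by unzipping — without the annotation maintenance discussed in the remainder of this section — might look as follows. It is our rendering of the pseudocode of Tarjan et al.; ranks are supplied explicitly to keep the example deterministic, and all names are illustrative.

```cpp
#include <cassert>

// Plain zip-tree insertion by unzipping (no annotations maintained).
struct ZNode {
    int key;
    int rank;
    ZNode* left = nullptr;
    ZNode* right = nullptr;
};

ZNode* zip_insert(ZNode* root, ZNode* x) {
    const int key = x->key;
    const int rank = x->rank;
    ZNode* cur = root;
    ZNode* prev = nullptr;
    // Descend to x's position: the first node with a smaller rank
    // (rank ties are broken towards the smaller key).
    while (cur && (rank < cur->rank || (rank == cur->rank && key > cur->key))) {
        prev = cur;
        cur = (key < cur->key) ? cur->left : cur->right;
    }
    if (cur == root) root = x;
    else if (key < prev->key) prev->left = x;
    else prev->right = x;
    if (!cur) return root;  // x became a leaf; nothing to unzip
    if (key < cur->key) x->right = cur; else x->left = cur;
    // Unzip the search path below x: split it into the part smaller than x
    // and the part larger than x, and reattach both below x.
    prev = x;
    while (cur) {
        ZNode* fix = prev;
        if (cur->key < key) {
            do { prev = cur; cur = cur->right; } while (cur && cur->key < key);
        } else {
            do { prev = cur; cur = cur->left; } while (cur && cur->key > key);
        }
        if (fix->key > key || (fix == x && prev->key > key)) fix->left = cur;
        else fix->right = cur;
    }
    return root;
}

// In-order traversal, used to check the search tree property in examples.
void inorder(const ZNode* n, int* out, int& i) {
    if (!n) return;
    inorder(n->left, out, i);
    out[i++] = n->key;
    inorder(n->right, out, i);
}
```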
To insert a segment into a dynamic segment tree, we need to do two things: First, we must correctly update annotations whenever a segment is inserted. Second, we must ensure that the tree’s unzipping operation preserves the weak segment tree property.
We will not go into detail on how to achieve the first step. In fact, we add new segments in basically the same fashion as red-black-tree-based DSTs do. We first insert the two nodes representing the segment’s start and end. Take the path between the two new nodes. The nodes on this path are the nodes at which a static segment tree would carry the annotation of the new segment. Thus, annotating these nodes (resp. the appropriate edges) repairs the weak segment tree property for the new segment.
In the remainder of this section, we explain how to adapt the unzipping operation of zip trees to repair the weak segment tree property. Let the annotation of an edge e before unzipping be a(e), and let the annotation after unzipping be a′(e). As an example of how to fix the annotations after unzipping, consider in Figure 2 a search path that descends into one of the subtrees attached below node 2 before unzipping. It picks up the annotations on the unzipped path from 5 to 2, and on the edge going into the subtree. After unzipping, it picks up the annotations on all the new edges on the path from 6 to 2 plus the annotation on the edge into the subtree. We set the annotations on all newly inserted edges to the empty set after unzipping. Thus, we need to add the annotations collected on the path before unzipping to the edge going into the subtree: its annotation after unzipping is its previous annotation united with everything previously collected on the unzipped path above it.
In Algorithm 1, the blue highlighted parts are responsible for repairing the annotations. While descending the search path to be unzipped, we incrementally collect all annotations we see on this search path, and at every visited node we add the previously collected annotations to the other edge, i.e., the edge that is not on the search path. By setting the annotations of all newly created edges to the empty set, we make sure that after reassembly, every search path descending into one of the subtrees attached to the reassembled parts picks up the same annotations on the edge into that subtree as it would have picked up on the path before disassembly.
3.2 Deletion and Zipping
Deleting segments again is a two-staged challenge: We need to remove the deleted segment from all annotations, and we must make sure that the zipping operation employed for node deletion in zip trees upholds the weak segment tree property. Removing a segment from all annotations is trivial when using the union-copy data structure outlined in Section 2.1: the deleteItem() method does exactly this.
We now outline the zipping procedure and how it can be amended to repair the weak segment tree property. Zipping two paths in the tree works in reverse to unzipping. Pseudocode is given in Algorithm 2. Again, the pseudocode without the highlighted parts is the pseudocode for plain zipping, independent of any dynamic segment tree. Assume that in the situation of Figure 2(c), we want to remove 6, thus we want to arrive at the situation in Figure 2(a). The zipping operation consists of walking down the left spine (consisting of 3 and 1 in the example) and the right spine (consisting of 5, 4 and 2 in the example) simultaneously and zipping both into a single path. At every point during the walk, we have a current node on both spines, call them the current left node l and the current right node r. Also, there is a current parent p, which is the bottom of the new zipped path being built. In the beginning, the current parent is the parent of the node being removed. (If the root is being removed, pretend there is a pseudo-node above the root.) In each step, we select the current node with the larger rank, breaking ties arbitrarily. Without loss of generality, assume the current right node is chosen. We attach the chosen node to the bottom of the zipped path, i.e., it becomes a child of p, and then it itself becomes the new p. Also, we walk further down on the right spine.
Note that the choice of whether to attach left or right to the bottom of the zipped path is made in such a way that the position at which we attach previously was part of one of the two spines being zipped. This method of attaching nodes always upholds the search tree property: When we make a node from the right spine the new parent p, we know that the new p is currently the largest remaining node on the spines. We then always attach the next node to the left of p. Since all other nodes on the spines are smaller than p, this is valid. The same argument holds for the left spine.
We now explain how the edge annotations can be repaired so that the weak segment tree property is upheld. Assume that for an edge e, a(e) is the annotation of e before zipping, and a′(e) is the annotation of e after zipping. Again, we argue via the subtrees that search paths can descend into. Consider a search path descending into a subtree to the right of a node on the right spine, e.g., the subtree attached to the right of 4 in Figure 2(c). Before zipping, it picks up the annotation on the right edge of the node being removed plus all annotations on the spine up to the respective node, before descending into the subtree. To preserve these picked-up annotations, we again push them down onto the edge that actually leads away from the spine into the respective subtree.
Formally, during zipping, we keep two sets of annotations, one per spine; call them S_l and S_r. Let v be the node to be removed. Initially, we set S_l to the annotation of the edge from v to its left child, and S_r to the annotation of the edge from v to its right child. Then, whenever we pick a node from the left (resp. right) spine as the new parent, we add S_l (resp. S_r) to the annotation of the edge leading away from the spine at that node. This pushes down everything we have collected onto the edge leading away from the spine. Then, we add the annotation of the spine edge below the new parent to S_l (resp. S_r) and continue. This concludes the techniques necessary to use zip trees as a basis for dynamic segment trees, yielding zipping segment trees.
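For reference, plain zipping-based deletion — again without annotation maintenance — can be sketched like this. It is our rendering of the pseudocode of Tarjan et al., using an illustrative node type.

```cpp
#include <cassert>

// Plain zip-tree deletion by zipping (no annotations maintained).
struct ZNode {
    int key;
    int rank;
    ZNode* left = nullptr;
    ZNode* right = nullptr;
};

ZNode* zip_delete(ZNode* root, int key) {
    ZNode* cur = root;
    ZNode* prev = nullptr;
    while (key != cur->key) {  // locate the node to remove
        prev = cur;
        cur = (key < cur->key) ? cur->left : cur->right;
    }
    ZNode* left = cur->left;
    ZNode* right = cur->right;
    // The spine top with the larger rank replaces the removed node.
    if (!left) cur = right;
    else if (!right) cur = left;
    else if (left->rank >= right->rank) cur = left;
    else cur = right;
    if (root->key == key) root = cur;
    else if (key < prev->key) prev->left = cur;
    else prev->right = cur;
    // Zip the two spines into one path, always taking the node with the
    // larger rank next (ties go to the left spine).
    while (left && right) {
        if (left->rank >= right->rank) {
            do { prev = left; left = left->right; }
            while (left && left->rank >= right->rank);
            prev->right = right;
        } else {
            do { prev = right; right = right->left; }
            while (right && right->rank > left->rank);
            prev->left = left;
        }
    }
    return root;
}
```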
3.3 Complexity
Zip trees are randomized data structures, therefore all bounds on run times are expected bounds. In [tarjan2019zip, Theorem 4], Tarjan et al. claim that the expected number of pointers changed during a zip or unzip is in O(1). However, they actually even show the stronger claim that the expected number of nodes on the zipped (or unzipped) paths is in O(1). Observe that the loops in Algorithm 1 and Algorithm 2 are executed at most once per node on the unzipping (resp. zipping) path. Inside each of the loops, a constant number of calls are made to each of the copySet, createSet, deleteSet and unionSets operations. Thus, the rebalancing operations incur expected constant effort plus an expected constant number of calls to the union-copy data structure.
When inserting a new segment, we add it to the sets annotated at every vertex along the path between the two nodes representing the segment borders. Since the depth of every node is expected logarithmic in n, this incurs an expected O(log n) calls to unionSets. The deletion of a segment from all annotations costs exactly one call to deleteItem.
All operations but deleteSet and deleteItem take O(1) time if the union-copy data structure is appropriately built. The analysis for the two deletion functions is more complicated and involves amortization. The rough idea is that every non-deletion operation can increase the size of the union-copy’s representation only by a limited amount. On the other hand, the two deletion operations each decrease the representation size proportionally to their run time.
The red-black-tree-based DSTs by van Kreveld and Overmars [kreveld1993union] also need O(log n) calls to copySet during the insertion operation, and at least a constant number of calls during tree rebalancing and deletion. Therefore, for every operation on zipping segment trees, the (expected) number of calls to the union-copy data structure’s functions is no larger than the number of calls in the red-black-tree-based implementation, and we achieve the same (but only expected) run time guarantees, which are O(log n) for insertion, O(log n · α(n)) for deletion (with α being a row-inverse of the Ackermann function for some constant row) and O(log n + k) for stabbing queries, where k is the number of reported segments.
3.4 Generating Ranks
Nodes’ ranks play a major role in the rebalancing operations of zip trees. In Section 3, we already motivated why nodes’ ranks should follow a geometric distribution with mean 1; it is the distribution of the node depths in a perfectly balanced tree.
A practical implementation needs to somehow generate these values. The obvious implementation is to generate a (pseudo)random number and determine the position of the first 1 in its binary representation. The rank generated in this way is then stored at the respective node.
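Both the random and the hash-based variant (discussed below) can be sketched as follows. We assume 64-bit words and read off the rank at the most significant end — one concrete reading of “the first 1 in the binary representation”; the odd multiplier is an arbitrary illustrative constant, not the one used in any particular implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// A node's rank is the number of leading zeros of a word whose bits are
// (close to) uniformly random: rank 0 with probability 1/2, rank 1 with
// probability 1/4, and so on -- a geometric distribution with mean 1.
int rank_from_word(std::uint64_t w) {
    const std::uint64_t top = 1ULL << 63;
    int r = 0;
    while (r < 63 && (w & top) == 0) { w <<= 1; ++r; }
    return r;
}

// Variant 1: draw a fresh random word; the resulting rank must then be
// stored at the node.
int random_rank(std::mt19937_64& rng) { return rank_from_word(rng()); }

// Variant 2: reproducible ranks from the node's key (or address) via a
// single multiplication with a random odd constant, in the spirit of the
// multiply-shift scheme.
int hashed_rank(std::uint64_t key) {
    const std::uint64_t a = 0x9e3779b97f4a7c15ULL;  // assumed random, odd
    return rank_from_word(a * key);
}
```

Note that multiplying by an odd constant leaves the position of the lowest set bit unchanged, which is why this sketch inspects the word from the most significant end.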
Storing the rank at the node can be avoided if the rank is generated in a reproducible fashion. Tarjan et al. [tarjan2018zip_arxiv] already point out that one can “compute it as a pseudo-random function of the node (or of its key) each time it is needed.” In fact, the idea already appeared earlier in the work by Seidel and Aragon [seidel1996randomized] on treaps. They suggest evaluating a degree-7 polynomial with randomly chosen coefficients at (the numerical representation of) the node’s key. However, the 8-wise independence of the random variables generated by this technique is not sufficient to uphold the theoretical guarantees given by Tarjan et al. [tarjan2018zip_arxiv].

Without any theoretical guarantees, a simpler method for reproducible ranks can be achieved by employing simple hashing algorithms. Note that even when applying universal hashing, we do not get a guarantee regarding the probability distribution of the individual bits of the hash values. In practice, however, we expect it to yield results similar to true randomness. As a fast hashing method, we suggest the almost-universal multiply-shift method from Dietzfelbinger et al. [dietzfelbinger1997reliable]. Since we are interested in generating an entire machine word in which we then search for the first set bit, we can skip the “shift” part, and the whole process collapses into a simple multiplication.

4 Experimental Evaluation of Dynamic Segment Tree Bases
In this section, we experimentally evaluate zipping segment trees as well as dynamic segment trees based on two of the most prominent rotation-based balanced binary search trees: red-black trees and weight-balanced trees. Weight-balanced trees require a parametrization of their rebalancing operation. In [barth2020engineering], we perform an in-depth engineering of weight-balanced trees. For this analysis of dynamic segment trees, we pick only the two most promising variants: two top-down weight-balanced trees with different balancing parameters.
Note that since we are only interested in the performance effects of the trees underlying the DST, and not in the performance of an implementation of the complex union-copy data structure, we have implemented the simplified variant of DSTs outlined in Section B in the appendix, which alters the concept of segment trees to only report the aggregate value of weighted segments at a stabbing query, instead of a list of the respective segments. Evaluating the performance of the union-copy data structure is out of scope for this work.
[Figure 3: benchmark results. (a)–(c): The x-axis indicates the size of the tree that the operation is performed on. The lines indicate mean values. The standard deviation is in all cases too small to be visible. (d): Average depths of the nodes in DSTs based on red-black trees and zip trees. The x-axis specifies the number of inserted segments. Shaded areas indicate the standard deviation.]
For the zip trees, we choose a total of three variants, based on the choices explained in Section 3.4: The first variant, denoted “Hashing”, generates nodes’ ranks by applying the fast hashing scheme by Dietzfelbinger et al. [dietzfelbinger1997reliable] to the nodes’ memory addresses. In this variant, node ranks are not stored at the nodes but recomputed on the fly when they are needed. The second variant, denoted “Hashing, Store”, also generates nodes’ ranks from the same hashing scheme, but stores the ranks at the nodes. The last variant, denoted “Random, Store”, generates nodes’ ranks independently of the nodes and stores the ranks at the nodes.
We first individually benchmark the two operations of inserting (resp. removing) a segment to (resp. from) the dynamic segment tree. Our benchmark works by first creating a base dynamic segment tree of a certain size, then inserting new segments into (resp. removing segments from) that tree. The number of new (resp. removed) segments is chosen to be the minimum of 100,000 and 5% of the base tree size. Segment borders are chosen by drawing twice from a uniform distribution. All segments are associated with a real-valued weight, as explained in Section B.

We conduct our experiments on a machine equipped with 128 GB of RAM and an Intel® Xeon® E5-1630 CPU, which has 10 MB of level-3 cache. We compile using GCC 8.1 with the optimization flags “-O3 -ffast-math”. We do not run experiments concurrently. To account for randomness effects, each experiment is run for ten different seed values, and repeated five times for each seed value to account for measurement noise. All our code is published, see Section C in the appendix.

Figure 3(a) displays the results for the insert operation. We see that the red-black tree performs best for this operation, noticeably faster than the fastest zip tree variant, which is the variant using random rank selection. The two weight-balanced trees lie between the red-black tree and the randomness-based zip tree. Both hashing-based zip trees are considerably slower.
For the deletion operation, shown in Figure 3(b), the randomness-based zip tree is significantly faster than the best competitor, the red-black tree. Again, the weight-balanced trees are slightly slower than the red-black tree, and the hashing-based zip trees fare the worst.
Since (randomness-based) zip trees are the fastest choice for deletion and red-black trees are the fastest for insertion, benchmarking the combination of both is obvious. Also, using a dynamic segment tree makes little sense if only the insertion operation is needed. Thus, we next benchmark a move operation, which consists of first removing a segment from the tree, changing its borders, and reinserting it. The results are shown in Figure 3(c). We see that the randomness-based zipping segment tree is the best-performing dynamic segment tree for all but the smallest trees.
The obvious measurement to explore why the different trees perform differently is the trees’ balance, i.e., the average depth of a node in the respective trees. We conduct this experiment as follows: For each of the trees under study, we create trees of various sizes with randomly generated segments. In a tree generated in this way, we only see the effects of the insert operation, and not the delete operation. Thus, we continue by moving each segment once, i.e., removing it, changing its interval borders and reinserting it. This way, the effect of the delete operation on the tree balance is also accounted for. Since the weight-balanced trees were not competitive previously, we perform this experiment only for the red-black and zip trees. We repeat the experiment with different seeds to account for randomness. The results can be found in Figure 3(d). We can see that zipping segment trees, whether based on randomness or hashing, are, surprisingly, considerably less balanced than red-black-based DSTs. Also, whether ranks are generated from hashing or randomness does not impact the balance.
Concluding the evaluation, we gain several insights. First, deletions in zipping segment trees are so much faster than in red-black-based DSTs that they more than make up for the slower insertion; the fastest choice for moving segments are zipping segment trees with ranks generated randomly. Second, we see that this speed does not come from a better balance, but in spite of a worse balance. The speedup must therefore come from more efficient rebalancing operations. Third, and most surprisingly, the question of how ranks are generated does not influence tree balance, but has a significant impact on the performance of deletion and insertion. However, the hash function we have chosen is very fast. Also, during deletion, no ranks should be (re)generated for the variant that stores the ranks at the nodes. Thus, the performance difference cannot be explained by the slowness of the hash function. Generating ranks with our chosen hash function must therefore introduce some disadvantageous structure into the tree that does not impact the average node depth.
5 Conclusion
We have presented zipping segment trees — a variation of dynamic segment trees, based on zip trees. The technique to maintain the necessary annotations for dynamic segment trees is comparatively simple, requiring only very little adaption of the zip trees’ routines. In our evaluation, we were able to show that zipping segment trees perform well in practice, and outperform red-black-tree- or weight-balanced-tree-based DSTs with regard to modifications.
However, we were not yet able to discover exactly why generating ranks from a (very simple) hash function negatively impacts performance. Exploring the adverse effects of this hash function and possibly finding a different hash function that avoids these effects remains future work. Another compelling future experiment would be to evaluate the performance when combined with the actual union-copy data structure by van Kreveld and Overmars.
All things considered, the relatively simple implementation and the superior performance when modifying segments make zipping segment trees a good alternative to classical dynamic segment trees built upon rotation-based balanced binary trees.
References
Appendix A General Interval Borders
In Section 2.2, we made two assumptions as to the nature of the intervals’ borders: we assumed a total ordering on the keys of the nodes, i.e., no segment border may appear in two segments, and we assumed all intervals to be closed. We now briefly show how to lift these restrictions.
The important aspect is that a query path must see the nodes representing interval borders on the correct side. As an example, consider the two intervals [1, 2) and [2, 3]. A query for the value 2 should return [2, 3] but not [1, 2). Thus, the node representing the border “2)” must lie to the left of the resulting search path, such that the query path does not end at a leaf between the nodes representing “[1” and “2)”. That way, the annotation for [1, 2) will not be picked up. Conversely, the node representing “[2” must also lie to the left of the query path, such that the query path ends in a leaf between the nodes representing “[2” and “3]”, picking up the annotation for [2, 3].
This dictates the ordering of nodes with the same numeric key x: First come the open upper borders (“x)”), then the closed lower borders (“[x”), then the closed upper borders (“x]”), and finally the open lower borders (“(x”). When querying for a value q, we descend left on nodes representing “q]” and “(q”, and descend right on nodes representing “q)” and “[q”. That way, a search path for q will end at a leaf after all closed lower borders and open upper borders, but before all closed upper borders and open lower borders. This yields the desired behavior.
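This ordering and the query rule can be encoded compactly. The encoding below is an illustrative sketch of ours, not the paper’s implementation.

```cpp
#include <cassert>
#include <tuple>

// Borders with equal numeric value are ordered: open upper "x)" first, then
// closed lower "[x", then closed upper "x]", then open lower "(x". Encoding
// the border type as a number in that order makes the comparator a simple
// tuple comparison.
enum BorderType { OPEN_UPPER = 0, CLOSED_LOWER = 1, CLOSED_UPPER = 2, OPEN_LOWER = 3 };

struct Border {
    double value;
    BorderType type;
};

bool border_less(const Border& a, const Border& b) {
    return std::tie(a.value, a.type) < std::tie(b.value, b.type);
}

// During a stabbing query for q, descend left at closed upper and open
// lower borders with value q, and right at open upper and closed lower
// borders, so the search path ends after "q)" and "[q" but before "q]"
// and "(q".
bool query_descends_left(double q, const Border& b) {
    if (q != b.value) return q < b.value;
    return b.type == CLOSED_UPPER || b.type == OPEN_LOWER;
}
```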
Appendix B Numeric Annotations
Many applications of segmentstoring data structures deal with weighted segments, i.e., each segment is associated with a number (or a vector of numbers). In such scenarios, one is often only interested in determining the aggregated weight of the segments overlapping at a certain point instead of the actual set of segments.
This question can be answered without the need for a complicated union-copy data structure. In this case, we annotate each edge with a real number (resp. a vector of numbers). Instead of adding the actual segments to the sets at the edges, we just add the associated weight of the segment. The copy operation is a simple duplication of a number or vector, and a union is achieved by addition.
Deletion becomes a bit more complicated in this setting. Previously, we have exploited the convenient operation of deleting an item from all sets offered by the union-copy data structure. Now, say an interval associated with a weight w is deleted from the dynamic segment tree, and the segment is represented by the two nodes a and b. If we had just inserted the interval (and therefore a and b), we would now add w to the annotations on a certain set of edges (see above for a description of the insertion process). When deleting the interval, we instead annotate the same set of edges with −w. This exactly cancels out the annotations made when the interval was inserted.
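A minimal sketch of this cancellation, with edges identified by illustrative string ids and a scalar weight:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// With numeric annotations, inserting a segment of weight w adds w to the
// annotations of a set of edges; deleting it later adds -w to the same set
// of edges, cancelling the insertion exactly.
using EdgeId = std::string;
using Annotations = std::map<EdgeId, double>;

void annotate(Annotations& ann, const std::vector<EdgeId>& edges, double w) {
    for (const EdgeId& e : edges) ann[e] += w;
}
```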
Appendix C Code Publication
We publish our C++17 implementation of all evaluated tree variants, including all code to replicate our benchmarks, at https://github.com/tinloaf/ygg. Note that this data structure library is still work in progress and might change after the publication of this work. The exact version used to produce the benchmarks shown in this paper can be accessed at https://github.com/tinloaf/ygg/releases/tag/version_sea2020.
The repository contains build instructions. The executables to generate our benchmark results can be found in the benchmark subfolder. The bench_dst_* executables produce the respective benchmark. The output format is that of Google Benchmark, see https://github.com/google/benchmark for documentation.
The exact commands we have run to generate our data are:
./bench_DST_Insert --seed_start 42 --seed_count 10 --benchmark_repetitions=5 --benchmark_out=<output_dir>/result_Insert.json --benchmark_out_format=json --doublings 14 --relative_experiment_size 0.05 --experiment_size 100000
./bench_DST_Delete --seed_start 42 --seed_count 10 --benchmark_repetitions=5 --benchmark_out=<output_dir>/result_Delete.json --benchmark_out_format=json --doublings 14 --relative_experiment_size 0.05 --experiment_size 100000
./bench_DST_Move --seed_start 42 --seed_count 10 --benchmark_repetitions=5 --benchmark_out=<output_dir>/result_Move.json --benchmark_out_format=json --doublings 14 --relative_experiment_size 0.05 --experiment_size 100000