Preprocessing Ambiguous Imprecise Points

by Ivor van der Hoog, et al.
TU Eindhoven

Let R = {R_1, R_2, ..., R_n} be a set of regions and let X = {x_1, x_2, ..., x_n} be an (unknown) point set with x_i ∈ R_i. Region R_i represents the uncertainty region of x_i. We consider the following question: how fast can we establish order if we are allowed to preprocess the regions in R? The preprocessing model of uncertainty uses two consecutive phases: a preprocessing phase which has access only to R, followed by a reconstruction phase during which a desired structure on X is computed. Recent results in this model parametrize the reconstruction time by the ply of R, which is the maximum overlap between the regions in R. We introduce the ambiguity A(R) as a more fine-grained measure of the degree of overlap in R. We show how to preprocess a set of d-dimensional disks in O(n log n) time such that we can sort X (if d = 1) and reconstruct a quadtree on X (for constant d ≥ 1) in O(A(R)) time. If A(R) is sub-linear, then reporting the result dominates the running time of the reconstruction phase. However, we can still return a suitable data structure representing the result in O(A(R)) time. In one dimension, R is a set of intervals and the ambiguity is linked to interval entropy, which in turn relates to the well-studied problem of sorting under partial information. The number of comparisons necessary to find the linear order underlying a poset P is lower-bounded by the graph entropy of P. We show that if P is an interval order, then the ambiguity provides a constant-factor approximation of the graph entropy. This gives a lower bound of Ω(A(R)) in all dimensions for the reconstruction phase (sorting or any proximity structure), independent of any preprocessing; hence our result is tight.





1 Introduction

A fundamental assumption in classic algorithms research is that the input data given to an algorithm is exact. Clearly this assumption is generally not justified in practice: real-world data tends to have (measurement or labeling) errors, heterogeneous data sources introduce yet other types of errors, and “big data” is compounding the effects. To increase the relevance of algorithmic techniques for practical applications, various paradigms for dealing with uncertain data have been introduced over the past decades. Many of these approaches have in common that they represent the uncertainty, imprecision, or error of a data point as a disk in a suitable distance metric, which we call an uncertainty region. We focus on a fundamental problem from the realm of computation with uncertainties and errors: given a set of imprecise points represented by uncertainty regions, how much proximity information do the regions contain about the imprecise points?

Preprocessing model.

We study this problem within the preprocessing framework initially proposed by Held and Mitchell [14]. In this framework we have a set of regions R = {R_1, ..., R_n} and a point set X = {x_1, ..., x_n} with x_i ∈ R_i. This model has two consecutive phases: a preprocessing phase followed by a reconstruction phase. In the preprocessing phase we have access only to R, and we typically want to preprocess R in O(n log n) time to create some linear-size auxiliary data structure. In the reconstruction phase, we have access to X, and we want to construct a desired output on X using the auxiliary structure, faster than would be possible otherwise. Löffler and Snoeyink [21] were the first to use this model as a way to deal with data uncertainty: one may interpret the regions in R as imprecise points, and the points in X as their true (initially unknown) locations. This interpretation of the preprocessing framework has been successfully applied to various problems in computational geometry [5, 6, 9, 10, 19, 27]. Several results restrict R to be a set of disjoint (unit) disks in the plane, while others consider partially overlapping disks. Traditionally, the ply of R, which measures the maximal number of overlapping regions, has been used to measure the degree of overlap, leading, for example, to reconstruction times parametrized by the ply.

Figure 1: Two sets of 16 disks each in the plane, both with a ply of 4. The ambiguity of the set on the right is four times as large as the ambiguity of the set on the left.

The ply is arguably a somewhat coarse measure of the degree of overlap of the regions. Consider the following example: suppose that we have a collection of k disks in the plane that overlap in one point and that the remainder of R is mutually disjoint (see Figure 1 left). Then the ply of R is k, and the reconstruction phase is charged as if R were in a worst-case configuration for that ply, even though it might be possible to achieve better bounds (R is arguably not in a worst-case configuration for that given ply, see Figure 1 right).


We introduce the ambiguity A(R) as a more fine-grained measure of the degree of overlap in R. The ambiguity is based on the number of regions each individual region intersects (see Figure 1). We count this number with respect to particular permutations of the regions: for each region we count only the overlap with regions that appear earlier in the permutation. A proper technical definition of ambiguity can be found in Section 2. We also show how to compute a constant-factor approximation of the ambiguity in O(n log n) time.

Ambiguity and entropy.

In one dimension, R is a set of intervals and the ambiguity is linked to interval (and graph) entropy (see Appendix A for a definition), which in turn relates to the well-studied problem of sorting under partial information. Fredman [12] shows that if the only information we are given about a set of values is a partial order P, and e(P) is the number of linear extensions of P (total orders compatible with P), then we need at least log e(P) comparisons to sort the values. Brightwell and Winkler prove that computing the number of linear extensions is #P-complete [4]. Hence efforts have concentrated on computing approximations, most notably via the concept of graph entropy as introduced by Körner [17]. Specifically, Kahn and Kim [16] prove that log e(P) = Θ(n · H(G)), where H(G) denotes the entropy of the incomparability graph G of the poset P. To the best of our knowledge there is currently no exact algorithm to compute H(G). Cardinal et al. [7] describe the fastest known algorithm to approximate H(G). See Appendix A for a more in-depth discussion of sorting and its relation to graph entropy.

We consider the special case where the partial order is induced by uncertainty intervals. We define the entropy H(R) of a set of intervals R as the entropy of their intersection graph (which is also an incomparability graph), using the definition of graph entropy given by Körner. In this setting we prove that the ambiguity provides a constant-factor approximation of the interval entropy (see Section 2). Since we can compute a constant-factor approximation of the ambiguity in O(n log n) time, we can hence also compute a constant-factor approximation of the entropy of interval graphs in O(n log n) time, thereby improving the result by Cardinal et al. [7] for this special case.

Ambiguity and reconstruction.

Since the entropy is a lower bound for the number of comparisons needed to complete the partial order into a total order, Ω(A(R)) is a lower bound for the reconstruction phase in the preprocessing model when R is a set of intervals and the goal is to sort the unknown points in X. This lower bound extends to higher dimensions and to proximity structures in general, independent of any preprocessing.

The ambiguity ranges between 0 and O(n log n) for a set of n regions R. If the value of A(R) lies between Ω(n) and O(n log n), then we can preprocess R in O(n log n) time and sort X in O(A(R)) time (in one dimension for arbitrary intervals) or build a quadtree in O(A(R)) time (in all dimensions for unit disks).

If the ambiguity lies between 0 and o(n), then reporting the results explicitly in Θ(n) time dominates the reconstruction time. But the ambiguity suggests that the information-theoretic amount of work necessary to compute the results should be lower than that. To capture this, we hence introduce a new variant of the preprocessing model, which allows us to return a pointer to an implicit representation of the results.

Specifically, in one dimension, R is a set of intervals and we aim to return the sorted order of the unknown points in X. If, for example, all intervals are mutually disjoint, then A(R) = 0 and we have essentially no time for the reconstruction phase. However, a binary search tree on R, which we can construct in O(n log n) time in the preprocessing phase, actually captures all necessary information. In the reconstruction phase we can hence return a pointer to this tree as an implicit representation of the sorted order. In Section 3 we show how to handle arbitrary sets of intervals in a similar manner. That is, we describe how to construct in O(n log n) time an auxiliary data structure on R in the preprocessing phase (without access to X), such that, in the reconstruction phase (using X), we can construct a linear-size AVL-tree on X in O(A(R)) time, which is tight.
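The disjoint case can be made concrete with a toy sketch (our own function names; a sorted index array stands in for the binary search tree on R):

```python
def preprocess_disjoint(intervals):
    """Preprocessing phase (access to R only): for mutually disjoint
    intervals, sorting R already fixes the order of the unknown points."""
    return sorted(range(len(intervals)), key=lambda k: intervals[k][0])

def reconstruct(aux, points):
    """Reconstruction phase: with A(R) = 0 we spend no time on X and simply
    return (a pointer to) the precomputed structure, an implicit
    representation of the sorted order of X."""
    return aux  # x_i in R_i and R_i mutually disjoint: aux is X's sorted order
```

Since x_i ∈ R_i and the intervals are disjoint, following the returned index order visits the points of X in sorted order without ever comparing two points.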

In all dimensions, we consider R to be a set of unit disks and our aim is to return a quadtree on the points in X where each point of X lies in a unique quadtree cell. Note that in 2 dimensions, such a quadtree also allows us to construct e.g. the Delaunay triangulation of X in linear time [5]. However, we show that constructing such a quadtree explicitly in O(A(R)) time is not possible: the work necessary to distinguish individual points could dominate the running time and overshadow the detail in the analysis brought by the ambiguity measure. We hence follow Buchin et al. [5] and use so-called λ-deflated quadtrees, which contain up to a constant number λ of points in each leaf. From such a quadtree one can construct, in linear time, a quadtree on X where each point lies in a unique quadtree cell. In Section 4 we describe how to reconstruct a linear-size λ-deflated quadtree (with a suitable constant λ) in O(A(R)) time, which is tight (in fact, in one dimension our result also extends to non-unit intervals).

2 Ambiguity

We introduce a new measure on a set of regions R to reflect the degree of overlap, which we call the ambiguity. The sequence in which we process the regions matters (refer to Section 2.1), thus we distinguish between the π-ambiguity, defined for a given permutation π of the regions in R, and the minimum ambiguity, defined over all possible permutations. We demonstrate several properties of the ambiguity, and discuss its relation to graph entropy when R is a set of intervals in one dimension.

Processing permutation.

Let R be a set of regions and let (R_{π(1)}, R_{π(2)}, ..., R_{π(n)}) be the sequence of the elements in R according to a given permutation π (note that for all i, the region R_{π(i)} could be any region of R, depending on the permutation π). Then we say that π is a processing permutation of R. Furthermore, let R^i_π be the prefix of length i, that is, the first i elements in the sequence. A permutation π is containment-compatible if R_{π(i)} ⊂ R_{π(j)} implies i < j for all i and j [11]. When π is clear from context, we denote R^i_π by R^i.

Contact set (for a permutation π).

For a region R_{π(i)} we define its contact set C_π(R_{π(i)}) to be the set of regions which precede or are equal to R_{π(i)} in the order π, and which intersect R_{π(i)}: C_π(R_{π(i)}) = { R_{π(j)} ∈ R : j ≤ i and R_{π(j)} ∩ R_{π(i)} ≠ ∅ }. Note that a region is always in its own contact set. A region whose contact set contains only itself is called a bottom region (refer to Figure 2).


For a set of regions R and a fixed permutation π we define the π-ambiguity as A_π(R) = Σ_{i=1}^{n} log |C_π(R_{π(i)})| (with the logarithm to the base 2). Observe that bottom regions do not contribute to the value of the π-ambiguity. The ambiguity of R is now the minimal π-ambiguity over all permutations π: A(R) = min_π A_π(R).

Figure 2: A set of overlapping intervals with a permutation. In all figures, bottom intervals are indicated in blue.
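For intervals, the definitions above can be checked with a small brute force (a sketch with our own function names; feasible only for tiny n, whereas the paper computes a constant-factor approximation in O(n log n) time):

```python
import math
from itertools import permutations

def contact_set_sizes(intervals, order):
    """|C_pi(R_pi(i))| for each position i: intervals at positions j <= i
    (including i itself) that intersect the open interval at position i."""
    sizes = []
    for i, a in enumerate(order):
        la, ra = intervals[a]
        sizes.append(sum(1 for j in range(i + 1)
                         if intervals[order[j]][0] < ra
                         and la < intervals[order[j]][1]))
    return sizes

def pi_ambiguity(intervals, order):
    """A_pi(R) = sum over i of log2 |C_pi(R_pi(i))|; bottom intervals
    contribute log2(1) = 0."""
    return sum(math.log2(c) for c in contact_set_sizes(intervals, order))

def ambiguity_bruteforce(intervals):
    """A(R) = minimum pi-ambiguity over all n! processing permutations."""
    return min(pi_ambiguity(intervals, p)
               for p in permutations(range(len(intervals))))
```

Three pairwise disjoint intervals have ambiguity 0; three identical intervals have ambiguity log2 1 + log2 2 + log2 3 = log2 6 under every permutation.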

2.1 Properties of ambiguity

We show the following properties of the ambiguity: (1) the π-ambiguity may vary significantly with the choice of the processing permutation π, (2) in one dimension, the π-ambiguity for any containment-compatible permutation π on a set of intervals R implies a 3-approximation of the entropy of the interval graph of R, and (3) the permutation that realizes the ambiguity is containment-compatible. Therefore, in one dimension, the ambiguity of a set of intervals R implies a 3-approximation of the entropy of the interval graph of R.

We start with the first property: it is easy to see that the processing permutation has a significant influence on the value of the π-ambiguity (refer to Figure 3). Even though the π-ambiguity can vary considerably, we show that if we restrict the permutations to be containment-compatible, their π-ambiguities lie within a constant factor of the ambiguity.

Figure 3: An example of the π-ambiguity induced by two different permutations (left and right) of the same five intervals; the two permutations yield different π-ambiguities.

Interval entropy.

The entropy of a graph was first introduced by Körner [17]. Since then, several equivalent definitions have appeared [26]. We define the interval entropy H(R), for a set of intervals R, as the entropy of the intersection graph of R. While investigating the question of sorting an arbitrary poset, Cardinal et al. [7] found an interesting geometric interpretation of the poset entropy, which applies to our interval entropy: let a poset P describe a set of (open) intervals combinatorially, that is, for each interval I we know which intervals intersect I, are contained in I, contain I, and are disjoint from I. Denote by I_P the infinite set of sets of intervals on the domain [0, 1] (that is, each element of I_P is a set of intervals, where each interval has its endpoints in [0, 1]) which induce the same poset as P. Then Cardinal et al. prove the following lemma (see Figure 4 for an illustration): [[7], Lemma 3.2, paraphrased]

We show that the π-ambiguity for any containment-compatible permutation π is a 3-approximation of the interval entropy. To achieve this we rewrite the lemma from Cardinal et al. in the following way.

An embedding gives each interval a size between 0 and 1. To simplify the algebra later, we re-interpret this size as the fraction (weight) of the domain that the interval occupies. We associate with each embedding a set of weights W such that each interval I receives a weight w_I ∈ [0, 1]. From now on we consider embeddings on the domain [0, 1]: an interval I then has size w_I. The formula for the entropy can then be restated in terms of these weights; we refer to this restatement as Equation (1).

Figure 4: Let R be a set of five intervals, where four intervals are mutually disjoint and contained in one larger interval. We show three embeddings of these intervals on the domain [0, 1] with the same combinatorial properties. The first two embeddings each certify a lower bound on the entropy; the third is the optimal embedding, which realizes the entropy of R.

Ambiguity and entropy.

Next, we show that the interval entropy gives an upper bound on the ambiguity. The entropy of R is the maximum over all embeddings on [0, 1], so any embedding of R on the domain gives a lower bound on H(R). We will create an embedding with a corresponding weight assignment in which every interval is long enough to bound its contribution to the π-ambiguity.


We start with the original input embedding of R and sort the coordinates of all 2n endpoints (both left and right). To each endpoint we assign a new coordinate i/(2n) if it is the i-th endpoint in the sorted order (indexing from 0). Thus, we obtain an embedding of R on [0, 1). For any containment-compatible permutation π, the length of each interval I in this embedding is at least (|C_π(I)| − 1)/(2n), as each interval contains at least |C_π(I)| − 1 endpoints of the intervals from its contact set in its interior. Also note that the distance between every right endpoint and the consecutive endpoint to the right is 1/(2n). Thus, we can increase the coordinate of every right endpoint by 1/(2n) and obtain an embedding of R on [0, 1] with a corresponding weight assignment W, such that the length of each interval I is at least |C_π(I)|/(2n). This allows us to prove the following lemma:
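The endpoint re-embedding can be sketched directly (a minimal sketch; `rank_embedding` is our own name, and intervals are open `(left, right)` pairs with distinct endpoints):

```python
def rank_embedding(intervals):
    """Re-embed intervals on [0, 1]: the i-th endpoint in sorted order
    (indexing from 0) gets coordinate i/(2n), and every right endpoint is
    shifted right by an additional 1/(2n)."""
    n = len(intervals)
    endpoints = sorted((x, k, side)  # side: 0 = left endpoint, 1 = right endpoint
                       for k, (l, r) in enumerate(intervals)
                       for side, x in ((0, l), (1, r)))
    coord = {}
    for rank, (_, k, side) in enumerate(endpoints):
        coord[(k, side)] = (rank + side) / (2 * n)  # +1/(2n) shift for right ends
    return [(coord[(k, 0)], coord[(k, 1)]) for k in range(n)]
```

For R = {(0, 10), (1, 2), (3, 4)}, the outer interval (contact-set size 3 under any containment-compatible permutation) receives the full domain, while the two bottom intervals each receive length 2/6 ≥ 1/6.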

For any containment-compatible permutation π of a set of intervals R, the π-ambiguity A_π(R) is at most a constant factor times the interval entropy H(R).



Consider the embedding and corresponding weight assignment constructed above, and consider any containment-compatible permutation π. We split the intervals of R into four sets depending on the size of their contact set, and let these sets contain n_1, n_2, n_3 and n_4 intervals respectively. Then, using Equation (1) for the entropy,


On the other hand,


Then, using Equation (3) we get

and therefore

We continue by showing that the ambiguity also gives an upper bound for the interval entropy. We start with a helper lemma:

Suppose R is partitioned into two sets R_1 and R_2 such that for each I_1 ∈ R_1 and I_2 ∈ R_2, the intervals I_1 and I_2 are disjoint. In any weight assignment that realizes H(R), the intervals in R_1 together have length |R_1|/n and the intervals in R_2 together have length |R_2|/n on the domain [0, 1].


In Equation (1) we rewrote the formula for the entropy in terms of weights: for any weight assignment W, the weight w_I is the proportion of the domain that I occupies, and we embedded R on the domain [0, 1]. We can similarly embed R on a domain [0, s] for an arbitrary scalar s, and define the entropy of R relative to such a domain accordingly (refer to Figure 5 (top)).

Observe that the relative entropy relates to H(R) by a simple rescaling; we record this relation as Equation (4).


If the intervals in R_1 can occupy a width of at most s, then it is always optimal to give the intervals in R_2 a total width of 1 − s (since the entropy maximizes the product of the lengths of the intervals in R_1 and R_2). This implies a bound on H(R) as a function of s.

See Figure 5 (bottom) for an illustration of the argument. If we now substitute Equation (4) into this bound, we get that the maximum is realized for s = |R_1|/n, which proves the lemma. ∎

Figure 5: (top left) A set of five intervals and their optimal embedding for the entropy relative to a small domain. (top right) The optimal embedding of the same intervals for the entropy relative to a larger domain. Observe that the proportion of the domain that each interval obtains is the same in both embeddings. (bottom) An illustration of the argument for Lemma 2.1: we see a set R_1 of 7 intervals and a set R_2 of 3 intervals, with the intervals in R_1 disjoint from the intervals in R_2. If we vary s, we vary the total width on which R_1 and R_2 are embedded. The entropy is given by the maximal embedding and is therefore found by optimizing s.

Let π be any containment-compatible permutation; then the interval entropy H(R) is at most a constant factor times the π-ambiguity A_π(R).


We defined R^i as the prefix of length i of π. We prove the lemma by induction on i.

For i = 1, both the left-hand and the right-hand side are 0. So we assume that the lemma holds for all j < i and we prove it for i. Consider the entropy of R^i relative to its domain. We make a distinction between two cases: |C(R_{π(i)})| = 1, or otherwise. If |C(R_{π(i)})| = 1, then R_{π(i)} is disjoint from all intervals in R^{i−1}. Lemma 2.1 guarantees that, if we want to embed R^i, then R_{π(i)} gets a size of 1/i of the domain, and the remaining intervals get embedded with the total width which they already had in the previous iteration. So the relative entropy does not increase, matching the contribution log |C(R_{π(i)})| = 0.

In the second case, |C(R_{π(i)})| is at least 2. The other intervals used to be optimally embedded on the previous domain and are now embedded on a slightly larger one, so each of them expands by at most a small factor. There are at least i − |C(R_{π(i)})| intervals disjoint from R_{π(i)}, so Lemma 2.1 bounds the total width that the contact set of R_{π(i)} can occupy. It follows that the relative entropy increases by at most a constant times log |C(R_{π(i)})|, which implies the lemma. ∎

The two preceding lemmas imply the following theorem: for any set of intervals R in one dimension and for any containment-compatible permutation π on R, the π-ambiguity A_π(R) is a 3-approximation of the interval entropy H(R).

For any set of intervals R in one dimension, the ambiguity A(R) is a 3-approximation of the interval entropy H(R).


The permutation which realizes the ambiguity of R must always be containment-compatible. This is because, whenever a region is preceded in the permutation by a region that strictly contains it, swapping the two always improves the π-ambiguity. ∎

Let e(R) be the number of linear extensions of the poset induced by R. In the proof of Lemma 3.2, Cardinal et al. [7] show that log e(R) is lower-bounded by the interval entropy (up to constant factors). This implies that the interval graph entropy is a lower bound for constructing any unique linear order underlying a poset. Proximity structures depend on sorting [8]. Thus, we conclude: reconstructing a proximity structure on X is lower-bounded by Ω(A(R)).

3 Sorting

Let R be a set of intervals and let X be a set of points (values) with x_i ∈ R_i. We show how to construct an auxiliary structure on R in the preprocessing phase without using X, such that, in the reconstruction phase, we can construct a linear-size binary search tree on X in O(A(R)) time. To achieve this, we first construct a specific containment-compatible permutation of R, and then show how to maintain the search tree when we process the intervals in this order.

3.1 Level permutation

Figure 6: A set of intervals with a containment graph with quadratic complexity.

We need a processing permutation π of R with the following conditions:

  1. π is containment-compatible,

  2. intervals containing no interval of R come first and are ordered from right to left, and

  3. we can construct π in O(n log n) time.

In Section 2.1 we showed that if condition (i) holds, the π-ambiguity is (up to a constant factor) a lower bound for sorting X. In Section 3.2 we show that condition (ii) is useful to reconstruct an AVL-tree on X in O(A_π(R)) time. Condition (iii) bounds the time used in the preprocessing phase.

Below, we define two natural partitions of R based on the containment graph of R: the height partition and the depth partition. However, a permutation compatible with the height partition satisfies conditions (i) and (ii) but not (iii), and a permutation compatible with the depth partition satisfies conditions (i) and (iii) but not (ii). Therefore, we define a hybrid partition, which we call the level partition, which implies a permutation that does satisfy all three conditions.

Containment graph.

For a set of intervals R, its containment graph G represents the containment relations on R: G is a directed acyclic graph in which an interval I_1 contains an interval I_2 if and only if there is a directed path from I_1 to I_2, and all intervals that are contained in no other interval of R share a common root. The bottom intervals are a subset of the leaves of this graph. Note that G can have quadratic complexity (Figure 6).

Height and Depth partition.

We define the height partition as the partition of R into levels where all intervals in level i have height i (the minimal distance from the corresponding node to a leaf) in G; or, equivalently, the intervals in level i contain no intervals from level i or above (Figure 7). We analogously define the depth partition as the partition of R into levels where all intervals in level i have depth i (the maximal distance from the root to the corresponding node) in G. Clearly, any permutation compatible with the height partition or the depth partition satisfies condition (i). All leaves of G have height 0, so per definition they are all in the first level of the height partition, and thus any permutation compatible with the height partition that sorts this level from right to left satisfies condition (ii). Clearly the same is not true for the depth partition. On the other hand, in Lemma 3.1 we show how to construct the depth partition in O(n log n) time. It is unknown whether the height partition can be created in O(n log n) time (see Appendix C).

Figure 7: (left) A set of intervals and the corresponding containment graph G; the leaves of G are purple. (middle) The height partition. (right) The depth partition.

For any set of intervals R, we can construct the depth partition in O(n log n) time.


We iteratively insert intervals from left to right; refer to Appendix B. ∎

Level partition.

We now define the level partition: a hybrid between the height and depth partitions, in which every interval is assigned the level given by its depth in G, except for the leaves of G, which are placed in the level that is processed first regardless of their depth. We can compute the level partition from the depth partition in O(n log n) time by identifying all leaves of G with a range query. The level permutation is the permutation in which the leaf level comes first, the remaining levels follow in order of decreasing depth, and within each level the intervals are ordered from right to left. It can be constructed from the level partition in O(n log n) time by sorting.
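A brute-force sketch of the resulting order (quadratic time, our own function names; it only illustrates the permutation that the O(n log n) construction produces):

```python
def strictly_contains(a, b):
    """Interval a strictly contains interval b."""
    (la, ra), (lb, rb) = a, b
    return la <= lb and rb <= ra and (la, ra) != (lb, rb)

def level_permutation(intervals):
    """Leaves of the containment graph (intervals containing no other
    interval) first, ordered right to left; then the remaining intervals by
    decreasing containment depth, each level ordered right to left."""
    n = len(intervals)
    # depth = length of the longest chain of intervals strictly containing this one
    by_size = sorted(range(n), key=lambda k: intervals[k][1] - intervals[k][0],
                     reverse=True)  # containers come before containees
    depth = [0] * n
    for pos, k in enumerate(by_size):
        for j in by_size[:pos]:
            if strictly_contains(intervals[j], intervals[k]):
                depth[k] = max(depth[k], depth[j] + 1)
    leaf = [not any(strictly_contains(intervals[i], intervals[j])
                    for j in range(n) if j != i) for i in range(n)]

    def right_to_left(ks):
        return sorted(ks, key=lambda k: intervals[k][1], reverse=True)

    perm = right_to_left([i for i in range(n) if leaf[i]])
    for d in sorted({depth[i] for i in range(n) if not leaf[i]}, reverse=True):
        perm += right_to_left([i for i in range(n) if not leaf[i] and depth[i] == d])
    return perm
```

The result is containment-compatible: an interval strictly contained in another is either a leaf or has strictly larger depth, so it is processed earlier.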

Theorem 3.1 follows directly from the preceding discussion.

The level permutation satisfies conditions (i), (ii) and (iii).

3.2 Algorithm

We now describe a preprocessing and a reconstruction algorithm, which preprocess a set of intervals R in O(n log n) time such that we can sort X in O(A(R)) time.


Let π be the level permutation of R. In the preprocessing phase we build an AVL-tree T on the bottom intervals. In the reconstruction phase, we insert each remaining point into T in the order π, in O(A_π(R)) total time. This implies that for bottom intervals we are not allowed to spend even constant time, and that for each non-bottom interval R_i, we want to locate its point in T in O(log |C_π(R_i)|) time. To achieve this, we supply every non-bottom interval R_i with an anchor: an interval that appears earlier in the permutation and shares part of its domain. The intervals in each level are ordered from right to left, so the right endpoint of a non-bottom interval R_i is contained in the interval preceding it, and we make that interval the anchor of R_i (refer to Figure 8).

Preprocessing phase.

The auxiliary structure consists of an AVL-tree T on the bottom intervals, augmented with a set of pointers leading from intervals to their anchors. We implement T as a leaf-based AVL-tree, i.e., values are stored in the leaves, and inner nodes are decision nodes. Finally, we use a doubly linked list to connect the leaves of the tree.

Figure 8: The auxiliary structure. (top) A schematic representation of the intervals in the level permutation (from bottom to top); every non-bottom interval is shown with its anchor. (bottom) The Fibonacci tree containing the leaves corresponding to the bottom intervals. Note that we added one dummy node in red.

Consider the points corresponding to the bottom intervals. Bottom intervals are mutually disjoint, so we can build an AVL-tree on them without knowing their true values. Recall that a Fibonacci tree is a binary tree where, for every inner node, the left subtree has a depth exactly 1 greater than the right subtree. A Fibonacci tree is a valid AVL-tree, and we construct the AVL-tree T over the bottom intervals as a Fibonacci tree, where we add dummy leaves to ensure that the total number of leaves is a Fibonacci number. Refer to Figure 8 for an example. We then remove the bottom intervals from consideration, and for each non-bottom interval we identify its anchor and supply the interval with a pointer to that anchor. As the final step of the preprocessing phase we connect the leaves of T in a doubly linked list. To summarize: the auxiliary structure consists of a graph of intervals connected by anchor pointers and an AVL-tree T. Each bottom interval is in T, and each non-bottom interval has a directed path to a node in T.
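The Fibonacci-tree construction can be sketched as follows (a simplified sketch; we pad with `None` dummy leaves so that the number of leaves is a Fibonacci number, and tuples stand in for tree nodes):

```python
def build_fibonacci_tree(values):
    """Leaf-based Fibonacci tree: a tree with F(k) leaves has a left subtree
    with F(k-1) leaves and a right subtree with F(k-2) leaves, so every inner
    node's left subtree is exactly one deeper than its right subtree -- a
    valid, maximally skewed AVL-tree."""
    fib = [1, 1]
    while fib[-1] < len(values):
        fib.append(fib[-1] + fib[-2])
    padded = list(values) + [None] * (fib[-1] - len(values))  # dummy leaves

    def build(vals, k):
        if len(vals) == 1:
            return ("leaf", vals[0])
        # len(vals) == fib[k]; split into fib[k-1] left and fib[k-2] right leaves
        return ("node", build(vals[:fib[k - 1]], k - 1),
                        build(vals[fib[k - 1]:], k - 2))

    return build(padded, len(fib) - 1)

def leaf_values(t):
    """In-order leaf values, dummies included."""
    return [t[1]] if t[0] == "leaf" else leaf_values(t[1]) + leaf_values(t[2])
```

In Figure 8, the single red dummy node plays the role of the `None` padding here.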

We can construct the auxiliary structure in O(n log n) time.


The level partition and permutation can be constructed in O(n log n) time, and with them we get access to the intervals sorted from right to left. We scan the intervals from right to left and, for each interval, either identify it as a bottom interval or supply it with its anchor. We identify each anchor in logarithmic time using a range query. We construct the Fibonacci tree on the bottom intervals, with leaf pointers, in linear time [23]. ∎

Reconstruction phase.

During the reconstruction phase, we need to maintain the balance of T when we insert new values. T contains bottom intervals, to which we are not allowed to charge even constant time, so the classical amortized-constant analysis [22] of AVL-trees does not immediately apply. Nonetheless, we show in Appendix E: let T be an AVL-tree where each inner node has two subtrees with a depth difference of 1. We can dynamically maintain the balance of T in amortized constant time per insertion.

Figure 9: The tree from Figure 8 after two iterations of the reconstruction phase. We inserted the true values of the two orange intervals. Note that an orange interval requested the true value of a bottom interval. At this iteration we want to insert the point of the next interval into T; it is a non-bottom interval, so its anchor must be the interval preceding it.

Given the auxiliary structure, we can reconstruct an AVL-tree on X in O(A(R)) time.


Given the auxiliary structure and the level permutation π, we want to sort the points in X (insert them into T) in O(A_π(R)) time. Because T starts as a Fibonacci tree, Lemma 3.2 guarantees that we can dynamically maintain the balance of T with at most a linear number of rebalancing operations. The bottom intervals are already in T; thus we need to insert only the remaining points, in the order π, into T, spending O(log |C_π(R_i)|) time per interval R_i plus some additional constant time which we charge to the anchor (each anchor will only get charged once).

Whenever we process a non-bottom interval R_i, we know that its anchor is already inserted in T. By construction, there are at most O(|C_π(R_i)|) leaves in T which have coordinates on the domain of R_i (because these values can come only from intervals in the contact set of R_i). We know that we must insert the point of R_i next to one of these leaves in T. This means that, if we have a pointer to any leaf on the domain of R_i, then we can locate the point of R_i in T with at most O(log |C_π(R_i)|) edge traversals. During these traversals, we collapse each interval we encounter to a point. We obtain such a pointer from the anchor. Assume the anchor is contained in R_i. Then the leaf corresponding to the anchor must lie on the domain of R_i. Otherwise, R_i and its anchor are both in the same level (illustrated in Figure 9), and the anchor must contain the right endpoint of R_i. With a similar analysis, we can locate the right endpoint of R_i in T in O(log |C_π(R_i)|) time. In both cases we have found a leaf of T on the domain of R_i, and we locate the point of R_i in O(log |C_π(R_i)|) time. Each interval in R has a unique anchor, so each anchor in T is charged this extra work once. ∎

4 Quadtrees

Let R be a set of unit intervals in a bounding box (interval) B (we discuss how to extend the approach later) and let X be a set of points (values) with x_i ∈ R_i. We show how to construct an auxiliary structure on R in the preprocessing phase without using X, such that, in the reconstruction phase, we can construct a linear-size quadtree on X in O(A(R)) time. We recall several standard definitions.

Point quadtrees.

Suppose that we have a d-dimensional point set P in a bounding hypercube B. A quadtree on P is defined as follows: the split operator splits any d-dimensional hypercube into 2^d equal-sized hypercubes called cells. We recursively split B until each point of P lies within a unique cell [24]. A λ-deflated quadtree is a more relaxed quadtree where B is split until each leaf cell contains at most λ points [6].
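In one dimension the definition amounts to recursive bisection; a minimal sketch (our own names; `lam` plays the role of the constant λ):

```python
def deflated_quadtree_1d(points, cell=(0.0, 1.0), lam=2):
    """Split the bounding interval recursively until every leaf cell holds at
    most lam points -- a lam-deflated quadtree; lam = 1 gives the ordinary
    point quadtree on distinct points."""
    inside = [p for p in points if cell[0] <= p < cell[1]]
    if len(inside) <= lam:
        return {"cell": cell, "points": inside}
    mid = (cell[0] + cell[1]) / 2
    return {"cell": cell,
            "children": [deflated_quadtree_1d(inside, (cell[0], mid), lam),
                         deflated_quadtree_1d(inside, (mid, cell[1]), lam)]}
```

As in Figure 10, points that lie very close together force deep recursion under lam = 1, which is exactly what the deflated variant avoids.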

Figure 10: A set of points for which the quadtree has linear depth. If the blue points lie very close together, the quadtree needs unbounded complexity.

Region quadtrees.

Let R be a set of d-dimensional disks in a bounding hypercube B. Let C be the infinite set of possible quadtree cells on B. For each disk R_i, we define its storing cell as the largest cell in C that is contained in R_i and contains the center of R_i [20]; the subtree induced by this storing cell will be relevant below. The neighborhood of R_i is the set of possible cells, of the same size as its storing cell, that are intersected by R_i. We consider the quadtree on R to be the unique compressed quadtree in which, for each R_i, its neighborhood is present.

Figure 11: (left) A tree with recursive centroid edges. (right) The corresponding edge-oracle tree. The orange leaf corresponds to a subtree of the original tree and to a node of the edge-oracle tree.

Edge oracle tree.

Depending on R and B, the quadtree on R does not necessarily have logarithmic depth (Figure 10); thus, point location in it is non-trivial. Har-Peled [13] introduced a fast point-location structure (later dubbed the edge-oracle tree [20]) for any quadtree T. The edge-oracle tree is created through centroid decomposition. Any tree with bounded degree has at least one centroid edge which separates a tree of m nodes into two trees with Θ(m) nodes each. Moreover, one of these two trees is a subtree of T (a tree induced by a node as a root). For any subtree T' of T, we define its corresponding node in the edge-oracle tree as the lowest node which splits T into two parts, one of which contains T' and the other of which contains the root of T. This node must exist and is unique, and the part containing T' has O(|T'|) nodes (refer to Figure 11).
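A centroid edge can be found by one bottom-up pass over subtree sizes (a sketch; `centroid_edge` is our own name, the tree is given as adjacency lists, and we simply return the most balanced edge):

```python
def centroid_edge(adj):
    """Find a centroid edge of a tree given as adjacency lists: the edge
    whose removal splits the tree as evenly as possible. On bounded-degree
    trees the smaller side has Theta(n) nodes, which is what the recursive
    edge-oracle construction relies on."""
    n = len(adj)
    size = [1] * n
    parent = [-1] * n
    order, seen, stack = [], [False] * n, [0]
    seen[0] = True
    while stack:                      # iterative DFS to fix a processing order
        v = stack.pop()
        order.append(v)
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                parent[w] = v
                stack.append(w)
    for v in reversed(order):         # accumulate subtree sizes bottom-up
        if parent[v] != -1:
            size[parent[v]] += size[v]
    best = min((v for v in range(n) if parent[v] != -1),
               key=lambda v: max(size[v], n - size[v]))
    return (parent[best], best), size[best]
```

Recursing on the two sides of the returned edge yields the logarithmic-depth decomposition that the edge-oracle tree encodes.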

Given a query point q, we can find the leaf cell that contains q in the following way: each decision node of the edge-oracle tree has two children, where one child corresponds to a subtree T' of T. We test whether q is contained in T' in constant time by checking the bounding box of T'.

We wish to preprocess R such that we can reconstruct a linear-size λ-deflated quadtree for X with pointers between leaves. However, such a quadtree does not necessarily have linear size, and dynamically maintaining pointers between leaves is non-trivial. To achieve this, one needs to maintain a compressed and smooth quadtree (refer to Appendix D for details), and Hoog et al. [15] show how to dynamically maintain a smooth compressed quadtree with constant update time. We will build such a quadtree, augmented with an edge-oracle tree initialized as a Fibonacci tree. We proceed analogously to the approach in Section 3.

4.1 1-dimensional quadtrees on unit-size intervals

We show how to construct an auxiliary structure on R without using X, such that we can construct a λ-deflated quadtree on X in O(A(R)) time.

Preprocessing phase.

The auxiliary structure will be a smooth compressed quadtree T on the intervals R augmented with an edge-oracle tree E on T, anchor pointers, and a containment-compatible processing permutation Σ of R. Given T, we initialize E as a Fibonacci tree, possibly adding dummy leaves (we may need to allow parents of leaves of E to have a single dummy leaf). We supply each R_i ∈ R with a pointer to the node in E corresponding to T_i and we call this its anchor a_i.

The auxiliary structure can be constructed in O(n log n) time.


Hoog et al. [15] show that for any set of d-dimensional disks R, its smooth compressed quadtree T on R with corresponding edge-oracle tree E can be constructed in O(n log n) time, and that this tree has a worst-case constant update time. We turn E into a Fibonacci tree by inserting O(n) dummy leaves in O(n) time in total. ∎

Reconstruction phase.

By construction, each leaf in T intersects at most 2 bottom intervals of R (since these are mutually disjoint). Therefore, we can construct a λ-deflated quadtree on X by inserting each x_i in the order Σ into T. We observe the following:

When we process an interval R_σ(i), the interval R_σ(i) intersects O(1 + k_i) leaf cells of T, where k_i denotes the number of already processed points that lie in R_σ(i).


There can be at most 2 bottom intervals (left and right) of R whose neighborhood intersects R_σ(i). All the other leaves on the domain of R_σ(i) are caused by either already processed points on the domain of R_σ(i) or are dummy nodes. For each dummy node there is a corresponding non-dummy node also on the domain of R_σ(i). ∎

When we process an interval R_σ(i), we can locate, for any point x ∈ R_σ(i), the leaf of T which contains x in O(1 + log k) time, where k is the number of leaf cells of T intersected by R_σ(i).


If x lies in the storing cell S_i, then R_σ(i) has an anchor a_i to the node in E corresponding to T_i, and from this anchor we locate x in O(1 + log k) time. Suppose instead that x lies to the left of S_i. We locate the left-most leaf of T_i in O(1 + log k) time and traverse its neighbor pointer. The neighboring cell must lie in a subtree neighboring T_i with O(k) nodes, and this tree must contain x (Lemma 4.1). We now have a pointer to a node in E and from this node we locate x in O(1 + log k) time. ∎

Given X, we can construct a λ-deflated quadtree on X in O(A(R)) time.


Given X and any containment-compatible permutation Σ, we want to insert each x_σ(i) into T quickly. An insertion in T creates O(1) additional leaves in T (and therefore also in E), and Lemma 3.2 guarantees that we can dynamically maintain the balance of E with at most O(1) operations. If we only consider the point set corresponding to the bottom intervals, then T is already a λ-deflated quadtree on this set, independent of where the points of X lie in their uncertainty intervals. Therefore, we only need to insert the remaining points, in the order Σ, into T in O(A(R)) total time (potentially collapsing some of the bottom intervals when necessary). Using Lemma 4.1 we can locate the quadtree leaf that contains x_σ(i) in O(1 + log k) time, where k is the number of leaf cells of T intersected by R_σ(i). This leaf is intersected by at most 2 bottom intervals, which we collapse into points whose location we find in constant time using the leaf pointers. Thus each non-bottom interval inserts at most 3 points into T in O(1 + log k) time. ∎

4.2 Generalization

If we stay in one dimension, then the result of Theorem 4.1 in fact generalizes to the case where R is a set of arbitrary intervals, since the two lemmas above do not depend on the intervals being unit size. However, the result also generalizes to the case where R is a set of unit-size disks in d (constant) dimensions: first of all, any permutation of R is containment-compatible. If the disks are unit size, then each disk intersects at most K_d bottom disks, where K_d is the kissing number in d dimensions, so Lemma 4.1 generalizes. For any disk R_i, recall that T_i was the subtree of the storing cell of R_i. Any point x ∈ R_i must lie in the perimeter of T_i, which consists of at most a constant number of subtrees of comparable size; therefore, Lemma 4.1 also generalizes. The result is even more general: this approach works for any collection of unit-size fat convex regions, similar to, e.g., [5]. Interestingly, generalizing the result of Theorem 4.1 both to higher dimensions and to non-unit regions at the same time is not possible: in Appendix F we show that, independent of preprocessing, reconstructing a λ-deflated quadtree has a lower bound that can be asymptotically larger than the ambiguity A(R).

5 Conclusion

We introduced the ambiguity of a set of regions as a more fine-grained measure of the degree of their overlap. We applied this concept to uncertainty regions representing imprecise points. In the preprocessing model we showed that the ambiguity is a natural lower bound for the time complexity of the reconstruction of any proximity structure. We achieved these results via a link to the entropy of partial orders, which is of independent interest. If the regions are intervals in 1D, we showed how to sort in O(A(R)) time; if the regions are unit balls in any fixed dimension, we showed how to reconstruct quadtrees in O(A(R)) time.

In the future we plan to investigate whether our results can be generalized to other proximity structures such as Delaunay triangulations, minimum spanning trees, and convex hulls. In principle it is possible to convert quadtrees into all of these structures in linear time [18]. However, it is not clear how to do so when working with an implicit representation of the results in the case that A(R) is sub-linear.


  • [1] Peyman Afshani. On dominance reporting in 3d. In European Symposium on Algorithms, pages 41–51. Springer, 2008.
  • [2] Huck Bennett and Chee Yap. Amortized analysis of smooth quadtrees in all dimensions. Computational Geometry, 63:20–39, 2017.
  • [3] Marshall Bern, David Eppstein, and John Gilbert. Provably good mesh generation. Journal of Computer and System Sciences, 48(3):384–409, 1994.
  • [4] Graham Brightwell and Peter Winkler. Counting linear extensions. Order, 8(3):225–242, 1991.
  • [5] Kevin Buchin, Maarten Löffler, Pat Morin, and Wolfgang Mulzer. Delaunay triangulation of imprecise points simplified and extended. Algorithmica, 61:674–693, 2011.
  • [6] Kevin Buchin and Wolfgang Mulzer. Delaunay triangulations in O(sort(n)) time and more. Journal of the ACM (JACM), 58(2):6, 2011.
  • [7] Jean Cardinal, Samuel Fiorini, Gwenaël Joret, Raphaël M Jungers, and J Ian Munro. Sorting under partial information (without the ellipsoid algorithm). Combinatorica, 33(6):655–697, 2013.
  • [8] Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars. Computational Geometry: Algorithms and Applications. Springer, 2008.
  • [9] Olivier Devillers. Delaunay triangulation of imprecise points, preprocess and actually get a fast query time. Journal of Computational Geometry, 2(1):30–45, 2011.
  • [10] Esther Ezra and Wolfgang Mulzer. Convex hull of points lying on lines in time after preprocessing. Computational Geometry, 46(4):417–434, 2013.
  • [11] P.C. Fishburn and W.T. Trotter. Geometric containment orders: a survey. Order, 15:167–182, 1998.
  • [12] Michael L Fredman. How good is the information theory bound in sorting? Theoretical Computer Science, 1(4):355–361, 1976.
  • [13] Sariel Har-Peled. Geometric approximation algorithms. Number 173 in Mathematical Surveys and Monographs. American Mathematical Soc., 2011.
  • [14] Martin Held and Joseph SB Mitchell. Triangulating input-constrained planar point sets. Information Processing Letters, 109(1):54–56, 2008.
  • [15] Ivor van der Hoog, Elena Khramtcova, and Maarten Löffler. Dynamic smooth compressed quadtrees. In LIPIcs-Leibniz International Proceedings in Informatics, volume 99. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
  • [16] Jeff Kahn and Jeong Han Kim. Entropy and sorting. Journal of Computer and System Sciences, 51(3):390–399, 1995.
  • [17] János Körner. Coding of an information source having ambiguous alphabet and the entropy of graphs. In 6th Prague conference on information theory, pages 411–425, 1973.
  • [18] Maarten Löffler and Wolfgang Mulzer. Triangulating the square and squaring the triangle: quadtrees and delaunay triangulations are equivalent. SIAM Journal on Computing, 41(4):941–974, 2012.
  • [19] Maarten Löffler and Wolfgang Mulzer. Unions of onions: Preprocessing imprecise points for fast onion decomposition. Journal of Computational Geometry, 5:1–13, 2014.
  • [20] Maarten Löffler, Joseph A Simons, and Darren Strash. Dynamic planar point location with sub-logarithmic local updates. In Workshop on Algorithms and Data Structures, pages 499–511. Springer, 2013.
  • [21] Maarten Löffler and Jack Snoeyink. Delaunay triangulation of imprecise points in linear time after preprocessing. Computational Geometry, 43(3):234–242, 2010.
  • [22] Kurt Mehlhorn and Athanasios Tsakalidis. An amortized analysis of insertions into avl-trees. SIAM Journal on Computing, 15(1):22–33, 1986.
  • [23] Jürg Nievergelt and Edward M Reingold. Binary search trees of bounded balance. SIAM journal on Computing, 2(1):33–43, 1973.
  • [24] Hanan Samet. The quadtree and related hierarchical data structures. ACM Computing Surveys (CSUR), 16(2):187–260, 1984.
  • [25] Sanjeev Saxena. Dominance made simple. Information Processing Letters, 109(9):419–421, 2009.
  • [26] Gábor Simonyi. Graph entropy: a survey. Combinatorial Optimization, 20:399–441, 1995.
  • [27] Marc Van Kreveld, Maarten Löffler, and Joseph SB Mitchell. Preprocessing imprecise points and splitting triangulations. SIAM Journal on Computing, 39(7):2990–3000, 2010.

Appendix A Entropy of comparability and incomparability graphs

Körner [17] introduced the notion of the entropy of a graph. For any graph G = (V, E), let STAB(G) be the space of independent sets of G, i.e., the convex hull of the characteristic vectors of the independent sets of G. STAB(G) is a convex subspace of [0, 1]^|V| where each integer-coordinate point in the space represents an independent subset of V. Let x be any (real-valued) point in STAB(G). Körner defines the graph entropy of G as H(G) = min_{x ∈ STAB(G)} (1/|V|) Σ_{v ∈ V} log(1/x_v); this function is inspired by Shannon entropy.
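As a sanity check on this definition (our illustration, not from the paper), the two extreme cases work out as follows:

```latex
% Empty graph on n vertices: V itself is independent, so the all-ones
% vector lies in STAB(G) and every term log(1/x_v) vanishes:
H(\overline{K_n}) \;=\; \frac{1}{n} \sum_{v \in V} \log \frac{1}{1} \;=\; 0.
% Complete graph K_n: the independent sets are the single vertices, so
% STAB(K_n) is the simplex \{x \ge 0 : \sum_v x_v \le 1\}; by symmetry
% and convexity the minimum is attained at x_v = 1/n:
H(K_n) \;=\; \frac{1}{n} \sum_{v \in V} \log n \;=\; \log n.
```

Thus the entropy ranges from 0 (no edges, no information) to log n (all pairs constrained), matching the Shannon-entropy intuition.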

Let P be an arbitrary poset. The comparability graph C(P) of P is the graph on the elements of P where there is an edge between p and q if p and q are comparable. The incomparability graph of P is the graph where there is an edge between p and q if p and q are incomparable; it is denoted by C̄(P) since it is the complement of C(P). Kahn and Kim [16] define the entropy of a poset P as the entropy of C(P). The more natural quantity to consider, however, is the entropy of the incomparability graph of P, which Kahn and Kim denote by H̄(P) (note that H̄(P) = H(C̄(P))). They continue to show that the number of comparisons it takes to sort a poset P on n elements is lower-bounded by Ω(n · H̄(P)).
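To illustrate this lower bound with a toy instance (our example, not from the paper): any comparison-based algorithm that completes a poset P to a linear order must perform at least log₂ e(P) comparisons, where e(P) is the number of linear extensions of P. For small posets, e(P) can be counted by brute force:

```python
from itertools import permutations

def linear_extensions(n, relations):
    """Count the linear extensions e(P) of a poset on range(n).

    relations is a set of pairs (a, b) encoding a < b. Brute force
    over all n! permutations, so only sensible for toy instances.
    """
    def consistent(perm):
        pos = {v: i for i, v in enumerate(perm)}
        return all(pos[a] < pos[b] for a, b in relations)
    return sum(1 for p in permutations(range(n)) if consistent(p))

# Two incomparable 2-chains: 0 < 1 and 2 < 3.
e = linear_extensions(4, {(0, 1), (2, 3)})
# e = 6, so any comparison-based completion of this poset needs at
# least ceil(log2(6)) = 3 further comparisons.
```

For an antichain on n elements, e(P) = n!, which recovers the classical Θ(n log n) comparison bound for sorting from scratch.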

Cardinal et al. [7] further investigate how to sort posets using this notion of entropy. They note that certain posets are induced by a set of intervals, with p < q whenever the interval of p lies entirely to the left of the interval of q; they call these interval orders. Moreover, they show that for every poset P, there exists an interval order whose entropy is within a constant factor of that of P (and hence the same holds for the entropy of the incomparability graphs). This allows them to approximate the entropy of any poset P by searching for a corresponding interval order.

Appendix B Building the depth partition

We present the proof of Lemma 3.1 in Section 3.1, which states that for any set of intervals R we can construct the depth partition in O(n log n) time.


To construct the depth partition we process the intervals of R sorted by their left endpoints from left to right. For each level j we maintain the value M_j as the maximum of the right endpoints of the intervals in D_j, and we maintain the invariant that M_1 ≥ M_2 ≥ … ≥ M_m. Let m be the (unknown) maximal level. Initially, we have D_1 as the empty set, no other sets, and m = 1. We insert the first interval into D_1 and set M_1 to be the right endpoint of the interval.

Figure 12: An iteration of constructing the depth partition. Intervals in black are already inserted. In this example, there are currently three levels, and a new interval R_i is being inserted: M_1 is the only value greater than the right endpoint of R_i, thus R_i is inserted into D_2.

We then construct the remaining partition by iterating over the intervals in their sorted order. Consider the iteration where we are inserting an interval R_i (refer to Figure 12), and let there be m levels at this iteration. We compare the right endpoint of R_i, denoted by r_i, with the values M_1 ≥ … ≥ M_m. We find the minimal j such that M_j ≤ r_i using binary search (if no such level exists, we create a new level j = m + 1). All intervals in D_{j−1} have a left endpoint to the left of that of R_i, so R_i must be contained in an interval in D_{j−1} (when j > 1); we therefore insert R_i in the level D_j and update M_j. This gives a partition where all intervals in a level D_j have depth j − 1 in the containment graph. ∎
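The procedure above can be sketched compactly in Python. The function name and the tie-breaking rule for equal left endpoints (wider interval first, so a container precedes its contents) are our own choices; intervals are closed (left, right) pairs:

```python
def depth_partition(intervals):
    """Partition intervals into levels D_1, D_2, ... by containment depth.

    M[j] tracks the maximum right endpoint on level j+1 and, by the
    invariant from the proof, stays non-increasing across levels.
    """
    levels, M = [], []
    # Sort by left endpoint; on ties, put the wider interval first.
    for l, r in sorted(intervals, key=lambda t: (t[0], -t[1])):
        # Binary search for the minimal level index j with M[j] <= r
        # (valid because M is non-increasing).
        lo, hi = 0, len(M)
        while lo < hi:
            mid = (lo + hi) // 2
            if M[mid] <= r:
                hi = mid
            else:
                lo = mid + 1
        if lo == len(M):      # every existing level contains it: new level
            levels.append([])
            M.append(r)
        else:
            M[lo] = max(M[lo], r)
        levels[lo].append((l, r))
    return levels
```

Each interval costs O(log n) for the binary search after an O(n log n) sort, matching the lemma's bound.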

Appendix C Building the height partition

In Section 3.1, we introduced the height partition as a natural partition of a set of intervals which would suit our needs, except for the fact that it is unclear how to compute it efficiently. We briefly expand on this here.

For any set of intervals R, we can construct the height partition in O(n log² n) time.


Observe that an interval R_i contains an interval R_j if and only if l_i ≤ l_j and r_j ≤ r_i. We use this observation plus a 3-dimensional dynamic range tree [8] to construct the height partition. We sort the intervals from narrow to wide and insert them into the correct level in this order. The least wide interval cannot contain another interval of R, so we store this interval in H_1 and we insert it in the dynamic range tree as the 3-dimensional point (l, r, 1), where the third coordinate records its level.

Consider the iteration where we process an interval R_i. By this time we have already processed all intervals which could be contained in R_i. We query the range tree with the following range: [l_i, ∞) × (−∞, r_i] × (−∞, ∞), and we find the interval in this range with the maximal third coordinate in O(log² n) time. This gives us the interval which, of all intervals contained in R_i, is stored in the highest level H_h. Thus, R_i contains no intervals in levels above H_h and must be stored in level H_{h+1}. Lastly, we insert the point (l_i, r_i, h + 1) into the range tree in O(log² n) time and we continue the iteration. ∎
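For concreteness, here is a correctness-only sketch of the height partition in Python, with a quadratic scan standing in for the dynamic range tree (so it does not achieve the running time above); names are illustrative and intervals are assumed to be pairwise distinct (left, right) pairs:

```python
def height_partition(intervals):
    """Compute levels H_1, H_2, ... of the height partition.

    Processing narrow to wide guarantees every interval contained in
    (l, r) has already been assigned a level when (l, r) is handled.
    """
    height = {}   # interval -> its level (1-based)
    levels = []
    for l, r in sorted(intervals, key=lambda t: t[1] - t[0]):
        # Highest level among already-placed intervals contained in [l, r];
        # this linear scan is what the range tree replaces.
        h = max((height[p] for p in height if l <= p[0] and p[1] <= r),
                default=0)
        height[(l, r)] = h + 1
        if h == len(levels):
            levels.append([])
        levels[h].append((l, r))
    return levels
```

Swapping the scan for a dynamic range-max structure over the points (l_j, r_j, h_j) yields the bound stated in the lemma.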

Let, for an interval R_i, C(R_i) be the set of intervals in R that are contained in R_i. During the construction of the height partition we want, for R_i, to find the interval in C(R_i) that is stored in the highest level. We project each interval R_j to the point (l_j, r_j, h_j), where h_j is the level of R_j. We then perform a 3-dimensional range query on the range [l_i, ∞) × (−∞, r_i] × (−∞, ∞) to find the interval in this range with the maximal third coordinate. This leads to an interesting open problem, which we state next: