1 Introduction
Geometric set cover is a classical problem in computational geometry, with a long history and applications [agarwal2012near, agarwal2014near, bronnimann1995almost, bus2018practical, ChanG14, chan2012weighted, ChanH20, ChanH15, clarkson2007improved, ErlebachL10, MustafaRR15, mustafa2009ptas, varadarajan2010weighted]. A typical formulation involves a set $P$ of points in $\mathbb{R}^d$ and a family $\mathcal{S}$ of subsets of $P$, often called ranges, defined by a simple class of geometric objects. For instance, the sets may be defined by intervals of the real line in one dimension, or by balls, hypercubes, or halfspaces in higher dimensions. The goal is to find a smallest subfamily of $\mathcal{S}$ covering all the points of $P$. In the weighted set cover problem, each range is associated with a nonnegative weight, and the goal is to find a minimum-weight set cover. In general, these problems are NP-complete even for the simplest of geometric families, such as unit disks or unit squares in two dimensions, but they often admit efficient approximation algorithms with better (worst-case) performance than general (combinatorial) set cover.
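For intuition, the static version of the 1D problem can be solved exactly by a classical greedy sweep: repeatedly cover the leftmost uncovered point with the interval that extends furthest to the right. The sketch below is our own illustration of this baseline (the function name is ours, not part of any cited data structure).

```python
def interval_set_cover(points, intervals):
    """Exact greedy set cover of points on a line by intervals.

    points: iterable of numbers; intervals: list of (l, r) pairs.
    Returns a minimum-size list of intervals covering all points,
    or None if no cover exists.
    """
    pts = sorted(set(points))
    ivs = sorted(intervals)          # sorted by left endpoint
    cover, i, j = [], 0, 0
    while i < len(pts):
        p = pts[i]
        # Among intervals whose left endpoint is <= p, pick the one
        # reaching furthest right (the classic greedy choice).
        best = None
        while j < len(ivs) and ivs[j][0] <= p:
            if best is None or ivs[j][1] > best[1]:
                best = ivs[j]
            j += 1
        if best is None or best[1] < p:
            return None              # p is covered by no interval
        cover.append(best)
        while i < len(pts) and pts[i] <= best[1]:
            i += 1                   # skip all points the pick covers
    return cover
```

Intervals skipped by the sweep can never cover a later uncovered point, since each later point lies strictly to the right of the best right endpoint seen so far; this is what makes the single pass correct.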
Recently, an exciting line of research was launched by Agarwal et al. [agarwal2020dynamic] on dynamic geometric set cover, with the introduction of sublinear-time data structures for fully dynamic maintenance of approximate set covers for intervals in one dimension and unit squares in two dimensions. These sublinear bounds are in sharp contrast with the update time bottleneck faced by the general (combinatorial) set cover problem in the dynamic setting [AbboudA0PS19, BhattacharyaHNW21, GuptaK0P17], where $f$, the number of sets containing an element, is an inherent barrier: inserting an element at a minimum requires updating all the sets that contain it. The implicit form of sets in geometric set covering (an interval or a disk, for instance, takes only a constant number of pieces of information to add or delete) provides a natural yet challenging problem setting in which to explore the possibility of truly sublinear (possibly polylogarithmic) updates of both the elements and the sets. Indeed, following the work of Agarwal et al. [agarwal2020dynamic], Chan and He [chan2021dynamic] pushed the envelope further and managed to achieve sublinear update time for arbitrary axis-aligned squares and, if only the size of an approximate solution is needed, for disks in the plane and halfspaces in three dimensions as well.
In spite of these recent developments, the state of the art for dynamic geometric set covering is far from satisfactory even for the simplest of the set systems: covering points on the line by intervals or covering points in the plane by axisaligned squares. For instance, the best update bound for the former is for a approximation, and for the latter is for an approximation, where is an arbitrarily small constant. More importantly, none of these schemes are able to handle the case of weighted set covers. In this paper, we make substantial progress on these fronts.
1.1 Results.
We present a large collection of new results, as summarized in Table 1, which substantially improve all the main results of Agarwal et al. [agarwal2020dynamic] on unweighted intervals in 1D and unweighted unit squares in 2D, as well as the main result of Chan and He [chan2021dynamic] on unweighted arbitrary squares in 2D. Throughout the paper, all update bounds are amortized, and $\varepsilon$ denotes an arbitrarily small positive constant; constant factors hidden in $O(\cdot)$ notation may depend on $\varepsilon$. In particular, our results include the following:

For unweighted intervals in 1D, we obtain the first dynamic data structure with polylogarithmic update time and constant approximation factor. We achieve approximation with update time, which improves Agarwal et al.’s previous update bound of . (The dynamic hitting set data structure for 1D intervals in [agarwal2020dynamic] does have polylogarithmic update time but not the set cover data structure.)

For unweighted unit squares in 2D, we obtain the first dynamic data structure with update time and constant approximation factor. (All squares are axis-aligned throughout the paper.) The precise update bound is , which significantly improves Agarwal et al.’s previous update bound of .

For unweighted arbitrary squares in 2D, we obtain a dynamic data structure with update time (with Monte Carlo randomization) and constant approximation factor. This improves Chan and He’s previous (randomized) update bound of .

For unweighted halfplanes in 2D, we obtain the first dynamic data structure with sublinear update time and constant approximation factor that can efficiently report an approximate solution (in time linear in the solution size). The (randomized) update bound is . Although Chan and He’s previous solution [chan2021dynamic] can more generally handle halfspaces in 3D, it has a larger (randomized) update bound of and can only output the size of an approximate solution. (Specializing Chan and He’s solution to halfplanes in 2D can lower the update time a bit, but it would still be worse than the new bound.)
Note that although for the static problem, PTASs were known for unweighted arbitrary squares and disks in 2D [mustafa2009ptas] (and exact polynomialtime algorithms were known for halfplanes in 2D [HarPeledL12]), the running times of these static algorithms are superquadratic. Thus, for any of the 2D problems above, constant approximation factor is the best one could hope for under the current state of the art if the goal is sublinear update time.
A second significant contribution of our paper is to extend the dynamic set cover data structures to weighted instances, thus providing the first nontrivial results for dynamic weighted geometric set cover. (Although there were previous results on weighted independent set for 1D intervals and other ranges by Henzinger, Neumann, and Wiese [Henzinger0W20] and Bhore et al. [BhoreCIK20], no results on dynamic weighted geometric set cover were known even in 1D. This is in spite of the considerable work on static weighted geometric set cover [chan2012weighted, ErlebachL10, HarPeledL12, MustafaRR15, varadarajan2010weighted].) In particular, we present the following results:

For weighted intervals in 1D, we obtain a dynamic data structure with update time and constant approximation factor. The update bound is and the approximation factor is .

For weighted unit squares in 2D, we also obtain a dynamic data structure with update time and constant approximation factor (where the constant depends on and weights are assumed to be polynomially bounded integers). Even when compared to Agarwal et al.’s unweighted result [agarwal2020dynamic], our result is a substantial improvement, besides being more general.
For the cases of (unweighted or weighted) unit squares in 2D and unweighted halfplanes in 2D, the same results hold for the hitting set problem—given a set of points and a set of ranges, find the smallest (or minimum weight) subset of points that hit all the given ranges—because hitting set is equivalent to set cover for these types of ranges by duality.
Ranges                          | Approx. | Previous update time | New update time
--------------------------------|---------|----------------------|----------------
Unweighted 1D intervals         |         | [agarwal2020dynamic] |
Unweighted 2D unit squares      |         | [agarwal2020dynamic] |
Unweighted 2D arbitrary squares |         | [chan2021dynamic]    |
Unweighted 2D halfplanes        |         | [chan2021dynamic]    |
Weighted 1D intervals           |         | none                 |
Weighted 2D unit squares        |         | none                 |
1.2 Techniques.
We give six different methods to achieve these results. Many of these methods require significant new ideas that go beyond minor modifications of previous techniques:

For the unweighted 1D intervals, Agarwal et al. [agarwal2020dynamic] obtained their result with update time by a “bootstrapping” approach, but extra factors accumulate in each round of bootstrapping. To obtain polylogarithmic update time, we refine their approach with a better recursion, whose analysis distinguishes between “one-sided” and “two-sided” intervals.

For the unweighted 2D unit squares, it suffices to solve the problem for quadrants (i.e., 2-sided orthogonal ranges), due to a standard reduction. We adopt an interesting geometric divide-and-conquer approach (different from more common approaches like kd-trees or segment trees). Roughly, we form a non-uniform grid in which each column/row contains roughly the same number of points, and recursively build data structures for each grid cell and for each grid column and each grid row. Agarwal et al.’s previous data structure [agarwal2020dynamic] also used a grid but did not use recursion per column or row; the boundary of a quadrant intersects many of the grid cells, and so updating a quadrant causes many recursive calls, eventually leading to their update bound. With our new ideas, updating a quadrant requires recursive calls in only a constant number of grid columns/rows and grid cells, leading to our improved update time.
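To make the bookkeeping concrete, here is a minimal sketch (our own illustration, with hypothetical names, not the paper's code) of how an update locates the single row, column, and cell substructure affected by a nontrivial quadrant, given the grid's internal boundary coordinates; the handling of maximal "special" quadrants is separate.

```python
import bisect

def touched_substructures(vertex, row_bounds, col_bounds):
    """Return (row index, column index, cell index) of the substructures
    whose subinstance changes when a nontrivial quadrant with the given
    vertex is inserted or deleted.  row_bounds/col_bounds are the sorted
    internal horizontal/vertical grid lines.
    """
    x, y = vertex
    r = bisect.bisect_right(row_bounds, y)   # grid row containing the vertex
    c = bisect.bisect_right(col_bounds, x)   # grid column containing the vertex
    return r, c, (r, c)
```

The point is that only these constantly many substructures receive recursive calls, in contrast with an update that walks along the whole quadrant boundary.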

For unweighted 2D arbitrary squares, our method resembles Chan and He’s previous method [chan2021dynamic], dividing the problem into two cases: when the optimal value is small and when it is large. Their small-case algorithm was obtained by modifying a known static approximation algorithm based on multiplicative weight updates [agarwal2014near, bronnimann1995almost, ChanH20, clarkson1993algorithms], and achieved update time. (The notation hides polylogarithmic factors.) Their large-case algorithm employed quadtrees and achieved update time. Combining the two algorithms yielded update time, as the critical case occurs when the optimal value is near the crossover point of the two cases. We modify their large-case algorithm by incorporating some extra technical ideas (treating so-called “light” vs. “heavy” canonical rectangles differently, and carefully tuning parameters); this allows us to improve the update time uniformly for all values of the optimum, pushing the approach to its natural limit.

For unweighted 2D halfplanes, we handle the small case by adapting Chan and He’s previous method [chan2021dynamic], but we present a new method for the large case. We propose a geometric divide-and-conquer approach based on the well-known Partition Theorem of Matoušek [matouvsek1992efficient]. The Partition Theorem was originally formulated for the design of range searching data structures, but its applicability to decomposing geometric set cover instances is less apparent. The key to the approximation-factor analysis is a simple observation: the boundary of the union of the halfplanes in the optimal solution is a convex chain whose number of edges is at most the size of the optimal solution, and so in a partition of the plane into disjoint cells, the number of intersecting pairs of edges and cells can be controlled.
For weighted dynamic geometric set cover, none of the previous approaches generalizes. Essentially all previous approaches for the unweighted setting exploit the dichotomy of a small vs. large optimum: in the small case, we can generate a solution quickly from scratch; in the large case, we can tolerate a large additive error (in particular, this enables divide-and-conquer with a large number of parts). However, all of this breaks down in the weighted setting, because the cardinality of the optimal solution is no longer related to its value. A different way to bound approximation factors is required.

For weighted 1D intervals, our key new idea is to incorporate dynamic programming (DP) into the divide-and-conquer. In addition, we use a common trick of grouping weights by powers of a constant, so that the number of distinct weight groups is logarithmic.
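The weight-grouping trick can be made concrete in a few lines (a sketch in our own notation, with the base beta an assumed parameter): rounding each weight up to the next power of beta changes the objective by at most a factor of beta, while leaving only logarithmically many distinct weight classes.

```python
def weight_class(w, beta=2):
    """Return the group index k with beta**k <= w < beta**(k+1), for a
    weight w >= 1.  Weights in the same group are treated as equal
    (rounded up to beta**(k+1)), losing at most a factor of beta."""
    k, threshold = 0, beta
    while threshold <= w:
        k, threshold = k + 1, threshold * beta
    return k
```

An integer loop is used instead of floating-point logarithms so that the class boundaries are exact.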

For weighted 2D unit squares, we again use a geometric divide-and-conquer based on the grid, but the recursion gets even more interesting as we incorporate DP. (We also group weights by powers of a constant.) To keep the approximation factor under control, the number of levels of recursion must be kept small, but we can still achieve fast update time.
1.3 Preliminaries.
Throughout the paper, we use to denote the size of the optimal set cover (in the unweighted case), and to denote the set . In a size query, we want to output an approximation to the size . In a membership query, we want to determine whether a given object is in the approximate solution maintained by the data structure. In a reporting query, we want to report all elements in the approximate solution (in time sensitive to the output size). As in the previous work [agarwal2020dynamic, chan2021dynamic], in all of our results, the set cover solution we maintain is a multiset of ranges (i.e., each range may have multiple duplicates). We denote by the disjoint union of two multisets and .
2 Unweighted Interval Set Cover
Let be a dynamic (unweighted) interval set cover instance, where is the set of points and is the set of intervals, and let be the approximation factor. Our goal is to design a data structure that maintains an approximate set cover solution for the current instance and supports the desired queries (i.e., size, membership, and reporting queries) to the solution. Without loss of generality, we may assume that the point range of is , i.e., the points are always in this range.
Let and be parameters to be determined. Consider the initial instance and let . We partition the range into connected portions (i.e., intervals) such that each portion contains a bounded number of points in and endpoints of intervals in . Define and . When the instance changes, the portions remain unchanged, while the ’s and ’s change along with and . Thus, we can view each as a dynamic interval set cover instance with its own point range. We then recursively build a dynamic interval set cover data structure which maintains an approximate set cover solution for the corresponding instance. We call these instances subinstances and the recursively built data structures substructures. Besides the substructures, we also need three simple support data structures. The first one is the data structure in the following lemma, which can help compute an optimal interval set cover in output-sensitive time.

Lemma 2 ([agarwal2020dynamic]). One can store a dynamic (unweighted) interval set cover instance in a data structure with construction time and update time such that at any point, an optimal solution can be computed in output-sensitive time with access to the data structure.
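As an illustration of the partition step (our own sketch; num_portions plays the role of the branching factor, and the helper name is ours), the portions can be computed by sorting the "events" (points and interval endpoints) and cutting between every consecutive block of roughly equal size:

```python
def build_portions(points, interval_endpoints, num_portions):
    """Split the line into num_portions contiguous portions so that each
    contains roughly the same number of events (points and interval
    endpoints).  Returns the sorted list of portion boundaries
    (num_portions - 1 of them when events are plentiful)."""
    events = sorted(list(points) + list(interval_endpoints))
    per = max(1, -(-len(events) // num_portions))   # ceil division
    bounds = []
    for i in range(per, len(events), per):
        # cut halfway between two consecutive events, so no event
        # lies exactly on a boundary
        bounds.append((events[i - 1] + events[i]) / 2)
    return bounds
```

The boundaries stay fixed between reconstructions; only the contents of each portion change under updates.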
The second one is a dynamic data structure built on which can report, for a given query interval , an interval that contains (if such an interval exists); as shown in [agarwal2020dynamic], there exists such a data structure with update time, query time, and construction time. The third one is a (static) data structure which can report, for a given query point , the portion that contains ; for this one, we can simply use a binary search tree built on the portion boundaries, which has logarithmic query time. Our data structure simply consists of the substructures and the support data structures. It is easy to construct in time. To see this, we define as the total number of points in and endpoints of intervals in that are contained in the point range of . We have and for all (as is sufficiently large). Now let denote the time for constructing the data structure on an instance with . We then have the recurrence , where and for all . This recurrence solves to . Since , can be constructed in time, i.e., in time.
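The displayed recurrence was lost in extraction; in our own notation (with $r$ the number of portions, $m$ the instance size, and $m_i$ the number of points and interval endpoints in portion $i$), the recurrence described above has the shape

\[
T(m) \;=\; O(m \log m) \;+\; \sum_{i=1}^{r} T(m_i),
\qquad \sum_{i=1}^{r} m_i \le m, \quad m_i = O(m/r),
\]

so each of the $O(\log_r m)$ levels of the recursion contributes near-linear total work, giving near-linear construction time overall.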
Updating the substructures and reconstruction.
Whenever the instance changes due to an insertion/deletion on or , we first update the support data structures. After that, we update the substructures for the subinstances that change. An insertion/deletion on the point set changes only one subinstance, and an insertion/deletion on the interval set changes at most two subinstances (because an interval has two endpoints). Also, we observe that if the inserted/deleted interval is “one-sided,” in the sense that one of its endpoints lies outside the point range, then that insertion/deletion changes only one subinstance. This observation is critical in the analysis of our data structure. Besides the updates, our data structure will be periodically reconstructed. Specifically, the th reconstruction happens after processing a prescribed number of updates since the th reconstruction, proportional to the size of at the point of the th reconstruction. (The 0th reconstruction is just the initial construction of .)
Constructing a solution.
We now describe how to construct an approximately optimal set cover for the current using our data structure . Denote by the size of an optimal set cover for the current ; we define if does not have a set cover. Set for a sufficiently large constant . If , then we are able to use the algorithm of Lemma 2 to compute an optimal set cover for in time (with the help of the support data structure ). Therefore, we simulate that algorithm within that amount of time. If the algorithm successfully computes a solution, we use it as our . Otherwise, we construct as follows. For each , if can be covered by an interval , we define , otherwise let be the approximate solution for maintained in the substructure . (If for some , cannot be covered by any interval in and the substructure tells us that the current does not have a set cover, then we immediately decide that the current has no feasible set cover.) Then we define , which is clearly a set cover of . Note that for each , we can find in time an interval that covers using the support data structure (if such an interval exists).
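The per-portion combination step can be sketched as follows (our own pseudocode-level illustration; single_cover and sub_solution stand for queries to the support structure and to the substructures described above):

```python
def combine_solution(portions, single_cover, sub_solution):
    """Combine per-portion solutions: use one covering interval where it
    exists, otherwise fall back on the recursively maintained solution.

    single_cover(j) -> an interval covering portion j, or None
    sub_solution(j) -> the substructure's solution for portion j, or
                       None if the subinstance is infeasible
    Returns the combined multiset, or None if some portion is infeasible.
    """
    solution = []
    for j in portions:
        iv = single_cover(j)
        if iv is not None:
            solution.append(iv)        # one interval suffices here
            continue
        sub = sub_solution(j)
        if sub is None:
            return None                # no feasible cover exists
        solution.extend(sub)           # recurse into the substructure
    return solution
```

In the actual data structure the concatenation is only conceptual: the combined solution is stored implicitly via pointers into the substructures, which is what makes the queries below nontrivial.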
Answering queries to the solution.
We show how to store the solution properly so that the desired queries for can be answered efficiently. If is computed by the algorithm of Lemma 2, then the size of is at most and we have all elements of in hand. In this case, we simply build a binary search tree on which can answer the desired queries with the required time costs. On the other hand, if is defined as , the size of can be large and we are not able to retrieve all elements of . However, in this case, each either consists of a single interval that covers or is the solution maintained in the substructure . To support the size query, we only need to compute (which can be done by recursively making size queries to the substructures) and calculate ; we then simply store this quantity so that a size query can be answered in time. To support membership queries, we compute an index set consisting of the indices such that consists of a single interval covering . Then we collect all intervals in the ’s for , the number of which is at most . We store these intervals in a binary search tree which can answer membership queries in time. To answer a membership query , we first check if is stored in . After that, we find the (up to) two subinstances whose point ranges contain the endpoints of the query interval, and make membership queries to the corresponding substructures. Finally, to answer a reporting query, we first report the intervals stored in and then, for every , we recursively make a reporting query to , which reports the intervals in .
Now we analyze the query time. If the solution is computed by the algorithm of Lemma 2, then it is stored in a binary search tree and we can answer a size query, a membership query, and a reporting query in time, time, and time, respectively. So it suffices to consider the case where we construct the solution as . In this case, answering a size query still takes , because we explicitly compute . To analyze the time cost for a membership query, we need to distinguish one-sided and two-sided queries. We use and to denote the time cost for a one-sided membership query (i.e., one endpoint of the query interval is outside the point range) and a two-sided membership query (i.e., both endpoints of the query interval are inside the point range), respectively, when the size of the current instance is . Then for , we have the recurrence , which solves to , as we only need to recursively query on one subinstance (and the recursive query is again one-sided). For , we have the recurrence , which also solves to , as we may need to make a recursive two-sided query on one subinstance or recursive one-sided queries on two subinstances. Therefore, a membership query can be answered in time. Finally, to answer a reporting query, we first report the intervals stored in and recursively query the data structures for all such that . Thus, in the recurrence tree, the number of leaves is bounded by the number of reported elements, since at each leaf node we need to report at least one element. Since the height of the recurrence tree is and at each node of the recurrence tree the work can be done in time plus per reported element, the overall time cost for a reporting query is .
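The membership-query recurrences were lost in extraction; in our own notation (with $Q_1, Q_2$ the one-sided and two-sided query costs, $r$ the branching factor, and $q(n)$ the per-level lookup cost in the binary search trees), the branching just described gives

\[
Q_1(n) \;\le\; Q_1\bigl(O(n/r)\bigr) + q(n),
\qquad
Q_2(n) \;\le\; \max\Bigl\{\, Q_2\bigl(O(n/r)\bigr),\; 2\,Q_1\bigl(O(n/r)\bigr) \Bigr\} + q(n),
\]

so a two-sided query forks at most once into two one-sided branches, each of which then follows a single root-to-leaf path; both costs are therefore within a constant factor of the per-level cost times the recursion depth.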
Correctness.
First, we observe that makes a no-solution decision iff the current instance has no set cover. Indeed, if we make a no-solution decision, then is not covered by any interval in and the subinstance has no set cover for some ; in this case, has no set cover, because the points in can only be covered by the intervals in or by an interval that covers . On the other hand, if we do not make a no-solution decision, then the set we construct is a feasible solution for . Now it suffices to show that the solution is an approximation of an optimal set cover for within the desired factor. Let be an optimal set cover for . If is computed by the algorithm of Lemma 2, then . Otherwise, we know that the optimum exceeds the threshold for a sufficiently large constant. In this case, we show the following. For , let be the size of an optimal set cover of if is the solution of maintained by , and let otherwise. Then for all , we have . Since , we have . It suffices to show that . Let be the number of intervals in that are contained in for . Clearly, . We claim that , which implies . If can be covered by some interval in , then . Otherwise, we take all intervals in that are contained in and the (at most) two intervals in with one endpoint in which have maximal intersections with (i.e., the interval containing the left end of with the rightmost right endpoint, and the interval containing the right end of with the leftmost left endpoint). These intervals form a set cover of and thus .
Using the above observation and the fact , we conclude that .
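The counting argument above can be written compactly in our own notation (the displayed inequalities were lost in extraction): with $\mathrm{OPT}$ the optimum for the whole instance, $\mathrm{OPT}_j$ the optimum of subinstance $j$, $c_j$ the number of intervals of a fixed global optimal solution contained in portion $j$, and $r$ the number of portions,

\[
\mathrm{OPT}_j \le c_j + 2 \ \ \text{for each } j,
\qquad
\sum_{j=1}^{r} c_j \le \mathrm{OPT},
\qquad\text{hence}\qquad
\sum_{j=1}^{r} \mathrm{OPT}_j \le \mathrm{OPT} + 2r,
\]

where the "$+2$" accounts for the at most two one-sided intervals with maximal intersections added per portion. The additive $2r$ term is then absorbed because the threshold guarantees the optimum is large compared to $r$.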
Update time.
To analyze the update time of our data structure , it suffices to consider the first period (including the first reconstruction). The first period consists of operations, where is the size of the initial . The size of during the first period is always in between and and is hence , since is a sufficiently large constant. We first observe that, excluding the recursive updates for the substructures, each update of takes (amortized) time, where is the size of the current instance . Updating the support data structures takes time. When constructing the solution , we need to simulate the algorithm of Lemma 2 within time, i.e., time. The time for storing the solution is also bounded by , because we only need to explicitly store when it is computed by the algorithm of Lemma 2, in which case its size is at most . Finally, the reconstruction takes amortized time, because the time cost of the (first) reconstruction is and the first period consists of operations.
Next, we consider the recursive updates for the substructures. The depth of the recursion is bounded. With the parameters set appropriately, the approximation parameter remains under control at every level of the recursion. We distinguish three types of updates according to the current operation. The first type is caused by an insertion/deletion of a point in (we call it a point update). The second type is caused by an insertion/deletion of an interval in one of whose endpoints is outside the point range of (we call it a one-sided interval update). The third type is caused by an insertion/deletion of an interval in both of whose endpoints are inside the point range (we call it a two-sided interval update). In a point update, we only need to recursively update one substructure (and the recursive update is again a point update), because an insertion/deletion on the point set changes only one subinstance. Similarly, in a one-sided interval update, we only need to perform a recursive one-sided interval update on one substructure, because the inserted/deleted interval belongs to one subinstance. Finally, in a two-sided interval update, we may need to perform a recursive two-sided interval update on one substructure (when both endpoints of the inserted/deleted interval belong to the same portion) or two recursive one-sided interval updates on two substructures (when the two endpoints belong to different portions). Let , , denote the time costs of a point update, a one-sided interval update, and a two-sided interval update, respectively, when the size of the current instance is . Then for , we have the recurrence
which solves to . Similarly, for , we have the same recurrence, solving to . Finally, the recurrence for is
A simple induction argument shows that . Setting to be a sufficiently large constant, our data structure can be updated in amortized time.
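The displayed recurrences were lost in extraction; in our own notation (with $f(n)$ the per-level update overhead, $r$ the branching factor, and $U_{\mathrm{pt}}, U_1, U_2$ the costs of point, one-sided, and two-sided updates), the branching described above gives recurrences of the shape

\[
U_{\mathrm{pt}}(n) \le U_{\mathrm{pt}}\bigl(O(n/r)\bigr) + f(n),
\qquad
U_{1}(n) \le U_{1}\bigl(O(n/r)\bigr) + f(n),
\qquad
U_{2}(n) \le \max\Bigl\{\, U_{2}\bigl(O(n/r)\bigr),\; 2\,U_{1}\bigl(O(n/r)\bigr) \Bigr\} + f(n).
\]

A two-sided update forks at most once into two one-sided branches, each of which thereafter recurses along a single path; this is why all three costs stay within a constant factor of the per-level overhead times the recursion depth, which is what the induction argument verifies.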
Theorem. There exists a dynamic data structure for approximate unweighted interval set cover with amortized update time and construction time, which can answer size, membership, and reporting queries in , , and time, respectively, where is the size of the instance and is the size of the maintained solution.
3 Unweighted Unit-Square Set Cover
It was shown in [agarwal2020dynamic] that dynamic unit-square set cover can be reduced to dynamic quadrant set cover. Specifically, dynamic unit-square set cover can be solved with the same update time as dynamic quadrant set cover, losing only a constant factor in the approximation ratio. Therefore, it suffices to consider dynamic quadrant set cover. Note that the problem is still challenging, as we need to simultaneously deal with all four types of quadrants.
Similar to interval set cover, quadrant set cover also admits an output-sensitive algorithm:

Lemma 3 ([agarwal2020dynamic]). One can store a dynamic (unweighted) quadrant set cover instance in a data structure with construction time and update time such that at any point, a constant-approximate solution can be computed in output-sensitive time with access to the data structure.
Let be a dynamic (unweighted) quadrant set cover instance, where is the set of points and is the set of quadrants. Suppose is the approximation factor of the algorithm of Lemma 3. Our goal is to design a data structure that maintains an approximate set cover solution for the current instance and supports the desired queries to the solution, for a given parameter . Without loss of generality, we may assume that the point range of is , i.e., the points always lie in this range. We say a quadrant in is trivial (resp., nontrivial) if its vertex is outside (resp., inside) the point range. Note that a trivial quadrant is “equivalent” to a horizontal/vertical halfplane in terms of its coverage within the point range.
Let and be parameters to be determined. Consider the initial instance and let . We partition the point range into rectangular cells using horizontal lines and vertical lines such that each row (resp., column) of cells contains a bounded number of points in and vertices of the quadrants in . Let be the cell in the th row and th column for . We denote by the th row (i.e., ) for and by the th column (i.e., ) for . Define , , and , for . Next, we decompose into small subsets as follows. We say a quadrant left intersects a rectangle if and contains the left boundary of . Among a set of quadrants that left intersect a rectangle , the maximal one refers to the quadrant whose vertex is the rightmost, or equivalently, whose intersection with is maximal. Similarly, we define the notions of “right intersect”, “top intersect”, and “bottom intersect”. For , we define to be the subset consisting of all nontrivial quadrants whose vertices lie in and the (up to) four nontrivial maximal quadrants that left, right, top, bottom intersect ; we call the latter the four special quadrants in . Similarly, for (resp., ), we define (resp., ) to be the subset consisting of all nontrivial quadrants whose vertices lie in (resp., ) and the four nontrivial maximal quadrants that left, right, top, bottom intersect (resp., ); we call the latter the four special quadrants in (resp., ). When the instance changes, the cells (as well as the rows and columns ) remain unchanged, while the sets , , (resp., , , ) change along with (resp., ). We view each as a dynamic quadrant set cover instance with point range , and recursively build a substructure that maintains an approximate set cover solution for it. Similarly, we view each (resp., ) as a dynamic quadrant set cover instance with point range (resp., ), and recursively build a substructure (resp., ) that maintains an approximate set cover solution for it. For convenience, we call the cell subinstances, the row subinstances, and the column subinstances.
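To illustrate one of the definitions above (our own sketch, for one quadrant type only): represent the lower-left quadrant {(x, y) : x <= vx, y <= vy} by its vertex (vx, vy). Such a quadrant contains the left edge of a cell (x0, y0, x1, y1) exactly when vx >= x0 and vy >= y1, and the maximal one among those is the one with the rightmost vertex.

```python
def maximal_left_intersecting(vertices, cell):
    """Among lower-left quadrants {(x, y): x <= vx, y <= vy}, given by
    their vertices, return the maximal one that contains the left edge
    of the cell (x0, y0, x1, y1), i.e., the vertex furthest to the
    right; None if no such quadrant exists."""
    x0, y0, x1, y1 = cell
    # containment of the left edge {x0} x [y0, y1] requires
    # vx >= x0 and vy >= y1
    candidates = [(vx, vy) for vx, vy in vertices if vx >= x0 and vy >= y1]
    return max(candidates, default=None)
```

The analogous computations for the other three quadrant types and the right/top/bottom edges are symmetric; the support structure described below answers exactly these "maximal special quadrant" queries dynamically.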
Besides the data structures recursively built on the subinstances, we also need some simple support data structures. The first one is the data structure required by the output-sensitive algorithm for quadrant set cover (Lemma 3). The second one is a dynamic data structure built on , which can report, for a given query rectangle , the maximal quadrant in that left/right/top/bottom intersects . The third one is a dynamic data structure built on , which can report, for a given query rectangle , a quadrant in that contains (if such a quadrant exists). The fourth one is a planar point-location data structure , which can report, for a given query point , the cell that contains . As shown in [agarwal2020dynamic], all these support data structures can be built in time and updated in time. Our data structure consists of the recursively built substructures , , and the support data structures , , , . It is easy to construct in time. To see this, we notice that each subinstance has size . Also, the total size of all (row, column, cell) subinstances is bounded by . Therefore, if we denote by the construction time of the data structure when the size of the instance is , we have the recurrence for some satisfying and , , for all . The recurrence solves to .
Update of the substructures and reconstruction.
Whenever the instance changes due to an insertion/deletion on or , we first update the support data structures. After that, we update the substructures , , for which the underlying subinstances change. Observe that an insertion/deletion on the point set changes one , one , and one (so at most three subinstances). An insertion/deletion of a trivial quadrant does not change any subinstance, while an insertion/deletion of a nontrivial quadrant changes at most a bounded number of subinstances. Besides the updates, our data structure will be periodically reconstructed. Specifically, the th reconstruction happens after processing a prescribed number of updates since the th reconstruction, proportional to the size of at the point of the th reconstruction. (The 0th reconstruction is just the initial construction of .)
Constructing a solution.
We now describe how to construct an approximately optimal set cover for the current using our data structure . Denote by the size of an optimal set cover for the current ; we define if does not have a set cover. Set , where is a sufficiently large constant. If the optimum is below the threshold, then we are able to use the algorithm of Lemma 3 to compute an approximate set cover solution for in output-sensitive time. Therefore, we simulate that algorithm within that amount of time. If the algorithm successfully computes a solution, we use it as our . Otherwise, we know that the optimum exceeds the threshold. In this case, we construct by combining the solutions maintained by the substructures as follows.
Consider the trivial quadrants in . There are (up to) four maximal trivial quadrants that left, right, top, bottom intersect the point range , which we denote by , respectively. Let (resp., ) be the smallest index such that (resp., ), and (resp., ) be the largest index such that (resp., ). Note that , because otherwise and thus (which contradicts the fact that ). For the same reason, . We include in our solution . By doing this, all points in (resp., ) for or (resp., or ) are covered. The remaining task is to cover the points in the complement of the union of these four quadrants within the point range; these points lie in the cells for and .
We cover the points in using two collections of quadrants. The first collection covers all points in the cells contained in , i.e., the cells for and . Specifically, if the cell can be covered by a single quadrant , we define , otherwise we define as the approximate set cover solution for the subinstance maintained by . (If there exists such a cell that is not covered by any single quadrant and whose substructure tells us that the subinstance has no solution, then we make a no-solution decision for .) We include in our solution all quadrants in , which cover the points in for and . Now the only points uncovered are those lying in the rectangular annulus, which is the complement of the union of these cells (see Figure 1). We partition this rectangular annulus into four rectangles (again see Figure 1), which are contained in , respectively. We obtain a set cover for the points in each of these rectangles using the corresponding row/column substructure as follows. Consider the first rectangle. We temporarily insert the three virtual quadrants into the corresponding subinstance (these quadrants will be deleted afterwards) and update the substructure so that it now maintains a solution for the modified subinstance. This solution covers all points in the rectangle. We then remove the virtual quadrants from the solution (if any of them are used), and the set of the remaining quadrants covers all remaining points of the rectangle. In a similar way, we can construct sets covering the points in the other three rectangles, using the corresponding substructures. (If any of those substructures tells us the corresponding subinstance has no solution, then we make a no-solution decision for .) We include in all quadrants in these sets. This completes the construction of . To summarize, we define
(1) 
where . From the construction, it is easy to verify that is a set cover for .
Answering queries to the solution.
We show how to store the solution properly so that the desired queries for can be answered efficiently. If is computed using the output-sensitive algorithm of Lemma 3, then and we have all elements of . In this case, we simply build a binary search tree on , which can answer the desired queries with the required time costs. On the other hand, if is defined using Equation 1, we cannot compute explicitly. Instead, we simply compute the size of . We have , where and can be obtained by querying the substructures and ’s. By storing , we can answer the size query in time. In order to answer membership queries, we need some extra work. The main difficulty is that one quadrant may belong to many ’s, but we cannot afford to recursively query all substructures . To overcome this difficulty, the idea is to store the special quadrants in the ’s separately. Recall that consists of all nontrivial quadrants in whose vertices are in and the four special quadrants . We collect all special quadrants in for , the number of which is at most . We then store these special quadrants in a binary search tree which can support membership queries. To answer a membership query , we first compute its multiplicity in , which can be done by recursive membership queries on the substructures . Then it suffices to compute the multiplicity of in . Note that although there can be many ’s containing , all of them contain as a special quadrant except the one containing the vertex of . So we only need to query to obtain the multiplicity of contained