# Dynamic Geometric Set Cover, Revisited

Geometric set cover is a classical problem in computational geometry, which has been extensively studied in the past. In the dynamic version of the problem, points and ranges may be inserted and deleted, and our goal is to efficiently maintain a set cover solution (satisfying certain quality requirements). In this paper, we give a plethora of new dynamic geometric set cover data structures in 1D and 2D, which significantly improve and extend the previous results:

1. The first data structure for (1+ε)-approximate dynamic interval set cover with polylogarithmic amortized update time. Specifically, we achieve an update time of O(log^3 n/ε), improving the O(n^δ/ε) bound of Agarwal et al. [SoCG'20], where δ>0 denotes an arbitrarily small constant.
2. A data structure for O(1)-approximate dynamic unit-square set cover with 2^O(√(log n)) amortized update time, substantially improving the O(n^1/2+δ) update time of Agarwal et al. [SoCG'20].
3. A data structure for O(1)-approximate dynamic square set cover with O(n^1/2+δ) randomized amortized update time, improving the O(n^2/3+δ) update time of Chan and He [SoCG'21].
4. A data structure for O(1)-approximate dynamic 2D halfplane set cover with O(n^17/23+δ) randomized amortized update time. The previous solution for halfplane set cover by Chan and He [SoCG'21] is slower and can only report the size of the approximate solution.
5. The first sublinear results for the weighted version of dynamic geometric set cover. Specifically, we give a data structure for (3+o(1))-approximate dynamic weighted interval set cover with 2^O(√(log n loglog n)) amortized update time and a data structure for O(1)-approximate dynamic weighted unit-square set cover with O(n^δ) amortized update time.


## 1 Introduction

Geometric set cover is a classical problem in computational geometry, with a long history and applications [agarwal2012near, agarwal2014near, bronnimann1995almost, bus2018practical, ChanG14, chan2012weighted, ChanH20, ChanH15, clarkson2007improved, ErlebachL10, MustafaRR15, mustafa2009ptas, varadarajan2010weighted]. A typical formulation involves a set S of points in ℝ^d and a family R of subsets of S, often called ranges, defined by a simple class of geometric objects. For instance, the ranges may be defined by intervals of the real line in one dimension, or by balls, hypercubes, or halfspaces in higher dimensions. The goal is to find a smallest subfamily of R covering all the points of S. In the weighted set cover problem, each range is associated with a non-negative weight, and the goal is to find a minimum-weight set cover. In general, these problems are NP-hard even for the simplest of geometric families, such as unit disks or unit squares in two dimensions, but they often admit efficient approximation algorithms with better (worst-case) performance than general (combinatorial) set cover.
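To make the 1D setting concrete: in the static, unweighted case, interval set cover can be solved exactly by a left-to-right greedy sweep. A minimal sketch (function and variable names are ours; intervals are (left, right) pairs):

```python
def interval_set_cover(points, intervals):
    """Exact greedy for static 1D set cover: repeatedly cover the leftmost
    uncovered point with the interval that extends furthest to the right."""
    pts = sorted(points)
    sol = []
    i = 0
    while i < len(pts):
        p = pts[i]
        # among intervals covering p, take the one reaching furthest right
        best = max((iv for iv in intervals if iv[0] <= p <= iv[1]),
                   key=lambda iv: iv[1], default=None)
        if best is None:
            return None  # some point is uncoverable: no feasible cover
        sol.append(best)
        while i < len(pts) and pts[i] <= best[1]:
            i += 1  # skip all points covered by the chosen interval
    return sol
```

This greedy is optimal for intervals because the points not yet covered always form a suffix of the sorted order.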

Recently, an exciting line of research was launched by Agarwal et al. [agarwal2020dynamic] on dynamic geometric set cover, with the introduction of sublinear-time data structures for fully dynamic maintenance of approximate set covers for intervals in one dimension and unit squares in two dimensions. These sublinear bounds are in sharp contrast with the Ω(f) update-time bottleneck faced by the general (combinatorial) set cover problem in the dynamic setting [AbboudA0PS19, BhattacharyaHNW21, GuptaK0P17], where f is the maximum number of sets containing an element, because inserting an element at a minimum requires updating all the sets that contain it. The implicit form of sets in geometric set cover (an interval or a disk, for instance, takes only O(1) pieces of information to add or delete) provides a natural yet challenging problem setting in which to explore the possibility of truly sublinear (possibly polylogarithmic) updates of both the elements and the sets. Indeed, following the work of Agarwal et al. [agarwal2020dynamic], Chan and He [chan2021dynamic] pushed the envelope further and managed to achieve sublinear update time for arbitrary axis-aligned squares, and, if only the size of an approximate solution is needed, for disks in the plane and halfspaces in three dimensions as well.

In spite of these recent developments, the state of the art for dynamic geometric set cover is far from satisfactory even for the simplest of set systems: covering points on the line by intervals, or covering points in the plane by axis-aligned squares. For instance, the best update bound for the former is O(n^δ/ε) for a (1+ε)-approximation, and for the latter is O(n^2/3+δ) for an O(1)-approximation, where δ>0 is an arbitrarily small constant. More importantly, none of these schemes can handle weighted set covers. In this paper, we make substantial progress on all these fronts.

### 1.1 Results.

We present a large collection of new results, as summarized in Table 1, which substantially improve all the main results of Agarwal et al. [agarwal2020dynamic] on unweighted intervals in 1D and unweighted unit squares in 2D, as well as the main result of Chan and He [chan2021dynamic] on unweighted arbitrary squares in 2D. Throughout the paper, all update bounds are amortized, and δ>0 denotes an arbitrarily small constant; constant factors hidden in O notation may depend on δ. In particular, our results include the following:

1. For unweighted intervals in 1D, we obtain the first dynamic data structure with polylogarithmic update time and constant approximation factor. We achieve a (1+ε)-approximation with O(log^3 n/ε) update time, which improves Agarwal et al.'s previous update bound of O(n^δ/ε). (The dynamic hitting set data structure for 1D intervals in [agarwal2020dynamic] does have polylogarithmic update time, but not the set cover data structure.)

2. For unweighted unit squares in 2D, we obtain the first dynamic data structure with n^o(1) update time and constant approximation factor. (All squares are axis-aligned throughout the paper.) The precise update bound is 2^O(√(log n)), which significantly improves Agarwal et al.'s previous update bound of O(n^1/2+δ).

3. For unweighted arbitrary squares in 2D, we obtain a dynamic data structure with O(n^1/2+δ) update time (with Monte Carlo randomization) and constant approximation factor. This improves Chan and He's previous (randomized) update bound of O(n^2/3+δ).

4. For unweighted halfplanes in 2D, we obtain the first dynamic data structure with sublinear update time and constant approximation factor that can efficiently report an approximate solution (in time roughly linear in the solution size). The (randomized) update bound is O(n^17/23+δ). Although Chan and He's previous solution [chan2021dynamic] can more generally handle halfspaces in 3D, it has a larger (randomized) update bound and can only output the size of an approximate solution. (Specializing Chan and He's solution to halfplanes in 2D can lower the update time a bit, but it would still be worse than our new bound.)

Note that although for the static problem, PTASs were known for unweighted arbitrary squares and disks in 2D [mustafa2009ptas] (and exact polynomial-time algorithms were known for halfplanes in 2D [Har-PeledL12]), the running times of these static algorithms are superquadratic. Thus, for any of the 2D problems above, constant approximation factor is the best one could hope for under the current state of the art if the goal is sublinear update time.

A second significant contribution of our paper is to extend the dynamic set cover data structures to weighted instances, thus providing the first nontrivial results for dynamic weighted geometric set cover. (Although there were previous results on weighted independent set for 1D intervals and other ranges by Henzinger, Neumann, and Wiese [Henzinger0W20] and Bhore et al. [BhoreCIK20], no results on dynamic weighted geometric set cover were known even in 1D. This is in spite of the considerable work on static weighted geometric set cover [chan2012weighted, ErlebachL10, Har-PeledL12, MustafaRR15, varadarajan2010weighted].) In particular, we present the following results:

1. For weighted intervals in 1D, we obtain a dynamic data structure with n^o(1) update time and constant approximation factor. The update bound is 2^O(√(log n loglog n)) and the approximation factor is 3+o(1).

2. For weighted unit squares in 2D, we also obtain a dynamic data structure with O(n^δ) update time and constant approximation factor (where the constant depends on δ, and weights are assumed to be polynomially bounded integers). Even when compared to Agarwal et al.'s unweighted result [agarwal2020dynamic], our result is a substantial improvement, besides being more general.

For the cases of (unweighted or weighted) unit squares in 2D and unweighted halfplanes in 2D, the same results hold for the hitting set problem—given a set of points and a set of ranges, find the smallest (or minimum weight) subset of points that hit all the given ranges—because hitting set is equivalent to set cover for these types of ranges by duality.

### 1.2 Techniques.

We give six different methods to achieve these results. Many of these methods require significant new ideas that go beyond minor modifications of previous techniques:

1. For the unweighted 1D intervals, Agarwal et al. [agarwal2020dynamic] obtained their O(n^δ/ε) update time by a "bootstrapping" approach, in which extra polynomial factors accumulate in each round of bootstrapping. To obtain polylogarithmic update time, we refine their approach with a better recursion, whose analysis distinguishes between "one-sided" and "two-sided" intervals.

2. For the unweighted 2D unit squares, it suffices to solve the problem for quadrants (i.e., 2-sided orthogonal ranges), due to a standard reduction. We adopt an interesting geometric divide-and-conquer approach (different from more common approaches like k-d trees or segment trees). Roughly, we form an r×r nonuniform grid, where each column/row contains O(n/r) points, and recursively build data structures for each grid cell, each grid column, and each grid row. Agarwal et al.'s previous data structure [agarwal2020dynamic] also used such a grid but did not use recursion per column or row; the boundary of a quadrant intersects O(r) of the grid cells, and so updating a quadrant causes O(r) recursive calls, eventually leading to their polynomial update time. With our new ideas, updating a quadrant requires recursive calls in only O(1) grid columns/rows and O(1) grid cells, leading to 2^O(√(log n)) update time.

3. For unweighted 2D arbitrary squares, our method resembles Chan and He's previous method [chan2021dynamic], dividing the problem into two cases: when the optimal value opt is small and when opt is large. Their small-opt algorithm was obtained by modifying a known static approximation algorithm based on multiplicative weight updates [agarwal2014near, bronnimann1995almost, ChanH20, clarkson1993algorithms], and their large-opt algorithm employed quadtrees. (Throughout, the Õ notation hides polylogarithmic factors.) Combining the two algorithms yielded O(n^2/3+δ) update time, with the critical case occurring at intermediate values of opt. We modify their large-opt algorithm by incorporating some extra technical ideas (treating so-called "light" vs. "heavy" canonical rectangles differently, and carefully tuning parameters); this allows us to improve the update time to O(n^1/2+δ) uniformly for all opt, pushing the approach to its natural limit.

4. For unweighted 2D halfplanes, we handle the small-opt case by adapting Chan and He's previous method [chan2021dynamic], but we present a new method for the large-opt case. We propose a geometric divide-and-conquer approach based on the well-known Partition Theorem of Matoušek [matouvsek1992efficient]. The Partition Theorem was originally formulated for the design of range searching data structures, and its applicability to decomposing geometric set cover instances is less apparent. The key to the approximation-factor analysis is a simple observation: the boundary of the union of the halfplanes in the optimal solution is a convex chain with at most opt edges, and so in a partition of the plane into disjoint cells, only a small number of edge–cell pairs intersect.
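As a toy illustration of the nonuniform grid used in the unit-square method (item 2), the column and row boundaries can be taken as coordinate quantiles, so that each column/row receives a balanced share of the points. A sketch (names and the exact balancing rule are our simplifications):

```python
def nonuniform_grid(points, r):
    """Toy r-quantile grid: pick r-1 column and row boundaries so that each
    column/row contains roughly n/r of the points."""
    xs = sorted(p[0] for p in points)
    ys = sorted(p[1] for p in points)
    n = len(points)
    # boundary k sits at the (k*n/r)-th smallest coordinate
    cols = [xs[min(n - 1, (k * n) // r)] for k in range(1, r)]
    rows = [ys[min(n - 1, (k * n) // r)] for k in range(1, r)]
    return cols, rows
```

In the actual data structure the grid is built on the initial instance and kept fixed until the next periodic reconstruction, so the balance degrades only by a constant factor between rebuilds.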

For weighted dynamic geometric set cover, none of the previous approaches generalizes. Essentially all previous approaches for the unweighted setting exploit the dichotomy of small vs. large opt: in the small-opt case, we can generate a solution quickly from scratch; in the large-opt case, we can tolerate a large additive error (in particular, this enables divide-and-conquer with a large number of parts). All of this breaks down in the weighted setting, because the cardinality of the optimal solution is no longer related to its weight. A different way to bound approximation factors is required.

1. For weighted 1D intervals, our key new idea is to incorporate dynamic programming (DP) into the divide-and-conquer. In addition, we use a common trick of grouping weights by powers of a constant, so that the number of distinct weight groups is logarithmic.

2. For weighted 2D unit squares, we again use a geometric divide-and-conquer based on a nonuniform grid, but the recursion gets even more interesting as we incorporate DP. (We also group weights by powers of a constant.) To keep the approximation factor O(1), the number of levels of recursion needs to be bounded by a constant, but we can still achieve O(n^δ) update time.
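To illustrate the DP ingredient in the weighted 1D case, here is a static min-weight interval cover DP (this is not the paper's dynamic algorithm; names are ours). The weight-grouping trick mentioned above would additionally round each weight up to the nearest power of a constant, leaving logarithmically many distinct groups at the cost of a constant factor in the approximation:

```python
import bisect
import math

def weighted_interval_cover(points, intervals):
    """Static min-weight cover of points on a line by weighted intervals
    (l, r, w). dp[i] = cheapest cover of the first i sorted points: the
    interval covering the last point leaves an uncovered prefix."""
    pts = sorted(points)
    n = len(pts)
    dp = [0] + [math.inf] * n
    for i in range(1, n + 1):
        p = pts[i - 1]
        for (l, r, w) in intervals:
            if l <= p <= r:
                # points with coordinate < l remain to be covered
                j = bisect.bisect_left(pts, l)
                dp[i] = min(dp[i], dp[j] + w)
    return dp[n]  # math.inf if no feasible cover exists
```

The correctness hinges on the 1D structure: once an interval covering the rightmost point is fixed, the still-uncovered points form a prefix of the sorted order.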

### 1.3 Preliminaries.

Throughout the paper, we use opt to denote the size of the optimal set cover (in the unweighted case), and [n] to denote the set {1, …, n}. In a size query, we want to output an approximation to the size opt. In a membership query, we want to determine whether a given object is in the approximate solution maintained by the data structure. In a reporting query, we want to report all elements in the approximate solution (in time sensitive to the output size). As in the previous work [agarwal2020dynamic, chan2021dynamic], in all of our results, the set cover solution we maintain is a multi-set of ranges (i.e., each range may occur with multiplicity). We denote by X ⊔ Y the disjoint union of two multi-sets X and Y.

## 2 Unweighted Interval Set Cover

Let (S, I) be a dynamic (unweighted) interval set cover instance, where S is the set of points in ℝ and I is the set of intervals, and let 1+ε be the desired approximation factor. Our goal is to design a data structure D that maintains a (1+ε)-approximate set cover solution for the current instance (S, I) and supports the desired queries (i.e., size, membership, and reporting queries) to the solution. Without loss of generality, we may assume that the point range of (S, I) is [0, 1], i.e., the points in S are always in the range [0, 1].

Let r be a parameter to be determined. Consider the initial instance (S, I) and let n₀ = |S| + |I|. We partition the range [0, 1] into r connected portions (i.e., intervals) J₁, …, J_r such that each portion contains O(n₀/r) points in S and endpoints of intervals in I. Define S_i = S ∩ J_i and I_i = {I' ∈ I : I' has an endpoint in J_i}. When the instance changes, the portions remain unchanged while the S_i's and I_i's change along with S and I. Thus, we can view each (S_i, I_i) as a dynamic interval set cover instance with point range J_i. We then recursively build a dynamic interval set cover data structure D_i which maintains a (1+ε′)-approximate set cover solution I*_i for (S_i, I_i), where ε′ = (1 − 1/log n₀)ε. We call (S₁, I₁), …, (S_r, I_r) sub-instances and call D₁, …, D_r sub-structures. Besides the recursively built sub-structures, we also need three simple support data structures. The first one is the data structure in the following lemma, which can help compute an optimal interval set cover in output-sensitive time. [[agarwal2020dynamic]] One can store a dynamic (unweighted) interval set cover instance (S, I) in a data structure A with O(n log n) construction time and O(log n) update time such that at any point, an optimal solution for (S, I) can be computed in O(opt · log n) time with access to A.

The second one is a dynamic data structure built on I which can report, for a given query interval J, an interval in I that contains J (if such an interval exists); as shown in [agarwal2020dynamic], there exists such a data structure with O(log n) update time, O(log n) query time, and O(n log n) construction time. The third one is a (static) data structure which can report, for a given query point q ∈ [0, 1], the portion J_i that contains q; for this one, we can simply use a binary search tree built on the portions, which has O(log r) query time. Our data structure D simply consists of the sub-structures D₁, …, D_r and the support data structures. It is easy to see that D can be constructed in near-linear time. To see this, we define n_i as the total number of points in S and endpoints of intervals in I that are contained in the point range of (S_i, I_i). We have Σ_{i=1}^r n_i = O(n) and n_i = O(n/r) for all i ∈ [r] (as long as n is sufficiently large). Now let C(n) denote the time for constructing the data structure on an instance of size n. We then have the recurrence C(n) ≤ Σ_{i=1}^r C(n_i) + O(n log n), where Σ_{i=1}^r n_i = O(n) and n_i = O(n/r) for all i ∈ [r]. This recurrence solves to C(n) = O(n log n · log_r n). Since log_r n = O(log n), D can be constructed in O(n log n · log_r n) time, i.e., in O(n log² n) time.
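With C(n) the construction time on an instance of size n and n_i the sub-instance sizes, the recurrence unrolls level by level (a sketch consistent with the analysis above):

```latex
C(n) \;\le\; \sum_{i=1}^{r} C(n_i) + O(n \log n),
\qquad \sum_{i=1}^{r} n_i = O(n), \quad n_i = O(n/r).
```

Each level of the recursion tree has total instance size O(n) and hence costs O(n log n); the sizes shrink by a factor of r per level, so there are O(log_r n) levels, giving C(n) = O(n log n · log_r n).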

#### Updating the sub-structures and reconstruction.

Whenever the instance (S, I) changes due to an insertion/deletion on S or I, we first update the support data structures. After that, we update the sub-structures D_i for the sub-instances (S_i, I_i) that change. An insertion/deletion on S changes only one (S_i, I_i), and an insertion/deletion on I changes at most two (S_i, I_i)'s (because an interval has two endpoints). Also, we observe that if the inserted/deleted interval is "one-sided", in the sense that one of its endpoints is outside the point range [0, 1], then the insertion/deletion changes only one (S_i, I_i). This observation is critical in the analysis of our data structure. Besides the updates, our data structure D is periodically reconstructed. Specifically, the j-th reconstruction happens after processing n_{j−1}/2 updates from the (j−1)-th reconstruction, where n_{j−1} denotes the size of (S, I) at the time of the (j−1)-th reconstruction. (The 0-th reconstruction is just the initial construction of D.)

#### Constructing a solution.

We now describe how to construct an approximately optimal set cover I_appx for the current (S, I) using our data structure D. Denote by opt the size of an optimal set cover for the current (S, I); we define opt = ∞ if (S, I) does not have a set cover. Set k = c · r log n/ε for a sufficiently large constant c. If opt ≤ k, then we are able to use the algorithm of Lemma 2 to compute an optimal set cover for (S, I) in O(k log n) time (with the help of the support data structure A). Therefore, we simulate that algorithm within that amount of time. If the algorithm successfully computes a solution, we use it as our I_appx. Otherwise, we construct I_appx as follows. For each i ∈ [r], if J_i can be covered by an interval in I, we define I*_i = {I'_i} for such an interval I'_i; otherwise, let I*_i be the (1+ε′)-approximate solution for (S_i, I_i) maintained in the sub-structure D_i. (If, for some i ∈ [r], J_i cannot be covered by any interval in I and the sub-structure D_i tells us that the current (S_i, I_i) does not have a set cover, then we immediately decide that the current (S, I) has no feasible set cover.) Then we define I_appx = ⊔_{i=1}^r I*_i, which is clearly a set cover of (S, I). Note that for each i ∈ [r], we can find in O(log n) time an interval that covers J_i using the second support data structure (if such an interval exists).
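Ignoring dynamization and the simulation step, the assembly of the solution can be mimicked by a toy static recursion (a sketch under simplified assumptions of ours: portions are chosen by point count, and the base case is the exact greedy):

```python
def assemble_cover(points, intervals, r=4, depth=0):
    """Toy static analogue of the recursive assembly: split the points into
    r portions; cover a portion with one interval when possible, otherwise
    recurse on the portion's sub-instance."""

    def greedy(pts, ivs):  # exact greedy sweep, used as the base case
        sol, i = [], 0
        while i < len(pts):
            p = pts[i]
            best = max((iv for iv in ivs if iv[0] <= p <= iv[1]),
                       key=lambda iv: iv[1], default=None)
            if best is None:
                return None
            sol.append(best)
            while i < len(pts) and pts[i] <= best[1]:
                i += 1
        return sol

    pts = sorted(points)
    if len(pts) <= r or depth > 50:
        return greedy(pts, intervals)
    sol, chunk = [], (len(pts) + r - 1) // r
    for b in range(0, len(pts), chunk):
        block = pts[b:b + chunk]
        lo, hi = block[0], block[-1]
        # a single interval covering the whole portion, if one exists
        one = next((iv for iv in intervals if iv[0] <= lo and hi <= iv[1]), None)
        if one is not None:
            sol.append(one)
            continue
        sub = [iv for iv in intervals if iv[0] <= hi and iv[1] >= lo]
        part = assemble_cover(block, sub, r, depth + 1)
        if part is None:
            return None
        sol.extend(part)
    return sol
```

The real data structure does not recompute this recursion from scratch; each sub-structure maintains its portion's solution across updates, and the top level only stitches the maintained pieces together.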

#### Answering queries to the solution.

We show how to store the solution I_appx properly so that the desired queries can be answered efficiently. If I_appx is computed by the algorithm of Lemma 2, then its size is at most k and we have all of its elements in hand. In this case, we simply build a binary search tree on I_appx, which can answer the desired queries with the required time costs. On the other hand, if I_appx is defined as ⊔_{i=1}^r I*_i, then I_appx can be large and we are not able to retrieve all of its elements. However, in this case, each I*_i either consists of a single interval that covers J_i or is the solution maintained in the sub-structure D_i. To support the size query, we only need to compute the sizes |I*_1|, …, |I*_r| (which can be done by recursively making size queries to the sub-structures) and calculate |I_appx| = Σ_{i=1}^r |I*_i|; we then simply store this quantity so that a size query can be answered in O(1) time. To support membership queries, we compute the index set Z consisting of the indices i ∈ [r] such that I*_i consists of a single interval covering J_i. Then we collect all intervals in the I*_i's for i ∈ Z, the number of which is at most r. We store these intervals in a binary search tree, which can answer membership queries in O(log r) time. To answer a membership query for an interval I', we first check whether I' is stored in this tree. After that, we find the (up to) two sub-instances (S_i, I_i) such that I_i contains I', and make membership queries to the corresponding sub-structures to check whether I' ∈ I*_i (for those i ∉ Z). Finally, to answer a reporting query, we first report the intervals stored in the tree, and then, for every i ∉ Z, we recursively make a reporting query to D_i, which reports the intervals in I*_i.

Now we analyze the query time. If the solution I_appx is computed by the algorithm of Lemma 2, then it is stored in a binary search tree and we can answer a size query, a membership query, and a reporting query in O(1) time, O(log n) time, and O(|I_appx|) time, respectively. So it suffices to consider the case where we construct the solution as I_appx = ⊔_{i=1}^r I*_i. In this case, answering a size query still takes O(1) time, because we explicitly store |I_appx|. To analyze the time cost of a membership query, we need to distinguish one-sided and two-sided queries. We use M₁(n) and M₂(n) to denote the time cost of a one-sided membership query (i.e., one endpoint of the query interval is outside the point range) and a two-sided membership query (i.e., both endpoints of the query interval are inside the point range), respectively, when the size of the current instance is n. For M₁, we have the recurrence M₁(n) ≤ M₁(O(n/r)) + O(log n), which solves to M₁(n) = O(log_r n · log n), as we only need to recursively query one sub-structure (and the recursive query is again one-sided). For M₂, we have the recurrence M₂(n) ≤ max{M₂(O(n/r)), 2M₁(O(n/r))} + O(log n), which also solves to M₂(n) = O(log_r n · log n), as we may need either a recursive two-sided query on one sub-structure or recursive one-sided queries on two sub-structures. Therefore, a membership query can be answered in O(log² n) time. Finally, to answer a reporting query, we first report the intervals stored at the top level and recursively query the sub-structures D_i for all i whose solution is maintained recursively. In the recursion tree, the number of leaves is bounded by |I_appx|, since at each leaf we report at least one element. Since the height of the recursion tree is O(log_r n) and at each node of the recursion tree the work can be done in O(log n) time plus O(1) time per reported element, the overall time cost of a reporting query is O(|I_appx| · log² n).

#### Correctness.

First, we observe that D makes a no-solution decision iff the current instance (S, I) has no set cover. Indeed, if we make a no-solution decision, then some J_i is not covered by any interval in I and the sub-instance (S_i, I_i) has no set cover; in this case, (S, I) has no set cover, because the points in S_i can only be covered by the intervals in I_i or by an interval that covers J_i. On the other hand, if we do not make a no-solution decision, then the set I_appx we construct is a feasible solution for (S, I). Now it suffices to show that the solution I_appx is a (1+ε)-approximation of an optimal set cover for (S, I). Let I_opt be an optimal set cover for (S, I), so opt = |I_opt|. We have to show |I_appx| ≤ (1+ε) · opt. If I_appx is computed by the algorithm of Lemma 2, then |I_appx| = opt. Otherwise, the simulation did not finish, which implies opt > k = c · r log n/ε for our sufficiently large constant c, because if opt ≤ k, the algorithm would have computed a solution within the allotted time. In this case, we show the following. For i ∈ [r], let opt_i be the size of an optimal set cover of (S_i, I_i) if I*_i is the solution of (S_i, I_i) maintained by D_i, and let opt_i = 1 otherwise. Then for all i ∈ [r], we have |I*_i| ≤ (1+ε′) · opt_i. Since |I_appx| = Σ_{i=1}^r |I*_i|, we have |I_appx| ≤ (1+ε′) · Σ_{i=1}^r opt_i. It suffices to show that Σ_{i=1}^r opt_i ≤ opt + 2r. Let m_i be the number of intervals in I_opt that are contained in J_i, for i ∈ [r]. Clearly, Σ_{i=1}^r m_i ≤ opt. We claim that opt_i ≤ m_i + 2, which implies Σ_{i=1}^r opt_i ≤ opt + 2r. If J_i can be covered by some interval in I, then opt_i = 1 ≤ m_i + 2. Otherwise, we take all intervals in I_opt that are contained in J_i and the (at most) two intervals in I_opt with exactly one endpoint in J_i which have maximal intersections with J_i (i.e., the interval containing the left end of J_i with the rightmost right endpoint, and the interval containing the right end of J_i with the leftmost left endpoint). These intervals form a set cover of (S_i, I_i), and thus opt_i ≤ m_i + 2.

Using the above observation and the fact that opt > k, we conclude that |I_appx| ≤ (1+ε′)(opt + 2r) ≤ (1+ε) · opt.
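The chain of inequalities, with ε′ the sub-structure parameter and r the number of portions (a sketch consistent with the argument above):

```latex
|I_{\mathrm{appx}}| \;=\; \sum_{i=1}^{r} |I^{*}_{i}|
\;\le\; (1+\varepsilon') \sum_{i=1}^{r} \mathrm{opt}_i
\;\le\; (1+\varepsilon')(\mathrm{opt} + 2r)
\;\le\; (1+\varepsilon)\,\mathrm{opt}.
```

The last step needs opt to exceed a threshold of order r/(ε − ε′), which is exactly what the case opt > k guarantees.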

#### Update time.

To analyze the update time of our data structure D, it suffices to consider the first period (including the first reconstruction). The first period consists of n₀/2 operations, where n₀ is the size of the initial (S, I). The size of (S, I) during the first period is always between n₀/2 and 3n₀/2, and is hence Θ(n₀). We first observe that, excluding the recursive updates of the sub-structures, each update of D takes O(r log² n/ε) amortized time, where n is the size of the current instance (S, I). Updating the support data structures takes O(log n) time. When constructing the solution I_appx, we need to simulate the algorithm of Lemma 2 within O(k log n) time, i.e., O(r log² n/ε) time. The time for storing the solution is also bounded by O(r log² n/ε), because we only need to explicitly store I_appx when it is computed by the algorithm of Lemma 2, in which case its size is at most k. Finally, the reconstruction takes O(log² n) amortized time, because the time cost of the (first) reconstruction is O(n₀ log² n₀) and the first period consists of n₀/2 operations.

Next, we consider the recursive updates of the sub-structures. The depth of the recursion is O(log_r n). If we set ε′ = (1 − 1/log n₀)ε at each level of the recursion, the approximation parameter remains Ω(ε) at every level of the recurrence. We distinguish three types of updates, according to the current operation. The first type is caused by an insertion/deletion of a point in S (we call it a point update). The second type is caused by an insertion/deletion of an interval in I with one endpoint outside the point range of (S, I) (we call it a one-sided interval update). The third type is caused by an insertion/deletion of an interval in I with both endpoints inside the point range (we call it a two-sided interval update). In a point update, we only need to recursively update one sub-structure (and the recursive update is again a point update), because an insertion/deletion on S changes only one (S_i, I_i). Similarly, in a one-sided interval update, we only need to do a recursive one-sided interval update on one sub-structure, because the inserted/deleted interval has only one endpoint inside the point range, belonging to one J_i. Finally, in a two-sided interval update, we may need to do a recursive two-sided interval update on one sub-structure (when the two endpoints of the inserted/deleted interval belong to the same portion J_i) or two recursive one-sided interval updates on two sub-structures (when the two endpoints belong to different J_i's). Let U(n), U₁(n), and U₂(n) denote the time costs of a point update, a one-sided interval update, and a two-sided interval update, respectively, when the size of the current instance (S, I) is n. Then for U, we have the recurrence

 U(n) ≤ U(O(n/r)) + O(r log² n/ε),

which solves to U(n) = O(r log_r n · log² n/ε). Similarly, for U₁, we have the same recurrence, solving to U₁(n) = O(r log_r n · log² n/ε). Finally, the recurrence for U₂ is

 U₂(n) ≤ max{U₂(O(n/r)), 2U₁(O(n/r))} + O(r log² n/ε) = max{U₂(O(n/r)), O(r log_r n · log² n/ε)} + O(r log² n/ε).

A simple induction argument shows that U₂(n) = O(r log_r n · log² n/ε). Setting r to be a sufficiently large constant, our data structure D can be updated in O(log³ n/ε) amortized time.
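Unrolling the point-update recurrence over its O(log_r n) levels gives the stated bound (the additive term is paid once per level):

```latex
U(n) \;\le\; U(O(n/r)) + O\!\left(\frac{r \log^2 n}{\varepsilon}\right)
\;\Longrightarrow\;
U(n) \;=\; O\!\left(\frac{r \log_r n \cdot \log^2 n}{\varepsilon}\right),
```

which is O(log³ n/ε) once r is a constant, since log_r n = O(log n) in that case.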

There exists a dynamic data structure for (1+ε)-approximate unweighted interval set cover with O(log³ n/ε) amortized update time and O(n log² n) construction time, which can answer size, membership, and reporting queries in O(1), O(log² n), and O(|I_appx| · log² n) time, respectively, where n is the size of the instance and |I_appx| is the size of the maintained solution.

## 3 Unweighted Unit-Square Set Cover

It was shown in [agarwal2020dynamic] that dynamic unit-square set cover can be reduced to dynamic quadrant set cover. Specifically, dynamic unit-square set cover can be solved with the same update time as dynamic quadrant set cover, by losing only a constant factor on the approximation ratio. Therefore, it suffices to consider dynamic quadrant set cover. Note that the problem is still challenging, as we need to simultaneously deal with all four types of quadrants.

Similar to interval set cover, quadrant set cover also admits an output-sensitive algorithm: [[agarwal2020dynamic]] One can store a dynamic (unweighted) quadrant set cover instance (S, Q) in a data structure with Õ(n) construction time and Õ(1) update time such that at any point, a constant-approximate solution for (S, Q) can be computed in Õ(opt) time with access to the data structure.

Let (S, Q) be a dynamic (unweighted) quadrant set cover instance, where S is the set of points in ℝ² and Q is the set of quadrants. Suppose α is the approximation factor of the algorithm of Lemma 3. Our goal is to design a data structure that maintains an O(α)-approximate set cover solution for the current instance (S, Q) and supports the desired queries to the solution, for a given grid parameter r. Without loss of generality, we may assume that the point range of (S, Q) is [0, 1]², i.e., the points in S are always in the square [0, 1]². We say a quadrant in Q is trivial (resp., nontrivial) if its vertex is outside (resp., inside) the point range [0, 1]². Note that a trivial quadrant is "equivalent" to a horizontal/vertical halfplane in terms of its coverage within [0, 1]².
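Representing a quadrant by its vertex and opening directions makes the trivial/nontrivial distinction and the halfplane equivalence easy to check; a small sketch (the representation is ours):

```python
def contains(quad, p):
    """quad = (vx, vy, ox, oy): the quadrant with vertex (vx, vy) opening in
    directions ox, oy ∈ {+1, -1} (ox = +1 means it contains all x >= vx)."""
    vx, vy, ox, oy = quad
    return (p[0] - vx) * ox >= 0 and (p[1] - vy) * oy >= 0

def is_trivial(quad):
    """A quadrant is trivial iff its vertex lies outside the range [0, 1]^2."""
    vx, vy, _, _ = quad
    return not (0 <= vx <= 1 and 0 <= vy <= 1)
```

For example, the trivial quadrant (2.0, 0.4, −1, +1) behaves inside [0, 1]² exactly like the halfplane y ≥ 0.4, since the constraint x ≤ 2 is vacuous there.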

#### Update of the sub-structures and reconstruction.

Whenever the instance (S, Q) changes due to an insertion/deletion on S or Q, we first update the support data structures. After that, we update the sub-structures (for the grid cells, rows, and columns) whose underlying sub-instances change. Observe that an insertion/deletion on S changes only one cell sub-instance, one row sub-instance, and one column sub-instance (so at most three sub-instances). An insertion/deletion of a trivial quadrant does not change any sub-instance, while an insertion/deletion of a nontrivial quadrant changes at most a constant number of sub-instances. Besides the updates, our data structure D is periodically reconstructed. Specifically, the j-th reconstruction happens after processing n_{j−1}/2 updates from the (j−1)-th reconstruction, where n_{j−1} denotes the size of (S, Q) at the time of the (j−1)-th reconstruction. (The 0-th reconstruction is just the initial construction of D.)

#### Constructing a solution.

We now describe how to construct an approximately optimal set cover Q_appx for the current (S, Q) using our data structure D. Denote by opt the size of an optimal set cover for the current (S, Q); we define opt = ∞ if (S, Q) does not have a set cover. Set k = c · r, where c is a sufficiently large constant. If opt ≤ k, then we are able to use the algorithm of Lemma 3 to compute an α-approximate set cover solution for (S, Q) in Õ(k) time. Therefore, we simulate that algorithm within that amount of time. If the algorithm successfully computes a solution, we use it as our Q_appx. Otherwise, we know that opt > k. In this case, we construct Q_appx by combining the solutions maintained by the sub-structures, as follows.

Consider the trivial quadrants in Q. There are (up to) four maximal trivial quadrants that intersect the point range [0, 1]² from the left, right, top, and bottom, which we denote by Q^←, Q^→, Q^↑, Q^↓, respectively. Let i⁻ (resp., j⁻) be the smallest index such that the column C_{i⁻} is not contained in Q^← (resp., the row R_{j⁻} is not contained in Q^↓), and let i⁺ (resp., j⁺) be the largest index such that the column C_{i⁺} is not contained in Q^→ (resp., the row R_{j⁺} is not contained in Q^↑). Note that i⁻ ≤ i⁺, because otherwise every column would be contained in Q^← ∪ Q^→ and thus opt ≤ 2 (which contradicts the fact opt > k). For the same reason, j⁻ ≤ j⁺. We include Q^←, Q^→, Q^↑, Q^↓ in our solution Q_appx. By doing this, all points in the columns C_i (resp., rows R_j) for i < i⁻ or i > i⁺ (resp., j < j⁻ or j > j⁺) are covered. The remaining task is to cover the points in the complement of Q^← ∪ Q^→ ∪ Q^↑ ∪ Q^↓ in [0, 1]²; these points lie in the cells □_{i,j} for i⁻ ≤ i ≤ i⁺ and j⁻ ≤ j ≤ j⁺.

We cover these remaining points using two collections of quadrants. The first collection covers all points in the cells strictly inside this range, i.e., the cells □_{i,j} for (i, j) ∈ P, where P = {(i, j) : i⁻ < i < i⁺, j⁻ < j < j⁺}. Specifically, if the cell □_{i,j} can be covered by a single quadrant Q' ∈ Q, we define Q*_{i,j} = {Q'}; otherwise we define Q*_{i,j} as the approximate set cover solution for the sub-instance of □_{i,j} maintained by its cell sub-structure. (If there exists a cell □_{i,j} with (i, j) ∈ P that is not covered by any single quadrant and whose sub-structure tells us that its sub-instance has no solution, then we make a no-solution decision for (S, Q).) We include in our solution all quadrants in the Q*_{i,j}'s, which cover the points in □_{i,j} for (i, j) ∈ P. Now the only points uncovered are those lying in the rectangular annulus which is the complement of the union of the cells □_{i,j}, (i, j) ∈ P, within the remaining range (see Figure 1). We partition this rectangular annulus into four rectangles (again see Figure 1), which are contained in the column C_{i⁻}, the column C_{i⁺}, the row R_{j⁻}, and the row R_{j⁺}, respectively. We obtain a set cover for the points in each of the four rectangles using the corresponding row/column sub-structure, as follows. Consider the rectangle contained in the column C_{i⁻}. We temporarily insert three virtual quadrants covering the rest of the column into the corresponding sub-instance (these quadrants will be deleted afterwards) and update the sub-structure so that it now maintains a solution for the modified sub-instance. This solution covers all points in the column. We then remove the virtual quadrants from the solution (if any of them are used), and the set Q*₁ of the remaining quadrants covers all points in the rectangle. In a similar way, we can construct sets Q*₂, Q*₃, Q*₄ that cover the points in the other three rectangles, respectively, by using the sub-structures of C_{i⁺}, R_{j⁻}, R_{j⁺}. (If any of those sub-structures tells us that the corresponding sub-instance has no solution, then we make a no-solution decision for (S, Q).) We include in Q_appx all quadrants in Q* = Q*₁ ⊔ Q*₂ ⊔ Q*₃ ⊔ Q*₄. This completes the construction of Q_appx. To summarize, we define

 Q_appx = {Q^↑, Q^↓, Q^←, Q^→} ⊔ Q* ⊔ ( ⨆_{(i,j)∈P} Q*_{i,j} ), (1)

where Q* = Q*₁ ⊔ Q*₂ ⊔ Q*₃ ⊔ Q*₄ and P = {(i, j) : i⁻ < i < i⁺, j⁻ < j < j⁺}. From the construction, it is easy to verify that Q_appx is a set cover for (S, Q).