Online unit clustering in higher dimensions

08/08/2017 ∙ by Adrian Dumitrescu, et al. ∙ University of Wisconsin-Milwaukee

We revisit the online Unit Clustering problem in higher dimensions: Given a set of n points in R^d that arrive one by one, partition the points into clusters (subsets) of diameter at most one, so as to minimize the number of clusters used. In this paper, we work in R^d using the L_∞ norm. We show that the competitive ratio of any algorithm (deterministic or randomized) for this problem must depend on the dimension d. This resolves an open problem raised by Epstein and van Stee (WAOA 2008). We also give a randomized online algorithm with competitive ratio O(d^2) for Unit Clustering of integer points (i.e., points in Z^d, d∈N, under the L_∞ norm). We complement these results with some additional lower bounds for related problems in higher dimensions.


1 Introduction

Covering and clustering are ubiquitous problems in the theory of algorithms, computational geometry, and optimization. Such problems can be asked in any metric space; however, this generality often restricts the quality of the results, particularly for online algorithms. Here we study lower bounds for several such problems in high-dimensional Euclidean space, mostly under the L_∞ norm. We first consider their offline versions.

Problem 1.

k-Center. Given a set of n points in R^d and an integer k, cover the set by k congruent balls centered at the points so that the diameter of the balls is minimized.

The following two problems are dual to Problem 1.

Problem 2.

Unit Covering. Given a set of n points in R^d, cover the set by balls of unit diameter so that the number of balls is minimized.

Problem 3.

Unit Clustering. Given a set of n points in R^d, partition the set into clusters (subsets) of diameter at most one so that the number of clusters is minimized.

Problems 1 and 2 are easily solved in polynomial time for points on the line, i.e., for d = 1; however, both problems become NP-hard already in the Euclidean plane [21, 26]. Factor-2 approximations are known for k-Center in any metric space (and so for any dimension) [20, 22]; see also [27, Ch. 5] and [28, Ch. 2], while polynomial-time approximation schemes are known for Unit Covering for any fixed dimension [24]. However, these algorithms are notoriously inefficient and thereby impractical; see also [5] for a summary of results and different time vs. ratio trade-offs.

Problems 2 and 3 look similar; indeed, one can go from clusters to balls in a straightforward way (each cluster of diameter at most one is contained in a ball of unit diameter), and conversely, points covered by multiple balls can be assigned to unique balls to form clusters. As such, the two problems are identical in the offline setting.

We next consider their online versions; in this paper we focus on Problems 2 and 3. It is worth emphasizing two properties common to the online setting: (i) a point assigned to a cluster must remain in that cluster; and (ii) two distinct clusters cannot merge into one cluster, i.e., the clusters maintain their identities.

The performance of an online algorithm ALG is measured by comparing it to an optimal offline algorithm OPT using the standard notion of competitive ratio [6, Ch. 1]. The competitive ratio of ALG is defined as sup_σ ALG(σ)/OPT(σ), where σ is an input sequence of request points, OPT(σ) is the cost of an optimal offline algorithm for σ, and ALG(σ) denotes the cost of the solution produced by ALG for this input. For randomized algorithms, ALG(σ) is replaced by the expectation E[ALG(σ)], and the competitive ratio of ALG is sup_σ E[ALG(σ)]/OPT(σ). Whenever there is no danger of confusion, we use ALG to refer to an algorithm or to the cost of its solution, as needed.
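Written out as displayed formulas, the two notions above read (a direct transcription of the standard definition):

```latex
\rho(\mathrm{ALG}) \;=\; \sup_{\sigma} \frac{\mathrm{ALG}(\sigma)}{\mathrm{OPT}(\sigma)},
\qquad\text{and, for randomized algorithms,}\qquad
\rho(\mathrm{ALG}) \;=\; \sup_{\sigma} \frac{\mathbb{E}[\mathrm{ALG}(\sigma)]}{\mathrm{OPT}(\sigma)}.
```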

Charikar et al. [10] have studied the online version of Unit Covering. The points arrive one by one and each point needs to be assigned to a new or to an existing unit ball upon arrival; the L_2 metric is used in R^d, d ∈ N. The location of each new ball is fixed as soon as it is opened. The authors provided a deterministic algorithm of competitive ratio O(2^d d log d) and gave a lower bound of Ω(log d / log log log d) on the competitive ratio of any deterministic online algorithm for this problem.

Chan and Zarrabi-Zadeh [9] introduced the online Unit Clustering problem. While the input and the objective of this problem are identical to those of Unit Covering, Unit Clustering is more flexible in that the algorithm is not required to produce unit balls at any time; rather, the smallest enclosing ball of each cluster should have diameter at most 1, and a ball may change (grow or shift) over time. The L_∞ metric is used in R^d, d ∈ N. The authors showed that several standard approaches for Unit Clustering, namely the deterministic algorithms Centered, Grid, and Greedy, all have competitive ratio at most 2 for points on the line (d = 1). Moreover, the first two algorithms above are applicable to Unit Covering, with a competitive ratio at most 2 for d = 1, as well.

In fact, Chan and Zarrabi-Zadeh [9] showed that no online algorithm (deterministic or randomized) for Unit Covering can have a competitive ratio better than 2 in one dimension (d = 1). They also showed that it is possible to get better results for Unit Clustering than for Unit Covering. Specifically, they developed the first algorithm with competitive ratio below 2 for d = 1, namely a randomized algorithm with competitive ratio 15/8. Moreover, they developed a general method to achieve competitive ratio below 2^d in R^d under the L_∞ metric for any d ≥ 2, by “lifting” the one-dimensional algorithm to higher dimensions. In particular, the existence of an algorithm for Unit Clustering with competitive ratio ρ for d = 1 yields an algorithm with competitive ratio 2^{d-1} ρ for every d ≥ 2 for this problem. The current best competitive ratio for Unit Clustering in R^d is obtained in exactly this way: the current best ratio for d = 1, namely 5/3, is due to Ehmsen and Larsen [17], and this gives a ratio of 2^{d-1} · 5/3 for every d ≥ 2.

A simple deterministic algorithm (Algorithm Grid below) that assigns points to a predefined set of unit cubes that partition R^d can easily be proved to be 2^d-competitive for both Unit Covering and Unit Clustering. Observe that in R^d, each cluster of OPT can be split into at most 2^d grid-cell clusters created by the algorithm; hence the competitive ratio of the algorithm is at most 2^d, and this analysis is tight. See Fig. 1 for an example in the plane.

Algorithm Grid. Build a uniform grid in R^d where cells are half-open unit cubes of the form [a_1, a_1 + 1) × ⋯ × [a_d, a_d + 1), where a_i ∈ Z for i = 1, …, d. For each new point p, if the grid cell containing p is nonempty, put p in the corresponding cluster; otherwise open a new cluster for the grid cell and put p in it.
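Algorithm Grid is short enough to state in code. The following Python sketch (our own illustration; the function and variable names are not from the paper) indexes clusters by the integer corner of the grid cell containing each point:

```python
import math
from collections import defaultdict

def grid_clustering(points):
    """Algorithm Grid: assign each arriving point to the cluster of the
    half-open unit grid cell [a_1, a_1+1) x ... x [a_d, a_d+1) containing it,
    where the a_i are integers. A new cluster is implicitly opened whenever
    a point lands in a previously empty cell."""
    clusters = defaultdict(list)  # cell corner (tuple of ints) -> its points
    for p in points:
        cell = tuple(math.floor(x) for x in p)
        clusters[cell].append(p)
    return clusters
```

Four points of pairwise L_∞ distance 0.2 placed around a grid corner fit in one offline cluster, yet land in 2^d distinct cells; this reproduces the tightness of the 2^d analysis.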

Figure 1: Example for Algorithm Grid in the plane (d = 2).


The ratio 15/8 [9] has been subsequently reduced to 11/6 by the same authors [29]; that algorithm is still randomized. Epstein and van Stee [19] gave the first deterministic algorithm with ratio below 2, namely one with ratio 7/4, further improving on the earlier randomized ratio as well. In the latest development, Ehmsen and Larsen [17] provided a deterministic algorithm with competitive ratio 5/3, which holds the current record in both categories.

From the other direction, the lower bound for deterministic algorithms has evolved from 3/2 in [9] to 8/5 in [19], and then to 13/8 in [25]. Whence the size of the current gap for the competitive ratio of deterministic algorithms for the one-dimensional case of Unit Clustering is quite small, namely 5/3 − 13/8 = 1/24, but remains nonzero. The lower bound for randomized algorithms has evolved from 4/3 in [9] to 3/2 in [19].

For points in the plane (i.e., d = 2), the lower bound for deterministic algorithms has evolved from 3/2 in [9] to 2 in [19], and then to 13/6 in [17]. The lower bound for randomized algorithms has evolved from 4/3 in [9] to 11/6 in [19].

As such, the best lower bounds on the competitive ratio for d ≥ 2 prior to our work are 13/6 for deterministic algorithms [17] and 11/6 for randomized algorithms [19].

Notation and terminology.

Throughout this paper the L_∞ norm is used. Then the Unit Clustering problem is to partition a set of n points in R^d into clusters (subsets), each contained in a unit cube, i.e., a cube of the form [x_1, x_1 + 1] × ⋯ × [x_d, x_d + 1] for some (x_1, …, x_d) ∈ R^d, so as to minimize the number of clusters used.

E[X] denotes the expected value of a random variable X.

Contributions.

We obtain the following results:

(i) The competitive ratio of every online algorithm (deterministic or randomized) for Unit Clustering in R^d under the L_∞ norm is Ω(d) for every d ≥ 2 (Theorem 1 in Section 2). We thereby give a positive answer to a question of Epstein and van Stee; specifically, they asked whether the competitive ratio grows with the dimension d [19, Sec. 4]. The question was reposed in [17, Sec. 7].

(ii) The competitive ratio of every deterministic online algorithm (with an adaptive deterministic adversary) for Unit Covering in R^d under the L_∞ norm is at least 2^d for every d ≥ 1. This bound cannot be improved; as such, Algorithm Grid is optimal in this setting (Theorem 2 in Section 3). This generalizes a result by Chan and Zarrabi-Zadeh [9] from d = 1 to higher dimensions.

(iii) We also give a randomized algorithm with competitive ratio O(d^2) for Unit Covering in Z^d, d ∈ N, under the L_∞ norm (Theorem 3 in Section 4). The algorithm applies to Unit Clustering in Z^d, d ∈ N, with the same competitive ratio.

(iv) The competitive ratio of Algorithm Greedy for Unit Clustering in R^d under the L_∞ norm is unbounded for every d ≥ 2 (Theorem 4 in Section 5). The competitive ratio of Algorithm Greedy for Unit Clustering in Z^d under the L_∞ norm is at least 2^{d-1} and at most 2^{d-1} + 1/2 for every d ≥ 1 (Theorem 5 in Section 5).

Related work.

Several other variants of Unit Clustering have been studied in [18]. A survey of algorithms for Unit Clustering in the context of online algorithms appears in [11]; see also [15] for a recent overview. Clustering with variable-sized clusters has been studied in [12, 13]. Grid-based online algorithms for clustering problems have been developed by the same authors [14].

Unit Covering is a variant of Set Cover. Alon et al. [1] gave a deterministic online algorithm of competitive ratio O(log m log n) for Set Cover, where n is the number of possible points (the size of the ground set) and m is the number of sets in the family. If every element appears in at most Δ sets, the competitive ratio of the algorithm can be improved to O(log Δ log n). Buchbinder and Naor [8] further improved both of these competitive ratios under the same assumptions. For several combinatorial optimization problems (e.g., covering and packing), the classic technique that rounds a fractional linear programming solution to an integer solution has been adapted to the online setting [2, 3, 4, 8, 23].

In these results, the underlying set system for the covering and packing problem must be finite: the online algorithms and their analyses rely on the size of the ground set. For Unit Covering and Unit Clustering over infinite sets, such as R^d or Z^d, these techniques could only be used after a suitable discretization and a covering of the domain with finite sets, and it is unclear whether they can beat the trivial competitive ratio of 2^d in a substantive way.

Recently, Dumitrescu, Ghosh, and Tóth [16] have shown that the competitive ratio of Algorithm Centered for online Unit Covering in R^d, d ∈ N, under the L_2 norm is bounded by the Newton number of the Euclidean ball in the same dimension. (For a convex body C, the Newton number (a.k.a. kissing number) of C is the maximum number of nonoverlapping congruent copies of C that can be arranged around C so that they each touch C [7, Sec. 2.4].) In particular, it follows that this ratio is 2^{O(d)}. From the other direction, the competitive ratio of every deterministic online algorithm (with an adaptive deterministic adversary) for Unit Covering in R^d under the L_2 norm is at least d + 1 for every d ≥ 1.

2 Lower bounds for online Unit Clustering

In this section, we prove the following theorem.

Theorem 1.

The competitive ratio of every (i) deterministic algorithm (with an adaptive deterministic adversary), and (ii) randomized algorithm (with a randomized oblivious adversary), for Unit Clustering in R^d under the L_∞ norm is Ω(d) for every d ≥ 2.

Proof.

Let ρ be the competitive ratio of an online algorithm. We may assume that ρ < d, since otherwise there is nothing to prove. We may also assume that d exceeds a suitable constant, since for small d the claimed bound holds trivially. Let n be a sufficiently large even integer (that depends on d).

Deterministic Algorithm.

We first prove a lower bound for a deterministic algorithm, assuming an adaptive deterministic adversary. We present a total of (k + 1) n^d points to the algorithm, where k = ⌈d/2⌉, and show that it creates Ω(d) · OPT clusters, where OPT is the offline minimum number of clusters for the final set of points. Specifically, we present the points to the algorithm in k + 1 rounds. Round i (for i = 0, 1, …, k) consists of the following three events:

  1. The adversary presents (inserts) a set P_i of points; P_i is determined by a signature vector σ_i ∈ {−1, 0, 1}^d, to be defined below.

  2. The algorithm may create new clusters or expand existing clusters to cover P_i.

  3. If i < k, the adversary computes σ_{i+1} from the clusters that cover P_i.

In the first round, the adversary presents the n^d points of the integer lattice; namely P_0 = [n]^d, where [n] = {1, 2, …, n}. In round i ≥ 1, the point set P_i will depend on the clusters created by the algorithm in previous rounds. We say that a cluster expires in round i if it contains some points from P_i but no additional points can (or will) be added to it in any subsequent round. We show that over the k + 1 rounds, Ω(d) · OPT clusters expire, which readily implies ALG = Ω(d) · OPT.

Optimal solutions.

For i = 0, 1, …, k, denote by OPT_i the offline optimum for the set of points presented up to (and including) round i. Since P_0 = [n]^d and n is even, OPT_0 = (n/2)^d. The optimum solution for P_0 is unique, and each cluster in the optimum is a Cartesian product I_1 × ⋯ × I_d, where I_j = {x_j, x_j + 1} and x_j

is odd for j = 1, …, d

(Fig. 2(a)).

Consider additional near-optimal solutions for P_0 obtained by translating the optimal clusters by a d-dimensional 0–1 vector, and adding new clusters along the boundary of the cube [1, n]^d. We shall argue that the points inserted in round i, 1 ≤ i ≤ k, can be added to some but not all of these solutions. To make this precise, we define these solutions a bit more carefully. First we define an infinite set of hypercubes

Q = { [x_1, x_1 + 1] × ⋯ × [x_d, x_d + 1] : x_j odd for j = 1, …, d }.

For a point set P and a vector v ∈ {0, 1}^d, let the clusters C(P, v) be the subsets of P that lie in translates of the hypercubes in Q by v, that is, let C(P, v) = { P ∩ (Q + v) : Q ∈ Q } \ {∅}.

Since P_0 is an integer grid, the clusters C(P_0, v) contain all points in P_0 for all v ∈ {0, 1}^d. See Fig. 2(a–d) for examples. Due to the boundary effect, the number of clusters in C(P_0, v) is at most (n/2 + 1)^d ≤ (1 + o(1)) · OPT_0,

if n is sufficiently large with respect to d.

In round i, the point set P_i is a perturbation of the integer grid P_0 (as described below). Further, we ensure that the entire point set P_0 ∪ ⋯ ∪ P_k is covered by the clusters C(·, v) for at least one vector v ∈ {0, 1}^d. Consequently, OPT ≤ (n/2 + 1)^d.

At the end, we have OPT ≤ (1 + o(1)) · OPT_0 = (1 + o(1)) (n/2)^d.

Figure 2: (a) A section of the integer grid P_0 and its clusters. (b–d) Near-optimal solutions C(P_0, v) for three translation vectors v. (e–f) A perturbation and the clusters C(P, v) for two vectors v, where the point set is the union of the perturbed points (full dots) and grid points (empty circles). (g–h) A perturbation with another signature and the clusters C(P, v) for the same two vectors v.

Perturbation.

A perturbation of the integer grid P_0 is encoded by a vector σ ∈ {−1, 0, 1}^d, which we call the signature of the perturbation. Let ε = 1/4. For an integer point p = (p_1, …, p_d) ∈ P_0 and a signature σ, the perturbed point p(σ) is defined as follows; see Fig. 2(e–h) for examples in the plane: For j = 1, …, d, let the j-th coordinate of p(σ) be

  • p_j when σ_j = 0;

  • p_j + ε if p_j is odd, and p_j − ε if p_j is even, when σ_j = 1;

  • p_j − ε if p_j is odd, and p_j + ε if p_j is even, when σ_j = −1.

For i = 1, …, k, the point set P_i is a perturbation of P_0 with signature σ_i, for some σ_i ∈ {−1, 0, 1}^d. The signature of P_0 is σ_0 = (0, …, 0) (and so P_0 can be viewed as a null perturbation of itself). At the end of round i, we compute σ_{i+1} from σ_i and from the clusters that cover P_i. The signature σ_i determines the set P_i, for every i = 1, …, k. Note the following relation between the signatures and the clusters C(P, v).

Observation 1.

Consider a point set P with signature σ. The clusters C(P, v) cover P if and only if, for all j = 1, …, d,

  • σ_j = 0, or

  • σ_j = 1 and v_j = 0, or

  • σ_j = −1 and v_j = 1.

It follows from Observation 1 that the entire point set P_0 ∪ ⋯ ∪ P_k is covered by the clusters C(·, v) for at least one vector v ∈ {0, 1}^d.

Adversary strategy.

At the end of round i, we compute σ_{i+1} from σ_i by changing one 0-coordinate to 1 or −1. Note that every point in P_i, 0 ≤ i ≤ k, has i perturbed coordinates and d − i unperturbed coordinates. For all points in P_i, all unperturbed coordinates are integers. The algorithm covers P_i with at most ρ · OPT clusters. Project these clusters to the subspace R^{d−i} spanned by the unperturbed coordinates. We say that a cluster is

  • small if its projection to R^{d−i} contains at most 2^{d−i}/(4ρ) points, and

  • big otherwise.

Note that we distinguish small and big clusters in round i based on how they cover the set P_i (in particular, a small cluster in round i may become big in another round, or vice versa).

Since the L_∞-diameter of a cluster is at most 1, a small cluster contains at most 2^d/(4ρ) points of P_i (by definition, it contains at most 2^{d−i}/(4ρ) points in the projection to R^{d−i}; each of these points is the projection of at most 2^i points of P_i: since P_i is a perturbation of the integer grid, any cluster contains at most 2^i of these preimages). The total number of points in P_i that lie in small clusters is at most

ρ · OPT · 2^d/(4ρ) = 2^d · OPT/4 ≤ (1/4 + o(1)) · n^d.

Consequently, at least (3/4 − o(1)) · n^d of the points in P_i are covered by big clusters, and so the number of big clusters is at least OPT_0/2. For a big cluster C, let u(C) denote the number of unperturbed coordinates in which its extent is 1. Then the number of points of P_i in C satisfies |C ∩ P_i| ≤ 2^{u(C)+i}; in particular, since C is big, we have u(C) > d − i − log_2(4ρ).

We say that a big cluster C expires if no point can (or will) be added to C in the future. Consider the following experiment: choose one of the d − i zero coordinates of the signature σ_i uniformly at random (i.e., all choices are equally likely), and change it to 1 or −1

with equal probability

. If the j-th extent of a cluster C is 1, then C cannot be expanded in dimension j, and for one of the two choices of sign the perturbation of the j-th coordinate moves all future points out of C. Consequently, a big cluster expires with probability at least

u(C)/(2(d − i)) ≥ (d − i − log_2(4ρ))/(2(d − i)) ≥ 1/4,   (1)

as d − i ≥ d/2, and we assume that ρ < d and that d is sufficiently large. It follows that there exists an unperturbed coordinate j, and a perturbation of the j-th coordinate, such that at least one quarter of the

big clusters, that is, at least OPT_0/8 clusters, expire at the end of round i. The adversary makes this choice and the corresponding perturbation. In round k, all clusters that cover any point in P_k expire, because no point will be added to any of these clusters. Since P_k is a perturbation of P_0, at least OPT_0 clusters expire in the last round, as well.

If a cluster expires in round i, then it contains some points of P_i but does not contain any point of P_j for j > i. Consequently, each cluster expires in at most one round, and the total number of expired clusters over all rounds is at least k · OPT_0/8 = Ω(d) · OPT. Since each of these clusters was created by the algorithm in one of the rounds, we have ALG ≥ Ω(d) · OPT, which implies ρ = Ω(d), as claimed.

Randomized Algorithm.

We modify the above argument to establish a lower bound of Ω(d) for a randomized algorithm with an oblivious randomized adversary. The adversary starts with the integer grid P_0, with signature σ_0 = (0, …, 0), as before. At the end of round i, it chooses an unperturbed coordinate of σ_i uniformly at random, and switches it to 1 or −1 with equal probability (independently of the clusters created by the algorithm) to obtain σ_{i+1}. By (1), the expected number of big clusters that expire in round i, 0 ≤ i < k, is at least OPT_0/8; and all clusters that cover P_k expire in round k. Consequently, the expected number of clusters created by the algorithm is Ω(d) · OPT, which implies ρ = Ω(d), as required. ∎

3 Lower bounds for online Unit Covering

The following theorem extends a result from [9] from d = 1 to higher dimensions.

Theorem 2.

The competitive ratio of every deterministic online algorithm (with an adaptive deterministic adversary) for Unit Covering in R^d under the L_∞ norm is at least 2^d for every d ≥ 1.

The lower bound in Theorem 2 cannot be improved; as such, Algorithm Grid is optimal in this setting.

Proof.

Consider a deterministic online algorithm ALG. We present an input instance σ for ALG for which its solution is at least 2^d times OPT(σ). In particular, σ consists of 2^d points in R^d that fit in a unit cube, hence OPT(σ) = 1, and we show that ALG is required to place a new unit cube for each point in σ. Our proof works like a two-player game, played by Alice and Bob. Here, Alice presents points to Bob, one at a time. If a new cube is required, Bob (who plays the role of the algorithm) decides where to place it. Alice tries to force Bob to place as many new cubes as possible by presenting the points in a smart way. Bob tries to place new cubes in a way such that they may cover other points presented by Alice in the future, thereby reducing the need to place new cubes too often.

Throughout the game, Alice maintains a nested sequence of axis-aligned cubes K_1 ⊂ K_2 ⊂ ⋯, where K_i has side-length λ_i, and Bob places (uses) axis-aligned unit cubes B_1, B_2, … to cover the points presented by Alice. In step i, 2 ≤ i ≤ 2^d, Alice obtains K_i from K_{i−1}, where K_{i−1} ⊂ K_i. Alice then presents an arbitrary uncovered vertex of K_i as the next point p_i, and Bob covers it by placing the unit cube B_i.

For i ≥ 1, let λ_i = 1 − 4^{−i} and δ_i = (2/3) · 4^{−i} (so λ_1 = 3/4, λ_2 = 15/16, λ_3 = 63/64, etc.). Note that λ_i, i ≥ 1, is a strictly increasing sequence converging to 1. Let K_1 be a cube of side-length λ_1; and let the first point p_1 be an arbitrary vertex of K_1. Next, Bob places B_1 to cover p_1. In general, K_i is a cube of side-length λ_i, for i ≥ 2 (its construction is explained below).

The remaining points p_2, …, p_{2^d} in σ are chosen adaptively, depending on Bob’s moves. For i ≥ 1, after step i Alice has presented the points p_1, …, p_i, and Bob has placed the unit cubes B_1, …, B_i (one for each of these points). An illustration of the planar version of the game appears in Fig. 3.

Figure 3: A lower bound of 2^d on the competitive ratio. The figure illustrates the case d = 2. Left: the first two points of σ arrive. Right: the last two points of σ arrive. The cubes placed by Bob and the deeply covered vertices are colored red.

We maintain the following two invariants:

  1. for i ≥ 1, all points p_1, …, p_i are included in the cube K_i.

  2. for i ≥ 1, before p_i is chosen in step i, there are at least 2^d − i + 1 uncovered vertices of K_i.

A vertex v of K_i is said to be deeply covered by placing B_i in step i if v is covered by B_i and its distance from the boundary of B_i, ∂B_i, is larger than δ_i.

Note that before B_1 is placed in step 1, all 2^d vertices of K_1 are uncovered.

Lemma 1.

Consider step i, where 1 ≤ i ≤ 2^d. Consider the set of uncovered vertices of K_i when p_i is chosen in step i of the process. At most one uncovered vertex of K_i can be deeply covered by placing B_i in step i.

Proof.

Assume that u ≠ v are two uncovered vertices of K_i that are deeply covered by placing B_i in step i. Since u and v differ in at least one coordinate, the extent of B_i in that coordinate is larger than

λ_i + 2δ_i = 1 + (1/3) · 4^{−i} > 1,

which is a contradiction. ∎

If no uncovered vertex of K_i is deeply covered by placing B_i in step i, let K_{i+1} be the unique axis-aligned cube that contains K_i, has p_i as a vertex, and whose side length is λ_{i+1}. Otherwise, let v be the unique uncovered vertex of K_i that is deeply covered by placing B_i in step i; and let K_{i+1} be the unique axis-aligned cube that contains K_i, has v as a vertex, and whose side length is λ_{i+1}. (Note that v = p_i is possible.)

Lemma 2.

Consider step i, where 1 ≤ i < 2^d. Let u be any vertex of K_i that is uncovered when p_i is chosen, that is not deeply covered by B_i in step i, and that is not the common vertex of K_i and K_{i+1}. Let u′ be the corresponding vertex of K_{i+1}, where u′ ≠ u. Then u′ is uncovered before p_{i+1} is chosen in step i + 1 (and is among the uncovered vertices counted there).

Proof.

Since u is not deeply covered by B_i after step i, its distance to the boundary of B_i is at most δ_i (if it is covered by B_i at all). By construction, any two parallel faces of K_i and K_{i+1} that are incident to u and u′, respectively, are at distance

λ_{i+1} − λ_i = (3/4) · 4^{−i} > δ_i

from each other. This implies that u′ is uncovered by B_i after step i. Since u was uncovered before step i (by the previous cubes B_1, …, B_{i−1}) and all previous cubes intersect K_i, it follows that u′ was uncovered before step i (by the previous cubes B_1, …, B_{i−1}). As such, u′ is uncovered (by B_1, …, B_i) after step i—and thus before p_{i+1} is chosen in step i + 1. ∎

Invariant I follows inductively, by construction. Invariant II follows inductively from Lemma 1 and Lemma 2. By Invariant I, all 2^d points of σ fit in a cube of side-length less than 1, so we immediately have OPT(σ) = 1. By Invariant II, in each step the algorithm is required to place a new cube for the current point of σ, hence ALG(σ) ≥ 2^d. This completes the proof of Theorem 2. ∎

4 Online algorithm for Unit Covering over

Note that the lower bound constructions above used sequences of integer points (i.e., points in Z^d). We substantially improve on the trivial upper bound of 2^d on the competitive ratio of Unit Covering over Z^d (and on the upper bound of the greedy algorithm, see Section 5).

The online algorithm by Buchbinder and Naor [8] for Set Cover, applied to the unit covering problem over Z^d, yields an algorithm with polylogarithmic competitive ratio under the assumption that a finite set of possible integer points is given in advance. Recently, Gupta and Nagarajan [23] gave an online randomized algorithm for a broad family of combinatorial optimization problems that can be expressed as sparse integer programs. For unit covering over the integers in R^d, their results also yield a competitive ratio polynomial in d; this ratio does not depend on the number of points presented, but the algorithm must know the ground set in advance.

We now remove this dependence on advance knowledge, so as to get a truly online algorithm for Unit Covering over Z^d. Consider the following randomized algorithm.

Algorithm Iterative Reweighing. Let P ⊂ Z^d be the set of points presented to the algorithm and F the set of cubes chosen by the algorithm; initially F = ∅. The algorithm chooses cubes for two different reasons, and it keeps them in sets F_1 and F_2, where F = F_1 ∪ F_2. It also maintains a third set of cubes, R, for bookkeeping purposes; initially R = ∅. In addition, the algorithm maintains a weight function w on all integer unit cubes. Initially w(C) = 2^{−2d} for all integer unit cubes (this is the default value for all cubes that are disjoint from P).
We describe one iteration of the algorithm. Let p be a new point; put P ← P ∪ {p}. Let C_p be the set of 2^d integer unit cubes that contain p, and let w(C_p) denote the total weight of the cubes in C_p.

  1. If some cube in F = F_1 ∪ F_2 contains p, then do nothing.

  2. Else if C_p ∩ R ≠ ∅, then let C be an arbitrary cube in C_p ∩ R and put F_1 ← F_1 ∪ {C}.

  3. Else if w(C_p) ≥ 1, then let C be an arbitrary cube in C_p and put F_2 ← F_2 ∪ {C}.

  4. Else, the weights, normalized by w(C_p), give a probability distribution on

    C_p. Successively choose cubes from C_p at random with this distribution in k = 2d independent trials and add them to R. Let C ∈ C_p be an arbitrary cube and put F_1 ← F_1 ∪ {C}. Double the weight of every cube in C_p.

Theorem 3.

The competitive ratio of Algorithm Iterative Reweighing for Unit Covering in Z^d under the L_∞ norm is O(d^2) for every d ∈ N.

Proof.

Suppose that a set P of points is presented to the algorithm sequentially, and that the algorithm created the unit cubes in F = F_1 ∪ F_2. Note that ALG = |F_1| + |F_2|. We show that |F_1| = O(d^2) · OPT and E[|F_2|] = O(OPT). This immediately implies that E[ALG] = O(d^2) · OPT.

First consider F_1. New cubes are added to F_1 in steps 2 and 4. Let F* be an offline optimum set of unit cubes. Each point p ∈ P lies in some cube C* ∈ F*. The weight of C* is initially 2^{−2d}, and it never exceeds 2; indeed, weights are doubled only in step 4, where w(C_p) < 1, so the weight of C* before its last doubling must have been less than 1; thus its weight is doubled in at most 2d + 2 iterations. Since every invocation of step 4 doubles the weight of some cube of F* (a cube containing the current point), the algorithm invokes step 4 in at most (2d + 2) · OPT iterations. In each such iteration, it adds at most k = 2d cubes to R and one cube to F_1, and each invocation of step 2 moves one cube of R into F_1. Overall, we have |F_1| ≤ (2d + 2)(2d + 1) · OPT = O(d^2) · OPT, as required.

Next consider F_2. A new cube is added to F_2 in step 3. In this case, none of the cubes in C_p is in F or R when point p is presented, and the algorithm increments |F_2| by one. At the beginning of the algorithm, we have w(C_p) = 2^d · 2^{−2d} = 2^{−d}. Assume that the weights of the cubes in C_p were increased in t iterations, starting from the beginning of the algorithm, and that in the j-th of these iterations the sum of the weights of the cubes in C_p increased by x_j (the weights of several cubes may have been doubled in an iteration). Since weights only double, w(C_p) ≥ 1 implies Σ_j x_j ≥ 1 − 2^{−d} ≥ 1/2. For every j, the sum of the weights of some cubes in C_p increased by x_j in step 4 of a previous iteration. Since the weights doubled, the sum of the weights of these cubes was x_j at the beginning of that iteration, and the algorithm added one of them into R with probability at least x_j in one random draw, which was repeated k times independently. Consequently, the probability that the algorithm did not add any cube from C_p to R in that iteration is at most (1 − x_j)^k. The probability that none of the cubes in C_p has been added to R before point p arrives is (by independence) at most

∏_j (1 − x_j)^k ≤ exp(−k Σ_j x_j) ≤ e^{−d}.

The expected number of points for which step 3 applies is thus at most |P| · e^{−d}. Since each unit cube contains at most 2^d integer points, we have |P| ≤ 2^d · OPT. Therefore E[|F_2|] ≤ 2^d e^{−d} · OPT ≤ OPT, as claimed. ∎

The above algorithm applies to Unit Clustering of integer points in Z^d with the same competitive ratio:

Corollary 1.

The competitive ratio of Algorithm Iterative Reweighing for Unit Clustering in Z^d under the L_∞ norm is O(d^2) for every d ∈ N.

5 Lower bound for Algorithm Greedy for Unit Clustering

Chan and Zarrabi-Zadeh [9] showed that the greedy algorithm for Unit Clustering on the line (d = 1) has competitive ratio 2 (this includes both an upper bound on the ratio and a tight example). Here we show that in higher dimensions the competitive ratio of the greedy algorithm is unbounded. We first recall the algorithm:

Algorithm Greedy. For each new point p, if p fits in some existing cluster, put p in such a cluster (breaking ties arbitrarily); otherwise open a new cluster for p.

Theorem 4.

The competitive ratio of Algorithm Greedy for Unit Clustering in R^d under the L_∞ norm is unbounded for every d ≥ 2.

Proof.

It suffices to consider d = 2; the construction extends to arbitrary dimensions d ≥ 2. Let ε = 1/n. The adversary presents n pairs of points a_i = (iε, −iε) and b_i = a_i + (1, 1), for i = 1, …, n. Each pair of points spans a unit square that does not contain any subsequent point. Consequently, the greedy algorithm will create n clusters, one for each point pair. However, OPT = 2, since the clusters {a_1, …, a_n} and {b_1, …, b_n} are contained in the unit squares [0, 1] × [−1, 0] and [1, 2] × [0, 1], respectively. ∎
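The construction in the proof can be checked mechanically. The sketch below is our own instantiation (the coordinates are one valid choice of diagonal pairs, not necessarily the paper's figures), run against a first-fit version of Algorithm Greedy:

```python
def greedy_unit_clustering(points):
    """First-fit version of Algorithm Greedy under the L_inf norm: put each
    point into the first existing cluster whose L_inf diameter stays at most
    1; otherwise open a new cluster. (The algorithm allows arbitrary
    tie-breaking; first-fit is one concrete choice.)"""
    def fits(cluster, p):
        return all(
            max(max(q[i] for q in cluster), p[i]) -
            min(min(q[i] for q in cluster), p[i]) <= 1
            for i in range(len(p))
        )
    clusters = []
    for p in points:
        for cluster in clusters:
            if fits(cluster, p):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Adversarial instance in the plane: n diagonal pairs a_i = (i*eps, -i*eps)
# and b_i = a_i + (1, 1). Each pair spans a unit square that contains no
# later point, so Greedy opens one cluster per pair.
n = 8
eps = 1.0 / n  # a power of two, so the comparisons above are exact
pairs = []
for i in range(1, n + 1):
    pairs += [(i * eps, -i * eps), (i * eps + 1, -i * eps + 1)]
```

Greedy ends with n clusters on this input, while the two point sets {a_i} and {b_i} each fit in a single unit square, so the ratio n/2 grows without bound.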

When we restrict Algorithm Greedy to integer points, its competitive ratio is exponential in d.

Theorem 5.

The competitive ratio of Algorithm Greedy for Unit Clustering in Z^d under the L_∞ norm is at least 2^{d-1} and at most 2^{d-1} + 1/2 for every d ≥ 1.

Proof.

We first prove the lower bound. Consider an integer input sequence implementing a barycentric subdivision of the space, as illustrated in Fig. 4. Let n be a sufficiently large positive integer (that depends on d). We present a point set P of integer points to the algorithm, in pairs, and show that the algorithm creates one cluster for each pair.

Figure 4: A planar instance for the greedy algorithm; the edges in E are drawn in red.

Let P = A ∪ B, where A is a section of a suitably scaled integer lattice and B is the set of barycenters of its cells, the scaling chosen so that all points of P are integer points. Note that each element of B is the barycenter (center of mass) of 2^d elements of A, namely the vertices of the cell of A containing the element. Here E is a set of pairs of lattice points (edges) that can be put in one-to-one correspondence with the points in B. As such, we have

OPT = (1 + o(1)) · |P|/2^d.

It follows that |E| = |P|/2 ≥ (1 − o(1)) · 2^{d-1} · OPT. The input sequence presents the points in pairs, namely those in E. The greedy algorithm makes one new non-extendable cluster for each such “diagonal” pair (each cluster is a unit cube), so its competitive ratio is at least (1 − o(1)) · 2^{d-1} for every d ≥ 1.

An upper bound of 2^d follows from the fact that each cluster in OPT contains at most 2^d integer points; we further reduce this bound. Let Q_1, …, Q_m be the clusters of an optimal partition (m = OPT). Assume that the algorithm produces k clusters of size at least 2 and s singleton clusters; then ALG = k + s. Since each cluster of OPT contains at most one singleton cluster created by the algorithm, we have s ≤ m; and since each cluster of OPT contains at most 2^d points, we have k ≤ (2^d m − s)/2. Consequently,

ALG = k + s ≤ (2^d m − s)/2 + s = 2^{d-1} m + s/2 ≤ (2^{d-1} + 1/2) · OPT,

as required. ∎

6 Conclusion

Our results suggest several directions for future study. We summarize a few specific questions of interest. Presently there is no online algorithm for Unit Clustering in R^d under the L_∞ norm with a competitive ratio o(2^d). The best one known under this norm (for large d) has ratio 2^{d-1} · 5/3 for every d ≥ 2, which is only marginally better than the trivial ratio 2^d.

Question 1.

Is there an upper bound of o(2^d) on the competitive ratio for Unit Clustering in R^d under the L_∞ norm?

Question 2.

Is there a lower bound on the competitive ratio for Unit Clustering that is exponential in d? Is there a superlinear lower bound?

Question 3.

Do our lower bounds for Unit Clustering in R^d and Z^d under the L_∞ norm carry over to the L_2 norm (or the L_p norm for other values of p)?

For online Unit Covering in R^d under the L_∞ norm, the competitive ratio of the deterministic Algorithm Grid is 2^d, which is the best possible. One remaining issue is in regard to randomized algorithms and oblivious adversaries. (An oblivious adversary must construct the entire input sequence in advance, without having access to the actions of the algorithm.)

Question 4.

Is there an upper bound of o(2^d) on the competitive ratio of randomized algorithms for Unit Covering in R^d under the L_∞ norm?

Question 5.

Is there a superlinear (in d) lower bound on the competitive ratio of randomized algorithms (against oblivious adversaries) for Unit Covering in R^d under the L_∞ norm?

Refer to [15] for a collection of further related problems.

References

  • [1] Noga Alon, Baruch Awerbuch, Yossi Azar, Niv Buchbinder, and Joseph Naor, The online set cover problem, SIAM J. Comput. 39(2) (2009), 361–370.
  • [2] Yossi Azar, Niv Buchbinder, T.-H. Hubert Chan, Shahar Chen, Ilan Reuven Cohen, Anupam Gupta, Zhiyi Huang, Ning Kang, Viswanath Nagarajan, Joseph Naor, and Debmalya Panigrahi, Online algorithms for covering and packing problems with convex objectives, in Proc. 57th IEEE Symposium on Foundations of Computer Science (FOCS), IEEE, 2016, pp. 148–157.
  • [3] Yossi Azar, Umang Bhaskar, Lisa Fleischer, and Debmalya Panigrahi, Online mixed packing and covering, in Proc. 24th ACM-SIAM Symposium on Discrete Algorithms (SODA), SIAM, 2013, pp. 85–100.
  • [4] Yossi Azar, Ilan Reuven Cohen, and Alan Roytman, Online lower bounds via duality, in Proc. 28th ACM-SIAM Symposium on Discrete Algorithms (SODA), SIAM, 2017, pp. 1038–1050.
  • [5] Ahmad Biniaz, Peter Liu, Anil Maheshwari, and Michiel Smid, Approximation algorithms for the unit disk cover problem in 2D and 3D, Comput. Geom. 60 (2017), 8–18.
  • [6] Allan Borodin and Ran El-Yaniv, Online Computation and Competitive Analysis, Cambridge University Press, Cambridge, 1998.
  • [7] Peter Brass, William Moser, and János Pach, Research Problems in Discrete Geometry, Springer, New York, 2005.
  • [8] Niv Buchbinder and Joseph Naor, Online primal-dual algorithms for covering and packing, Math. Oper. Res. 34(2) (2009), 270–286.
  • [9] Timothy M. Chan and Hamid Zarrabi-Zadeh, A randomized algorithm for online unit clustering, Theory Comput. Syst. 45(3) (2009), 486–496.
  • [10] Moses Charikar, Chandra Chekuri, Tomás Feder, and Rajeev Motwani, Incremental clustering and dynamic information retrieval, SIAM J. Comput. 33(6) (2004), 1417–1440.
  • [11] Marek Chrobak, SIGACT news online algorithms column 13, SIGACT News Bulletin 39(3) (2008), 96–121.
  • [12] János Csirik, Leah Epstein, Csanád Imreh, and Asaf Levin, Online clustering with variable sized clusters, Algorithmica 65(2) (2013), 251–274.
  • [13] Gabriella Divéki and Csanád Imreh, An online 2-dimensional clustering problem with variable sized clusters, Optimization and Engineering 14(4) (2013), 575–593.
  • [14] Gabriella Divéki and Csanád Imreh, Grid based online algorithms for clustering problems, in Proc. 15th IEEE Int. Sympos. Comput. Intel. Infor. (CINTI), IEEE, 2014, pp. 159.
  • [15] Adrian Dumitrescu, Computational geometry column 68, SIGACT News 49(4), (2018), 46–54.
  • [16] Adrian Dumitrescu, Anirban Ghosh, and Csaba D. Tóth, Online unit covering in Euclidean space, Proc. 12th Conference on Combinatorial Optimization and Applications (COCOA), LNCS 11346, Springer, Cham, 2018, pp. 609–623.
  • [17] Martin R. Ehmsen and Kim S. Larsen, Better bounds on online unit clustering, Theoret. Comput. Sci. 500 (2013), 1–24.
  • [18] Leah Epstein, Asaf Levin, and Rob van Stee, Online unit clustering: Variations on a theme, Theoret. Comput. Sci. 407(1-3) (2008), 85–96.
  • [19] Leah Epstein and Rob van Stee, On the online unit clustering problem, ACM Trans. Algorithms 7(1) (2010), 1–18.
  • [20] Tomás Feder and Daniel H. Greene, Optimal algorithms for approximate clustering, in Proc. 20th ACM Symposium on Theory of Computing (STOC), 1988, pp. 434–444.
  • [21] Robert J. Fowler, Mike Paterson, and Steven L. Tanimoto, Optimal packing and covering in the plane are NP-complete, Inform. Process. Lett. 12(3) (1981), 133–137.
  • [22] Teofilo F. Gonzalez, Clustering to minimize the maximum intercluster distance, Theoret. Comput. Sci. 38 (1985), 293–306.
  • [23] Anupam Gupta and Viswanath Nagarajan, Approximating sparse covering integer programs online, Math. Oper. Res. 39(4) (2014), 998–1011.
  • [24] Dorit S. Hochbaum and Wolfgang Maass, Approximation schemes for covering and packing problems in image processing and VLSI, J. ACM 32(1) (1985), 130–136.
  • [25] Jun Kawahara and Koji M. Kobayashi, An improved lower bound for one-dimensional online unit clustering, Theoret. Comput. Sci. 600 (2015), 171–173.
  • [26] Nimrod Megiddo and Kenneth J. Supowit, On the complexity of some common geometric location problems, SIAM J. Comput. 13(1) (1984), 182–196.
  • [27] Vijay Vazirani, Approximation Algorithms, Springer Verlag, New York, 2001.
  • [28] David P. Williamson and David B. Shmoys, The Design of Approximation Algorithms, Cambridge University Press, 2011.
  • [29] Hamid Zarrabi-Zadeh and Timothy M. Chan, An improved algorithm for online unit clustering, Algorithmica 54(4) (2009), 490–500.