1 Introduction
Clustering is a challenging task, particularly due to two impediments. The first is that clustering, in the absence of domain knowledge, is usually an underspecified task; the solution of choice may vary significantly between different intended applications. The second is that performing clustering under many natural models is computationally hard.
Consider the task of dividing the users of an online shopping service into different groups. The result of this clustering can then be used, for example, to suggest similar products to users in the same group, or to organize data so that monthly purchase reports are easier to read and analyze. These different applications may impose conflicting requirements on the solution. In such cases, one needs to exploit domain knowledge to better define the clustering problem.
Aside from trial and error, a principled way of extracting domain knowledge is to perform clustering using a form of ‘weak’ supervision. For example, Balcan and Blum [BB08] propose an interactive framework with ‘split/merge’ queries for clustering. In another work, Ashtiani and Ben-David [ABD15] require the domain expert to provide the clustering of a ‘small’ subset of the data.
At the same time, mitigating the computational hardness of clustering is critical. Solving most of the common optimization formulations of clustering is NP-hard (in particular, solving the popular k-means and k-median clustering problems). One approach to addressing this issue is to exploit the fact that natural data sets usually exhibit nice properties and are likely to avoid worst-case scenarios. In such cases, an optimal clustering may be found efficiently. The quest for notions of niceness that are likely to occur in real data and that allow efficient clustering is still ongoing (see [Ben15] for a critical survey of work in that direction).
In this work, we take a new approach to alleviating the computational problem of clustering. In particular, we ask the following question: can weak supervision (in the form of answers to natural queries) help relax the computational burden of clustering? This would complement the other benefit of supervision: making the clustering problem better defined by incorporating domain knowledge through the supervised feedback.
The general setting considered in this work is the following. Let X be a set of elements that should be clustered and d a dissimilarity function over it. The oracle (e.g., a domain expert) has some target clustering C* in mind. The clustering algorithm has access to X and d, and can also make queries about C*. The queries are in the form of same-cluster queries: namely, the algorithm can ask whether two elements belong to the same cluster or not. The goal of the algorithm is to find a clustering that meets some predefined clusterability conditions and is consistent with the answers given to its queries.
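To make the query model concrete, here is a minimal sketch (in Python; all identifiers are ours, not from the paper) of a same-cluster oracle that answers according to a fixed target clustering:

```python
# Minimal sketch of a same-cluster oracle (illustrative; names are ours).
# The oracle holds the target clustering C* as a label per element and
# answers only binary same-cluster queries.

class SameClusterOracle:
    def __init__(self, labels):
        self.labels = labels          # labels[i] = cluster of element i in C*
        self.queries_used = 0         # bookkeeping for query complexity

    def same_cluster(self, i, j):
        """Answer the same-cluster query Q(x_i, x_j)."""
        self.queries_used += 1
        return self.labels[i] == self.labels[j]

oracle = SameClusterOracle([0, 0, 1, 1, 2])
```

Note that the algorithm never sees `labels` directly; it only observes answers, which is what makes the number of queries the right measure of supervision.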
We will also consider the case where the oracle conforms to some optimal k-means solution. We then show that access to a ‘reasonable’ number of same-cluster queries enables us to provide an efficient algorithm for an otherwise NP-hard problem.
1.1 Contributions
The two main contributions of this paper are the introduction of the semi-supervised active clustering (SSAC) framework and the rather unusual demonstration that access to simple query answers can turn an otherwise NP-hard clustering problem into a feasible one.
Before we explain those results, let us also mention a notion of clusterability (or ‘input niceness’) that we introduce. We define a novel notion of niceness of data, called the γ-margin property, which is related to the previously introduced notion of α-center proximity [ABS12]. The larger the value of γ, the stronger the assumption becomes, which means that clustering becomes easier. With respect to that parameter, we get a sharp ‘phase transition’ between k-means being NP-hard and being optimally solvable in polynomial time (footnote: the exact value of such a threshold depends on some finer details of the clustering task: whether d is required to be Euclidean and whether the cluster centers must be members of X).

We focus on the effect of using queries on the computational complexity of clustering. We provide a probabilistic polynomial-time (BPP) algorithm for clustering with queries that succeeds under the assumption that the input satisfies the γ-margin condition for γ > 1. This algorithm makes O(k² log k + k log n) same-cluster queries to the oracle and runs in O(kn log n) time, where k is the number of clusters and n is the size of the instance set.
On the other hand, we show that without access to query answers, k-means clustering is NP-hard even when the solution satisfies the γ-margin property for γ = √3.4 ≈ 1.84 and k = Θ(n^ε) (for any ε ∈ (0, 1)). We further show that Ω(log k + log n) queries are needed to overcome the NP-hardness in that case. These results, put together, exhibit an interesting phenomenon. Assume that the oracle conforms to an optimal solution of k-means clustering and that it satisfies the γ-margin property for some 1 < γ ≤ √3.4. In this case, our lower bound means that without making queries k-means clustering is NP-hard, while the positive result shows that with a reasonable number of queries the problem becomes efficiently solvable.
This indicates an interesting (and as far as we are aware, novel) tradeoff between query complexity and computational complexity in the clustering domain.
1.2 Related Work
This work combines two themes in clustering research: clustering with partial supervision (in particular, supervision in the form of answers to queries) and the computational complexity of clustering tasks.
Supervision in clustering (sometimes also referred to as ‘semi-supervised clustering’) has been addressed before, mostly in application-oriented works [BBM02, BBM04, KBDM09]. The most common method to convey such supervision is through a set of pairwise link/do-not-link constraints on the instances. Note that in contrast to the supervision we address here, the supervision in the setting of the papers cited above is non-interactive. On the theory side, Balcan and Blum [BB08] propose a framework for interactive clustering with the help of a user (i.e., an oracle). The queries considered in that framework are different from ours. In particular, the oracle is provided with the current clustering and tells the algorithm to either split a cluster or merge two clusters. Note that in that setting, the oracle should be able to evaluate the whole given clustering for each query.
Another example of the use of supervision in clustering was provided by Ashtiani and Ben-David [ABD15]. They assumed that the target clustering can be approximated by first mapping the data points into a new space and then performing k-means clustering. The supervision is in the form of a clustering of a small subset of data (a subset chosen by the learning algorithm) and is used to search for such a mapping.
Our proposed setup combines the user-friendliness of link/do-not-link queries (as opposed to asking the domain expert to answer queries about the clustering of the whole data set, or to cluster subsets of the data) with the advantages of interactivity.
The computational complexity of clustering has been extensively studied. Many of these results are negative, showing that clustering is computationally hard. For example, k-means clustering is NP-hard even for k = 2 [Das08], or in the 2-dimensional plane [Vat09, MNV09]. In order to tackle the problem of computational complexity, some notions of niceness of data under which clustering becomes easy have been considered (see [Ben15] for a survey).
The closest proposal to this work is the notion of α-center proximity introduced by Awasthi et al. [ABS12]. We discuss the relationship of that notion to our notion of margin in Appendix B. In the restricted scenario (i.e., when the centers of clusters are selected from the data set), their algorithm efficiently recovers the target clustering (outputs a tree such that the target is a pruning of the tree) for α = 3. Balcan and Liang [BL12] improve the assumption to α = √2 + 1. Ben-David and Reyzin [BDR14] show that this problem is NP-hard for α = 2.
Variants of these proofs for our margin condition yield the feasibility of k-means clustering when the input satisfies the condition with γ ≥ 2 and NP-hardness when γ < 2, both in the case of arbitrary (not necessarily Euclidean) metrics (footnote: in particular, the hardness result of [BDR14] relies on the ability to construct non-Euclidean distance functions. Later in this paper, we prove hardness for γ ≤ √3.4 for Euclidean instances).
2 Problem Formulation
2.1 Centerbased clustering
The framework of clustering with queries can be applied to any type of clustering. However, in this work, we focus on a certain family of common clusterings: center-based clustering in Euclidean spaces (footnote: in fact, our results are all independent of the Euclidean dimension and apply to any Hilbert space).
Let X be a subset of some Euclidean space R^d. Let C = {C_1, ..., C_k} be a clustering (i.e., a partitioning) of X. We say x_1 ~_C x_2 if x_1 and x_2 belong to the same cluster according to C. We further denote by n the number of instances (n = |X|) and by k the number of clusters.
We say that a clustering C is center-based if there exists a set of centers μ = {μ_1, ..., μ_k} such that the clustering corresponds to the Voronoi diagram over those center points. Namely, for every x in X and i ≤ k, we have x ∈ C_i if and only if i = argmin_j d(x, μ_j).
Finally, we assume that the centers μ_i corresponding to C are the centers of mass of the corresponding clusters. In other words, μ_i = (1/|C_i|) Σ_{x ∈ C_i} x. Note that this is the case, for example, when the oracle's clustering is the optimal solution to the Euclidean k-means clustering problem.
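As a sanity check of these definitions, the following sketch (our own illustration, not from the paper) verifies that a partition is center-based with respect to the centers of mass of its own clusters:

```python
import numpy as np

def is_center_based(X, labels, k):
    """Check that the partition induced by `labels` is the Voronoi
    partition of the centers of mass of its own clusters (sketch)."""
    centers = np.array([X[labels == i].mean(axis=0) for i in range(k)])
    # pairwise distances from every point to every center
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    # each point must be (weakly) closest to the center of its own cluster
    return bool(np.all(dists[np.arange(len(X)), labels]
                       <= dists.min(axis=1) + 1e-12))

X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0], [11.0, 0.0]])
good = np.array([0, 0, 1, 1])   # respects the Voronoi structure
bad = np.array([0, 1, 0, 1])    # mixes the two groups
```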
2.2 The γ-margin property
Next, we introduce a notion of clusterability of a data set, also referred to as ‘data niceness property’.
Definition 1 (γ-margin).
Let X be a set of points in a metric space (M, d). Let C = {C_1, ..., C_k} be a center-based clustering of X induced by centers μ_1, ..., μ_k ∈ M. We say that C satisfies the γ-margin property if the following holds: for all i ∈ [k] and every x ∈ C_i and y ∈ X ∖ C_i,

γ d(x, μ_i) < d(y, μ_i).
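In code, the property can be checked directly from the definition; the sketch below (our own, assuming centers of mass as centers) returns whether a labeled data set satisfies the γ-margin property:

```python
import numpy as np

def satisfies_gamma_margin(X, labels, gamma):
    """Check the gamma-margin property: for each cluster C_i with center
    mu_i, every in-cluster point x and out-of-cluster point y must obey
    gamma * d(x, mu_i) < d(y, mu_i).  Sketch using centers of mass."""
    for i in np.unique(labels):
        mu = X[labels == i].mean(axis=0)
        d_in = np.linalg.norm(X[labels == i] - mu, axis=1)
        d_out = np.linalg.norm(X[labels != i] - mu, axis=1)
        if d_out.size and not gamma * d_in.max() < d_out.min():
            return False
    return True

X = np.array([[0.0], [1.0], [10.0], [11.0]])
labels = np.array([0, 0, 1, 1])
```

On this toy instance the margin is large (about 19), so the check passes for small γ and fails for very large γ.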
2.3 The algorithmic setup
For a clustering C*, a C*-oracle is a function that answers queries according to that clustering. One can think of such an oracle as a user that has some idea about its desired clustering, enough to answer the algorithm's queries. The clustering algorithm then tries to recover C* by querying a C*-oracle. The following notion of query is arguably the most intuitive.
Definition 2 (Same-cluster Query).
A same-cluster query asks whether two instances x_1 and x_2 belong to the same cluster, i.e.,

Q(x_1, x_2) = true if x_1 ~_{C*} x_2, and false otherwise

(we omit the subscript C* when it is clear from the context).
Definition 3 (Query Complexity).
An SSAC instance is determined by the tuple (X, d, C*). We will consider families of such instances determined by niceness conditions on their oracle clusterings C*.

An SSAC algorithm A is called a q-solver for a family G of such instances if, for every instance (X, d, C*) ∈ G, it can recover C* by having access to (X, d) and making at most q queries to a C*-oracle.

Such an algorithm is a polynomial q-solver if its time complexity is polynomial in |X| and k (the number of clusters).

We say G admits an O(q) query complexity if there exists an algorithm A that is a polynomial q-solver for every clustering instance in G.
3 An Efficient SSAC Algorithm
In this section we provide an efficient algorithm for clustering with queries. The setting is the one described in the previous section. In particular, it is assumed that the oracle has a center-based clustering in mind which satisfies the γ-margin property. The space is Euclidean and the center of each cluster is the center of mass of the instances in that cluster. The algorithm not only makes same-cluster queries, but also another type of query, defined below.
Definition 4 (Cluster-assignment Query).
A cluster-assignment query asks for the index of the cluster that an instance x belongs to. In other words, Q(x) = i if and only if x ∈ C*_i.
Note however that each cluster-assignment query can be replaced with k same-cluster queries (see Appendix A in the supplementary material). Therefore, we can express everything in terms of the more natural notion of same-cluster queries; the use of cluster-assignment queries just makes the presentation of the algorithm simpler.
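The reduction from cluster-assignment to same-cluster queries can be sketched as follows (our own rendering of the idea in Appendix A: keep one representative per cluster discovered so far and compare against each):

```python
def cluster_assignment(x, reps, same_cluster):
    """Simulate a cluster-assignment query with at most k same-cluster
    queries: compare x with one stored representative per discovered
    cluster, and open a new cluster index if all answers are 'no'."""
    for idx, rep in enumerate(reps):
        if same_cluster(x, rep):
            return idx
    reps.append(x)                 # x is the representative of a new cluster
    return len(reps) - 1

# toy ground-truth clustering used to answer same-cluster queries
truth = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 2}
reps = []
assigned = {u: cluster_assignment(u, reps, lambda p, q: truth[p] == truth[q])
            for u in "abcde"}
```

Because the output of a clustering task is a partition, the indices assigned this way are as good as the oracle's own indices.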
Intuitively, our proposed algorithm does the following. In the first phase, it tries to approximate the center of one of the clusters. It does this by asking cluster-assignment queries about a set of randomly (uniformly) selected points, until it has a sufficient number of points from at least one cluster (say C_p). It uses the mean of these points, μ'_p, to approximate the cluster center.
In the second phase, the algorithm recovers all of the instances belonging to C_p. In order to do that, it first sorts all of the instances based on their distance to μ'_p. By showing that all of the points in C_p lie inside a sphere centered at μ'_p (which does not include points from any other cluster), it can find the radius of this sphere by doing binary search using same-cluster queries. After that, the elements of C_p are located and can be removed from the data set. The algorithm repeats this process k times to recover all of the clusters.
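The two phases above can be sketched as follows (a simplified, illustrative rendering, not the paper's exact Algorithm 1; the callbacks simulate the two query types):

```python
import numpy as np

def ssac_sketch(X, k, sample_size, assignment_query, same_cluster, rng):
    """Simplified sketch of the two-phase procedure described above.
    Phase 1 estimates one cluster center from sampled points; phase 2
    recovers that cluster by sorting and binary search, then removes it."""
    remaining = list(range(len(X)))
    out = {}
    for _ in range(k):
        if not remaining:
            break
        # Phase 1: sample points, find the most frequently hit cluster,
        # and use the mean of its sampled points as an approximate center.
        sample = rng.choice(remaining, size=min(sample_size, len(remaining)))
        hits_by_cluster = {}
        for i in sample:
            hits_by_cluster.setdefault(assignment_query(i), []).append(i)
        label, hits = max(hits_by_cluster.items(), key=lambda kv: len(kv[1]))
        mu = X[hits].mean(axis=0)
        # Phase 2: sort remaining points by distance to the estimated center;
        # under the margin condition the cluster forms a prefix of this order,
        # whose length is found by binary search with same-cluster queries.
        order = sorted(remaining, key=lambda i: np.linalg.norm(X[i] - mu))
        lo, hi = 0, len(order) - 1
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if same_cluster(order[mid], order[0]):
                lo = mid
            else:
                hi = mid - 1
        found = order[:lo + 1]
        for i in found:
            out[i] = label
        remaining = [i for i in remaining if i not in found]
    return out

# three well-separated 1-D clusters; the oracle answers from `truth`
X = np.array([[0.0], [0.5], [10.0], [10.5], [20.0], [20.5]])
truth = [0, 0, 1, 1, 2, 2]
res = ssac_sketch(X, 3, 4, lambda i: truth[i],
                  lambda i, j: truth[i] == truth[j],
                  np.random.default_rng(0))
```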
The details of our approach are stated precisely in Algorithm 1. Note that η is a small constant (footnote: it corresponds to the constant appearing in the generalized Hoeffding inequality bound, discussed in Theorem 26 in Appendix D of the supplementary material). Theorem 7 shows that if γ > 1 then our algorithm recovers the target clustering with high probability. Next, we give bounds on the time and query complexity of our algorithm. Theorem 8 shows that our approach needs O(k² log k + k log n) queries and runs with time complexity O(kn log n).
Lemma 5.
Let (X, d, C) be a clustering instance, where C is center-based and satisfies the γ-margin property. Let μ = {μ_1, ..., μ_k} be the set of centers corresponding to the centers of mass of C. Let μ'_i be such that d(μ_i, μ'_i) ≤ r(C_i) ε, where r(C_i) = max_{x ∈ C_i} d(x, μ_i) and ε = (γ − 1)/2. Then γ > 1 implies that for all x ∈ C_i and y ∈ X ∖ C_i,

d(x, μ'_i) < d(y, μ'_i).
Proof.
Fix any x ∈ C_i and y ∈ C_j with j ≠ i. Then d(x, μ'_i) ≤ d(x, μ_i) + d(μ_i, μ'_i) ≤ r(C_i)(1 + ε). Similarly, d(y, μ'_i) ≥ d(y, μ_i) − d(μ_i, μ'_i) > (γ − ε) r(C_i). Combining the two, and noting that γ − ε = 1 + ε for ε = (γ − 1)/2, we get that d(x, μ'_i) < d(y, μ'_i). ∎
Lemma 6.
Proof.
Define a uniform distribution over C_i. Then μ_i and μ'_i are respectively the true and the empirical mean of this distribution. A standard concentration inequality (Thm. 26 from Appendix D) shows that the empirical mean is close to the true mean, completing the proof. ∎
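Numerically, this concentration step is easy to visualize: the mean of a uniform sample from a cluster converges to the cluster's center of mass as the sample grows. The snippet below is a sanity check of that behavior, not the formal bound:

```python
import numpy as np

rng = np.random.default_rng(1)
# a synthetic "cluster" of 100,000 points in the plane
cluster = rng.normal(loc=[3.0, -2.0], scale=1.0, size=(100_000, 2))
mu = cluster.mean(axis=0)                       # true center of mass

def empirical_mean(p):
    """Mean of p points drawn uniformly (with replacement) from the cluster."""
    idx = rng.integers(0, len(cluster), size=p)
    return cluster[idx].mean(axis=0)

# the estimation error shrinks roughly like 1/sqrt(p)
err_large = float(np.linalg.norm(empirical_mean(10_000) - mu))
```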
Theorem 7.
Let (X, d, C) be a clustering instance, where C is center-based and satisfies the γ-margin property. Let μ_i be the center of mass of C_i. Assume δ ∈ (0, 1) and γ > 1. Then with probability at least 1 − δ, Algorithm 1 outputs C.
Proof.
In the first phase of the algorithm we make cluster-assignment queries about uniformly sampled points. Therefore, by the pigeonhole principle, there exists a cluster index p for which the sample contains sufficiently many points from C_p. Lemma 6 then implies that the algorithm computes a center μ'_p such that, with probability at least 1 − δ/k, we have d(μ_p, μ'_p) ≤ r(C_p) ε. By Lemma 5, this means that d(x, μ'_p) < d(y, μ'_p) for all x ∈ C_p and y ∈ X ∖ C_p. Hence, the radius found in phase two of Alg. 1 separates C_p from the rest of the data, and the cluster found in phase two equals C_p. Therefore, with probability at least 1 − δ/k one iteration of the algorithm successfully finds all the points of a cluster C_p. Using the union bound over the k iterations, we get that with probability at least 1 − δ the algorithm recovers the target clustering. ∎
Theorem 8.
Algorithm 1 makes O(k² log k + k log n) same-cluster queries to the oracle and runs in time O(kn log n).
Proof.
In each iteration, (i) the first phase of the algorithm samples a set of points and makes one cluster-assignment query per sampled point, and (ii) the second phase takes O(n log n) time (for sorting and binary search) and makes O(log n) same-cluster queries. Each cluster-assignment query can be replaced with k same-cluster queries. Substituting the phase-one sample size and noting that there are k iterations, the stated query and time bounds follow. ∎
Corollary 9.
The set of Euclidean clustering instances that satisfy the γ-margin property for some γ > 1 admits query complexity O(k² log k + k log n).
4 Hardness Results
4.1 Hardness of Euclidean k-means with Margin
Finding a k-means solution without the help of an oracle is generally computationally hard. In this section, we show that solving Euclidean k-means remains hard even if we know that the optimal solution satisfies the γ-margin property for γ = √3.4. In particular, we show hardness for the case of k = Θ(n^ε) for any ε ∈ (0, 1).
In Section 3, we proposed a polynomial-time algorithm that recovers the target clustering using O(k² log k + k log n) queries, assuming that the clustering satisfies the γ-margin property for γ > 1. Now assume that the oracle conforms to the optimal k-means clustering solution. In this case, for 1 < γ ≤ √3.4, solving k-means clustering would be NP-hard without queries, while it becomes efficiently solvable with the help of an oracle (footnote: to be precise, note that the algorithm used for clustering with queries is probabilistic, while the lower bound that we provide is for deterministic algorithms. However, this implies a lower bound for randomized algorithms as well, unless NP ⊆ BPP).
Given a set of instances X ⊂ R^d, the k-means clustering problem is to find a clustering C = {C_1, ..., C_k} which minimizes Σ_i Σ_{x ∈ C_i} ||x − μ_i||², where μ_i is the center of mass of C_i. The decision version of k-means is: given some value L, is there a clustering with cost at most L? The following theorem is the main result of this section.
Theorem 10.
Finding the optimal solution to the Euclidean k-means objective function is NP-hard when k = Θ(n^ε) (for any ε ∈ (0, 1)), even when the optimal solution satisfies the γ-margin property for γ = √3.4.
This result extends the hardness result of [BDR14] to the case of the Euclidean metric, rather than an arbitrary one, and to the γ-margin condition (instead of the α-center proximity considered there). The full proof is rather technical and is deferred to the supplementary material (Appendix C). In the next sections, we provide an outline of the proof.
4.1.1 Overview of the proof
Our method for proving Thm. 10 is based on the approach employed by [Vat09]. However, the original construction proposed in [Vat09] does not satisfy the margin property. Therefore, we have to modify the proof by setting up the parameters of the construction more carefully.
To prove the theorem, we provide a reduction from the problem of Exact Cover by 3-Sets (X3C), which is NP-complete [GJ02], to the decision version of k-means.
Definition 11 (X3C).
Given a set U containing exactly 3m elements and a collection S = {S_1, ..., S_l} of subsets of U such that each S_i contains exactly three elements, do there exist m elements in S such that their union is U?
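For intuition, X3C is easy to state in code; the brute-force checker below (exponential in m, for illustration only) decides small instances:

```python
from itertools import combinations

def has_exact_cover(U, S, m):
    """Brute-force Exact Cover by 3-Sets: do some m of the 3-element
    subsets in S partition U?  (Exponential; for intuition only.)"""
    target = frozenset(U)
    for choice in combinations(S, m):
        union = frozenset().union(*choice)
        # m pairwise-disjoint 3-sets covering U have a union of size 3m
        if len(union) == 3 * m and union == target:
            return True
    return False

U = {1, 2, 3, 4, 5, 6}
S = [{1, 2, 3}, {4, 5, 6}, {1, 4, 5}, {2, 3, 6}]
```

Here {1, 2, 3} and {4, 5, 6} form an exact cover, so the answer is yes; removing {4, 5, 6} makes it no.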
We will show how to translate each instance (U, S) of X3C to an instance of k-means clustering in the Euclidean plane, X. In particular, X has a grid-like structure consisting of l rows (one for each S_i) and columns corresponding to the elements of U, embedded in the Euclidean plane. The special geometry of the embedding makes sure that any low-cost k-means clustering of the points (for a suitable choice of k) exhibits a certain structure. In particular, any low-cost k-means clustering can cluster each row in only one of two ways; one of these corresponds to S_i being included in the cover, while the other corresponds to it being excluded. We will then show that (U, S) has a cover of size m if and only if X has a clustering of cost less than a specific value L. Furthermore, our choice of embedding makes sure that such an optimal clustering satisfies the γ-margin property for γ = √3.4.
4.1.2 Reduction design
Given an instance of X3C, that is, the elements U and the collection S, we construct a set of points X in the Euclidean plane which we want to cluster. In particular, X consists of a set of points arranged in a grid-like manner, together with points corresponding to the sets S_i.
The set of grid points is as described in Fig. 2. Each row is composed of a prescribed sequence of points, and the distances between the points are also shown in Fig. 2. In addition, all these points are weighted, which simply means that each point is actually a set of points at the same location.
The points corresponding to each S_i are constructed based on the membership of elements in S_i and placed as depicted in Fig. 2: for each relevant element, exactly one of two designated locations is occupied, depending on whether or not that element belongs to S_i. We then fix the weights and distances of the construction accordingly.
Lemma 12.
The set X has a clustering of cost at most L if and only if there is an exact cover for the X3C instance.
Lemma 13.
Any clustering of X with cost at most L satisfies the γ-margin property for γ = √3.4. Furthermore, the number of clusters satisfies k = Θ(n^ε).
4.2 Lower Bound on the Number of Queries
In the previous section we showed that k-means clustering is NP-hard even under the γ-margin assumption (for γ = √3.4). On the other hand, in Section 3 we showed that this is not the case if the algorithm has access to an oracle. In this section, we show a lower bound on the number of queries needed to obtain a polynomial-time algorithm for k-means clustering under the margin assumption.
Theorem 14.
For any γ ≤ √3.4, finding the optimal solution to the k-means objective function is NP-hard even when the optimal clustering satisfies the γ-margin property and the algorithm can ask O(log k + log n) same-cluster queries.
Proof.
Proof by contradiction: assume that there is a polynomial-time algorithm A that makes O(log k + log n) same-cluster queries to the oracle. Then, we show there exists another algorithm A' for the same problem that is still polynomial but uses no queries. This contradicts Theorem 10, which proves the result.
In order to prove that such an A' exists, we use a ‘simulation’ technique. Note that A makes only q ≤ β(log k + log n) binary queries, where β is a constant. The oracle can therefore respond to these queries in at most 2^q ≤ (kn)^β different ways. The algorithm A' can try all of the 2^q possible responses by the oracle and output the solution with minimum k-means clustering cost. Therefore, A' runs in polynomial time and is equivalent to A. ∎
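The simulation argument is mechanical; the sketch below (our own toy rendering) enumerates all 2^q answer strings, runs the query-based algorithm on each, and keeps the cheapest output:

```python
from itertools import product

def simulate_without_oracle(run_with_answers, q, cost):
    """Replace q binary oracle queries by trying all 2^q possible answer
    strings and returning the cheapest resulting clustering (sketch)."""
    best = None
    for answers in product([False, True], repeat=q):
        candidate = run_with_answers(list(answers))
        if best is None or cost(candidate) < cost(best):
            best = candidate
    return best

# toy query-based "algorithm" on the 1-D points [0, 1, 9, 10]: its single
# query supposedly asks whether the middle points are co-clustered
pts = [0.0, 1.0, 9.0, 10.0]

def toy_alg(answers):
    return [[0, 1, 2, 3]] if answers[0] else [[0, 1], [2, 3]]

def kmeans_cost(clustering):
    total = 0.0
    for c in clustering:
        mu = sum(pts[i] for i in c) / len(c)
        total += sum((pts[i] - mu) ** 2 for i in c)
    return total

best = simulate_without_oracle(toy_alg, 1, kmeans_cost)
```

Since the loop runs 2^q times, the simulation stays polynomial exactly when q is logarithmic in the input size, which is what drives the lower bound.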
5 Conclusions and Future Directions
In this work we introduced a framework for semi-supervised active clustering (SSAC) with same-cluster queries. Those queries can be viewed as a natural way for a clustering mechanism to gain domain knowledge, without which clustering is an under-defined task. The focus of our analysis was the computational and query complexity of such SSAC problems when the input data set satisfies a clusterability condition, the γ-margin property.
Our main result shows that access to a limited number of such query answers (logarithmic in the size of the data set and quadratic in the number of clusters) allows efficient successful clustering under conditions (margin parameter between 1 and √3.4) that render the problem NP-hard without the help of such a query mechanism. We also provided a lower bound indicating that at least Ω(log k + log n) queries are needed to make those NP-hard problems feasibly solvable.
With practical applications of clustering in mind, a natural extension of our model is to allow the oracle (i.e., the domain expert) to refrain from answering a certain fraction of the queries, or to make a certain number of errors in its answers. It would be interesting to analyze how the performance guarantees of SSAC algorithms behave as a function of such abstention and error rates. Interestingly, our algorithm can be modified to handle a sub-logarithmic number of abstentions by checking all possible oracle answers to them (similar to the ‘simulation’ trick in the proof of Thm. 14).
Acknowledgments
We would like to thank Samira Samadi and Vinayak Pathak for helpful discussions on the topics of this paper.
References
 [ABD15] Hassan Ashtiani and Shai Ben-David. Representation learning for clustering: A statistical framework. In Uncertainty in AI (UAI), 2015.
 [ABS12] Pranjal Awasthi, Avrim Blum, and Or Sheffet. Center-based clustering under perturbation stability. Information Processing Letters, 112(1):49–54, 2012.

 [AG15] Hassan Ashtiani and Ali Ghodsi. A dimension-independent generalization bound for kernel supervised principal component analysis. In Proceedings of The 1st International Workshop on “Feature Extraction: Modern Questions and Challenges”, NIPS, pages 19–29, 2015.
 [BB08] Maria-Florina Balcan and Avrim Blum. Clustering with interactive feedback. In Algorithmic Learning Theory, pages 316–328. Springer, 2008.

 [BBM02] Sugato Basu, Arindam Banerjee, and Raymond Mooney. Semi-supervised clustering by seeding. In Proceedings of the 19th International Conference on Machine Learning (ICML-2002), 2002.
 [BBM04] Sugato Basu, Mikhail Bilenko, and Raymond J Mooney. A probabilistic framework for semi-supervised clustering. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 59–68. ACM, 2004.

 [BBV08] Maria-Florina Balcan, Avrim Blum, and Santosh Vempala. A discriminative framework for clustering via similarity functions. In Proceedings of the fortieth annual ACM symposium on Theory of computing, pages 671–680. ACM, 2008.
 [BDR14] Shalev Ben-David and Lev Reyzin. Data stability in clustering: A closer look. Theoretical Computer Science, 558:51–61, 2014.
 [Ben15] Shai BenDavid. Computational feasibility of clustering under clusterability assumptions. CoRR, abs/1501.00437, 2015.
 [BL12] Maria-Florina Balcan and Yingyu Liang. Clustering under perturbation resilience. In Automata, Languages, and Programming, pages 63–74. Springer, 2012.
 [Das08] Sanjoy Dasgupta. The hardness of k-means clustering. Department of Computer Science and Engineering, University of California, San Diego, 2008.
 [GJ02] Michael R Garey and David S Johnson. Computers and Intractability, volume 29. W. H. Freeman, New York, 2002.
 [KBDM09] Brian Kulis, Sugato Basu, Inderjit Dhillon, and Raymond Mooney. Semi-supervised graph clustering: a kernel approach. Machine Learning, 74(1):1–22, 2009.
 [MNV09] Meena Mahajan, Prajakta Nimbhorkar, and Kasturi Varadarajan. The planar k-means problem is NP-hard. In WALCOM: Algorithms and Computation, pages 274–285. Springer, 2009.
 [Vat09] Andrea Vattani. The hardness of k-means clustering in the plane. Manuscript, accessible at http://cseweb.ucsd.edu/avattani/papers/kmeans_hardness.pdf, 2009.
Appendix A Relationships Between Query Models
Proposition 15.
Any clustering algorithm that uses only same-cluster queries can be adjusted to use only cluster-assignment queries (and no same-cluster queries) with the same order of time complexity.
Proof.
We can replace each same-cluster query Q(x_1, x_2) with two cluster-assignment queries, since Q(x_1, x_2) = true if and only if Q(x_1) = Q(x_2). ∎
Proposition 16.
Any algorithm that uses only cluster-assignment queries can be adjusted to use only same-cluster queries (and no cluster-assignment queries) with at most a factor k increase in computational complexity, where k is the number of clusters.
Proof.
If the clustering algorithm has access to an instance from each of the k clusters (say x_1, ..., x_k), then it can simply simulate a cluster-assignment query about x by making k same-cluster queries Q(x, x_i). Otherwise, assume that at the time of querying it has instances from only k' < k clusters. In this case, the algorithm can do the same with the k' instances and, if it does not find the cluster, assign x to a new cluster index. This works because in the clustering task the output of the algorithm is a partition of the elements, and therefore the indices of the clusters do not matter. ∎
Appendix B Comparison of Margin and Center Proximity
In this paper, we introduced the γ-margin niceness property. We further showed upper and lower bounds on the computational complexity of clustering under this assumption. It is therefore important to compare this notion with other previously studied clusterability notions.
An important notion of niceness of data for clustering is the α-center proximity property.
Definition 17 (α-center proximity [ABS12]).
Let (X, d) be a clustering instance in some metric space M, and let k be the number of clusters. We say that a center-based clustering C = {C_1, ..., C_k} induced by centers μ_1, ..., μ_k ∈ M satisfies the α-center proximity property (with respect to X and k) if the following holds:

for all i ≠ j and every x ∈ C_i, d(x, μ_j) > α d(x, μ_i).
This property has been considered in various previous studies [BL12, ABS12]. In this appendix we show some connections between the γ-margin and α-center proximity properties.
It is important to note that throughout this paper we considered clustering in Euclidean spaces. Furthermore, the centers were not restricted to be selected from the data points. However, this is not necessarily the case in other studies.
[Table 1: known upper and lower bounds for clustering under α-center proximity, for Euclidean and general metrics, in both the restricted (centers from data) and unrestricted settings.]
An overview of the known results under α-center proximity is provided in Table 1. The results cover both the case where the centers are restricted to be selected from the training set and the unrestricted case (where the centers can be arbitrary points of the metric space). Note that any upper bound that works for general metric spaces also works for the Euclidean space.
We will show that, using the same techniques, one can prove upper and lower bounds for the γ-margin property. It is important to note that for the γ-margin property the upper and lower bounds match in some cases. Hence, there is no hope of further improving those bounds unless P = NP. A summary of our results is provided in Table 2.
[Table 2: our upper and lower bounds for clustering under the γ-margin property, for Euclidean and general metrics, in both the restricted and unrestricted settings.]
B.1 Centers from data
Theorem 18.
Let (X, d) be a clustering instance and γ ≥ 2. Then Algorithm 1 in [BL12] outputs a tree T with the following property:
Any clustering C = {C_1, ..., C_k} which satisfies the γ-margin property and whose cluster centers are in X is a pruning of the tree T. In other words, for every C_i, there exists a node in the tree T whose set of instances is exactly C_i.
Proof.
Let μ_1, ..., μ_k be the centers of C, and let α = √2 + 1. [BL12] prove the correctness of their algorithm for clusterings that satisfy α-center proximity. Their proof relies only on the following three properties, which are implied by α-center proximity. We will show that these properties are implied by γ ≥ 2 margin instances as well.


. 
This is trivially true since . 
Let . Observe that . Also, .
∎
Theorem 19.
Let (X, d) be a clustering instance and k the number of clusters. For γ < 2, finding a clustering of X which satisfies the γ-margin property and whose corresponding centers belong to X is NP-hard.
Proof.
For α = 2, [BDR14] proved that in general metric spaces it is NP-hard to find a clustering which satisfies α-center proximity and whose centers belong to the data. Note that the reduced instance in their proof also satisfies the γ-margin property for γ < 2. ∎
B.2 Centers from metric space
Theorem 20.
Let (X, d) be a clustering instance and γ ≥ 3. Then the standard single-linkage algorithm outputs a tree T with the following property:
Any clustering C = {C_1, ..., C_k} which satisfies the γ-margin property is a pruning of T. In other words, for every C_i, there exists a node in the tree T whose set of instances is exactly C_i.
Proof.
[BBV08] showed that if a clustering has the strong stability property, then single-linkage outputs a tree with the required property. It is simple to see that γ ≥ 3 margin instances have strong stability, and the claim follows. ∎
Theorem 21.
Let (X, d) be a clustering instance and γ < 3. Then finding a clustering of X which satisfies the γ-margin property is NP-hard.
Proof.
[ABS12] proved the above claim for α-center proximity instances. Note, however, that the construction in their proof also satisfies the γ-margin property for γ < 3. ∎
Appendix C Proofs of Lemmas 12 and 13
In Section 4 we proved Theorem 10 based on two technical results (Lemmas 12 and 13). In this appendix we provide the proofs of these lemmas. To begin, we first need to establish some properties of the Euclidean embedding of X proposed in Section 4.
Definition 22 ( and Clustering of ).
An Clustering of row is a clustering in the form of . A Clustering of row is a clustering in the form of .
Definition 23 (Good point for a cluster).
A cluster is good for a point if adding to increases cost by exactly
Given the above definition, the following simple observations can be made.


The clusters , and are good for and .

The clusters and are good for and .
Definition 24 (Nice Clustering).
A clustering is nice if every such point is a singleton cluster, each row is grouped in the form of one of the two row clusterings defined above, and each remaining point is added to a cluster which is good for it.
It is straightforward to compare the costs of the two forms of row clustering; hence, the cost of a nice clustering of X is determined by how many rows are grouped in each form. Also, observe that any nice clustering of X has only the following four different types of clusters.


Type E 
The cost of this cluster is and the contribution of each location to the cost (i.e., ) is . 
Type F  or or or
The cost of any cluster of this type is and the contribution of each location to the cost is at most . This is equal to because we had set . 
Type I  or or or
The cost of any cluster of this type is and the contribution to the cost of each location is . For our choice of , the contribution is . 
Type J  or
The cost of this cluster is (or ) and the contribution of each location to the cost is at most .
Hence, observe that in a nice clustering, any location contributes at most a bounded amount to the total clustering cost. This observation will be useful in the proof of the lemma below.
Lemma 25.
For large enough instances, any non-nice clustering of X costs more than L.
Proof.
We will show that any non-nice clustering of X costs more than any nice clustering. This will prove our result. The following cases are possible.


contains a cluster of cardinality (i.e., contains weighted points)
Observe that any such cluster has at least some locations at a distance greater than 4 to it, and some locations at a larger prescribed distance to it. Hence, the cost of such a cluster is bounded below accordingly. This allows us to use only a bounded number of singletons, because a nice clustering of these points uses fewer clusters than the non-nice clustering does. Comparing the cost of the nice clustering on these points with the cost of the non-nice clustering, the claim follows, and the difference in cost is bounded below as required.
Contains a cluster of cardinality
Simple arguments show that amongst all clusters of cardinality , the following has the minimum cost. . The cost of this cluster is . Arguing as before, this allows us to use singletons. Hence, a nice cluster on these points costs at most . The difference of cost is at least . 
Contains a cluster of cardinality
Simple arguments show that amongst all clusters of cardinality , the following has the minimum cost. . The cost of this cluster is . Arguing as before, this allows us to use singletons. Hence, a nice cluster on these points costs at most . The difference of cost is at least . 
Contains a cluster of cardinality
It is easy to see that amongst all clusters of cardinality , the following has the minimum cost. . The cost of this cluster is . Arguing as before, this allows us to use singletons. Hence, a nice cluster on these points costs at most . The difference of cost is at least . 
All the clusters have cardinality
Observe that amongst all non-nice clusters of this cardinality, the following has the minimum cost. . The cost of this cluster is . Arguing as before, this allows us to use at most one more singleton. Hence, a nice cluster on these points costs at most . The difference in cost is at least . It is also simple to see that any non-nice clustering of a different size causes an increase in cost of at least .
∎
Proof of Lemma 12.
The proof is identical to the proof of Lemma 11 in [Vat09]. Note that the parameters we use differ from those utilized by [Vat09]; however, this is not an issue, because we can invoke our Lemma 25 instead of the analogous result of Vattani (i.e., Lemma 10 in Vattani's paper). The sketch of the proof is as follows. Based on Lemma 25, only nice clusterings of X can have cost at most L. On the other hand, a nice clustering corresponds to an exact 3-set cover. Therefore, if there exists a clustering of X of cost at most L, then there is an exact 3-set cover. The other direction is simpler to prove: assume that there exists an exact 3-set cover. Then the corresponding construction of X ensures that it can be clustered nicely, and will therefore cost at most L.
∎
Proof of Lemma 13.
As argued before, any nice clustering has four different types of clusters. We will calculate the minimum ratio of out-of-cluster to in-cluster distances from the cluster mean for each of these types (where the mean is taken over all the points in the cluster). Then the minimum over the four types gives the desired γ.


For Type E clusters .

For Type F clusters. .

For Type I clusters, standard calculations show that .

For Type J clusters .
Furthermore, and . Hence, for polynomially sized instances, our hardness result holds for any k = Θ(n^ε). ∎