1 Introduction
The k-means problem is to partition a set X of n points in d-dimensional space R^d into k subsets X_1, …, X_k such that Σ_{i=1}^{k} Σ_{p∈X_i} dist(p, c_i)² is minimized, where c_i is the centroid of X_i, and dist(p, q) is the Euclidean distance between two points p and q. The k-means problem is one of the classical NP-hard problems in computer science, and it has broad applications as well as theoretical importance. The k-means problem is NP-hard even for the case k = 2 [3]. The classical k-means problem and k-median problem have received a lot of attention in the last decades [28, 8, 12, 19, 25, 1, 9, 21, 16, 30].
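To make the objective concrete, the following self-contained sketch computes the k-means cost of a given partition, i.e., the sum of squared Euclidean distances from each point to the centroid of its cluster. The helper names are ours, used only for illustration:

```python
from typing import List, Sequence

Point = Sequence[float]

def centroid(cluster: List[Point]) -> List[float]:
    """Coordinate-wise mean (the optimal center of a single cluster)."""
    d = len(cluster[0])
    return [sum(p[i] for p in cluster) / len(cluster) for i in range(d)]

def sq_dist(p: Point, q: Point) -> float:
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans_cost(partition: List[List[Point]]) -> float:
    """Sum over all clusters of squared distances to the cluster centroid."""
    return sum(sq_dist(p, centroid(c)) for c in partition for p in c)
```

For instance, the partition {(0,0), (2,0)}, {(10,0)} has cost 2: the first cluster's centroid is (1,0), contributing 1 + 1, and the singleton contributes 0.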
Inaba, Katoh, and Imai [20] showed that the k-means problem has an exact algorithm with running time n^{O(kd)}. For the k-means problem, Arthur and Vassilvitskii [5] gave an O(log k)-approximation algorithm. A (1+ε)-approximation scheme was derived by de la Vega et al. [12]. Kumar, Sabharwal, and Sen [25] presented a (1+ε)-approximation algorithm for the k-means problem with running time O(2^{(k/ε)^{O(1)}} nd). Ostrovsky et al. [30] developed a (1+ε)-approximation for the k-means problem under a separation condition. Feldman, Monemizadeh, and Sohler [16] gave a (1+ε)-approximation scheme for the k-means problem using coresets. Jaiswal, Kumar, and Yadav [21] presented a (1+ε)-approximation algorithm for the k-means problem using a sampling method, and later gave an improved (1+ε)-approximation algorithm [22]. Kanungo et al. [23] presented a (9+ε)-approximation algorithm for the problem in polynomial time by applying local search. Ahmadian et al. [2] gave an improved constant-factor approximation algorithm for the k-means problem in Euclidean space. For fixed d and arbitrary k, Friggstad, Rezapour, and Salavatipour [18] and Cohen-Addad, Klein, and Mathieu [11] proved that the local search algorithm yields a PTAS for the problem. Cohen-Addad [10] further showed that the running time can be improved.
The input data of the k-means problem always satisfies a locality property. However, in many applications each cluster of the input data may have to satisfy some additional constraints. The constrained k-means problem therefore has a different structure from the classical k-means problem, which lets each point go to the cluster with the nearest center. Constrained k-means problems have received a lot of attention in the literature, such as the chromatic clustering problem [4, 14], the capacitated clustering problem [37], gather clustering [33], fault tolerant clustering [32], uncertain data clustering [36], semi-supervised clustering [35, 34], and diversity clustering [26]. As given in Ding and Xu [15], all k-means problems with constraint conditions can be defined as follows.
Definition 1
[Constrained k-means problem] Given a point set X, a list of constraints L, and a positive integer k, the constrained k-means problem is to partition X into k clusters X_1, …, X_k such that all the constraints in L are satisfied and Σ_{i=1}^{k} Σ_{p∈X_i} dist(p, c_i)² is minimized, where c_i denotes the centroid of X_i.
In recent years, there has been some progress on the constrained k-means problem. The first polynomial time approximation scheme for the constrained k-means problem was shown by Ding and Xu [15]; it also yields a collection of candidate approximate centers. The fastest existing approximation schemes for the constrained k-means problem were first derived by Bhattacharya, Jaiswal, and Kumar [6, 7]. Their algorithm gives a collection of candidate approximate centers. Feng et al. [17] analyzed the complexity of [6, 7] and gave an algorithm that outputs a smaller collection of candidate approximate centers.
It is known that the 2-means problem is the smallest version of the k-means problem, and it remains NP-hard. Obviously, every approximation algorithm for the k-means problem can be directly applied to obtain an approximation algorithm for the 2-means problem. However, not every approximation algorithm for the 2-means problem can be generalized to solve the k-means problem. Understanding the characteristics of the 2-means problem thus gives new insight into the k-means problem. Meanwhile, partitioning the input data into two clusters is useful in many interesting applications, such as separating the input into "good" and "bad" clusters, or into "normal" and "abnormal" clusters.
For the 2-means problem, Inaba, Katoh, and Imai [20] presented an approximation scheme. Matoušek [27] gave a deterministic (1+ε)-approximation algorithm. Sabharwal and Sen [31] presented a (1+ε)-approximation algorithm with linear running time. Kumar, Sabharwal, and Sen [24] gave a randomized (1+ε)-approximation algorithm.
This paper develops a new technique for the constrained 2-means problem. It is based on how the sizes of the two clusters in the constrained 2-means problem are balanced. This yields a faster algorithm. Our algorithm outputs a collection of candidate approximate centers, one of which induces a (1+ε)-approximation for the constrained 2-means problem. The technique gives a faster way to obtain the first two approximate centers when applied to the constrained k-means problem, and can speed up the existing approximation schemes for constrained k-means with k greater than 2. Using the method developed in this paper, we show that every existing PTAS for the constrained k-means problem can be transformed into a new PTAS with improved time complexity. Therefore, we provide a unified approach to speed up the existing approximation schemes for the constrained k-means problem.
This paper is organized as follows. In Section 2, we give some basic notations. In Section 3, we give an overview of the new algorithm for the constrained 2-means problem. In Section 4, we give a much faster approximation scheme for the constrained 2-means problem. In Section 5, we apply the method to the general constrained k-means problem, and show faster approximation schemes.
2 Preliminaries
This section gives some notations that are used in the algorithm design.
Definition 2
Let β be a real number in (0, 1/2]. Let X be a set of n points in R^d.

A partition {X_1, X_2} of X is β-balanced if |X_i| ≥ β|X| for i = 1, 2.

A β-balanced 2-means problem is to partition X into {X_1, X_2} such that |X_i| ≥ β|X| for i = 1, 2, and the 2-means cost is minimized.
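Reading the definition as requiring every cluster to hold at least a β fraction of the points, the balance condition can be checked directly; this helper is illustrative only, not part of the algorithm:

```python
def is_balanced(partition, beta):
    """True if every cluster holds at least a beta fraction of all points."""
    n = sum(len(cluster) for cluster in partition)
    return all(len(cluster) >= beta * n for cluster in partition)
```

For example, a partition of 5 points into clusters of sizes 3 and 2 is 0.4-balanced but not 0.5-balanced.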
Definition 3
Let X be a set of points in R^d, and x a point in R^d.

Define c(X) = (1/|X|) Σ_{p∈X} p, the centroid of X.

Define Δ(X, x) = Σ_{p∈X} dist(p, x)².
Definition 4
Let be a set of points in , and be a partition of .

Define .

Define .

Define .

Define .

Define .
The Chernoff bound (see [29]) is used in the approximation algorithm when our main result is applied in some concrete models.
Theorem 5
Let X_1, …, X_n be independent 0-1 random variables, where X_i takes the value 1 with probability at least p for i = 1, …, n. Let X = X_1 + … + X_n. Then for any δ ∈ (0, 1), Pr(X < (1 − δ)pn) < e^{−δ²pn/2}.

The union bound is expressed by the inequality
(1) Pr(A_1 ∪ A_2 ∪ … ∪ A_m) ≤ Pr(A_1) + Pr(A_2) + … + Pr(A_m),
where A_1, …, A_m are events that may not be independent. We will use the famous Stirling formula
(2) n! ≈ √(2πn) · (n/e)^n.
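As a quick numerical sanity check of Stirling's formula (the relative error decays roughly like 1/(12n)):

```python
import math

def stirling(n: int) -> float:
    """Stirling's approximation sqrt(2*pi*n) * (n/e)**n to n!."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# The ratio n! / stirling(n) approaches 1 from above as n grows.
for n in (5, 10, 20):
    print(n, round(math.factorial(n) / stirling(n), 4))
```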
For two points p and q in R^d, both dist(p, q) and ||p − q|| represent their Euclidean distance. For a finite set A, |A| is the number of elements in it.
Lemma 6
[25] For a set P of points and any point x, Δ(P, x) = Δ(P, c(P)) + |P| · dist(c(P), x)², where c(P) denotes the centroid of P.
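Under the standard reading of this centroid identity from [25] (moving the reference point from the centroid c(P) to x adds exactly |P| times the squared centroid-to-x distance), a numeric spot check with random data:

```python
import random

def centroid(P):
    """Coordinate-wise mean of a point set."""
    d = len(P[0])
    return [sum(p[i] for p in P) / len(P) for i in range(d)]

def delta(P, x):
    """Delta(P, x): sum of squared Euclidean distances from points of P to x."""
    return sum(sum((a - b) ** 2 for a, b in zip(p, x)) for p in P)

random.seed(0)
P = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
x = [2.0, -1.0, 0.5]
c = centroid(P)
# Delta(P, x) = Delta(P, c(P)) + |P| * dist(c(P), x)^2, up to float rounding.
assert abs(delta(P, x) - (delta(P, c) + len(P) * delta([c], x))) < 1e-9
```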
Lemma 7
[20] Let P be a set of points in R^d. Assume that T is a set of m points obtained by sampling points from P uniformly and independently. Then for any δ > 0, with probability at least 1 − δ, Δ(P, c(T)) ≤ (1 + 1/(δm)) · Δ(P, c(P)), where c(·) denotes the centroid.
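Under the standard reading of this sampling lemma (the centroid of m uniform samples is, with probability at least 1 − δ, a (1 + 1/(δm))-approximate center for P), a small simulation; the data set and sizes here are ours:

```python
import random

def centroid(P):
    d = len(P[0])
    return [sum(p[i] for p in P) / len(P) for i in range(d)]

def delta(P, x):
    """Sum of squared Euclidean distances from the points of P to x."""
    return sum(sum((a - b) ** 2 for a, b in zip(p, x)) for p in P)

random.seed(1)
P = [[random.gauss(0, 1) for _ in range(2)] for _ in range(2000)]
opt = delta(P, centroid(P))
m, fail_prob = 50, 0.1
trials = 200
# Fraction of trials in which the sample centroid is (1 + 1/(delta*m))-good.
good = sum(
    delta(P, centroid(random.choices(P, k=m))) <= (1 + 1 / (fail_prob * m)) * opt
    for _ in range(trials)
)
print(good / trials)  # empirically far above the guaranteed 1 - 0.1
```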
Lemma 8
[15] Let P be a set of points in R^d, and Q be an arbitrary subset of P with α|P| points for some α ∈ (0, 1]. Then dist(c(Q), c(P))² ≤ ((1 − α)/α) · Δ(P, c(P))/|P|, where c(·) denotes the centroid.
3 Overview of Our Method
In order to develop a faster algorithm for the constrained 2-means problem, we assume that the input set has two clusters. We will try to find one subset from each cluster whose size is large enough to derive an approximate center by Lemma 8. We consider two different cases. The first case is that the two clusters have balanced sizes. We draw two sets of random samples from the input. An approximate center for the first cluster will be generated via one of the fixed-size subsets of the first sample set, and an approximate center for the second cluster via one of the fixed-size subsets of the second sample set. The two sample sizes are selected based on the balance condition between the sizes of the two clusters.
We then discuss the case where the first cluster is much larger than the second. We generate a subset that will be used to derive an approximate center for the first cluster; it can be obtained via random samples from the whole input, since the first cluster dominates. There are again two cases for finding an approximate center for the second cluster. The first case is that almost all points of the second cluster are close to the approximate center of the first cluster. In this case, we simply let the second approximate center be the same as the first, which is justified by Lemma 8. The second case is that enough points of the second cluster are far from the first center. This transforms the problem into finding the second approximate center for the second cluster, assuming the first approximate center is already good enough.
The first phase of the algorithm takes the whole input as the search set. Each subsequent phase extracts the half of the elements with larger distances to the first approximate center than the rest, so each phase shrinks the search area by a constant factor and there are logarithmically many phases in total. This method was used in the existing algorithms. As we only need one approximate center for the first cluster, a factor of the running time is saved when finding the first approximate center. This makes our approximation algorithm faster for the constrained 2-means problem.
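The halving search described above can be sketched as follows. The helper names are ours, and the plain sample-centroid step is a simplification: the actual algorithm enumerates subsets of the sample at each level rather than taking a single sample centroid:

```python
import random

def dist2(p, q):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    d = len(pts[0])
    return [sum(p[i] for p in pts) / len(pts) for i in range(d)]

def peel_candidates(X, c1, m=10, seed=0):
    """Candidate second centers: at each level, keep only the half of the
    points farthest from the first center c1 and record the centroid of a
    small uniform sample of what remains.  About log2(n) levels in total."""
    rng = random.Random(seed)
    cands = []
    remaining = sorted(X, key=lambda p: dist2(p, c1))  # ascending distance
    while remaining:
        cands.append(mean(rng.choices(remaining, k=min(m, len(remaining)))))
        remaining = remaining[(len(remaining) + 1) // 2:]  # drop nearer half
    return cands
```

On 8 one-dimensional points this produces 4 candidates, one per halving level (8, 4, 2, 1 remaining points).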
4 Approximation Algorithm for Constrained 2-means
In this section, an approximation scheme is presented for the constrained 2-means problem. The methods used in this section will be applied to the general constrained k-means problem in Section 5. We define some parameters before describing the algorithm for the constrained 2-means problem.
4.1 Setting Parameters
Assume that a real parameter is used to control the approximation ratio, and another real parameter is used to control the failure probability of the randomized algorithm. We define some constants for our algorithm and its analysis. All the parameters that are set up through (3) to (17) in this section are positive real constants.
(3)  
(4)  
(5)  
(6) 
We select to satisfy inequality (7).
(7) 
(8)  
(9)  
(10)  
(11)  
(12)  
(13)  
(14)  
(15)  
(16) 
We select and in to satisfy inequality (17).
(17) 
4.2 Algorithm Description
In this section, an approximation algorithm for the constrained 2-means problem is given. It outputs a collection of candidate centers, and one of them yields a (1+ε)-approximation for the constrained 2-means problem.
Algorithm 2-Means
Input: a set X of n points in R^d, and a real parameter ε to control the accuracy of approximation.
Output: A collection C of candidate pairs of centers.

Let ;

Let ;

Let be defined as that in equation (16);

Let ;

Let ;

Let ;

Select a set S_1 of random samples from X;

Select a set S_2 of random samples from X;

For every two subsets P_1 of S_1 and P_2 of S_2 of the prescribed size,

{

Compute the centroid c_1 of P_1, and the centroid c_2 of P_2;

Add the pair (c_1, c_2) to C;

}

Select a set T of random samples from X;

Compute the centroid of T;

Let ;

Repeat

Select a set of random samples from ;

For each prescribed-size subset of the sample augmented with copies of the current center,

{

Compute the centroid of ;

Add to ;

}

Let be the th largest of ;

Let contain all of the points in with ;

Let ;

Until the remaining set is empty;

Output the collection C;
End of Algorithm
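The core of the balanced case in the listing above can be sketched as follows: draw two uniform sample multisets, enumerate all size-m subsets, and emit each pair of subset centroids as a candidate pair. The sample sizes and subset size used here are placeholders of ours, not the paper's values:

```python
import random
from itertools import combinations

def mean(pts):
    """Coordinate-wise mean of a collection of points."""
    d = len(pts[0])
    return [sum(p[i] for p in pts) / len(pts) for i in range(d)]

def candidate_center_pairs(X, s1=8, s2=8, m=2, seed=0):
    """Enumerate centroid pairs from all size-m subsets of two uniform
    sample multisets of X.  When both clusters are well represented in
    the samples, some pair is close to the true pair of centroids."""
    rng = random.Random(seed)
    S1 = rng.choices(X, k=s1)  # uniform sampling with replacement
    S2 = rng.choices(X, k=s2)
    pairs = []
    for P1 in combinations(S1, m):
        for P2 in combinations(S2, m):
            pairs.append((mean(P1), mean(P2)))
    return pairs  # C(s1, m) * C(s2, m) candidate pairs
```

With s1 = s2 = 8 and m = 2 this enumerates C(8,2)² = 784 candidate pairs; a downstream step then keeps the pair with the best constrained 2-means cost.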
Definition 11
Let c_1 be the approximate center of the first cluster obtained via the algorithm.

Define .

Define .

Define .

Define .

Let be the center of for .

Let be the center for for

For each , let .

Let be the multiset with number of . It transforms every element of to .

Let .

Let be the center of for .
Lemma 12
Let be a real number in and be positive real number with . Then we have ,
Proof: By Taylor formula, we have for some . Thus, we have .
Lemma 13
The algorithm 2-Means(·) has the following properties:

With probability at least , at least random points are from in , where .

If the two clusters and satisfy , then with probability at least , at least random points are from in , where .

If the clusters and satisfy , then with probability at least , contains no element of , where for all , where .
Proof: The lemma is proven by the following statements.
Statement 1: Since , we have . Let . With elements from , with probability at most (by inequality (19)), there are less than elements from by Theorem 5.
Statement 2: Let . By line 5 of the algorithm, we have . When elements are selected from , by Theorem 5, with probability at most (by inequality (19) and the range of determined nearby equation (7)), multiset has less than random points from .
Statement 3: After getting and of sizes and , respectively, it takes cases to enumerate their subsets of size . If contains elements from and contains elements from , then it generates pairs of and .
Statement 4: Let . When elements are selected in , the probability that contains no element of is at least by Lemma 12, and equations (16) and (8). Let . We have for all small positive when .
Statement 5: The loop from line 17 to line 27 iterates at most times since . Each iteration of the internal loop from line 19 to line 23 generates pairs of centers.
Lemma 14
Assume that the sample only contains elements of the first cluster. Then with probability at least the stated bound, the approximate center satisfies the inequality
(27) 
Proof: It follows from Lemma 7. Let . This is because
(28)  
(29)  
(30)  
(31) 
Thus, . Therefore, the failure probability is at most by Lemma 7. Let .
We assume that if the unbalanced condition of Statement 4 of Lemma 13 is satisfied, then inequality (27) holds. In other words, the inequality holds under the unbalanced condition since it is true with large probability by Lemma 14 and Statement 4 of Lemma 13.
Lemma 15
.
Proof: By Lemma 6 and inequality (27), we have
(32)  
(33)  
(34)  
(35) 
Note that the transition from (33) to (34) is by item 3 of Definition 4.
We now discuss two different cases, based on the size of the cluster.
Case 1: .
In this case, we let .
Lemma 16
.
Proof: Since , we have by the condition of Case 1. Let . We have . By Lemma 8, we have
Lemma 17
.
Proof: By the definition of , we have the following inequalities:
(36)  
(37)  
(38)  
(39)  
(40) 
Lemma 18
.
Proof:
(41)  
(42)  
(43)  
(44)  