Further heuristics for k-means: The merge-and-split heuristic and the (k,l)-means

Finding the optimal k-means clustering is NP-hard in general, and many heuristics have been designed to monotonically minimize the k-means objective. We first show how to extend Lloyd's batched relocation heuristic and Hartigan's single-point relocation heuristic to take into account empty-cluster and single-point-cluster events, respectively. Those events tend to occur increasingly often as k or d increases, or when performing several restarts. First, we show that those special events are a blessing because they allow us to partially re-seed some cluster centers while further minimizing the k-means objective function. Second, we describe a novel heuristic, merge-and-split k-means, that consists in merging two clusters and splitting this merged cluster again with two new centers, provided this improves the k-means objective. This novel heuristic can improve Hartigan's k-means when it has converged to a local minimum. We show empirically that this merge-and-split k-means improves over Hartigan's heuristic, which is the de facto method of choice. Finally, we propose the (k,l)-means objective that generalizes the k-means objective by associating the data points to their l closest cluster centers, and show how to either directly convert or iteratively relax the (k,l)-means into a k-means in order to reach better local minima.

1 Introduction

Clustering is the task that consists in grouping data into homogeneous clusters, with the goal that intra-cluster data should be more similar than inter-cluster data. Let $X=\{x_1,\ldots,x_n\}$ be a set of $n$ points (for the sake of clarity and without loss of generality, we do not consider weighted points) in $\mathbb{R}^d$. Let $C_1,\ldots,C_k$ be the $k$ non-empty clusters partitioning $X$, and denote by $\{c_1,\ldots,c_k\}$ the set of cluster centers, the cluster prototypes. k-Means is one of the oldest and yet most prevalent clustering techniques; it consists in minimizing:

$e_k(X;\{c_1,\ldots,c_k\}) \;=\; \sum_{i=1}^{n} \min_{j \in \{1,\ldots,k\}} \|x_i - c_j\|^2 \;=\; \sum_{i=1}^{n} \|x_i - c_{l(x_i)}\|^2$   (1)

where $\|\cdot\|^2$ denotes the squared Euclidean distance, and $l(x_i)$ the index (or label) of the center of $\{c_1,\ldots,c_k\}$ that is the closest nearest neighbor to $x_i$ (in case of ties, say, choose the minimum index). Finding an optimal clustering minimizing $e_k$ globally is NP-hard when $d>1$ and $k>1$ [21, 8], and polynomial when $d=1$ using dynamic programming [4] or when $k=1$ by setting $c_1$ to the center of mass. Note that there may be an exponential number of optimal k-means clusterings yielding the same optimal objective value: indeed, consider an equilateral triangle with $n=3$ and $k=2$; we get $3$ equivalent optimal clusterings related by rotational symmetries. Then make $m$ far away, separated copies so that $n=3m$ and consider $k=2m$: we end up with $3^m$ optimal k-means clusterings. Minimizing the k-means function of Eq. 1 is equivalent to minimizing the sum of intra-cluster squared distances or, equivalently, maximizing the sum of inter-cluster squared distances:

$e_k \;=\; \sum_{j=1}^{k} \frac{1}{2|C_j|} \sum_{x,x' \in C_j} \|x - x'\|^2$   (2)
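As a quick numerical sanity check of this identity (our own illustration, not part of the original experiments), the following Python snippet verifies on random data that the sum of squared distances of a cluster to its centroid equals the normalized sum of pairwise squared distances:

    import numpy as np

    rng = np.random.default_rng(0)
    C = rng.normal(size=(50, 3))   # a synthetic cluster of 50 points in R^3
    c = C.mean(axis=0)             # its centroid

    # Sum of squared distances to the centroid.
    sse_centroid = np.sum((C - c) ** 2)

    # Normalized sum of pairwise squared distances: (1 / (2|C|)) * sum_{x,x'} ||x - x'||^2.
    sse_pairwise = np.sum((C[:, None, :] - C[None, :, :]) ** 2) / (2 * len(C))

    print(np.isclose(sse_centroid, sse_pairwise))   # True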

Many heuristics have been proposed to overcome the NP-hardness of k-means. They can be classified into two main groups: the local search heuristics and the global heuristics that can be used to initialize the local heuristics. For example, the following four heuristics are classically implemented (see for example the R language for statistical computing, http://www.r-project.org/):

  • Forgy [10] (random): Draw uniformly at random k points from X to set the cluster prototypes inducing the partition. It can be proved that the best discrete k-means (with centers constrained to lie in X) yields a 2-approximation factor compared to the ordinary k-means, using a proof by contradiction based on the variance-bias decomposition $\frac{1}{|C|}\sum_{x\in C}\|x-c\|^2 = v(C) + \|c(C)-c\|^2$, where $v(C)$ denotes the variance of cluster $C$ and $c(C)$ its centroid. In fact, $e_k=\sum_{j=1}^{k} |C_j|\, v(C_j)$, the (weighted) sum of intra-cluster variances.

  • MacQueen [20] (online): From a given initialization of the k centers defining singleton clusters (say, $x_1,\ldots,x_k$ for the k clusters), we add one point at a time to the cluster that contains its closest center, update that cluster's centroid, and reiterate until convergence. This heuristic is also called the online or single-point k-means [11].

  • Lloyd [19] (batched): From a given initialization of the k cluster prototypes, (1) assign each point to its closest cluster center, (2) relocate the cluster centers to their cluster centroids, and reiterate those two steps until convergence.

  • Hartigan [12, 13] (single-point relocation): From a given initialization, find how to move a single point from one cluster to another so that the k-means cost of Eq. 1 strictly decreases, and reiterate those single-point relocations until convergence is reached. Note that a point may be assigned to a cluster whose center is not its closest center [24]. (A minimal sketch of the relocation test is given right after this list.)
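To make Hartigan's relocation test concrete, here is a minimal Python sketch (ours, with illustrative names) of the classical cost-change formula used by the single-point pivot; the move of a point x from cluster A to cluster B is accepted only when this change is negative:

    import numpy as np

    def relocation_gain(x, c_a, n_a, c_b, n_b):
        """Change of the k-means cost when moving point x from cluster A
        (centroid c_a, size n_a >= 2) to cluster B (centroid c_b, size n_b).
        A negative value means the move improves the objective."""
        removed = (n_a / (n_a - 1.0)) * np.sum((x - c_a) ** 2)   # cost decrease in A
        added = (n_b / (n_b + 1.0)) * np.sum((x - c_b) ** 2)     # cost increase in B
        return added - removed

    # Hartigan's heuristic scans the points, accepts any move with negative gain,
    # updates the two centroids incrementally, and stops when no improving move exists.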

In general, a k-means clustering technique partitions the data into k pairwise non-overlapping convex hulls: a Voronoi partition. A partition is said to be stable when a local improvement of the heuristic cannot improve its k-means score. Let $s_F$, $s_M$, $s_L$ and $s_H$ denote the maximum number of stable k-means partitions obtained by Forgy's, MacQueen's, Lloyd's and Hartigan's schemes, respectively.

Fact 1 (Voronoi partitions)

We have $s_H \le s_L$ and $s_L \le V(n,k) \le S(n,k)$, where $S(n,k)$ denotes the number of partitions of $n$ elements into $k$ non-empty subsets (that is, the Stirling numbers of the second kind) and $V(n,k)$ denotes the number of partitions with non-overlapping (and non-empty) convex hulls (that is, the number of k-Voronoi partitions).

Hartigan’s single-point relocation heuristic may improve Lloyd’s clustering but not the converse [23]. Note that Lloyd’s heuristic may require an exponential number of iterations to converge [25]. It is an open question [24] to bound the maximum number of Hartigan’s iterations.

On one hand, for those local heuristics performing pivots on Voronoi partitions, initialization (i.e., the initial Voronoi partition) is crucial [7] to obtain a good clustering, and several restarts are performed in practice to choose the best clustering. In practice, Forgy's initialization has been replaced by k-means++ [2], which provides an expected $O(\log k)$-competitive initialization. However, it was shown that there exist point sets (even in 2D) for which the probability to get such a good initialization is exponentially low [6] (thus requiring exponentially many initialization restarts to reach a good Voronoi partition with high probability).

On the other hand, the global k-means [18, 26] builds the clustering incrementally by adding one seed at a time: given a current clustering, it chooses the point of X that, taken as the new seed, minimizes the resulting k-means objective function. Thus initialization is limited to choosing the first seed, and all points can be considered as this first starting point. However, global k-means requires more computation.
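For illustration only (our sketch of the incremental seeding idea, not the reference implementation of [18, 26]; the full method also runs Lloyd iterations after each added seed), the greedy growth of the center set can be written as:

    import numpy as np

    def kmeans_cost(X, centers):
        """Sum of squared distances from each point to its nearest center."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).sum()

    def global_kmeans_seeds(X, k):
        """Greedy incremental seeding in the spirit of global k-means:
        each new seed is the data point that most decreases the k-means cost."""
        centers = [X.mean(axis=0)]                  # the optimal 1-means center
        for _ in range(1, k):
            costs = [kmeans_cost(X, np.vstack(centers + [x])) for x in X]
            centers.append(X[int(np.argmin(costs))])
        return np.vstack(centers)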

In this paper, we do not address the problem of choosing the most appropriate number k of clusters: this model selection problem has been investigated in [22, 17]. We also consider the squared Euclidean distance, although the results apply to any other Bregman divergence [3, 23].

The paper is organized as follows: We investigate the blessing of empty-cluster exceptions in Lloyd’s heuristic in Section 2, and of single-point-cluster exceptions in Hartigan’s scheme in Section 3. In Section 4, we describe our novel merge-and-split-cluster k-means heuristic and report on its performance with respect to Hartigan’s heuristic. In Section 5, we present a generalization of the k-means objective function where each point is associated to its l closest cluster centers: the (k,l)-means clustering. We show how to directly convert or iteratively relax a sequence of (k,l)-means into a k-means, and compare experimentally those solutions with a direct k-means. Finally, Section 6 wraps up the contributions and discusses further perspectives.

2 The blessing of empty-cluster exceptions in Lloyd’s batched k-means

Lloyd’s k-means [19] starts by initializing the k seeds of the cluster centers, and then iterates by assigning the data to their closest cluster center with respect to the squared Euclidean distance, and relocating the cluster centers to their centroids. Those batched assignment/relocation iterations are repeated until convergence is reached: the k-means cost monotonically decreases, with guaranteed convergence after a finite number of iterations [15]. The complexity of Lloyd’s k-means is O(nkdI), where I denotes the number of iterations. It has been proved that Lloyd’s k-means performs a maximum number of iterations that is exponential [25], or polynomial in n, k, d and the spread (the ratio of the maximum point inter-distance over the minimum point inter-distance) of the point set [16]. Some 1D point sets are reported to require a number of iterations linear in n, see [11]. We first report a lower bound on the number of Lloyd’s stable optima:

Fact 2 (Exponentially many Lloyd’s k-means minima)

Lloyd’s k-means may have exponentially many (in n) stable local minima.

The proof follows from the gadget illustrated in Figure 1.


Figure 1: Top: Lloyd’s k-means may have an exponential number of stable optima: use locally the regular k-gon gadget that admits the global solutions (a) and (b). Lloyd’s k-means can be trapped into a local minimum: the cost in (c) and (d) is higher than the global minimum in (a) and (b). Centroids are depicted by large colored disks. Bottom: Lloyd’s k-means local optimization technique may produce empty-cluster exceptions. Consider a small set of points and k clusters with a “random” Forgy initialization: the initial k-means cost (a) decreases after the first iteration (b) and (c), and at the second iteration we get an empty-cluster exception in (d): the green cluster.
Data: X: a data set of size n; k: number of clusters
Result: A clustering partition where each point belongs to exactly one cluster (hard membership)
Initialization: Get k cluster centers c_1, ..., c_k by choosing k cluster prototypes at random from X (e.g., Forgy or k-means++); t ← 0;
while not converged do
       Increment t;
       (a) Assign each point x to its closest cluster C_{l(x)};
       /* l(x) denotes the index of the nearest center of x */
       (b) Relocate each cluster prototype by taking the center of mass of its assigned points;
       for each cluster C_j do
              if C_j is non-empty then
                     Centroid relocation: c_j ← (1/|C_j|) Σ_{x ∈ C_j} x;
              else
                     Mark cluster j as empty;
              end if
       end for
       (c) New seeding;
       /* Empty-cluster exception (may have occurred for several clusters in this round) */
       Choose new seeds for the empty clusters using k-means++ or global k-means, etc.;
       Check for convergence: if no center c_j is different from the previous iteration then
              break;
       end if
end while
Algorithm 1 Extended Lloyd’s k-means clustering: batched updates handling empty-cluster exceptions.
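A compact Python rendering of Algorithm 1 follows (our illustrative sketch, not the authors' reference code), using Forgy re-seeding to fill empty clusters as one of the options listed above:

    import numpy as np

    def extended_lloyd(X, k, max_iter=100, seed=0):
        """Lloyd's batched k-means with partial re-seeding of empty clusters."""
        rng = np.random.default_rng(seed)
        n = len(X)
        centers = X[rng.choice(n, size=k, replace=False)].copy()   # Forgy initialization
        labels = np.zeros(n, dtype=int)
        for _ in range(max_iter):
            # (a) Assignment: index of the nearest center for each point.
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            new_centers = centers.copy()
            for j in range(k):
                members = X[labels == j]
                if len(members) > 0:
                    new_centers[j] = members.mean(axis=0)           # (b) relocation
                else:
                    # (c) Empty-cluster exception: partial re-seeding (Forgy here;
                    # k-means++ or global k-means could be used instead).
                    new_centers[j] = X[rng.integers(n)]
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return centers, labels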

Hartigan’s heuristic [12, 13] proceeds by relocating a single point between two clusters provided that the k-means cost function decreases. It can thus further decrease the k-means score when Lloyd’s batched algorithm is stuck in a local minimum (but not the converse). Recently, it was suggested [23] to replace Lloyd’s heuristic by Hartigan’s heuristic on the basis that Hartigan’s local minima are a subset of Lloyd’s optima (Theorem 2.2 of [24]). We argue that this is true only when no Empty-Cluster Exceptions (ECEs) are met by Lloyd’s iterations. Figure 1 illustrates a toy data set where Lloyd’s k-means meets such an empty-cluster exception. In general, when points are assigned to their closest current centroids, some clusters may end up empty.

Fact 3 (empty-cluster exceptions)

Lloyd’s batched k-means may produce several empty-cluster exceptions in a single round.

The proof follows from Figure 1 by creating far apart (non-interacting) copies of the gadget and choosing k accordingly.

However, those empty-cluster exceptions are a blessing because we may add new seeds that will further decrease significantly the cost of the k-means: this is a partial re-seeding. Thus the extended Lloyd’s heuristic is: (a) assignment, (b) relocation, and (c) partial re-seeding to keep exactly k non-empty clusters for the next stage. We may use various heuristics for the partial re-seeding, like the incremental global k-means [26] starting from the current non-empty clusters up to k clusters, etc.

To evaluate the frequency at which those empty-cluster exceptions occur and their number, let us take the Iris data set from the UCI repository [1]: it consists of 150 samples with 4 features (classified into 3 labels) that we first renormalize so that the coordinates on each dimension have zero mean and unit standard deviation. Let us run Lloyd’s k-means with (Forgy’s) random seed initialization (with a maximum number of iterations) for increasing values of k. We count the number of empty-cluster exceptions and report their frequency in the graph of Figure 2. We observe that the larger k, the more frequent the exceptions. This phenomenon was also noticed in [5]. Furthermore, the frequency increases with the dimension too [5]. However, note that this is a tendency and the number of empty-cluster exceptions varies a lot from one data set to another (given an initialization heuristic).

Figure 2: Left: Frequency of empty-cluster exceptions (ECEs) for Lloyd’s k-means using Forgy’s initialization on the normalized Iris data set, computed by averaging over a million runs. Right: The number of ECEs depends on the initialization method: Forgy’s seeding produces empty clusters noticeably more often than k-means++ initialization.

Let us now run the k-means and report the empirical frequency of having several simultaneous empty-cluster exceptions. (Note that our replicated toy data sets of Figure 1 may yield arbitrarily many simultaneous ECEs.) The empty-cluster frequency depends on the initialization scheme: it is higher when using Forgy’s heuristic and lower when using k-means++ or global k-means. Figure 2 (right) demonstrates this observation empirically. As noticed in [5], the number of empty-cluster exceptions rises with k and d, and the authors of [5] avoided this problem by setting a minimum size constraint on the clusters. They surprisingly showed empirically that k-means with constraints gave better clusterings than k-means without constraints in practice!

Finally, let us compare the best minimum k-means score when performing Lloyd’s heuristic (stopping when we meet an empty-cluster exception) with the extended Lloyd’s heuristic that partially re-seeds the current clustering when the algorithm meets empty-cluster exceptions. Partial re-seeding can be done in many ways by applying the usual seeding methods (Forgy, k-means++ or global k-means) starting from the current non-empty cluster centers. Table 1 presents the results of a proof of concept using Forgy’s re-seeding: we observe that partial re-seeding at ECEs allows us to reach (slightly) better local minima (see Table 1).

Table 1: Comparing Lloyd’s k-means heuristics with or without partial re-seeding (Forgy) when meeting empty-cluster exceptions on the Iris data set, with a million restarts using the same Forgy’s initialization at each round. Observe that some better local minima are reached when using partial re-seeding at empty-cluster exceptions.

3 The blessing of single-point cluster exceptions in Hartigan’s heuristic

Hartigan’s heuristic [24] considers relocating a single point provided that it decreases the k-means objective function. In [23], a synthetic noisy data set is built so that, with probability tending to 1 (as the dimension tends to infinity), any initial random partition is stable with respect to Lloyd’s k-means while Hartigan’s heuristic converges to the correct solution. We recall that Hartigan’s local minima are a subset of Lloyd’s minima [24], provided that Lloyd’s heuristic did not encounter empty-cluster exceptions. Note that the point of a single-point cluster (a cluster with zero variance) cannot be relocated to another cluster since doing so necessarily increases the k-means energy (the sum of intra-cluster variances):

$\sum_{x' \in C_j \cup \{x\}} \|x' - c(C_j \cup \{x\})\|^2 \;-\; \sum_{x' \in C_j} \|x' - c_j\|^2 \;=\; \frac{|C_j|}{|C_j|+1}\,\|x - c_j\|^2 \;>\; 0$   (3)

Table 2 provides statistics on Hartigan’s k-means score and on the number of single-point-cluster exceptions (SPCEs) met when performing Hartigan’s heuristic.

Table 2: Some statistics on Hartigan’s heuristic on the Iris data set: min/avg/max k-means score and min/avg/max number of single-point-cluster exceptions (SPCEs).

Consider the case of single-point-cluster exceptions (SPCEs) in Hartigan’s scheme, where we decide to merge the single-point cluster with another cluster and redraw another center from X (which can thus significantly decrease the variance of the changed cluster). We accept this relocation iff this merge-and-re-seed operation decreases the k-means loss. For example, on the Iris data set, the classical Hartigan’s best clustering has a higher k-means score than the heuristic with partial re-seeding (which associates the single-point clusters to their closest other clusters). We keep the experiments short here since the next section improves Hartigan’s heuristic with detailed experiments.
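The merge-and-re-seed treatment of a single-point cluster described above can be sketched as follows (our illustration; the random re-draw from X and the number of trials are assumptions, the acceptance test is the one stated in the text):

    import numpy as np

    def kmeans_cost(X, centers):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).sum()

    def reseed_singleton(X, centers, j, trials=10, seed=0):
        """Cluster j is a single-point cluster: its lone point is merged into its
        closest other cluster (implicitly, by nearest-center re-assignment) and
        center j is re-drawn from X; the best re-draw is accepted iff it
        decreases the k-means loss."""
        rng = np.random.default_rng(seed)
        best_centers, best_cost = centers, kmeans_cost(X, centers)
        for _ in range(trials):
            candidate = centers.copy()
            candidate[j] = X[rng.integers(len(X))]    # partial re-seeding
            cost = kmeans_cost(X, candidate)
            if cost < best_cost:
                best_centers, best_cost = candidate, cost
        return best_centers, best_cost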

4 A novel heuristic: The merge-and-split-cluster k-means

This novel heuristic proceeds by considering pairs of clusters $(C_i, C_j)$ with corresponding centers $c_i$ and $c_j$. The basic local search primitive (pivot) consists in computing the best k-means score difference obtained by merging $C_i$ and $C_j$ and splitting the merged cluster $C_i \cup C_j$ again with two new centers $c_i'$ and $c_j'$. Let $C_i'$ and $C_j'$ denote the Voronoi partition of $C_i \cup C_j$ induced by $c_i'$ and $c_j'$. Since the clusters other than $C_i$ and $C_j$ are untouched, the difference of the k-means score writes as:

$\Delta_{i,j} \;=\; \big(e(C_i', c_i') + e(C_j', c_j')\big) \;-\; \big(e(C_i, c_i) + e(C_j, c_j)\big)$   (4)

where $e(C, c) = \sum_{x \in C} \|x - c\|^2$ denotes the k-means objective function of cluster C with respect to center c (namely, the cluster variance of C up to the factor |C|). There are several ways (randomized or deterministic) to implement the merge-and-split operation: for example, the two new centers can be found by computing:

  • an exact 2-means: A brute-force method computes all the hyperplanes passing through d (extreme) points (we do not need to compute explicitly the equation of a hyperplane since clockwise/counterclockwise orientation predicates are used instead; those predicates rely on computing the sign of a matrix determinant) and the induced sum of variances of the below/above clusters; using topological sweep [15], the time complexity can be further reduced. Note that for k=2 and unfixed dimension d, the 2-means is NP-hard [8]. We can also use coresets to get a $(1+\epsilon)$-approximation of a 2-means [9] in linear time.

  • a discrete 2-means: We choose the two best centers among the points of $C_i \cup C_j$ (naively implemented by trying all pairs of points). This yields a 2-approximation of the 2-means.

  • a 2-means++ heuristic: We pick the first new center at random, then pick the second one randomly according to the normalized distribution of the squared distances of the points in $C_i \cup C_j$ to the first center, see k-means++ [2]. We repeat this initialization a given number of rounds and keep the best one.

When $\Delta_{i,j} < 0$, we accept replacing $c_i$ and $c_j$ by $c_i'$ and $c_j'$, respectively. Otherwise, we consider another pair of clusters, and we stop iterating when no pair produces a lower k-means score. This heuristic can be classified as a macro kind of Hartigan-type heuristic that is not based on local Voronoi assignment. Indeed, Hartigan’s heuristic moves a point from one cluster to another cluster and updates the two centroids correspondingly. Our heuristic also changes two clusters but can accept further improvements with respect to a 2-means operation on their union. Thus, at the last stage of a Hartigan’s heuristic, we can perform this merge-and-split heuristic to further improve the clustering. (This heuristic can further be generalized by simultaneously merging and splitting more than two clusters.)
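A minimal sketch of the merge-and-split pivot follows (our illustration, using the 2-means++ splitting option from the list above; helper names and the number of seeding rounds are assumptions):

    import numpy as np

    def cluster_cost(P, c):
        """e(P, c): sum of squared distances of the points P to the center c."""
        return float(((P - c) ** 2).sum())

    def split_2means_pp(P, rng, rounds=5):
        """Split the merged point set P into two clusters with a 2-means++ style
        seeding: pick one center uniformly, the second proportionally to the
        squared distances, assign, take centroids, and keep the best round."""
        best = None
        for _ in range(rounds):
            c1 = P[rng.integers(len(P))]
            d2 = ((P - c1) ** 2).sum(axis=1)
            if d2.sum() == 0:
                break                                  # degenerate: all points coincide
            c2 = P[rng.choice(len(P), p=d2 / d2.sum())]
            closer_to_c2 = ((P - c2) ** 2).sum(axis=1) < d2
            A, B = P[~closer_to_c2], P[closer_to_c2]
            ca, cb = A.mean(axis=0), B.mean(axis=0)
            cost = cluster_cost(A, ca) + cluster_cost(B, cb)
            if best is None or cost < best[0]:
                best = (cost, ca, cb)
        return best

    def merge_and_split_gain(Ci, Cj, ci, cj, rng=np.random.default_rng(0)):
        """Delta of Eq. 4: a negative value means the pivot improves the score."""
        old = cluster_cost(Ci, ci) + cluster_cost(Cj, cj)
        res = split_2means_pp(np.vstack([Ci, Cj]), rng)
        return (res[0] - old, res[1], res[2]) if res is not None else (0.0, ci, cj)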

Theorem 1

The merge-and-split k-means heuristic monotonically decreases the objective function and converges after a finite number of iterations.

Since each pivot step between Voronoi partitions strictly decreases the k-means score by $|\Delta_{i,j}| > 0$ and the score is lower bounded (by zero), it follows that the merge-and-split k-means converges after a finite number of iterations. We compare our heuristic with both Hartigan’s ordinary and discrete variants; the discrete variant consists in moving a point to another cluster iff the two recomputed medoids of the selected clusters yield a better k-means score. Heuristic performances are compared with the same initialization (Forgy’s or k-means++ seeding) and by averaging over a number of rounds: observe in Table 3 that our heuristic (MSC for short) always outperforms the discrete Hartigan’s method, not surprisingly. Although the number of basic primitives (#ops) is lower for MSC, each such operation is more costly. Thus MSC k-means is overall more time consuming but reaches better local optima. Note that the discrete 2-means medoid splitting procedure is very well suited for the k-modes algorithm [14], a k-means extension working on categorical data sets.

Table 3: Average performance over many trials of the merge-and-split k-means heuristic compared to Hartigan’s and discrete Hartigan’s heuristics. Top: common Forgy’s initialization, with the MSC k-means implemented using an optimal discrete 2-means. Bottom: common k-means++ initialization, with the MSC k-means implemented using a 2-means++ seeding. We observe experimentally that the MSC heuristic always yields better performance than Hartigan’s discrete single-point relocation heuristic, and is often significantly better than Hartigan’s heuristic. Note that k-means++ seeding performs better than Forgy’s seeding.

5 Clustering with the (k,l)-means objective function

Let us generalize the k-means objective function as follows: for each datum $x_i$, we associate its l nearest cluster centers (with $l_1(x_i), \ldots, l_l(x_i)$ denoting their indexes), and we ask to minimize the following (k,l)-means objective function (with $1 \le l \le k$):

$e_{k,l}(X;\{c_1,\ldots,c_k\}) \;=\; \sum_{i=1}^{n} \sum_{j=1}^{l} \|x_i - c_{l_j(x_i)}\|^2$   (5)

When l=1, this is exactly the k-means objective function of Eq. 1. Otherwise the clusters overlap and each point belongs to exactly l clusters. Note that when l=k, all cluster centers coincide with the centroid (or barycenter), the center of mass. We observe that:

Fact 4

$e_{k,l}(X) \ge e_k(X)$, with equality reached when l=1.

Both Lloyd’s and Hartigan’s heuristics can be adapted straightforwardly to this setting.

Theorem 2

Lloyd’s (k,l)-means monotonically decreases the (k,l)-means objective function and converges after a finite number of steps.

Proof: Let $A_t$ and $R_t$ denote the cost at round t after the assignment (A) and relocation (R) stages, respectively. Let $I_0$ be the initial cost (say, from Forgy’s initialization of the k centers), and set $R_0 = I_0$. For $t \ge 1$, we have: in the assignment stage, each point is assigned to its l nearest neighbor centers, therefore $A_t \le R_{t-1}$. In the relocation stage, each cluster center is updated by taking the centroid of the points assigned to it, thus $R_t \le A_t$. When $R_t = A_t$ (and thus the assignment no longer changes), we stop the batched iterations.

Figure 3 illustrates a (k,2)-means on a toy data set.

Since the cost is non-negative and the iterations strictly decrease the score function, the algorithm converges. Moreover, since the number of different cluster sets induced by the (k,l)-means is upper bounded by $\binom{k}{l}^n$ (each point chooses a subset of l centers among the k centers), and since cluster sets cannot be repeated, it follows that the (k,l)-means converges after a finite number of iterations. The bound can further be improved by considering order-l weighted Voronoi diagrams, similarly to [15]. Note that the basic Lloyd’s (k,l)-means may also produce empty-cluster exceptions, although those become rarer as l increases (checked experimentally).
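One Lloyd-type (k,l)-means iteration can be sketched in Python as follows (our illustration of the assignment/relocation scheme used in Theorem 2): each point is assigned to its l nearest centers, and each center is relocated to the centroid of all points currently assigned to it.

    import numpy as np

    def lloyd_kl_step(X, centers, l):
        """One batched (k,l)-means iteration: l-nearest assignment, then relocation."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (n, k) squared distances
        assign = np.argsort(d2, axis=1)[:, :l]      # indexes of the l nearest centers per point
        new_centers = centers.copy()
        for j in range(len(centers)):
            members = X[(assign == j).any(axis=1)]  # points having center j among their l nearest
            if len(members) > 0:
                new_centers[j] = members.mean(axis=0)
        return new_centers, assign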

Figure 3: (k,2)-means: each data point is associated with its two closest cluster centers. After converging, we relax the (k,2)-means solution by keeping only the closest neighbor among the current centroids and run the classic k-means. Alternatively, we can iteratively relax the (k,l)-means into a (k,l-1)-means until we get a (k,1)-means, that is, an ordinary k-means.

Although the (k,l)-means is interesting in its own right (see the discussion in Section 6), it can also be used for the k-means. Indeed, instead of running a local search k-means heuristic that may be trapped too soon into a “bad” local minimum, we prefer to run a (k,l)-means for a prescribed l > 1. We can then convert the (k,l)-means into a k-means by assigning to each point its closest center (among the l centers assigned at the end of the (k,l)-means), then compute the centroids and launch a regular Lloyd’s k-means to finalize: let (k,l→1)-means denote this conversion. For example, the converted (k,2→1)-means beats the regular k-means a majority of the time for k ≥ 5 using Forgy’s initialization on Iris (see Table 4, left). Table 4 shows experimentally that the converted (k,l→1)-means beats on average the regular k-means (on the Iris data set), and this phenomenon, not surprisingly, increases with k. However, the best minimum score is often obtained by the classical k-means. This suggests that the conversion performs better when the number of restarts is limited. In fact, the (k,l)-means tends to smooth the k-means optimization landscape and produce fewer local minima, but it also smooths the best minima.

Left: regular k-means versus converted (k,2→1)-means.

k    win (%)   k-means min   k-means avg   (k,2→1)-means min   (k,2→1)-means avg
3    20.8      78.94         92.39         78.94               78.94
4    24.29     57.31         63.15         57.31               70.33
5    57.76     46.53         52.88         49.74               51.10
6    80.55     38.93         45.60         38.93               41.63
7    76.67     34.18         40.00         34.29               36.85
8    80.36     29.87         36.05         29.87               32.52
9    78.85     27.76         32.91         27.91               30.15
10   79.88     25.81         30.24         25.97               28.02

Right: regular k-means versus cascaded (k,l⇒1)-means.

k    l   win (%)   k-means min   k-means avg   (k,l⇒1)-means min   (k,l⇒1)-means avg
5    2   58.3      46.53         52.72         49.74               51.24
5    4   62.4      46.53         52.55         49.74               49.74
8    2   80.8      29.87         36.40         29.87               32.54
8    3   61.1      29.87         36.19         32.76               34.04
8    6   55.5      29.88         36.189        32.75               35.26
10   2   78.8      25.81         30.61         25.97               28.23
10   3   82.5      25.95         30.23         26.47               27.76
10   5   64.7      25.90         30.32         26.99               28.61
Table 4: Comparing the regular k-means with the converted (k,2→1)-means (left) and with the cascaded (k,l⇒1)-means (right) on the Iris data set. The percentage of times the converted heuristic outperforms the regular k-means is denoted by win.

We can also perform a cascading conversion of the (k,l)-means to a k-means: once a local minimum is reached for the (k,l)-means, we initialize a (k,l-1)-means by dropping for each point its farthest assigned cluster, perform a Lloyd’s (k,l-1)-means, and we reiterate this scheme until we get a (k,1)-means: an ordinary k-means. Let (k,l⇒1)-means denote this scheme. Table 4 (right) presents the performance comparison of a regular Lloyd’s k-means with a Lloyd’s (k,l⇒1)-means for various values of l, with the initialization of both algorithms performed by the same seeding for fair comparisons.
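The direct (k,l→1) conversion can be sketched as follows (our illustration; `assign` is the (n, l) array of center indexes produced by a converged (k,l)-means, as in the step sketched earlier, and the returned centers are meant to initialize a regular Lloyd's k-means). The cascading (k,l⇒1) variant repeats the same idea, dropping one farthest assigned center at a time and re-running Lloyd's (k,l-1)-means in between.

    import numpy as np

    def relax_kl_to_k_init(X, centers, assign):
        """Convert a converged (k,l)-means into a k-means initialization:
        each point keeps only the closest of its l assigned centers, and the
        centers are relocated to the centroids of these restricted memberships."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)      # (n, k)
        d2_assigned = np.take_along_axis(d2, assign, axis=1)               # (n, l)
        keep = d2_assigned.argmin(axis=1)[:, None]                         # position of the closest assigned center
        closest = np.take_along_axis(assign, keep, axis=1).ravel()         # its index among the k centers
        new_centers = centers.copy()
        for j in range(len(centers)):
            members = X[closest == j]
            if len(members) > 0:
                new_centers[j] = members.mean(axis=0)
        return new_centers          # then launch a regular Lloyd's k-means from these centers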

6 Discussion

We have extended the classical Lloyd’s and Hartigan’s heuristics with partial re-seeding and proposed new local heuristics for the k-means. We summarize our contributions as follows: First, we showed the blessing of empty-cluster events in Lloyd’s heuristic and of single-point-cluster events in Hartigan’s heuristic. These events happen increasingly often when the number of clusters or the dimension increases, or when running those heuristics for a given number of trials to choose the best solution. Second, we proposed a novel merge-and-split-cluster k-means heuristic that improves over Hartigan’s heuristic, which is currently the de facto method of choice [23]. We showed experimentally that this method brings better k-means results at the expense of computational cost. Third, we generalized the k-means objective function to the (k,l)-means objective function and showed how to directly convert or iteratively relax a (k,l)-means heuristic to a k-means, potentially avoiding being trapped in too many local optima. The (k,l)-means is yet another exploratory clustering technique for browsing the space of hard clustering partitions. For example, when the k-means is trapped, we may consider a (k,l)-means to get out of the local minimum and then convert the (k,l)-means back to a k-means to explore a new (local) minimum.

References

  • [1] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
  • [2] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In SODA, pages 1027–1035, 2007.
  • [3] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705–1749, 2005.
  • [4] R. Bellman. A note on cluster analysis and dynamic programming. Mathematical Biosciences, 18(3-4):311–312, 1973.
  • [5] K. Bennett, P. S. Bradley, and A. Demiriz. Constrained k-means clustering. MSR-TR-2000-65, 2000.
  • [6] A. Bhattacharya, R. Jaiswal, and N. Ailon. A tight lower bound instance for k-means++ in constant dimension. In Theory and Applications of Models of Computation, LNCS 8402, pages 7–22, 2014.
  • [7] S. Bubeck, M. Meila, and U. von Luxburg. How the initialization affects the stability of the k-means algorithm. ESAIM: Probability and Statistics, 16:436–452, 2012.
  • [8] S. Dasgupta. The hardness of k-means clustering. CS2007-0890, University of California, USA, 2007.
  • [9] D. Feldman, M. Monemizadeh, and C. Sohler. A PTAS for k-means clustering based on weak coresets. In SoCG, pages 11–18, 2007.
  • [10] E. W. Forgy. Cluster analysis of multivariate data: efficiency vs interpretability of classifications. Biometrics, 1965.
  • [11] S. Har-Peled and B. Sadri. How fast is the k-means method? In SODA, pages 877–885. SIAM, 2005.
  • [12] J. A. Hartigan. Clustering Algorithms. John Wiley & Sons, Inc., New York, NY, USA, 1975.
  • [13] J. A. Hartigan and M. A. Wong. Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society, Series C, 28(1):100–108, 1979.
  • [14] Z. Huang. Extensions to the k-means algorithm for clustering large data sets with categorical values. Data Mining and Knowledge Discovery, 2(3):283–304, 1998.
  • [15] M. Inaba, N. Katoh, and H. Imai. Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering. In SoCG, pages 332–339, 1994.
  • [16] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. Piatko, R. Silverman, and A. Y. Wu. The analysis of a simple k-means clustering algorithm. In SoCG, pages 100–109, 2000.
  • [17] B. Kulis and M. I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics. In ICML, 2012.
  • [18] A. Likas, N. Vlassis, and J. J. Verbeek. The global k-means clustering algorithm. Pattern Recognition, 36(2):451–461, 2003.
  • [19] S. P. Lloyd. Least squares quantization in PCM. Technical report, Bell Laboratories, 1957. Reprinted in IEEE Transactions on Information Theory, March 1982.
  • [20] J. B. MacQueen. Some methods of classification and analysis of multivariate observations. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1967.
  • [21] M. Mahajan, P. Nimbhorkar, and K. Varadarajan. The planar k-means problem is NP-hard. In WALCOM: Algorithms and Computation, pages 274–285. Springer, 2009.
  • [22] D. Pelleg and A. W. Moore. X-means: Extending k-means with efficient estimation of the number of clusters. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 727–734, 2000.
  • [23] N. Slonim, E. Aharoni, and K. Crammer. Hartigan's k-means versus Lloyd's k-means: Is it time for a change? In IJCAI, pages 1677–1684, 2013.
  • [24] M. Telgarsky and A. Vattani. Hartigan's method: k-means clustering without Voronoi. In International Conference on Artificial Intelligence and Statistics, pages 820–827, 2010.
  • [25] A. Vattani. k-means requires exponentially many iterations even in the plane. Discrete & Computational Geometry, 45(4):596–616, 2011.
  • [26] J. Xie, S. Jiang, W. Xie, and X. Gao. An efficient global k-means clustering algorithm. Journal of Computers, 6(2), 2011.