Recovery guarantees for exemplar-based clustering

09/12/2013 ∙ by Abhinav Nellore, et al. ∙ The University of Texas at Austin

For a certain class of distributions, we prove that the linear programming relaxation of k-medoids clustering---a variant of k-means clustering where means are replaced by exemplars from within the dataset---distinguishes points drawn from nonoverlapping balls with high probability once the number of points drawn and the separation distance between any two balls are sufficiently large. Our results hold in the nontrivial regime where the separation distance is small enough that points drawn from different balls may be closer to each other than points drawn from the same ball; in this case, clustering by thresholding pairwise distances between points can fail. We also exhibit numerical evidence of high-probability recovery in a substantially more permissive regime.

Code repository "medoids": simulations and figures for "Recovery guarantees for exemplar-based clustering," by Abhinav Nellore and Rachel Ward.
1 Introduction

Consider a collection of points in Euclidean space that forms roughly isotropic clusters. The centroid

of a given cluster is found by averaging the position vectors of its points, while the

medoid, or exemplar, is the point from within the collection that best represents the cluster. To distinguish clusters, it is popular to pursue the k-means objective: partition the points into k clusters such that the average squared distance between a point and its cluster centroid is minimized. This problem is in general NP-hard [1, 2]. Further, it has no obvious convex relaxation that could recover the global optimum while admitting efficient solution; practical algorithms like Lloyd's [3] and Hartigan-Wong [4] typically converge only to local optima. k-medoids clustering, sometimes called k-medians clustering in the literature, is also in general NP-hard [5, 6], but it does admit a linear programming (LP) relaxation. The objective is to select k points as medoids such that the average squared distance (or other measure of dissimilarity) between a point and its medoid is minimized. This paper obtains guarantees for exact recovery of the unique globally optimal solution to the k-medoids integer program by its LP relaxation. Commonly used algorithms that may only converge to local optima include partitioning around medoids (PAM) [7, 8] and affinity propagation [9, 10].
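The distinction between the two representatives can be made concrete in a few lines. The sketch below is illustrative only (the helper names `centroid` and `medoid` are ours): the centroid is a mean position vector, while the medoid is an actual data point minimizing total squared distance to its cluster.

```python
import numpy as np

def centroid(points):
    """Mean position vector of a cluster (the k-means representative)."""
    return points.mean(axis=0)

def medoid(points):
    """Point of the cluster minimizing the total squared distance to the
    other points (the k-medoids representative); always a data point."""
    diffs = points[:, None, :] - points[None, :, :]
    d2 = (diffs ** 2).sum(axis=-1)       # pairwise squared distances
    return points[d2.sum(axis=1).argmin()]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(centroid(pts))  # pulled toward the outlier at (10, 10)
print(medoid(pts))    # remains an actual point of the collection
```

Note how the outlier drags the centroid far from every data point, while the medoid stays inside the bulk of the cluster; this robustness is one practical motivation for exemplar-based clustering.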

Figure 1: 18 images (3 faces × 6 facial expressions) from the Yale Face Database were clustered using affinity propagation and Lloyd's algorithm. The medoids identified by affinity propagation (framed) are representative faces from the clusters, while the centroids found by Lloyd's algorithm are averaged faces.

To illustrate the difference between a centroid and a medoid, let us put faces to points. The Yale Face Database [11] has grayscale images of several faces, each captured wearing a range of expressions: normal, happy, sad, sleepy, surprised, and winking. Suppose every point encodes an image from this database as the vector of its pixel values. Intuitively, facial expressions represent perturbations of a background composed of distinguishing image features; it is thus natural to expect that the faces cluster by individual rather than expression. Both Lloyd's algorithm and affinity propagation are shown to recover this partitioning in Figure 1, which also displays centroids and medoids of clusters. (Randomly initialized repetitions of Lloyd's algorithm were run; the clustering that gave the smallest objective function value is shown. The package APCluster [12] was used to perform affinity propagation.) The centroids are averaged faces, but the medoids are actual faces from the dataset. Indeed, applications of k-medoids clustering are numerous and diverse: besides finding representative faces from a gallery of images [13], it can group tumor samples by gene expression levels [14] and pinpoint the influencers in a social network [15].

1.1 Setup and principal result

We formulate k-medoids clustering on a complete weighted undirected graph G with n vertices, although recovery guarantees are proved for the case where vertices correspond to points in Euclidean space and each edge weight is the squared distance between the points it connects. (We use squared rather than unsquared distances only because we were able to derive stronger theoretical guarantees using squared distances.) Let characters in boldface ("z") refer to matrices/vectors and italicized counterparts with subscripts ("z_pq") refer to matrix/vector elements. Denote by d_pq the nonnegative weight of the edge connecting vertices p and q, and note that d_pp = 0 since G is simple. k-medoids clustering (KMed) selects a set of k vertices, the medoids, and finds the minimum-weight bipartite subgraph of G between the medoids and the remaining vertices such that every vertex outside the medoid set has unit degree. Expressed as a binary integer program, KMed is

minimize_z   Σ_{p∈[n]} Σ_{q∈[n]} d_pq z_pq                    (1)
s.t.         Σ_{q∈[n]} z_pq = 1   for all p ∈ [n]             (2)
             z_pq ≤ z_qq   for all p, q ∈ [n]                 (3)
             Σ_{q∈[n]} z_qq = k                               (4)
             z_pq ∈ {0, 1}   for all p, q ∈ [n]               (5)

Above, [n] means the set {1, …, n}. When z_pq = 1, vertex q serves as vertex p's medoid; that is, among all edges between medoids and p, the edge between p and q has the smallest weight. Otherwise, z_pq = 0. The diagonal variable z_qq indicates whether q is itself a medoid. A cluster is identified as a maximal set of vertices that share a given medoid.

Like many clustering programs, KMed is in general NP-hard and thus computationally intractable for large n. Replacing the binary constraints (5) with nonnegativity constraints z_pq ≥ 0, we obtain the linear program relaxation LinKMed:

minimize_z   Σ_{p∈[n]} Σ_{q∈[n]} d_pq z_pq                    (6)
s.t.         Σ_{q∈[n]} z_pq = 1   for all p ∈ [n]             (7)
             z_pq ≤ z_qq   for all p, q ∈ [n]                 (8)
             Σ_{q∈[n]} z_qq = k                               (9)
             z_pq ≥ 0   for all p, q ∈ [n]                    (10)

For a vector (point) x, let ‖x‖ denote its ℓ2 norm. It is known that for any configuration of points in one-dimensional Euclidean space, the LP relaxation of k-medoids clustering invariably recovers clusters when unsquared distances are used to measure dissimilarities between points [16]. Therefore, we confine our attention to dimensions d ≥ 2. The following is our main recovery result; its proof is obtained in the third section.
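As an illustration of LinKMed, the sketch below builds the relaxation for a small point set using SciPy's `linprog`. This is our own sketch, not the authors' implementation: the variable layout (a flattened n × n matrix z) and constraint blocks follow the formulation (6)-(10), and for a well-separated instance the solution comes out integral, previewing the recovery phenomenon studied in this paper.

```python
import numpy as np
from scipy.optimize import linprog

def linkmed(points, k):
    """Solve the LP relaxation of k-medoids with squared Euclidean
    dissimilarities. z[p, q] = 1 means vertex q serves as p's medoid;
    the flattened variable index of z[p, q] is p * n + q."""
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    cost = (diffs ** 2).sum(axis=-1).ravel()       # d_pq = squared distance

    # Equalities: each point gets exactly one medoid; exactly k medoids.
    A_eq = np.zeros((n + 1, n * n))
    for p in range(n):
        A_eq[p, p * n:(p + 1) * n] = 1.0           # sum_q z[p, q] = 1
    A_eq[n, [q * n + q for q in range(n)]] = 1.0   # sum_q z[q, q] = k
    b_eq = np.r_[np.ones(n), float(k)]

    # Inequalities: q can serve p only if q is a medoid, z[p,q] <= z[q,q].
    rows = []
    for p in range(n):
        for q in range(n):
            if p != q:
                row = np.zeros(n * n)
                row[p * n + q], row[q * n + q] = 1.0, -1.0
                rows.append(row)
    res = linprog(cost, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x.reshape(n, n)

rng = np.random.default_rng(0)
pts = np.r_[rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))]
z = linkmed(pts, k=2)
print(np.round(z, 3))  # integral for this well-separated instance
```

For tight clusters far apart, every row of z concentrates on a single medoid column; the interesting regime studied below is how small the separation can get before integrality is lost.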

Theorem 1.

Consider k unit balls in d-dimensional Euclidean space (with d ≥ 2) for which the centers of any two balls are separated by a distance of at least Δ. From each ball, draw n points as independent samples from a spherically symmetric distribution supported in the ball satisfying

P(‖x‖ ≤ t) ≥ t²   for all 0 ≤ t ≤ 1,                          (11)

where x denotes the displacement of a sample from the center of its ball.

Suppose that squared distances are used to measure dissimilarities between points, d_pq = ‖x_p − x_q‖². Then there exist values of n and Δ for which the following statement holds: with high probability, the optimal solution to k-medoids clustering (KMed) is unique, agrees with the unique optimal solution to its linear programming relaxation (LinKMed), and assigns the points in each ball to their own cluster.

Remark 2.

The uniform distribution satisfies (11) in dimension d = 2, but for d > 2, a distribution satisfying (11) must concentrate more probability mass towards the center of the ball. This means that the recovery results of Theorem 1 are stronger for smaller d. However, by applying a random projection in the style of the Johnson-Lindenstrauss lemma, n points in d dimensions can be projected into O(ε⁻² log n) dimensions while preserving pairwise Euclidean distances up to a multiplicative factor of 1 ± ε. In this sense, clustering problems in high-dimensional Euclidean space can be reduced to problems in low-dimensional Euclidean space [17].

Remark 3.

Once the centers of any two unit balls are separated by a distance of more than 4, points from within the same ball are necessarily closer to each other than points from different balls. For the k-medoids problem, cluster recovery guarantees in this regime are given in [30]. As far as the authors are aware, Theorem 1 provides the first recovery guarantees for k-medoids beyond this regime.

1.2 Relevant works

While the literature on clustering is extensive, three lines of inquiry are closely related to the results contained here.

  • Recovery guarantees for clustering by convex programming. Our work is aligned in spirit with the tradition of the compressed sensing community, which has sought probabilistic recovery guarantees for convex relaxations of nonconvex problems. Reference [18] presents such guarantees for the densest k-disjoint-clique problem [19]: partition a complete weighted graph into k disjoint cliques so that the sum of their average edge weights is maximized. Also notable are [20, 21, 22, 23], which find recovery guarantees for correlation clustering [24] and variants. Correlation clustering outputs a partitioning of the vertices of a complete graph whose edges are labeled either "+" (agreement) or "−" (disagreement); the partitioning minimizes the number of disagreements within clusters plus the number of agreements between clusters.

    In all papers mentioned in the previous paragraph, the probabilistic recovery guarantees apply to the stochastic block model (also known as the planted partition model) and generalizations. Consider a graph with n vertices, initially without edges. Partition the vertices into clusters. The stochastic block model [25, 26] is a random model that draws each edge of the graph independently: the probability of a "+" (respectively, "−") edge between two vertices in the same cluster is p_in (respectively, 1 − p_in), and the probability of a "+" (respectively, "−") edge between two vertices in different clusters is p_out (respectively, 1 − p_out), with p_in > p_out. Unfortunately, any model in which edge weights are drawn independently excludes graphs that represent points drawn independently in a metric space: for these graphs, the edge weights are interdependent distances.

    A recent paper [27] builds on [28, 29] to derive probabilistic recovery guarantees for subspace clustering: find the union of subspaces of R^d that lies closest to a set of points. This problem has only trivial overlap with ours; exemplars are "zero-dimensional hyperplanes" that lie close to clustered points, but there is only one zero-dimensional subspace of R^d, namely the origin. Reference [30], on the other hand, introduces a tractable convex program that does find medoids. This program can be recast as a dualized form of k-medoids clustering. However, the deterministic guarantee of [30]:

    1. applies only to the case where the clusters are recoverable by thresholding pairwise distances; that is, two points in the same cluster must be closer than two points in different clusters. Our probabilistic guarantees include a regime where such thresholding may fail.

    2. specifies that a regularization parameter λ in the objective function must be lower than some critical value for medoids to be recovered. λ is essentially a dual variable associated with the constraint on the number of medoids in k-medoids clustering, and it remains unchosen in the Karush-Kuhn-Tucker conditions used to derive the guarantee of [30]. The number of medoids obtained is thus unspecified. By contrast, we guarantee recovery of a specific number of medoids.
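    The stochastic block model discussed above can be sampled in a few lines. In this sketch, `p_in` and `p_out` are our notation for the within- and between-cluster "+" probabilities; the point of the code is that every edge label is an independent coin flip, unlike the interdependent distances of a metric-space model.

```python
import numpy as np

def sample_sbm(sizes, p_in, p_out, rng):
    """Sample a symmetric ±1 edge-label matrix from the stochastic block
    model: an edge inside a cluster is "+" (here +1) with probability
    p_in; an edge between clusters is "+" with probability p_out."""
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = len(labels)
    same = labels[:, None] == labels[None, :]
    prob_plus = np.where(same, p_in, p_out)
    signs = np.where(rng.random((n, n)) < prob_plus, 1, -1)
    signs = np.triu(signs, 1)     # keep one independent draw per pair
    signs = signs + signs.T       # symmetrize; diagonal stays zero
    return labels, signs

rng = np.random.default_rng(2)
labels, A = sample_sbm([30, 30], p_in=0.9, p_out=0.1, rng=rng)
print(A[:30, :30].mean(), A[:30, 30:].mean())  # mostly +1 within, -1 across
```

The within-block entries are overwhelmingly +1 and the between-block entries overwhelmingly −1, which is exactly the independence structure the planted-partition guarantees exploit.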

  • Recovery guarantees for learning mixtures of Gaussians.

    We derive recovery guarantees for a random model where points are drawn from isotropic distributions supported in nonoverlapping balls. This is a few steps removed from a Gaussian mixture model. Starting with the work of Dasgupta [31], several papers (a representative sample is [32, 33, 34, 35, 36, 37, 38, 39]) already report probabilistic recovery guarantees for learning the parameters of Gaussian mixture models using algorithms unrelated to convex programming. Hard clusters can be found after obtaining the parameters by associating each point x with the Gaussian whose contribution to the mixture model is largest at x. The questions here are different from ours: under what conditions does a given polynomial-time algorithm (not a convex program, which admits many algorithmic solution techniques) recover the global optimum? How close are the parameters obtained to their true values? The progression of this line of research has been towards reducing the separation distances between the centers of the Gaussians in the guarantees; in fact, the separation distances can be zero if the covariance matrices of the Gaussians differ [40, 41]. Our results are not intended to compete with these guarantees. Rather, we seek to provide complementary insights into how often clusters of points in Euclidean space are recovered by LP.

  • Approximation algorithms for k-medoids clustering and facility location. As mentioned above, for any configuration of points in one-dimensional Euclidean space, the LP relaxation of k-medoids clustering exactly recovers medoids for dissimilarities that are unsquared distances [16]. In more than one dimension, nonintegral optima whose costs are lower than that of an optimal integral solution may be realized. There is a large literature on approximation algorithms for k-medoids clustering based on rounding the LP solution and other methods. This literature encompasses a family of related problems known as facility location. The only differences between the uncapacitated facility location problem (UFL) and k-medoids clustering are that 1) only certain points are allowed to be medoids, 2) there is no constraint on the number of clusters, and 3) there is a cost associated with choosing a given point as a medoid.

    Constant-factor approximation algorithms have been obtained for metric flavors of UFL and k-medoids clustering, where the measures of distance between points used in the objective function must obey the triangle inequality. Reference [42] obtains the first polynomial-time approximation algorithm for metric UFL; it comes within a factor of 3.16 of the optimum. Several subsequent works give algorithms that improve this approximation ratio [43, 44, 45, 46, 47, 48, 49, 50, 51, 52]. It is established in [43] that unless NP ⊆ DTIME(n^{O(log log n)}), the lower bounds on approximation ratios for metric UFL and metric k-medoids clustering are, respectively, γ ≈ 1.463 and 1 + 2/e ≈ 1.736. Here, γ is the solution to γ = 1 + 2e^{−γ}. In unpublished work, Sviridenko strengthens the complexity assumption for these lower bounds to P ≠ NP. (See Theorem 4.13 of Vygen's notes [53] for a proof. We thank Shi Li for drawing our attention to this result.) The best known approximation ratios for metric UFL and metric k-medoids clustering are, respectively, 1.488 [54] and 1 + √3 + ε [55]. Before the 2012 paper [55], only a (3 + ε)-approximation algorithm had been available since 2001 [56]. Because there is still a large gap between the current best approximation ratio for k-medoids clustering (1 + √3 + ε) and the theoretical limit (1 + 2/e), finding novel approximation algorithms remains an active area of research. Along related lines, a recent paper [57] gives the first constant-factor approximation algorithm for a generalization of k-medoids clustering in which more than one medoid can be assigned to each point.

    We emphasize that our results are recovery guarantees; instead of finding a novel rounding scheme for LP solutions, we give precise conditions under which solving an LP yields the optimal k-medoids clustering. In addition, our proofs are for squared distances, which do not respect the triangle inequality.
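The local-search strategy behind several of the approximation algorithms cited above can be sketched as a single-swap heuristic: repeatedly exchange one medoid for one non-medoid while the objective improves. This is a generic illustration of the technique, not the specific algorithms achieving the quoted ratios.

```python
import numpy as np

def kmedoids_local_search(D, k, seed=0):
    """Single-swap local search for k-medoids on a dissimilarity
    matrix D: start from random medoids and accept any swap of one
    medoid for one non-medoid that lowers the total assignment cost."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = list(rng.choice(n, size=k, replace=False))

    def cost(M):
        # Each point is served by its nearest medoid in M.
        return D[:, M].min(axis=1).sum()

    improved = True
    while improved:
        improved = False
        for i in range(k):
            for q in range(n):
                if q in medoids:
                    continue
                trial = medoids.copy()
                trial[i] = q
                if cost(trial) < cost(medoids):
                    medoids, improved = trial, True
    return sorted(medoids), cost(medoids)

# Two tight 1-D groups; squared distances as the dissimilarity.
pts = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
D = (pts[:, None] - pts[None, :]) ** 2
print(kmedoids_local_search(D, k=2))  # middle point of each group
```

Local search only guarantees a local optimum in general; on this tiny instance every local optimum happens to be the global one, which is why the middle point of each group is always returned.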

1.3 Organization

The next section of this paper uses linear programming duality theory to derive sufficient conditions under which the optimal solution to the k-medoids integer program KMed coincides with the unique optimal solution of its linear programming relaxation LinKMed. The third section obtains probabilistic guarantees for exact recovery of an integer solution by the linear program, focusing on recovering clusters of points drawn from separated balls of equal radius. Numerical experiments demonstrating the efficacy of the linear programming approach for recovering clusters beyond our analytical results are reviewed in the fourth section. The final section discusses a few open questions, and an appendix contains one of our proofs.

2 Recovery guarantees via dual certificates

For each point p, let m(p) be the index of p's medoid and m′(p) the index of its next-best medoid; for points in Euclidean space, m′(p) is the index of the second-closest medoid to point p. For simplicity of presentation, take m′(p) = m(p) when there is only one medoid. Denote by C(q) the set of points whose medoid is indexed by q. Let (·)₊ refer to the positive part of the term enclosed in parentheses. Begin by writing a necessary and sufficient condition for a unique integral solution to LinKMed.

Proposition 4.

LinKMed has a unique optimal solution that coincides with the optimal solution to KMed if and only if there exist dual multipliers such that

(12)

Proposition 4 rewrites the Karush-Kuhn-Tucker (KKT) conditions corresponding to the linear program LinKMed in a convenient way; refer to the appendix for a derivation. Let n_p be the number of points in the same cluster as point p. Choose the multipliers appropriately to obtain the following tractable sufficient condition for medoid recovery.

Corollary 5.

LinKMed has a unique optimal solution that coincides with the optimal solution to KMed if there exists a value such that

(13)

holds for all points and clusters under consideration.

Remark 6.

The choice of the KKT multipliers made here is democratic: each cluster has a fixed total of "votes," which it distributes proportionally among its points.

Now consider the dual certificates contained in the following two corollaries.

Corollary 7.

If KMed has a unique optimal solution, LinKMed has the same unique optimal solution when

(14)
Remark 8.

Choose points from within k balls in R^d, each of radius R, for which the centers of any two balls are separated by a distance of at least ΔR. Measure lengths in units of a ball's radius by setting R = 1. Take d_pq = D_pq², where D_pq is the distance between points p and q. The inequality (14) is satisfied for

(15)

by assigning the points chosen from each ball to their own cluster. Here, the maximum and minimum numbers of points drawn from any one of the balls enter the bound. In the limit of many points drawn from each ball, (15) reduces to a condition on Δ alone.

Proof.

Impose the constraints with equality both when points p and q are in the same cluster and when they are in different clusters. Combined with (13), the restrictions are then

(16)
(17)

Condition (16) holds by definition of a medoid unless the optimal solution to KMed itself is not unique. In that event, it may be possible for a nonmedoid and a medoid in the same cluster to trade roles while maintaining solution optimality, making the LHS of (16) vanish for some points. The phrasing of the corollary accommodates this edge case. ∎

Remark 9.

The inequality (17) imposes a requirement on every point relative to points in the same cluster and points in different clusters. So any two points in the same cluster must be closer than any two points in different clusters.

Corollary 7 does not illustrate the utility of LP for solving KMed. Given the conditions of its recovery guarantee, clustering could be performed without LP using some distance threshold t: place two points in the same cluster if the distance between them is smaller than t, and place them in different clusters if the distance between them is greater than t. In the separated balls model of Remark 8, Δ > 4 guarantees that two points in the same ball are closer than two points in different balls. The next corollary is needed to obtain results for Δ < 4.
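The thresholding alternative just described can be sketched directly: connect any two points closer than the threshold and take connected components. The threshold `t` and the union-find bookkeeping are illustrative; the point is that this procedure needs every within-cluster distance to be smaller than every between-cluster distance.

```python
import numpy as np

def threshold_cluster(points, t):
    """Cluster by thresholding pairwise distances: connect two points
    whenever their distance is below t, then take connected components
    via union-find. Succeeds only when a single threshold separates
    within-cluster from between-cluster distances."""
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    adj = np.sqrt((diffs ** 2).sum(axis=-1)) < t
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for p in range(n):
        for q in range(p + 1, n):
            if adj[p, q]:
                parent[find(p)] = find(q)
    roots = [find(p) for p in range(n)]
    return [roots.index(r) for r in roots]  # canonical labels

pts = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 0.0], [5.5, 0.0]])
print(threshold_cluster(pts, t=1.0))  # [0, 0, 2, 2]
```

When a cross-cluster pair is closer than some within-cluster pair, no single value of `t` works, which is exactly the regime where the LP-based guarantees of Corollary 10 become interesting.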

Corollary 10.

Let

(18)

LinKMed has a unique optimal solution that coincides with the optimal solution to KMed if

(19)
(20)
Proof.

Impose the constraints with equality only when points p and q are in the same cluster, so that together with (13) the restrictions are

(21)
(22)

To minimize the LHS of (21), choose the multiplier so that it approaches its lower bound. ∎

Remark 11.

The inequality (22) imposes a requirement only on pairs of points in the same cluster.

Corollary 7 imposes both extra upper bounds and extra lower bounds on the dual variables in its proof. When two points in different clusters are closer than two points in the same cluster, the variables cannot simultaneously satisfy these upper and lower bounds. To break this "thresholding barrier," Corollary 10 imposes only extra lower bounds and permits large values. Stronger recovery guarantees are obtained in this way when medoids are sparsely distributed among the points. (Note that the optimal solution is k-column sparse.) The next section obtains probabilistic guarantees using Corollary 10 for a variant of the separated balls model of Remark 8.

3 A recovery guarantee for separated balls

The theorem stated in the introduction is proved in this section. Consider k nonoverlapping d-dimensional unit balls in Euclidean space for which the centers of any two balls are separated by a distance of at least Δ. Take d_pq to be the squared distance between points p and q. Under a mild assumption about how points are drawn within each ball, the exact recovery guarantee of Remark 8 is extended here to the regime Δ < 4, where two points in the same cluster are not necessarily closer to each other than two points in different clusters. In particular, let the points in each ball correspond to independent samples of an isotropic distribution supported in the ball which obeys

P(‖x‖ ≤ t) ≥ t²   for all 0 ≤ t ≤ 1.                          (23)

Above, x is the vector extending from the center of the ball to a given point, and ‖x‖ refers to its Euclidean norm. In d = 2 dimensions, the assumption (23) holds for the uniform distribution supported in the ball. For larger d, (23) requires distributions that concentrate more probability mass closer to the ball's center. For simplicity, we assume in the sequel that the number of points drawn from each ball is equal. Let E denote an expectation and Var a variance. We state a preliminary lemma.

Lemma 12.

Consider points sampled independently from an isotropic distribution supported in a d-dimensional unit ball which satisfies (23). Use squared Euclidean distances to measure dissimilarities between points, and let one of the sampled points be the medoid of the set. Under suitable assumptions on the number of points and the dimension, with probability exceeding the bound established in the proof, all of the following statements are true.

  1. for all .

  2. .

  3. .

Proof.

First prove statement 1. Note that for each sampled point,

(24)
(25)
(26)

Above, the last step isolates projections of the sampled points onto a fixed direction. Since the points are drawn from an isotropic distribution, the direction of each unit vector is independent of its magnitude and distributed uniformly at random. It follows that these projection terms are i.i.d. zero-mean random variables despite their dependence on the magnitudes; their moments may be computed by integrating in generalized spherical coordinates. Bernstein's inequality thus gives

(27)

So the projection term is bounded from above with high probability given (27). Further, an additional bound follows from the triangle inequality. These facts together with (26) imply that for a given point,

(28)
(29)
(30)
(31)

with high probability. For the medoid itself, the inequalities above clearly hold with unit probability. Take a union bound over the other points to obtain that statement 1 holds with the claimed probability (for parameters valid as specified in (27)).

Now observe that analogous expectation and variance bounds apply, so statement 2 holds with high probability, and statements 1 and 2 together hold with the claimed probability. Condition on them, and prove statement 3 by contradiction: suppose the medoid lies farther from the center of the ball than statement 3 allows. Then

(32)

But combined with statement 1, this implies that

(33)

which violates the assumption that the chosen point is a medoid. So all three statements hold with the claimed probability, which is the content of the lemma. ∎
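The isotropy argument at the heart of the proof (projections of uniformly random directions onto any fixed vector are i.i.d., zero-mean, and bounded, so their sums concentrate) can be checked numerically. The parameters below are arbitrary, chosen only to make the concentration visible.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 20000, 3
dirs = rng.normal(size=(n, d))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # uniform directions
u = np.array([1.0, 0.0, 0.0])                        # any fixed unit vector
proj = dirs @ u                                      # bounded in [-1, 1]

print(proj.mean())  # near 0: the projections are zero-mean
# Batch the projections and look at averages of 100 at a time;
# Bernstein-style concentration keeps large deviations extremely rare.
batch = proj.reshape(200, 100).mean(axis=1)
print((np.abs(batch) > 0.3).mean())  # essentially no large deviations
```

The same concentration holds regardless of how the magnitudes of the sampled points are distributed, which is why the proof can treat the projection terms as i.i.d. even though the magnitudes vary.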

We now write the main result of this section.

Theorem 13.

(Restatement of Theorem 1.) Consider k unit balls in d-dimensional Euclidean space (with d ≥ 2) for which the centers of any two balls are separated by a distance of at least Δ. From each ball, draw n points as independent samples from an isotropic distribution supported in the ball which satisfies

(34)

Suppose that squared distances are used to measure dissimilarities between points, d_pq = ‖x_p − x_q‖². For each d, there exist values of n and Δ for which the following statement holds: with high probability, the unique optimal solution to each of k-medoids clustering and its linear programming relaxation assigns the points in each ball to their own cluster.

Remark 14.

Valid combinations of n, d, and Δ may be obtained by satisfying the inequalities (38), (47), and (49) in the proof below.

Proof.

Condition on the events that the three statements of Lemma 12 hold for each ball. Choose the parameters so that the probability these events occur together exceeds the bound claimed in the theorem; by Lemma 12, this requires

(35)

Now simplify the sufficient condition of Corollary 10 under these choices. Let the bound from statement 3 of Lemma 12 cap the maximum distance of a medoid to the center of its respective ball, and note the upper bound obtained by considering a point collinear with and between two points on one ball's boundary; take this value to narrow the sufficient condition of Corollary 10. Also note the requirement (19):

(36)

Obtain a lower bound on the RHS by considering a point on the boundary of one ball collinear with the two relevant points. Impose

(37)

to ensure the RHS exceeds the LHS. This is equivalent to

(38)

Given the stipulations of the previous paragraph and (20) of Corollary 10, the following holds: each of k-medoids clustering and its LP relaxation has a unique optimal solution that assigns the points in each ball to their own cluster if, for all relevant points,

(39)

Denote by c(p) the center of the ball associated with point p. (This is a slight abuse of notation because c(p) is not an index of any point drawn, but all its usages here should be clear.) Find conditions under which (39) holds by treating two complementary cases separately:

  1. In the first case, consider for each relevant pair a point collinear with and between the two points to obtain

    (40)

    It follows that the LHS of (39) has an upper bound that vanishes. Moreover, the RHS of (39) must be positive by definition of a (unique) medoid. So (39) holds with unit probability.

  2. In the complementary case, first bound the number of points in a given cluster for which the relevant inequalities hold. From the distributional assumption (11),

    (41)

    Hoeffding’s inequality thus gives

    (42)

    Choose the free parameter to obtain

    (43)

    Condition on the event captured in the inequality above holding for every cluster; this occurs with the probability claimed. Next, observe that considering (as for (40)) a point collinear with and between the two relevant points gives the deterministic bound

    (44)

    Combine this with the bound from (43) to find that the LHS of (39) obeys

    (45)

    Statement 1 of Lemma 12 bounds the RHS of (39) from below:

    (46)

    where the displacement vector extends from the center of the corresponding ball. The expression on the RHS has an interior minimum; thus, provided

    (47)

    a lower bound on the LHS of (46) is given by

    (48)

    Combining (45) and (48) provides a sufficient condition for (39):

    (49)

    It is easily verified numerically that this inequality is satisfied for suitable parameter choices, as are the other bounds (38) and (47). Further, for any dimension d, there exists some finite n large enough that (35) is satisfied. Similar checks may be performed to obtain other valid combinations of the parameters (see Remark 14).

In the proof above, the sole events conditioned on are that Lemma 12 holds for each cluster and that the Hoeffding inequality (43) holds for each cluster. As recorded in the theorem, the probability that all of these events occur exceeds the stated bound. All components of the theorem are now proved. ∎

4 Simulations

Consider k nonoverlapping d-dimensional unit balls in R^d for which the separation distance between the centers of any two balls is exactly Δ. Consider the two cases that follow, referenced later as Case 1 and Case 2.

  1. Each ball is the support of a uniform distribution.

  2. Each ball is the support of a distribution that satisfies

    P(r ≤ t) = t²   for all 0 ≤ t ≤ 1,   (50)

    where r is the Euclidean distance of a sample from the center of the ball. For d = 2, this is a uniform distribution. Equation (50) saturates the inequality (23), which is the distributional assumption of our probabilistic recovery guarantees.

Given one of these cases, sample each of the k distributions n times so that n points are drawn from each ball. Solve LinKMed for this configuration of points and record when

  1. a solution to KMed is recovered. (Call this “cluster recovery.”)

  2. a recovered solution to KMed places points drawn from distinct balls in distinct clusters, the situation for which our recovery guarantees apply. (Call this “ball recovery,” a sufficient condition for cluster recovery.)

Examples of ball recoveries and of cluster recoveries that are not ball recoveries are displayed in Figure 2 for d = 2.

Figure 2: In the failed ball recovery for two clusters b), just one point in the left ball is not placed in the same cluster as the other points in the ball. In the failed ball recovery for three clusters d), four points in the bottom right ball are not placed in the same cluster as the other points in the ball.
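A minimal self-contained stand-in for such simulations is sketched below. All names are illustrative, brute-force enumeration of medoid sets replaces the LP used in the actual experiments, and the Case 2 radial law P(r ≤ t) = t² is our reading of the saturating distribution (50); the sketch samples a configuration, solves KMed exactly on a tiny instance, and tallies ball-recovery failures.

```python
import itertools
import numpy as np

def sample_case(case, n, d, delta, k, rng):
    """Draw n points from each of k unit balls whose centers sit delta
    apart along the first coordinate axis. Case 1: uniform in each ball;
    Case 2: radial CDF P(r <= t) = t**2 (the saturating distribution)."""
    centers = np.zeros((k, d))
    centers[:, 0] = delta * np.arange(k)
    dirs = rng.normal(size=(k * n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    u = rng.random(k * n)
    radii = u ** (1.0 / d) if case == 1 else np.sqrt(u)
    points = dirs * radii[:, None] + np.repeat(centers, n, axis=0)
    return points, np.repeat(np.arange(k), n)

def kmed_bruteforce(points, k):
    """Exact KMed by enumerating all medoid sets (fine for tiny inputs)."""
    diffs = points[:, None, :] - points[None, :, :]
    d2 = (diffs ** 2).sum(axis=-1)       # squared-distance dissimilarities
    best = min(itertools.combinations(range(len(points)), k),
               key=lambda M: d2[:, list(M)].min(axis=1).sum())
    return d2[:, list(best)].argmin(axis=1)  # cluster label of each point

def ball_recovery(labels, balls, k):
    """True when each ball's points form exactly one cluster of their own."""
    per_ball = [set(labels[balls == b]) for b in range(k)]
    return all(len(s) == 1 for s in per_ball) and \
        len(set.union(*per_ball)) == k

rng = np.random.default_rng(4)
fails = 0
for _ in range(50):
    pts, balls = sample_case(1, n=10, d=2, delta=3.0, k=2, rng=rng)
    fails += not ball_recovery(kmed_bruteforce(pts, k=2), balls, k=2)
print(fails, "ball-recovery failures in 50 trials")
```

Even at a separation of 3, well inside the regime where thresholding can fail, failures are rare; increasing the separation past 4 makes ball recovery certain on such instances.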

We performed such simulations using MATLAB in conjunction with Gurobi Optimizer 5.5's barrier method implementation for every combination of the choices in the table below.

    points drawn per ball (n):   5, 10, 15, 20, 25, 30
    number of balls (k):         2, 3
    separation distance (Δ):     2, 2.2, 2.4, 2.6, 2.8, 3, 3.2, 3.4, 3.6, 3.8, 4, 4.2, 4.4, 4.6, 4.8, 5
    dimension (d):               2, 3, 4, 10
    distribution:                Cases 1, 2
Remarkably, cluster recovery failed no more than 12 (respectively, 8) times out of 1000 across all sets of 1000 simulations for Case 1 (respectively, Case 2). It therefore appears that high-probability cluster recovery is always realized when drawing samples from the distributions we consider. However, since the KKT conditions require some assumption about how points cluster, general cluster recovery guarantees are difficult to prove. In the previous section, we obtain guarantees assuming the points cluster into the balls from which they are drawn. The ball recovery results of our simulations for Cases 1 and 2 are depicted in Figures 3 and 4, respectively. Note that the vertical axis of each plot measures the number of failed ball recoveries. We conclude this section with the following observations.

  • On the whole, Case 2 yields more ball recoveries than Case 1. This is not unexpected: with the exception of d = 2, Case 2 concentrates more probability mass towards the centers of the balls than does Case 1, typically making the points drawn from each ball cluster more tightly. For d = 2, the plots in both Figures 3 and 4 correspond to draws from uniform distributions supported in the balls; they are repetitions and thus look essentially the same.

  • In general, as the number of points drawn from each ball is increased, the number of ball recoveries increases for fixed k, d, and Δ. This is again not unexpected: if fewer points are drawn, clustering is more susceptible to outliers that can prevent ball recovery.

  • As Δ increases, the number of ball recoveries increases for fixed n, k, and d because points drawn from different balls tend to get farther apart. High-probability ball recovery appears to be realized well before Δ reaches 4, even for the small values of n considered here. This is considerably better than the guarantee of Theorem 13, which demands substantially larger values of Δ and n, as shown toward the end of its proof.

  • For n, d, and Δ fixed, there are more ball recoveries for two balls than there are for three balls. This suggests that as k increases, the probability of recovery decreases, which is consistent with intuition from Theorem 13.

  • For n, k, and Δ fixed, as d increases, the number of ball recoveries increases, even for the uniform distributions of Case 1. There is thus substantial room for improving our recovery guarantees, which require concentrating more probability mass towards the centers of the balls as d increases.

Figure 3: Exact recovery appears to be guaranteed with high probability for uniform distributions once n reaches the double digits, even at small separation distances Δ. This is substantially better than Theorem 13 suggests.

Figure 4: The d = 2 plots here and in Figure 3 are repetitions of the same set of simulations. As d increases, the number of exact recoveries increases faster than it does for the uniform distributions of Figure 3.

5 Concluding remarks

We proved that with high probability, the k-medoids clustering problem and its LP relaxation share a unique globally optimal solution in a nontrivial regime, where two points in the same cluster may be farther apart than two points in different clusters. However, our theoretical guarantees are preliminary; they fall far short of explaining the success of LP in distinguishing points drawn from different balls at small separation distances and with few points in each ball. More generally, in simulations we did not present here, the k-medoids LP relaxation appeared to recover integer solutions for very extreme configurations of points: in the presence of extreme outliers as well as for nonisotropic clusters with vastly different numbers of points. We thus conclude with a few open questions that interest us.

  • How do recovery guarantees change for different choices of the dissimilarities between points—for example, for Euclidean distances rather than for the squared Euclidean distances used here? What about for Gaussian and exponential kernels?

  • Can exact recovery be used to better characterize outliers?

  • Is it possible to obtain cluster recovery guarantees instead of just ball recovery guarantees? (“Cluster recovery” and “ball recovery” are defined right after (50).)
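The first open question above, on alternative dissimilarities, amounts to swapping the matrix fed to the same LP. A minimal sketch (our own, with hypothetical kernel bandwidth sigma) of the candidate dissimilarity measures mentioned there:

```python
# Sketch of alternative dissimilarity matrices that could replace the
# squared Euclidean distances in the k-medoids LP; sigma is a
# hypothetical bandwidth parameter.
import numpy as np


def dissimilarities(X, kind="sq_euclidean", sigma=1.0):
    diff = X[:, None, :] - X[None, :, :]
    sq = (diff ** 2).sum(axis=-1)          # pairwise squared distances
    if kind == "sq_euclidean":
        return sq
    if kind == "euclidean":
        return np.sqrt(sq)
    if kind == "gaussian":                  # 1 - Gaussian kernel similarity
        return 1.0 - np.exp(-sq / (2.0 * sigma ** 2))
    if kind == "exponential":               # 1 - exponential kernel similarity
        return 1.0 - np.exp(-np.sqrt(sq) / sigma)
    raise ValueError(kind)


X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
D = dissimilarities(X, "gaussian")
```

The kernel-based choices saturate at 1 for distant pairs, so they weight outliers very differently from squared Euclidean distances, which is precisely why the recovery analysis could change.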

Acknowledgements

We thank Shi Li and Chris White for helpful suggestions. We are extremely grateful to Sujay Sanghavi for offering his expertise on clustering and for pointing us in the right directions as we navigated the literature. A.N. is especially grateful to Jun Song for his constructive suggestions and for general support during the preparation of this work. R.W. was supported in part by a Research Fellowship from the Alfred P. Sloan Foundation, an ONR Grant N00014-12-1-0743, an NSF CAREER Award, and an AFOSR Young Investigator Program Award. A.N. was supported by Jun Song’s grant R01CA163336 from the National Institutes of Health.

References

  • [1] D. Aloise, A. Deshpande, P. Hansen, and P. Popat, “NP-hardness of Euclidean sum-of-squares clustering,” Machine Learning, vol. 75, no. 2, pp. 245–248, 2009.
  • [2] S. Dasgupta and Y. Freund, “Random projection trees for vector quantization,” Information Theory, IEEE Transactions on, vol. 55, no. 7, pp. 3229–3242, 2009.
  • [3] S. Lloyd, “Least squares quantization in PCM,” Information Theory, IEEE Transactions on, vol. 28, no. 2, pp. 129–137, 1982.
  • [4] J. A. Hartigan and M. A. Wong, “Algorithm as 136: A k-means clustering algorithm,” Journal of the Royal Statistical Society. Series C (Applied Statistics), vol. 28, no. 1, pp. 100–108, 1979.
  • [5] C. H. Papadimitriou, “Worst-case and probabilistic analysis of a geometric location problem,” SIAM Journal on Computing, vol. 10, no. 3, pp. 542–557, 1981.
  • [6] N. Megiddo and K. J. Supowit, “On the complexity of some common geometric location problems,” SIAM Journal on Computing, vol. 13, no. 1, pp. 182–196, 1984.
  • [7] M. Van der Laan, K. Pollard, and J. Bryan, “A new partitioning around medoids algorithm,” Journal of Statistical Computation and Simulation, vol. 73, no. 8, pp. 575–584, 2003.
  • [8] L. Kaufman and P. J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis, vol. 344. Wiley, 2009.
  • [9] B. J. Frey and D. Dueck, “Clustering by passing messages between data points,” Science, vol. 315, no. 5814, pp. 972–976, 2007.
  • [10] I. E. Givoni and B. J. Frey, “A binary variable model for affinity propagation,” Neural Computation, vol. 21, no. 6, pp. 1589–1600, 2009.
  • [11] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 19, no. 7, pp. 711–720, 1997.
  • [12] U. Bodenhofer, A. Kothmeier, and S. Hochreiter, “Apcluster: an R package for affinity propagation clustering,” Bioinformatics, vol. 27, no. 17, pp. 2463–2464, 2011.
  • [13] M. Mézard, “Where are the exemplars?,” Science, vol. 315, no. 5814, pp. 949–951, 2007.
  • [14] M. Leone, M. Weigt, et al., “Clustering by soft-constraint affinity propagation: applications to gene-expression data,” Bioinformatics, vol. 23, no. 20, pp. 2708–2715, 2007.
  • [15] J. Tang, J. Sun, C. Wang, and Z. Yang, “Social influence analysis in large-scale networks,” in Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 807–816, ACM, 2009.
  • [16] S. de Vries, M. Posner, and R. Vohra, “The k-median problem on a tree,” tech. rep., Citeseer, 1998.
  • [17] C. Boutsidis, A. Zouzias, and P. Drineas, “Random projections for k-means clustering,” in Proc. NIPS, pp. 298–306, 2010.
  • [18] B. P. Ames, “Guaranteed clustering and biclustering via semidefinite programming,” arXiv preprint arXiv:1202.3663, 2012.
  • [19] B. P. Ames and S. A. Vavasis, “Convex optimization for the planted k-disjoint-clique problem,” arXiv preprint arXiv:1008.2814, 2010.
  • [20] S. Oymak and B. Hassibi, “Finding dense clusters via low rank+ sparse decomposition,” arXiv preprint arXiv:1104.5186, 2011.
  • [21] A. Jalali and N. Srebro, “Clustering using max-norm constrained optimization,” arXiv preprint arXiv:1202.5598, 2012.
  • [22] Y. Chen, S. Sanghavi, and H. Xu, “Clustering sparse graphs,” arXiv preprint arXiv:1210.3335, 2012.
  • [23] A. Jalali, Y. Chen, S. Sanghavi, and H. Xu, “Clustering partially observed graphs via convex optimization,” arXiv preprint arXiv:1104.4803, 2011.
  • [24] N. Bansal, A. Blum, and S. Chawla, “Correlation clustering,” Machine Learning, vol. 56, no. 1-3, pp. 89–113, 2004.
  • [25] A. Condon and R. M. Karp, “Algorithms for graph partitioning on the planted partition model,” Random Structures and Algorithms, vol. 18, no. 2, pp. 116–140, 2001.
  • [26] P. W. Holland, K. B. Laskey, and S. Leinhardt, “Stochastic blockmodels: First steps,” Social networks, vol. 5, no. 2, pp. 109–137, 1983.
  • [27] M. Soltanolkotabi, E. Elhamifar, and E. Candès, “Robust subspace clustering,” arXiv preprint arXiv:1301.2603, 2013.
  • [28] E. Elhamifar and R. Vidal, “Sparse subspace clustering: Algorithm, theory, and applications,” arXiv preprint arXiv:1203.1005, 2012.
  • [29] E. Elhamifar and R. Vidal, “Sparse subspace clustering,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 2790–2797, IEEE, 2009.
  • [30] E. Elhamifar, G. Sapiro, and R. Vidal, “Finding exemplars from pairwise dissimilarities via simultaneous sparse recovery,” in Advances in Neural Information Processing Systems, pp. 19–27, 2012.
  • [31] S. Dasgupta, “Learning mixtures of Gaussians,” in Foundations of Computer Science, 1999. 40th Annual Symposium on, pp. 634–644, IEEE, 1999.
  • [32] S. Arora and R. Kannan, “Learning mixtures of arbitrary Gaussians,” in Proceedings of the thirty-third annual ACM symposium on Theory of Computing, pp. 247–257, ACM, 2001.
  • [33] S. Vempala and G. Wang, “A spectral algorithm for learning mixture models,” Journal of Computer and System Sciences, vol. 68, no. 4, pp. 841–860, 2004.
  • [34] R. Kannan, H. Salmasian, and S. Vempala, “The spectral method for general mixture models,” in Learning Theory, pp. 444–457, Springer, 2005.
  • [35] D. Achlioptas and F. McSherry, “On spectral learning of mixtures of distributions,” in Learning Theory, pp. 458–469, Springer, 2005.
  • [36] J. Feldman, R. A. Servedio, and R. O’Donnell, “PAC learning axis-aligned mixtures of gaussians with no separation assumption,” in Learning Theory, pp. 20–34, Springer, 2006.
  • [37] S. C. Brubaker, “Robust PCA and clustering in noisy mixtures,” in Proceedings of the twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1078–1087, Society for Industrial and Applied Mathematics, 2009.
  • [38] M. Belkin and K. Sinha, “Learning gaussian mixtures with arbitrary separation,” arXiv preprint arXiv:0907.1054, 2009.
  • [39] K. Chaudhuri, S. Dasgupta, and A. Vattani, “Learning mixtures of gaussians using the k-means algorithm,” arXiv preprint arXiv:0912.0086, 2009.
  • [40] A. T. Kalai, A. Moitra, and G. Valiant, “Efficiently learning mixtures of two gaussians,” in Proceedings of the 42nd ACM symposium on Theory of computing, pp. 553–562, ACM, 2010.
  • [41] M. Belkin and K. Sinha, “Polynomial learning of distribution families,” in Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pp. 103–112, IEEE, 2010.
  • [42] D. B. Shmoys, É. Tardos, and K. Aardal, “Approximation algorithms for facility location problems,” in Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, pp. 265–274, ACM, 1997.
  • [43] S. Guha and S. Khuller, “Greedy strikes back: Improved facility location algorithms,” in Proceedings of the ninth annual ACM-SIAM symposium on Discrete algorithms, pp. 649–657, Society for Industrial and Applied Mathematics, 1998.
  • [44] M. R. Korupolu, C. G. Plaxton, and R. Rajaraman, “Analysis of a local search heuristic for facility location problems,” in Proceedings of the ninth annual ACM-SIAM symposium on Discrete algorithms, pp. 1–10, Society for Industrial and Applied Mathematics, 1998.
  • [45] M. Charikar and S. Guha, “Improved combinatorial algorithms for the facility location and k-median problems,” in Foundations of Computer Science, 1999. 40th Annual Symposium on, pp. 378–388, IEEE, 1999.
  • [46] M. Mahdian, E. Markakis, A. Saberi, and V. Vazirani, “A greedy facility location algorithm analyzed using dual fitting,” in Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques, pp. 127–137, Springer, 2001.
  • [47] K. Jain, M. Mahdian, and A. Saberi, “A new greedy approach for facility location problems,” in Proceedings of the thirty-fourth annual ACM symposium on Theory of Computing, pp. 731–740, ACM, 2002.
  • [48] F. A. Chudak and D. B. Shmoys, “Improved approximation algorithms for the uncapacitated facility location problem,” SIAM Journal on Computing, vol. 33, no. 1, pp. 1–25, 2003.
  • [49] K. Jain, M. Mahdian, E. Markakis, A. Saberi, and V. V. Vazirani, “Greedy facility location algorithms analyzed using dual fitting with factor-revealing lp,” Journal of the ACM (JACM), vol. 50, no. 6, pp. 795–824, 2003.
  • [50] M. Sviridenko, “An improved approximation algorithm for the metric uncapacitated facility location problem,” in Integer programming and combinatorial optimization, pp. 240–257, Springer, 2006.
  • [51] M. Mahdian, Y. Ye, and J. Zhang, “Approximation algorithms for metric facility location problems,” SIAM Journal on Computing, vol. 36, no. 2, pp. 411–432, 2006.
  • [52] J. Byrka, “An optimal bifactor approximation algorithm for the metric uncapacitated facility location problem,” in Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pp. 29–43, Springer, 2007.
  • [53] J. Vygen, Approximation Algorithms for Facility Location Problems. Forschungsinstitut für Diskrete Mathematik, Rheinische Friedrich-Wilhelms-Universität, 2005.
  • [54] S. Li, “A 1.488 approximation algorithm for the uncapacitated facility location problem,” Information and Computation, 2012.
  • [55] S. Li and O. Svensson, “Approximating k-median via pseudo-approximation,” in Proceedings of the 45th annual ACM symposium on Symposium on theory of computing, pp. 901–910, ACM, 2013.
  • [56] V. Arya, N. Garg, R. Khandekar, A. Meyerson, K. Munagala, and V. Pandit, “Local search heuristics for k-median and facility location problems,” SIAM Journal on Computing, vol. 33, no. 3, pp. 544–562, 2004.
  • [57] M. Hajiaghayi, W. Hu, J. Li, S. Li, and B. Saha, “A constant factor approximation algorithm for fault-tolerant k-median,” arXiv preprint arXiv:1307.2808, 2013.

Appendix: derivation of Proposition 4

Proposition 15.

(Restatement of Proposition 4.) LinKMed has a unique solution that coincides with the solution to KMed if and only if there exist some and such that

(51)
Proof.

Suppose the solution to KMed is known. Let be the index set of nonzero entries of , and let be its complement. For some matrix , denote as the vector of variables for which . Eliminating the from LinKMed using the constraints (7) yields the following equivalent program:

(52)
s.t. (53)
(54)
(55)
(56)
(57)
(58)

where . The only in the program (52)-(58) have . Associate the nonnegative dual variables , , , , , and with (53), (54), (55), (56), (57), and (58), respectively. Enforcing stationarity of the Lagrangian gives

(59)
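Since the symbols in (59) did not survive extraction, the generic shape of this step is worth recording in notation of our own choosing (not the paper's): for a linear program with equality and inequality constraints and nonnegative variables, stationarity of the Lagrangian in the primal variables yields a linear relation between the cost vector and the dual variables.

```latex
% Generic illustration (our notation, not the paper's): for the LP
%   min_z  c^T z   s.t.   A z = b,  G z <= h,  z >= 0,
% with multipliers \nu (free), \lambda >= 0, \mu >= 0, the Lagrangian is
\mathcal{L}(z,\nu,\lambda,\mu)
  = c^{\mathsf T} z + \nu^{\mathsf T}(Az - b)
  + \lambda^{\mathsf T}(Gz - h) - \mu^{\mathsf T} z ,
% and stationarity in z gives
\nabla_z \mathcal{L}
  = c + A^{\mathsf T}\nu + G^{\mathsf T}\lambda - \mu = 0 .
```

In the proof above, the equalities play the role of (53)-(54), the inequalities the role of (55)-(58), and (59) is the corresponding stationarity relation in the paper's variables.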