1. Introduction
Clustering is one of the basic problems of statistics and machine learning: given a collection of data points and a measure of their pairwise similarity, the task is to partition the data into meaningful groups. There is a variety of criteria for the quality of a partitioning and a plethora of clustering algorithms, overviewed in [14, 34, 50, 51]. Among the most widely used are centroid-based (for example the k-means algorithm), agglomeration-based (or hierarchical), and graph-based ones. Many graph-partitioning approaches divide the graph representing the data into clusters of balanced sizes with as few edges between them as possible [4, 5, 24, 35, 39, 40, 49]. Spectral clustering is a relaxation of minimizing graph cuts which, in any of its variants [29, 35, 47], consists of two steps. The first is the embedding step, where data points are mapped to a Euclidean space using the spectrum of a graph Laplacian. In the second step, the actual clustering is obtained by applying a clustering algorithm, such as k-means, to the transformed points. The input of a spectral clustering algorithm is a weight matrix which captures the similarity relation between the data points. Typically, the choice of edge weights depends on the distance between the data points and on a parameter which determines the length scale over which points are connected. We assume that the data set is a random sample of an underlying ground-truth measure, and we investigate the convergence of spectral clustering as the number of available data points goes to infinity.
For any given clustering procedure, a natural and important question is whether the procedure is consistent; that is, whether, as more data is collected, the partitioning of the data into groups converges to some meaningful partitioning in the limit. Despite the abundance of clustering procedures in the literature, not many results establish their consistency in the nonparametric setting, where the data is assumed to be obtained from an unknown general distribution. Consistency of k-means clustering was established by Pollard [31]. Consistency of k-means clustering for paths with regularization was recently studied by Thorpe, Theil and Cade [44], using a viewpoint similar to that of this paper. Consistency for a class of single-linkage clustering algorithms was shown by Hartigan [22]. Arias-Castro and Pelletier have proved the consistency of maximum variance unfolding [3]. Pointwise estimates between graph Laplacians and the continuum operators were studied by Belkin and Niyogi [8], Coifman and Lafon [11], Giné and Koltchinskii [18], Hein, Audibert and von Luxburg [23], and Singer [37]. Spectral convergence was studied in the works of Ting, Huang, and Jordan [45], of Belkin and Niyogi [7] on the convergence of Laplacian eigenmaps, of von Luxburg, Belkin and Bousquet [48] on graph Laplacians, and of Singer and Wu [38] on the connection graph Laplacian. The convergence of the eigenvalues and eigenvectors these works obtain is of great relevance to machine learning. However, obtaining practical and rigorous rates at which the connectivity length scale $\varepsilon_n$ can be taken to $0$ as $n \to \infty$ has remained an open problem. Also relevant to point-cloud analysis are studies of Laplacians on discretized manifolds by Burago, Ivanov and Kurylev [10], who obtain precise error estimates for eigenvalues and eigenvectors. Recently the authors in [15], and together with Laurent, von Brecht and Bresson in [17], introduced a framework for showing the consistency of clustering algorithms based on minimizing an objective functional on graphs. In [17] they applied the technique to Cheeger and Ratio cuts. Here the framework of [15, 17] is used to prove new results on the consistency of spectral clustering, which establish the (almost) optimal rate at which the connectivity radius $\varepsilon_n$ can be taken to $0$ as $n \to \infty$. We prove the convergence of the spectrum of the graph Laplacian towards the spectrum of a corresponding continuum operator. An important element of our work is that we establish the convergence of the discrete clusters obtained via spectral clustering to their continuum counterparts. That is, as the number of data points $n \to \infty$, the discrete clusters (obtained via spectral clustering) are shown to converge towards continuum objects (measures), which are themselves obtained via a clustering procedure in the continuum setting (performed on the ground-truth measure). In other words, the discrete clusters are shown to converge to continuum clusters obtained via a spectral clustering procedure with full information (the ground-truth measure) available. We obtain results for the unnormalized (Theorem 1.2) and normalized (Theorem 1.5 and Corollary 1.7) graph Laplacians. The bridge connecting the spectrum of the graph Laplacian and the spectrum of a limiting operator in the continuum is built by using the notion of variational convergence known as $\Gamma$-convergence. The setting of $\Gamma$-convergence, combined with techniques of optimal transportation, provides an effective viewpoint for a range of consistency and stability problems based on minimizing objective functionals on a random sample of a measure.
1.1. Description of spectral clustering
Let $X = \{x_1, \dots, x_n\}$ be a set of vertices and let $W$ be a symmetric $n \times n$ matrix with nonnegative entries. We define $D$, the degree matrix of the weighted graph $(X, W)$, to be the diagonal matrix with $D_{ii} = \sum_{j=1}^n W_{ij}$ for every $i$. Also, we define $L$, the unnormalized graph Laplacian matrix of the weighted graph $(X, W)$, to be
(1.1) $L := D - W.$
We also consider the matrices $L^{sym}$ and $L^{rw}$ given by
$L^{sym} := D^{-1/2} L D^{-1/2}, \qquad L^{rw} := D^{-1} L,$
both of which we refer to as normalized graph Laplacians. The superscript $sym$ indicates the fact that $L^{sym}$ is symmetric, whereas the superscript $rw$ indicates the fact that $L^{rw}$ is connected to the transition probabilities of a random walk that can be defined on the graph. Each of the matrices $L$, $L^{sym}$, $L^{rw}$ is used in a version of spectral clustering. The so-called unnormalized spectral clustering uses the spectrum of the unnormalized graph Laplacian $L$ to embed the point cloud into a lower-dimensional space; typically, a method like k-means applied to the embedded points then provides the desired clusters (see [47]). This is Algorithm 1 below. In the same spirit, the normalized graph Laplacians are used. An algorithm for normalized spectral clustering using $L^{sym}$ was introduced in [29] (see Algorithm 2), and an algorithm using $L^{rw}$ was introduced in [35] (see Algorithm 3).
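To make the pipeline concrete, here is a minimal numerical sketch of unnormalized spectral clustering in the spirit just described (the precise procedure is Algorithm 1; see also [47]). The proximity kernel, the deterministic farthest-point seeding for k-means, and all parameter choices below are our illustrative simplifications, not prescriptions from the paper.

```python
import numpy as np

def unnormalized_spectral_clustering(X, k, eps):
    """Sketch of unnormalized spectral clustering; choices here are illustrative."""
    # Similarity matrix from a proximity kernel: connect points within distance eps.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = (d2 <= eps ** 2).astype(float)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian (1.1)
    # Embed each point using the first k eigenvectors of L.
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :k]
    # A few Lloyd (k-means) iterations with deterministic farthest-point seeding.
    idx = [0]
    for _ in range(k - 1):
        d = ((emb[:, None, :] - emb[idx][None, :, :]) ** 2).sum(-1).min(1)
        idx.append(int(d.argmax()))
    centers = emb[idx]
    for _ in range(50):
        labels = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([emb[labels == i].mean(0) if np.any(labels == i)
                            else centers[i] for i in range(k)])
    return labels

# Two well-separated groups of points are recovered as the two clusters.
A = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3]])
X = np.vstack([A, A + 5.0])
labels = unnormalized_spectral_clustering(X, 2, 1.0)
assert labels[0] == labels[1] == labels[2] == labels[3]
assert labels[4] == labels[5] == labels[6] == labels[7]
assert labels[0] != labels[4]
```

In this toy example the graph has two connected components, so the bottom two eigenvectors are (a rotation of) the component indicators and the embedded points collapse to two locations, which any reasonable k-means pass separates.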
Spectral properties of graph Laplacians have connections to balanced graph cuts. For example, the spectrum of $L^{rw}$ is shown to be connected to the Ncut problem, whereas the spectrum of $L$ is connected to RatioCut (see [47]). A probabilistic interpretation of the spectrum of $L^{rw}$ may be found in [28]. In addition, connections between normalized graph Laplacians, data parametrization, and dimensionality reduction via diffusion maps are developed in [25].
We now present some facts about the matrices $L$, $L^{sym}$, and $L^{rw}$, all of which may be found in [47]. First of all, $L$ is a positive semidefinite symmetric matrix. In fact, for every vector $u \in \mathbb{R}^n$,
(1.2) $\langle u, L u \rangle = \frac{1}{2} \sum_{i,j=1}^n W_{ij} (u_i - u_j)^2,$
where on the left-hand side we are using the usual inner product in $\mathbb{R}^n$. The smallest eigenvalue of $L$ is equal to zero, and its multiplicity is equal to the number of connected components of the weighted graph. The matrix $L^{sym}$ is symmetric and positive semidefinite as well. Moreover, for every $u \in \mathbb{R}^n$,
(1.3) $\langle u, L^{sym} u \rangle = \frac{1}{2} \sum_{i,j=1}^n W_{ij} \left( \frac{u_i}{\sqrt{D_{ii}}} - \frac{u_j}{\sqrt{D_{jj}}} \right)^{\!2}.$
In addition, $0$ is an eigenvalue of $L^{sym}$, with multiplicity equal to the number of connected components of the weighted graph. The vector $D^{1/2} \mathbf{1}$ (where $\mathbf{1}$ is the vector with all entries equal to one) is an eigenvector of $L^{sym}$ with eigenvalue $0$.
The two forms of normalized graph Laplacian are closely related due to the correspondence between the spectra of $L^{sym}$ and $L^{rw}$. In fact, it is straightforward to show that
(1.4) $L^{rw} u = \lambda u \iff L^{sym} \big( D^{1/2} u \big) = \lambda \big( D^{1/2} u \big).$
That is, $L^{rw}$ and $L^{sym}$ have the same eigenvalues, and there is a simple relation between their corresponding eigenvectors.
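The correspondence (1.4) is easy to verify numerically; the small check below (with an arbitrary random weight matrix of our choosing) confirms that $L^{rw}$ and $L^{sym}$ have the same eigenvalues and that $u = D^{-1/2} v$ maps eigenvectors of $L^{sym}$ to eigenvectors of $L^{rw}$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 6))
W = (A + A.T) / 2.0                  # symmetric weights with positive degrees
np.fill_diagonal(W, 0.0)
d = W.sum(axis=1)
L = np.diag(d) - W
L_sym = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)
L_rw = np.diag(1.0 / d) @ L

# The spectra coincide ...
ev_sym = np.sort(np.linalg.eigvalsh(L_sym))
ev_rw = np.sort(np.linalg.eigvals(L_rw).real)
assert np.allclose(ev_sym, ev_rw)

# ... and if L_sym v = lam v, then u = D^{-1/2} v satisfies L_rw u = lam u.
lam, V = np.linalg.eigh(L_sym)
u = np.diag(d ** -0.5) @ V[:, 1]
assert np.allclose(L_rw @ u, lam[1] * u)
```

The second assertion is exactly the computation behind (1.4): $L^{rw} D^{-1/2} v = D^{-1/2} L^{sym} v$.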
1.2. Spectral clustering of point clouds.
Let $X = \{x_1, \dots, x_n\}$ be a point cloud in $\mathbb{R}^d$. To give a weighted graph structure to the set $X$, we consider a kernel $\eta$, that is, a radially symmetric, radially decreasing function decaying to zero sufficiently fast. The kernel is appropriately rescaled to take into account data density. In particular, we let $\eta_\varepsilon$ depend on the length scale $\varepsilon > 0$, where we take $\eta_\varepsilon$ to be defined by
$\eta_\varepsilon(z) := \frac{1}{\varepsilon^d} \, \eta\!\left( \frac{z}{\varepsilon} \right).$
In this way we impose that significant weight is given to edges connecting points up to distance $\varepsilon$. We consider the similarity matrix $W$ defined by
(1.5) $W_{ij} := \eta_\varepsilon(x_i - x_j).$
We denote by $L_n$ the unnormalized graph Laplacian (1.1) of the weighted graph $(X, W)$, that is,
(1.6) $L_n := D - W,$
where $D$ is the diagonal matrix with $D_{ii} = \sum_{j=1}^n W_{ij}$.
We define the Dirichlet energy on the graph of a function $u : X \to \mathbb{R}$ to be
(1.7) $E_n(u) := \frac{1}{2} \sum_{i,j=1}^n W_{ij} \, (u(x_i) - u(x_j))^2.$
The fact that $\eta$ is a symmetric function guarantees that $W$ is symmetric, and thus all the facts presented in Subsection 1.1 apply. In particular, (1.2) can be stated as: for every function $u : X \to \mathbb{R}$,
(1.8) $\langle u, L_n u \rangle = \frac{1}{2} \sum_{i,j=1}^n W_{ij} \, (u(x_i) - u(x_j))^2,$
where on the left-hand side we have identified the function $u$ with the vector $(u(x_1), \dots, u(x_n))$ in $\mathbb{R}^n$, and where $\langle \cdot, \cdot \rangle$ denotes the usual inner product in $\mathbb{R}^n$.
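The identity (1.8) can be checked directly on a small random point cloud; the kernel and all parameters below are our choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 2))                         # a small point cloud in R^2
eps = 0.8
# Rescaled indicator kernel eta_eps(z) = eps^{-d} 1{|z| <= eps}, with d = 2.
dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
W = (dist <= eps).astype(float) / eps ** 2
L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian (1.6)
u = rng.random(8)
lhs = u @ L @ u                                # <u, L_n u>
rhs = 0.5 * np.sum(W * (u[:, None] - u[None, :]) ** 2)
assert np.isclose(lhs, rhs)
```

Note that the diagonal entries of $W$ play no role: they cancel between $D$ and $W$ on the left-hand side and multiply zero differences on the right-hand side.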
The symmetric normalized graph Laplacian of the weighted graph $(X, W)$ is given by $L^{sym}_n := D^{-1/2} L_n D^{-1/2}$.
Since the kernel $\eta$ is assumed radially symmetric, we can write $\eta(z) = \eta(|z|)$ for all $z \in \mathbb{R}^d$, where (abusing notation slightly) $\eta : [0, \infty) \to [0, \infty)$ is the radial profile. We assume the following properties of $\eta$:
(K1) $\eta(0) > 0$ and $\eta$ is continuous at $0$;
(K2) $\eta$ is nonincreasing;
(K3) the integral $\int_0^\infty \eta(r) \, r^{d+1} \, dr$ is finite.
Remark 1.1.
We remark that the last assumption on $\eta$ is equivalent to imposing that the surface tension
(1.9) $\sigma := \int_{\mathbb{R}^d} \eta(|h|) \, |h_1|^2 \, dh$
is finite, where $h_1$ represents the first component of the vector $h$. The second condition implies that more relevance is given to the interactions between points that are close to each other. We notice that the class of acceptable kernels is quite broad and includes both Gaussian kernels and discontinuous kernels, like the one defined by a radial profile of the form $\eta(r) = 1$ for $r \le 1$ and $\eta(r) = 0$ for $r > 1$.
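For instance, in dimension $d = 1$ the surface tension (1.9) reduces to $\int_{\mathbb{R}} \eta(|h|)\, h^2 \, dh$, which a simple quadrature (our discretization) recovers for both kernels just mentioned: the value is $2/3$ for the indicator profile and $\sqrt{\pi}/2$ for the Gaussian profile $\eta(r) = e^{-r^2}$.

```python
import numpy as np

def surface_tension_1d(profile, hmax=10.0, n=400001):
    """Riemann-sum approximation of sigma = integral of eta(|h|) h^2 dh in d = 1."""
    h = np.linspace(-hmax, hmax, n)
    return float(np.sum(profile(np.abs(h)) * h ** 2) * (h[1] - h[0]))

indicator = lambda r: (r <= 1.0).astype(float)   # eta = 1 on [0, 1], 0 beyond
gaussian = lambda r: np.exp(-r ** 2)

s_ind = surface_tension_1d(indicator)
s_gau = surface_tension_1d(gaussian)
assert abs(s_ind - 2.0 / 3.0) < 1e-3             # exact value: 2/3
assert abs(s_gau - np.sqrt(np.pi) / 2.0) < 1e-6  # exact value: sqrt(pi)/2
```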
We focus on point clouds that are obtained as independent samples from a given distribution. Specifically, consider an open, bounded, and connected set $D \subseteq \mathbb{R}^d$ with Lipschitz boundary (i.e., locally the graph of a Lipschitz function) and consider a probability measure $\nu$ supported on $\overline{D}$. We assume $\nu$ has a continuous density $\rho$, which is bounded above and below by positive constants on $D$. We assume that the points $x_1, x_2, \dots$ (i.i.d. random points) are chosen according to the distribution $\nu$. We consider the graph with nodes $x_1, \dots, x_n$ and edge weights defined in (1.5). For an appropriate scaling of $\varepsilon$ with respect to $n$, we study the limiting behavior of the eigenvalues and eigenvectors of the graph Laplacians as $n \to \infty$. We now describe the continuum problems which characterize the limit.
1.3. Description of spectral clustering in the continuum setting: the unnormalized case
Let the domain $D$ and the "ground-truth" measure $\nu$ with density $\rho$ be as above. The object that characterizes the limit of the graph Laplacians $L_n$ as $n \to \infty$ is the differential operator
(1.10) $\mathcal{L} u := -\frac{1}{\rho} \operatorname{div}\!\left( \rho^2 \nabla u \right).$
We consider the pairs $\lambda \in \mathbb{R}$ and $u \in H^1(D)$ (the Sobolev space of $L^2(D)$ functions with distributional derivative in $L^2(D)$), with $u$ not identically equal to zero, such that
(1.11) $\mathcal{L} u = \lambda u.$
A function $u$ as above is said to be an eigenfunction of $\mathcal{L}$ with corresponding eigenvalue $\lambda$. In Subsection 2.4 we discuss the precise definition of a solution of (1.11) and present some facts about it. In particular, $\mathcal{L}$ is a positive semidefinite self-adjoint operator with respect to the inner product $\langle \cdot, \cdot \rangle_\nu$ of $L^2(\nu)$, and it has a discrete spectrum that can be arranged as an increasing sequence converging to infinity,
$0 = \lambda_1 \le \lambda_2 \le \lambda_3 \le \cdots,$
where each eigenvalue is repeated according to its (finite) multiplicity. Furthermore, there exists an orthonormal basis of $L^2(\nu)$ (with respect to the inner product $\langle \cdot, \cdot \rangle_\nu$) consisting of eigenfunctions of $\mathcal{L}$.
Given a Borel mapping $T : D \to \mathbb{R}^k$, by $T_\sharp \nu$ we denote the push-forward of the measure $\nu$, namely the measure for which $T_\sharp \nu(A) = \nu(T^{-1}(A))$ for any Borel set $A \subseteq \mathbb{R}^k$. The continuum spectral clustering analogous to the discrete one of Algorithm 1 is as follows. Let $u_1, \dots, u_k$ be the orthonormal set of eigenfunctions corresponding to the eigenvalues $\lambda_1 \le \dots \le \lambda_k$, and let $U := (u_1, \dots, u_k) : D \to \mathbb{R}^k$. Consider the measure $\tilde{\nu} := U_\sharp \nu$. Let $\tilde{G}_1, \dots, \tilde{G}_k$ be the clusters obtained by k-means clustering of $\tilde{\nu}$. Then, for $i = 1, \dots, k$, the sets $U^{-1}(\tilde{G}_i)$ define the spectral clustering of $\nu$.
1.4. Description of spectral clustering in the continuum setting: the normalized cases
The object that characterizes the limit of the symmetric normalized graph Laplacians $L^{sym}_n$ as $n \to \infty$ is a differential operator $\mathcal{L}^{sym}$. We consider the space
(1.12) 
The spectrum of $\mathcal{L}^{sym}$ is the set of pairs $\lambda \in \mathbb{R}$ and $u$ in the space (1.12), where $u$ is not identically equal to zero, such that
(1.13) $\mathcal{L}^{sym} u = \lambda u.$
The sense in which (1.13) holds is made precise in Subsection 2.4. The spectrum of the operator $\mathcal{L}^{sym}$ has properties similar to those of the spectrum of $\mathcal{L}$. We let
$0 = \lambda^{sym}_1 \le \lambda^{sym}_2 \le \lambda^{sym}_3 \le \cdots$
denote the eigenvalues of $\mathcal{L}^{sym}$, repeated according to multiplicity.
The continuum spectral clustering analogous to the discrete one of Algorithm 2 is as follows. Let $u_1, \dots, u_k$ be the orthonormal set of eigenfunctions (with respect to the inner product $\langle \cdot, \cdot \rangle_\nu$) corresponding to the eigenvalues $\lambda^{sym}_1 \le \dots \le \lambda^{sym}_k$. Normalize them, and let $U := (u_1, \dots, u_k)$. Consider the measure $\tilde{\nu} := U_\sharp \nu$. Let $\tilde{G}_1, \dots, \tilde{G}_k$ be the clusters obtained by k-means clustering of $\tilde{\nu}$. Then, for $i = 1, \dots, k$, the sets $U^{-1}(\tilde{G}_i)$ define the spectral clustering of $\nu$.
Finally, the limit of the graph Laplacians $L^{rw}_n$ is described by an operator $\mathcal{L}^{rw}$. As discussed in Subsection 2.4, the eigenvalues of $\mathcal{L}^{rw}$ are equal to the eigenvalues of $\mathcal{L}^{sym}$. The continuum clustering, which is analogous to the discrete one of Algorithm 3, is as in Subsection 1.3, where eigenfunctions of $\mathcal{L}^{rw}$ are used.
1.5. Passage from discrete to continuum.
We are interested in showing that, as $n \to \infty$, the eigenvalues of the discrete graph Laplacians and the associated eigenvectors converge towards eigenvalues and eigenfunctions of the corresponding differential operators. The issue that arises is how to compare functions in the discrete and continuum settings. Typically this is achieved by introducing an interpolation operator that takes discretely defined functions to continuum ones, and a restriction operator which restricts a continuum function to the discrete setting. For this approach to work, some smoothness of the functions considered is required. Furthermore, the choice of the interpolation operator and its properties add an intermediate step that needs to be understood.
We choose a different route and introduce a way to compare the functions between the settings directly. This approach is quite general and does not require any regularity assumptions. We use the topologies introduced in [15]; in particular, in this paper we focus on the $TL^2$ topology, which we now recall. Denote by $\nu_n$ the empirical measure associated to the data points, that is,
(1.14) $\nu_n := \frac{1}{n} \sum_{i=1}^n \delta_{x_i}.$
For a given function $u \in L^2(\nu)$, the question is how to compare $u$ with a function $u_n \in L^2(\nu_n)$ (a function defined on the set of data points). More generally, one can consider the problem of how to compare functions in $L^2(\mu)$ with those in $L^2(\theta)$ for arbitrary probability measures $\mu$, $\theta$ on $\mathbb{R}^d$. We define the set of objects that includes both the functions in the discrete setting and those in the continuum setting as follows:
$TL^2 := \left\{ (\mu, f) \,:\, \mu \in \mathcal{P}(\mathbb{R}^d), \ f \in L^2(\mu) \right\},$
where $\mathcal{P}(\mathbb{R}^d)$ denotes the set of Borel probability measures on $\mathbb{R}^d$. For $(\mu, f)$ and $(\theta, g)$ in $TL^2$ we define the distance
$d_{TL^2}\big( (\mu, f), (\theta, g) \big) := \inf_{\pi \in \Gamma(\mu, \theta)} \left( \iint \left( |x - y|^2 + |f(x) - g(y)|^2 \right) d\pi(x, y) \right)^{1/2},$
where $\Gamma(\mu, \theta)$ is the set of all couplings (or transportation plans) between $\mu$ and $\theta$, that is, the set of all Borel probability measures on $\mathbb{R}^d \times \mathbb{R}^d$ for which the marginal on the first variable is $\mu$ and the marginal on the second variable is $\theta$. It was proved in [15] that $d_{TL^2}$ is indeed a metric on $TL^2$. As remarked in [15], one of the nice features of the convergence in $TL^2$ is that it simultaneously generalizes the weak convergence of probability measures and the $L^2$ convergence of functions. It also provides us with a way to compare functions which are supported on sets as different as point clouds and continuous domains. In Subsection 2.1 we present more details about this metric.
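For empirical measures with the same number of atoms, an optimal coupling in the definition of $d_{TL^2}$ may be taken to be a matching of the atoms, so the distance can be computed by brute force on tiny examples. The helper below is our illustration (the name and the restriction to equal atom counts are our simplifications):

```python
import itertools
import numpy as np

def tl2_empirical(x, f, y, g):
    """TL^2 distance between empirical measures with the same number of atoms.
    For such measures an optimal coupling can be taken to be a matching, so we
    minimize over permutations (only sensible for tiny examples)."""
    n = len(x)
    best = np.inf
    for perm in itertools.permutations(range(n)):
        p = list(perm)
        cost = np.mean((x - y[p]) ** 2 + (f - g[p]) ** 2)
        best = min(best, cost)
    return float(np.sqrt(best))

x = np.array([0.0, 1.0]); f = np.array([0.0, 0.0])
y = np.array([0.0, 1.0]); g = np.array([0.0, 3.0])
assert tl2_empirical(x, f, x, f) == 0.0
# Identity matching: the per-atom costs are 0 and 0 + 3^2 = 9, with mean 4.5,
# so the distance is sqrt(4.5); the swapped matching is more expensive.
assert np.isclose(tl2_empirical(x, f, y, g), np.sqrt(4.5))
```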
For a given $\mu \in \mathcal{P}(\mathbb{R}^d)$ we denote by $L^2(\mu)$ the space of square-integrable functions with respect to the measure $\mu$. Also, for $f, g \in L^2(\mu)$ we write
$\langle f, g \rangle_\mu := \int f g \, d\mu.$
Finally, if the measure $\mu$ has a density $\rho$ with respect to the Lebesgue measure, that is, if $d\mu = \rho \, dx$, we may write $L^2(\rho)$ and $\langle \cdot, \cdot \rangle_\rho$ instead of $L^2(\mu)$ and $\langle \cdot, \cdot \rangle_\mu$.
1.6. Convergence of eigenvalues, eigenvectors, and of spectral clustering: the unnormalized case.
Here we present one of the main results of this paper. We state the conditions on $\varepsilon_n$ for the spectrum of the unnormalized graph Laplacian $L_n$, given in (1.6), to converge to the spectrum of $\mathcal{L}$, given by (1.10), and for the spectral clustering of Algorithm 1 to converge to the clustering of Subsection 1.3. Let $\lambda_1 \le \lambda_2 \le \cdots$ be the eigenvalues of $\mathcal{L}$ and $u_1, u_2, \dots$ the corresponding orthonormal eigenfunctions, as in Subsection 1.3. We recall that orthogonality is considered with respect to the inner product $\langle \cdot, \cdot \rangle_\nu$ in $L^2(\nu)$.
To state the results it is convenient to introduce the sequence $\hat\lambda_1 < \hat\lambda_2 < \cdots$ of distinct eigenvalues of $\mathcal{L}$. For a given $k \in \mathbb{N}$, we denote by $s(k)$ the multiplicity of the eigenvalue $\hat\lambda_k$, and we let $i(k)$ be such that $\hat\lambda_k = \lambda_{i(k)} = \cdots = \lambda_{i(k)+s(k)-1}$. Also, we denote by $\operatorname{Proj}_k$ the projection (with respect to the inner product $\langle \cdot, \cdot \rangle_\nu$) onto the eigenspace of $\mathcal{L}$ associated to the eigenvalue $\hat\lambda_k$. For all large enough $n$, we denote by $\operatorname{Proj}^n_k$ the projection (with respect to the inner product $\langle \cdot, \cdot \rangle_{\nu_n}$) onto the space generated by all the eigenvectors of $L_n$ associated to the eigenvalues $\lambda^n_{i(k)}, \dots, \lambda^n_{i(k)+s(k)-1}$. Here, as in the rest of the paper, we identify $L^2(\nu_n)$ with the space $\mathbb{R}^n$.
Theorem 1.2 (Convergence of the spectra of the unnormalized graph Laplacians).
Let $d \ge 2$ and let $D \subseteq \mathbb{R}^d$ be an open, bounded, connected set with Lipschitz boundary. Let $\nu$ be a probability measure on $D$ with continuous density $\rho$, satisfying
(1.15) $m \le \rho(x) \le M \quad \text{for all } x \in D,$
for some constants $0 < m \le M$. Let $x_1, x_2, \dots$ be a sequence of i.i.d. random points chosen according to $\nu$. Let $(\varepsilon_n)_{n \ge 1}$ be a sequence of positive numbers converging to $0$ and satisfying
(1.16) $\lim_{n \to \infty} \frac{(\log n)^{p_d}}{n^{1/d}} \, \frac{1}{\varepsilon_n} = 0,$
where $p_d = 3/4$ for $d = 2$ and $p_d = 1/d$ for $d \ge 3$.
Assume the kernel $\eta$ satisfies conditions (K1)-(K3). Then, with probability one, all of the following statements hold true:
1. Convergence of eigenvalues: for every $k \in \mathbb{N}$, the rescaled eigenvalues $\frac{2}{n \varepsilon_n^2} \lambda^n_k$ of $L_n$ converge to $\sigma \lambda_k$ as $n \to \infty$, where $\sigma$ is the surface tension (1.9).
2. Convergence of eigenvectors: for every $k \in \mathbb{N}$, every sequence $(u_n)_{n \ge 1}$ with $u_n$ an eigenvector of $L_n$ associated to the eigenvalue $\lambda^n_k$ and with $\|u_n\|_{L^2(\nu_n)} = 1$ is precompact in $TL^2$. Additionally, whenever $u_n \to u$ in $TL^2$ along a subsequence as $n \to \infty$, then $\|u\|_{L^2(\nu)} = 1$ and $u$ is an eigenfunction of $\mathcal{L}$ associated to $\lambda_k$.
3. Convergence of eigenprojections: for all $k \in \mathbb{N}$ and an arbitrary sequence $(v_n)_{n \ge 1}$ with $v_n \in L^2(\nu_n)$, if $v_n \to v$ in $TL^2$ as $n \to \infty$ along some subsequence, then along that subsequence $\operatorname{Proj}^n_k v_n \to \operatorname{Proj}_k v$ in $TL^2$.
4. Consistency of spectral clustering: let $G^n_1, \dots, G^n_k$ be the clusters obtained in Algorithm 1, and let $\nu^i_n := \nu_n |_{G^n_i}$ (the restriction of $\nu_n$ to $G^n_i$) for $i = 1, \dots, k$. Then the sequence $\big( (\nu^1_n, \dots, \nu^k_n) \big)_{n \ge 1}$ is precompact with respect to the weak convergence of measures, and furthermore, if it converges along a subsequence to $(\nu^1, \dots, \nu^k)$, then $(\nu^1, \dots, \nu^k)$ is a spectral clustering of $\nu$ as described in Subsection 1.3.
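The spectral convergence can be observed numerically in the simplest one-dimensional situation: for the uniform density on $[0,1]$ the continuum operator (1.10) is $-u''$ with Neumann-type boundary behavior, so $\lambda_2 = \pi^2$, and for the indicator profile the surface tension (1.9) equals $2/3$ in $d = 1$. In the sketch below, a deterministic grid stands in for the i.i.d. sample, the rescaling is the one implicit in the energy (1.18), and all parameter choices are ours.

```python
import numpy as np

# Rescaled second eigenvalue of the graph Laplacian on [0,1] with the
# indicator kernel should approach sigma * lambda_2 = (2/3) * pi^2.
n, eps = 1200, 0.05
x = (np.arange(n) + 0.5) / n
W = (np.abs(x[:, None] - x[None, :]) <= eps) / eps     # eta_eps, d = 1
L = np.diag(W.sum(axis=1)) - W
vals = np.sort(np.linalg.eigvalsh(L))
lam2 = (2.0 / (n * eps ** 2)) * vals[1]                # rescaled 2nd eigenvalue
target = (2.0 / 3.0) * np.pi ** 2                      # sigma * lambda_2
assert abs(lam2 - target) / target < 0.1
```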
Remark 1.3.
We remark that although the choice of the $TL^2$ topology used in the previous theorem may seem unusual at first sight, it actually reduces to a more common notion of convergence (like the one used in [48], which we describe below) in the presence of regularity assumptions on the density $\rho$ and the domain $D$. In fact, assume for simplicity that $D$ has smooth boundary and that $\rho$ is a smooth function. Consider a sequence $(u_n)_{n \ge 1}$, where $u_n$ is an eigenvector of $L_n$ associated to the eigenvalue $\lambda^n_k$ and satisfying $\|u_n\|_{L^2(\nu_n)} = 1$. The second statement in Theorem 1.2 says that, up to a subsequence, $u_n \to u$ in $TL^2$, where $u$ is an eigenfunction of $\mathcal{L}$ associated to $\lambda_k$. From the regularity theory of elliptic PDEs it follows that $u$ is smooth up to the boundary. In particular, it makes sense to define a function $\tilde{u}_n$ on the point cloud by simply taking the restriction of $u$ to the points $x_1, \dots, x_n$. It is straightforward to check that $\tilde{u}_n \to u$ in $TL^2$, due to the smoothness of $u$. In turn, $u_n \to u$ and $\tilde{u}_n \to u$ in $TL^2$ imply that $\|u_n - \tilde{u}_n\|_{L^2(\nu_n)} \to 0$. From this and Proposition 2.1, we conclude that
(1.17) $\frac{1}{n} \sum_{i=1}^n |u_n(x_i) - u(x_i)|^2 \longrightarrow 0 \quad \text{as } n \to \infty.$
This is precisely the mode of convergence used in [48].
The proof of Theorem 1.2 relies on the study of the limiting behavior of the following rescaled form of the Dirichlet energy (1.7) on the graph:
(1.18) $G_{n, \varepsilon_n}(u) := \frac{1}{\varepsilon_n^2 n^2} \sum_{i,j=1}^n W_{ij} \, |u(x_i) - u(x_j)|^2.$
The type of limit which is relevant for the problem is the one given by the notion of variational convergence known as $\Gamma$-convergence, which is recalled in Subsection 2.2. This notion of convergence is particularly suitable for studying the convergence of minimizers of objective functionals on graphs as $n \to \infty$, as discussed in [17].
The relevant continuum energy is the weighted Dirichlet energy $G$:
(1.19) $G(u) := \int_D |\nabla u(x)|^2 \, \rho^2(x) \, dx.$
Theorem 1.4 ($\Gamma$-convergence of Dirichlet energies).
Consider the same setting as in Theorem 1.2 and the same assumptions on $\eta$ and on $\varepsilon_n$. Then the functionals $G_{n, \varepsilon_n}$, defined by (1.18), $\Gamma$-converge to $\sigma G$ as $n \to \infty$ in the $TL^2$ sense, where $\sigma$ is given by (1.9) and $G$ is the weighted Dirichlet energy with weight $\rho^2$ defined in (1.19). Moreover, the sequence of functionals $(G_{n, \varepsilon_n})_{n \ge 1}$ satisfies the compactness property with respect to the $TL^2$ metric. That is, every sequence $(u_n)_{n \ge 1}$ with $u_n \in L^2(\nu_n)$ for which
$\sup_{n \ge 1} \|u_n\|_{L^2(\nu_n)} < \infty \quad \text{and} \quad \sup_{n \ge 1} G_{n, \varepsilon_n}(u_n) < \infty$
is precompact in $TL^2$.
The fact that the weight in the limiting functional is $\rho^2$ (and not $\rho$) essentially follows from the fact that the graph Dirichlet energy defined in (1.18) is a double sum. This is the same weight that shows up in the study of the continuum limit of the graph total variation in [15]. Theorem 1.4 is analogous to Theorems 1.1 and 1.2 of [15] combined.
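As a sanity check of the role of the weight $\rho^2$, one can evaluate the rescaled energy (1.18) along a fixed smooth function for a nonuniform density. With $\rho(x) = \tfrac{2}{3}(1 + x)$ on $[0, 1]$, the indicator profile ($\sigma = 2/3$ in $d = 1$), and $u(x) = \sin(\pi x)$, the limit $\sigma \int_0^1 u'(x)^2 \rho(x)^2 \, dx$ has a closed form. In the sketch below, a deterministic quantile grid stands in for the i.i.d. sample, and all parameter choices are ours.

```python
import numpy as np

n, eps = 2000, 0.03
t = (np.arange(n) + 0.5) / n
x = -1.0 + np.sqrt(1.0 + 3.0 * t)    # quantiles of the density rho = (2/3)(1+x)
u = np.sin(np.pi * x)
ker = (np.abs(x[:, None] - x[None, :]) <= eps) / eps          # eta_eps, d = 1
G_n = ((u[:, None] - u[None, :]) ** 2 * ker).sum() / (eps ** 2 * n ** 2)
# sigma * \int_0^1 pi^2 cos(pi x)^2 (4/9)(1+x)^2 dx, computed in closed form:
limit = (2.0 / 3.0) * np.pi ** 2 * (4.0 / 9.0) * (7.0 / 6.0 + 1.0 / (4.0 * np.pi ** 2))
assert abs(G_n - limit) / limit < 0.1
```

The agreement degrades near the boundary (where neighborhoods are truncated), which is why the tolerance above is loose.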
1.7. Convergence of eigenvalues, eigenvectors, and of spectral clustering: the normalized case.
We also study the limit of the spectra of $L^{sym}_n$, the symmetric normalized graph Laplacian, which we recall is given by $L^{sym}_n = D^{-1/2} L_n D^{-1/2}$. For a function $u : X \to \mathbb{R}$, (1.3) can be written as
(1.20) $\langle u, L^{sym}_n u \rangle = \frac{1}{2} \sum_{i,j=1}^n W_{ij} \left( \frac{u(x_i)}{\sqrt{D_{ii}}} - \frac{u(x_j)}{\sqrt{D_{jj}}} \right)^{\!2}.$
We denote by
$\lambda^{n,sym}_1 \le \lambda^{n,sym}_2 \le \cdots$
the eigenvalues of $L^{sym}_n$, repeated according to multiplicity. Their limit is described by the differential operator $\mathcal{L}^{sym}$ of Subsection 1.4. Let
$0 = \lambda^{sym}_1 \le \lambda^{sym}_2 \le \cdots$
denote the eigenvalues of $\mathcal{L}^{sym}$, repeated according to multiplicity. We write $\hat\lambda^{sym}_1 < \hat\lambda^{sym}_2 < \cdots$ to denote the distinct eigenvalues of $\mathcal{L}^{sym}$. For a given $k \in \mathbb{N}$, we denote by $s(k)$ the multiplicity of the eigenvalue $\hat\lambda^{sym}_k$, and we let $i(k)$ be such that $\hat\lambda^{sym}_k = \lambda^{sym}_{i(k)} = \cdots = \lambda^{sym}_{i(k)+s(k)-1}$. We define $\operatorname{Proj}_k$ and $\operatorname{Proj}^n_k$ analogously to the way we defined them in the paragraph preceding Theorem 1.2. The following result is analogous to Theorem 1.2.
Theorem 1.5 (Convergence of the spectra of the normalized graph Laplacians).
Consider the same setting as in Theorem 1.2 and the same assumptions on $\eta$ and on $\varepsilon_n$. Then, with probability one, all of the following statements hold:
1. Convergence of eigenvalues: for every $k \in \mathbb{N}$, the appropriately rescaled eigenvalues $\lambda^{n,sym}_k$ of $L^{sym}_n$ converge to $\lambda^{sym}_k$ as $n \to \infty$.
2. Convergence of eigenvectors: for every $k \in \mathbb{N}$, every sequence $(u_n)_{n \ge 1}$ with $u_n$ an eigenvector of $L^{sym}_n$ associated to the eigenvalue $\lambda^{n,sym}_k$ and with $\|u_n\|_{L^2(\nu_n)} = 1$ is precompact in $TL^2$. Additionally, whenever $u_n \to u$ in $TL^2$ along a subsequence as $n \to \infty$, then $\|u\|_{L^2(\nu)} = 1$ and $u$ is an eigenfunction of $\mathcal{L}^{sym}$ associated to $\lambda^{sym}_k$.
3. Convergence of eigenprojections: for all $k \in \mathbb{N}$ and an arbitrary sequence $(v_n)_{n \ge 1}$ with $v_n \in L^2(\nu_n)$, if $v_n \to v$ in $TL^2$ along a subsequence as $n \to \infty$, then along that subsequence $\operatorname{Proj}^n_k v_n \to \operatorname{Proj}_k v$ in $TL^2$.
4. Consistency of spectral clustering: let $G^n_1, \dots, G^n_k$ be the clusters obtained in Algorithm 2, and let $\nu^i_n := \nu_n |_{G^n_i}$ for $i = 1, \dots, k$. Then the sequence $\big( (\nu^1_n, \dots, \nu^k_n) \big)_{n \ge 1}$ is precompact with respect to the weak convergence of measures, and furthermore, if it converges along a subsequence to $(\nu^1, \dots, \nu^k)$, then $(\nu^1, \dots, \nu^k)$ is a spectral clustering of $\nu$ as described in Subsection 1.4.
The proof of the previous theorem is completely analogous to that of Theorem 1.2, once one has proved the variational convergence of the relevant energies. Indeed, consider the rescaled normalized Dirichlet energies $G^{sym}_{n, \varepsilon_n}$ defined by
(1.22) 
and the continuum energy $G^{sym}$ defined by
(1.23) 
on the space defined in (1.12). The following holds.
Theorem 1.6 ($\Gamma$-convergence of normalized Dirichlet energies).
With the same setting as in Theorem 1.2 and the same assumptions on $\eta$ and on $\varepsilon_n$, the functionals $G^{sym}_{n, \varepsilon_n}$, defined by (1.22), $\Gamma$-converge to $G^{sym}$ as $n \to \infty$ in the $TL^2$ sense, where $G^{sym}$ is defined in (1.23) and the relevant constants are given by (1.9) and (1.21), respectively. Moreover, the sequence of functionals $(G^{sym}_{n, \varepsilon_n})_{n \ge 1}$ satisfies the compactness property with respect to the $TL^2$ metric. That is, every sequence $(u_n)_{n \ge 1}$ with $u_n \in L^2(\nu_n)$ for which both $\|u_n\|_{L^2(\nu_n)}$ and $G^{sym}_{n, \varepsilon_n}(u_n)$ are bounded uniformly in $n$ is precompact in $TL^2$.
Finally, we consider the limit of the spectrum of $L^{rw}_n = D^{-1} L_n$. The limit is described by the operator $\mathcal{L}^{rw}$ of Subsection 1.4. As discussed in Subsection 2.4, the eigenvalues of $\mathcal{L}^{rw}$ are equal to the eigenvalues of $\mathcal{L}^{sym}$. Thus, from (1.4) and Theorem 1.5, it follows that, after appropriate rescaling, the eigenvalues of $L^{rw}_n$ converge to the eigenvalues of $\mathcal{L}^{rw}$. Moreover, using again (1.4) and Theorem 1.5, we have the following convergence of eigenvectors.
Corollary 1.7.
Consider the same setting as in Theorem 1.2 and the same assumptions on $\eta$ and on $\varepsilon_n$. Then, with probability one, the following statement holds: for every $k \in \mathbb{N}$, every sequence $(u_n)_{n \ge 1}$ with $u_n$ an eigenvector of $L^{rw}_n$ associated to the eigenvalue $\lambda^{n,rw}_k$ and with $\|u_n\|_{L^2(\nu_n)} = 1$ is precompact in $TL^2$. Additionally, all of its cluster points are eigenfunctions of $\mathcal{L}^{rw}$ with eigenvalue $\lambda^{rw}_k$. Finally, the clusters obtained by Algorithm 3 converge to the clusters obtained by the spectral clustering corresponding to $\mathcal{L}^{rw}$, described at the end of Subsection 1.4.
1.8. Stability of k-means clustering
One of the final elements of the proof of the consistency of spectral clustering (statement 4 in Theorems 1.2 and 1.5) requires new results on the stability of k-means clustering with respect to perturbations of the measure being clustered. These results extend the result of Pollard [31], who proved the consistency of k-means clustering. It is important to extend such results because, in our setting, at the discrete level, the point set used as input for the k-means algorithm is not a sample from a given distribution, and thus one cannot apply the results in [31] directly.
Given $k \in \mathbb{N}$ and given a measure $\mu$ on $\mathbb{R}^d$ with finite second moments, let $F_\mu : (\mathbb{R}^d)^k \to [0, \infty)$ be defined by
(1.24) $F_\mu(z_1, \dots, z_k) := \int_{\mathbb{R}^d} \min_{1 \le i \le k} |x - z_i|^2 \, d\mu(x),$
where $z_i \in \mathbb{R}^d$ for $i = 1, \dots, k$. For brevity we write $F_\mu$ both for the function of the $k$-tuple $(z_1, \dots, z_k)$ and for the corresponding function of the set $\{z_1, \dots, z_k\}$, where the object considered should be clear from the context. The problem of k-means clustering is to minimize $F_\mu$. In Subsection 2.3 we show the existence of minimizers of the functional (1.24). The main result is the following.
Theorem 1.8 (Stability of k-means clustering).
Let $k \in \mathbb{N}$. Let $\mu$ be a Borel probability measure on $\mathbb{R}^d$ with finite second moments and whose support has at least $k$ points. Assume $(\mu_n)_{n \ge 1}$ is a sequence of probability measures on $\mathbb{R}^d$ with finite second moments which converges in the Wasserstein distance (see (2.1)) to $\mu$. Then,
$\min F_{\mu_n} \longrightarrow \min F_\mu \quad \text{as } n \to \infty.$
Moreover, if $\mathcal{Z}_n$ is a minimizer of $F_{\mu_n}$ for all $n$, then the sequence $(\mathcal{Z}_n)_{n \ge 1}$ is precompact, and all of its accumulation points are minimizers of $F_\mu$.
We present the proof of the previous theorem in Subsection 2.3.
The clusters corresponding to a minimizing set $\{z_1, \dots, z_k\}$ of $F_\mu$ are the Voronoi cells $V_i := \{ x \in \mathbb{R}^d : |x - z_i| \le |x - z_j| \text{ for all } j \}$. We prove in Lemma 2.10 that the $\mu$-measure of the boundaries of the clusters is zero.
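The stability asserted by Theorem 1.8 can be observed on measures with tiny support, where the k-means energy (1.24) can be minimized by brute force. The helper below is our illustration (for a discrete measure, the optimal center of a fixed group of atoms is its weighted mean, so it suffices to enumerate partitions):

```python
import itertools
import numpy as np

def kmeans_energy_min(points, weights, k):
    """Brute-force minimum of the k-means energy (1.24) for the discrete
    measure sum_i weights[i] * delta_{points[i]} (viable only for tiny supports)."""
    best = np.inf
    for labels in itertools.product(range(k), repeat=len(points)):
        labels = np.array(labels)
        cost = 0.0
        for c in range(k):
            m = labels == c
            if m.any():
                z = np.average(points[m], weights=weights[m])  # optimal center
                cost += np.sum(weights[m] * (points[m] - z) ** 2)
        best = min(best, cost)
    return best

pts = np.array([0.0, 0.1, 5.0, 5.1])
w = np.full(4, 0.25)
e = kmeans_energy_min(pts, w, 2)
assert np.isclose(e, 4 * 0.25 * 0.05 ** 2)   # optimal centers: 0.05 and 5.05
# Perturbing the atoms slightly (a small Wasserstein perturbation of the
# measure) changes the minimal energy only slightly, as Theorem 1.8 asserts.
e2 = kmeans_energy_min(pts + np.array([0.001, -0.001, 0.002, 0.0]), w, 2)
assert abs(e - e2) < 1e-3
```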