 # How the result of graph clustering methods depends on the construction of the graph

We study the setting of graph-based clustering algorithms such as spectral clustering. Given a set of data points, one first has to construct a graph on the data points and then apply a graph clustering algorithm to find a suitable partition of the graph. Our main question is whether and how the construction of the graph (choice of the graph type, choice of parameters, choice of weights) influences the final clustering result. To this end we study the convergence of cluster quality measures such as the normalized cut or the Cheeger cut on various kinds of random geometric graphs as the sample size tends to infinity. It turns out that the limit values of the same objective function are systematically different on different types of graphs. This implies that clustering results systematically depend on the graph and can be very different for different graph types. We provide examples to illustrate the implications for spectral clustering.

## 1 Introduction

Nowadays it is very popular to represent and analyze statistical data using random graph or network models. The vertices in such a graph correspond to data points, whereas edges in the graph indicate that the adjacent vertices are “similar” or “related” to each other. In this paper we consider the problem of data clustering in a random geometric graph setting. We are given a sample of points drawn from some underlying probability distribution on a metric space. The goal is to cluster the sample points into “meaningful groups”. A standard procedure is to first transform the data to a neighborhood graph, for example a k-nearest neighbor graph. In a second step, the cluster structure is then extracted from the graph: clusters correspond to regions in the graph that are tightly connected within themselves and only sparsely connected to other clusters.

There already exist a couple of papers that study statistical properties of this procedure in a particular setting: when the true underlying clusters are defined to be the connected components of a density level set in the underlying space. In this setting, a test for detecting cluster structure and outliers is proposed in Brito et al. (1997). In Biau et al. (2007) the authors build a neighborhood graph in such a way that its connected components converge to the underlying true clusters in the data. Maier et al. (2009a) compare the properties of different random graph models for identifying clusters of the density level sets.

While the definition of clusters as connected components of level sets is appealing from a theoretical point of view, the corresponding algorithms are often too simplistic and only moderately successful in practice. From a practical point of view, clustering methods based on graph partitioning algorithms are more robust: clusters do not have to be perfectly disconnected in the graph, but are allowed to have a small number of connecting edges between them. Graph partitioning methods are widely used in practice. The most prominent algorithm in this class is spectral clustering, which optimizes the normalized cut (Ncut) objective function (see below for exact definitions, and von Luxburg (2007) for a tutorial on spectral clustering). It is already known under what circumstances spectral clustering is statistically consistent (von Luxburg et al., 2008). However, there is one important open question. When applying graph-based methods to given sets of data points, one obviously has to build a graph first, and there are several important choices to be made: the type of the graph (for example, the k-nearest neighbor graph, the r-neighborhood graph or a Gaussian similarity graph), the connectivity parameter (k, r or the bandwidth σ, respectively) and the weights of the graph. Making such choices is not so difficult in the domain of supervised learning, where parameters can be set using cross-validation. However, it poses a serious problem in unsupervised learning. While different researchers use different heuristics and their “gut feeling” to set these parameters, neither have systematic empirical studies been conducted (for example, on how sensitive the results are to the choice of graph parameters), nor do theoretical results exist which lead to well-justified heuristics.

In this paper we study the question whether and how the results of graph-based clustering algorithms are affected by the graph type and the parameters chosen for the construction of the neighborhood graph. We focus on the case where the best clustering is defined as the partition that minimizes the normalized cut (Ncut) or the Cheeger cut.

Our theoretical setup is as follows. In a first step we ignore the problem of actually finding the optimal partition. Instead we fix some partition of the underlying space and consider it as the “true” partition. For any finite set of points drawn from the underlying space we consider the clustering of the points that is induced by this underlying partition. Then we study the convergence of the quality measure of this clustering as the sample size tends to infinity. We investigate this question on different kinds of neighborhood graphs. Our first main result is that, depending on the type of graph, the clustering quality measure converges to different limit values. For example, depending on whether we use the kNN graph or the r-graph, the limit functional integrates over different powers of the density. From a statistical point of view, this is very surprising because in many other respects the kNN graph and the r-graph behave very similarly to each other. Just consider the related problem of density estimation: here, both the k-nearest neighbor density estimate and the estimate based on the degrees in the r-graph converge to the same limit, namely the true underlying density. So it is far from obvious that the Ncut values would converge to different limits.

In a second step we then relate these results to the setting where we optimize over all partitions to find the one that minimizes Ncut. We can show that the results from the first part can lead to the effect that the minimizer of Ncut on the kNN graph is different from the minimizer of Ncut on the r-graph or on the complete graph with Gaussian weights. This effect can also be studied in practical examples. First, we give examples of well-clustered distributions (mixtures of Gaussians) where the optimal limit cut on the kNN graph is different from the one on the r-neighborhood graph. The optimal limit cuts in these examples can be computed analytically. Next we demonstrate that this effect can already be observed on finite samples from these distributions: given a finite sample, running normalized spectral clustering to optimize Ncut leads to systematically different results on the kNN graph than on the r-graph. This shows that our results are not only of theoretical interest, but that they are highly relevant in practice.

In the following section we formally define the graph clustering quality measures and the types of neighborhood graphs we consider in this paper. Furthermore, we introduce the notation and technical assumptions for the rest of the paper. In Section 3 we present our main results on the convergence of Ncut and the Cheeger cut on different graphs. In Section 4 we show that our findings are not only of theoretical interest, but that they also influence concrete algorithms such as spectral clustering in practice. All proofs are deferred to Section 6. Note that a small part of the results of this paper has already been published in Maier et al. (2009b).

## 2 Definitions and assumptions

Given a directed graph G = (V, E) with weights w and a partition of the nodes V into (U, V∖U) we define

 cut(U, V∖U) = ∑_{u∈U, v∈V∖U} (w(u,v) + w(v,u)),

and vol(U) = ∑_{u∈U, v∈V} w(u,v). If G is an undirected graph we replace the ordered pair (u,v) in the sums by the unordered pair {u,v}. Note that by doing so we count each edge twice in the undirected graph. This introduces a constant of two in the limits, but it has the advantage that there is no need to distinguish between directed and undirected graphs in the formulation of our results.

Intuitively, the cut measures how strong the connection between the different clusters in the clustering is, whereas the volume of a subset of the nodes measures the “weight” of the subset in terms of the edges that originate in it. An ideal clustering would have a low cut and balanced clusters, that is, clusters with similar volume. The graph clustering quality measures that we use in this paper, the normalized cut and the Cheeger cut, formalize this trade-off in slightly different ways: The normalized cut is defined by

 NCut(U, V∖U) = cut(U, V∖U) · (1/vol(U) + 1/vol(V∖U)),   (1)

whereas the Cheeger cut is defined by

 CheegerCut(U, V∖U) = cut(U, V∖U) / min{vol(U), vol(V∖U)}.   (2)
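These definitions translate directly into code. The following minimal Python sketch (function and variable names are our own, not from the paper) computes the cut, the volume, NCut and CheegerCut from a weight matrix:

```python
def cut_value(W, U):
    """Sum of w(u,v) + w(v,u) over all pairs crossing the partition (U, V\\U)."""
    n = len(W)
    comp = [v for v in range(n) if v not in U]
    return sum(W[u][v] + W[v][u] for u in U for v in comp)

def vol(W, U):
    """Volume of U: total weight of edges originating in U."""
    return sum(W[u][v] for u in U for v in range(len(W)))

def ncut(W, U):
    comp = [v for v in range(len(W)) if v not in U]
    c = cut_value(W, U)
    return c * (1.0 / vol(W, U) + 1.0 / vol(W, comp))

def cheeger_cut(W, U):
    comp = [v for v in range(len(W)) if v not in U]
    return cut_value(W, U) / min(vol(W, U), vol(W, comp))

# Toy example: two unit-weight triangles (0-2 and 3-5) joined by a single edge.
W = [[0] * 6 for _ in range(6)]
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    W[a][b] = W[b][a] = 1
U = {0, 1, 2}
```

For this toy graph the crossing edge is counted in both directions (cut = 2), each side has volume 7, so NCut = 4/7 and CheegerCut = 2/7, illustrating the factor-of-two convention for undirected graphs mentioned above.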

These definitions are useful for general weighted graphs and general partitions. As was said in the beginning, we want to study the values of NCut and CheegerCut on neighborhood graphs on sample points in Euclidean space ℝ^d and for partitions of the nodes that are induced by a hyperplane H in ℝ^d. The two halfspaces belonging to H are denoted by H⁺ and H⁻. Having a neighborhood graph on the sample points x₁, …, xₙ, the partition of the nodes induced by H consists of the sample points falling into H⁺ and those falling into H⁻. In the rest of this paper, for a given neighborhood graph we denote by cutₙ the cut of this induced partition, and for H⁺ and H⁻ we denote by volₙ(H⁺) and volₙ(H⁻) the volumes of the corresponding parts. Accordingly we define NCutₙ and CheegerCutₙ.

In the following we introduce the different types of neighborhood graphs and weighting schemes that are considered in this paper. The graph types are:

• The k-nearest neighbor (kNN) graphs, where the idea is to connect each point to its k nearest neighbors. However, this yields a directed graph, since the k-nearest neighbor relationship is not symmetric. If we want to construct an undirected graph we can choose between the mutual kNN graph, where there is an edge between two points if each of them is among the k nearest neighbors of the other, and the symmetric kNN graph, where there is an edge between two points if at least one of them is among the k nearest neighbors of the other. In our proofs for the limit expressions it will become clear that the limits do not differ between these variants. Therefore, we do not distinguish between them in the statement of the theorems, but rather speak of “the kNN graph”.

• The r-neighborhood graph, where a radius r > 0 is fixed and two points are connected if their distance does not exceed the threshold radius r. Note that due to the symmetry of the distance we do not have to distinguish between directed and undirected graphs.

• The complete weighted graph, where there is an edge between each pair of distinct nodes (but no loops). Of course, in general we would not consider this graph a neighborhood graph. However, if the weight function is chosen in such a way that the weights of edges between nearby nodes are high and the weights between points far away from each other are almost negligible, then the behavior of this graph should be similar to that of a neighborhood graph. One such weight function is the Gaussian weight function, which we introduce below.
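The three constructions above can be sketched as follows. This is a brute-force, pure-Python illustration of our own (not the authors' implementation), suitable only for small samples:

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def knn_graph(points, k, mode="symmetric"):
    """Adjacency sets: 'directed', 'symmetric' (union) or 'mutual' (intersection)."""
    n = len(points)
    nn = []
    for i in range(n):
        order = sorted(range(n), key=lambda j: dist(points[i], points[j]))
        nn.append(set(order[1:k + 1]))  # skip the point itself
    if mode == "directed":
        return nn
    adj = []
    for i in range(n):
        incoming = set(j for j in range(n) if i in nn[j])
        adj.append(nn[i] | incoming if mode == "symmetric" else nn[i] & incoming)
    return adj

def r_graph(points, r):
    """Connect every pair of points at distance at most r (undirected)."""
    n = len(points)
    return [set(j for j in range(n) if j != i and dist(points[i], points[j]) <= r)
            for i in range(n)]

def gaussian_weights(points, sigma):
    """Complete weighted graph with Gaussian weights (no self-loops)."""
    d = len(points[0])
    c = (2 * math.pi * sigma ** 2) ** (d / 2)
    n = len(points)
    return [[0.0 if i == j
             else math.exp(-dist(points[i], points[j]) ** 2 / (2 * sigma ** 2)) / c
             for j in range(n)] for i in range(n)]

# Small example point set used for illustration.
points = [(0, 0), (1, 0), (0, 1), (5, 5)]
```

Note how the mutual kNN graph of the example is sparser than the symmetric one: the outlier (5, 5) names (1, 0) as its nearest neighbor, so the symmetric graph keeps that edge while the mutual graph drops it.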

The weights that are used on neighborhood graphs usually depend on the distance between the end nodes of an edge and are non-increasing. That is, the weight of an edge between points x_i and x_j is given by f(‖x_i − x_j‖) with a non-increasing weight function f. The weight functions we consider here are the unit weight function f ≡ 1, which results in the unweighted graph, and the Gaussian weight function

 f(u) = (2πσ²)^{−d/2} exp(−u² / (2σ²))

with a parameter σ defining the bandwidth.

Of course, not every weighting scheme is suitable for every graph type. For example, as mentioned above, we would hardly consider the complete graph with unit weights a neighborhood graph. Therefore, we only consider the Gaussian weight function for this graph. On the other hand, for the kNN graph and the r-neighborhood graph with Gaussian weights there are two “mechanisms” that reduce the influence of far-away nodes: first, the fact that far-away nodes are not connected to each other by an edge, and second, the decay of the weight function. In fact, it turns out that the limit expressions we study depend on the interplay between these two mechanisms. Clearly, the decay of the weight function is governed by the parameter σ, while for the r-neighborhood graph the radius r limits the length of the edges. Asymptotically, given a sequence (σₙ) of bandwidths and a sequence (rₙ) of radii, we distinguish between the following two cases:

• the bandwidth σₙ is dominated by the radius rₙ, that is σₙ/rₙ → 0 for n → ∞,

• the radius rₙ is dominated by the bandwidth σₙ, that is rₙ/σₙ → 0 for n → ∞.

For the kNN graph we cannot give a radius up to which points are connected by an edge, since this radius is, for each point, a random variable that depends on the positions of all the sample points. However, it is possible to show that for a point in a region of constant density p the k-nearest neighbor radius is concentrated around (k/(n η_d p))^{1/d}, where η_d denotes the volume of the unit ball in the Euclidean space ℝ^d. That is, the radius decays to zero at the rate (k/n)^{1/d}. In the following it is convenient to set rₙ = (kₙ/n)^{1/d} for the kNN graph, noting that this is not the k-nearest neighbor radius of any particular point but only its decay rate. Using this “radius” we distinguish between the same two cases for the ratio of σₙ and rₙ as for the r-neighborhood graph.
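This concentration statement can be checked numerically. The sketch below (our own illustration; all parameter choices are ours) samples from the uniform density on the unit square, where p ≡ 1 and η₂ = π, and compares the empirical k-nearest-neighbor radius at interior query points with the predicted value (k/(n η_d p))^{1/d}:

```python
import math
import random

random.seed(0)
n, k, d = 4000, 40, 2
pts = [(random.random(), random.random()) for _ in range(n)]

def knn_radius(q, pts, k):
    """Distance from query point q to its k-th nearest sample point."""
    dists = sorted(math.dist(q, p) for p in pts)
    return dists[k - 1]

# Predicted kNN radius for the uniform density (p = 1, eta_2 = pi).
predicted = (k / (n * math.pi)) ** (1.0 / d)

# Average the empirical radius over interior query points to reduce variance.
queries = [(0.3 + 0.02 * i, 0.3 + 0.02 * j) for i in range(5) for j in range(5)]
empirical = sum(knn_radius(q, pts, k) for q in queries) / len(queries)
```

With these parameters the empirical radius is close to the predicted value of roughly 0.056; the query points are kept away from the boundary of the square, in line with the boundary assumptions discussed below.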

For the sequences (rₙ) and (σₙ) we always assume rₙ → 0, σₙ → 0 and n rₙ^d → ∞, n σₙ^d → ∞ for n → ∞. Furthermore, for the parameter sequence (kₙ) of the kNN graph we always assume kₙ/n → 0, which corresponds to rₙ → 0, and kₙ/log n → ∞.

In the rest of this paper we denote by vol the Lebesgue measure in ℝ^d. Furthermore, let B(x, r) denote the closed ball of radius r around x, and set η_d = vol(B(0, 1)).

We make the following general assumptions in the whole paper:

• The data points x₁, …, xₙ are drawn independently from some density p on ℝ^d. The measure on ℝ^d that is induced by p is denoted by μ; that means, for a measurable set A ⊆ ℝ^d we set μ(A) = ∫_A p(x) dx.

• The density p is bounded from below and above on its support, that is 0 < p_min ≤ p(x) ≤ p_max. In particular, the support C of p is compact.

• In the interior of C, the density p is twice differentiable and its gradient is bounded, ‖∇p(x)‖ ≤ p′_max for a constant p′_max > 0 and all x in the interior of C.

• The cut hyperplane H splits ℝ^d into two halfspaces H⁺ and H⁻ (both including the hyperplane H) with positive probability masses, that is μ(H⁺) > 0 and μ(H⁻) > 0. The normal of H pointing towards H⁺ is denoted by n_H.

• If d ≥ 2, the boundary ∂C of the support is a compact, smooth (d−1)-dimensional surface with minimal curvature radius κ > 0, that is, the absolute values of the principal curvatures are bounded by 1/κ. We denote by n(s) the normal to the surface at the point s ∈ ∂C. Furthermore, we can find constants β > 0 and r₀ > 0 such that for all s ∈ ∂C we have vol(B(s, r) ∩ C) ≥ β vol(B(s, r)) for all r ≤ r₀.

• If d ≥ 2, we can find an angle α > 0 such that the hyperplane H intersects the boundary ∂C at an angle of at least α for all intersection points. If d = 1 we assume that H (a single point) lies in the interior of C.

The assumptions on the boundary are necessary in order to bound the influence of points that are close to the boundary. The problem with these points is that the density is not approximately uniform inside small balls around them. Therefore, we cannot find good estimates of their kNN radii and of their contributions to the cut and the volume. Under the assumptions above we can neglect these points.

## 3 Main results: Limits of the quality measures NCut and CheegerCut

As we can see in Equations (1) and (2), the definitions of NCut and CheegerCut rely on the cut and the volume. Therefore, in order to study the convergence of NCut and CheegerCut it seems reasonable to study the convergence of the cut and the volume first. In Section 6, Corollaries 1–3 and Corollaries 4–6 state the convergence of the cut and the volume on the kNN graphs. Corollaries 7–10 state the convergence of the cut on the r-graph and the complete weighted graph, whereas Corollaries 11–14 state the convergence of the volume on the same graphs.

These corollaries show that there are scaling sequences s_n^cut and s_n^vol, depending on n, the graph parameters and the graph type, such that, under certain conditions, almost surely

 (s_n^cut)⁻¹ cutₙ → CutLim  and  (s_n^vol)⁻¹ volₙ(H) → VolLim(H)

for n → ∞, where CutLim and VolLim(H) are constants depending only on the density p and the hyperplane H.

Having defined these limits, we define, analogously to the definitions in Equations (1) and (2), the limits of NCut and CheegerCut as

 NCutLim = CutLim/VolLim(H⁺) + CutLim/VolLim(H⁻)   (3)

and

 CheegerCutLim = CutLim / min{VolLim(H⁺), VolLim(H⁻)}.   (4)

In our following main theorems we show the conditions under which we have, for n → ∞, almost sure convergence of

 (s_n^vol / s_n^cut) NCutₙ → NCutLim  and  (s_n^vol / s_n^cut) CheegerCutₙ → CheegerCutLim.

Furthermore, for the unweighted kNN graph, the unweighted r-graph and the complete weighted graph with Gaussian weights we state the optimal convergence rates, where “optimal” means the best trade-off between our bounds for the different quantities derived in Section 6. Note that we will not prove the following theorems here; the proof of Theorem 1 can be found in Section 6.2.4, whereas the proofs of Theorems 2 and 3 can be found in Section 6.3.3.

###### Theorem 1 (NCut and CheegerCut on the kNN-graph)

For a sequence (kₙ) satisfying our general assumptions, let Gₙ be the k-nearest neighbor graph on the sample x₁, …, xₙ. Set XCut = NCut or XCut = CheegerCut and let XCutLim denote the corresponding limit as defined in Equations (3) and (4). Set

 Δₙ = |(s_n^vol / s_n^cut) XCutₙ − XCutLim|.
• Let Gₙ be the unweighted kNN graph. Then, under suitable growth conditions on kₙ (which differ between two cases), we have Δₙ → 0 almost surely for n → ∞. The optimal convergence rate is achieved by a specific choice of kₙ in each case.

• Let Gₙ be the kNN graph with Gaussian weights in the regime where the bandwidth dominates the kNN radius (rₙ/σₙ → 0). Then we have almost sure convergence Δₙ → 0 for n → ∞ under suitable conditions on kₙ and σₙ.

• Let Gₙ be the kNN graph with Gaussian weights in the regime where the kNN radius dominates the bandwidth (σₙ/rₙ → 0). Then we have almost sure convergence Δₙ → 0 for n → ∞ under suitable conditions on kₙ and σₙ.

###### Theorem 2 (NCut and CheegerCut on the r-graph)

For a sequence (rₙ) satisfying our general assumptions, let Gₙ be the rₙ-neighborhood graph on the sample x₁, …, xₙ. Set XCut = NCut or XCut = CheegerCut and let XCutLim denote the corresponding limit as defined in Equations (3) and (4). Set

 Δₙ = |(s_n^vol / s_n^cut) XCutₙ − XCutLim|.
• Let Gₙ be unweighted. Then Δₙ → 0 almost surely for n → ∞ under a suitable condition on the decay of rₙ. The optimal convergence rate is achieved for a specific choice of rₙ with a suitable constant.

• Let Gₙ be weighted with Gaussian weights with bandwidth σₙ and σₙ/rₙ → 0 for n → ∞. Then Δₙ → 0 almost surely for n → ∞ under a suitable additional condition.

• Let Gₙ be weighted with Gaussian weights with bandwidth σₙ and rₙ/σₙ → 0 for n → ∞. Then Δₙ → 0 almost surely for n → ∞ under a suitable additional condition.

The following theorem presents the limit results for NCut and CheegerCut on the complete weighted graph. One result that we need in the proof of this theorem is Corollary 8 on the convergence of the cut. Note that in Narayanan et al. (2007) a similar convergence problem is studied for the case of the complete weighted graph, and the scaling sequence and the limit differ from ours. However, the reason is that in that paper the weighted cut is considered, which can be written in terms of the normalized graph Laplacian matrix applied to an n-dimensional cluster indicator vector (one constant value on the entries belonging to one cluster and another constant value on the entries belonging to the other cluster). On the other hand, the standard cut, which we consider in this paper, can be written (up to a constant) in the same form with the unnormalized graph Laplacian matrix. (For the definitions of the graph Laplacian matrices and their relationship to the cut we refer the reader to von Luxburg (2007).) Therefore, the two results do not contradict each other.

###### Theorem 3 (NCut and CheegerCut on the complete weighted graph)

Let Gₙ be the complete weighted graph with Gaussian weights and bandwidth σₙ on the sample points x₁, …, xₙ. Set XCut = NCut or XCut = CheegerCut and let XCutLim denote the corresponding limit as defined in Equations (3) and (4). Set

 Δₙ = |(s_n^vol / s_n^cut) XCutₙ − XCutLim|.

Under suitable conditions on the decay of σₙ we have Δₙ → 0 almost surely for n → ∞. The optimal convergence rate is achieved by setting σₙ proportional to a suitable power of n; for this choice of σₙ an explicit polynomial convergence rate can be given.

Let us decrypt these results and for simplicity focus on the NCut value. When we compare the limits of the cut (cf. Table 1) it is striking that, depending on the graph type and the weighting scheme, there are two substantially different limits: the limit for the unweighted r-neighborhood graph and the limit for the unweighted k-nearest neighbor graph.

The limit of the cut for the complete weighted graph with Gaussian weights is the same as the limit for the unweighted r-neighborhood graph. There is a simple reason for this: on both graph types the weight of an edge depends only on the distance between its end points, no matter where the points are located. This is in contrast to the kNN graph, where the radius up to which a point is connected depends strongly on its location: if a point lies in a region of high density there are many other points close by, which means that this radius is small, whereas the radius is large for points in low-density regions. Furthermore, the Gaussian weights decline very rapidly with the distance, depending on the parameter σ. That is, σ plays a role similar to that of the radius r in the r-neighborhood graph.

The two types of r-neighborhood graphs with Gaussian weights have the same limit as the unweighted r-neighborhood graph and the complete weighted graph with Gaussian weights. When we compare the scaling sequences it turns out that in the case σₙ/rₙ → 0 this sequence is the same as for the complete weighted graph, whereas in the case rₙ/σₙ → 0 it is the same sequence as for the unweighted r-graph, corrected by a factor accounting for the maximum of the Gaussian weight function. In fact, these effects are easy to explain: if σₙ/rₙ → 0, then the edges which we have to remove from the complete weighted graph in order to obtain the r-neighborhood graph have a very small weight, and their contribution to the value of the cut can be neglected; therefore this graph behaves like the complete weighted graph with Gaussian weights. On the other hand, if rₙ/σₙ → 0, then all the edges that remain in the r-neighborhood graph have approximately the same weight, namely the maximum of the Gaussian weight function.

Similar effects can be observed for the k-nearest neighbor graphs. The limits of the unweighted kNN graph and the kNN graph with Gaussian weights in the case rₙ/σₙ → 0 are identical (up to constants), and the scaling sequence has to correct for the maximum of the Gaussian weight function. However, the limit for the kNN graph with Gaussian weights in the case σₙ/rₙ → 0 is different: in fact, we have the same limit expression as for the complete weighted graph with Gaussian weights. The reason for this is the following: since rₙ is large compared to σₙ, at some point all the k-nearest neighbor radii of the sample points are very large compared to the bandwidth. Therefore, all the edges that are in the complete weighted graph but not in the kNN graph have very low weights, and thus the limit of this graph behaves like the limit of the complete weighted graph with Gaussian weights.

Finally, we would like to discuss the difference between the two limit expressions, where as examples for the graphs we use only the unweighted r-neighborhood graph and the unweighted kNN graph. Of course, the results carry over to the other graph types. For the cut we have, up to constants, the limits ∫_H p^{1−1/d}(s) ds for the kNN graph and ∫_H p²(s) ds for the r-graph, where the integrals run over the hyperplane H. In dimension 1 the difference between these expressions is most pronounced: the limit for the kNN graph does not depend on the density at all, whereas in the limit for the r-graph the exponent of p is 2, independent of the dimension. Generally, the limit for the r-graph seems to be more sensitive to the absolute value of the density. This can also be seen for the volume: the limit expression for the kNN graph is proportional to μ(H⁺), which does not depend on the absolute value of the density at all, but only on the probability mass in the halfspace H⁺. This is different for the unweighted r-neighborhood graph, where the limit expression is proportional to ∫_{H⁺} p²(x) dx.
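To make the contrast concrete, the following sketch (our own illustration, with a density of our own choosing) evaluates both one-dimensional limit functionals over candidate cut points, assuming the limit expressions just discussed: for the kNN graph in d = 1 the cut limit is constant and the volume limit is the probability mass on each side, while for the r-graph the cut limit is proportional to p(t)² and the volume limit to the integral of p² on each side.

```python
import bisect
import math

def phi(x, mu, s):
    """Gaussian density N(mu, s^2) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def p(x):
    # Asymmetric bimodal density: 0.8 N(0.2, 0.05^2) + 0.2 N(0.4, 0.03^2).
    return 0.8 * phi(x, 0.2, 0.05) + 0.2 * phi(x, 0.4, 0.03)

h = 5e-4
grid = [-0.2 + i * h for i in range(2001)]       # covers [-0.2, 0.8]
mass = [p(x) * h for x in grid]                  # Riemann weights for p
mass2 = [p(x) ** 2 * h for x in grid]            # Riemann weights for p^2

def cumsum(a):
    out, s = [], 0.0
    for v in a:
        s += v
        out.append(s)
    return out

F, F2 = cumsum(mass), cumsum(mass2)

def ncut_knn(i):
    # kNN graph, d = 1: constant cut limit, volume limit = probability mass.
    left = F[i]
    return 1.0 / left + 1.0 / (F[-1] - left)

def ncut_r(i):
    # r-graph: cut limit ~ p(t)^2, volume limit ~ integral of p^2 per side.
    left = F2[i]
    return p(grid[i]) ** 2 * (1.0 / left + 1.0 / (F2[-1] - left))

lo, hi = bisect.bisect_left(grid, 0.1), bisect.bisect_right(grid, 0.55)
i_knn = min(range(lo, hi), key=ncut_knn)
i_r = min(range(lo, hi), key=ncut_r)
t_knn, t_r = grid[i_knn], grid[i_r]
```

For this density the kNN limit criterion places the cut near the median (around 0.22, inside the heavy left mode), whereas the r-graph limit criterion places it in the low-density valley between the two modes (around 0.33): the two limit expressions select systematically different cuts.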

## 4 Examples where different limits of Ncut lead to different optimal cuts

Figure 1: Densities in the examples. In the two-dimensional case, we plot the informative dimension (marginal over the other dimensions) only. The dashed blue vertical line depicts the optimal limit cut of the r-graph, the solid red vertical line the optimal limit cut of the kNN graph.

In Theorems 1–3 we have proved that the limit expressions for NCut and CheegerCut are different for different kinds of neighborhood graphs. In fact, apart from constants there are two limit expressions: that of the unweighted kNN graph, where the exponent of the density in the limit integral for the cut is 1 − 1/d and for the volume is 1, and that of the unweighted r-neighborhood graph, where the exponent in the limit of the cut is 2 and in the limit of the volume is 2. Therefore, we consider here only the unweighted kNN graph and the unweighted r-neighborhood graph.

In this section we show that the difference between the limit expressions is more than a mathematical subtlety without practical relevance: if we select an optimal cut based on the limit criterion for the kNN graph, we can obtain a different result than if we use the limit criterion based on the r-neighborhood graph.

Consider Gaussian mixture distributions in one (Example 1) and in two dimensions (Example 2), which are set to zero where they fall below a small threshold and are then properly rescaled. The specific parameters in one and two dimensions are

dim
1 0 0.5 1 0.4 0.1 0.1 0.66 0.17 0.17 0.1
2 0.2 0.4 0.1 0.4 0.55 0.05 0.01

Plots of the densities of Examples 1 and 2 can be seen in Figure 1. We first investigate the theoretical limit values for hyperplanes that cut perpendicular to the first dimension (the “informative” dimension of the data). For the chosen densities, the limit expressions from Theorems 1 and 2 can be computed analytically and optimized over the chosen hyperplanes. The solid red line in Figure 1 indicates the position of the minimal value for the kNN graph case, whereas the dashed blue line indicates the position of the minimal value for the r-graph case.

Figure 2: Results of spectral clustering in two dimensions, for the unweighted r-graph (left) and the unweighted kNN graph (right).

Up to now we have only compared the limits of different graphs with each other; the question is whether the effects of these limits can be observed even for finite sample sizes. In order to investigate this question we applied normalized spectral clustering (cf. von Luxburg (2007)) to sample data sets drawn from the mixture distributions above. We used the unweighted r-graph and the unweighted symmetric k-nearest neighbor graph. We tried a range of reasonable values for the parameters k and r, and the results we obtained were stable over this range. Here we present the results for two choices of the k-nearest neighbor graph and for r-graphs with corresponding parameter r, that is, r was set to the mean k-nearest neighbor radius for each choice of k. Different clusterings are compared using the minimal matching distance:

 d_MM(Clust₁, Clust₂) = min_π (1/n) ∑_{i=1}^{n} 1_{Clust₁(x_i) ≠ π(Clust₂(x_i))},

where the minimum is taken over all permutations π of the labels. In the case of two clusters, this distance corresponds to the 0–1 loss as used in classification: a minimal matching distance of 0.1, say, means that 10% of the data points lie in different clusters. In our spectral clustering experiments we observed that the clusterings obtained by spectral clustering are usually very close to the optimal hyperplane splits predicted by theory (the minimal matching distances to the optimal hyperplane splits were always on the order of 0.03 or smaller). As predicted by theory, the two types of graph give different cuts in the data. An illustration of this phenomenon for the case of dimension 2 can be found in Figure 2. To give a quantitative evaluation of this phenomenon, we computed the mean minimal matching distance between clusterings obtained with the same type of graph over the different samples, and the mean distance between the clusterings obtained with different graph types:
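The minimal matching distance can be implemented by brute force over label permutations (a sketch of our own, feasible only for small numbers of clusters):

```python
from itertools import permutations

def minimal_matching_distance(labels1, labels2):
    """0-1 loss between two clusterings, minimized over relabelings of labels2."""
    n = len(labels1)
    ks = sorted(set(labels2))
    best = n
    for perm in permutations(ks):
        relabel = dict(zip(ks, perm))
        errors = sum(1 for a, b in zip(labels1, labels2) if a != relabel[b])
        best = min(best, errors)
    return best / n

# Two 2-cluster labelings that agree, up to a label swap, on all but one point.
a = [0, 0, 0, 1, 1, 1]
b = [1, 1, 1, 0, 0, 1]
```

Here swapping the labels of the second clustering leaves a single disagreeing point, so the distance is 1/6 even though the raw label vectors disagree on five of six points.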

We can see that for the same graph type the clustering results are very stable across samples, whereas the differences between the kNN graph and the r-neighborhood graph are substantial (mean minimal matching distances of 0.35 and 0.49 in one and two dimensions, respectively). This difference is exactly the one induced by assigning the middle mode of the density to different clusters, which is the effect predicted by theory.

It is tempting to conjecture that in Examples 1 and 2 the two different limit solutions and their impact on spectral clustering arise because the number of Gaussians and the number of clusters we are looking for do not coincide. Yet the following Example 3 shows that this is not the case: for a one-dimensional density as above but with only two Gaussians, with parameters

0.2 0.4 0.05 0.03 0.8 0.2 0.1

the same effects can be observed.

the same effects can be observed. The density is depicted in the left plot of Figure 3.

In this example we draw a sample of points from this density and compute the spectral clustering of the points, once with the unweighted kNN graph and once with the unweighted r-graph. In one dimension we can compute the location of the boundary between the two clusters, that is, the midpoint between the rightmost point of the left cluster and the leftmost point of the right cluster. We did this for 100 iterations and plotted histograms of the location of the cluster boundary. The middle and right plots of Figure 3 show that these coincide with the optimal cuts predicted by theory.

Figure 3: Example 3 with the sum of two Gaussians, that is, two modes of the density. The left figure depicts the density with the optimal limit cut of the r-graph (dashed blue vertical line) and the optimal limit cut of the kNN graph (solid red vertical line). The two figures on the right show the histograms of the cluster boundary over 100 iterations for the unweighted r-neighborhood and kNN graphs.

## 5 Outlook

In this paper we have investigated the influence of the graph construction on the graph-based clustering measures normalized cut and Cheeger cut. We have seen that depending on the type of graph and the weights, the clustering quality measures converge to different limit results.

This means that ultimately the question about the “best Ncut” or “best Cheeger cut” clustering, given an infinite amount of data, has different answers depending on which underlying graph we use. This observation opens a Pandora’s box regarding clustering criteria: the “meaning” of a clustering criterion depends not only on the exact definition of the criterion itself, but also on how the graph on the finite sample is constructed. One graph clustering quality measure is therefore not just “one well-defined criterion” on the underlying space; it corresponds to a whole family of criteria, which differ depending on the underlying graph. Put more sloppily: a clustering quality measure applied to one neighborhood graph does something different, in terms of partitions of the underlying space, than the same quality measure applied to a different neighborhood graph. This shows that these criteria cannot be studied in isolation from the graph they are applied to.

From a theoretical side, there are several directions in which our work can be improved. In this paper we only consider partitions of Euclidean space that are defined by hyperplanes. This restriction is made in order to keep the proofs reasonably simple. However, we are confident that similar results could be proven for arbitrary smooth surfaces.

Another extension would be to obtain uniform convergence results. Here one has to take care to use a suitably restricted class of candidate surfaces (note that uniform convergence results over the set of all partitions of ℝ^d are impossible, cf. Bubeck and von Luxburg (2009)). Such a result would be especially useful if there existed a practically applicable algorithm to compute the optimal surface from the set of all candidate surfaces.

For practice, it will be important to study how the different limit results influence clustering results. So far, we do not have much intuition about when the different limit expressions lead to different optimal solutions, and when these solutions show up in practice. The examples provided above already show that different graphs can indeed lead to systematically different clusterings in practice. Gaining more understanding of this effect will be an important direction of research for understanding the nature of different graph clustering quality measures.

## 6 Proofs

Many of the proofs that follow in this section involve considerable technical machinery to deal with problems that arise from effects at the boundary of the support and from the non-uniformity of the density p. However, if these technicalities are ignored, the basic ideas of the proofs are simple to explain, and they are similar for the different types of neighborhood graphs. In Section 6.1 we discuss these ideas without the technical overhead and define some quantities that are necessary for the formulation of our results.

In Section 6.2 we present the results for the $k$-nearest neighbor graph, and in Section 6.3 we present those for the $r$-neighborhood graph and the complete weighted graph. Each of these sections consists of three parts: the first is devoted to the cut, the second to the volume, and in the third we prove the main theorem for the considered graphs using the results for the cut and the volume.

The sections on the convergence of the cut and the volume always follow the same scheme: first, a proposition concerning the convergence of the cut or the volume for general monotonically decreasing weight functions is given. Using this general proposition, the results for the specific weight functions we consider in this paper follow as corollaries.

Since the basic ideas of our proofs are the same for all the different graphs, it is not worth repeating the same steps for all of them. Therefore, we decided to give detailed proofs for the $k$-nearest neighbor graph, which is the most difficult case. The $r$-neighborhood graph and the complete weighted graph can be treated together, and we mainly discuss the differences to the proof for the $k$-nearest neighbor graph.

The limits of the cut and the volume for a general weight function are expressed in terms of certain integrals of the weight function over “caps” and “balls”, which are explained later. For a specific weight function these integrals have to be evaluated. This is done in the lemmas in Section 6.4. Furthermore, this section contains a technical lemma that helps us to control boundary effects.

### 6.1 Basic ideas

In this section we present the ideas of our convergence proofs informally. We focus here on the normalized cut, but all the ideas carry over easily to the Cheeger cut.

First step: Decompose the Ncut into cut and volume terms
Under our general assumptions there exist constants $c_1, c_2, c_3 > 0$, which may depend on the limit values of the cut and the volume, such that for sufficiently large $n$

$$\left|\frac{s_{\mathrm{vol},n}}{s_{\mathrm{cut},n}}\left(\frac{\mathrm{cut}_n}{\mathrm{vol}_n(H^+)}+\frac{\mathrm{cut}_n}{\mathrm{vol}_n(H^-)}\right)-\left(\frac{\mathrm{CutLim}}{\mathrm{VolLim}(H^+)}+\frac{\mathrm{CutLim}}{\mathrm{VolLim}(H^-)}\right)\right|\le c_1\underbrace{\left|\frac{\mathrm{cut}_n}{s_{\mathrm{cut},n}}-\mathrm{CutLim}\right|}_{\text{cut term}}+c_2\underbrace{\left|\frac{\mathrm{vol}_n(H^+)}{s_{\mathrm{vol},n}}-\mathrm{VolLim}(H^+)\right|}_{\text{volume term}}+c_3\underbrace{\left|\frac{\mathrm{vol}_n(H^-)}{s_{\mathrm{vol},n}}-\mathrm{VolLim}(H^-)\right|}_{\text{volume term}}.$$

Second step: Bias/variance decomposition of the cut and volume terms
In order to show the convergence of the cut term we do a bias/variance decomposition

$$\left|\frac{\mathrm{cut}_n}{s_{\mathrm{cut},n}}-\mathrm{CutLim}\right|\le\underbrace{\left|\frac{\mathrm{cut}_n}{s_{\mathrm{cut},n}}-\mathbb{E}\!\left(\frac{\mathrm{cut}_n}{s_{\mathrm{cut},n}}\right)\right|}_{\text{variance term}}+\underbrace{\left|\mathbb{E}\!\left(\frac{\mathrm{cut}_n}{s_{\mathrm{cut},n}}\right)-\mathrm{CutLim}\right|}_{\text{bias term}}$$

and show the convergence to zero of these terms separately. Clearly, the same decomposition can be done for the volume terms. In the following we call these terms the “bias term of the cut” and the “variance term of the cut”, and similarly for the volume.

For both the cut and the volume, there is one result in this section dealing with the convergence properties of the bias term and the variance term for each particular graph type and weighting scheme.

Third step: Use concentration of measure inequalities for the variance term
Bounding the deviation of a random variable from its expectation is a well-studied problem in statistics, and there are a number of so-called concentration-of-measure inequalities that bound the probability of a large deviation from the mean. In this paper we use McDiarmid’s inequality for the $k$-nearest neighbor graphs and a concentration-of-measure result for $U$-statistics by Hoeffding for the $r$-neighborhood graph and the complete weighted graph. The reason for this is that each of the graph types has its particular advantages and disadvantages when it comes to the prerequisites of the concentration inequalities: the advantage of the $k$-nearest neighbor graph is that we can bound the degree of a node linearly in the parameter $k$, whereas for the $r$-neighborhood graph we can bound the degree only by the trivial bound $n-1$, and for the complete graph this bound is even attained. Therefore, using the same proof as for the $k$-nearest neighbor graph is suboptimal for the latter two graphs. On the other hand, in these graphs the connectivity between points is not random given their positions, and it is always symmetric. This allows us to use a $U$-statistics argument, which cannot be applied to the $k$-nearest neighbor graph, since the connectivity there may be asymmetric (at least for the directed one) and the connectivity between any two points depends on all the sample points.

Note that these results are of a probabilistic nature, that is, we obtain results of the form

$$\Pr\left(\left|\frac{\mathrm{cut}_n}{s_{\mathrm{cut},n}}-\mathbb{E}\!\left(\frac{\mathrm{cut}_n}{s_{\mathrm{cut},n}}\right)\right|>\varepsilon\right)\le p_n,$$

for a sequence $(p_n)_{n\in\mathbb{N}}$ of non-negative real numbers. If for every $\varepsilon>0$ the sum $\sum_n p_n$ is finite, then we have almost sure convergence of the variance term to zero by the Borel–Cantelli lemma.
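To make the summability argument concrete, the following small sketch (our own illustration, not part of the paper's proofs) uses a Hoeffding-type bound $p_n = 2\exp(-2n\varepsilon^2)$: for any fixed $\varepsilon>0$ these bounds form a summable geometric series, which is exactly what the Borel–Cantelli lemma requires.

```python
import math

# Illustrative sketch (our own, not from the paper): a Hoeffding-type
# tail bound p_n = 2*exp(-2*n*eps^2) is summable for any fixed eps > 0,
# so by the Borel-Cantelli lemma the deviation exceeds eps only finitely
# often, which gives almost sure convergence of the variance term.
eps = 0.1
partial = [sum(2 * math.exp(-2 * n * eps * eps) for n in range(1, N + 1))
           for N in (100, 1000, 10000)]

# closed form of the geometric series as a cross-check
q = math.exp(-2 * eps * eps)
total = 2 * q / (1 - q)
print(partial, total)  # the partial sums stabilize at the finite total
```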

Fourth step: Bias of the cut term
While all the steps so far were pretty much standard, this part is technically the most challenging part of our convergence proof. We have to prove the convergence of $\mathbb{E}(\mathrm{cut}_n / s_{\mathrm{cut},n})$ to $\mathrm{CutLim}$ (and similarly for the volume). Omitting all technical difficulties such as boundary effects and the variability of the density, the basic ideas can be described in a rather simple manner.

The first idea is to break the cut down into the contributions of the single edges. We define a random variable $W_{ij}$ that attains the weight of the edge between $X_i$ and $X_j$ if these points are connected in the graph and on different sides of the hyperplane $S$, and zero otherwise. By the linearity of expectation and the fact that the points are sampled i.i.d. we obtain

$$\mathbb{E}(\mathrm{cut}_n)=\sum_{i=1}^{n}\sum_{\substack{j=1\\ j\neq i}}^{n}\mathbb{E}W_{ij}=n(n-1)\,\mathbb{E}W_{12}.$$
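This identity is easy to check numerically. The sketch below is our own illustration (none of its choices are from the paper): unit edge weights, an $r$-neighborhood graph on points drawn uniformly from $[0,1]^2$, and the hyperplane $S=\{x_1=1/2\}$. It compares the empirical mean of $\mathrm{cut}_n$ with $n(n-1)$ times an independent Monte-Carlo estimate of $\mathbb{E}W_{12}$.

```python
import math, random

# Monte-Carlo sanity check of E(cut_n) = n(n-1) * E(W_12).  The setting
# is our own choice for illustration (not from the paper): unit edge
# weights, an r-neighborhood graph on points uniform in [0,1]^2, and the
# hyperplane S = {x_1 = 1/2}.
random.seed(0)
n, r, trials = 10, 0.3, 2000

def point():
    return (random.random(), random.random())

def weight(p, q):
    # W_12 = 1 if the points are connected (dist <= r) and lie on
    # different sides of S, and 0 otherwise
    connected = math.dist(p, q) <= r
    split = (p[0] - 0.5) * (q[0] - 0.5) < 0
    return 1.0 if connected and split else 0.0

# empirical mean of cut_n = sum_{i != j} W_ij over many samples
cut_mean = 0.0
for _ in range(trials):
    pts = [point() for _ in range(n)]
    cut_mean += sum(weight(pts[i], pts[j])
                    for i in range(n) for j in range(n) if i != j)
cut_mean /= trials

# independent Monte-Carlo estimate of E(W_12) from i.i.d. pairs
m = 20 * trials
pair_mean = sum(weight(point(), point()) for _ in range(m)) / m

print(cut_mean, n * (n - 1) * pair_mean)  # the two values nearly agree
```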

Now we fix the positions of the points $X_1$ and $X_2$. In this case $W_{12}$ can attain only two values: the weight $f_n(\mathrm{dist}(X_1,X_2))$ if the points are connected and on different sides of $S$, and zero otherwise. We first consider the $r$-neighborhood graph with parameter $r_n$, since here the existence of an edge between two points is determined by their distance and is not random as in the $k$-nearest neighbor graph. Two points are connected if their distance is not greater than $r_n$, and thus $W_{12}=0$ if $\mathrm{dist}(X_1,X_2)>r_n$. Furthermore, $W_{12}=0$ if $X_1$ and $X_2$ are on the same side of $S$. That is, for a point $x\in H^+$ we have

$$\mathbb{E}(W_{12}\,|\,X_1=x,X_2=y)=\begin{cases}f_n(\mathrm{dist}(x,y))&\text{if }y\in B(x,r_n)\cap H^-\\[2pt]0&\text{otherwise.}\end{cases}$$

By integrating over $y$ we obtain

$$\mathbb{E}(W_{12}\,|\,X_1=x)=\int_{B(x,r_n)\cap H^-}f_n(\mathrm{dist}(x,y))\,p(y)\,dy,$$

and in the following we denote the integral on the right-hand side by $g(x)$.

Integrating the conditional expectation over all possible positions of the point $X_1$ in $\mathbb{R}^d$ gives

$$\mathbb{E}(W_{12})=\int_{\mathbb{R}^d}g(x)\,p(x)\,dx=\int_{H^+}g(x)\,p(x)\,dx+\int_{H^-}g(x)\,p(x)\,dx.$$

We only consider the integral over the halfspace $H^+$ here, since the other integral can be treated analogously. The important idea in the evaluation of this integral is the following: instead of integrating over $H^+$ directly, we first integrate over the hyperplane $S$ and then, at each point $s\in S$, along the normal line through $s$, that is, the line $s+t\,n_S$ for $t\ge 0$, where $n_S$ denotes the unit normal of $S$ pointing into $H^+$. This leads to

$$\int_{H^+}g(x)\,p(x)\,dx=\int_S\int_0^\infty g(s+t\,n_S)\,p(s+t\,n_S)\,dt\,ds.$$

Figure 4: Integration along the normal line through $s$. For $t\ge r_n$ the intersection $B(s+t\,n_S,r_n)\cap H^-$ is empty and therefore $g(s+t\,n_S)=0$.

This integration is illustrated in Figure 4. It has two advantages: first, if $x=s+t\,n_S$ is far enough from $S$ (that is, $t\ge r_n$), then $g(x)=0$ and the corresponding terms in the integral vanish. Second, if $x$ is close to $S$ and the radius $r_n$ is small, then the density on the ball $B(x,r_n)$ can be considered approximately uniform, that is, we assume $p(y)\approx p(s)$ for all $y\in B(x,r_n)$. Thus,

$$\int_0^\infty g(s+t\,n_S)\,p(s+t\,n_S)\,dt=\int_0^{r_n}g(s+t\,n_S)\,p(s+t\,n_S)\,dt=\eta_{d-1}\int_0^{r_n}u^{d}f_n(u)\,du\;p^2(s),$$

where the last step follows with Lemma 3.

Since this integral of the weight function over the “caps” plays such an important role in the derivation of our results, we introduce a special notation for it: for a radius $r>0$ and $q\in\{1,2\}$ we define

$$F_C^{(q)}(r)=\eta_{d-1}\int_0^r u^{d}\,f_n^q(u)\,du.$$

Although these integrals also depend on $n$, we do not make this dependence explicit. In fact, the radius $r$ is replaced by the radius $r_n$ in the case of the $r$-neighborhood graph, or by a different graph parameter depending on $n$ for the other neighborhood graphs. Therefore the dependence of $F_C^{(q)}$ on $n$ will be understood. Note that we allow the notation $F_C^{(q)}(\infty)$ if the indefinite integral exists. The integral for $q=2$ is needed for the following reason: for the $U$-statistics bound on the variance term we not only have to compute the expectation of the $W_{ij}$, but also their variance. The variance can in turn be bounded by the expectation of $W_{ij}^2$, which is expressed in terms of $F_C^{(2)}$.
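The cap integral can be sanity-checked numerically. The sketch below (our own, for $d=2$ and the unit weight function $f_n\equiv 1$) verifies the identity used in the derivation above: integrating the volume of the cap $B(s+t\,n_S,r)\cap H^-$ over $t\in[0,r]$ equals $\eta_{d-1}\int_0^r u^d\,du$, where $\eta_1=2$ is the volume of the unit 1-ball.

```python
import math

# Numerical check (our own, for d = 2 and unit weight f_n = 1) of the
# cap-integral identity: integrating the area of the cap
# B(s + t*n_S, r) ∩ H^- over t in [0, r] equals eta_1 * ∫_0^r u^2 du.
r = 0.7

def cap_area(t):
    # area of {y in R^2 : |y - (0, t)| <= r, y_2 < 0} for 0 <= t <= r
    return r * r * math.acos(t / r) - t * math.sqrt(r * r - t * t)

steps = 100000  # midpoint rule on [0, r]
lhs = sum(cap_area((i + 0.5) * r / steps) for i in range(steps)) * r / steps
rhs = 2 * r ** 3 / 3  # eta_1 * ∫_0^r u^2 du

print(lhs, rhs)  # both ≈ 0.2287
```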

In the $r$-neighborhood graph points are only connected within the radius $r_n$, which means that to compute $g(x)$ we only have to integrate over the ball $B(x,r_n)$, since all other points cannot be connected to $x$. This is clearly different for the complete graph, where every point is connected to every other point. The idea here is to fix a radius $r_n$ in such a way that the contribution of edges to points outside $B(x,r_n)$ can be neglected, because their weight is small. Since $W_{12}=f_n(\mathrm{dist}(X_1,X_2))$ if the points are on different sides of $S$, we have for $x\in H^+$

$$\mathbb{E}(W_{12}\,|\,X_1=x)=\int_{B(x,r_n)\cap H^-}f_n(\mathrm{dist}(x,y))\,p(y)\,dy+\int_{B(x,r_n)^c\cap H^-}f_n(\mathrm{dist}(x,y))\,p(y)\,dy\le g(x)+p_{\max}\int_{B(x,r_n)^c}f_n(\mathrm{dist}(x,y))\,dy.$$

For the Gaussian weight function the integral over $B(x,r_n)^c$ converges to zero very quickly if $r_n$ decays sufficiently slowly compared to the bandwidth of the weight function. Thus we can treat the complete graph almost like the $r$-neighborhood graph.

For the $k$-nearest neighbor graph the connectedness of points depends on their $k$-nearest neighbor radii, that is, the distance of a point to its $k$th nearest neighbor, which is itself a random variable. However, one can show that with very high probability the $k$-nearest neighbor radius of a point in a region with uniform density is concentrated around the deterministic radius for which the ball around the point contains $k$ of the $n$ sample points in expectation, which is of order $(k/n)^{1/d}$. Since we assume that $k/n\to 0$, the expected radius converges to zero. Thus the density in balls with this radius is close to uniform, and the approximation of the density by a uniform one becomes more accurate. Upper and lower bounds on the $k$-nearest neighbor radius that hold with high probability are given in Lemma 2. The idea is to perform the integration above for both the lower bound and the upper bound on the radius. It is then shown that these integrals converge to the same limit.
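The concentration of the $k$-nearest neighbor radius is easy to observe empirically. In the following small simulation (our own, not from the paper), with uniform density on $[0,1]^2$ the ball of radius $(k/(n\,\eta_d))^{1/d}=\sqrt{k/(\pi n)}$ around a point away from the boundary contains $k$ points in expectation, and the measured $k$NN radius concentrates around this value.

```python
import math, random

# Small simulation (our own, not from the paper): for the uniform density
# on [0,1]^2 the k-nearest-neighbor radius of a point away from the
# boundary concentrates around the radius whose ball contains k points in
# expectation, i.e. (k / (n * eta_d))^(1/d) = sqrt(k / (pi * n)) for d = 2.
random.seed(1)
n, k, trials = 2000, 20, 200
center = (0.5, 0.5)

mean_radius = 0.0
for _ in range(trials):
    dists = sorted(math.dist(center, (random.random(), random.random()))
                   for _ in range(n))
    mean_radius += dists[k - 1]  # distance to the k-th nearest sample point
mean_radius /= trials

print(mean_radius, math.sqrt(k / (math.pi * n)))  # close to each other
```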

Fifth step: Bias of the volume terms
The bias of the volume term can be treated similarly to the cut term. We define $W_{ij}$ to be the weight of the edge between $X_i$ and $X_j$ if these points are connected in the graph, and zero otherwise. Note that here we do not need the condition that the points have to be on different sides of the hyperplane $S$, as we did for the cut. Then, for a point $x$, if we assume that the density is uniform within distance $r_n$ around $x$,

$$\mathbb{E}(W_{12}\,|\,X_1=x)=\int_{B(x,r_n)}f_n(\mathrm{dist}(x,y))\,p(y)\,dy=p(x)\int_{B(x,r_n)}f_n(\mathrm{dist}(x,y))\,dy=d\,\eta_d\int_0^{r_n}u^{d-1}f_n(u)\,du\;p(x),$$

where the last integral transform follows with Lemma 5. Integrating over $x$ we obtain

$$\mathbb{E}(W_{12})=\int_{\mathbb{R}^d}\mathbb{E}(W_{12}\,|\,X_1=x)\,p(x)\,dx=d\,\eta_d\int_0^{r_n}u^{d-1}f_n(u)\,du\int_{\mathbb{R}^d}p^2(x)\,dx.$$

Since the integral over the balls is so important in the formulation of our general results, we often call it the “ball integral” and introduce the notation

$$F_B^{(q)}(r)=d\,\eta_d\int_0^r u^{d-1}f_n^q(u)\,du$$

for a radius $r>0$ and $q\in\{1,2\}$. The remarks made on the “cap integral” $F_C^{(q)}$ above also apply to the “ball integral” $F_B^{(q)}$.
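The polar-coordinates identity behind the ball integral can also be checked numerically. The sketch below (our own, for $d=2$, $\eta_2=\pi$, and a Gaussian weight $f(u)=e^{-u^2/2}$ chosen only for illustration) compares a grid approximation of $\int_{B(0,r)}f(\mathrm{dist}(0,y))\,dy$ with $d\,\eta_d\int_0^r u^{d-1}f(u)\,du=2\pi(1-e^{-r^2/2})$.

```python
import math

# Grid-based check (our own, for d = 2, eta_2 = pi, and the Gaussian
# weight f(u) = exp(-u^2 / 2), chosen only for illustration) of
#   ∫_{B(0,r)} f(dist(0, y)) dy  =  d * eta_d * ∫_0^r u^(d-1) f(u) du,
# whose right-hand side here equals 2*pi*(1 - exp(-r^2 / 2)).
r, m = 1.2, 1500
h = 2 * r / m  # grid cell width on [-r, r]^2

lhs = 0.0
for i in range(m):
    for j in range(m):
        x = -r + (i + 0.5) * h
        y = -r + (j + 0.5) * h
        if x * x + y * y <= r * r:  # midpoint inside the ball B(0, r)
            lhs += math.exp(-(x * x + y * y) / 2) * h * h

rhs = 2 * math.pi * (1 - math.exp(-r * r / 2))
print(lhs, rhs)  # both ≈ 3.22
```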

Sixth step: Plugging in the weight functions
Having derived results on the bias terms of the cut and the volume for general weight functions, we can now plug in the specific weight functions in which we are interested in this paper. This boils down to the evaluation of the “cap” and “ball” integrals $F_C^{(q)}$ and $F_B^{(q)}$.