Covariate-assisted spectral clustering

11/08/2014 · by Norbert Binkiewicz, et al. · University of Wisconsin-Madison and Johns Hopkins University

Biological and social systems consist of myriad interacting units. The interactions can be represented in the form of a graph or network. Measurements of these graphs can reveal the underlying structure of these interactions, which provides insight into the systems that generated the graphs. Moreover, in applications such as connectomics, social networks, and genomics, graph data are accompanied by contextualizing measures on each node. We utilize these node covariates to help uncover latent communities in a graph, using a modification of spectral clustering. Statistical guarantees are provided under a joint mixture model that we call the node-contextualized stochastic blockmodel, including a bound on the mis-clustering rate. The bound is used to derive conditions for achieving perfect clustering. For most simulated cases, covariate-assisted spectral clustering yields results superior to regularized spectral clustering without node covariates and to an adaptation of canonical correlation analysis. We apply our clustering method to large brain graphs derived from diffusion MRI data, using the node locations or neurological region membership as covariates. In both cases, covariate-assisted spectral clustering yields clusters that are easier to interpret neurologically.


Code Repositories

CASC: scripts for the covariate-assisted spectral clustering paper.

CASC: Covariate Assisted Spectral Clustering Rcpp package.

1 Introduction

Modern experimental techniques in areas such as genomics and brain imaging generate vast amounts of structured data, which contain information about the relationships of genes or brain regions. Studying these relationships is essential for solving challenging scientific problems, but few computationally feasible statistical techniques incorporate both the structure and diversity of these data.

A common approach to understanding the behavior of a complex biological or social system is to first discover blocks of highly interconnected units, also known as communities or clusters, that serve or contribute to a common function. These might be genes that are involved in a common pathway or areas in the brain with a common neurological function. Typically, we only observe the pairwise relationships between the units, which can be represented by a graph or network. Analyzing networks has become an important part of the social and biological sciences. Examples of such networks include gene regulatory networks, friendship networks, and brain graphs. If we can discover the underlying block structure of such graphs, we can gain insight from the common characteristics or functions of the units within a block.

Existing research has extensively studied the algorithmic and theoretical aspects of finding node clusters within a graph using Bayesian, maximum likelihood, and spectral approaches. Unlike model-based methods, spectral clustering is a relaxation of a cost minimization problem and has been shown to be effective in various settings (Ng et al., 2002; Von Luxburg, 2007). Modifications of spectral clustering, such as regularized spectral clustering, are accurate even for sparse networks (Chaudhuri et al., 2012; Amini et al., 2013; Qin & Rohe, 2013). On the other hand, certain Bayesian methods offer additional flexibility in how nodes are assigned to blocks, allowing a single node to belong to multiple blocks or a mixture of blocks (Nowicki & Snijders, 2001; Airoldi et al., 2008). Maximum likelihood approaches can enhance interpretability by embedding nodes in a latent social space and providing methods for quantifying statistical uncertainty (Hoff et al., 2002; Handcock et al., 2007; Amini et al., 2013). For large graphs, spectral clustering is one of the few computationally feasible methods with an algorithmic guarantee of finding the globally optimal partition.

The structured data generated by modern technologies often contain additional measurements that can be represented as graph node attributes or covariates. For example, these could be personal profile information in a friendship network or the spatial location of a brain region in a brain graph. There are two potential advantages of utilizing node covariates in graph clustering. First, if the covariates and the graph have common latent structure, then the node covariates provide additional information to help estimate this structure. Even if the covariates and the graph do not share exactly the same structure, some similarity is sufficient for the covariates to assist in the discovery of the graph structure. Second, by using node covariates in the clustering procedure, we enhance the relative homogeneity of covariates within a cluster and filter out partitions that fail to align with the important covariates. This allows for easy contextualization of the clusters in terms of the member nodes’ covariates, providing a natural way to interpret the clusters.

Methods that utilize both node covariates and the graph to cluster the nodes have previously been introduced, but many rely on ad hoc or heuristic approaches and none provide theoretical guarantees for statistical estimation. Most existing methods can be broadly classified into Bayesian approaches, spectral techniques, and heuristic algorithms. Many Bayesian models focus on categorical node covariates and are often computationally expensive (Chang et al., 2010; Balasubramanyan & Cohen, 2011). A recent Bayesian model proposed by Yang et al. (2013) can discover multi-block membership of nodes with binary node covariates; this method has linear update time in the network size, but does not guarantee linear-time convergence. Heuristic algorithms use various approaches, including embedding the network in a vector space, at which point more traditional methods can be applied to the vector data (Gibert et al., 2012), or using the covariates to augment the graph and applying graph clustering methods that tune the relative weights of node-to-node and node-to-covariate edges (Zhou et al., 2009). A commonly used spectral approach to incorporating node covariates directly alters the edge weights based on the similarity of the corresponding nodes' covariates, then applies traditional spectral clustering to the weighted graph (Neville et al., 2003; Gunnemann et al., 2013).

This work introduces a spectral approach that performs well for assortative graphs and another that does not require this restriction. We give a standard definition of an assortative graph here and later define it in the context of a stochastic blockmodel.

Definition 1

(Assortative graph) A graph is assortative if nodes within the same cluster are more likely to share an edge than nodes in two different clusters.

Assortative covariate-assisted spectral clustering adds the covariance matrix of the node covariates to the regularized graph Laplacian, boosting the signal in the top eigenvectors of the sum, which is then used for spectral clustering. This works well for assortative graphs, but performs poorly otherwise. Covariate-assisted spectral clustering, which uses the square of the regularized graph Laplacian, is presented as a more general method that performs well for both assortative and non-assortative graphs. Both methods employ a tuning parameter to adjust the relative weight of the covariates and the graph; §2.3 proposes a way to choose this tuning parameter. Research on dynamic networks using latent space models has yielded an analogous form for updating latent coordinates based on a distance matrix and the latent coordinates from the previous time step (Sarkar & Moore, 2006). A similar framework can also be used to cluster multiple graphs (Eynard et al., 2015).

Previously introduced variants of our methods were derived by first considering an optimization problem that minimizes a weighted sum of the k-means and graph cut objective functions, and then solving a spectral relaxation of the original problem.

Wang et al. (2009) decided against using an additive method similar to covariate-assisted spectral clustering because setting the method’s tuning parameter is a non-convex problem. They chose to investigate a method that uses the product of the generalized inverse of the graph Laplacian and the covariate matrix instead. Shiga et al. (2007) recognized the advantage of having a tuning parameter to balance the contribution of the graph and the covariates, but did not use the stochastic blockmodel to study their method. The full utility and flexibility of these types of approaches have not yet been presented, and neither paper derives any statistical results about their performance. Furthermore, they do not consider the performance of these methods on non-assortative graphs. In contrast, we were initially motivated to develop covariate-assisted spectral clustering by its interpretation and propensity for theoretical analysis.

Very few of the clustering methods that employ both node covariates and the graph offer any theoretical results, and, to our knowledge, this paper gives the first statistical guarantee for these types of approaches. We define the node-contextualized stochastic blockmodel, which combines the stochastic blockmodel with a block mixture model for node covariates. Under this model, a bound on the mis-clustering rate of covariate-assisted spectral clustering is established in §3.2. The behavior of the bound is studied for a fixed and an increasing number of covariates as a function of the number of nodes, and conditions for perfect clustering are derived. A general lower bound is also derived, demonstrating the conditions under which an algorithm using both the node covariates and the graph can give more accurate clusters than any algorithm using only the node covariates or the graph.

For comparison, an alternative method based on an adaptation of classical canonical correlation analysis (Hotelling, 1936) is introduced, which uses the product of the regularized graph Laplacian and the covariate matrix as the input to the spectral clustering algorithm. Simulations indicate that canonical correlation performs worse than covariate-assisted spectral clustering under the node-contextualized stochastic blockmodel with Bernoulli covariates. However, canonical correlation analysis clustering is computationally faster than our clustering method and requires no tuning. In contrast, covariate-assisted spectral clustering depends on a single tuning parameter, which interpolates between spectral clustering with only the graph and only the covariates. This parameter can be set without prior knowledge by using an objective function such as the within-cluster sum of squares. Some results for determining what range of tuning parameter values should be considered are provided in the description of the optimization procedure in §2.3. Alternatively, the tuning parameter can be set using prior knowledge or to ensure the clusters achieve some desired quality, such as spatial cohesion. As an illustrative example, §5 studies brain graphs derived from diffusion magnetic resonance imaging using two different sets of node covariates. The first analysis uses spatial location. This produces clusters that are more spatially coherent than those obtained using regularized spectral clustering alone, making them easier to interpret. The second analysis uses neurological region membership, which yields partitions that closely align with neurological regions while allowing for patient-wise variability based on brain graph connectivity.

2 Methodology

2.1 Notation

Let G = (V, E) be a graph, where V is the set of vertices or nodes and E is the set of edges, which represent relationships between the nodes. Let N denote the number of nodes. Index the nodes in V by 1, ..., N; then E contains the pair (i, j) if there is an edge between nodes i and j. A graph's edge set can be represented as the adjacency matrix A ∈ {0, 1}^{N×N}, where A_ij = A_ji = 1 if (i, j) ∈ E and A_ij = 0 otherwise. We restrict ourselves to studying undirected and unweighted graphs, although with small modifications most of our results apply to directed and weighted graphs as well.

Define the regularized graph Laplacian as

L_τ = D_τ^{-1/2} A D_τ^{-1/2},

where D_τ = D + τI and D is a diagonal matrix with D_ii = Σ_j A_ij. The regularization parameter τ ≥ 0 is treated as a constant and is included to improve spectral clustering performance on sparse graphs (Chaudhuri et al., 2012). Throughout, we shall set τ = N^{-1} Σ_i D_ii, i.e., the average node degree (Qin & Rohe, 2013).
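As a concrete sketch of this definition, the regularized Laplacian with τ set to the average node degree can be computed as follows (a minimal NumPy illustration; the function name is ours):

```python
import numpy as np

def regularized_laplacian(A, tau=None):
    """L_tau = D_tau^{-1/2} A D_tau^{-1/2} for a symmetric 0/1 adjacency A.

    tau defaults to the average node degree, as in the text.
    """
    degrees = A.sum(axis=1)
    if tau is None:
        tau = degrees.mean()  # average node degree
    d_inv_sqrt = 1.0 / np.sqrt(degrees + tau)
    # Scale rows and columns by D_tau^{-1/2} without forming the diagonal matrix.
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```

For a two-node graph with a single edge, each degree is 1 and τ = 1, so both off-diagonal entries of L_τ become 1/2.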

For the graph G, let each node i in the set V have an associated bounded covariate vector X_i ∈ R^R, and let X ∈ R^{N×R} be the covariate matrix in which each row corresponds to a node covariate vector. Let ||·|| denote the spectral norm and ||·||_F denote the Frobenius norm. Let 1{·} denote the indicator function. For sequences a_N and b_N, a_N ≍ b_N if and only if a_N = O(b_N) and b_N = O(a_N).

2.2 Spectral clustering for a graph with node covariates

The spectral clustering algorithm has been employed to cluster graph nodes using various functions of the adjacency matrix. For instance, applying the algorithm to L_τ corresponds to regularized spectral clustering, where the value of the regularization parameter is set prior to running the algorithm. All of the methods we consider employ this algorithm, but each uses a different input matrix, as defined later.

Spectral clustering.
Given an input matrix M and a number of clusters K:
Step 1. Find the eigenvectors u_1, ..., u_K corresponding to the K largest eigenvalues of M.
Step 2. Use the eigenvectors as columns to form the matrix U ∈ R^{N×K}.
Step 3. Form the matrix U* by normalizing each of U's rows to have unit length.
Step 4. Run k-means clustering with K clusters, treating each row of U* as a point in R^K.
Step 5. If the ith row of U* falls in the jth cluster, assign node i to cluster j.
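The five steps above can be sketched as follows, assuming SciPy is available (a minimal illustration with a single k-means initialization, whereas in practice multiple random initializations are recommended; the function name is ours):

```python
import numpy as np
from scipy.sparse.linalg import eigsh
from scipy.cluster.vq import kmeans2

def spectral_cluster(M, K, seed=0):
    """Steps 1-5 of the spectral clustering algorithm for a symmetric input M."""
    # Step 1: eigenvectors of the K largest eigenvalues of M.
    _, U = eigsh(M, k=K, which='LA')
    # Step 2: U already holds the eigenvectors as its columns.
    # Step 3: normalize each row of U to unit length.
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    U_star = U / norms
    # Steps 4-5: k-means on the rows; node i gets the cluster of row i.
    _, labels = kmeans2(U_star, K, minit='++', seed=seed)
    return labels
```

Run on a matrix with two disconnected blocks, the rows of U* collapse onto two points and k-means recovers the blocks.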

Step 4 of the spectral clustering algorithm uses k-means clustering, which is sensitive to initialization. In order to reduce this sensitivity, we use multiple random initializations. To take advantage of available graph and node covariate data in graph clustering, it is necessary to employ methods that incorporate both of these data types. As discussed in §1, spectral clustering has many advantages over other graph clustering methods. Hence, we propose three approaches that use the spectral clustering framework and utilize both the graph structure and the node covariates.

Assortative covariate-assisted spectral clustering uses the leading eigenvectors of

L_τ + α X X^T,

where α ≥ 0 is a tuning parameter. When using Bernoulli covariates, the covariate term can be interpreted as adding to each element (i, j) of L_τ a value proportional to the number of covariates equal to one for both nodes i and j. In practice, the covariate matrix X should be parameterized as in linear regression; specifically, categorical covariates should be re-expressed with dummy variables. For continuous covariates, it can be beneficial to center and scale the columns of X before performing the analysis. The scaling helps satisfy the condition in Lemma 1, which ensures that the top eigenvectors contain block information. As demonstrated in the simulations in §4, this method is robust and has good performance for assortative graphs, but does not perform well for non-assortative graphs.

Covariate-assisted spectral clustering uses the leading eigenvectors of

L(α) = L_τ L_τ + α X X^T.

This approach performs well for non-assortative graphs and nearly as well as our assortative clustering method for assortative graphs. When there is little chance of confusion, L will be used for L(α) for notational convenience.

To run covariate-assisted spectral clustering on large graphs, such as the brain graphs in §5, the top eigenvectors of L(α) are computed using the implicitly restarted Lanczos bidiagonalization algorithm (Baglama & Reichel, 2006). At each iteration, the algorithm only needs to compute the product L(α)v, where v is an arbitrary vector. For computational efficiency, the product is calculated as L_τ(L_τ v) + α X(X^T v). This takes advantage of the sparsity of L_τ and the low-rank structure of X X^T. Ignoring log terms and any special structure in X, each such product takes on the order of |E| + NR operations, where R is the number of columns in X. The graph clusters are obtained by iteratively employing the spectral clustering algorithm on L(α) while varying the tuning parameter until an optimal value is obtained. The details of this procedure are described in the next section.
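The matrix-free product just described can be sketched with SciPy's LinearOperator, so that each matrix-vector product is evaluated as L_τ(L_τ v) + α X(Xᵀ v) without ever forming the N×N matrix; here we substitute SciPy's ARPACK-based Lanczos routine for the Baglama–Reichel algorithm, and the interface is our own:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def casc_eigenvectors(L, X, alpha, K):
    """Top-K eigenvectors of L @ L + alpha * X @ X.T, matrix-free.

    Each matrix-vector product costs two multiplications by the (sparse)
    Laplacian L and two by the N x R covariate matrix X, exploiting the
    sparsity of L and the low rank of X @ X.T.
    """
    N = X.shape[0]

    def matvec(v):
        return L @ (L @ v) + alpha * (X @ (X.T @ v))

    op = LinearOperator((N, N), matvec=matvec, dtype=float)
    _, U = eigsh(op, k=K, which='LA')
    return U
```

The same function works whether L is a dense array or a scipy.sparse matrix, since only the `@` product is used.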

As an alternative, we propose a modification of classical canonical correlation analysis (Hotelling, 1936) whose similarity matrix is the product of the regularized graph Laplacian and the covariate matrix,

L_CCA = L_τ X.

The spectral clustering algorithm is employed on L_CCA to obtain node clusters when the number of covariates, R, is greater than or equal to the number of clusters, K. This approach inherently provides a dimensionality reduction in the common case where the number of covariates is much less than the number of nodes. If R ≪ N, then spectral clustering with L_CCA has a faster running time than covariate-assisted spectral clustering.
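Because the similarity matrix here is a rectangular N×R matrix, one natural way to run the spectral clustering algorithm on it is to use its top left singular vectors in place of eigenvectors; this substitution is an assumption of ours, not spelled out above:

```python
import numpy as np

def cca_cluster_embedding(L, X, K):
    """Embedding for the canonical-correlation-style variant.

    L @ X is N x R, so the top-K left singular vectors stand in for
    the eigenvectors of the symmetric case; their rows would then be
    normalized and clustered with k-means as usual.
    """
    U, _, _ = np.linalg.svd(L @ X, full_matrices=False)
    return U[:, :K]
```

Computing the SVD of an N×R matrix rather than eigenvectors of an N×N matrix is what gives this variant its speed advantage when R ≪ N.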

2.3 Setting the tuning parameter

In order to perform spectral clustering with L(α), it is necessary to determine a specific value for the tuning parameter α. The tuning procedure presented here presumes that both the graph and the covariates contain some block information, as demonstrated by the simulations in §4. In practice, an initial test can be used to determine whether the graph and the covariates contain common block information; such a test will be presented in future work. The tuning parameter should be chosen to achieve a balance between L_τ L_τ and α X X^T, such that the information in both is captured in the leading eigenspace of L(α). For large values of α, the leading eigenspace of L(α) is approximately the leading eigenspace of α X X^T. For small values of α, the leading eigenspace of L(α) is approximately the leading eigenspace of L_τ L_τ. A good initial choice of α is the value that makes the leading eigenvalues of the two terms equal, namely α_0 = λ_1(L_τ L_τ)/λ_1(X X^T), where λ_1(M) denotes the largest eigenvalue of a matrix M.

There is a finite range of α in which the leading eigenspace of L(α) is not a continuous function of α; outside this range, the leading eigenspace is always continuous in α. In simulations, the clustering results are exceedingly stable in the continuous range of α. Hence, only values of α inside a finite interval need to be considered. This section gives an interval that can be computed with only the eigenvalues of L_τ L_τ and X X^T. Within this interval, α is chosen to minimize an objective function. Empirical results demonstrating these properties are given in the Supplementary Material.

Let λ_i(M) be the ith largest eigenvalue of matrix M. To find the initial range of α, define a static vector as a vector that satisfies either condition (a) or (b) below.

These are vectors on which L_τ L_τ and X X^T are highly differentiated; perhaps there is a cluster in the graph that does not appear in the covariates, or vice versa. These static vectors produce discontinuities in the leading eigenspace of L(α).

For example, let v be an eigenvector of X X^T that is a static vector of type (a); then, as α changes, it will remain a slightly perturbed eigenvector of L(α). When α times its eigenvalue is close to the Kth largest eigenvalue of L(α), then in some neighborhood of that α, the slightly perturbed version of v will transition into the leading eigenspace of L(α). This transition corresponds to a discontinuity in the leading eigenspace.

As shown in the Supplementary Material, the concept of static vectors with respect to L_τ L_τ and X X^T can be used to find a limited range of α for possible discontinuities. The range of values for which discontinuities can occur is [α_min, α_max], where

α_min = λ_K(L_τ L_τ)/λ_1(X X^T),   α_max = λ_1(L_τ L_τ)/{λ_K(X X^T) − λ_{K+1}(X X^T)}.
The tuning parameter is chosen to be the value of α that minimizes the k-means objective function, the within-cluster sum of squares,

Σ_{j=1}^{K} Σ_{i ∈ C_j} || [U*(α)]_i − c_j ||²,

where [U*(α)]_i is the ith row of U*(α), c_j is the centroid of the jth cluster from k-means clustering, and C_j is the set of points in the jth cluster. Hence, the tuning parameter is the minimizer of this objective over [α_min, α_max].
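The selection rule can be sketched as a grid search over the tuning interval, minimizing the within-cluster sum of squares; the interval endpoints and the embedding function U*(α) are supplied by the caller, and the grid size is an arbitrary choice of ours:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def choose_alpha(embed, alpha_min, alpha_max, K, n_grid=20, seed=0):
    """Pick the tuning parameter by minimizing the k-means objective.

    embed(alpha) should return the row-normalized eigenvector matrix
    U*(alpha); the alpha with the smallest within-cluster sum of
    squares over the grid is returned.
    """
    best_wcss, best_alpha = np.inf, alpha_min
    for alpha in np.linspace(alpha_min, alpha_max, n_grid):
        U = embed(alpha)
        centroids, labels = kmeans2(U, K, minit='++', seed=seed)
        wcss = ((U - centroids[labels]) ** 2).sum()  # within-cluster sum of squares
        if wcss < best_wcss:
            best_wcss, best_alpha = wcss, alpha
    return best_alpha
```

In practice each call to `embed` would recompute the top eigenvectors of L(α), so a coarse grid (or a smarter one-dimensional search) keeps the cost manageable.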

3 Theory

3.1 Node-contextualized stochastic blockmodel

To illustrate what covariate-assisted spectral clustering estimates, this section proposes a statistical model for a network with node covariates and shows that covariate-assisted spectral clustering is a weakly consistent estimator of certain parameters in the proposed model. To derive statistical guarantees for covariate-assisted spectral clustering, we assume a joint mixture model for the graph and the covariates. Under this model, each node belongs to one of K blocks and each edge in the graph corresponds to an independent Bernoulli random variable. The probability of an edge between any two nodes depends only on the block memberships of those nodes (Holland et al., 1983). In addition, each node is associated with R independent covariates with bounded support, whose expectations depend only on the block membership; the number of covariates can grow with the number of nodes.

Definition 2

(Node-contextualized stochastic blockmodel) Consider a set of N nodes. Let Z ∈ {0, 1}^{N×K} assign the N nodes to one of the K blocks, where Z_ik = 1 if node i belongs to block k. Let B ∈ [0, 1]^{K×K} be full rank and symmetric, where B_kl is the probability of an edge between nodes in blocks k and l. Conditional on Z, the elements of the adjacency matrix A are independent Bernoulli random variables. The population adjacency matrix, E(A | Z) = Z B Z^T, fully identifies the conditional distribution of A.

Let X ∈ R^{N×R} be the covariate matrix and M ∈ R^{K×R} be the covariate expectation matrix, where M_kr is the expectation of the rth covariate of a node in the kth block. Conditional on Z, the elements of X are independent and the population covariate matrix is

E(X | Z) = Z M.   (1)

Under the node-contextualized stochastic blockmodel, covariate-assisted spectral clustering seeks to estimate the block membership matrix Z. In the next section, we show its consistency. If B is assumed to be positive definite, the same results hold for assortative covariate-assisted spectral clustering up to a constant factor. These results motivate the definition of an assortative graph in the context of the node-contextualized stochastic blockmodel.

Definition 3

(Assortative graph) A graph generated under the node-contextualized stochastic blockmodel is said to be assortative if the block probability matrix B corresponding to the graph is positive definite. Otherwise, the graph is said to be non-assortative.

Many common networks are assortative, such as friendship networks or brain graphs. Dating networks are one example of a non-assortative network. Most relationships in a dating network are heterosexual, consisting of one male and one female. In a stochastic blockmodel where the blocks are constructed by gender, B will have small diagonal elements and large off-diagonal elements, producing more relationships between genders than within genders. Such a matrix is not positive definite. More generally, non-assortative stochastic blockmodels tend to generate more edges between blocks and fewer edges within blocks. These non-assortative blocks appear in the spectrum of L_τ as large negative eigenvalues. Squaring L_τ makes these eigenvalues large and positive, matching the sign of the positive eigenvalues of X X^T.
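A toy numerical check of this claim, using an illustrative two-block "dating network" probability matrix (the specific probabilities are ours):

```python
import numpy as np

# A non-assortative two-block probability matrix: edges are rare within
# blocks and common between them (e.g. blocks split by gender in a
# dating network). The probabilities are illustrative.
B = np.array([[0.05, 0.60],
              [0.60, 0.05]])

eigvals = np.linalg.eigvalsh(B)       # ascending: [-0.55, 0.65]
assert eigvals[0] < 0                 # B is not positive definite

# Squaring turns the large negative eigenvalue into a large positive one.
eigvals_squared = np.linalg.eigvalsh(B @ B)
assert np.all(eigvals_squared > 0)
```

The same sign-flipping happens in the spectrum of L_τ L_τ, which is why covariate-assisted spectral clustering can handle non-assortative graphs.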

3.2 Covariate-assisted spectral clustering is statistically consistent under the node-contextualized stochastic blockmodel

The proof of consistency for covariate-assisted spectral clustering under the node-contextualized stochastic blockmodel requires three results. Lemma 1 expresses the eigendecomposition of the population version of the covariate-assisted Laplacian in terms of the block membership matrix Z. Theorem 1 bounds the spectral norm of the difference between the sample and population covariate-assisted Laplacians. Then, the Davis–Kahan theorem (Davis & Kahan, 1970) bounds the difference between the sample and population eigenvectors in Frobenius norm. Finally, Theorem 3 combines these results to establish a bound on the mis-clustering rate of covariate-assisted spectral clustering. The argument largely follows Qin & Rohe (2013). The results provided here do not include the effects of Step 3 of the spectral clustering algorithm. The proofs are in the Supplementary Material.

Lemma 1

(Equivalence of eigenvectors and block membership)

Under the node-contextualized stochastic blockmodel, define U to have columns containing the top K eigenvectors of the population covariate-assisted Laplacian. Assume condition (i), stated below. Then there exists an orthogonal matrix O such that rows i and j of UO are equal if and only if Z_i = Z_j, where Z_i is the ith row of the block membership matrix.

Under assumption (i), the rows of the population eigenvector matrix are equal if and only if the corresponding nodes belong to the same block. This assumption requires the population eigengap to be greater than the maximum absolute difference between the sum of covariate variances within a block and the mean of these sums across all blocks. If all the covariates have equal variance in all blocks, the assumption is trivially true. Since X X^T is effectively being used as a measure of similarity between nodes, if the covariate variances across blocks are unequal, the difference in scale makes the blocks more difficult to distinguish. This is evidenced by a reduction in the eigengap proportional to this difference. In practice, this condition is not restrictive since X can be centered and normalized. To derive a bound on the mis-clustering rate, we need a bound on the difference between the population eigenvectors and the sample eigenvectors. In order to establish this bound, the following theorem bounds the spectral norm of the difference between the sample and population covariate-assisted Laplacians.

Theorem 1

(Concentration inequality) Let , , , , . For any , if and then with probability at least ,

To interpret the conditions, consider a node-contextualized stochastic blockmodel with two blocks and fixed within- and between-block probabilities. In that setting, condition (ii) restricts the sparsity of the graph, while condition (iii) requires that the number of covariates grow with the number of nodes. Now we use Theorem 1 and the Davis–Kahan theorem to bound the difference between the sample and population eigenvectors.

Theorem 2

(Empirical and population eigenvector bound) Let be the th largest eigenvalue of and be a rotation matrix. Let the columns of and contain the top eigenvectors of and , respectively. Given assumptions (i) in Lemma 1, (ii) and (iii) in Theorem 1, and (iv) , then with probability at least ,

The next theorem bounds the proportion of mis-clustered nodes. In order to define mis-clustering, recall that the spectral clustering algorithm uses k-means clustering to cluster the rows of U*. Let the sample and population centroids of the ith node be those generated by running k-means clustering on the sample and population eigenvectors, respectively. A node is correctly clustered if its sample centroid is closer to its own population centroid than to the population centroid of any other cluster. In order to avoid identifiability problems, and since clustering only requires estimation of the correct subspace, the formal definition is augmented with a rotation matrix O. The following definition formalizes this intuition.

Definition 4

(Set of mis-clustered nodes) Let be a rotation matrix that minimizes . Define the set of mis-clustered nodes as

Using the definition of mis-clustering and the result from Theorem 2, the next theorem bounds the mis-clustering rate.

Theorem 3

(Mis-clustering rate bound) Let denote the size of the largest block. Under assumptions (i) in Lemma 1, (ii) and (iii) in Theorem 1, and (iv) in Theorem 2, with probability at least ,

The asymptotics of the mis-clustering rate depend on the number of covariates and the sparsity of the graph. This is demonstrated by Corollary 1, which provides insight into how the number of covariates and graph sparsity affect the mis-clustering rate and the choice of tuning parameter.

Corollary 1

(Mis-clustering bound with increasing ) Assume and ; in addition, ; ; and . For computational convenience, assume that each block has the same number of nodes and is a multiple of , where is fixed. Let , , and , where , then the mis-clustering bound from Theorem 3 becomes

When the mis-clustering rate is minimized with , which yields a rate of . When the mis-clustering rate is minimized with , which gives a rate of .

These results demonstrate that the tuning parameter is determined by the balance between the number of covariates and the sparsity of the graph. A non-zero optimal value signifies that including covariates does improve the mis-clustering bound, although it might not improve the asymptotics of the bound. Furthermore, the asymptotic mis-clustering rate is determined by the asymptotic behavior of the number of covariates or the mean number of edges, whichever is greater as determined by and , respectively. For example, if we allow the number of covariates to grow with the number of nodes such that or and let the mean number of edges increase such that or , then the covariates and the graph contribute equally to the asymptotic mis-clustering rate.

Remark 1

It is instructive to compare the value of suggested by the results in Corollary 1 with the possible values of based on the optimization procedure in §2.3. Computing and with , instead of , for convenience, gives and . Therefore, the optimization procedure will yield . This agrees with the results of Corollary 1 when the mean node degree and the number of covariates grow at the same rate with respect to the number of nodes or .

Corollary 2

(Perfect clustering with increasing ) Based on Theorem 3, perfect clustering requires . Under the simplifying assumptions given in Corollary 1, perfect clustering is achieved when the number of covariates is .

3.3 General lower bound

The next theorem gives a lower bound for clustering a graph with node covariates. This bound uses Fano's inequality and is similar to the bound shown in Chaudhuri et al. (2012) for a graph without node attributes. We restrict ourselves to a node-contextualized stochastic blockmodel with two blocks, but allow an arbitrary number of covariates R.

Theorem 4

(Covariate-assisted clustering lower bound) Consider the node-contextualized stochastic blockmodel with two blocks. Let the Kullback–Leibler divergence of the covariates be defined with respect to the distributions of each covariate under opposite block assignments. Then, for fixed block probabilities and covariate distributions, in order to correctly recover the block assignments with high probability, the model parameters must satisfy

Remark 2

(Lower bound interpretation) If

(2)
(3)

then only an algorithm that uses both the graph and node covariates can yield correct blocks with high probability. Condition (2) specifies when the graph is insufficient and condition (3) specifies when the covariates are insufficient to individually recover the block membership with high probability.

Remark 3

(Bound comparison) The upper bound for covariate-assisted spectral clustering in Theorem 3 can be compared to the general lower bound. Simplifying the general lower bound gives a condition for perfect clustering with high probability; this is the same condition as for regularized spectral clustering. According to Theorem 3, perfect clustering with high probability for this method requires a condition that, as highlighted in Corollary 2, cannot be satisfied for a fixed number of covariates, so it cannot be shown that covariate-assisted spectral clustering achieves perfect clustering in that regime. This is consistent with similar results for regularized spectral clustering.

4 Simulations

4.1 Varying graph or covariate signal

In these simulations, consider a node-contextualized stochastic blockmodel with blocks and node Bernoulli covariates. Define the block probabilities for the assortative graph, the non-assortative graph, and the covariates as

(4)

where and . This implies that for the assortative graph the probability of an edge within a block is , which is greater than , the probability of an edge between two blocks. The opposite is true for the non-assortative graph. In the th block, the probability of the th covariate being one is and the probability of the other covariates being one is .
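A generator for data from this model can be sketched as follows; the block labels, edge probabilities, and covariate probabilities are generic arguments rather than the specific simulation values above:

```python
import numpy as np

def simulate_ncsbm(z, B, M, rng=None):
    """Draw (A, X) from a node-contextualized stochastic blockmodel.

    z : length-N array of block labels in {0, ..., K-1}
    B : K x K symmetric matrix of edge probabilities
    M : K x R matrix of Bernoulli covariate probabilities
    """
    rng = np.random.default_rng(rng)
    N = len(z)
    P = B[np.ix_(z, z)]                    # N x N matrix of edge probabilities
    upper = np.triu(rng.random((N, N)) < P, k=1)
    A = (upper | upper.T).astype(int)      # symmetric adjacency, no self-loops
    X = (rng.random((N, M.shape[1])) < M[z]).astype(int)
    return A, X
```

Feeding the resulting (A, X) pairs to the five methods above is how a simulation study of this kind would be assembled.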

These simulations compare five different methods. The first three are canonical correlation analysis clustering, covariate-assisted spectral clustering, and assortative covariate-assisted spectral clustering, which utilize the node edges as well as the node covariates to cluster the graph. The other two methods utilize either the node edges or the node covariates. For the node edges, regularized spectral clustering is used; for the node covariates, spectral clustering on the covariate matrix is used.

The first set of simulations investigates the effect of varying the block signal in the graph on the mis-clustering rate. This is done by varying the difference in the within and between block probabilities, . The simulations are conducted for the assortative and non-assortative graphs, using and in (4), with results shown in Fig. 1(a) and (b), respectively. In the assortative case, our assortative clustering method performs better than any of the other methods. Covariate-assisted spectral clustering performs slightly worse than the assortative variant, but still outperforms the other methods. In the non-assortative case, our clustering method has the best performance, while the assortative version always does worse than using only the covariates or the graph.


Figure 1: Average mis-clustering rate of five different clustering methods: covariate-assisted spectral clustering (solid), assortative covariate-assisted spectral clustering (dash), canonical correlation analysis clustering (dot), regularized spectral clustering (dot-dash), and spectral clustering on the covariate matrix (long dash). The fixed parameters are , , , , and .

The second set of simulations investigates the effect of varying the block signal of the covariates on the mis-clustering rate by changing the difference between the block specific covariate probabilities, . As shown in Fig. 1(c), assortative covariate-assisted spectral clustering tends to have a lower mis-clustering rate than the other methods. Only when the difference in the covariate block probabilities is very small and effectively becomes a noise term does regularized spectral clustering outperform our assortative clustering method. For the non-assortative case shown in Fig. 1(d), assortative covariate-assisted spectral clustering performs poorly, while covariate-assisted spectral clustering is able to outperform all other methods for a sufficiently large difference in the covariate block probabilities. This is expected since the covariates in the assortative variant effectively increase the edge weights within a block, which will smooth out the block structure specified by .

4.2 Model misspecification

The final simulation considers the case where the block membership in the covariates is not necessarily the same as the block membership in the graph. The node Bernoulli covariates no longer satisfy (1) in Definition 2, but , where is a block membership matrix that differs from . As such, the underlying clusters in the graph do not align with the clusters in the covariates. This simulation varies the proportion of block assignments in which agree with the block assignments in to investigate the robustness of the methods to this form of model misspecification. The results in Fig. 1(e) show that assortative covariate-assisted spectral clustering is robust to covariate block membership model misspecification for the assortative graph. The mis-clustering rate shown is computed relative to the block membership of the graph. For this specific case, our assortative clustering method is able to achieve a lower mis-clustering rate than regularized spectral clustering as long as the proportion of agreement between the block membership of the graph and the covariates is greater than 0.7. Since a three block model is used, the lowest proportion of agreement possible is one third due to identifiability. For the non-assortative graph, Fig. 1(f), covariate-assisted spectral clustering requires a slightly higher level of agreement at 0.8.
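The misspecified covariate memberships used in this simulation can be generated by resampling a fraction of the graph block labels. The sketch below is one such resampling scheme, not necessarily the paper's; note that resampled labels can coincide with the originals, so the realized agreement is at least the nominal proportion.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_membership(z, agreement, K, rng=rng):
    """Return a covariate block membership vector that agrees with the
    graph membership z on roughly a `agreement` proportion of nodes,
    by resampling the remaining labels uniformly at random (sketch)."""
    z_cov = z.copy()
    flip = rng.random(len(z)) > agreement        # nodes to resample
    z_cov[flip] = rng.integers(0, K, size=flip.sum())
    return z_cov
```

Covariates are then drawn from the block-specific probabilities using `z_cov` in place of the graph memberships.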

5 Clustering Diffusion MRI Connectome Graphs

Assortative covariate-assisted spectral clustering was applied to brain graphs recovered from diffusion magnetic resonance imaging (Craddock et al., 2013). Each node in a brain graph corresponds to a voxel in the brain. The edges between nodes are weighted by the number of estimated fibers that pass through both voxels. The center of a voxel is treated as the spatial location of the corresponding node. These spatial locations were centered and used as the first set of covariates in the analysis. The data set used in this analysis contains 42 brain graphs obtained from 21 different individuals. Only the largest connected components of the brain graphs were used, ranging in size from 707,000 to 935,000 nodes, with a mean density of 744 edges per node. In addition, the brain graphs contain brain atlas labels corresponding to 70 different neurological brain regions, which were treated as a second set of covariates.


Figure 2: A section of a brain graph with nodes plotted at their spatial location and colored by cluster membership for three different clustering methods and a brain atlas.

Whereas the simulations attempted to demonstrate the effectiveness of our clustering method in utilizing node covariates to help discover the underlying block structure of the graph, this analysis focuses on the ability of our clustering method to discover highly connected clusters with relatively homogeneous covariates. The node covariates contextualize the brain clusters and improve their interpretability. Like other clustering methods, covariate-assisted spectral clustering is mainly an exploratory tool which may or may not provide answers directly but can often provide insight about relationships within the data. In this example, it is used to examine the relationships between brain graph connectivity, spatial location, and brain atlas labels. The utility of covariate-assisted spectral clustering was explored by partitioning the brain graphs into 100 clusters. The brain graphs in this data set are assortative, so our assortative clustering method was used in this analysis. Since the brain graphs have heterogeneous node degrees, the rows of the eigenvector matrix were normalized when applying the spectral clustering algorithm to improve the clustering results (Qin & Rohe, 2013). Figure 2 shows a section of a sample brain graph with nodes plotted at their corresponding spatial locations and colored by cluster membership. For reference, the neurological brain atlas clusters with 70 different regions and an additional category for unlabeled nodes are also plotted. The brain graphs were clustered using three different approaches: regularized spectral clustering, and assortative covariate-assisted spectral clustering with either spatial location or brain atlas membership as covariates. The tuning parameter was set using the procedure in §2.3, and the values were with spatial location covariates and with brain atlas membership covariates.
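The row normalization borrowed from Qin & Rohe (2013) is simple to state in code. In this sketch (the function name and the small epsilon guard against zero rows are ours), each row of the eigenvector matrix is rescaled to unit length before k-means:

```python
import numpy as np

def row_normalize(U, eps=1e-12):
    """Rescale each row of the eigenvector matrix U to unit Euclidean
    length, so that k-means is driven by the direction of each node's
    spectral embedding rather than its magnitude, which varies with
    node degree (sketch)."""
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    return U / np.maximum(norms, eps)
```

This step matters for graphs with heterogeneous degrees, where low-degree nodes would otherwise sit near the origin of the embedding.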

As shown in Fig. 2, regularized spectral clustering yielded spatially diffuse clusters of densely connected nodes. By adding spatial location using covariate-assisted spectral clustering, we obtained densely connected and spatially coherent clusters. Regularized spectral clustering had two clusters of about 80,000 nodes and four clusters with fewer than 1,000 nodes, while the largest cluster from our clustering method had fewer than 50,000 nodes and no cluster had fewer than 1,000 nodes. The greater spatial coherence and the increased uniformity in cluster size demonstrated by covariate-assisted spectral clustering are both important qualities for interpreting the partition. In addition, the clusters have a greater similarity with the brain atlas labels, though this similarity is still not very substantial. This suggests that brain graph connectivity is governed by more than just the neurological regions in the brain atlas.

The adjusted Rand index between different partitions

               ACASC-X   Brain Atlas   ACASC-BA    SC-X
RSC             0.095       0.082        0.085    0.092
ACASC-X           -         0.169        0.189    0.278
Brain Atlas       -           -          0.838    0.226
ACASC-BA          -           -            -      0.227

RSC, regularized spectral clustering; ACASC-X, assortative covariate-assisted spectral clustering with spatial location; ACASC-BA, assortative covariate-assisted spectral clustering with brain atlas membership; SC-X, clustering on spatial location only.

The relation between the brain atlas and the brain graph was studied further by treating brain atlas membership as the node covariates. This allowed the discovery of highly connected regions with relatively homogeneous brain atlas labels. As shown in Fig. 2, relative to the brain atlas, some of the clusters are broken up, a few are joined together, and others overlap with multiple brain atlas regions, but the high similarity is clearly visible. Importantly, this approach gives us clusters that are highly aligned with known neurological regions while allowing for individual variability of the partitions based on brain graph connectivity. The adjusted Rand index was used to quantify the similarity of the partitions of a brain graph specified by the different clustering methods and the brain atlas in Table 5. The alignment between the partition based only on spatial location and either covariate-assisted spectral clustering with spatial location or the brain atlas is greater than the alignment between those two methods themselves. This indicates that both the clusters from our method and the brain atlas are spatially coherent yet not highly overlapping.
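The adjusted Rand index used here is available in standard libraries. A minimal example with scikit-learn, illustrating that the index is invariant to how the cluster labels are named:

```python
from sklearn.metrics import adjusted_rand_score

# Two partitions that group the nodes identically score 1.0 even when
# the label names differ; unrelated partitions score near 0, and the
# index is corrected for chance agreement.
a = [0, 0, 1, 1, 2, 2]
b = [1, 1, 0, 0, 2, 2]   # same partition, permuted label names
print(adjusted_rand_score(a, b))  # 1.0
```

This permutation invariance is what makes the index suitable for comparing partitions produced by different clustering methods, where label identities carry no meaning.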


Figure 3: Heat maps comparing the partitions of 42 brain graphs using the adjusted Rand index with adjacent rows corresponding to two scans of the same individual.

Brain graph connectivity appears to be giving the clusters that use spatial location a different configuration from the brain atlas, as seen in Fig. 2. As expected, covariate-assisted spectral clustering with brain atlas membership has the highest adjusted Rand index partition similarity with the brain atlas but low similarity with the regularized spectral clustering partitions. If a more balanced partition alignment is desired, the tuning parameter can be adjusted accordingly.

The relationship between all 42 brain graphs was analyzed by using the adjusted Rand index to compare partitions between them, as shown in Fig. 3. To conduct the comparison, the nodes of each brain graph were matched by spatial location, and any non-matching nodes were ignored. Both regularized spectral clustering and covariate-assisted spectral clustering with spatial location distinguish clearly between individuals based on their brain graph partitions, but the latter gave partitions which are more homogeneous both within and between individuals. This increased partition consistency is favorable since a high degree of variation in the clusters between individuals would make them more difficult to interpret.

6 Discussion

Although the node-contextualized stochastic blockmodel is useful for studying graph clustering methods, data can deviate from the model’s assumptions. More generally, covariate-assisted spectral clustering can be used to find highly connected communities with relatively homogeneous covariates, where the balance between these two objectives is controlled by the tuning parameter and can be set empirically or decided by the analyst. Relatively homogeneous covariates contextualize the clusters, making them easier to interpret and allowing the analyst to focus on partitions that align with important covariates. Beyond its scientific interest, the brain graph analysis demonstrates the computational efficiency of our clustering method since the analysis could not have been feasibly conducted with existing methods. Nevertheless, determining an optimal tuning parameter value still presents a computational burden. Using a low rank update algorithm for eigenvector decomposition can further reduce this cost.

This work is meant as a step towards statistical understanding of graphs with node covariates. Further work is needed to better understand the use of covariate-assisted spectral clustering for network contextualization. Methods for determining the relative contribution of the graph and the covariates to a graph partition and tests to signify which covariates are informative would be useful tools. Ultimately, a thorough examination of the relationship between graph structure and node covariates is essential for a deep understanding of the underlying social or biological system.

Acknowledgment

This research was supported by NIH grant 5T32HL083806-08, NSF grants DMS-1309998, DMS-1612456, ARO grant W911NF1510423, the Defense Advanced Research Projects Agency (DARPA) SIMPLEX program through SPAWAR contract N66001-15-C-4041 and DARPA GRAPHS N66001-14-1-4028. The authors would like to thank Yilin Zhang, Tai Qin, Jun Tao, Soumendu Sundar Mukherjee, and Zoe Russek for helpful comments.

Supplementary Material

6.1 Discontinuous Transitions in the Leading Eigenspace of

Discontinuous changes in the leading eigenspace of are a major concern when determining an optimal value since they have a large effect on the clustering results. They can be studied algebraically by expressing in terms of the eigenvectors of and . This approach is motivated by Brand (2006).

Let and be the orthogonal basis of the column space of , the component of orthogonal to . Let and , so . Then, can be written as follows.

Note that

and

Hence, for any such that , the th row and column of will be zero except for the diagonal element. This means that will not be rotated by and will be an eigenvector of for all values of . The eigenvalue will not change either, but its position relative to the other eigenvalues will change with . The change in the relative position of will result in a discontinuous transition in the leading eigenspace of if .

For any such that , is a column in by construction. Row in the lower left block of is

and, since is symmetric, this is also column in the upper right block of . The lower right block of has row , and by symmetry column , given by

Thus, for any such that the th row and column of will be zero except for the diagonal element. This means that and will be an eigenvector and eigenvalue of for all values of , but will occupy different relative positions in the eigendecomposition based on the value of . The change in the relative position of will result in a discontinuous transition in the leading eigenspace of if .

Knowing the interval on which such discontinuous transitions are possible can reduce the computational burden of choosing an optimal . The values of for which transitions occur can be identified as points at which the eigengap equals zero, . First, consider the lowest possible value of for which such a transition can occur, . Note that , where the equality holds when is orthogonal to and is sufficiently small, and , where the equality holds when is identical to . Hence, the earliest possible transition occurs when

For the highest value of for which such a transition is possible, consider . Following the above argument for with and interchanged, a symmetric result is obtained with the additional dependence on the number of covariates, . This result yields,

Therefore, discontinuous transitions in the leading eigenspace of can only occur in the interval .

6.2 Empirical Results for Choosing

Figure 4 presents some empirical details to demonstrate how the within cluster sum of squares and the mis-clustering rate vary with the tuning parameter . The simulations shown in the figure use the same model structure described in §4 of the paper. The results show that the minimum of the within cluster sum of squares falls within the prescribed range of , . Furthermore, the minimum of the within cluster sum of squares tends to align with the minimum of the mis-clustering rate. Similar results were observed for other parameter settings.
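The empirical tuning procedure, choosing the tuning parameter by minimizing the k-means within-cluster sum of squares, can be sketched as a grid search. This is an illustrative sketch, not the paper's exact optimization procedure; the `build_matrix` callback (which would return, e.g., the regularized Laplacian plus the weighted covariate term) and the grid itself are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_alpha(build_matrix, alphas, K):
    """Grid search for the tuning parameter (sketch): for each candidate
    alpha, cluster the top-K eigenvectors of the combined matrix and
    keep the alpha minimizing the within-cluster sum of squares, which
    scikit-learn exposes as the k-means inertia."""
    best_alpha, best_wcss = None, np.inf
    for a in alphas:
        S = build_matrix(a)                      # combined similarity matrix
        _, U = np.linalg.eigh(S)                 # eigenvalues ascending
        U = U[:, -K:]                            # leading K eigenvectors
        km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(U)
        if km.inertia_ < best_wcss:
            best_alpha, best_wcss = a, km.inertia_
    return best_alpha
```

Restricting the grid to the interval derived in §6.1, outside of which discontinuous eigenspace transitions cannot occur, keeps this search affordable.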


Figure 4: The results of assortative covariate-assisted spectral clustering for a range of values. The solid line in the bottom graphs indicates the value chosen by the optimization procedure and the dashed lines indicate the interval . The fixed parameters are , , , and .

6.3 Proof of Lemma 1

This proof follows the approach used in Rohe et al. (2011) to establish the equivalence between block membership and a subset of the population eigenvectors. Note that . Define , a diagonal matrix such that , and a diagonal matrix such that .

If we let , then . Recall that is symmetric and full rank by assumption. Let , which is positive definite . Assume is chosen such that is full rank, which is true with the possible exception of a set of values of measure zero. Let and note that . Hence, is symmetric and has real eigenvalues. By spectral decomposition, let

Then,

Therefore, is the matrix of eigenvectors of , but not necessarily the top . Also, so exists and . This establishes the equivalence between block membership and a subset of the population eigenvectors. A condition will now be derived for which this equivalence holds for the top population eigenvectors. Let be a normalized eigenvector orthogonal to the span of . Because has orthogonal columns, it is full rank. As such, .

Define , , and , then

The th eigenvalue of is given by