 # Modularity Component Analysis versus Principal Component Analysis

In this paper the exact linear relation between the leading eigenvectors of the modularity matrix and the singular vectors of an uncentered data matrix is developed. Based on this analysis the concept of a modularity component is defined, and its properties are developed. It is shown that modularity component analysis can be used to cluster data in much the same way that traditional principal component analysis is used, except that modularity component analysis does not require data centering.


## 1 Introduction

The purpose of this paper is to present a development of modularity components that are analogous to principal components. It will be shown that modularity components have characteristics that are similar to those of principal components in the sense that modularity components provide for data analysis in much the same manner as principal components do. In particular, just as in the case of principal components, modularity components are shown to be mutually orthogonal, and raw data can be projected onto the directions of a number of modularity components to reveal patterns and clusters in the data. However, a drawback of principal component analysis (PCA) is that it generally requires centering or standardizing the data before determining principal components. Utilizing modularity components, on the other hand, does not require data to be centered to accurately extract important information. Among other things, this means that sparsity in the original data is preserved, whereas centering the data naturally destroys inherent sparsity.
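As a small illustration of the sparsity point, consider a hypothetical sparse data matrix (rows are attributes, columns are data points); centering every attribute fills in almost every zero:

```python
import numpy as np

# A small sparse data matrix: 3 attributes (rows) x 4 data points (columns).
# The matrix is hypothetical, chosen only to show the effect of centering.
X = np.array([[0.0, 2.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 2.0],
              [0.0, 0.0, 4.0, 0.0]])

# Centering subtracts each attribute's mean from its row.
Xc = X - X.mean(axis=1, keepdims=True)

print(np.count_nonzero(X))    # 4  -- the raw data is mostly zeros
print(np.count_nonzero(Xc))   # 12 -- centering fills in every entry
```

The centered matrix is fully dense, so any algorithm that exploits sparsity in the raw data loses that advantage after centering.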

Moreover, we will complete the comparison of modularity components with principal components by showing that the component that maximizes the modularity function of the uncentered data can replace the principal component that maximizes the variance in the centered data. Finally, just as each succeeding principal component has maximal variance with the constraint that it is orthogonal to all previous principal components, each succeeding modularity component has maximal modularity with the constraint that it is orthogonal to all prior modularity components.

Our modularity components are derived from the concept of modularity introduced by Newman and Girvan, and further explained by Newman. The modularity partitioning method starts with an adjacency matrix or similarity matrix and aims to partition a graph by maximizing the modularity. For a graph containing $n$ nodes, the modularity is defined by

 Q(s) = \frac{1}{4m}\, s^T B s, (1.1)

where $m$ is the number of edges in the graph, $B$ is the modularity matrix defined below, and $s$ is a vector that maximizes $Q$. Since the number of edges in a given graph is constant, the multiplier $\frac{1}{4m}$ is often dropped for simplicity, and the modularity becomes

 Q(s) = s^T B s. (1.2)

The modularity matrix is defined by

 B = A - \frac{dd^T}{2m}, (1.3)

where $A$ is an adjacency matrix or similarity matrix, and $d$ is the vector containing the degrees of the nodes. It is proven by Newman that the eigenvector corresponding to the largest eigenvalue of $B$ maximizes $Q$. Like the spectral clustering method, the modularity clustering method also uses the signs of entries in the dominant eigenvector to partition graphs.
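A minimal sketch of this partitioning step on a hypothetical toy graph (two triangles joined by a single edge); the graph and variable names are illustrative, not from the paper:

```python
import numpy as np

# Toy undirected graph: two triangles {0,1,2} and {3,4,5} joined by edge 2-3.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

d = A.sum(axis=1)                    # degree vector
m = d.sum() / 2                      # number of edges
B = A - np.outer(d, d) / (2 * m)     # modularity matrix, Eq. (1.3)

# Dominant eigenvector of the symmetric matrix B.
vals, vecs = np.linalg.eigh(B)
b1 = vecs[:, np.argmax(vals)]

# Partition the nodes by the signs of the entries of b1.
labels = (b1 > 0).astype(int)
```

For this graph the sign pattern of the dominant eigenvector separates the two triangles, which is the modularity-optimal two-way split.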

The modularity partitioning algorithm has been widely applied and discussed. For instance, it has been applied to reveal human brain functional networks and ecological networks, and it has been used in image processing. Blondel et al. proposed a heuristic that can reveal the community structure of large networks. Rotta and Noack compared several heuristics for maximizing modularity. DasGupta and Desai studied the complexity of modularity clustering. The limitations of the modularity maximization technique are discussed by Good et al. and by Lancichinetti and Fortunato.

By the modularity algorithm, a graph is partitioned into two parts, and a hierarchy can be built by iteratively computing the modularity matrices and their dominant eigenvectors. Repetitively partitioning a graph into two subsets may be inefficient and does not utilize the information in subdominant eigenvectors. And while there is a connection between graph partitioning and data analysis, they are not strictly equivalent, because extracting information from raw data by means of graph partitioning necessarily requires the knowledge or creation of a similarity or adjacency matrix, which in turn can only group nodes. For the purpose of data analysis, it is more desirable to analyze raw data without involving a similarity matrix. Modularity analysis can be executed directly from an uncentered raw data matrix $X \in \mathbb{R}^{p \times n}$ ($p$ is the number of attributes, $n$ is the number of data points) by redefining the modularity matrix to be

 B = X^T X - \frac{dd^T}{2m}, (1.4)

but in practice $B$ need not be explicitly computed. In addition to using only raw data, this formulation allows the creation of modularity components that are directly analogous to principal components created from centered data. In what follows, let $A = X^T X$, where the rows of $X$ may be normalized when different units are involved.
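A sketch of building the modularity matrix of Equation 1.4 directly from uncentered raw data (the data matrix is hypothetical); the `B_times` helper illustrates one way to apply $B$ to a vector without ever forming the $n \times n$ matrix $X^T X$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 40                                  # attributes x data points (hypothetical sizes)
X = np.abs(rng.standard_normal((p, n)))       # uncentered, nonnegative raw data

A = X.T @ X                                   # similarity matrix implied by the data
d = A.sum(axis=1)                             # "degree" vector d = Ae
two_m = d.sum()                               # 2m = e^T A e
B = A - np.outer(d, d) / two_m                # modularity matrix, Eq. (1.4)

def B_times(v):
    """Apply B to a vector without forming the n-by-n matrix X^T X."""
    return X.T @ (X @ v) - d * (d @ v) / two_m
```

The matrix-free product costs $O(pn)$ per application instead of the $O(n^2)$ needed once $X^T X$ is formed, which matters when $n \gg p$.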

The paper is organized as follows. In Section 2 we give the definition of modularity components. In Section 3 properties of the modularity components are established. Section 4 contains some conclusions.

## 2 Definition of Modularity Components

In this section we give the definition of the modularity components. Before doing that we prove a couple of lemmas about the relation between the eigenvectors of a particular kind of similarity matrix that can be fed into the modularity algorithm and the singular vectors of the data matrix. The lemmas will help us to define the modularity components. Suppose the SVD of the uncentered data matrix is $X = U \Sigma V^T$ and that there are $k$ nonzero singular values. Then

 A = X^T X = V \Sigma^T \Sigma V^T (2.1)

has $k$ positive eigenvalues. From the interlacing theorem discussed in Appendix A, it is guaranteed that the largest $k-1$ eigenvalues of $B$ are positive. If the eigenvalues of $A$ are simple, then the eigenvectors of $B$ corresponding to the largest eigenvalues can be written as linear combinations of the eigenvectors of $A$. The proof of the following lemma can be found in Appendix A.

###### Lemma 2.1.

Suppose the largest $k-1$ eigenvalues of $B$ are $\beta_1 > \beta_2 > \cdots > \beta_{k-1}$ and the nonzero eigenvalues of $A$ are $\alpha_1 > \alpha_2 > \cdots > \alpha_k$. Further suppose that for $1 \le j \le k$ and $1 \le i \le k-1$ we have $\alpha_j \neq \beta_i$ and $v_j^T d \neq 0$. Then the eigenvector $b_i$ of $B$ can be written as

 b_i = \sum_{j=1}^{k} \gamma_{ij} v_j, (2.2)

where

 \gamma_{ij} = \frac{v_j^T d}{(\alpha_j - \beta_i)\, \|d\|_2}. (2.3)

The point of this lemma is that the vector $b_i$ is a linear combination of the $v_j$. The next lemma gives the linear expression of the vectors $b_i^T X^\dagger$ in terms of the $u_j^T$, where $X^\dagger$ is the Moore-Penrose inverse of $X$. There are practical cases where our assumptions in Lemma 2.1 hold true, and examples are given in Appendix B.

###### Lemma 2.2.

With the assumptions in Lemma 2.1, we have

 b_i^T X^\dagger = \sum_{j=1}^{k} \frac{\gamma_{ij}}{\sigma_j} u_j^T, (2.4)

where $\sigma_j$ is the $j$-th nonzero singular value of $X$.

###### Proof.
 b_i^T X^\dagger = \left( \sum_{j=1}^{k} \gamma_{ij} v_j^T \right) V \Sigma^\dagger U^T
 = \begin{pmatrix} \frac{\gamma_{i1}}{\sigma_1} & \frac{\gamma_{i2}}{\sigma_2} & \cdots & \frac{\gamma_{ik}}{\sigma_k} & 0 & \cdots & 0 \end{pmatrix}_{1 \times p} U^T
 = \sum_{j=1}^{k} \frac{\gamma_{ij}}{\sigma_j} u_j^T. ∎

Lemma 2.2 shows that if $b_i$ can be written as a linear combination of the $v_j$, then the vectors $b_i^T X^\dagger$ can be written as linear combinations of the $u_j^T$. Next we give the formal definition of the modularity components.

###### Definition 2.3.

Suppose $X$ is the data matrix and $b_i$ is the eigenvector corresponding to the $i$-th largest eigenvalue of $B$, where

 B = X^T X - \frac{dd^T}{2m}. (2.5)

Under the assumptions in Lemma 2.1, let

 m_i^T = b_i^T X^\dagger = \sum_{j=1}^{k} \frac{\gamma_{ij}}{\sigma_j} u_j^T. (2.6)

The $i$-th modularity component is defined to be

 c_i = \frac{m_i}{\|m_i\|_2}. (2.7)

By the two lemmas, it can be seen that as long as the assumptions in Lemma 2.1 are met, the modularity components are well defined, and the definition of $c_i$ is based on the linear combination expressing $b_i^T X^\dagger$ in terms of the $u_j^T$. In the next section some important properties of the modularity components are established.
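Definition 2.3 can be put into code as a short sketch; the small random data matrix is hypothetical, and only the $k-1$ leading eigenpairs of $B$ are used:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 4, 30
X = np.abs(rng.standard_normal((p, n)))       # uncentered data, rank k = p = 4

A = X.T @ X
d = A.sum(axis=1)
B = A - np.outer(d, d) / d.sum()              # Eq. (2.5), with 2m = e^T A e

# Eigenvectors b_i of B, sorted by decreasing eigenvalue beta_i.
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
b = vecs[:, order]

# m_i^T = b_i^T X^dagger  (Eq. (2.6)); columns of M are the vectors m_i.
M = np.linalg.pinv(X).T @ b[:, :p - 1]        # only the k-1 leading eigenpairs
C = M / np.linalg.norm(M, axis=0)             # modularity components c_i, Eq. (2.7)
```

Each column of `C` is a unit vector in attribute space; the orthogonality claimed in the next section can be verified on `C` directly.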

## 3 Properties of the Modularity Components

In this section some properties of modularity components are discussed. It will be seen that the properties of modularity components are similar to those of principal components. First we prove that the modularity components, as long as they are well defined, are perpendicular to each other. Then we prove that if we project the uncentered data onto the span of a modularity component, the projection is a scalar multiple of a rank-one matrix built from the corresponding modularity vector. Finally, we prove that the ‘importance’ of each modularity component is given by its corresponding eigenvalue of $B$: the first modularity component has the largest modularity, and the $i$-th modularity component has the largest modularity with the constraint that it is perpendicular to the preceding modularity components.

###### Theorem 3.1.

With the assumptions in Lemma 2.1, suppose $X$ is the unnormalized data matrix, $d = Ae$, and $2m = e^T A e$. Suppose $b_i$, $b_j$ are the eigenvectors of $B$ corresponding to eigenvalues $\beta_i$ and $\beta_j$, $\beta_i \neq \beta_j$, respectively. Then we have

 B = (BX^\dagger)(BX^\dagger)^T (3.1)

and $m_i^T m_j = 0$ for $i \neq j$.

###### Proof.

It is sufficient to prove that $m_i^T m_j = 0$ for $i \neq j$. From the definitions of $d$ and $m$ we have

 d = Ae = X^T X e,
 2m = d^T e = e^T X^T X e,

where $e$ is a column vector of all ones. Therefore,

 B = A - \frac{dd^T}{2m} = X^T X - \frac{(X^T X e)(X^T X e)^T}{e^T X^T X e}.

Since $X^T X X^\dagger = X^T$ always holds, we have

 BX^\dagger = X^T - \frac{X^T X e e^T X^T}{e^T X^T X e}.

Consequently,

 (BX^\dagger)(BX^\dagger)^T = X^T X - 2\, \frac{X^T X e e^T X^T X}{e^T X^T X e} + \frac{(e^T X^T X e)\, X^T X e e^T X^T X}{(e^T X^T X e)^2}.

Therefore $(BX^\dagger)(BX^\dagger)^T = B$. Since $m_i^T = b_i^T X^\dagger$, $m_j^T = b_j^T X^\dagger$, $Bb_i = \beta_i b_i$, and $Bb_j = \beta_j b_j$, we have

 m_i^T m_j = (b_i^T X^\dagger)(b_j^T X^\dagger)^T
 = \left( \frac{1}{\beta_i} b_i^T B X^\dagger \right) \left( \frac{1}{\beta_j} b_j^T B X^\dagger \right)^T
 = \frac{1}{\beta_i \beta_j} b_i^T (BX^\dagger)(BX^\dagger)^T b_j = \frac{1}{\beta_i \beta_j} b_i^T B b_j
 = \frac{1}{\beta_i} b_i^T b_j = 0,

so

 c_i^T c_j = \frac{m_i^T m_j}{\|m_i\|_2 \|m_j\|_2}

implies $c_i^T c_j = 0$ for $i \neq j$. ∎

From Theorem 3.1, it can be seen that the modularity components are orthogonal to each other. Next we prove that the projection of the uncentered data onto the span of $c_i$ is a scalar multiple of $c_i b_i^T$.
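Both identities in Theorem 3.1 are easy to check numerically; the random matrix below is a hypothetical stand-in for real data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.abs(rng.standard_normal((4, 25)))      # hypothetical uncentered data

A = X.T @ X
d = A.sum(axis=1)
B = A - np.outer(d, d) / d.sum()
Xpinv = np.linalg.pinv(X)

# Identity (3.1): B = (B X^dagger)(B X^dagger)^T.
BXp = B @ Xpinv

# m_i^T m_j = 0 for eigenvectors of B with distinct (positive) eigenvalues.
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
b1, b2 = vecs[:, order[0]], vecs[:, order[1]]
m1, m2 = Xpinv.T @ b1, Xpinv.T @ b2
```

The check relies on $X$ having full row rank, which holds almost surely for the random matrix above.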

###### Theorem 3.2.

With the assumptions in Lemma 2.1, let $P_{c_i} = c_i c_i^T$ be the projector onto the span of $c_i$. Then we have

 P_{c_i} X = \frac{1}{\|m_i\|_2}\, c_i b_i^T. (3.2)
###### Proof.
 P_{c_i} X = c_i c_i^T X = \frac{1}{\|m_i\|_2}\, c_i m_i^T U \Sigma V^T
 = \frac{1}{\|m_i\|_2}\, c_i \left( \sum_{j=1}^{k} \frac{\gamma_{ij}}{\sigma_j} u_j^T \right) U \Sigma V^T
 = \frac{1}{\|m_i\|_2}\, c_i \begin{pmatrix} \frac{\gamma_{i1}}{\sigma_1} & \frac{\gamma_{i2}}{\sigma_2} & \cdots & \frac{\gamma_{ik}}{\sigma_k} & 0 & \cdots & 0 \end{pmatrix}_{1 \times p} \Sigma V^T
 = \frac{1}{\|m_i\|_2}\, c_i \begin{pmatrix} \gamma_{i1} & \gamma_{i2} & \cdots & \gamma_{ik} & 0 & \cdots & 0 \end{pmatrix}_{1 \times n} V^T
 = \frac{1}{\|m_i\|_2}\, c_i \sum_{j=1}^{k} \gamma_{ij} v_j^T = \frac{1}{\|m_i\|_2}\, c_i b_i^T. ∎

This property is similar to that of principal components in the sense that if we project the data onto the span of a component, we get a scalar multiple of a rank-one matrix, and the vector $b_i$ can reveal the clusters in the data based on the signs of its entries. Finally, we can prove that if we consider $X$ in the space perpendicular to $c_1, c_2, \ldots, c_{i-1}$, then the projection onto the span of $c_i$ gives the largest modularity, and the projection is just $P_{c_i} X$.
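A numerical check of Theorem 3.2 on hypothetical data; `lhs` is the projection $P_{c_1} X$ and `rhs` is $\frac{1}{\|m_1\|_2} c_1 b_1^T$:

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.abs(rng.standard_normal((4, 20)))      # hypothetical uncentered data

A = X.T @ X
d = A.sum(axis=1)
B = A - np.outer(d, d) / d.sum()

vals, vecs = np.linalg.eigh(B)
b1 = vecs[:, np.argmax(vals)]                 # leading eigenvector of B
m1 = np.linalg.pinv(X).T @ b1                 # m_1 = (X^dagger)^T b_1
c1 = m1 / np.linalg.norm(m1)                  # first modularity component

P = np.outer(c1, c1)                          # projector onto span{c_1}
lhs = P @ X                                   # P_{c_1} X
rhs = np.outer(c1, b1) / np.linalg.norm(m1)   # (1/||m_1||_2) c_1 b_1^T
```

Note that both sides use the same eigenvector `b1`, so the arbitrary sign returned by the eigensolver cancels.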

###### Theorem 3.3.

With the assumptions in Lemma 2.1,

 \beta_i = \frac{1}{\|m_i\|_2^2}, \qquad 1 \le i \le k-1. (3.3)

Moreover, let $X_1 = X$, and for $2 \le i \le k-1$ let

 X_i = X - \sum_{j=1}^{i-1} c_j c_j^T X, (3.4)

and let $B_i$ be the modularity matrix built from $X_i$ as in Equation 1.4. Under these conditions, $\beta_i$ is the largest eigenvalue of $B_i$, and $b_i$ is the corresponding eigenvector of $B_i$.

###### Proof.

For $i = 1$, since it is proved by Newman that $b_1$ is the vector that maximizes $Q$ in Equation 1.2, we have

 Q_{\max} = b_1^T B b_1 = \beta_1 b_1^T b_1 = \beta_1.

By Theorem 3.1,

 \max_{\|s\|_2 = 1} s^T B s = \max_{\|s\|_2 = 1} s^T (BX^\dagger)(BX^\dagger)^T s
 = \max_{\|s\|_2 = 1} \|(BX^\dagger)^T s\|_2^2 = \max_{\|s\|_2 = 1} \|(X^\dagger)^T B s\|_2^2
 = \|(X^\dagger)^T B b_1\|_2^2 = \|(X^\dagger)^T \beta_1 b_1\|_2^2 = \|\beta_1 m_1\|_2^2 = \beta_1.

Therefore $\beta_1^2 \|m_1\|_2^2 = \beta_1$, and hence $\beta_1 = 1/\|m_1\|_2^2$. Then $X_2$ is defined by

 X_2 = X - c_1 c_1^T X = (I - c_1 c_1^T) X.

Since $I - c_1 c_1^T$ is idempotent, we have

 X_2^T X_2 = X^T (I - c_1 c_1^T) X = X^T X - X^T c_1 c_1^T X.

By Theorem 3.2, we know that $c_1 c_1^T X = \frac{1}{\|m_1\|_2} c_1 b_1^T$, so $X^T c_1 c_1^T X = \frac{1}{\|m_1\|_2^2} b_1 b_1^T = \beta_1 b_1 b_1^T$, and then

 X_2^T X_2 = X^T X - \beta_1 b_1 b_1^T.

Plug $X_2^T X_2$ into

 B_2 = X_2^T X_2 - \frac{d_2 d_2^T}{2m_2} = X_2^T X_2 - \frac{X_2^T X_2 e e^T X_2^T X_2}{e^T X_2^T X_2 e},

and notice that $b_1^T e = 0$ (because $b_1$ and $e$ are eigenvectors corresponding to different eigenvalues of $B$) to produce

 B_2 = B - \beta_1 b_1 b_1^T.

So by Brauer’s theorem (Exercise 7.1.17 in Meyer), the eigenpairs of $B_2$ are those of $B$ with $(\beta_1, b_1)$ replaced by an eigenpair with zero eigenvalue. So $\beta_2$ is the largest eigenvalue of $B_2$ and $b_2$ is the eigenvector of $B_2$ corresponding to $\beta_2$.
For the cases when $i \ge 3$, let

 Q_{i-1} = s^T B_{i-1} s.

Notice that $b_{i-1}$ is the vector that maximizes $Q_{i-1}$. Then by similar steps we can prove that $\beta_{i-1} = 1/\|m_{i-1}\|_2^2$. Then $X_i$ can be defined by

 X_i = X - \sum_{j=1}^{i-1} c_j c_j^T X = \left( I - \sum_{j=1}^{i-1} c_j c_j^T \right) X.

It is easy to see that $I - \sum_{j=1}^{i-1} c_j c_j^T$ is idempotent. Then we have

 X_i^T X_i = X^T \left( I - \sum_{j=1}^{i-1} c_j c_j^T \right) X = X^T X - \sum_{j=1}^{i-1} \beta_j b_j b_j^T,

where the second equality follows from Theorem 3.1 and Theorem 3.2 as in the case $i = 2$.

Plug $X_i^T X_i$ into

 B_i = X_i^T X_i - \frac{d_i d_i^T}{2m_i} = X_i^T X_i - \frac{X_i^T X_i e e^T X_i^T X_i}{e^T X_i^T X_i e},

and notice that $b_j^T e = 0$ for $1 \le j \le i-1$ (because $b_j$ and $e$ are eigenvectors corresponding to different eigenvalues of $B$) to produce

 B_i = B - \sum_{j=1}^{i-1} \beta_j b_j b_j^T = B_{i-1} - \beta_{i-1} b_{i-1} b_{i-1}^T.

So by Brauer’s theorem again, the eigenpairs of $B_i$ are those of $B_{i-1}$ with $(\beta_{i-1}, b_{i-1})$ replaced by an eigenpair with zero eigenvalue. So $\beta_i$ is the largest eigenvalue of $B_i$ and $b_i$ is the eigenvector of $B_i$ corresponding to $\beta_i$. ∎

Theorem 3.3 says that when we build the new data matrix $X_i$ from $X$, the corresponding similarity matrix and modularity matrix change. Also, $B_i$ is different from $B$, but the eigenpairs of $B$ are retained by $B_i$ except for the first $i-1$ pairs. The conclusion is that the first modularity component has the largest modularity of the data $X$. Each succeeding modularity component has the largest modularity with the constraint that it is orthogonal to all previous modularity components.
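The deflation in Theorem 3.3 can be checked numerically on hypothetical data: rebuilding the modularity matrix from the deflated data $X_2$ should reproduce $B - \beta_1 b_1 b_1^T$, and the value $\beta_1$ should equal $1/\|m_1\|_2^2$:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.abs(rng.standard_normal((4, 20)))      # hypothetical uncentered data

A = X.T @ X
d = A.sum(axis=1)
B = A - np.outer(d, d) / d.sum()

vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
beta1, b1 = vals[order[0]], vecs[:, order[0]]
m1 = np.linalg.pinv(X).T @ b1
c1 = m1 / np.linalg.norm(m1)

# Deflate the data and rebuild the modularity matrix from scratch.
X2 = X - np.outer(c1, c1) @ X                 # Eq. (3.4) with i = 2
A2 = X2.T @ X2
d2 = A2.sum(axis=1)
B2 = A2 - np.outer(d2, d2) / d2.sum()
```

Because $b_1^T e = 0$, the degree vector and $2m$ are unchanged by the deflation, which is exactly why $B_2 = B - \beta_1 b_1 b_1^T$ holds.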

## 4 Conclusion

In this paper, the concept of modularity components is defined, and some important properties of modularity components are proven. The concept of modularity components can be used to explain why using more than one eigenvector of the modularity matrix for data clustering is reasonable. The combination of modularity clustering and modularity components gives a modularity component analysis that has properties similar to those of the well-known principal component analysis.

## References

•  V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, Fast unfolding of communities in large networks, Journal of statistical mechanics: theory and experiment, 2008 (2008), p. P10008.
•  J. R. Bunch, C. P. Nielsen, and D. C. Sorensen, Rank-one modification of the symmetric eigenproblem, Numerische Mathematik, 31 (1978), pp. 31–48.
•  R. Chitta, R. Jin, and A. K. Jain, Efficient kernel clustering using random fourier features, in Data Mining (ICDM), 2012 IEEE 12th International Conference on, IEEE, 2012, pp. 161–170.
•  B. DasGupta and D. Desai, On the complexity of Newman's community finding approach for biological and social networks, Journal of Computer and System Sciences, 79 (2013), pp. 50–67.
•  M. A. Fortuna, D. B. Stouffer, J. M. Olesen, P. Jordano, D. Mouillot, B. R. Krasnov, R. Poulin, and J. Bascompte, Nestedness versus modularity in ecological networks: two sides of the same coin?, Journal of Animal Ecology, 79 (2010), pp. 811–817.
•  B. H. Good, Y.-A. de Montjoye, and A. Clauset, Performance of modularity maximization in practical contexts, Physical Review E, 81 (2010), p. 046106.
•  T. Hertz, A. Bar-Hillel, and D. Weinshall, Boosting margin based distance functions for clustering, in Proceedings of the twenty-first international conference on Machine learning, ACM, 2004, p. 50.

•  I. Jolliffe, Principal component analysis, Wiley Online Library, 2002.
•  A. Lancichinetti and S. Fortunato, Limits of modularity maximization in community detection, Physical review E, 84 (2011), p. 066122.
•  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE, 86 (1998), pp. 2278–2324.
•  R. A. Mercovich, A. Harkin, and D. Messinger, Automatic clustering of multispectral imagery by maximization of the graph modularity, in SPIE Defense, Security, and Sensing, International Society for Optics and Photonics, 2011, pp. 80480Z–80480Z.
•  D. Meunier, R. Lambiotte, A. Fornito, K. D. Ersche, and E. T. Bullmore, Hierarchical modularity in human brain functional networks, Hierarchy and dynamics in neural networks, 1 (2010), p. 2.

•  C. D. Meyer, Matrix analysis and applied linear algebra, Siam, 2000.
•  M. E. Newman, Modularity and community structure in networks, Proceedings of the National Academy of Sciences, 103 (2006), pp. 8577–8582.
•  M. E. Newman and M. Girvan, Finding and evaluating community structure in networks, Physical review E, 69 (2004), p. 026113.
•  S. L. Race, C. Meyer, and K. Valakuzhy, Determining the number of clusters via iterative consensus clustering, in Proceedings of the SIAM Conference on Data Mining (SDM), SIAM, 2013, pp. 94–102.
•  R. Rotta and A. Noack, Multilevel local search algorithms for modularity clustering, Journal of Experimental Algorithmics (JEA), 16 (2011), pp. 2–3.
•  U. Von Luxburg, A tutorial on spectral clustering, Statistics and computing, 17 (2007), pp. 395–416.
•  J. H. Wilkinson, The algebraic eigenvalue problem, vol. 87, Clarendon Press Oxford, 1965.
•  R. Zhang and A. I. Rudnicky, A large scale clustering scheme for kernel k-means, in Pattern Recognition, 2002. Proceedings. 16th International Conference on, vol. 4, IEEE, 2002, pp. 289–292.

## Appendix A Proof of Lemma 2.1

The lemma is based on a theorem from Bunch, Nielsen, and Sorensen about the interlacing property of a diagonal matrix and its rank-one modification, and about how to calculate the eigenvectors of a diagonal-plus-rank-one (DPR1) matrix. The theorem can also be found in Wilkinson.

###### Theorem A.1.

Let $C = D + \rho z z^T$, where $D$ is diagonal and $\|z\|_2 = 1$. Let $d_1 \ge d_2 \ge \cdots \ge d_n$ be the eigenvalues of $D$, and let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ be the eigenvalues of $C$. Then $d_1 \ge \lambda_1 \ge d_2 \ge \lambda_2 \ge \cdots \ge d_n \ge \lambda_n$ if $\rho < 0$. If the $d_i$ are distinct and all the elements of $z$ are nonzero, then the eigenvalues of $C$ strictly separate those of $D$.

###### Corollary A.2.

With the notations in Theorem A.1, the eigenvector of $C$ corresponding to the eigenvalue $\lambda_i$ is given by $(D - \lambda_i I)^{-1} z$.

Theorem A.1 tells us the eigenvalues of a DPR1 matrix are interlaced with the eigenvalues of the original diagonal matrix. Next we will write the eigenvectors corresponding to the positive eigenvalues of a modularity matrix as linear combinations of the eigenvectors of the corresponding adjacency matrix.
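Corollary A.2 can be verified directly on a small hypothetical DPR1 matrix with $\rho < 0$, the sign that arises for the modularity matrix:

```python
import numpy as np

D = np.diag(np.array([4.0, 3.0, 2.0, 1.0]))   # distinct diagonal entries
z = np.array([0.5, -0.5, 0.5, 0.5])           # unit vector, all entries nonzero
rho = -0.7                                    # rho < 0, as for the modularity matrix
Cmat = D + rho * np.outer(z, z)               # the DPR1 matrix C = D + rho z z^T

lams = np.linalg.eigvalsh(Cmat)               # ascending eigenvalues of C

# Corollary A.2: the eigenvector for eigenvalue lam is (D - lam I)^{-1} z.
P = []
for lam in lams:
    p = np.linalg.solve(D - lam * np.eye(4), z)
    P.append(p / np.linalg.norm(p))
```

Strict interlacing (distinct $d_i$, nonzero $z_i$) guarantees $D - \lambda I$ is nonsingular at every eigenvalue $\lambda$ of $C$, so the formula is well defined.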

With the notations in Section 1, since $A = X^T X$, if the SVD of $X$ is $X = U \Sigma V^T$, then

 A = V \Sigma^T \Sigma V^T = V \Sigma_A V^T,

where $\Sigma_A = \Sigma^T \Sigma$ is an $n \times n$ diagonal matrix. Suppose the rows and columns of $\Sigma_A$ are ordered such that $\alpha_1 > \alpha_2 > \cdots > \alpha_k > \alpha_{k+1} = \cdots = \alpha_n = 0$, where $\alpha_j$ is the $j$-th largest eigenvalue of $A$. Let $y = V^T d / \|V^T d\|_2$. Similarly, since $B$ is symmetric, it is orthogonally similar to a diagonal matrix. Suppose the eigenvalues of $B$ are $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_n$, with the largest $k-1$ of them being $\beta_1 > \beta_2 > \cdots > \beta_{k-1}$.

###### Proof.

Since $B = A - \frac{dd^T}{2m}$, we have

 B = V \left( \Sigma_A + \rho y y^T \right) V^T,

where $\rho = -\frac{\|V^T d\|_2^2}{2m}$ and $y = \frac{V^T d}{\|V^T d\|_2}$. Since $\Sigma_A + \rho y y^T$ is also symmetric, it is orthogonally similar to a diagonal matrix. So we have

 B = V U' \Sigma_B U'^T V^T,

where $U'$ is orthogonal and $\Sigma_B$ is diagonal. Since $\Sigma_A + \rho y y^T$ is a DPR1 matrix and $\rho < 0$, the interlacing theorem applies to the eigenvalues of $\Sigma_A$ and $\Sigma_A + \rho y y^T$. More specifically, we have

 \alpha_k < \beta_{k-1} < \alpha_{k-1} < \beta_{k-2} < \cdots < \beta_2 < \alpha_2 < \beta_1 < \alpha_1.

The strict inequalities hold because of our assumptions. Let $B_1 = \Sigma_A + \rho y y^T$. Since $B = V B_1 V^T$, we have $BV = V B_1$. Suppose $(\lambda, u)$ is an eigenpair of $B_1$; then

 BVu = V B_1 u = \lambda V u

implies that $(\lambda, Vu)$ is an eigenpair of $B$ if and only if $(\lambda, u)$ is an eigenpair of $B_1$. By Corollary A.2, the eigenvector $p_i$ of $B_1$ corresponding to $\beta_i$ is given by

 p_i = (\Sigma_A - \beta_i I)^{-1} y = \frac{(\Sigma_A - \beta_i I)^{-1} V^T d}{\|V^T d\|_2},

and hence the eigenvector $b_i$ of $B$ corresponding to $\beta_i$ is given by

 b_i = V p_i = \frac{V (\Sigma_A - \beta_i I)^{-1} V^T d}{\|V^T d\|_2}
 = \frac{1}{\|d\|_2} \sum_{j=1}^{n} \frac{v_j^T d}{\alpha_j - \beta_i}\, v_j.

Since $d = Ae$, where $e$ is a column vector of all ones, we have

 v_j^T d = v_j^T A e = \alpha_j v_j^T e.

Since $\alpha_j = 0$ for $j > k$, we have $v_j^T d = 0$ for $j > k$. Therefore, the eigenvector $b_i$ of $B$ corresponding to $\beta_i$ is given by

 b_i = \sum_{j=1}^{k} \gamma_{ij} v_j,

where

 \gamma_{ij} = \frac{v_j^T d}{(\alpha_j - \beta_i)\, \|d\|_2}. ∎

## Appendix B Examples satisfying the assumptions in Lemma 2.1

We used two subsets of the popular MNIST data set from the literature; the data set is described below.

The PenDigit data sets are subsets of the widely used MNIST database. The original data contains a training set of 60,000 handwritten digits from 44 writers. The first subset used in the experiments contains some of the digits 1, 5 and 7. (The data can be downloaded at http://www.kaggle.com/c/digit-recognizer/data.) The second subset used contains some of the digits 1, 7 and 9. Each piece of data is a row vector converted from a grey-scale image. Each image is 28 pixels in height and 28 pixels in width, so there are 784 pixels in total. Each row vector contains the label of the digit and the lightness of each pixel. The lightness of a pixel is represented by a number from 0 to 255 inclusive, and smaller numbers represent lighter pixels.

The matrix $A = X^T X$ of the 1-5-7 subset has 644 eigenvalues that are positive, and the largest 643 eigenvalues of the modularity matrix $B$ are distinct from one another and from the eigenvalues of $A$. The matrix $A$ of the 1-7-9 subset has 623 eigenvalues that are positive, and the largest 622 eigenvalues of $B$ are distinct from one another and from the eigenvalues of $A$. Thus we conclude that these examples satisfy the assumptions in Lemma 2.1.