Modularity Component Analysis versus Principal Component Analysis

Hansi Jiang, et al., NC State University. October 19, 2015.

Abstract

In this paper the exact linear relation between the leading eigenvectors of the modularity matrix and the singular vectors of an uncentered data matrix is developed. Based on this analysis the concept of a modularity component is defined, and its properties are developed. It is shown that modularity component analysis can be used to cluster data in much the same way that traditional principal component analysis is used, except that modularity component analysis does not require data centering.


1 Introduction

The purpose of this paper is to present a development of modularity components that are analogous to principal components [8]. It will be shown that modularity components have characteristics similar to those of principal components, in the sense that modularity components support data analysis in much the same manner as principal components do. In particular, just as in the case of principal components, modularity components are shown to be mutually orthogonal, and raw data can be projected onto the directions of a number of modularity components to reveal patterns and clusters in the data. However, a drawback of principal component analysis (PCA) is that it generally requires centering or standardizing the data before the principal components are determined. Utilizing modularity components, on the other hand, does not require the data to be centered in order to accurately extract important information. Among other things, this means that sparsity in the original data is preserved, whereas centering the data naturally destroys inherent sparsity.
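To illustrate the sparsity point, the following numpy sketch (ours, not part of the paper's development) compares the number of nonzero entries in a mostly-zero data matrix before and after column centering.

```python
import numpy as np

rng = np.random.default_rng(0)

# A mostly-zero "raw data" matrix: 100 samples, 50 attributes, roughly 5% nonzero.
mask = rng.random((100, 50)) < 0.05
X = rng.random((100, 50)) * mask

X_centered = X - X.mean(axis=0)   # column-wise centering, as PCA conventionally requires

print("nonzeros before centering:", np.count_nonzero(X))           # roughly 250
print("nonzeros after centering: ", np.count_nonzero(X_centered))  # essentially all 5000
# Centering fills in nearly every entry, so any sparsity in the raw data is lost.
```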

Moreover, we will complete the comparison of modularity components with principal components by showing that the component that maximizes the modularity function of the uncentered data, as defined in [14], can replace the principal component that maximizes the variance in the centered data. Finally, just as each succeeding principal component has maximal variance with the constraint that it is orthogonal to all previous principal components, each succeeding modularity component has maximal modularity with the constraint that it is orthogonal to all prior modularity components.

Our modularity components are derived from the concept of modularity introduced by Newman and Girvan in [15], and further explained by Newman in [14]. The modularity partitioning method starts with an adjacency matrix or similarity matrix and aims to partition a graph by maximizing the modularity. Assuming the graph contains n nodes, the modularity is defined by

Q = (1/(4m)) s^T B s,    (1.1)

where m is the number of edges in the graph, B is the modularity matrix defined below, and s is a vector that maximizes Q. Since the number of edges in a given graph is constant, the multiplier 1/(4m) is often dropped for simplicity, and the modularity becomes

Q = s^T B s.    (1.2)

The modularity matrix B is defined by

B = A - d d^T / (2m),    (1.3)

where A is an adjacency matrix or similarity matrix, d is the vector containing the degrees of the nodes, and 2m = e^T d with e the column vector of all ones. It is proven in [14] that Q is maximized by the eigenvector corresponding to the largest eigenvalue of B. Like the spectral clustering method [18], the modularity clustering method also uses the signs of the entries in the dominant eigenvector to partition graphs.
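As a concrete illustration of this two-way split, the following numpy sketch (the helper name and the toy graph are ours, not from the paper) forms B as in Equation 1.3 and labels the nodes by the signs of the dominant eigenvector.

```python
import numpy as np

def modularity_split(A):
    """Two-way partition of a graph from the signs of the dominant
    eigenvector of the modularity matrix B = A - d d^T / (2m)."""
    d = A.sum(axis=1)                 # degree vector
    two_m = d.sum()                   # 2m = e^T d
    B = A - np.outer(d, d) / two_m    # modularity matrix, Equation 1.3
    vals, vecs = np.linalg.eigh(B)    # eigenvalues in ascending order
    s = vecs[:, -1]                   # eigenvector of the largest eigenvalue
    return np.where(s >= 0, 1, -1)    # group labels from the signs of its entries

# Two triangles joined by a single edge: nodes 0-2 versus nodes 3-5.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(modularity_split(A))            # e.g. [ 1  1  1 -1 -1 -1] (signs may flip)
```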

The modularity partitioning algorithm has been widely applied and discussed. For instance, it has been applied to reveal human brain functional networks [12] and ecological networks [5], and used in image processing [11]. Blondel et al. [1] proposed a heuristic that can reveal the community structure for large networks. Rotta and Noack [17] compared several heuristics in maximizing modularity. DasGupta and Desai [4] studied the complexity of modularity clustering. The limitations of the modularity maximization technique are discussed in [6] and [9].

By the modularity algorithm [14], a graph is partitioned into two parts, and a hierarchy can be built by iteratively calculating the modularity matrices of the parts and their dominant eigenvectors. Repetitively partitioning a graph into two subsets may be inefficient and does not utilize information in subdominant eigenvectors. And while there is a connection between graph partitioning and data analysis, they are not strictly equivalent, because extracting information from raw data by means of graph partitioning necessarily requires the knowledge or creation of a similarity or adjacency matrix, which in turn can only group nodes. For the purpose of data analysis, it is more desirable to analyze raw data without involving a similarity matrix. Modularity analysis can be executed directly from an uncentered raw data matrix X of size p × n (p the number of attributes, n the number of data points) by redefining the modularity matrix to be

B = X^T X - (X^T X e e^T X^T X) / (e^T X^T X e),    (1.4)

but in practice B need not be explicitly computed. In addition to using only raw data, this formulation allows the creation of modularity components that are directly analogous to principal components created from centered data. In what follows, let S = X^T X, where the rows of X may be normalized when different units are involved.
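A minimal numpy sketch of Equation 1.4 (the function names are illustrative, not from the paper): with S = X^T X and d = S e, the modularity matrix is B = S - d d^T / (e^T d), and products with B can be formed from X alone without ever building B.

```python
import numpy as np

def raw_modularity_matrix(X):
    """Modularity matrix built from uncentered raw data (Equation 1.4):
    B = S - d d^T / (e^T d), with S = X^T X and d = S e."""
    S = X.T @ X                      # similarity matrix of the data points
    d = S.sum(axis=1)                # d = S e, the 'degree' of each data point
    return S - np.outer(d, d) / d.sum()

def apply_raw_modularity(X, y):
    """Compute B y without forming B explicitly:
    B y = X^T (X y) - d (d^T y) / (e^T d)."""
    d = X.T @ (X @ np.ones(X.shape[1]))
    return X.T @ (X @ y) - d * (d @ y) / d.sum()

rng = np.random.default_rng(1)
X = rng.random((8, 20))              # 8 attributes, 20 data points
y = rng.standard_normal(20)
print(np.allclose(raw_modularity_matrix(X) @ y, apply_raw_modularity(X, y)))  # True
```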

The paper is organized as follows. In Section 2 we give the definition of modularity components. In Section 3 properties of the modularity components are established. Section 4 contains some conclusions.

2 Definition of Modularity Components

In this section we give the definition of the modularity components. Before doing that, we prove a couple of lemmas about the relation between the eigenvectors of the modularity matrix built from a particular kind of similarity matrix that can be fed into the modularity algorithm, namely S = X^T X, and the singular vectors of the data matrix. The lemmas will help us define the modularity components. Suppose the SVD of the uncentered data matrix is X = U Σ V^T and that there are k nonzero singular values. Then

S = X^T X = V Σ^T Σ V^T    (2.1)

has k positive eigenvalues. From the interlacing theorem mentioned in [2] and [19], it is guaranteed that the largest k − 1 eigenvalues of B are positive. If the eigenvalues of S are simple, then the eigenvectors of B corresponding to the largest k − 1 eigenvalues can be written as linear combinations of the eigenvectors of S. The proof of the following lemma can be found in Appendix A.
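A quick numerical check of this interlacing claim (a sketch under the same setup; the variable names are ours): the eigenvalues of B sit between consecutive eigenvalues of S, so when S has k positive eigenvalues the k − 1 largest eigenvalues of B are positive as well.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((5, 12))               # 5 attributes, 12 data points, rank k = 5

S = X.T @ X
d = S.sum(axis=1)
B = S - np.outer(d, d) / d.sum()      # modularity matrix of the uncentered data

eig_S = np.sort(np.linalg.eigvalsh(S))[::-1]   # descending
eig_B = np.sort(np.linalg.eigvalsh(B))[::-1]

k = np.linalg.matrix_rank(X)
print("positive eigenvalues of S:", int(np.sum(eig_S > 1e-10)))            # k
print("positive among the k-1 largest eigenvalues of B:",
      int(np.sum(eig_B[:k - 1] > 1e-10)))                                  # k - 1
# Interlacing for the rank-one downdate: eig_S[i+1] <= eig_B[i] <= eig_S[i].
print(bool(np.all((eig_B[:k - 1] <= eig_S[:k - 1] + 1e-10) &
                  (eig_B[:k - 1] >= eig_S[1:k] - 1e-10))))                 # True
```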

Lemma 2.1.

Suppose the largest k − 1 eigenvalues of B are β_1 > β_2 > ⋯ > β_{k−1}, and the nonzero eigenvalues of S are σ_1^2 > σ_2^2 > ⋯ > σ_k^2, where σ_j is the j-th nonzero singular value of X and v_j is the corresponding right singular vector. Further suppose that for i = 1, …, k − 1 we have β_i ≠ σ_j^2 for every j and β_i ≠ 0. Then the eigenvector w_i of B corresponding to β_i can be written as

w_i = α_{i1} v_1 + α_{i2} v_2 + ⋯ + α_{ik} v_k,    (2.2)

where

α_{ij} = σ_j^2 (v_j^T e) / (σ_j^2 − β_i),  j = 1, …, k.    (2.3)

The point of this lemma is that the vector w_i is a linear combination of the v_j. The next lemma gives the linear expression of the vectors (X^†)^T w_i in terms of the left singular vectors u_j, where X^† is the Moore-Penrose inverse of X. There are practical cases where our assumptions in Lemma 2.1 hold true, and examples are given in Appendix B.
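The span statement of Lemma 2.1 is easy to check numerically; the following sketch (variable names ours) verifies that each eigenvector of B attached to a positive eigenvalue lies in the span of the right singular vectors of X, that is, in the row space of X.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((5, 12))                        # 5 attributes, 12 data points, rank k = 5

S = X.T @ X
d = S.sum(axis=1)
B = S - np.outer(d, d) / d.sum()

U, sing, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt.T                                       # right singular vectors v_1, ..., v_k
k = len(sing)

vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]                 # sort eigenpairs, largest eigenvalue first
vals, vecs = vals[order], vecs[:, order]

for i in range(k - 1):                         # the k-1 eigenvectors with positive eigenvalue
    w = vecs[:, i]
    residual = w - V @ (V.T @ w)               # part of w outside span{v_1, ..., v_k}
    print(i, np.linalg.norm(residual) < 1e-10) # True: w is a combination of the v_j
```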

Lemma 2.2.

With the assumptions in Lemma 2.1, we have

(X^†)^T w_i = (α_{i1}/σ_1) u_1 + (α_{i2}/σ_2) u_2 + ⋯ + (α_{ik}/σ_k) u_k,    (2.4)

where σ_j is the j-th nonzero singular value of X and u_j is the corresponding left singular vector.

Proof. Apply (X^†)^T to both sides of Equation 2.2 and use (X^†)^T v_j = u_j / σ_j for j = 1, …, k. ∎

Lemma 2.2 shows that if w_i can be written as a linear combination of the v_j, then the vector (X^†)^T w_i can be written as a linear combination of the u_j. Next we give the formal definition of the modularity components.

Definition 2.3.

Suppose X is the data matrix and w_i is the eigenvector corresponding to the i-th largest eigenvalue β_i of B, where

B = X^T X − (X^T X e e^T X^T X) / (e^T X^T X e).    (2.5)

Under the assumptions in Lemma 2.1, let

y_i = (X^†)^T w_i = (α_{i1}/σ_1) u_1 + (α_{i2}/σ_2) u_2 + ⋯ + (α_{ik}/σ_k) u_k.    (2.6)

The i-th modularity component is defined to be

m_i = y_i / ||y_i||,    (2.7)

where ||·|| denotes the Euclidean norm.

By the two lemmas, it can be seen that as long as the assumptions in Lemma 2.1 are met, the modularity components are well defined, and the definition of m_i rests on the expansion of (X^†)^T w_i as a linear combination of the u_j. In the next section some important properties of the modularity components are established.

3 Properties of the Modularity Components

In this section some properties of modularity components are discussed. It will be seen that the properties of modularity components are similar to those of principal components. First we prove that the modularity components, as long as they are well defined, are perpendicular to each other. Then we prove that if we project the uncentered data onto the span of a modularity component, the projection is a scalar multiple of the corresponding modularity vector. Finally, we prove that the 'importance' of each modularity component is given by its corresponding eigenvalue of B: the first modularity component has the largest modularity, and the i-th modularity component has the largest modularity subject to the constraint that it is perpendicular to the preceding modularity components.

Theorem 3.1.

With the assumptions in Lemma 2.1, suppose X is the (possibly unnormalized) data matrix, S = X^T X, and B is as in Equation 2.5. Suppose w_i, w_j are the eigenvectors of B corresponding to eigenvalues β_i and β_j, with β_i ≠ β_j and β_i, β_j ≠ 0, respectively. Then we have

m_i^T m_j = 0    (3.1)

for i ≠ j.

Proof.

It is sufficient to prove that w_i^T S^† w_j = 0 for i ≠ j, because m_i^T m_j is a positive multiple of w_i^T X^† (X^†)^T w_j = w_i^T S^† w_j. From B w_j = β_j w_j we have

S w_j − d (d^T w_j)/(e^T d) = β_j w_j,

where e is a column vector with all ones. Therefore, multiplying by S^† and using S^† S w_j = w_j (which holds because w_j lies in the range of S by Lemma 2.1),

β_j S^† w_j = w_j − (S^† d)(d^T w_j)/(e^T d).

Since S S^† S = S is always true, we have

d^T S^† d = e^T S S^† S e = e^T S e = e^T d.

Consequently, taking the inner product of the displayed identity (with j replaced by i) with d gives

β_i d^T S^† w_i = d^T w_i − (d^T S^† d)(d^T w_i)/(e^T d) = d^T w_i − d^T w_i = 0.

Therefore w_i^T S^† d = 0. Since m_i = y_i/||y_i||, y_i = (X^†)^T w_i, X^† (X^†)^T = S^†, and w_i^T w_j = 0, we have

β_j w_i^T S^† w_j = w_i^T w_j − (w_i^T S^† d)(d^T w_j)/(e^T d) = 0,

so w_i^T S^† w_j = 0, which implies m_i^T m_j = 0 for i ≠ j. ∎

From Theorem 3.1, it can be seen that the modularity components are orthogonal to each other. Next we prove that the projection of the uncentered data onto the span of m_i is a scalar multiple of w_i.

Theorem 3.2.

With the assumptions in Lemma 2.1, let P_i = m_i m_i^T be the orthogonal projector onto the span of m_i. Then we have

P_i X = (1/||y_i||) m_i w_i^T.    (3.2)
Proof. Since w_i lies in the row space of X by Lemma 2.1, w_i^T X^† X = w_i^T, so m_i^T X = y_i^T X/||y_i|| = w_i^T X^† X/||y_i|| = w_i^T/||y_i||, and therefore P_i X = m_i m_i^T X = (1/||y_i||) m_i w_i^T. ∎

This property is similar to that of principal components in the sense that if we project the data onto the span of a component, we get a scalar multiple of a single vector, and that vector reveals the clusters in the data through the signs of its entries. Finally, we prove that if we look at the data in the space perpendicular to m_1, m_2, …, m_{i−1}, then the direction m_i yields the largest modularity, and the projection of the data onto its span is just a scalar multiple of w_i.

Theorem 3.3.

With the assumptions in Lemma 2.1,

β_1 = max_{s^T s = 1} s^T B s.    (3.3)

Moreover, let X_1 = X, and for i ≥ 2 let

X_i = (I − P_1 − P_2 − ⋯ − P_{i−1}) X,    (3.4)

and let S_i = X_i^T X_i and B_i be defined correspondingly. Under these conditions, β_i is the largest eigenvalue of B_i, and w_i is the corresponding eigenvector of B_i.

Proof.

For i = 2, since it is proved in [14] that w_1 is the vector that maximizes Q = s^T B s in Equation 1.2, we have Equation 3.3 for the first component. By Theorem 3.1,

P_1 = m_1 m_1^T

is the orthogonal projector onto the span of m_1, and I − P_1 is symmetric. Therefore X_2 = (I − P_1) X, and S_2 is defined by

S_2 = X_2^T X_2 = X^T (I − P_1)^T (I − P_1) X.

Since I − P_1 is idempotent, we have

S_2 = X^T (I − P_1) X = S − X^T P_1 X.

By Theorem 3.2, we know that P_1 X = (1/||y_1||) m_1 w_1^T, so X^T P_1 X = w_1 w_1^T / ||y_1||^2 and then

S_2 = S − w_1 w_1^T / ||y_1||^2.

Plug S_2 into

B_2 = S_2 − (S_2 e)(e^T S_2) / (e^T S_2 e)

and notice that w_1^T e = 0 (because w_1 and e are eigenvectors corresponding to different eigenvalues of B) to produce

B_2 = B − w_1 w_1^T / ||y_1||^2.

So by Brauer's theorem [13] (Exercise 7.1.17), the eigenpairs of B_2 are the ones of B with (β_1, w_1) replaced by an eigenpair with zero eigenvalue (one can check that w_1^T w_1 = β_1 ||y_1||^2, so the replaced eigenvalue β_1 − w_1^T w_1/||y_1||^2 is indeed zero). So β_2 is the largest eigenvalue of B_2 and w_2 is the eigenvector of B_2 corresponding to β_2.

For the cases when i > 2, let

P = P_1 + P_2 + ⋯ + P_{i−1}.

Notice that the vector that maximizes s^T B_i s is the eigenvector of B_i corresponding to its largest eigenvalue [14]. Then by similar steps we can prove that S_i e = S e. Then B_i can be defined by

B_i = S_i − (S_i e)(e^T S_i) / (e^T S_i e).

It is easy to see that I − P is idempotent, because the P_j are mutually orthogonal projectors by Theorem 3.1. Then we have

S_i = X^T (I − P) X = S − w_1 w_1^T / ||y_1||^2 − ⋯ − w_{i−1} w_{i−1}^T / ||y_{i−1}||^2.

Plug S_i into the definition of B_i above and notice that w_j^T e = 0 for j = 1, …, i − 1 (because the w_j and e are eigenvectors corresponding to different eigenvalues of B) to produce

B_i = B − w_1 w_1^T / ||y_1||^2 − ⋯ − w_{i−1} w_{i−1}^T / ||y_{i−1}||^2.

So by Brauer's theorem again, the eigenpairs of B_i are the ones of B_{i−1} with (β_{i−1}, w_{i−1}) replaced by an eigenpair with zero eigenvalue. So β_i is the largest eigenvalue of B_i and w_i is the eigenvector of B_i corresponding to β_i. ∎

Theorem 3.3 says that when we build the new data matrix X_i from X, both S and B change. B_i is different from B, but the eigenpairs of B are retained by B_i except for the first i − 1 pairs. The conclusion is that the first modularity component has the largest modularity of the data X, and each succeeding modularity component has the largest modularity subject to the constraint that it is orthogonal to all previous modularity components.
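The deflation step behind Theorem 3.3 can be seen in isolation with a small numpy sketch (applied here to a generic symmetric matrix rather than the paper's B_i): subtracting a suitable rank-one term built from an eigenpair, as in Brauer's theorem, zeroes that eigenvalue and leaves every other eigenvalue untouched.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.random((6, 6))
A = M + M.T                                   # a symmetric matrix with distinct eigenvalues

vals, vecs = np.linalg.eigh(A)
lam1 = vals[-1]                               # dominant eigenvalue
x1 = vecs[:, -1]                              # its unit eigenvector

# Rank-one deflation (Brauer's theorem with v = lam1 * x1): the eigenvalue lam1
# is replaced by lam1 - v^T x1 = 0, and the remaining eigenpairs are retained.
A_deflated = A - lam1 * np.outer(x1, x1)

vals_deflated = np.linalg.eigvalsh(A_deflated)
print(np.allclose(np.sort(vals_deflated),
                  np.sort(np.append(vals[:-1], 0.0))))   # True
```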

4 Conclusion

In this paper, the concept of modularity components is defined, and some important properties of modularity components are proven. The concept of modularity components can be used to explain why it is reasonable to use more than one eigenvector of the modularity matrix for data clustering. The combination of modularity clustering and modularity components gives a modularity component analysis that has properties similar to those of the well-known principal component analysis.

References

  • [1] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, Fast unfolding of communities in large networks, Journal of statistical mechanics: theory and experiment, 2008 (2008), p. P10008.
  • [2] J. R. Bunch, C. P. Nielsen, and D. C. Sorensen, Rank-one modification of the symmetric eigenproblem, Numerische Mathematik, 31 (1978), pp. 31–48.
  • [3] R. Chitta, R. Jin, and A. K. Jain, Efficient kernel clustering using random fourier features, in Data Mining (ICDM), 2012 IEEE 12th International Conference on, IEEE, 2012, pp. 161–170.
  • [4] B. DasGupta and D. Desai, On the complexity of Newman's community finding approach for biological and social networks, Journal of Computer and System Sciences, 79 (2013), pp. 50–67.
  • [5] M. A. Fortuna, D. B. Stouffer, J. M. Olesen, P. Jordano, D. Mouillot, B. R. Krasnov, R. Poulin, and J. Bascompte, Nestedness versus modularity in ecological networks: two sides of the same coin?, Journal of Animal Ecology, 79 (2010), pp. 811–817.
  • [6] B. H. Good, Y.-A. de Montjoye, and A. Clauset, Performance of modularity maximization in practical contexts, Physical Review E, 81 (2010), p. 046106.
  • [7] T. Hertz, A. Bar-Hillel, and D. Weinshall, Boosting margin based distance functions for clustering, in Proceedings of the Twenty-First International Conference on Machine Learning, ACM, 2004, p. 50.

  • [8] I. Jolliffe, Principal component analysis, Wiley Online Library, 2002.
  • [9] A. Lancichinetti and S. Fortunato, Limits of modularity maximization in community detection, Physical review E, 84 (2011), p. 066122.
  • [10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE, 86 (1998), pp. 2278–2324.
  • [11] R. A. Mercovich, A. Harkin, and D. Messinger, Automatic clustering of multispectral imagery by maximization of the graph modularity, in SPIE Defense, Security, and Sensing, International Society for Optics and Photonics, 2011, pp. 80480Z–80480Z.
  • [12] D. Meunier, R. Lambiotte, A. Fornito, K. D. Ersche, and E. T. Bullmore, Hierarchical modularity in human brain functional networks, Hierarchy and dynamics in neural networks, 1 (2010), p. 2.

  • [13] C. D. Meyer, Matrix analysis and applied linear algebra, Siam, 2000.
  • [14] M. E. Newman, Modularity and community structure in networks, Proceedings of the National Academy of Sciences, 103 (2006), pp. 8577–8582.
  • [15] M. E. Newman and M. Girvan, Finding and evaluating community structure in networks, Physical review E, 69 (2004), p. 026113.
  • [16] S. L. Race, C. Meyer, and K. Valakuzhy, Determining the number of clusters via iterative consensus clustering, in Proceedings of the SIAM Conference on Data Mining (SDM), SIAM, 2013, pp. 94–102.
  • [17] R. Rotta and A. Noack, Multilevel local search algorithms for modularity clustering, Journal of Experimental Algorithmics (JEA), 16 (2011), pp. 2–3.
  • [18] U. Von Luxburg, A tutorial on spectral clustering, Statistics and computing, 17 (2007), pp. 395–416.
  • [19] J. H. Wilkinson, The algebraic eigenvalue problem, vol. 87, Clarendon Press Oxford, 1965.
  • [20] R. Zhang and A. I. Rudnicky, A large scale clustering scheme for kernel k-means, in Pattern Recognition, 2002. Proceedings. 16th International Conference on, vol. 4, IEEE, 2002, pp. 289–292.

Appendices

Appendix A Proof of Lemma 2.1

The lemma is based on a theorem from [2] about the interlacing property of the eigenvalues of a diagonal matrix and its rank-one modification, and on how to calculate the eigenvectors of a diagonal-plus-rank-one (DPR1) matrix [13]. The theorem can also be found in [19].

Theorem A.1.

Let C = D + ρ z z^T, where D is diagonal and ρ ≠ 0. Let d_1 ≥ d_2 ≥ ⋯ ≥ d_n be the eigenvalues of D, and let λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n be the eigenvalues of C. Then λ_1 ≥ d_1 ≥ λ_2 ≥ d_2 ≥ ⋯ ≥ λ_n ≥ d_n if ρ > 0, and d_1 ≥ λ_1 ≥ d_2 ≥ λ_2 ≥ ⋯ ≥ d_n ≥ λ_n if ρ < 0. If the d_i are distinct and all the elements of z are nonzero, then the eigenvalues of C strictly separate those of D.

Corollary A.2.

With the notation in Theorem A.1, the eigenvector of C corresponding to the eigenvalue λ_i is given by (D − λ_i I)^{−1} z.

Theorem A.1 tells us that the eigenvalues of a DPR1 matrix are interlaced with the eigenvalues of the original diagonal matrix. Next we will write the eigenvectors corresponding to the positive eigenvalues of a modularity matrix as linear combinations of the eigenvectors of the corresponding adjacency matrix.
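Corollary A.2 is easy to confirm numerically; the following sketch (with an arbitrary DPR1 matrix, not the one from the paper) checks that (D − λI)^{−1} z points in the same direction as the computed eigenvector.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
D = np.diag(np.sort(rng.random(n)))           # diagonal matrix with distinct entries
z = rng.random(n) + 0.1                       # all components nonzero
rho = 1.0
C = D + rho * np.outer(z, z)                  # a DPR1 (diagonal plus rank-one) matrix

vals, vecs = np.linalg.eigh(C)
lam = vals[-1]                                # an eigenvalue of C, not an eigenvalue of D

x_formula = np.linalg.solve(D - lam * np.eye(n), z)   # Corollary A.2
x_formula /= np.linalg.norm(x_formula)
x_numeric = vecs[:, -1]

# Same direction up to sign.
print(min(np.linalg.norm(x_formula - x_numeric),
          np.linalg.norm(x_formula + x_numeric)) < 1e-8)   # True
```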

With the notation of Section 1, since S = X^T X, if the SVD of X is X = U Σ V^T, then

S = V Σ^T Σ V^T = V Λ V^T,

where Λ = Σ^T Σ is an n × n diagonal matrix. Suppose the rows and columns of Λ are ordered such that σ_1^2 ≥ σ_2^2 ≥ ⋯ ≥ σ_k^2 > 0 = σ_{k+1}^2 = ⋯ = σ_n^2, where k is the number of nonzero singular values. Let e denote the column vector of all ones. Similarly, since B is symmetric, it is orthogonally similar to a diagonal matrix. Suppose the eigenvalues of B are β_1 ≥ β_2 ≥ ⋯ ≥ β_n, with largest eigenvalues β_1, …, β_{k−1}.

Proof.

Since d = S e, we have

B = S − d d^T/(e^T d) = V (Λ − z z^T/(e^T d)) V^T,

where z = V^T d = Λ V^T e and e^T d = e^T S e > 0. Since Λ − z z^T/(e^T d) is also symmetric, it is orthogonally similar to a diagonal matrix. So we have

Λ − z z^T/(e^T d) = W M W^T,

where W is orthogonal and M is diagonal. Since Λ − z z^T/(e^T d) is a DPR1 matrix, and −1/(e^T d) < 0, the interlacing theorem applies to the eigenvalues of Λ and Λ − z z^T/(e^T d). More specifically, we have

σ_1^2 > β_1 > σ_2^2 > β_2 > ⋯ > σ_{k−1}^2 > β_{k−1} > σ_k^2 > 0.

The strict inequalities hold because of our assumptions. Let C = Λ − z z^T/(e^T d). Since B = V C V^T, we have V^T B V = C. Suppose (β, x) is an eigenpair of C; then

B (V x) = V C V^T V x = V C x = β V x

implies that (β, V x) is an eigenpair of B if and only if (β, x) is an eigenpair of C. By Corollary A.2, the eigenvector of C corresponding to β_i, i = 1, …, k − 1, is given by

(Λ − β_i I)^{−1} z,

and hence the eigenvector of B corresponding to β_i is given by

V (Λ − β_i I)^{−1} z.

Since d = S e = V Λ V^T e, where e is a column vector with all ones, we have

z_j = (Λ V^T e)_j = σ_j^2 (v_j^T e).

Since σ_j = 0 for j > k, we have z_j = 0 for j > k. Therefore, the eigenvector of B corresponding to β_i is given by

w_i = α_{i1} v_1 + α_{i2} v_2 + ⋯ + α_{ik} v_k,

where

α_{ij} = σ_j^2 (v_j^T e) / (σ_j^2 − β_i). ∎

Appendix B Examples satisfying the assumptions in Lemma 2.1

We used two subsets of the popular MNIST data set from the literature, and the data set is described below.

The PenDigit data sets are subsets of the widely used MNIST database [10][20][7][3][16]. The original data contains a training set of 60,000 handwritten digits from 44 writers. The first subset used in the experiments contains some of the digits 1, 5 and 7 (the data can be downloaded at http://www.kaggle.com/c/digit-recognizer/data). The second subset used contains some of the digits 1, 7 and 9. Each piece of data is a row vector converted from a grey-scale image. Each image is 28 pixels in height and 28 pixels in width, so there are 784 pixels in total. Each row vector contains the label of the digit and the lightness of each pixel. The lightness of a pixel is represented by an integer from 0 to 255 inclusive, and smaller numbers represent lighter pixels.

The B matrix of the 1-5-7 subset has 644 positive eigenvalues, and the largest 643 eigenvalues of this B matrix are different from both 0 and the eigenvalues of the corresponding S. The B matrix of the 1-7-9 subset has 623 positive eigenvalues, and the largest 622 eigenvalues of this B matrix are different from both 0 and the eigenvalues of the corresponding S. Thus we conclude that these examples satisfy the assumptions in Lemma 2.1.
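A sketch of this kind of check (the helper name is ours, and the random placeholder matrix stands in for the actual PenDigit subsets, which are not reproduced here): count the positive eigenvalues of B and verify that the leading ones stay away from 0 and from the eigenvalues of S.

```python
import numpy as np

def check_lemma_assumptions(X, tol=1e-8):
    """For raw data X (attributes x data points), count the positive eigenvalues
    of B and check that the leading ones differ from 0 and from the eigenvalues
    of S = X^T X, as required by Lemma 2.1."""
    S = X.T @ X
    d = S.sum(axis=1)
    B = S - np.outer(d, d) / d.sum()

    eig_S = np.linalg.eigvalsh(S)
    eig_B = np.sort(np.linalg.eigvalsh(B))[::-1]

    n_pos = int(np.sum(eig_B > tol))
    leading = eig_B[:n_pos - 1]                      # the largest n_pos - 1 eigenvalues of B
    away_from_zero = bool(np.all(leading > tol))
    away_from_S = bool(np.all(np.abs(leading[:, None] - eig_S[None, :]) > tol))
    return n_pos, away_from_zero, away_from_S

# Random placeholder standing in for a digit subset (rows of pixels, transposed
# so that columns are data points); replace with the actual PenDigit data.
X_demo = np.random.default_rng(6).random((20, 200))
print(check_lemma_assumptions(X_demo))
```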