Clustering, a fundamental task in data mining and machine learning, aims to divide a set of entities into several groups such that entities in the same group are more similar to each other than to those in other groups. In graph clustering or partitioning, the entities are modeled as the vertices of a graph and their similarities are encoded in the edges. In this setting, the goal is to group the vertices into clusters such that there are more edges within each cluster than across clusters.
While graphs serve as a popular tool to model pairwise relationships, in many real-world applications the entities engage in more complicated, higher-order relationships. For example, in coauthorship networks more than two authors can interact in writing a manuscript. Hypergraphs can be used to represent such datasets, where the notion of an edge is extended to a hyperedge that can connect more than two vertices. Existing research on hypergraph partitioning mainly follows two directions. One is to project a hypergraph onto a proxy graph via hyperedge expansion, so that graph partitioning methods can be directly leveraged [2, 38, 1]. The other is to represent hypergraphs using tensors and adopt tensor decomposition algorithms [32, 16, 5, 20].
To better accommodate hypergraphs for the representation of real-world data, several extensions over the classical hypergraph have been recently proposed [23, 3, 7, 18, 30]. These more elaborate models consider different types of vertices or hyperedges, or different levels of relations. In this paper, we consider edge-dependent vertex weights (EDVWs), which can be used to reflect the different importance or contribution of vertices in a hyperedge. This model is highly relevant in practice. For example, an e-commerce system can be modeled as a hypergraph with EDVWs where users and products are respectively modeled as vertices and hyperedges, and EDVWs represent the quantity of a product in a user's shopping basket. EDVWs can also be used to model the relevance of a word to a document in text mining, the probability of an image pixel belonging to a segment in image segmentation, and the author positions in a coauthorship or citation network, to name a few.
A large portion of clustering algorithms focus on one-way clustering, i.e., clustering data entities based on their features and, in the hypergraph setting, clustering vertices based on hyperedges. Indeed, a hypergraph partitioning algorithm has been proposed to cluster the vertices in a hypergraph with EDVWs. However, it is more desirable to simultaneously cluster (or co-cluster) both vertices and hyperedges in many applications including text mining [12, 11], product recommendation, and bioinformatics [6, 8]. Moreover, co-clustering can leverage the benefit of exploiting the duality between data entities and features to effectively deal with high-dimensional and sparse data [11, 26].
In this paper, we study the problem of co-clustering vertices and hyperedges in a hypergraph with EDVWs.
Our contributions can be summarized as follows:
(i) We define a Laplacian for hypergraphs with EDVWs through random walks on vertices and hyperedges and show its equivalence to the Laplacian of a specific digraph obtained via a modified star expansion of the hypergraph.
(ii) We propose a spectral hypergraph co-clustering method based on the proposed hypergraph Laplacian.
(iii) We validate the effectiveness of the proposed method via numerical experiments on real-world datasets.
Notation: The entries of a matrix $X$ are denoted by $X_{ij}$ or $[X]_{ij}$. Operations $(\cdot)^\top$ and $\mathrm{tr}(\cdot)$ represent transpose and trace, respectively. $I_n$ and $0_{m \times n}$ refer to the identity matrix of size $n$ and the all-zero matrix of size $m \times n$. $\mathrm{Diag}(\mathbf{x})$ denotes a diagonal matrix whose diagonal entries are given by the vector $\mathbf{x}$. Finally, $[A; B]$ represents the matrix obtained by vertically concatenating the two matrices $A$ and $B$, while $[A, B]$ denotes horizontal concatenation.
II-A Hypergraphs with edge-dependent vertex weights
Hypergraphs are generalizations of graphs where edges can connect more than two vertices. In this paper, we consider the hypergraph model with EDVWs as defined next.
A hypergraph with EDVWs consists of a set of vertices $\mathcal{V}$, a set of hyperedges $\mathcal{E}$ where a hyperedge $e \in \mathcal{E}$ is a subset of the vertex set, a weight $w(e)$ for every hyperedge $e \in \mathcal{E}$, and a weight $\gamma_e(v)$ for every hyperedge $e \in \mathcal{E}$ and every vertex $v \in e$.
The difference between the above hypergraph model and the typical hypergraph model considered in most existing papers is the introduction of the EDVWs $\gamma_e(v)$. The motivation is to enable the model to describe cases in which the vertices in the same hyperedge contribute differently to this hyperedge. For example, in a coauthorship network, every author (vertex) in general has a different degree of contribution to a paper (hyperedge), usually represented by the order of the authors. This information is lost in traditional hypergraph models but it can be easily encoded through EDVWs.
For convenience, let the matrix $\Gamma \in \mathbb{R}^{|\mathcal{E}| \times |\mathcal{V}|}$ collect the edge-dependent vertex weights, with $\Gamma_{ev} = \gamma_e(v)$ if $v \in e$ and $\Gamma_{ev} = 0$ otherwise. Also, let the diagonal matrix $W \in \mathbb{R}^{|\mathcal{E}| \times |\mathcal{E}|}$ collect the hyperedge weights, with $W_{ef} = w(e)$ if $e = f$ and $W_{ef} = 0$ otherwise. Throughout the paper we assume that the hypergraph is connected.
II-B Spectral graph partitioning
Given an undirected graph with $n$ vertices, the goal of graph partitioning is to divide its vertex set $\mathcal{V}$ into $k$ disjoint subsets (clusters) $\mathcal{V}_1, \ldots, \mathcal{V}_k$ such that there are more (heavily weighted) edges inside each cluster and few edges across clusters, while these clusters are also balanced in size.¹

¹ Although there are different variations of the graph partitioning problem, this is the one that we adopt in this paper.
To formalize this problem, let $A$, $D$, and $L = D - A$ denote the weighted adjacency matrix, the degree matrix, and the combinatorial graph Laplacian, respectively. Denote by $\mathcal{S} \subset \mathcal{V}$ a subset of vertices and by $\bar{\mathcal{S}} = \mathcal{V} \setminus \mathcal{S}$ its complement. Then, the cut between $\mathcal{S}$ and $\bar{\mathcal{S}}$ is defined as the sum of the weights of the edges across them, whereas the volume of $\mathcal{S}$ is defined as the sum of the weighted degrees of the vertices in $\mathcal{S}$. More formally, we have

$$\mathrm{cut}(\mathcal{S}, \bar{\mathcal{S}}) = \sum_{i \in \mathcal{S},\, j \in \bar{\mathcal{S}}} A_{ij}, \qquad \mathrm{vol}(\mathcal{S}) = \sum_{i \in \mathcal{S}} D_{ii}.$$
One well-known measure for evaluating a partition $\{\mathcal{V}_1, \ldots, \mathcal{V}_k\}$ is the normalized cut (Ncut), defined as

$$\mathrm{Ncut}(\mathcal{V}_1, \ldots, \mathcal{V}_k) = \sum_{i=1}^{k} \frac{\mathrm{cut}(\mathcal{V}_i, \bar{\mathcal{V}}_i)}{\mathrm{vol}(\mathcal{V}_i)}.$$
If we define an $n \times k$ matrix $H$ whose entries are

$$H_{ij} = \begin{cases} 1/\sqrt{\mathrm{vol}(\mathcal{V}_j)} & \text{if vertex } i \in \mathcal{V}_j, \\ 0 & \text{otherwise}, \end{cases}$$
then it can be shown that $\mathrm{Ncut}(\mathcal{V}_1, \ldots, \mathcal{V}_k) = \mathrm{tr}(H^\top L H)$ with $H^\top D H = I_k$. Thus, we can write the problem of minimizing the Ncut as

$$\min_{H} \; \mathrm{tr}(H^\top L H) \quad \text{s.t.} \quad H^\top D H = I_k.$$

Relaxing the constraint on the form of the entries of $H$ and allowing them to take arbitrary real values, the solution is given by the generalized eigenvectors of $(L, D)$ associated with the $k$ smallest generalized eigenvalues. Then, $k$-means can be applied to the rows of $H$ to obtain the desired clusters $\mathcal{V}_1, \ldots, \mathcal{V}_k$.
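For illustration, the two-step recipe above (generalized eigenvectors of $(L, D)$, then $k$-means on the rows) can be sketched in Python; this is our own sketch of the standard relaxation, not the original implementation, and the function name `spectral_partition` is ours:

```python
import numpy as np
from scipy.linalg import eigh

def spectral_partition(A, k):
    """Relaxed Ncut: return the k smallest generalized eigenpairs of (L, D).

    A is the (symmetric, nonnegative) weighted adjacency matrix. The rows of
    the returned eigenvector matrix are subsequently clustered with k-means.
    """
    D = np.diag(A.sum(axis=1))  # degree matrix
    L = D - A                   # combinatorial Laplacian
    # eigh solves the symmetric generalized problem L f = lambda D f,
    # returning eigenvalues in ascending order.
    vals, vecs = eigh(L, D)
    return vals[:k], vecs[:, :k]
```

For $k = 2$, the sign pattern of the second generalized eigenvector (the Fiedler vector) already separates two weakly connected cliques, so the subsequent $k$-means step is nearly trivial.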
III The Proposed Hypergraph Co-clustering
III-A Star expansion and hypergraph Laplacians
We project the hypergraph onto a directed graph $G^*$ via the so-called star expansion, where we replace each hyperedge with a star graph. More precisely, we introduce a new vertex for every hyperedge $e \in \mathcal{E}$, thus the vertex set of $G^*$ is $\mathcal{V} \cup \mathcal{E}$. The graph $G^*$ connects each new vertex representing a hyperedge $e$ with each vertex $v \in e$ through two directed edges (one in each direction) that we weigh differently, as explained next.
We consider a random walk on the hypergraph (equivalently, on $G^*$) in which we walk from a vertex $v$ to a hyperedge $e$ that contains $v$ with probability proportional to $w(e)$, and then walk from $e$ to a vertex $u$ contained in $e$ with probability proportional to $\gamma_e(u)$. We define two matrices $P_{\mathcal{V}\mathcal{E}} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{E}|}$ and $P_{\mathcal{E}\mathcal{V}} \in \mathbb{R}^{|\mathcal{E}| \times |\mathcal{V}|}$ to collect the transition probabilities from $\mathcal{V}$ to $\mathcal{E}$ and from $\mathcal{E}$ to $\mathcal{V}$, respectively. The corresponding entries are given by

$$[P_{\mathcal{V}\mathcal{E}}]_{ve} = \frac{w(e)\,\mathbb{1}\{v \in e\}}{\sum_{e' : v \in e'} w(e')} \quad \text{and} \quad [P_{\mathcal{E}\mathcal{V}}]_{ev} = \frac{\gamma_e(v)}{\sum_{u \in e} \gamma_e(u)}.$$

Then, the transition probability matrix $P$ associated with a random walk on $G^*$ can be written as

$$P = \begin{bmatrix} 0 & P_{\mathcal{V}\mathcal{E}} \\ P_{\mathcal{E}\mathcal{V}} & 0 \end{bmatrix}.$$
When the hypergraph is connected, the graph $G^*$ is strongly connected, thus the random walk defined by $P$ is irreducible (every vertex can reach every other vertex). Moreover, it is periodic since $G^*$ is bipartite: once we start at a vertex $v \in \mathcal{V}$, we can only return to $v$ after an even number of steps.
It is well known that a random walk has a unique stationary distribution if it is irreducible and aperiodic. To fix the above periodicity problem, we introduce self-loops to $G^*$ and define a new transition probability matrix $P_\alpha = \alpha I + (1 - \alpha) P$, where $\alpha \in (0, 1)$. Matrix $P_\alpha$ defines a random walk (the so-called lazy random walk) where at each discrete time point we take a step of the original random walk with probability $1 - \alpha$ and stay at the current vertex with probability $\alpha$. The stationary distribution $\boldsymbol{\pi}$ of the lazy random walk is the all-positive dominant left eigenvector of $P_\alpha$, i.e., $\boldsymbol{\pi}^\top P_\alpha = \boldsymbol{\pi}^\top$, scaled to satisfy $\boldsymbol{\pi}^\top \mathbf{1} = 1$. Notice that different choices of $\alpha$ lead to the same $\boldsymbol{\pi}$.
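To make the construction concrete, the following Python sketch (our own illustration; the helper names `hypergraph_transition` and `stationary` are ours) builds $P$ from the EDVW matrix $\Gamma$ and the hyperedge weights, and obtains $\boldsymbol{\pi}$ by power iteration on the lazy walk:

```python
import numpy as np

def hypergraph_transition(Gamma, w):
    """Star-expansion transition matrix from EDVWs.

    Gamma[e, v] = gamma_e(v) (0 if v not in e); w[e] is the hyperedge weight.
    """
    E, V = Gamma.shape
    member = (Gamma > 0).astype(float)               # incidence: is v in e?
    P_VE = member.T * w                              # vertex -> hyperedge, prop. to w(e)
    P_VE /= P_VE.sum(axis=1, keepdims=True)
    P_EV = Gamma / Gamma.sum(axis=1, keepdims=True)  # hyperedge -> vertex, prop. to gamma_e(v)
    return np.block([[np.zeros((V, V)), P_VE],
                     [P_EV, np.zeros((E, E))]])

def stationary(P, alpha=0.5, iters=500):
    """Stationary distribution of the lazy walk P_alpha = alpha*I + (1-alpha)*P."""
    Pa = alpha * np.eye(P.shape[0]) + (1 - alpha) * P
    pi = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(iters):
        pi = pi @ Pa
    return pi / pi.sum()
```

Since $\boldsymbol{\pi}^\top P_\alpha = \boldsymbol{\pi}^\top$ implies $\boldsymbol{\pi}^\top P = \boldsymbol{\pi}^\top$, the returned vector is also invariant under the original (periodic) walk.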
TABLE I

| Datasets | Subsets | # documents | # words | Classes |
|---|---|---|---|---|
| 20 Newsgroups | Dataset 1 | 3,863 | 2,000 | comp.os.ms-windows.misc, rec.autos, sci.crypt, talk.politics.guns |
| 20 Newsgroups | Dataset 2 | 5,663 | 2,000 | alt.atheism, comp.graphics, misc.forsale, rec.sport.hockey, sci.electronics, talk.politics.mideast |
| RCV1 | Dataset 3 | 4,000 | 2,000 | CCAT, ECAT, GCAT, MCAT |
| RCV1 | Dataset 4 | 8,000 | 2,000 | C15, C18, E31, E41, GCRIM, GDIS, M11, M14 |
Given $P$ and $\boldsymbol{\pi}$, we generalize the directed combinatorial Laplacian and the normalized Laplacian to hypergraphs as follows:

$$L = \Pi - \frac{\Pi P + P^\top \Pi}{2}, \qquad \mathcal{L} = I - \frac{\Pi^{1/2} P \Pi^{-1/2} + \Pi^{-1/2} P^\top \Pi^{1/2}}{2},$$

where $\Pi = \mathrm{Diag}(\boldsymbol{\pi})$ is the corresponding degree matrix.
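A direct transcription of these two definitions, assuming $P$ and $\boldsymbol{\pi}$ have already been computed (the function name is ours):

```python
import numpy as np

def directed_laplacians(P, pi):
    """Combinatorial and normalized Laplacians of a digraph random walk.

    P is the transition matrix and pi its stationary distribution;
    Pi = Diag(pi) plays the role of the degree matrix.
    """
    Pi = np.diag(pi)
    Pi_h = np.diag(np.sqrt(pi))       # Pi^{1/2}
    Pi_ih = np.diag(1 / np.sqrt(pi))  # Pi^{-1/2}
    L = Pi - (Pi @ P + P.T @ Pi) / 2
    Ln = np.eye(len(pi)) - (Pi_h @ P @ Pi_ih + Pi_ih @ P.T @ Pi_h) / 2
    return L, Ln
```

Both matrices are symmetric by construction, and $L$ annihilates the constant vector since $\boldsymbol{\pi}^\top P = \boldsymbol{\pi}^\top$.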
III-B Spectral hypergraph partitioning
We can leverage the hypergraph Laplacians proposed in Section III-A to apply spectral graph partitioning methods (as introduced in Section II-B) to hypergraphs. More precisely, we compute the generalized eigenvectors $\mathbf{f}_1, \ldots, \mathbf{f}_k$ of the generalized eigenproblem $L \mathbf{f} = \lambda \Pi \mathbf{f}$ associated with the $k$ smallest eigenvalues, and then cluster the rows of $F = [\mathbf{f}_1, \ldots, \mathbf{f}_k]$ using $k$-means. Note that $L \mathbf{f} = \lambda \Pi \mathbf{f}$ can be written as $\mathcal{L} (\Pi^{1/2} \mathbf{f}) = \lambda (\Pi^{1/2} \mathbf{f})$, implying that $(\lambda, \Pi^{1/2} \mathbf{f})$ is an eigenpair of the normalized Laplacian $\mathcal{L}$. Hence, if $\mathbf{z}$ is an eigenvector of $\mathcal{L}$, then $\mathbf{f} = \Pi^{-1/2} \mathbf{z}$.
Since obtaining eigenvectors can be computationally challenging, we show next how to compute the eigenvectors of $\mathcal{L}$ from a matrix of smaller size. To do this, let us first rewrite $P$ and $\Pi$ in block form as

$$P = \begin{bmatrix} 0 & P_{\mathcal{V}\mathcal{E}} \\ P_{\mathcal{E}\mathcal{V}} & 0 \end{bmatrix}, \qquad \Pi = \begin{bmatrix} \Pi_{\mathcal{V}} & 0 \\ 0 & \Pi_{\mathcal{E}} \end{bmatrix}.$$
Proposition 1. Define the matrix

$$C = \frac{1}{2} \left( \Pi_{\mathcal{V}}^{1/2} P_{\mathcal{V}\mathcal{E}} \Pi_{\mathcal{E}}^{-1/2} + \Pi_{\mathcal{V}}^{-1/2} P_{\mathcal{E}\mathcal{V}}^\top \Pi_{\mathcal{E}}^{1/2} \right),$$

and denote by $\mathbf{u}$ and $\mathbf{v}$ the left and right singular vectors of $C$ associated with the singular value $\sigma$, respectively. Then, the vector $\mathbf{z} = [\mathbf{u}^\top, \mathbf{v}^\top]^\top$ is an eigenvector of $\mathcal{L}$ associated with the eigenvalue $1 - \sigma$.
Proof. Let us rewrite $\mathcal{L}$ as

$$\mathcal{L} = I - \begin{bmatrix} 0 & C \\ C^\top & 0 \end{bmatrix}.$$

Split an eigenvector $\mathbf{z}$ of $\mathcal{L}$ into two parts $\mathbf{z} = [\mathbf{z}_{\mathcal{V}}^\top, \mathbf{z}_{\mathcal{E}}^\top]^\top$, where $\mathbf{z}_{\mathcal{V}}$ and $\mathbf{z}_{\mathcal{E}}$ respectively have length $|\mathcal{V}|$ and $|\mathcal{E}|$. Then we have

$$\mathcal{L} \mathbf{z} = \lambda \mathbf{z} \iff \begin{bmatrix} \mathbf{z}_{\mathcal{V}} - C \mathbf{z}_{\mathcal{E}} \\ \mathbf{z}_{\mathcal{E}} - C^\top \mathbf{z}_{\mathcal{V}} \end{bmatrix} = \lambda \begin{bmatrix} \mathbf{z}_{\mathcal{V}} \\ \mathbf{z}_{\mathcal{E}} \end{bmatrix},$$

and it follows that

$$C \mathbf{z}_{\mathcal{E}} = (1 - \lambda) \mathbf{z}_{\mathcal{V}} \quad \text{and} \quad C^\top \mathbf{z}_{\mathcal{V}} = (1 - \lambda) \mathbf{z}_{\mathcal{E}}.$$

When $\lambda \leq 1$, i.e., $\sigma = 1 - \lambda \geq 0$, $\mathbf{z}_{\mathcal{V}}$ and $\mathbf{z}_{\mathcal{E}}$ are respectively the left and right singular vectors of $C$ and $\sigma$ is the corresponding singular value. ∎
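The proposition is easy to verify numerically. The snippet below (our own check, not part of the paper) builds a matrix with the block structure of $\mathcal{L}$ from an arbitrary block $C$ and confirms that a stacked left/right singular pair is an eigenvector with eigenvalue $1 - \sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.random((4, 3)) * 0.2              # arbitrary off-diagonal block
n, m = C.shape
# Matrix with the block structure I - [[0, C], [C^T, 0]]
Lsym = np.eye(n + m) - np.block([[np.zeros((n, n)), C],
                                 [C.T, np.zeros((m, m))]])
U, s, Vt = np.linalg.svd(C)
# Stack the leading left/right singular pair of C
z = np.concatenate([U[:, 0], Vt[0]])
# z is an eigenvector of Lsym with eigenvalue 1 - sigma_1
assert np.allclose(Lsym @ z, (1 - s[0]) * z)
```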
Based on Proposition 1, our proposed spectral hypergraph co-clustering algorithm is given by the following steps:
1) Compute the left and right singular vectors of $C$ associated with the $k$ largest singular values, denoted by $U \in \mathbb{R}^{|\mathcal{V}| \times k}$ and $V \in \mathbb{R}^{|\mathcal{E}| \times k}$, respectively.
2) Leverage Proposition 1 to form $F = \Pi^{-1/2} [U^\top, V^\top]^\top$.
3) (Optional) Normalize the rows of $F$ to have unit norm.
4) Apply $k$-means to the rows of $F$ (or its normalized version).
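Putting the four steps together, a compact sketch (our own illustration; `coclust_embedding` is a hypothetical helper name, and `pi_V`, `pi_E` denote the vertex and hyperedge parts of the stationary distribution) that includes the optional row normalization could look as follows:

```python
import numpy as np

def coclust_embedding(P_VE, P_EV, pi_V, pi_E, k):
    """Spectral co-clustering embedding: k leading singular pairs of C."""
    C = (np.diag(pi_V**0.5) @ P_VE @ np.diag(pi_E**-0.5)
         + np.diag(pi_V**-0.5) @ P_EV.T @ np.diag(pi_E**0.5)) / 2
    U, s, Vt = np.linalg.svd(C)
    Z = np.vstack([U[:, :k], Vt[:k].T])              # eigenvectors of the normalized Laplacian
    F = Z / np.sqrt(np.concatenate([pi_V, pi_E]))[:, None]
    F /= np.linalg.norm(F, axis=1, keepdims=True)    # optional row normalization
    return F                                          # feed the rows to k-means
```

The first $|\mathcal{V}|$ rows of the returned matrix embed the vertices and the remaining $|\mathcal{E}|$ rows embed the hyperedges, so a single $k$-means run clusters both jointly.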
The optional normalization step above is inspired by a well-established spectral partitioning algorithm. In the next section, we denote the variant of our algorithm without normalization as s-spec-1, whereas the one that implements the third step above is denoted as s-spec-2.
IV Numerical Experiments

In this section, we evaluate the performance of the proposed methods via numerical experiments.² We consider two widely used real-world text datasets: 20 Newsgroups³ and Reuters Corpus Volume 1 (RCV1). Both of them contain documents in different categories. We extract two subsets of documents from each of them to build datasets of different levels of difficulty (datasets 1 and 3 are easier than datasets 2 and 4; see Table I). We consider the 2,000 most frequent words in the corpus after removing stop words as well as words appearing in too large or too small a fraction of the documents.

² The code needed to replicate the numerical experiments presented in this paper can be found at https://github.com/yuzhu2019/hypergraph_cocluster.
³ http://qwone.com/~jason/20Newsgroups/
To model text datasets using hypergraphs with EDVWs, we follow the procedure in prior work. More precisely, we consider documents as vertices and words as hyperedges. A document (vertex) belongs to a word (hyperedge) if the word appears in the document. The EDVWs (the entries in $\Gamma$) are taken as the corresponding tf-idf (term frequency-inverse document frequency) values, which reflect how relevant a word is to a document in a collection of documents. The weight associated with a hyperedge is computed as the standard deviation of the entries in the corresponding row of $\Gamma$.
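For concreteness, a minimal sketch of this construction from a raw document-term count matrix (using one common tf-idf variant; the exact weighting scheme used in the experiments may differ, and the function name is ours):

```python
import numpy as np

def tfidf_edvw(counts):
    """Build the EDVW matrix from term counts.

    counts[d, t] = occurrences of term t in document d. Returns Gamma with
    words as rows (hyperedges) and documents as columns (vertices), plus the
    hyperedge weights (per-row standard deviations).
    """
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)                       # document frequency per term
    idf = np.log(counts.shape[0] / np.maximum(df, 1)) + 1.0  # smoothed variant (assumption)
    Gamma = (tf * idf).T            # EDVWs: tf-idf of word t in document d
    w = Gamma.std(axis=1)           # hyperedge weight: std of the row of Gamma
    return Gamma, w
```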
We compare the proposed methods (s-spec-1 and s-spec-2) with the following three methods. (i) The naive method (naive): We run $k$-means on the columns and the rows of the tf-idf matrix to cluster documents and words, respectively. (ii) Bipartite spectral graph partitioning (bi-spec): The dataset is modeled as an (undirected) bipartite graph between documents and words, and then a spectral graph partitioning algorithm is applied; see Section II-B. (iii) Clique expansion (c-spec): This method projects the hypergraph with EDVWs onto a proxy graph via the so-called clique expansion and then applies a spectral graph partitioning algorithm. We consider it as the state-of-the-art method. Since c-spec can only cluster the vertices (and not the hyperedges), we build a hypergraph as mentioned above to cluster documents and then we construct another hypergraph in which we take words as vertices and documents as hyperedges to cluster words. Notice that, of the above-mentioned methods, only the proposed methods (s-spec-1 and s-spec-2) and bi-spec can co-cluster documents and words.
To evaluate the clustering performance, we consider four metrics, namely, clustering accuracy score (ACC), normalized mutual information (NMI), weighted F1 score (F1), and adjusted Rand index (ARI). For all of them, a larger value indicates better performance. Notice that there are no ground-truth classes for words. Hence, following prior work, we consider the class-conditional word distribution. More precisely, we compute the aggregate word distribution for each document class, and then we assign every word to the class in which it has the highest probability in the aggregate distribution. We regard this assignment as the ground truth for performance evaluation.
The numerical results (averaged over multiple runs of $k$-means) are shown in Fig. 1. We first notice that, of the proposed methods, s-spec-2 usually performs better than s-spec-1. This is in line with previous observations that the lack of a normalization step (as in our s-spec-1) might lead to performance degradation when the connectivity within each cluster varies substantially across clusters. It can also be seen that the proposed methods and c-spec tend to work better than the naive method and the classical bipartite spectral graph partitioning method. This underscores the value of the hypergraph model considered. Importantly, s-spec-2 achieves clustering accuracy similar to that of the state-of-the-art c-spec for documents but tends to perform better in clustering words. Moreover, the proposed methods achieve small standard deviations, indicating their robustness to different centroid initializations in $k$-means.
Having shown the superior performance of s-spec-2, we now present visualizations of its application to Dataset 1 to further illustrate its effectiveness. In Fig. 2, we depict the embeddings of documents and words obtained by s-spec-2 by mapping them to a 2D space using t-SNE. We can see that documents and words in the same class appear to form groups. In Fig. 3, we plot the word clouds⁴ for the words predicted in the classes 'comp.os.ms-windows.misc' (Microsoft Windows operating system) and 'sci.crypt' (cryptography). The size of a word is determined by its frequency in the documents predicted in the same class, and thus reveals its importance within the class. We can see that the top words (such as windows, file, dos, ms in 'comp.os.ms-windows.misc') align well with our intuitive understanding of the class topics.

⁴ https://github.com/amueller/word_cloud
V Conclusion

We developed valid Laplacian matrices for hypergraphs with EDVWs, based on which we proposed spectral partitioning algorithms for co-clustering vertices and hyperedges. Through real-world text mining applications, we showcased the value of considering hypergraph models and demonstrated the effectiveness of our proposed methods. Future research avenues include: (i) Developing alternative co-clustering methods where we replace the spectral clustering step by non-negative matrix tri-factorization algorithms [13, 31, 36] applied to matrices related to the hypergraph Laplacians. (ii) Generalizing additional existing digraph Laplacians [24, 10] to the hypergraph case. (iii) Studying the use of the hypergraph model with EDVWs in other network analysis tasks such as hypergraph alignment [37, 34, 28]. Related to this last point, the fact that our proposed methods embed vertices and hyperedges in the same vector space (as shown in Fig. 2) facilitates the development of embedding-based hypergraph alignment algorithms.
References

-  (2006) Higher order learning with graphs. In ICML, pp. 17–24. Cited by: §I.
-  (2005) Beyond pairwise clustering. In CVPR, Vol. 2, pp. 838–845. Cited by: §I.
-  (2018) Heterogeneous hyper-network embedding. In ICDM, pp. 875–880. Cited by: §I.
-  (2016) Recent advances in graph partitioning. Algorithm engineering, pp. 117–158. Cited by: footnote 1.
-  (2017) The Fiedler vector of a Laplacian tensor for hypergraph partitioning. SIAM Journal on Scientific Computing 39 (6), pp. A2508–A2537. Cited by: §I.
-  (2000) Biclustering of expression data. In ISMB, Vol. 8, pp. 93–103. Cited by: §I.
-  (2019) Random walks on hypergraphs with edge-dependent vertex weights. In ICML, pp. 1172–1181. Cited by: §I, §II-A.
-  (2004) Minimum sum-squared residue co-clustering of gene expression data. In SDM, pp. 114–125. Cited by: §I.
-  (2005) Laplacians and the Cheeger inequality for directed graphs. Annals of Combinatorics 9 (1), pp. 1–19. Cited by: §III-A, §III-A.
-  (2020) Hermitian matrices for clustering directed graphs: insights and applications. In AISTATS, pp. 983–992. Cited by: §V.
-  (2003) Information-theoretic co-clustering. In KDD, pp. 89–98. Cited by: §I.
-  (2001) Co-clustering documents and words using bipartite spectral graph partitioning. In KDD, pp. 269–274. Cited by: §I, §IV.
-  (2006) Orthogonal nonnegative matrix t-factorizations for clustering. In KDD, pp. 126–135. Cited by: §IV, §V.
-  (2010) Interactive image segmentation using probabilistic hypergraphs. Pattern Recognition 43 (5), pp. 1863–1873. Cited by: §I.
-  (2016) Analysis of network clustering algorithms and cluster quality metrics at scale. PLoS ONE 11 (7). Cited by: §IV.
-  (2015) A provable generalized tensor spectral method for uniform hypergraph partitioning. In ICML, pp. 400–409. Cited by: §I.
-  (2009) Understanding importance of collaborations in co-authorship networks: a supportiveness analysis approach. In SDM, pp. 1112–1123. Cited by: §I.
-  (2020) Hypergraph random walks, laplacians, and clustering. In CIKM, pp. 495–504. Cited by: §I, §I, §IV, §IV.
-  (2018) REGAL: representation learning-based graph alignment. In CIKM, pp. 117–126. Cited by: §V.
-  (2019) Community detection for hypergraph networks via regularized tensor power iteration. arXiv preprint arXiv:1909.06503. Cited by: §I.
-  (2004-04) RCV1: a new benchmark collection for text categorization research. JMLR 5, pp. 361–397. Cited by: §IV.
-  (2018) E-tail product return prediction via hypergraph-based local graph cut. In KDD, pp. 519–527. Cited by: §I.
-  (2017) Inhomogeneous hypergraph clustering with applications. In NIPS, pp. 2308–2318. Cited by: §I.
-  (2010) Random walks on digraphs, the generalized digraph laplacian and the degree of asymmetry. In International Workshop on Algorithms and Models for the Web-Graph, pp. 74–85. Cited by: §V.
-  (1982) Least squares quantization in PCM. IEEE Transactions on Information Theory 28 (2), pp. 129–137. Cited by: §II-B.
-  (2005) Co-clustering by block value decomposition. In KDD, pp. 635–640. Cited by: §I.
-  (2008-11) Visualizing data using t-SNE. JMLR 9, pp. 2579–2605. Cited by: §IV.
-  (2016) Triangular alignment (TAME): a tensor-based approach for higher-order network alignment. IEEE/ACM transactions on computational biology and bioinformatics 14 (6), pp. 1446–1458. Cited by: §V.
-  On spectral clustering: analysis and an algorithm. In NIPS, pp. 849–856. Cited by: §III-B, §IV.
-  (2021) Signal processing on higher-order networks: livin' on the edge... and beyond. arXiv preprint arXiv:2101.05510. Cited by: §I.
-  (2012) Graph dual regularization non-negative matrix factorization for co-clustering. Pattern Recognition 45 (6), pp. 2237–2250. Cited by: §V.
-  (2006) Multi-way clustering using super-symmetric non-negative tensor factorization. In ECCV, pp. 595–608. Cited by: §I.
-  (2000) Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (8), pp. 888–905. Cited by: §II-B.
-  (2014) Mapping users across networks by manifold alignment on hypergraph. In AAAI, Vol. 28. Cited by: §V.
-  (2014) Improving co-cluster quality with application to product recommendations. In CIKM, pp. 679–688. Cited by: §I.
-  (2011) Fast nonnegative matrix tri-factorization for large-scale data co-clustering. In IJCAI, Cited by: §V.
-  (2008) Probabilistic graph and hypergraph matching. In CVPR, pp. 1–8. Cited by: §V.
-  (2007) Learning with hypergraphs: clustering, classification, and embedding. In NIPS, pp. 1601–1608. Cited by: §I.