Three hypergraph eigenvector centralities

by Austin R. Benson
Cornell University

Eigenvector centrality is a standard network analysis tool for determining the importance of (or ranking of) entities in a connected system that is represented by a graph. However, many complex systems and datasets have natural multi-way interactions that are more faithfully modeled by a hypergraph. Here we extend the notion of graph eigenvector centrality to uniform hypergraphs. Traditional graph eigenvector centralities are given by a positive eigenvector of the adjacency matrix, which is guaranteed to exist by the Perron-Frobenius theorem under some mild conditions. The natural representation of a hypergraph is a hypermatrix (colloquially, a tensor). Using recently established Perron-Frobenius theory for tensors, we develop three tensor eigenvector centralities for hypergraphs, each with a different interpretation. We show that these centralities can reveal different information on real-world data by analyzing hypergraphs constructed from n-gram frequencies, co-tagging on Stack Exchange, and drug combinations observed in patient emergency room visits.





1 Finding important entities from relations

The central question of centrality and ranking in network analysis is: how do we find the important entities given a set of relationships between them? When the relationships are pairwise and the system is modeled by a graph, there is a plethora of definitions and methods for centrality (Borgatti and Everett, 2006; Estrada and Higham, 2010; Boldi and Vigna, 2014), and the study of centrality in social network analysis alone has a long-standing history (Bavelas, 1950; Katz, 1953; Sabidussi, 1966; Bonacich, 1972; de Sola Pool and Kochen, 1978; Freeman, 1977). Somewhat more modern developments come from Web applications, such as PageRank, which was used in the early development of Google search results (Brin and Page, 1998; Page et al., 1999), and hub and authority scores, which were used to find authoritative Web sources (Kleinberg, 1999). Centrality measures are a pivotal part of the network analysis toolbox and thus get used in a variety of applications (Gleich, 2015; Jeong et al., 2001; Bullmore and Sporns, 2009). In addition to the problem of identifying important nodes, centrality is also used as a feature in network analysis machine learning tasks such as role discovery (Henderson et al., 2012), computing graph similarity (Koutra et al., 2013), and spam detection (Ye and Akoglu, 2015).

A major shortcoming of network centrality stems from the long-running assumption throughout network science that relationships are pairwise measurements and hence a graph is the appropriate mathematical model (Strogatz, 2001; Newman, 2003). Thus, nearly all centrality measures are designed within this dyadic paradigm. However, many systems contain simultaneous interactions between several entities. For example, people communicate and collaborate in groups, chemical reactions involve several reagents, and more than two labels may be used to classify a product. In these cases, a hypergraph is a more faithful model, but we lack foundational mathematical analogs of centrality for this model.

This paper focuses on developing analogs of graph eigenvector centrality for hypergraphs. The term "eigenvector centrality" has two meanings in network science. Sometimes, eigenvector centrality means any set of centrality scores on nodes that is an eigenvector of some natural matrix associated with the network at hand. These include, for example, the aforementioned PageRank (which has been called the "$25,000,000,000 eigenvector" (Bryan and Leise, 2006)) and hub and authority scores. Other times, eigenvector centrality refers specifically to the principal eigenvector of the adjacency matrix of a graph (Newman, 2008), which makes the vernacular confusing. The latter eigenvector centrality was originally proposed by Bonacich (1972), was later used to study social networks (Bonacich, 1987), and will be the notion of eigenvector centrality used in this paper.

Background on graph eigenvector centrality

Here we provide the requisite background on eigenvector centrality for graphs; we generalize the formulation to hypergraphs in the next section. Assume that we have a strongly connected (possibly directed) graph with adjacency matrix $A$. The eigenvector centrality may be derived via the following two desiderata (Bonacich, 1972; Newman, 2008):

  1. The centrality score $c_i$ of each node $i$ is proportional to the sum of the centrality scores of the neighbors of $i$, i.e., $c_i \propto \sum_j A_{ij} c_j$.

  2. The centrality scores should be positive, i.e., $c_i > 0$.

Assuming the same proportionality constant for every node, we may auspiciously write the first condition as

  $\lambda c_i = \sum_j A_{ij} c_j$,   (1)

where $\lambda$ is a constant. The matrix enthusiast quickly recognizes that $c$ is an eigenvector of $A$:

  $A c = \lambda c$.   (2)

Equation 2 holds for any eigenpair of $A$. The second of our desiderata, along with the Perron-Frobenius theorem, tells us which one to use.

Theorem 1 (Perron-Frobenius theorem for matrices, as in Theorem 1.4 of Berman and Plemmons (1994))

Let $A$ be an irreducible nonnegative matrix. Then there exists an eigenvector $v > 0$ such that $A v = \lambda v$, $\lambda > 0$ is an eigenvalue of largest magnitude of $A$, the eigenspace associated with $\lambda$ is one-dimensional, and $v$ is the only nonnegative eigenvector of $A$, up to scaling.

If $A$ is the adjacency matrix of a strongly connected graph, then $A$ is irreducible and we can apply the theorem. The vector $v$ gives the centrality scores, which are unique up to scaling. Eigenvector centrality of this form has appeared in a range of applications, including the analysis of infectious disease spreading in primates (Balasubramaniam et al., 2016), patterns in fMRI data of human brains (Lohmann et al., 2010), and career trajectories of Hollywood actors (Taylor et al., 2017).
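To make the derivation concrete, here is a minimal sketch (not from the paper; the use of Python with NumPy is an assumption of this illustration) of computing eigenvector centrality by shifted power iteration, where the shift guards against oscillation on bipartite graphs:

```python
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=1000):
    """Principal eigenvector of a nonnegative adjacency matrix A.

    Shifted power iteration: iterating with A + I keeps the same eigenvectors
    as A but makes the dominant eigenvalue strictly dominant, so the method
    converges even on bipartite graphs, where plain power iteration can
    oscillate between two vectors."""
    n = A.shape[0]
    c = np.ones(n) / np.sqrt(n)                # positive starting vector
    for _ in range(max_iter):
        c_next = A @ c + c                     # (A + I) c
        c_next /= np.linalg.norm(c_next)       # rescale to unit 2-norm
        done = np.linalg.norm(c_next - c) < tol
        c = c_next
        if done:
            break
    return c

# Path graph 1 - 2 - 3 (a bipartite graph): the middle node is most central.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
c = eigenvector_centrality(A)   # -> approximately [0.5, 0.7071, 0.5]
```

The limit agrees with the closed-form principal eigenvector $(1, \sqrt{2}, 1)/2$ of the path graph, which one can verify directly from $A v = \sqrt{2}\, v$.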

2 Hypergraph eigenvector centralities

Instead of a graph, we now assume that our dataset is an $m$-uniform hypergraph $H = (V, E)$, which means that each hyperedge $e \in E$ is a size-$m$ subset of the node set $V$. If $|V| = n$, a natural representation of $H$ is an order-$m$, $n$-dimensional symmetric "hypergraph adjacency tensor" $T$ (technically, this object is a hypermatrix, but "tensor" is synonymous with multi-dimensional array in the data mining community (Kolda and Bader, 2009), so we use it here; see Lim (2013) for precise distinctions):

  $T_{i_1 i_2 \cdots i_m} = 1$ if $\{i_1, i_2, \ldots, i_m\} \in E$, and $0$ otherwise.   (3)
When deriving graph eigenvector centrality above, we used an irreducible adjacency matrix from a strongly connected graph. We need analogous notions for tensors and hypergraphs.

Definition 1 (Irreducible tensor (Lim, 2005))

An order-$m$, dimension-$n$ tensor $T$ is reducible if there exists a non-empty proper subset $S \subset \{1, 2, \ldots, n\}$ such that $T_{i_1 i_2 \cdots i_m} = 0$ for any $i_1 \in S$ and any $i_2, \ldots, i_m \notin S$. If $T$ is not reducible, then it is irreducible.

We introduce connected hypergraphs here using the language of tensors. The definition is the same as classical notions of connectivity in hypergraphs (Berge, 1984) when the tensor is symmetric, which is the case in Eq. 3.

Definition 2 (Strongly connected hypergraph)

An $m$-uniform, $n$-node hypergraph $H$ with adjacency tensor $T$ is strongly connected if the graph induced by the matrix $B$ obtained by summing out all but two modes of $T$, $B_{ij} = \sum_{i_3, \ldots, i_m} T_{i j i_3 \cdots i_m}$, is strongly connected.

The matrix $B$ defined above is called the representative matrix of $T$, and, importantly, a strongly connected hypergraph has an irreducible adjacency tensor (Qi and Luo, 2017). Here we assumed an "undirected" set-based definition of hypergraphs, so $T$ is symmetric following Eq. 3. This means that the graph induced by $B$ is undirected and "strongly connected" really just means "connected." Furthermore, the graph induced by $B$ has the same connectivity as the clique expansion graph of the hypergraph (Agarwal et al., 2006), where each hyperedge induces a clique on its nodes in the graph. There are natural notions of directed hypergraphs with non-symmetric adjacency tensors (Gallo et al., 1993), and some of the theorems we use later still apply in these cases. Therefore, we use the term "strongly connected" throughout.
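Definitions 1 and 2 can be made concrete with a short sketch (an illustration under stated assumptions: Python/NumPy, 3-uniform hypergraphs, nodes labeled 0 to n-1) that builds the symmetric adjacency tensor of Eq. 3 and tests connectivity through the representative matrix:

```python
import itertools
import numpy as np

def adjacency_tensor(n, hyperedges):
    """Symmetric adjacency tensor (Eq. 3) of a 3-uniform hypergraph:
    T[i, j, k] = 1 for every permutation of every hyperedge {i, j, k}."""
    T = np.zeros((n, n, n))
    for e in hyperedges:
        for i, j, k in itertools.permutations(e):
            T[i, j, k] = 1.0
    return T

def is_strongly_connected(T):
    """Definition 2: sum out one mode to get the representative matrix B,
    then check that the graph induced by B is connected (T is symmetric
    here, so "strongly connected" is just "connected")."""
    B = T.sum(axis=2)
    n = B.shape[0]
    seen, stack = {0}, [0]
    while stack:                               # depth-first search over B
        u = stack.pop()
        for v in np.flatnonzero(B[u]):
            if v not in seen:
                seen.add(int(v))
                stack.append(int(v))
    return len(seen) == n

sunflower = adjacency_tensor(5, [(0, 1, 2), (0, 3, 4)])   # core {0}, 2 petals
lonely = adjacency_tensor(5, [(0, 1, 2)])                 # nodes 3, 4 isolated
```

The dense $n^3$ array is purely illustrative; real datasets like those in Section 3 require a sparse, hyperedge-list representation.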

In the rest of this section, we develop three eigenvector centralities for strongly connected hypergraphs. To do so, we generalize the desiderata for the eigenvector centrality scores $c$:

  1. Some function $f$ of the centrality of node $i$, $f(c_i)$, should be proportional to the sum of some function $g$ of the centrality scores of its neighbors. In a 3-uniform hypergraph, this means that for some positive constant $\lambda$,

     $f(c_i) = \frac{1}{\lambda} \sum_{\{i, j, k\} \in E} g(c_j, c_k)$.   (4)

  2. The centrality scores should be positive, i.e., $c_i > 0$.

Different choices of $f$ and $g$ give new notions of centrality, and careful choices of $f$ and $g$ relate to matrix and tensor eigenvectors. To keep notation simpler, we use 3-uniform hypergraphs when introducing new concepts (as in Eq. 4) and then generalize ideas to $m$-uniform hypergraphs.

2.1 Clique motif Eigenvector Centrality (CEC)

Perhaps the most innocuous choices of $f$ and $g$ in Eq. 4 are the identity $f(x) = x$ and the sum $g(y, z) = y + z$. In this case, there is a simple matrix formulation of the eigenvector problem. This is unsurprising since $f$ and $g$ are linear.

Proposition 1

Let $H$ be a strongly connected $m$-uniform hypergraph. When $f(x) = x$ and $g(y, z) = y + z$ in Eq. 4, the centrality scores are given by the eigenvector of the largest real eigenvalue of the matrix $W$, where $W_{ij}$ is the number of hyperedges containing both $i$ and $j$.

For the 3-uniform case, the first desideratum reads

  $c_i = \frac{1}{\lambda} \sum_{\{i, j, k\} \in E} (c_j + c_k) = \frac{1}{\lambda} \sum_j W_{ij} c_j$.

Thus, $\lambda c = W c$, and we assumed above that $c > 0$ and $\lambda > 0$. If $H$ is strongly connected, then the undirected graph induced by $W$ is connected and $W$ is irreducible. Applying Theorem 1 says that $c$ must be the eigenvector corresponding to the largest real eigenvalue.

The matrix $W$ was called the "motif adjacency matrix" by the author in previous work (Benson et al., 2016; Yin et al., 2017). Specifically, it would be the triangle motif adjacency matrix if we interpret 3-uniform hyperedges as triangles in some graph. We give a formal definition for the general case.

Definition 3 (Clique motif Eigenvector Centrality (CEC))

Let $H$ be a strongly connected $m$-uniform hypergraph. Then the clique motif eigenvector centrality scores are given by the positive eigenvector $c$ satisfying $W c = \lambda c$, where $W_{ij}$ is the number of hyperedges containing both nodes $i$ and $j$, and $\lambda$ is the largest real eigenvalue of $W$.

One interpretation of CEC (and eigenvector centrality for undirected graphs in general) is via a best low-rank approximation. Assuming that the graph induced by $W$ is non-bipartite (which it will be if $m \geq 3$, since hyperedges induce cliques in this graph), $\lambda$ is the unique largest-magnitude eigenvalue of the symmetric matrix $W$ (Lovász, 2007), and $c$ is also the principal left and right singular vector of $W$. Thus, by the Eckart-Young-Mirsky theorem (Golub and Van Loan, 2013, Theorem 2.4.8), $\lambda c c^{\mathsf{T}}$ is a best rank-1 approximation of $W$ in Frobenius norm. We can also interpret CEC with averaged path counts. First, observe that, by the spectral decomposition of $W$,

  $\lim_{\ell \to \infty} W^{\ell} e / \lambda^{\ell} = (c^{\mathsf{T}} e)\, c$,   (5)

where $e$ is the vector of all ones. Let $p_{\ell}$ be the vector that counts the (weighted) number of length-$\ell$ paths in the graph induced by $W$ ending at each node, from any starting node. Then

  $p_{\ell} = W^{\ell} e$.   (6)

If we think of the CEC vector as the limit of the power method algorithm, then $c$ can be interpreted as the steady state of a weighted average of infinitely long paths through the hypergraph (see Benzi and Klymko (2015) for a more formal argument).

Computing the CEC vector is often straightforward. If $H$ is strongly connected, then the undirected graph induced by $W$ is connected. If this graph is also non-bipartite (which, again, must be the case for $m$-uniform hypergraphs when $m \geq 3$), then the eigenvalue $\lambda$ in Theorem 1 is the unique eigenvalue of largest magnitude. In this case, we can use the power method to reliably compute $c$.

One subtlety is that the eigenvector is only defined up to scale. Usually, this issue is ignored, under the argument that only relative order matters for ranking problems. However, we should be conscientious when centrality scores are used as features in machine learning. For example, the scale of a centrality vector used as a node feature would affect common tasks such as principal component analysis, where scale changes variance. (These issues can also be alleviated by pre-processing techniques, such as normalizing features to have zero mean and unit variance, although such pre-processing is not always employed.) In this paper, to keep the presentation simple, we assume that centrality vectors are scaled to have unit 2-norm.
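As an illustration of the CEC computation just described (a hypothetical helper, not the paper's released code), one can form $W$ by counting pairwise co-memberships in hyperedges and take its principal eigenvector:

```python
import itertools
import numpy as np

def cec_scores(n, hyperedges):
    """Clique motif eigenvector centrality (Definition 3): the principal
    eigenvector of W, where W[i, j] counts hyperedges containing both i and j."""
    W = np.zeros((n, n))
    for e in hyperedges:
        for i, j in itertools.combinations(e, 2):
            W[i, j] += 1.0
            W[j, i] += 1.0
    vals, vecs = np.linalg.eigh(W)       # W is symmetric
    c = vecs[:, np.argmax(vals)]         # eigenvector of the largest eigenvalue
    c = np.abs(c)                        # Perron-Frobenius: entries share one sign
    return c / np.linalg.norm(c)         # unit 2-norm, as assumed in the text

# 3-uniform sunflower with core 0 and k = 3 petals.
edges = [(0, 1, 2), (0, 3, 4), (0, 5, 6)]
c = cec_scores(7, edges)
```

On this sunflower the center-to-petal ratio c[0]/c[1] comes out to exactly 2, which one can confirm by solving $\lambda c_0 = 6 c_1$ and $\lambda c_1 = c_0 + c_1$ by hand ($\lambda = 3$).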


2.2 Z-eigenvector centrality (ZEC)

To actually incorporate nonlinearity, we can keep the innocuous choice $f(x) = x$ but change $g$ to the product form $g(y, z) = y z$ in Eq. 4. Now, the contribution of the centralities of two nodes in a 3-node hyperedge is multiplicative for the third. This leads to the following system of nonlinear equations for a 3-uniform hypergraph:

  $\lambda c_i = \sum_{j, k} T_{ijk} c_j c_k = 2 \sum_{\{i, j, k\} \in E} c_j c_k$, i.e., $T c^2 = \lambda c$.   (7)

Here, $T c^2$ is shorthand for the vector with entries $(T c^2)_i = \sum_{j, k} T_{ijk} c_j c_k$ (similarly, for an order-$m$ tensor, $(T c^{m-1})_i = \sum_{i_2, \ldots, i_m} T_{i i_2 \cdots i_m} c_{i_2} \cdots c_{i_m}$). The extra factor of 2 comes from the symmetry in the adjacency tensor (for an order-$m$ tensor, this extra factor is $(m-1)!$).

A real-valued solution $(\lambda, c)$ to Eq. 7 with $\|c\|_2 = 1$ is called a tensor Z-eigenpair (Qi, 2005) or a tensor $l^2$-eigenpair (Lim, 2005) (we will use the "Z" terminology). At first glance, it is unclear whether such an eigenpair even exists, let alone a positive one. Assuming the hypergraph is strongly connected, Chang et al. (2008) proved a Perron-Frobenius-like theorem that gives us the existence of a positive solution $c > 0$.

Theorem 2 (Perron-Frobenius for Z-eigenvectors; Corollary 5.10 of Chang et al. (2008). An erratum was published for this result, but the error does not affect our statement or analysis; see Chang et al. (2013, Theorem 2.6) from the same authors for the corrected result.)

Let $T$ be an order-$m$ irreducible nonnegative tensor. Then there exists a Z-eigenpair $(\lambda, c)$ satisfying $\|c\|_2 = 1$ such that $\lambda > 0$ and $c > 0$.

Unlike the case with matrices, there can be multiple positive Z-eigenvectors, even for the same eigenvalue (Chang et al., 2013, Example 2.7). With this tensor Perron-Frobenius theorem in hand, we can define Z-eigenvector centrality for hypergraphs. To manage the uniqueness issue, we consider any positive solution to be a centrality vector.

Definition 4 (Z-eigenvector centrality (ZEC))

Let $H$ be a strongly connected $m$-uniform hypergraph with adjacency tensor $T$. Then a Z-eigenvector centrality vector for $H$ is any positive vector $c$ satisfying $\|c\|_2 = 1$ and $T c^{m-1} = \lambda c$ for some eigenvalue $\lambda > 0$.

Analogous to the CEC (or standard graph) case, there is a ZEC vector that yields a best rank-1 approximation. To prove this, we first need the following lemma.

Lemma 1

Let $T$ be an irreducible symmetric nonnegative tensor and suppose that $c$ is a nonnegative Z-eigenvector of $T$ with positive eigenvalue $\lambda > 0$. Then $c$ is positive.

The proof technique follows Qi et al. (2016, Lemma 21). Since $\lambda > 0$ and $c \neq 0$, there must be some coordinate $i$ with $c_i > 0$; moreover, since $\lambda c_i = \sum_{j, k} T_{ijk} c_j c_k > 0$, some term with $T_{ijk} c_j c_k > 0$ exists. By Eq. 7 and nonnegativity of $T$ and $c$, for any coordinate $l$,

  $\lambda c_l = \sum_{j, k} T_{ljk} c_j c_k \geq T_{ljk} c_j c_k$ for any fixed pair $(j, k)$.

Therefore, $c_l > 0$ for any index $l$ that appears in a hyperedge together with indices at which $c$ is positive. Iterating this argument shows that $c_l > 0$ for any index $l$ reachable from $i$ in the graph induced by the representative matrix $B$. Since $T$ is irreducible, this reaches all indices, so $c > 0$.

The following theorem says that a ZEC vector is proportional to a best rank-1 approximation of the hypergraph adjacency tensor. However, neither ZEC vectors nor best rank-1 approximations need be unique (Friedland and Ottaviani, 2014).

Theorem 3

Let $H$ be an $m$-uniform strongly connected hypergraph with symmetric adjacency tensor $T$. Then there is a ZEC vector $c$ such that $\lambda\, c^{\otimes m} \in \arg\min_{\mathrm{rank}(S) = 1} \|T - S\|_F$, where $\lambda$ is the associated eigenvalue and $c^{\otimes m}$ is the order-$m$ symmetric tensor defined by $(c^{\otimes m})_{i_1 \cdots i_m} = c_{i_1} \cdots c_{i_m}$.

The proof combines several prior results on tensors with Lemma 1. First, any best symmetric rank-1 approximation to a symmetric tensor is given by a tensor Z-eigenvector with largest magnitude eigenvalue (Kofidis and Regalia, 2002, Theorem 3). Second, the best rank-1 approximation to a symmetric tensor can be chosen symmetric (Chen et al., 2012, Theorem 4.1). Thus, a best rank-1 approximation can be built from the Z-eigenvector with largest magnitude eigenvalue. Third, the coordinate values of any best rank-1 approximation to a nonnegative tensor can be chosen so that its entries are nonnegative (Qi et al., 2016, Theorem 16), so there is a nonnegative Z-eigenvector with largest magnitude eigenvalue. Fourth, the largest Z-eigenvalue in magnitude is positive (Chang et al., 2013, Theorem 3.11 and Corollary 3.12). Finally, Lemma 1 says that the corresponding eigenvector must be positive.

Computing tensor Z-eigenvectors is much more challenging than computing matrix eigenvectors; computing a best symmetric rank-1 approximation to a tensor is NP-hard (Hillar and Lim, 2013, Theorem 10.2). Adjacency tensors of hypergraphs are symmetric tensors, so we might first try the symmetric higher-order power method, an analog of the power method for matrices; however, such methods are not guaranteed to converge (Lathauwer et al., 2000; Regalia and Kofidis, 2000; Kofidis and Regalia, 2002). A shifted symmetric higher-order power method with an appropriate shift guarantees convergence to some Z-eigenpair (Kolda and Mayo, 2011, 2014) but can only converge to the class of so-called "stable" eigenpairs. (Let $(\lambda, c)$ be an eigenpair of an order-$m$ symmetric tensor $T$ with $\|c\|_2 = 1$, and let $U$ be an orthonormal basis of the subspace orthogonal to $c$. Then the eigenpair is unstable if the matrix $C = U^{\mathsf{T}}\big((m - 1)\, T c^{m-2} - \lambda I\big) U$ is indefinite, where $T c^{m-2}$ is the matrix with entry $(T c^{m-2})_{jk} = \sum_{i_3, \ldots, i_m} T_{j k i_3 \cdots i_m} c_{i_3} \cdots c_{i_m}$. An eigenpair is stable if it is not unstable.) It turns out that ZEC vectors can be unstable, which hinders our reliance on these algorithms.
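For intuition, the shifted iteration can be sketched as follows (a simplified, fixed-shift sketch after Kolda and Mayo (2011), not their adaptive method; the test hypergraph is an illustrative assumption):

```python
import itertools
import numpy as np

def apply3(T, c):
    """(T c^2)_i = sum_{j,k} T[i,j,k] c_j c_k for an order-3 tensor."""
    return np.einsum('ijk,j,k->i', T, c, c)

def ss_hopm(T, alpha=5.0, tol=1e-12, max_iter=10000, seed=0):
    """Fixed-shift variant of the shifted symmetric higher-order power method
    (after Kolda and Mayo, 2011): iterate c <- normalize(T c^2 + alpha c).
    For a large enough shift this converges to *some* Z-eigenpair,
    and only to a stable one."""
    rng = np.random.default_rng(seed)
    c = rng.random(T.shape[0])
    c /= np.linalg.norm(c)
    for _ in range(max_iter):
        c_next = apply3(T, c) + alpha * c
        c_next /= np.linalg.norm(c_next)
        done = np.linalg.norm(c_next - c) < tol
        c = c_next
        if done:
            break
    lam = c @ apply3(T, c)       # for unit-norm c, lambda = c^T (T c^2)
    return lam, c

# Complete 3-uniform hypergraph on 4 nodes: all C(4,3) = 4 triples.
T = np.zeros((4, 4, 4))
for e in itertools.combinations(range(4), 3):
    for i, j, k in itertools.permutations(e):
        T[i, j, k] = 1.0
lam, c = ss_hopm(T)              # by symmetry, c is uniform with lam = 3
```

On this symmetric example the iteration settles on the uniform positive eigenvector; for tensors whose relevant eigenpair is unstable, no choice of shift recovers it, which is exactly the limitation discussed above.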


One can construct a strongly connected 3-uniform hypergraph with a ZEC vector that is an unstable eigenvector in the sense of Kolda and Mayo (2011): exhibiting a Z-eigenpair of its adjacency tensor and carrying out some simple calculations following Kolda and Mayo (2011, Definition 3.4) shows that the eigenvector is unstable.

There are algorithms based on semidefinite programming hierarchies that are guaranteed to compute the eigenvectors (Cui et al., 2014; Nie and Wang, 2014; Nie and Zhang, 2017), but these methods do not scale to the data problems we explore in Section 3. Recent work by the author develops a method to compute Z-eigenpairs using dynamical systems, which can scale to large tensors and also compute unstable eigenvectors (Benson and Gleich, 2018), albeit without theoretical guarantees on convergence. In fact, we used this method to discover the unstable example above. We use this algorithm for our computational experiments.

2.3 H-eigenvector centrality (HEC)

A reasonable qualm with ZEC is that the dimensional analysis is nonsensical: if centrality is measured in some "unit," then Eq. 7 says that a unit of centrality is equal to a sum of products of pairs of that unit. With this in mind, we might choose $f(x) = x^2$ (more generally, $f(x) = x^{m-1}$) along with $g(y, z) = y z$ in Eq. 4 to satisfy dimensional analysis:

  $\lambda c_i^2 = \sum_{j, k} T_{ijk} c_j c_k$, i.e., $T c^2 = \lambda c^{[2]}$.   (9)

Here, $c^{[k]}$ is shorthand notation for the entry-wise $k$th power of a vector (written as c.^k in Julia or MATLAB). Again, the extra factor of 2 comes from the symmetry in the adjacency tensor.

A real-valued solution $(\lambda, c)$ to Eq. 9 with $c \neq 0$ is called a tensor H-eigenpair (Qi, 2005) or a tensor $l^m$-eigenpair (Lim, 2005) (we will use the "H" terminology). Again, we can employ tensor Perron-Frobenius theory for the existence of a positive solution with positive eigenvalue.

Theorem 4 (Perron-Frobenius for H-eigenvectors; Theorem 1.4 of Chang et al. (2008))

Let $T$ be an order-$m$ irreducible nonnegative tensor. Then there exists an H-eigenpair $(\lambda, c)$ with $c > 0$ and $\lambda > 0$. Moreover, any nonnegative H-eigenvector of $T$ also has eigenvalue $\lambda$, such vectors are unique up to scaling, and $\lambda$ is the largest H-eigenvalue in magnitude.

This result is stronger than the one for Z-eigenvectors (Theorem 2): the positive H-eigenvector is unique up to scaling. With this result, we define our third hypergraph eigenvector centrality.

Definition 5 (H-eigenvector centrality (HEC))

Let $H$ be a strongly connected $m$-uniform hypergraph with adjacency tensor $T$. Then the H-eigenvector centrality vector for $H$ is the positive real vector $c$ satisfying $\|c\|_2 = 1$ and $T c^{m-1} = \lambda c^{[m-1]}$ for some eigenvalue $\lambda > 0$.

Computing the HEC vector is considerably easier than computing a ZEC vector. Simple power-method-like algorithms are guaranteed to converge and work well in practice (Liu et al., 2010; Ng et al., 2010; Zhou et al., 2013; Gautier et al., 2017, 2018).
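Such an iteration can be sketched as follows (in the spirit of Ng et al. (2010) and Liu et al. (2010), simplified for 3-uniform hypergraphs; the +c^[2] shift term and the example hypergraph are assumptions of this illustration):

```python
import itertools
import numpy as np

def apply3(T, c):
    """(T c^2)_i = sum_{j,k} T[i,j,k] c_j c_k for an order-3 tensor."""
    return np.einsum('ijk,j,k->i', T, c, c)

def hec_scores(T, tol=1e-13, max_iter=10000):
    """H-eigenvector centrality by a shifted power-method-like iteration:
        c <- normalize( (T c^2 + c^[2])^[1/2] ).
    The map preserves positivity, and the +c^[2] term shifts the eigenvalue
    by one without changing the positive H-eigenvector."""
    c = np.ones(T.shape[0])
    c /= np.linalg.norm(c)
    for _ in range(max_iter):
        c_next = np.sqrt(apply3(T, c) + c ** 2)   # entry-wise square root
        c_next /= np.linalg.norm(c_next)
        done = np.linalg.norm(c_next - c) < tol
        c = c_next
        if done:
            break
    lam = apply3(T, c)[0] / c[0] ** 2             # T c^2 = lam * c^[2] entry-wise
    return lam, c

# 3-uniform sunflower with core 0 and k = 3 petals.
T = np.zeros((7, 7, 7))
for e in [(0, 1, 2), (0, 3, 4), (0, 5, 6)]:
    for i, j, k in itertools.permutations(e):
        T[i, j, k] = 1.0
lam, c = hec_scores(T)
```

For a k-petal 3-uniform sunflower, the computed center-to-petal ratio c[0]/c[1] is $k^{1/3}$, a value that can be verified by substituting into Eq. 9 directly.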

2.4 Analysis of an illustrative example: the sunflower with singleton core

Figure 1: Hypergraph eigenvector centralities on sunflowers with singleton cores. (Left) A sunflower hypergraph with singleton core; each petal is a hyperedge, marked here with a yellow-shaded ellipse around its nodes. (Right) Finite and asymptotic ratios of the centrality of the center node to any other node (which all have the same centrality) in a general $m$-uniform, $k$-petal sunflower hypergraph with singleton core. The ZEC scores have no dependence on $m$, and the HEC scores tend to uniform as $m$ grows. The ZEC ratio holds when $m \geq 4$ and can hold when $m = 3$ (see Proposition 2).

A sunflower hypergraph has a hyperedge set with a common pairwise intersection: for any distinct hyperedges (called petals) $e_i$ and $e_j$, $e_i \cap e_j = C$. The common intersection $C$ is called the core. The sunflower is similar to the star graph, which has been used to evaluate centralities in social networks (Ruhnau, 2000). Here, we use sunflowers as an illustrative example for the behavior of our three hypergraph eigenvector centralities. We specifically consider sunflowers with $k$ petals where the core is a singleton node $u$, i.e., $e_i \cap e_j = \{u\}$ for any distinct petals $e_i$ and $e_j$ (Fig. 1, left). Below, we derive analytic solutions for the centrality of each method (see also Fig. 1, right). In all cases, the hypergraph centralities "do the right thing," namely the center node $u$ has the largest centrality. However, the behaviors of the three centralities differ.


Clique motif eigenvector centrality

Let $v$ be some node other than $u$. We assume that the centrality $c_v = c$ is equal to some constant for all nodes $v \neq u$ and show that we get a positive eigenvector; the Perron-Frobenius theorem then gives us uniqueness. Recall that $k$ is the number of petals in the hypergraph. Under these assumptions, the CEC equations $W c = \lambda c$ reduce to

  $\lambda c_u = k (m - 1)\, c$ and $\lambda c = c_u + (m - 2)\, c$.

Since $k \geq 1$ and $m \geq 2$, $\lambda$ and $c_u$ are both positive for any positive value $c$. Some algebra shows that for finite $m$ and $k$:

  $\lambda = \frac{(m - 2) + \sqrt{(m - 2)^2 + 4 k (m - 1)}}{2}$ and $\frac{c_u}{c} = \lambda - (m - 2) = \frac{\sqrt{(m - 2)^2 + 4 k (m - 1)} - (m - 2)}{2}$.

Lastly, we only need to choose $c > 0$ and normalize so that the centrality vector has unit 2-norm.
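The center-to-petal CEC ratio for the sunflower can be sanity-checked numerically against the motif matrix of an actual sunflower (a sketch; the closed-form expression in the code follows from the two CEC equations for the sunflower, and the (m, k) values in the test are arbitrary):

```python
import itertools
import numpy as np

def sunflower_ratio(m, k):
    """Center-to-petal CEC ratio of an m-uniform, k-petal sunflower with
    singleton core {0}, computed directly from the motif matrix W."""
    n = 1 + k * (m - 1)
    W = np.zeros((n, n))
    for p in range(k):
        petal = [0] + list(range(1 + p * (m - 1), 1 + (p + 1) * (m - 1)))
        for i, j in itertools.combinations(petal, 2):
            W[i, j] += 1.0
            W[j, i] += 1.0
    vals, vecs = np.linalg.eigh(W)
    c = np.abs(vecs[:, np.argmax(vals)])   # Perron eigenvector
    return c[0] / c[1]

def closed_form(m, k):
    """Ratio from eliminating lambda in the two sunflower CEC equations."""
    return (np.sqrt((m - 2) ** 2 + 4 * k * (m - 1)) - (m - 2)) / 2
```

For example, with m = 3 and k = 3 both routes give a ratio of exactly 2, and as m grows with k fixed the ratio tends to k.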


Z-eigenvector centrality

The Perron-Frobenius theorem for tensor Z-eigenvectors does not preclude the existence of multiple positive eigenvectors with positive eigenvalues. We indeed see this non-uniqueness for the sunflower, but only in the 3-uniform case. We first show the following lemma, which states that the centrality scores of the non-core nodes in any petal must be the same.

Lemma 2

In any sunflower whose common intersection is a singleton $\{u\}$, the ZEC scores of all nodes in the same petal, except for $u$, are the same.

Let $c_u$ be the centrality score of node $u$, and let $v$ and $w$ be any two other nodes in an arbitrary petal $e$, with centrality scores $c_v$ and $c_w$. Writing $P = \prod_{t \in e \setminus \{u, v, w\}} c_t$, the ZEC equations satisfy

  $\lambda c_v = (m - 1)!\, c_u c_w P$ and $\lambda c_w = (m - 1)!\, c_u c_v P$.

Dividing the two equations gives $c_v^2 = c_w^2$, and positivity of the scores gives $c_v = c_w$.


We next characterize exactly when the sunflower has a unique ZEC vector.

Proposition 2

Let $H$ be an $m$-uniform sunflower with singleton core $\{u\}$ and petals $e_1, \ldots, e_k$.

  1. If $m \geq 4$, the unique Z-eigenvector centrality scores for $H$ are given by $c_u = \sqrt{k}\, c_v$, where $v$ is any node other than $u$ and $c_v$ is a constant over nodes $v \neq u$.

  2. If $m = 3$, there are infinitely many Z-eigenvector centrality scores for $H$: any vector with $c_v = a_p$ for each non-core node $v$ in petal $e_p$, where the $a_p > 0$ are arbitrary, and $c_u = (\sum_p a_p^2)^{1/2}$ gives (after normalization to unit 2-norm) Z-eigenvector centrality scores.


By Lemma 2, each node other than $u$ has centrality $a_p$, where $e_p$ is the petal to which the node belongs. Re-writing the ZEC eigenvalue equations (Eq. 7) in terms of the $a_p$ gives, at the core,

  $\lambda c_u = (m - 1)! \sum_{p = 1}^{k} a_p^{m - 1}$,   (10)

and at a non-core node in petal $e_p$,

  $\lambda a_p = (m - 1)!\, c_u a_p^{m - 2}$, i.e., $\lambda = (m - 1)!\, c_u a_p^{m - 3}$.   (11)

This implies that $a_p^{m - 3} = a_q^{m - 3}$ for any petals $e_p$ and $e_q$. Assume $m \geq 4$. Then $a_p = a_q$ since $x \mapsto x^{m - 3}$ is injective on the positive reals. Let $a$ be the centrality of any node $v \neq u$. The ZEC equations satisfy

  $\lambda c_u = k (m - 1)!\, a^{m - 1}$ and $\lambda a = (m - 1)!\, c_u a^{m - 2}$.   (12)

Combining these equations gives $c_u^2 = k a^2$, or $c_u = \sqrt{k}\, a$. Plugging this expression for $c_u$ into Eq. 12 gives

  $\lambda = (m - 1)!\, \sqrt{k}\, a^{m - 2}$.

Now assume $m = 3$. Then $\lambda = 2 c_u$ by Eq. 11. Let $a_p > 0$ be an arbitrary constant for each petal $e_p$ and define $c_u = (\sum_p a_p^2)^{1/2}$. We now just check that the Z-eigenvector equations hold. At the core,

  $\lambda c_u = 2 c_u^2 = 2 \sum_p a_p^2$,

which holds by the definition of $c_u$. Our choice of each $a_p$ was any positive real number, and for any node $v$ in petal $e_p$, the ZEC equation is $\lambda c_v = 2 c_u a_p$, which holds since $c_v = a_p$ and $\lambda = 2 c_u$.

Surprisingly, when $m = 3$, the non-center nodes can have different ZEC scores, even though the symmetry of the problem would suggest that they be the same. Also surprisingly, all scores are independent of $m$, the number of nodes in a hyperedge. However, ZEC is consistent in the sense that the center node always has the largest centrality score.
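Both branches of Proposition 2 can be checked by direct substitution into Eq. 7 for an m = 3 sunflower (a sketch; the petal weights a_p below are arbitrary positive choices, and the vectors are left unnormalized since rescaling a Z-eigenvector only rescales its eigenvalue):

```python
import itertools
import numpy as np

def apply3(T, c):
    """(T c^2)_i = sum_{j,k} T[i,j,k] c_j c_k for an order-3 tensor."""
    return np.einsum('ijk,j,k->i', T, c, c)

k = 4
T = np.zeros((1 + 2 * k,) * 3)          # 3-uniform sunflower, core {0}
for p in range(k):
    for i, j, l in itertools.permutations((0, 1 + 2 * p, 2 + 2 * p)):
        T[i, j, l] = 1.0

# Symmetric solution: center is sqrt(k) times every petal node.
c_sym = np.ones(1 + 2 * k)
c_sym[0] = np.sqrt(k)
lam_sym = 2 * c_sym[0]                  # lambda = 2 c_u when m = 3

# Asymmetric solution: per-petal weights a_p with c_u = sqrt(sum a_p^2).
a = np.array([1.0, 2.0, 3.0, 4.0])
c_asym = np.concatenate(([np.sqrt(np.sum(a ** 2))], np.repeat(a, 2)))
lam_asym = 2 * c_asym[0]
```

Both vectors satisfy $T c^2 = \lambda c$ exactly, illustrating the continuum of Z-eigenpairs in the 3-uniform case.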

H-eigenvector centrality

Theorem 4 gives us uniqueness of a positive vector with positive eigenvalue. We again assume that $c_v = c$ for any node $v \neq u$. The HEC equations satisfy

  $\lambda c_u^{m - 1} = k (m - 1)!\, c^{m - 1}$ and $\lambda c^{m - 1} = (m - 1)!\, c_u c^{m - 2}$.   (15)

Plugging $\lambda = (m - 1)!\, c_u / c$ from the second equation into the first gives $c_u^m = k\, c^m$. Thus, $c_u / c = k^{1/m}$ for finite $m$ and $k$, and $c_u / c \to 1$ if the number of petals $k$ is fixed and the uniformity $m$ grows large.

2.5 Recap: which centrality should we use?

We derived three hypergraph eigenvector centralities. The appeal of CEC is that we only need to rely on the familiar, i.e., we can just use nonnegative matrix theory. However, CEC does not incorporate any interesting nonlinear structure, whereas ZEC and HEC do. HEC is certainly attractive computationally: simple algorithms compute a unique eigenvector centrality vector. We do not have scalable algorithms guaranteed to compute a ZEC vector, and even worse, the ZEC vector may not be unique; the non-uniqueness can show up even on simple hypergraphs, as we saw with the sunflower. Both CEC and HEC have a proper dimensional analysis, while ZEC does not. On the other hand, ZEC carries the same rank-1 approximation interpretation as standard graph eigenvector centrality. Also, in the asymptotics of the sunflower analysis (Fig. 1, right), the HEC score of the center node approaches that of the other nodes, while the relative CEC and ZEC scores of the center node to the others are constants that only depend on the number of hyperedges.

So which centrality should we use? Our analysis suggests that none is superior to all others. As is the case with graph centralities in general, the scores are not useful in a vacuum. Instead, we can use various centralities to study data. For example, multiple centralities provide more features that can be used for machine learning tasks. In the next section, we show that the three hypergraph centralities can provide qualitatively different results on real-world data.

3 Computational experiments and data analysis

                  3-uniform            4-uniform            5-uniform
dataset           # nodes   nnz(T)     # nodes   nnz(T)     # nodes   nnz(T)
N-grams           30,885    888,411    23,713    957,904    24,996    995,952
tags-ask-ubuntu   2,981     279,369    2,856     145,676    2,564     25,475
DAWN              1,677     41,225     1,447     29,829     1,212     15,690
Table 1: Summary statistics of datasets. The number of nodes is the dimension of the adjacency tensor $T$ of the largest component of the hypergraph, and nnz($T$) is the number of non-zeros in $T$, which we divide by the number of symmetries in the symmetric tensor.

We now analyze our proposed eigenvector centralities on three real-world datasets. We construct a 3-uniform, 4-uniform, and 5-uniform hypergraph from each of the three datasets (summary statistics are in Table 1), so there are 9 total hypergraphs for our analysis. For each of the 9 hypergraphs, we computed the CEC, ZEC, and HEC scores on the largest connected component of the hypergraph. We used Julia’s eigs routine to compute the CEC scores, the dynamical systems algorithm by Benson and Gleich (2018) to compute the ZEC scores, and the NQI algorithm (Ng et al., 2010) to compute the HEC scores. The software used to compute the results in this section is available at

As discussed above, the ZEC vector need not be unique. We computed 100 ZEC vectors using random starting points and found that, for some datasets, the algorithm always converges to the same eigenvector, while in others, the algorithm converges to a few different ones. For the purposes of our analysis, we use the ZEC vector to which convergence was most common. However, any of the ZEC vectors is a valid centrality vector. (One could also take the mean of several ZEC vectors, but linear combinations of Z-eigenvectors are not necessarily Z-eigenvectors, unlike the matrix case.)


N-grams. These hypergraphs are constructed from the most frequent $n$-grams in the Corpus of Contemporary American English (COCA) (Davies, 2011). An $n$-gram is a sequence of $n$ words (or parts of words, but we will just say "words") that appear contiguously in text. Here, we use the million most frequent $n$-grams from COCA for $n = 3, 4, 5$ to compose hyperedges. We construct $n$-uniform hypergraphs ($n = 3, 4, 5$) as follows. The nodes in the $n$-uniform hypergraph correspond to all words appearing in at least one of the $n$-grams in the corpus. There is a hyperedge on $n$ nodes if the corresponding words (appearing in any order) make up one of the $n$-grams appearing in the corpus. For each hypergraph, we analyze its largest connected component, which is given by taking the node set of the largest connected component of the graph discussed in Definition 2 and keeping only the hyperedges comprised entirely of nodes in that component.

For each of the three hypergraphs, we examined the top 20 ranked words according to the CEC, ZEC, and HEC scores. Many of the top-ranked words are so-called stop words, such as "the," "and," and "to"; furthermore, nearly all of the top 20 ranked words for CEC and HEC are stop words or conjunctions, regardless of the size of the $n$-gram. This is perhaps not surprising, given that stop words are by definition common in natural language (stop words also form important clusters in tensor-based clustering of $n$-gram data (Wu et al., 2016)). The same is true of the ZEC scores, but only for the 3-grams and 4-grams. In the 5-uniform hypergraph, the word "world" has rank 12 with ZEC, rank 64 with CEC, and rank 84 with HEC; and the word "people" has rank 14 with ZEC, rank 39 with CEC, and rank 44 with HEC.

To better quantify the relationship between the centralities, we computed Spearman's rank correlation coefficient between components of each pair of centrality vectors. Specifically, for each of CEC, ZEC, and HEC, we find the top $d$ ranked nodes and compute the rank correlation of the sub-vectors consisting of these nodes with the corresponding sub-vectors of the other two centrality vectors. For example, if $d = 100$, we compute the top 100 nodes according to the CEC vector, take the length-100 sub-vector corresponding to the same nodes in the ZEC vector, and compute the rank correlation between the two sub-vectors. This is repeated for all six possible pairs of vectors and plotted as a function of $d$.
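The top-d comparison can be sketched with a small helper (hypothetical code; it assumes no ties among scores, so a plain rank transform plus Pearson correlation suffices):

```python
import numpy as np

def ranks(x):
    """Ranks 1..n of the entries of x (no tie handling; centrality
    scores are generically distinct)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

def top_d_spearman(c_ref, c_other, d):
    """Spearman rank correlation between top-d sub-vectors: take the d nodes
    ranked highest by c_ref, restrict both centrality vectors to those nodes,
    then compute the Pearson correlation of the resulting ranks."""
    top = np.argsort(c_ref)[::-1][:d]
    return np.corrcoef(ranks(c_ref[top]), ranks(c_other[top]))[0, 1]

c = np.arange(10, dtype=float)
agree = top_d_spearman(c, c, 5)       # identical rankings -> 1.0
reverse = top_d_spearman(c, -c, 5)    # reversed rankings -> -1.0
```

Note the asymmetry: the top-d node set depends on which vector is the reference, which is why all six ordered pairs are reported.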

As a function of $d$, the rank correlations in this dataset tend to have local minima for $d$ between 20 and a few hundred. Larger values of $d$ catch the tail of the distribution, for which there is less difference in ranking, which leads to the increase in the correlation for large $d$. We also see that the correlations tend to decrease as we increase the uniformity of the hypergraph; in other words, the three centrality measures become more different when considering larger multi-way relationships. Finally, the rank correlations reveal that ZEC ranks the top nodes (beyond the top 20) substantially differently than CEC and HEC.


tags-ask-ubuntu. Ask Ubuntu is a Stack Exchange forum where Ubuntu users and developers ask, answer, and discuss questions. Each question may be annotated with up to five tags to aid in classification. We construct 3-uniform, 4-uniform, and 5-uniform hypergraphs from a previously collected dataset of tag co-appearances in questions (Benson et al., 2018). Specifically, the nodes of the hypergraphs represent tags. We add a hyperedge on $n$ nodes to the $n$-uniform hypergraph if the corresponding tags were all simultaneously used to annotate at least one question on the web site (the question could also have contained other tags; for constructing the hyperedge, we only care that these tags were used for the question). Finally, as before, we use the largest component of the hypergraph.

Table 3 lists the top 10 nodes ranked by CEC, ZEC, and HEC for each of the three hypergraphs. For the 3-uniform hypergraph, these top-ranked nodes are roughly the same across the centrality measures, with major Ubuntu version numbers (“12.04,” “14.04,” and “16.04”) at or near the top of each list. Moving to the 4-uniform and 5-uniform hypergraphs, the version numbers remain highly ranked, but no longer the most highly ranked. ZEC finds tags related to the Windows operating system relatively more important. For example, the tags “windows-8,” “windows,” and “windows-7” are ranked 8, 9, and 10 by ZEC for the 5-uniform hypergraph but 28, 22, and 26 by CEC and 21, 18, and 20 by HEC. Furthermore, ZEC ranks “windows,” “windows-xp,” “windows-vista,” “windows-7,” “windows-8,” and “windows-10” higher than CEC and HEC do for all three hypergraphs. We conclude that ZEC provides complementary information to the other centralities for this dataset.

Figure 3 shows the same rank correlations as described above. We again see that all centrality vectors are relatively correlated for the 3-uniform hypergraph but less so as the order of the hypergraph increases. The sub-vector corresponding to the top 10 CEC-ranked nodes has only 0.05 rank correlation with the corresponding ZEC sub-vector for the 4-uniform hypergraph.


The Drug Abuse Warning Network (DAWN) is a national health surveillance system in hospitals throughout the United States that records the drug use reported by patients visiting emergency rooms. Here, drugs include illicit substances, prescription medication, over-the-counter medication, and dietary supplements. We use a dataset that aggregates 8 years of DAWN reports (Benson et al., 2018) to construct 3-uniform, 4-uniform, and 5-uniform hypergraphs. The nodes in each hypergraph correspond to drugs. We add a hyperedge on a set of nodes if there is at least one patient that reports using exactly that combination of drugs. Again, we use the largest component of the hypergraph.
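In contrast to the Ask Ubuntu construction, a DAWN hyperedge comes only from an exact combination of reported drugs; a sketch, with hypothetical visits:

```python
def uniform_hyperedges_exact(visits, k):
    """k-drug hyperedges taken only from visits that report exactly k
    distinct drugs: a hyperedge appears iff some patient reported
    exactly that combination (no induced subsets)."""
    return {tuple(sorted(drugs)) for drugs in visits if len(set(drugs)) == k}

# Hypothetical emergency-room visits, each a set of reported drugs.
visits = [
    {"alcohol", "cocaine", "marijuana"},
    {"alcohol", "cocaine"},                            # 2 drugs: no 3-edge
    {"alcohol", "heroin", "oxycodone", "alprazolam"},  # 4 drugs: no 3-edge
]
print(uniform_hyperedges_exact(visits, 3))
```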

We again list the top 10 nodes ranked by the three centrality vectors for each of the three hypergraphs (Table 4), along with the same rank correlation statistics (Fig. 4). In this dataset, we see near agreement among the three centrality vectors on the 4-uniform and 5-uniform hypergraphs. For example, the rank correlations remain above 0.75 across all measured top-ranked sub-vectors for the entire 4-uniform hypergraph. Alcohol is consistently ranked near the top, which is unsurprising given its pervasiveness in emergency department visits, especially in combination with other drugs (Crane, 2013).

The ranking from the ZEC vector is substantially different from HEC and CEC for the 3-uniform hypergraph; the leading sub-vectors of ZEC actually have negative rank correlation with the corresponding HEC and CEC sub-vectors. As with the N-grams and tags-ask-ubuntu datasets, we again conclude that ZEC provides complementary information to the other centralities.

Table 2: Top 20 nodes with largest centralities for CEC, ZEC, and HEC for the three hypergraphs constructed from the frequent n-grams. Many stop words appear as the top-ranked nodes, but ZEC picks up on non-stop words such as “world” and “people” on the hypergraph constructed from frequent n-grams.
Figure 2: Spearman’s rank correlation coefficient between the top-ranked nodes from one centrality measure and the same nodes under the other two centrality measures on the n-grams dataset (the measure used to determine the nodes is listed first in the legend). The rank correlation between ZEC and the other centralities for the top few hundred nodes decreases as the uniformity of the hypergraph increases, dipping below 0.4 for the 5-uniform hypergraph.
Table 3: Top 10 nodes with largest centralities for CEC, ZEC, and HEC for three hypergraphs constructed from the co-tagging dataset tags-ask-ubuntu. These highly-ranked nodes are largely the same in the 3-uniform hypergraph. For the 4-uniform and 5-uniform hypergraphs, ZEC picks up on Windows-related tags. Tags related to version numbers are ranked lower as the uniformity of the hypergraph increases.

3-uniform
Rank  CEC           ZEC           HEC
 1    14.04         14.04         14.04
 2    12.04         12.04         12.04
 3    16.04         boot          16.04
 4    server        16.04         boot
 5    command-line  drivers       drivers
 6    boot          nvidia        command-line
 7    networking    dual-boot     server
 8    drivers       server        networking
 9    unity         command-line  unity
10    gnome         upgrade       gnome

4-uniform
Rank  CEC         ZEC                  HEC
 1    14.04       dual-boot            14.04
 2    boot        boot                 boot
 3    drivers     grub2                drivers
 4    12.04       partitioning         12.04
 5    16.04       uefi                 16.04
 6    networking  system-installation  dual-boot
 7    server      14.04                nvidia
 8    dual-boot   windows              grub2
 9    nvidia      installation         networking
10    grub2       12.04                partitioning

5-uniform
Rank  CEC           ZEC                  HEC
 1    boot          dual-boot            boot
 2    dual-boot     boot                 dual-boot
 3    14.04         grub2                grub2
 4    drivers       partitioning         drivers
 5    grub2         uefi                 14.04
 6    networking    system-installation  partitioning
 7    16.04         14.04                nvidia
 8    partitioning  windows-8            16.04
 9    nvidia        windows              12.04
10    12.04         windows-7            networking

Figure 3: Spearman’s rank correlation coefficient between the top-ranked nodes from one centrality measure and the same nodes under the other two centrality measures on the Ask Ubuntu co-tagging dataset (the measure used to determine the nodes is listed first in the legend). For the 3-uniform hypergraph, all centralities are relatively correlated. For the 4-uniform and 5-uniform hypergraphs, ZEC tends to be less correlated with CEC and HEC, which is also seen in the rankings of the top 10 nodes (Table 3).
Table 4: Top 10 nodes with largest centralities for CEC, ZEC, and HEC for three hypergraphs constructed from the reported sets of drugs used by patients in emergency room visits in the DAWN dataset. The highly-ranked nodes in the 4-uniform and 5-uniform hypergraphs are largely the same and are consistent across the centrality measures.

3-uniform
Rank  CEC                ZEC              HEC
 1    alcohol            cephalothin      alcohol
 2    cocaine            naloxone         alprazolam
 3    marijuana          meclizine        acet.-hydrocodone
 4    acet.-hydrocodone  cyclosporine     clonazepam
 5    alprazolam         desipramine      cocaine
 6    clonazepam         donnatal elixir  marijuana
 7    ibuprofen          pyridostigmine   quetiapine
 8    quetiapine         amoxapine        lorazepam
 9    acetaminophen      aspirin          ibuprofen
10    lorazepam          bicalutamide     zolpidem

4-uniform
Rank  CEC                ZEC                HEC
 1    alcohol            alcohol            alcohol
 2    cocaine            cocaine            cocaine
 3    marijuana          marijuana          marijuana
 4    alprazolam         alprazolam         alprazolam
 5    acet.-hydrocodone  acet.-hydrocodone  acet.-hydrocodone
 6    clonazepam         clonazepam         clonazepam
 7    quetiapine         heroin             quetiapine
 8    heroin             oxycodone          oxycodone
 9    oxycodone          methadone          heroin
10    lorazepam          acet.-oxycodone    acet.-oxycodone

5-uniform
Rank  CEC                  ZEC                  HEC
 1    alcohol              cocaine              alcohol
 2    cocaine              alcohol              cocaine
 3    marijuana            marijuana            marijuana
 4    alprazolam           heroin               alprazolam
 5    acet.-hydrocodone    alprazolam           acet.-hydrocodone
 6    clonazepam           benzodiazepines      heroin
 7    heroin               oxycodone            clonazepam
 8    benzodiazepines      acet.-hydrocodone    benzodiazepines
 9    oxycodone            methadone            oxycodone
10    narcotic analgesics  narcotic analgesics  narcotic analgesics

Figure 4: Spearman’s rank correlation coefficient between the top-ranked nodes from one centrality measure and the same nodes under the other two centrality measures on the DAWN dataset (the measure used to determine the nodes is listed first in the legend). ZEC is negatively or nearly uncorrelated with CEC and HEC for the 3-uniform hypergraph, but all centralities are quite positively correlated for the 4-uniform and 5-uniform hypergraphs, as can also be observed from the similar top 10 nodes listed in Table 4.

4 Discussion

Centrality is a pillar of network science, and emerging datasets containing supra-dyadic relationships pose new challenges for understanding centrality in complex systems. Here, we proposed three eigenvector centralities for hypergraph models of such multi-relational data. Two of these incorporate nonlinear structure and rely on fairly recent developments in the spectral theory of tensors for a sensible definition. None of the three centralities is “best,” and we saw empirically that the eigenvectors can provide qualitatively different results. There are several other types of tensor eigenvectors (Qi and Luo, 2017), as well as other Perron-Frobenius theorems for hypergraph data (Michoel and Nachtergaele, 2012), which could be adapted into new centrality measures. However, Z- and H-eigenvectors are arguably the most well-understood and commonly used tensor eigenvectors.
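For concreteness, one way to compute an H-eigenvector centrality vector is an NQZ-style shifted power iteration (Ng et al., 2010; Liu et al., 2010). The sketch below assumes a symmetric adjacency tensor built from m-tuples of distinct nodes; the shift value and iteration count are illustrative choices, not a prescription from the paper.

```python
import math
import numpy as np

def hec_power_iteration(edges, n, m=3, iters=500, shift=1.0):
    """Shifted power iteration for the H-eigenvector of the adjacency
    tensor of an m-uniform hypergraph. For a hyperedge {i, j, k}, the
    apply (A x^{m-1})_i sums x_j * x_k over all orderings of the other
    nodes, hence the (m-1)! factor. The positive shift ensures
    convergence for irreducible nonnegative tensors."""
    x = np.ones(n)
    coeff = math.factorial(m - 1)  # orderings of the other m-1 nodes
    for _ in range(iters):
        y = shift * x ** (m - 1)   # diagonal shift term
        for e in edges:
            for i in e:
                rest = [v for v in e if v != i]
                y[i] += coeff * np.prod(x[rest])
        x = y ** (1.0 / (m - 1))   # H-eigenvector update: x = y^{[1/(m-1)]}
        x /= x.sum()               # normalize to a probability-like vector
    return x

# Toy 3-uniform hypergraph: node 0 appears in every hyperedge,
# so it should receive the largest centrality.
edges = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]
print(hec_power_iteration(edges, n=4))
```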

There are other centrality measures and ranking methods for higher-order relational data. For example, multilinear PageRank generalizes PageRank to tensors (Gleich et al., 2015; Benson et al., 2017). Tudisco et al. (2018) developed eigenvector centrality for multiplex networks using new Perron-Frobenius theory of multi-linear maps (Gautier et al., 2017); this is most similar to HEC. There are also several ranking methods for multi-relational data represented as tensors (Kolda et al., 2005; Kolda and Bader, 2006; Franz et al., 2009; Ng et al., 2011), as well as notions of centrality based on simplicial complexes (Estrada and Ross, 2018). Finally, there are other centralities for hypergraphs (Kapoor et al., 2013; Busseniers, 2014; Bonacich et al., 2004; Rodríguez et al., 2007; Estrada and Rodríguez-Velázquez, 2006), but these do not relate to the multilinear structure of tensors that we studied.

We used a set-based definition of hypergraphs that made the adjacency tensor symmetric. In network science, directed graphs with non-symmetric adjacency matrices are a common model, and eigenvector centrality is still well-defined if the graph is strongly connected. There are similar notions of directionality in hypergraphs. For example, the N-grams dataset could have been interpreted as “directed” since the ordering of the words matters for its frequency. Trajectory or path-based data appearing in transportation systems (Xu et al., 2016), citation patterns (Rosvall et al., 2014), and human contact sequences (Scholtes, 2017) can be encoded as directed hypergraphs in similar ways. Theorems 4 and 2 hold for arbitrary irreducible nonnegative tensors, which includes adjacency tensors of strongly connected hypergraphs. Therefore, the hypergraph centralities we developed remain well-defined in these more general cases. However, computation becomes a bigger challenge.

There are many choices in deciding how to construct hypergraphs from data. As one example, we made our adjacency tensors binary (i.e., an unweighted hypergraph). This was not mathematically necessary, and all of the proposed methods work seamlessly if the hypergraph is weighted. The Ask Ubuntu and DAWN datasets also demonstrated two different ways of constructing hyperedges: in the former we included hyperedges induced by larger sets, and in the latter we did not. This choice was made to illustrate the point that there are several ways one could construct hypergraphs from data. Our methods also relied on theory for symmetric tensors, so we studied uniform hypergraphs. One could incorporate non-uniformity in several ways. A simple approach could combine the scores from hypergraphs of different uniformity. We could also “embed” smaller hyperedges into a larger adjacency tensor. For example, a mixture of 3-node and 4-node hyperedges could be incorporated into an order-4 adjacency tensor, where a 3-node hyperedge on nodes i, j, and k adds non-zeros at the indices that contain only i, j, and k (e.g., setting the entry at index (i, j, k, k) would create one such non-zero). In general, hypergraphs can be a convenient abstraction, and understanding the right way of constructing a hypergraph from data is a general research challenge.
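A sketch of this embedding for a single 3-node hyperedge follows. Duplicating each node once in turn is one possible convention; the choice of which indices to fill (and with what weights) is a modeling assumption rather than something the text prescribes.

```python
import numpy as np
from itertools import permutations

def embed_3edge(T, i, j, k):
    """Embed the 3-node hyperedge {i, j, k} into the order-4 symmetric
    adjacency tensor T by placing non-zeros at index tuples containing
    only i, j, and k. Each node is duplicated once in turn, and all
    distinct permutations are set to keep T symmetric."""
    for dup in (i, j, k):
        for idx in set(permutations((i, j, k, dup))):
            T[idx] = 1.0

n = 5
T = np.zeros((n, n, n, n))
embed_3edge(T, 0, 1, 2)
# Each duplication pattern {i,i,j,k} has 4!/2! = 12 distinct
# permutations, and the three patterns are disjoint: 36 non-zeros.
print(int(np.count_nonzero(T)))
```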


I thank Yang Qi and David Gleich for many helpful discussions. I thank the reviewers for carefully reading this manuscript. This research was supported by NSF Award DMS-1830274 and ARO Award W911NF-19-1-0057.


  • Agarwal et al. (2006) S. Agarwal, K. Branson, and S. Belongie. Higher order learning with graphs. In Proceedings of the 23rd international conference on Machine learning. 2006. doi:10.1145/1143844.1143847.
  • Balasubramaniam et al. (2016) K. Balasubramaniam, B. Beisner, J. Vandeleest, E. Atwill, and B. McCowan. Social buffering and contact transmission: network connections have beneficial and detrimental effects on Shigella infection risk among captive rhesus macaques. PeerJ, 4, p. e2630, 2016. doi:10.7717/peerj.2630.
  • Bavelas (1950) A. Bavelas. Communication patterns in task-oriented groups. The Journal of the Acoustical Society of America, 22 (6), pp. 725–730, 1950. doi:10.1121/1.1906679.
  • Benson et al. (2018) A. R. Benson, R. Abebe, M. T. Schaub, A. Jadbabaie, and J. Kleinberg. Simplicial closure and higher-order link prediction. arXiv:1802.06916, 2018.
  • Benson and Gleich (2018) A. R. Benson and D. F. Gleich. Computing tensor Z-eigenvectors with dynamical systems. arXiv:1805.00903, 2018.
  • Benson et al. (2016) A. R. Benson, D. F. Gleich, and J. Leskovec. Higher-order organization of complex networks. Science, 353 (6295), pp. 163–166, 2016. doi:10.1126/science.aad9029.
  • Benson et al. (2017) A. R. Benson, D. F. Gleich, and L.-H. Lim. The spacey random walk: A stochastic process for higher-order data. SIAM Review, 59 (2), pp. 321–345, 2017. doi:10.1137/16m1074023.
  • Benzi and Klymko (2015) M. Benzi and C. Klymko. On the limiting behavior of parameter-dependent network centrality measures. SIAM Journal on Matrix Analysis and Applications, 36 (2), pp. 686–706, 2015. doi:10.1137/130950550.
  • Berge (1984) C. Berge. Hypergraphs: combinatorics of finite sets, Elsevier, 1984.
  • Berman and Plemmons (1994) A. Berman and R. J. Plemmons. Nonnegative matrices in the mathematical sciences, SIAM, 1994.
  • Boldi and Vigna (2014) P. Boldi and S. Vigna. Axioms for centrality. Internet Mathematics, 10 (3-4), pp. 222–262, 2014. doi:10.1080/15427951.2013.865686.
  • Bonacich (1972) P. Bonacich. Technique for analyzing overlapping memberships. Sociological Methodology, 4, p. 176, 1972. doi:10.2307/270732.
  • Bonacich (1987) ———. Power and centrality: A family of measures. American Journal of Sociology, 92 (5), pp. 1170–1182, 1987. doi:10.1086/228631.
  • Bonacich et al. (2004) P. Bonacich, A. C. Holdren, and M. Johnston. Hyper-edges and multidimensional centrality. Social Networks, 26 (3), pp. 189–203, 2004. doi:10.1016/j.socnet.2004.01.001.
  • Borgatti and Everett (2006) S. P. Borgatti and M. G. Everett. A graph-theoretic perspective on centrality. Social Networks, 28 (4), pp. 466–484, 2006. doi:10.1016/j.socnet.2005.11.005.
  • Brin and Page (1998) S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30 (1-7), pp. 107–117, 1998. doi:10.1016/s0169-7552(98)00110-x.
  • Bryan and Leise (2006) K. Bryan and T. Leise. The $25,000,000,000 eigenvector: The linear algebra behind Google. SIAM Review, 48 (3), pp. 569–581, 2006. doi:10.1137/050623280.
  • Bullmore and Sporns (2009) E. Bullmore and O. Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10 (3), pp. 186–198, 2009. doi:10.1038/nrn2575.
  • Busseniers (2014) E. Busseniers. General centrality in a hypergraph. arXiv:1403.5162, 2014.
  • Chang et al. (2013) K. Chang, K. Pearson, and T. Zhang. Some variational principles for Z-eigenvalues of nonnegative tensors. Linear Algebra and its Applications, 438 (11), pp. 4166–4182, 2013. doi:10.1016/j.laa.2013.02.013.
  • Chang et al. (2008) K. C. Chang, K. Pearson, and T. Zhang. Perron-Frobenius theorem for nonnegative tensors. Communications in Mathematical Sciences, 6 (2), pp. 507–520, 2008. doi:10.4310/cms.2008.v6.n2.a12.
  • Chen et al. (2012) B. Chen, S. He, Z. Li, and S. Zhang. Maximum block improvement and polynomial optimization. SIAM Journal on Optimization, 22 (1), pp. 87–107, 2012. doi:10.1137/110834524.
  • Crane (2013) E. H. Crane. Highlights of the 2011 Drug Abuse Warning Network (DAWN) findings on drug-related emergency department visits. 2013.
  • Cui et al. (2014) C.-F. Cui, Y.-H. Dai, and J. Nie. All real eigenvalues of symmetric tensors. SIAM Journal on Matrix Analysis and Applications, 35 (4), pp. 1582–1601, 2014. doi:10.1137/140962292.
  • Davies (2011) M. Davies. N-grams data from the Corpus of Contemporary American English (COCA). 2011.
  • de Sola Pool and Kochen (1978) I. de Sola Pool and M. Kochen. Contacts and influence. Social Networks, 1 (1), pp. 5–51, 1978. doi:10.1016/0378-8733(78)90011-4.
  • Estrada and Higham (2010) E. Estrada and D. J. Higham. Network properties revealed through matrix functions. SIAM Review, 52 (4), pp. 696–714, 2010. doi:10.1137/090761070.
  • Estrada and Rodríguez-Velázquez (2006) E. Estrada and J. A. Rodríguez-Velázquez. Subgraph centrality and clustering in complex hyper-networks. Physica A: Statistical Mechanics and its Applications, 364, pp. 581–594, 2006. doi:10.1016/j.physa.2005.12.002.
  • Estrada and Ross (2018) E. Estrada and G. J. Ross. Centralities in simplicial complexes. applications to protein interaction networks. Journal of Theoretical Biology, 438, pp. 46–60, 2018. doi:10.1016/j.jtbi.2017.11.003.
  • Franz et al. (2009) T. Franz, A. Schultz, S. Sizov, and S. Staab. TripleRank: Ranking semantic web data by tensor decomposition. In Lecture Notes in Computer Science, pp. 213–228. Springer Berlin Heidelberg, 2009. doi:10.1007/978-3-642-04930-9_14.
  • Freeman (1977) L. C. Freeman. A set of measures of centrality based on betweenness. Sociometry, 40 (1), p. 35, 1977. doi:10.2307/3033543.
  • Friedland and Ottaviani (2014) S. Friedland and G. Ottaviani. The number of singular vector tuples and uniqueness of best rank-one approximation of tensors. Foundations of Computational Mathematics, 14 (6), pp. 1209–1242, 2014. doi:10.1007/s10208-014-9194-z.
  • Gallo et al. (1993) G. Gallo, G. Longo, S. Pallottino, and S. Nguyen. Directed hypergraphs and applications. Discrete Applied Mathematics, 42 (2-3), pp. 177–201, 1993. doi:10.1016/0166-218x(93)90045-p.
  • Gautier et al. (2017) A. Gautier, F. Tudisco, and M. Hein. The Perron-Frobenius theorem for multi-homogeneous maps. arXiv:1702.03230, 2017.
  • Gautier et al. (2018) ———. A unifying Perron-Frobenius theorem for nonnegative tensors via multi-homogeneous maps. arXiv:1801.04215, 2018.
  • Gleich (2015) D. F. Gleich. PageRank beyond the web. SIAM Review, 57 (3), pp. 321–363, 2015. doi:10.1137/140976649.
  • Gleich et al. (2015) D. F. Gleich, L.-H. Lim, and Y. Yu. Multilinear PageRank. SIAM Journal on Matrix Analysis and Applications, 36 (4), pp. 1507–1541, 2015. doi:10.1137/140985160.
  • Golub and Van Loan (2013) G. H. Golub and C. F. Van Loan. Matrix computations, JHU press, 4 edition, 2013.
  • Henderson et al. (2012) K. Henderson, B. Gallagher, T. Eliassi-Rad, H. Tong, S. Basu, L. Akoglu, D. Koutra, C. Faloutsos, and L. Li. RolX: structural role extraction & mining in large graphs. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. 2012. doi:10.1145/2339530.2339723.
  • Hillar and Lim (2013) C. J. Hillar and L.-H. Lim. Most tensor problems are NP-hard. Journal of the ACM, 60 (6), pp. 1–39, 2013. doi:10.1145/2512329.
  • Jeong et al. (2001) H. Jeong, S. P. Mason, A.-L. Barabási, and Z. N. Oltvai. Lethality and centrality in protein networks. Nature, 411 (6833), pp. 41–42, 2001. doi:10.1038/35075138.
  • Kapoor et al. (2013) K. Kapoor, D. Sharma, and J. Srivastava. Weighted node degree centrality for hypergraphs. In 2013 IEEE 2nd Network Science Workshop (NSW). 2013. doi:10.1109/nsw.2013.6609212.
  • Katz (1953) L. Katz. A new status index derived from sociometric analysis. Psychometrika, 18 (1), pp. 39–43, 1953. doi:10.1007/bf02289026.
  • Kleinberg (1999) J. M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46 (5), pp. 604–632, 1999. doi:10.1145/324133.324140.
  • Kofidis and Regalia (2002) E. Kofidis and P. A. Regalia. On the best rank-1 approximation of higher-order supersymmetric tensors. SIAM Journal on Matrix Analysis and Applications, 23 (3), pp. 863–884, 2002. doi:10.1137/s0895479801387413.
  • Kolda and Bader (2006) T. Kolda and B. Bader. The TOPHITS model for higher-order web link analysis. In Proceedings of Link Analysis, Counterterrorism and Security 2006. 2006.
  • Kolda et al. (2005) T. Kolda, B. Bader, and J. Kenny. Higher-order web link analysis using multilinear algebra. In Fifth IEEE International Conference on Data Mining. 2005. doi:10.1109/icdm.2005.77.
  • Kolda and Bader (2009) T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51 (3), pp. 455–500, 2009. doi:10.1137/07070111X.
  • Kolda and Mayo (2011) T. G. Kolda and J. R. Mayo. Shifted power method for computing tensor eigenpairs. SIAM Journal on Matrix Analysis and Applications, 32 (4), pp. 1095–1124, 2011. doi:10.1137/100801482.
  • Kolda and Mayo (2014) ———. An adaptive shifted power method for computing generalized tensor eigenpairs. SIAM Journal on Matrix Analysis and Applications, 35 (4), pp. 1563–1581, 2014. doi:10.1137/140951758.
  • Koutra et al. (2013) D. Koutra, J. T. Vogelstein, and C. Faloutsos. DeltaCon: A principled massive-graph similarity function. In Proceedings of the 2013 SIAM International Conference on Data Mining, pp. 162–170. Society for Industrial and Applied Mathematics, 2013. doi:10.1137/1.9781611972832.18.
  • Lathauwer et al. (2000) L. D. Lathauwer, B. D. Moor, and J. Vandewalle. On the best rank-1 and rank-(R1, R2, …, RN) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications, 21 (4), pp. 1324–1342, 2000. doi:10.1137/s0895479898346995.
  • Lim (2005) L.-H. Lim. Singular values and eigenvalues of tensors: A variational approach. In First IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing. 2005. doi:10.1109/camap.2005.1574201.
  • Lim (2013) ———. Tensors and hypermatrices. In Handbook of Linear Algebra, Second Edition, chapter 15, pp. 231–260. Chapman and Hall/CRC, 2013. doi:10.1201/b16113-19.
  • Liu et al. (2010) Y. Liu, G. Zhou, and N. F. Ibrahim. An always convergent algorithm for the largest eigenvalue of an irreducible nonnegative tensor. Journal of Computational and Applied Mathematics, 235 (1), pp. 286–292, 2010. doi:10.1016/
  • Lohmann et al. (2010) G. Lohmann, D. S. Margulies, A. Horstmann, B. Pleger, J. Lepsien, D. Goldhahn, H. Schloegl, M. Stumvoll, A. Villringer, and R. Turner. Eigenvector centrality mapping for analyzing connectivity patterns in fMRI data of the human brain. PLOS ONE, 5 (4), p. e10232, 2010. doi:10.1371/journal.pone.0010232.
  • Lovász (2007) L. Lovász. Eigenvalues of graphs. 2007.
  • Michoel and Nachtergaele (2012) T. Michoel and B. Nachtergaele. Alignment and integration of complex networks by hypergraph-based spectral clustering. Physical Review E, 86 (5), 2012. doi:10.1103/physreve.86.056111.
  • Newman (2003) M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45 (2), pp. 167–256, 2003. doi:10.1137/s003614450342480.
  • Newman (2008) ———. Mathematics of networks. In The New Palgrave Encyclopedia of Economics. Palgrave Macmillan, Basingstoke, second edition, 2008.
  • Ng et al. (2010) M. Ng, L. Qi, and G. Zhou. Finding the largest eigenvalue of a nonnegative tensor. SIAM Journal on Matrix Analysis and Applications, 31 (3), pp. 1090–1099, 2010. doi:10.1137/09074838x.
  • Ng et al. (2011) M. K.-P. Ng, X. Li, and Y. Ye. MultiRank: co-ranking for objects and relations in multi-relational data. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. 2011. doi:10.1145/2020408.2020594.
  • Nie and Wang (2014) J. Nie and L. Wang. Semidefinite relaxations for best rank-1 tensor approximations. SIAM Journal on Matrix Analysis and Applications, 35 (3), pp. 1155–1179, 2014. doi:10.1137/130935112.
  • Nie and Zhang (2017) J. Nie and X. Zhang. Real eigenvalues of nonsymmetric tensors. Computational Optimization and Applications, 70 (1), pp. 1–32, 2017. doi:10.1007/s10589-017-9973-y.
  • Page et al. (1999) L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
  • Qi (2005) L. Qi. Eigenvalues of a real supersymmetric tensor. Journal of Symbolic Computation, 40 (6), pp. 1302–1324, 2005. doi:10.1016/j.jsc.2005.05.007.
  • Qi and Luo (2017) L. Qi and Z. Luo. Tensor analysis: Spectral theory and special tensors, SIAM, 2017.
  • Qi et al. (2016) Y. Qi, P. Comon, and L.-H. Lim. Uniqueness of nonnegative tensor approximations. IEEE Transactions on Information Theory, 62 (4), pp. 2170–2183, 2016. doi:10.1109/tit.2016.2532906.
  • Regalia and Kofidis (2000) P. A. Regalia and E. Kofidis. The higher-order power method revisited: convergence proofs and effective initialization. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. 2000. doi:10.1109/icassp.2000.861047.
  • Rodríguez et al. (2007) J. A. Rodríguez, E. Estrada, and A. Gutiérrez. Functional centrality in graphs. Linear and Multilinear Algebra, 55 (3), pp. 293–302, 2007. doi:10.1080/03081080601002221.
  • Rosvall et al. (2014) M. Rosvall, A. V. Esquivel, A. Lancichinetti, J. D. West, and R. Lambiotte. Memory in network flows and its effects on spreading dynamics and community detection. Nature Communications, 5 (1), 2014. doi:10.1038/ncomms5630.
  • Ruhnau (2000) B. Ruhnau. Eigenvector-centrality — a node-centrality? Social Networks, 22 (4), pp. 357–365, 2000. doi:10.1016/s0378-8733(00)00031-9.
  • Sabidussi (1966) G. Sabidussi. The centrality index of a graph. Psychometrika, 31 (4), pp. 581–603, 1966. doi:10.1007/bf02289527.
  • Scholtes (2017) I. Scholtes. When is a network a network? multi-order graphical model selection in pathways and temporal networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017. doi:10.1145/3097983.3098145.
  • Strogatz (2001) S. H. Strogatz. Exploring complex networks. Nature, 410 (6825), pp. 268–276, 2001. doi:10.1038/35065725.
  • Taylor et al. (2017) D. Taylor, S. A. Myers, A. Clauset, M. A. Porter, and P. J. Mucha. Eigenvector-based centrality measures for temporal networks. Multiscale Modeling & Simulation, 15 (1), pp. 537–574, 2017. doi:10.1137/16m1066142.
  • Tudisco et al. (2018) F. Tudisco, F. Arrigo, and A. Gautier. Node and layer eigenvector centralities for multiplex networks. SIAM Journal on Applied Mathematics, 78 (2), pp. 853–876, 2018. doi:10.1137/17m1137668.
  • Wu et al. (2016) T. Wu, A. R. Benson, and D. F. Gleich. General tensor spectral co-clustering for higher-order data. In Advances in Neural Information Processing Systems, pp. 2559–2567. 2016.
  • Xu et al. (2016) J. Xu, T. L. Wickramarathne, and N. V. Chawla. Representing higher-order dependencies in networks. Science Advances, 2 (5), pp. e1600028–e1600028, 2016. doi:10.1126/sciadv.1600028.
  • Ye and Akoglu (2015) J. Ye and L. Akoglu. Discovering opinion spammer groups by network footprints. In Proceedings of the 2015 ACM on Conference on Online Social Networks. 2015. doi:10.1145/2817946.2820606.
  • Yin et al. (2017) H. Yin, A. R. Benson, J. Leskovec, and D. F. Gleich. Local higher-order graph clustering. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017. doi:10.1145/3097983.3098069.
  • Zhou et al. (2013) G. Zhou, L. Qi, and S.-Y. Wu. Efficient algorithms for computing the largest eigenvalue of a nonnegative tensor. Frontiers of Mathematics in China, 8 (1), pp. 155–168, 2013. doi:10.1007/s11464-012-0268-4.