Random Walk Fundamental Tensor and its Applications to Network Analysis

01/25/2018
by   Golshan Golnari, et al.

We first present a comprehensive review of various random walk metrics used in the literature and express them in a consistent framework. We then introduce the fundamental tensor -- a generalization of the well-known fundamental matrix -- and show that classical random walk metrics can be derived from it in a unified manner. We provide a collection of relations for random walk metrics that are useful and insightful for network studies. To demonstrate the usefulness and efficacy of the proposed fundamental tensor in network analysis, we present four important applications: 1) unification of network centrality measures, 2) characterization of (generalized) network articulation points, 3) identification of a network's most influential nodes, and 4) fast computation of network reachability after failures.


1 Introduction

Random walk and Markov chain theory, which are closely related, have been shown to be powerful tools in many fields, from physics and chemistry to social sciences, economics, and computer science kopp2012x ; kutchukian2009fog ; wilson1977quantum ; acemoglu2011political ; calvet2001forecasting . For network analysis, too, they have shown promise as effective tools he2002individual ; ranjan2013geometry ; golnari2015pivotality ; chen2008clustering , where the hitting time, a well-known Markov metric, is used to measure the distance (or similarity) between different parts of a network and provide more insight into the structural properties of the network. We believe, though, that the applicability of Markov chain theory to network analysis is more extensive and is not restricted to using the hitting time. Markov chain theory enables us to provide more general solutions that cover directed networks (digraphs) and are not tailored only to the special case of undirected networks.

In this paper, we revisit the fundamental matrix in Markov chain theory snell , extend it to a more general form of tensor representation, which we call the fundamental tensor, and use it to tackle four interesting network analysis applications. The fundamental tensor is defined over three dimensions of source node, middle (medial) node, and target node; it represents the expected number of times that the Markov chain visits the middle node when started from the source node, before hitting the target node for the first time. We remark that the (absorbing) fundamental matrix which is frequently referred to in this paper is different from the (ergodic) fundamental matrix grinstead2012introduction and is of special interest to the authors of this paper because: 1) it is nicely interpretable in terms of random walks, 2) it is conceptually interesting, as aggregation over the (absorbing) fundamental tensor dimensions results in different Markov metrics (Section 5), and the articulation points of a network can be directly found from it (Section 6), 3) it can be used both for applications that are modeled by an ergodic chain (Sections 5 and 6) and for applications modeled by an absorbing chain (Sections 7 and 8), and 4) it is easily generalizable to absorbing Markov chains with multiple absorbing states (Section 3.6), which we use to model network applications with multiple target nodes. We show that computing the (absorbing) fundamental tensor is no harder than computing the (ergodic) fundamental matrix, as the entire tensor can be computed very efficiently by a single matrix inversion; there is no need to compute a separate (absorbing) fundamental matrix for each absorbing state to form the entire tensor (which would take O(n^4) computations in total).

As the first application, we show that the fundamental tensor provides a unified way to compute the random walk distance (hitting time), random walk betweenness measure newman2005measure , random walk closeness measure noh2004random , and random walk topological index (Kirchhoff index) klein1993resistance in a conceptual and insightful framework: the hitting time distance as the aggregation of the fundamental tensor over the middle node dimension, betweenness as the aggregation over the source and target node dimensions, closeness as the aggregation over the source and middle node dimensions, and the Kirchhoff index as the aggregation over all three dimensions. These four random walk measures are well-known network analysis tools that have been widely used in the literature boccaletti2006complex ; blochl2011vertex ; borgatti2005centrality ; fortunato2010community ; grady2006random ; fouss2007random .

In the second application, we extend the definition of articulation points, originally defined for undirected networks (where they are also known as cut vertices), to directed networks. We show that the (normalized) fundamental tensor nicely functions as a lookup table for finding all the articulation points of a directed network. Founded on the notion of articulation points, we also propose a load balancing measure for networks. Load balancing is important for network robustness against targeted attacks, as balanced loads help the network show more resilience toward failures. Through extensive experiments, we evaluate load balancing in several synthetic networks with specific shapes as well as in real-world networks.

The applicability and efficiency of the fundamental tensor in social networks is the subject of the third application in this paper. We show that the (normalized) fundamental tensor can be used in the field of social networks to infer the cascade and spread of a phenomenon or an influence in a network, and we derive a formulation to find the most influential nodes for maximizing the influence spread over the network. While the original problem is NP-hard, we propose a greedy algorithm which yields a provably near-optimal solution. We show that this algorithm outperforms the state of the art as well as centrality/importance-measure baselines in maximizing the influence spread in the network.

Since it is inefficient to use regular reachability methods in large and dense networks with a high volume of reachability queries whenever a failure occurs, devising an efficient dynamic reachability method is necessary in such cases. As the fourth application, we present a dynamic reachability method in the form of a pre-computed oracle which is capable of answering reachability queries efficiently, both in the case of failures and when there is no failure, in a general directed network. This pre-computed oracle is in fact the fundamental matrix computed for the extended network and its target node. The efficiency of the algorithm results from a theorem that we prove on the incremental computation of the fundamental tensor when a failure happens in the network. The storage requirement of this oracle is only O(n^2), as it is a single matrix. Note that in the last two applications, the directed network does not need to be strongly connected, and our algorithms can be applied to any general network.

For the sake of completeness, we also provide a comprehensive review of the other Markov metrics, such as hitting time, absorption probability, and hitting cost; the last is a very useful metric for weighted networks which was introduced in the more recent literature fouss2007random and can rarely be found in the Markov chain literature. In the review, we include the Markov metrics' various definitions and formulations, and express them in a consistent form (matrix form, recursive form, and stochastic form). We also show that the fundamental tensor provides a basis for computing these Markov metrics in a unified manner. In addition, we review, gather, and derive many insightful relations for the Markov metrics.

The remainder of this paper is organized as follows. A preliminary on network terminology is presented in Section 2. In Section 3, we review and present various Markov metrics in a unified format. In Section 4, we gather and derive useful relations among the reviewed Markov metrics. Finally, four applications are presented in Sections 5, 6, 7, and 8 to demonstrate the usefulness and efficacy of the fundamental tensor in network analysis.

2 Preliminaries

In general, a network can be abstractly modeled as a weighted and directed graph, denoted by G = (V, E, W). Here V is the set of nodes in the network, such as routers or switches in a communication network or users in a social network, and its size is assumed to be n = |V| throughout the paper; E is the set of (directed) edges representing the (physical or logical) connections between nodes (e.g., a communication link from node i to node j) or entity relations (e.g., the follower-followee relation between two users). The affinity (or adjacency) matrix A is assumed to be nonnegative, i.e., a_{ij} >= 0, where a_{ij} > 0 if and only if edge (i,j) exists. The weight (or cost) matrix W represents the costs assigned to edges in a weighted network. Network G is called strongly connected if all nodes are reachable from each other via at least one path. In this paper, we focus on strongly connected networks, unless stated otherwise.

A random walk in G is modeled by a discrete-time Markov chain, where the nodes of G represent the states of the Markov chain. A target node in the network is modeled by an absorbing state: once the random walk arrives at it, it cannot leave anymore. The Markov chain is fully described by its transition probability matrix P = D^{-1} A, where D is the diagonal matrix of (out-)degrees, i.e., d_{ii} = Σ_j a_{ij}. The quantity d_{ii} is often referred to as the (out-)degree of node i. Throughout the paper, the words "node" and "state", and "network" and "Markov chain", are used interchangeably depending on the context. If the network is strongly connected, the associated Markov chain is irreducible and the stationary probabilities π_i are strictly positive according to the Perron-Frobenius theorem gantmacher1960theory . For an undirected and connected G, the associated Markov chain is reversible and the stationary probabilities are a scalar multiple of the node degrees: π_i = d_i / Σ_j d_j.

3 Definitions of Markov Metrics

We review various Markov metrics and present them using three unified forms: 1) matrix form (in terms of the fundamental matrix), 2) recursive form, and 3) stochastic form. The matrix form is often the preferred form in this paper, and we show how the two other forms can be obtained from it. The stochastic form, however, provides a more intuitive definition of random walk metrics. We also introduce the fundamental tensor as a generalization of the fundamental matrix and show how it can be computed efficiently.

3.1 Fundamental Matrix

The expected-number-of-visits metric counts how many times a random walk visits a node, when the walk starts from a source node and before a stopping criterion is met. The stopping criterion in random walk (or Markov) metrics is often "visiting a target node for the first time," which is referred to as hitting the target node. The fundamental matrix is formed for a specific target node, where the entries are the expected number of visits at a medial node starting from a source node, for all such pairs. In the following, the fundamental matrix is defined formally using three different forms. (Note: the fundamental matrix referred to in this paper is defined for an absorbing chain and is obtained from F = (I - Q)^{-1}. It is different from the fundamental matrix which is defined for an ergodic chain grinstead2012introduction .)

  • Matrix form kemeny1960finite ; snell : Let P be the n x n transition probability matrix of a strongly connected network and let node n be the target node. If the nodes are arranged so that the last index is assigned to the target node, the transition probability matrix can be written in the form

    P = [ Q     r
          s^T   p_nn ]

    and the fundamental matrix is defined as follows:

    F = (I - Q)^{-1}    (1)

    where entry F(i,j) represents the expected number of visits of medial node j, starting from source node i, before hitting (or absorption by) target node n snell . Note that the target node can be any node t, in which case we write F^{(t)} to clarify that the matrix is computed for target node t. This is discussed further in the generalization of Markov metrics to a set of targets (Section 3.6).

    Expanding (I - Q)^{-1} as a geometric series, namely F = I + Q + Q^2 + ..., it is easy to see the probabilistic interpretation of the expected number of visits as a summation over the number of steps taken before visiting node j.

  • Recursive form: Each entry of the fundamental matrix, F(i,j), can be recursively computed in terms of the entries for i's outgoing neighbors. Note that if i = j, F(i,j) is increased by 1 (the random walk starts at i, which counts as the first visit at j).

    F(i,j) = 1_{{i=j}} + Σ_k p_{ik} F(k,j)    (2)

    It is easy to see the direct connection between the recursive form and the matrix form: from F = (I - Q)^{-1}, we have F = I + QF.

  • Stochastic form norris1998markov : Let (X_k)_{k≥0} be a discrete-time Markov chain with the transition probability matrix P, where X_k is the state of the Markov chain at time step k. The indicator function 1_{{X_k = j}} is a Bernoulli random variable, equal to 1 if the state of the Markov chain is j at time k, i.e., X_k = j, and 0 otherwise. The number of visits to node j, denoted by v_j, can be written in terms of the indicator function: v_j = Σ_{k=0}^{T_n - 1} 1_{{X_k = j}}, where the stopping criterion is hitting target node n for the first time, at time T_n. In an irreducible chain, this event is guaranteed to occur in finite time; hence T_n < ∞. F(i,j) is defined as the expected value of v_j when the walk starts from i.

    F(i,j) = E[v_j | X_0 = i] = Σ_{k=0}^{∞} Pr(X_k = j, k < T_n | X_0 = i)    (3)

    where the expression is simply the expanded version of the matrix form F = I + Q + Q^2 + .... Note that in order for F(i,j) to be finite (namely, for the infinite summation to converge), it is sufficient that node n be reachable from all other nodes in the network. In other words, the irreducibility of the entire network is not necessary.
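The three forms above can be checked numerically. The following minimal numpy sketch (the 4-node example network is illustrative, not from the paper) computes the fundamental matrix by the matrix form F = (I - Q)^{-1} and verifies that it satisfies the recursive form F = I + QF:

```python
import numpy as np

# Toy strongly connected network: a 4-node cycle (illustrative example).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix

target = 3                                  # the absorbing target node
keep = [i for i in range(4) if i != target]
Q = P[np.ix_(keep, keep)]                   # transient-to-transient block
F = np.linalg.inv(np.eye(len(keep)) - Q)    # fundamental matrix F = (I - Q)^{-1}

# Sanity check: F satisfies the recursive form F = I + Q F.
assert np.allclose(F, np.eye(len(keep)) + Q @ F)
```

Row sums of F give the hitting times of Section 3.3; for this symmetric cycle they come out to (3, 4, 3) for the three transient nodes.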

3.2 Fundamental Tensor

We define the fundamental tensor, N, as a generalization of the fundamental matrix F: it can be viewed as formed by stacking up the fundamental matrices F^{(t)} constructed for each node t as the target node in a strongly connected network (Eq.(4)), but it can in fact be computed much more efficiently. In Theorem 1, we show that the whole fundamental tensor can be computed from the Moore-Penrose pseudo-inverse of the Laplacian matrix with only O(n^3) complexity; there is no need to compute the fundamental matrices for every target node separately, which would require O(n^4) computation in total.

N(i,j,t) = F^{(t)}(i,j)  for i, j ≠ t, and 0 otherwise    (4)

The fundamental tensor is arranged along three dimensions: source node, medial (middle) node, and target node (Fig. 1).
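Before the efficient O(n^3) route of Theorem 1, the stacking definition in Eq.(4) can be realized directly, at the cost of one inversion per target node (O(n^4) overall). This is a naive baseline sketch, useful for validating faster implementations; the example network is illustrative:

```python
import numpy as np

def fundamental_tensor_naive(P):
    """Stack the per-target fundamental matrices F^(t) into an n x n x n tensor
    N with N[i, j, t] = expected visits to j, starting from i, before hitting t.
    One inversion per target node: O(n^4) overall."""
    n = P.shape[0]
    N = np.zeros((n, n, n))
    for t in range(n):
        keep = [i for i in range(n) if i != t]
        Q = P[np.ix_(keep, keep)]
        F = np.linalg.inv(np.eye(n - 1) - Q)
        N[np.ix_(keep, keep, [t])] = F[:, :, None]
    return N

# Example: unbiased walk on a 4-node cycle (illustrative).
P = np.array([[0, .5, 0, .5],
              [.5, 0, .5, 0],
              [0, .5, 0, .5],
              [.5, 0, .5, 0]])
N = fundamental_tensor_naive(P)
H = N.sum(axis=1)   # hitting times: aggregate over the middle-node dimension
```

Aggregating N over the middle-node dimension yields the hitting time matrix H, which can be checked against the recursive form H(i,t) = 1 + Σ_k p_ik H(k,t).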

3.3 Hitting Time

The (expected) hitting time metric, also known as the first transit time, first passage time, and expected absorption time in the literature, counts the expected number of steps (or time) required to hit a target node for the first time when the random walk starts from a source node. Hitting time is frequently used in the literature as a form of (random walk) distance metric for network analysis. We formally present it in three different forms below.

  • Matrix form snell : Hitting time can be computed from the fundamental matrix (1) as follows:

    H(:,t) = F^{(t)} · 1    (5)

    where 1 is a vector of all ones and H(:,t) is the vector of H(i,t) computed for all i ≠ t. H(i,t) represents the expected number of steps required to hit node t starting from i and is obtained from H(i,t) = Σ_j F^{(t)}(i,j). The intuition behind this formulation is that counting the average number of node visits on the way from the source node to the target node yields the number of steps (distance) required to reach the target node.

  • Recursive form grinstead2012introduction ; norris1998markov ; fouss2007random : The recursive form of is the most well-known form presented in the literature for deriving the hitting time:

    H(i,t) = 1 + Σ_k p_{ik} H(k,t),  for i ≠ t, with H(t,t) = 0    (6)

    It is easy to see the direct connection between the recursive form and the matrix form: from H(:,t) = F^{(t)} · 1 and F^{(t)} = I + Q F^{(t)}, we have H(:,t) = 1 + Q H(:,t).

  • Stochastic form norris1998markov : Let (X_k)_{k≥0} be a discrete-time Markov chain with the transition probability matrix P. The hitting time of the target node t is the random variable T_t given by T_t = inf{k ≥ 0 : X_k = t}, where by convention the infimum of the empty set is ∞. Assuming that the target node is reachable from all the other nodes in the network, we have Pr(T_t < ∞) = 1. The (expected) hitting time from i to t is then given by

    H(i,t) = E[T_t | X_0 = i] = Σ_{k=1}^{∞} k · Pr(T_t = k | X_0 = i)    (7)

    where Pr(T_t = k | X_0 = i) is the probability that the walk starting at i first hits t at step k. The connection between the stochastic form and the matrix form can be found in the appendix.
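The matrix form and the stochastic form can be reconciled empirically: the sketch below (network and sample sizes are illustrative choices, not from the paper) computes H(:,t) = F^{(t)} · 1 exactly, and separately estimates the same hitting time by simulating many walks:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # triangle (illustrative)
P = A / A.sum(1, keepdims=True)

t, keep = 2, [0, 1]
F = np.linalg.inv(np.eye(2) - P[np.ix_(keep, keep)])
h = F @ np.ones(2)          # exact hitting times H(i, t) = sum_j F^{(t)}(i, j)

def walk_length(start, target):
    """Number of steps a single simulated walk takes to hit the target."""
    x, steps = start, 0
    while x != target:
        x = rng.choice(3, p=P[x])
        steps += 1
    return steps

# Monte Carlo estimate of H(0, 2) for comparison with h[0].
est = np.mean([walk_length(0, 2) for _ in range(5000)])
```

On this symmetric triangle both exact hitting times equal 2, and the simulation estimate converges to the same value.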

3.3.1 Commute Time

The commute time between nodes i and j is defined as the sum of the hitting time from i to j and the hitting time from j to i:

C(i,j) = H(i,j) + H(j,i)    (8)

Clearly, commute time is a symmetric quantity, i.e., C(i,j) = C(j,i). In contrast, hitting time is in general not symmetric, even when the network is undirected.
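A small sketch makes the symmetry contrast concrete (the 3-node directed network and its weights are illustrative): the hitting time matrix H is asymmetric, while C = H + H^T is symmetric by construction:

```python
import numpy as np

# Directed 3-node example (weights illustrative): hitting time is asymmetric,
# but the commute time C(i,j) = H(i,j) + H(j,i) is always symmetric.
P = np.array([[0.0, 0.9, 0.1],
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])

def hitting_times(P):
    """Full hitting time matrix via one fundamental matrix per target."""
    n = P.shape[0]
    H = np.zeros((n, n))
    for t in range(n):
        keep = [i for i in range(n) if i != t]
        F = np.linalg.inv(np.eye(n - 1) - P[np.ix_(keep, keep)])
        H[keep, t] = F @ np.ones(n - 1)
    return H

H = hitting_times(P)
C = H + H.T          # commute time matrix
```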

3.4 Hitting Cost

The (expected) hitting cost, also known as average first-passage cost in the literature, generalizes the (expected) hitting time by assigning a cost to each transition. The hitting cost from i to t, denoted by HC(i,t), is the average cost incurred by the random walk starting from node i until it hits node t for the first time. The cost of transiting edge (i,j) is given by w_{ij}. The hitting cost was first introduced by Fouss et al. fouss2007random and given in a recursive form. In the following, we first provide a rigorous definition of the hitting cost in the stochastic form, and then show how the matrix form and recursive form can be derived from this definition.

  • Stochastic form: Let (X_k)_{k≥0} be a discrete-time Markov chain with the transition probability matrix P and cost matrix W. The hitting cost of the target node t is the random variable Θ_t = Σ_{k=0}^{T_t - 1} w(X_k, X_{k+1}), whose range is a countable set. If we view w_{ij} as the length of edge (link) (i,j), then the hitting cost is the total length of the steps that the random walk takes until it hits t for the first time. The expected value of Θ_t when the random walk starts at node i is given by

    HC(i,t) = E[Θ_t | X_0 = i] = E[ Σ_{k=0}^{T_t - 1} w(X_k, X_{k+1}) | X_0 = i ]    (9)

    For compactness, we delegate the more detailed derivation of the stochastic form and its connection with the matrix form to the appendix.

  • Matrix form: Hitting cost can be computed from the following closed form formulation:

    HC(:,t) = F^{(t)} c    (10)

    where c is the vector of expected outgoing costs and HC(:,t) is the vector of HC(i,t) computed for all i ≠ t. The expected outgoing cost of node i is obtained from c_i = Σ_j p_{ij} w_{ij}. Note that the hitting time matrix in Eq.(5) is a special case of the hitting cost matrix, obtained when w_{ij} = 1 for all edges (i,j).

  • Recursive form fouss2007random : The recursive computation of is given as follows:

    HC(i,t) = c_i + Σ_{k≠t} p_{ik} HC(k,t)    (11)

    It is easy to see the direct connection between the recursive form and the matrix form: from HC(:,t) = F^{(t)} c and F^{(t)} = I + Q F^{(t)}, we have HC(:,t) = c + Q HC(:,t).
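The matrix form of the hitting cost is easy to exercise numerically. The sketch below (edge costs are illustrative) computes the expected outgoing costs c_i = Σ_j p_ij w_ij and then HC(:,t) = F^{(t)} c, and checks the recursive form:

```python
import numpy as np

# Weighted triangle (illustrative): unbiased walk, asymmetrically costed edges.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
W = np.array([[0, 2, 5], [2, 0, 1], [5, 1, 0]], float)  # edge costs
P = A / A.sum(1, keepdims=True)
c = (P * W).sum(axis=1)                 # expected outgoing costs c_i

t, keep = 2, [0, 1]
F = np.linalg.inv(np.eye(2) - P[np.ix_(keep, keep)])
hc = F @ c[keep]                        # hitting costs HC(:, t) = F^{(t)} c

# Sanity check against the recursive form HC(i,t) = c_i + sum_{k != t} p_ik HC(k,t).
for idx, i in enumerate(keep):
    assert np.isclose(hc[idx], c[i] + sum(P[i, k] * hc[keep.index(k)] for k in keep))
```

Setting every w_ij = 1 collapses c to the all-ones vector and recovers the hitting times of Eq.(5), as noted in the text.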

3.4.1 Commute Cost

Commute cost is defined as the expected cost incurred in hitting j for the first time and then getting back to i. As in the case of commute time, commute cost is a symmetric metric and is given by

CC(i,j) = HC(i,j) + HC(j,i)    (12)

3.5 Absorption Probability

The absorption probability, also known as hitting probability in the literature, is the probability of hitting or getting absorbed by a target node (or any node in a set of target nodes) in finite time norris1998markov . For a single target node, this probability is trivially equal to 1 for all nodes in a strongly connected network. We therefore consider more than one target node in this paper.

Let indexes m and l be assigned to two target nodes in a strongly connected network. We partition the transition probability matrix as follows:

P = [ Q      r_m    r_l
      s_m^T  p_mm   p_ml
      s_l^T  p_lm   p_ll ]    (13)

where Q is an (n-2) x (n-2) matrix, r_m, r_l, s_m, and s_l are (n-2)-dimensional vectors, and the rest are scalars. The corresponding absorption probability can be expressed in three forms as follows:

  • Matrix form snell : The absorption probability matrix is a matrix whose columns represent the absorption probabilities to targets m and l, respectively:

    Q_{m≺l} = F^{(m,l)} r_m    (14)
    Q_{l≺m} = F^{(m,l)} r_l    (15)

    where F^{(m,l)} = (I - Q)^{-1}. The notation Q_{m≺l} emphasizes that target m is hit sooner than target l, and Q_{l≺m} indicates that hitting target l occurs sooner than target m. The formulation above states that, to obtain the probability of getting absorbed (hit) by a given target when starting a random walk from a source node, we add up, over the transient nodes, the one-step absorption probabilities into that target, weighted by the number of times we expect to visit those nodes snell . For a strongly connected network, these two probabilities sum up to 1 for each starting node i, i.e., Q_{m≺l}(i) + Q_{l≺m}(i) = 1.

  • Recursive form norris1998markov : For each of the target nodes, the absorption probability starting from any source node can be found from the absorption probabilities starting from its neighbors:

    Q_{m≺l}(i) = Σ_k p_{ik} Q_{m≺l}(k)    (16)
    Q_{l≺m}(i) = Σ_k p_{ik} Q_{l≺m}(k)    (17)

    where Q_{m≺l}(m) = 1 and Q_{m≺l}(l) = 0 (and symmetrically for Q_{l≺m}). Note that the neighbors of a node can also be the target nodes. Thus, the right-hand side of the above equations decomposes into two parts: Σ_{k∉{m,l}} p_{ik} Q_{m≺l}(k) + p_{im}, and in the same way for Q_{l≺m}. Now, it is easy to see how the recursive form is connected to the matrix form: from Q_{m≺l} = F^{(m,l)} r_m and F^{(m,l)} = I + Q F^{(m,l)}, we have Q_{m≺l} = r_m + Q Q_{m≺l}.

  • Stochastic form norris1998markov : Let (X_k)_{k≥0} be a discrete-time Markov chain with the transition matrix P. The hitting time of the target state m before l is the random variable T_{m≺l} given by T_{m≺l} = inf{k ≥ 0 : X_k = m and k < T_l}. Then the probability of ever hitting m before l is Q_{m≺l}(i) = Pr(T_{m≺l} < ∞ | X_0 = i) norris1998markov . This can be derived as follows:

    (18)

    The stochastic form for is derived in a similar vein.
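The matrix form above is straightforward to realize in code. The sketch below (a 5-node path, a standard gambler's-ruin setup chosen for illustration) computes the absorption probabilities to two targets via B = F R, where R collects the one-step transition probabilities from transient nodes into the targets:

```python
import numpy as np

# Two absorbing targets (nodes 0 and 4) at the ends of a 5-node path.
n, targets = 5, [0, 4]
P = np.zeros((n, n))
for i in range(1, n - 1):               # unbiased walk in the interior
    P[i, i - 1] = P[i, i + 1] = 0.5

keep = [1, 2, 3]                        # transient nodes
Q = P[np.ix_(keep, keep)]
R = P[np.ix_(keep, targets)]            # transient -> target transitions
F = np.linalg.inv(np.eye(3) - Q)
B = F @ R                               # B[i, 0] = Pr(hit node 0 before node 4)

# Each row of B is a probability distribution over which target is hit first.
assert np.allclose(B.sum(axis=1), 1.0)
```

From the middle of the path both targets are equally likely (probability 1/2 each), matching the classical gambler's-ruin answer.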

3.6 Generalization: Markov Metrics for a Set of Targets

Let S be a set of target nodes. Then the transition probability matrix can be written in the following form:

P = [ P_TT   P_TS
      P_ST   P_SS ]    (19)

where T = V ∖ S is the set of non-target nodes. Note that the set of target nodes can be modeled as a set of absorbing states in a Markov chain, in which case T is the set of transient (non-absorbing) nodes. Since hitting the target nodes is the stopping criterion for all the Markov metrics we have reviewed so far, it does not matter where the random walk goes afterwards or what the outgoing edges of the target nodes are. Therefore, replacing the rows P_ST and P_SS by those of an absorbing chain makes no difference for computing the Markov metrics.

For a given set of target nodes S, the fundamental matrix F^{(S)} is obtained using the following relation:

F^{(S)} = (I - P_TT)^{-1}    (20)

which is a general form of the fundamental matrix defined for a single target (Eq.(1)). Entry F^{(S)}(i,j) represents the expected number of visits to j before hitting any of the target nodes in S, when starting a random walk from i.

The hitting time for S is defined as the expected number of steps to hit the set S for the first time, which occurs by hitting any of the target nodes in this set. The vector of hitting times with respect to a target set S can be computed using

H(:,S) = F^{(S)} · 1    (21)

If there exists a matrix of costs W defined for the network, the hitting cost for target set S is given below:

HC(:,S) = F^{(S)} c_T    (22)

where c_T is the vector of expected outgoing costs c_i = Σ_j p_{ij} w_{ij}, for i ∈ T.

The absorption probability matrix of target set S is a matrix whose columns represent the absorption probability for each target node in S, i.e., the probability that it gets hit sooner than the other target nodes:

Q_S = F^{(S)} P_TS    (23)

where Q_S is a row-stochastic matrix for a strongly connected network.

We remark that if the network is not strongly connected (thus the corresponding Markov chain is not irreducible), I - P_TT may not be non-singular for every choice of target set S; hence F^{(S)} may not exist. The necessary and sufficient condition for the existence of F^{(S)} is that the target set S include at least one node from each recurrent equivalence class in the network. A recurrent equivalence class is a minimal set of nodes that has no outgoing edge to nodes outside the set. Once a random walk reaches a node in a recurrent equivalence class, it can no longer get out of that set. A recurrent equivalence class can be as small as one single node, which is called an absorbing node.
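The existence condition can be demonstrated numerically. In the sketch below (the 5-node network is constructed for illustration), two recurrent classes {0,1} and {3,4} are joined by a transient node; I - P_TT is singular when the target set misses a recurrent class, and invertible when it touches both:

```python
import numpy as np

# Two recurrent classes {0,1} and {3,4} joined by transient node 2 (illustrative).
P = np.array([[0.0, 1.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0]])

def condition_number(S):
    """Condition number of I - P_TT for target set S (inf-like when singular)."""
    keep = [i for i in range(5) if i not in S]
    M = np.eye(len(keep)) - P[np.ix_(keep, keep)]
    return np.linalg.cond(M)

# S = {0} misses the class {3,4}: I - P_TT is singular, F^{(S)} does not exist.
# S = {0, 3} touches both classes: the inverse exists and is well conditioned.
```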

4 Useful Relations for Markov Metrics

In this section, we first establish several important theorems, and then gather and derive a number of useful relations among the Markov metrics. We start by relating the fundamental tensor to the Laplacian matrices of a general network. For an undirected network or graph G, the graph Laplacian L_G = D - A (where A is the adjacency matrix of G and D is the diagonal matrix of node degrees) and its normalized version have been widely studied and have found many applications (see, e.g., chung94spectral and the references therein). In particular, it is well known that commute times are closely related to the Moore-Penrose pseudo-inverse L_G^+ of L_G (and a variant of Eq.(24) also holds for the normalized Laplacian):

(24)

Li and Zhang Yanhua-Infocom10 ; WAW2010Yanhua ; Li:Digraph were the first to introduce the (correct) generalization of graph Laplacians to directed networks/graphs (digraphs), using the stationary distribution of the transition matrix P for the associated (random walk) Markov chain defined on a directed network G. For a strongly connected network G, its normalized digraph Laplacian is defined using Π = diag(π_1, ..., π_n), the diagonal matrix of stationary probabilities. Li and Zhang proved that the hitting time and commute time can be computed from the Moore-Penrose pseudo-inverse of the normalized digraph Laplacian using the following relations:

(25)

and

(26)

We define the (unnormalized) digraph Laplacian for a general (directed or undirected) network as L = Π(I - P) and the random walk Laplacian as L_rw = I - P. Clearly, L = Π L_rw. Note that for a (connected) undirected graph, since π_i = d_i / vol(G), the classical graph Laplacian satisfies L_G = vol(G) · L. Any results which hold for L also hold for L_G up to a scalar multiple. In the following, we relate the fundamental tensor to the digraph and random walk Laplacians L and L_rw, and use this relation to establish similar expressions for computing hitting and commute times using L^+, analogous to Eqs.(25) and (26).

Lemma 1 (boley2011commute ).

Let be an irreducible matrix such that . Let be the Moore-Penrose pseudo-inverse of partitioned similarly and , , where and are -dim column vectors, is the transpose of the column vector ( is a -dim row vector and is a -dim column vector, a la MATLAB). Then the inverse of the matrix exists and is given by:

(27)

where denotes the identity matrix.

Note that node n in the above lemma can be substituted by any other node (index).

Theorem 1.

The fundamental tensor can be computed from the Moore-Penrose pseudo-inverse of the digraph Laplacian matrix L, as well as the random walk Laplacian matrix L_rw, as follows, which results in O(n^3) time complexity:

(28)
(29)

where is the stationary probability of node and is a diagonal matrix whose -th diagonal entry is equal to .

Proof.

Note that L · 1 = 0, as required in Lemma 1. The above equations follow from Lemma 1. The nullity of matrix L for a strongly connected network is 1. Using Eq.(28) or (29), all entries of the fundamental tensor can be computed from L^+ in constant time each. ∎

Corollary 1.
(30)

where is a constant independent of .

Proof.
(31)
(32)
(33)

where the second equality follows from the fact that the column sums of L^+ are zero. Later, in Section 5, we will show that this constant is a scalar multiple of the Kirchhoff index of the network. ∎

Corollary 2.

Hitting time and commute time can also be expressed in terms of entries of the pseudo-inverse of the digraph Laplacian matrix L Li:Digraph :

(34)
(35)
Proof.

Use Eq.(5) and (28). ∎

Note that we can also write the metrics in terms of the random walk Laplacian matrix L_rw by a simple substitution, using L = Π L_rw.

Corollary 3.

Hitting cost and commute cost can be expressed in terms of the digraph Laplacian matrix :

(36)
(37)

where and .

Proof.

Use Eq.(10) and (28). From Eq.(35) and (37), it is also interesting to note that commute cost is a scalar multiple of commute time. ∎

Lemma 2 (boley2011commute ).

Let be an non-singular matrix and suppose is singular. Then the Moore-Penrose pseudo-inverse of is given as:

(38)

where , .

Theorem 2.

For an ergodic Markov chain, the Moore-Penrose pseudo-inverse of the random-walk Laplacian L_rw can be computed from the (ergodic) fundamental matrix grinstead2012introduction as follows:

(39)

where 1 is a vector of all 1's and π denotes the vector of stationary probabilities.

Proof.

The theorem is a direct result of applying Lemma 2. ∎

Theorem 2, along with Theorem 1, reveals the relation between the absorbing and ergodic fundamental matrices. They also show that the fundamental tensor can be computed by a single matrix inverse, be it a Moore-Penrose pseudo-inverse or a regular matrix inverse: L_rw^+ in Eq.(29) can be computed either by taking the pseudo-inverse of L_rw or by using Eq.(39). Discussion on computing Markov metrics via the group inverse can be found in meyer1975role ; kirkland2012group .

Theorem 3 (Incremental Computation of the Fundamental Matrix).

The fundamental matrix for target set S ∪ K can be computed from the fundamental matrix for target set S as follows,

F^{(S∪K)}(i,j) = F^{(S)}(i,j) - F^{(S)}(i,K) (F^{(S)}(K,K))^{-1} F^{(S)}(K,j)    (40)

where F^{(S)}(i,K) denotes the row corresponding to node i and the columns corresponding to set K of the fundamental matrix F^{(S)}, and the (sub-)matrices F^{(S)}(K,K) and F^{(S)}(K,j) are similarly defined.

Proof.

Consider the matrix M = I - P_TT, where the absorbing set is S and the transient set is T = V ∖ S. The inverse of M yields the fundamental matrix F^{(S)}, and the inverse of its sub-matrix obtained by removing the rows and columns corresponding to set K yields the fundamental matrix F^{(S∪K)}. Using the following equations from the Schur complement, we see that the inverse of a sub-matrix can be derived from that of the original matrix.

If is invertible, we can factor the matrix as follows

(41)

Inverting both sides of the equation yields

(42)
(43)
(44)

Therefore, F^{(S∪K)} can be computed from F^{(S)}. ∎

Corollary 4.

The simplified form of Theorem 3, for a single target t and a single added node k, is given by

F^{(t,k)}(i,j) = F^{(t)}(i,j) - F^{(t)}(i,k) F^{(t)}(k,j) / F^{(t)}(k,k)    (45)
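This single-target update amounts to a rank-one correction of the old fundamental matrix, avoiding a fresh inversion. The sketch below (the random 6-node network is illustrative) applies the correction and compares it against direct computation:

```python
import numpy as np

# Incremental update: enlarge the target set from {t} to {t, k} by a rank-one
# correction F_{t,k}(i,j) = F_t(i,j) - F_t(i,k) F_t(k,j) / F_t(k,k).
rng = np.random.default_rng(1)
n = 6
A = rng.random((n, n)) + 0.1            # dense positive weights -> strongly connected
np.fill_diagonal(A, 0)
P = A / A.sum(1, keepdims=True)

def fundamental(S):
    """Transient node order and fundamental matrix for target set S."""
    keep = [i for i in range(n) if i not in S]
    return keep, np.linalg.inv(np.eye(len(keep)) - P[np.ix_(keep, keep)])

t, k = 5, 2
keep1, Ft = fundamental([t])            # old target set {t}
kk = keep1.index(k)
upd = Ft - np.outer(Ft[:, kk], Ft[kk, :]) / Ft[kk, kk]  # rank-one correction
upd = np.delete(np.delete(upd, kk, 0), kk, 1)           # drop k's row/column

keep2, Ftk = fundamental([t, k])        # direct computation for comparison
assert np.allclose(upd, Ftk)
```

The correction costs O(n^2) per added target instead of the O(n^3) of a fresh inversion, which is the source of the efficiency in the failure-reachability application.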
Lemma 3.
(46)

where

Proof.

It follows easily from Eq.(1). ∎

Corollary 5 (Another Recursive Form for the Fundamental Matrix).
F^{(t)}(i,j) = 1_{{i=j}} + Σ_k F^{(t)}(i,k) p_{kj}    (47)
Proof.

It is a special case of Lemma 3. Note that the recursive relation in Eq.(2) is in terms of i's outgoing neighbors, while this one is in terms of the incoming neighbors of j. ∎

Theorem 4 (Absorption Probability & Normalized Fundamental Matrix).

The absorption probability of a target node t in an absorbing set S can be written in terms of the normalized fundamental matrix, where the columns are normalized by the diagonal entries:

Q_{t≺S∖{t}}(i) = F^{(S∖{t})}(i,t) / F^{(S∖{t})}(t,t)    (48)
Proof.

where the third and fifth equalities follow directly from Theorem 3 and Lemma 3, respectively. ∎
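Theorem 4 can be verified numerically against the standard B = F R route of Section 3.5. In the sketch below (a 5-node path, chosen for illustration; the reflecting edge at node 0 keeps the chain stochastic when only node 4 is absorbing), the ratio F^{(S∖{t})}(i,t) / F^{(S∖{t})}(t,t) reproduces the absorption probabilities to t:

```python
import numpy as np

# Path graph 0-1-2-3-4; absorbing set S = {0, 4}, take t = 0.
n = 5
P = np.zeros((n, n))
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5     # unbiased interior walk
P[0, 1] = P[n - 1, n - 2] = 1.0         # ends kept stochastic (illustrative)

S, t = [0, 4], 0
keep = [1, 2, 3]

# Route 1: standard absorption probabilities B = (I - Q)^{-1} R.
F = np.linalg.inv(np.eye(3) - P[np.ix_(keep, keep)])
B = F @ P[np.ix_(keep, S)]

# Route 2: normalized fundamental matrix with only S \ {t} = {4} absorbing.
keep2 = [0, 1, 2, 3]
F2 = np.linalg.inv(np.eye(4) - P[np.ix_(keep2, keep2)])
ratio = np.array([F2[keep2.index(i), keep2.index(t)] /
                  F2[keep2.index(t), keep2.index(t)] for i in keep])

assert np.allclose(B[:, 0], ratio)
```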

We are now in a position to gather and derive a number of useful relations among the random walk metrics.

Relation 1 (Complementary relation of absorption probabilities).
Σ_{t∈S} Q_{t≺S∖{t}}(i) = 1,  for all i ∈ T    (50)

where S is the absorbing set and T = V ∖ S is the set of transient nodes.

Proof.

Based on the definition of the absorption probabilities and the assumption that all the nodes in T are transient, the probability that a random walk starting in T eventually ends up in set S is 1. ∎

Relation 2 (Relations between the fundamental matrix and commute time).
(51)
(52)
(53)
(54)
Proof.

Use (28) and (35). ∎

Relation 3 (The hitting time detour overhead in terms of other metrics).
(55)
(56)
Proof.

For the first equation use (28) and (34), and for the second one use the previous equation along with (4) and (51). ∎

Relation 4 (The hitting time for two target nodes in terms of hitting time for a single target).
(57)

which can also be reformulated as: .

Proof.

Aggregate both sides of Eq.(3) and substitute Eq.(4) into it. ∎

Relation 5 (Inequalities for hitting time).