On Fast Computation of Directed Graph Laplacian Pseudo-Inverse

by Daniel Boley, et al.
University of Minnesota

The Laplacian matrix and its pseudo-inverse for a strongly connected directed graph are fundamental in computing many properties of the graph. Examples include random-walk centrality and betweenness measures, average hitting and commute times, and other connectivity measures. These measures arise in the analysis of many social and computer networks. In this short paper, we show how a linear system involving the Laplacian may be solved in time linear in the number of edges, times a factor depending on the separability of the graph. This leads directly to the column-by-column computation of the entire Laplacian pseudo-inverse in time quadratic in the number of nodes, i.e., constant time per matrix entry. The approach is based on "off-the-shelf" iterative methods for which global linear convergence is guaranteed, without recourse to any matrix elimination algorithm.





8. Compute Stationary Probabilities. The vector of stationary probabilities $\pi$ is the eigenvector of $P^{\mathsf T}$ corresponding to the eigenvalue $1$. Since the underlying graph is strongly connected, Perron-Frobenius theory guarantees the eigenvalue $1$ is simple. The number of other eigenvalues of modulus 1 is equal to the periodicity of the graph or random walk. For instance, a bipartite graph will have an eigenvalue $-1$. If per is the period of the graph and we use $k > \text{per}$ vectors in the following algorithm, then the algorithm is guaranteed to converge at a rate bounded by $|\lambda_{k+1}|$ [29], since the eigenvalue $1$ is known and is a simple eigenvalue of largest modulus.

Algorithm 3. Modified Subspace Iteration. [29]
Input: matrix $A = P^{\mathsf T}$, hyperparameters: block size $k$, tolerance tol, initial guess $V_0$ with orthonormal columns.
Output: eigenvector $\pi$ of $A$ corresponding to eigenvalue 1.

  1. Set $V = V_0$, where the first column of $V_0$ is all non-negative.

  2. Repeat until convergence:

    1. Set $W = A V$.

    2. Compute the Schur decomposition $Q T Q^{\mathsf H} = V^{\mathsf H} W$, with the diagonal entries of $T$ ordered to put the entry closest to 1 in the (1,1) position.

    3. Set $V = \operatorname{orth}(W Q)$ (orthonormalize the columns of $WQ$). Ensure the first column of $V$ is all non-negative (flipping the signs of columns of $V$ to make the first column all non-negative, if necessary).

  3. Return $\pi$ = the first column of $V$, normalized so its entries sum to 1.
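As an illustration, the iteration above can be sketched in a few lines of NumPy. This is a hedged sketch, not the authors' code: the ordered-Schur step is replaced here by an eigen-decomposition of the small $k \times k$ projected matrix (picking the Ritz value closest to 1), which serves the same purpose for this sketch, and the transition matrix in the usage below is hypothetical.

```python
import numpy as np

def stationary_by_subspace_iteration(P, k=2, tol=1e-10, maxit=5000):
    """Sketch of Algorithm 3 for a row-stochastic matrix P of a strongly
    connected digraph.  k should exceed the period of the random walk.
    The ordered-Schur step is replaced by a small k-by-k eigenproblem."""
    n = P.shape[0]
    A = P.T                                   # stationary pi solves A pi = pi
    rng = np.random.default_rng(0)
    V, _ = np.linalg.qr(np.abs(rng.standard_normal((n, k))))
    for _ in range(maxit):
        W = A @ V                             # power step on the block
        V, _ = np.linalg.qr(W)                # re-orthonormalize
        B = V.T @ (A @ V)                     # small projected matrix
        evals, evecs = np.linalg.eig(B)
        j = np.argmin(np.abs(evals - 1.0))    # Ritz value closest to 1
        pi = np.real(V @ evecs[:, j])
        pi = pi / pi.sum()                    # fixes sign, entries sum to 1
        if np.linalg.norm(A @ pi - pi, 1) < tol:
            return pi
    return pi
```

For an aperiodic chain, $k = 1$ reduces this to the power method; taking $k$ larger than the period yields the convergence rate quoted above.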

9. Restarted GMRES. The heart of the computation of the pseudo-inverse is the use of the Lemma stated earlier to convert a pseudo-inverse computation into an ordinary inverse computation. The restarted GMRES algorithm has received much attention in the literature (see [26] and the references therein), with many enhancements for numerical stability that increase the cost by at most a constant factor. For the purpose of showing the overall cost of the algorithm, we give a simplified sketch of the basic method. Using restarted GMRES bounds the cost of each iteration.

Algorithm 4. Arnoldi-based Restarted GMRES.
Input: matrix $A$, right-hand side $b$, hyperparameters: restart count $m$, outer iteration limit maxit, tolerance tol, initial vector $x_0$.
Output: solution $x$ such that $\|b - Ax\| \le \text{tol} \cdot \|b\|$.

  1. Compute $r = b - A x_0$ and set $x = x_0$.

  2. For $k = 1, \ldots, \text{maxit}$:

    1. Set $\beta = \|r\|$ and set $v_1 = r / \beta$.

    2. If $\beta \le \text{tol} \cdot \|b\|$, return solution $x$.

    3. Generate an orthonormal Arnoldi basis $V_m = [v_1, \ldots, v_m]$ for the Krylov space
      $\mathcal{K}_m(A, v_1) = \operatorname{span}\{v_1, A v_1, \ldots, A^{m-1} v_1\}$, and the $(m+1) \times m$ upper Hessenberg matrix $\bar{H}_m$ such that $A V_m = V_{m+1} \bar{H}_m$.

    4. Compute $y = \arg\min_y \|\beta e_1 - \bar{H}_m y\|$.

    5. Set $x = x + V_m y$ and $r = b - A x$.
The cost of one outer step 2 of restarted GMRES is $O(m \cdot c_{\text{matvec}} + m^2 n)$ [26]. Here $c_{\text{matvec}}$ is the cost of one matrix-vector product involving the sparse matrix $A$. This takes one floating-point multiply and one floating-point add for each non-zero matrix element, so the cost is $c_{\text{matvec}} = O(|E|)$, where $|E|$ is the number of edges. The number of outer iterations required is controlled by the eigen-structure of the symmetric part $(A + A^{\mathsf T})/2$, which is related to the separability of the underlying graph [6]. Note that substep 4 is a least squares problem costing only $O(m^2)$ to solve, due to the special Hessenberg structure of $\bar{H}_m$.
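For completeness, the same outer iteration is available off the shelf. The snippet below is illustrative only: the 4-node digraph is made up, and it solves a PageRank-style shifted system $(I - \alpha P^{\mathsf T})x = b$ as a stand-in, since the paper's lemma converting the Laplacian pseudo-inverse to an ordinary inverse is not reproduced in this excerpt. Each matrix-vector product costs $O(|E|)$.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# Hypothetical strongly connected 4-node digraph (edge list assumed).
edges = [(0, 1), (1, 2), (1, 3), (2, 0), (2, 3), (3, 0), (3, 2)]
rows, cols = zip(*edges)
n = 4
Wadj = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

# Row-stochastic transition matrix P (uniform walk on out-edges).
out_deg = np.asarray(Wadj.sum(axis=1)).ravel()
P = sp.diags(1.0 / out_deg) @ Wadj

# PageRank-style nonsingular shifted system (I - alpha P^T) x = b.
alpha = 0.85
A = sp.eye(n, format='csr') - alpha * P.T
b = np.ones(n) / n

# Off-the-shelf restarted GMRES with restart count m = 20.
x, info = gmres(A, b, restart=20, maxiter=200)
```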

Regarding the number of GMRES iterations, we have the following bound, which yields the main Theorem as an immediate consequence.

Theorem 5. [11, 10, 21, 17] Let $A$ be a matrix such that its Hermitian part $H = (A + A^{\mathsf H})/2$ is Hermitian positive definite, and let $\mu > 0$ denote the smallest eigenvalue of $H$. The residual $r_k$ obtained by GMRES [27] after $k$ steps applied to the linear system $Ax = b$ satisfies

$$\|r_k\| \le \left(1 - \frac{\mu^2}{\|A\|^2}\right)^{k/2} \|r_0\|.$$
We give a sketch of a proof, referring to [11, 10, 21, 17] for detailed proofs, including several tighter bounds. First we need the following lemma.

Lemma 6. Let $A$ be a matrix whose Hermitian part $H = (A + A^{\mathsf H})/2$ is positive definite with smallest eigenvalue $\mu > 0$, and let $v$ with $\|v\| = 1$ be given. Let $t = (Av)^{\mathsf H} v / \|Av\|^2$. Then

$$\|v - tAv\|^2 \le 1 - \frac{\mu^2}{\|A\|^2} \qquad\text{and}\qquad |t| \ge \frac{\mu}{\|A\|^2}.$$

Proof. $t$ is the value achieving the minimum in the scalar least squares problem $\min_s \|v - sAv\|$ and hence satisfies the Galerkin condition $(Av)^{\mathsf H}(v - tAv) = 0$. So we have

$$\|v - tAv\|^2 = v^{\mathsf H}(v - tAv) = 1 - \frac{|v^{\mathsf H} A v|^2}{\|Av\|^2},$$

where the last equality substitutes $t = (Av)^{\mathsf H} v / \|Av\|^2$. A well known result on the field of values, for any matrix $A$ whose Hermitian part $H$ is positive definite, is the inequality [16]

$$|v^{\mathsf H} A v| \ge \operatorname{Re}(v^{\mathsf H} A v) = v^{\mathsf H} H v \ge \mu \qquad (\|v\| = 1).$$

Hence the first inequality follows, since $\|Av\| \le \|A\|$ gives

$$\|v - tAv\|^2 \le 1 - \frac{\mu^2}{\|Av\|^2} \le 1 - \frac{\mu^2}{\|A\|^2}.$$

The remaining inequality follows from the identity

$$\frac{1}{|t|} = \frac{\|Av\|^2}{|v^{\mathsf H} A v|} \le \frac{\|A\|^2}{\mu}.$$

Inverting both sides yields $|t| \ge \mu / \|A\|^2$. $\square$

Sketch of Proof of Theorem 5. GMRES applied to $Ax = b$ with initial residual $r_0$ finds in $k$ steps a solution with a residual satisfying $\|r_k\| = \min_{p \in \mathcal{P}_k} \|p(A)\, r_0\|$, where $\mathcal{P}_k$ is the set of all polynomials of degree up to $k$ satisfying $p(0) = 1$. In particular, after a single step, $\|r_1\| \le \|(I - tA)\, r_0\|$, where $t$ is the Galerkin-optimal step size of Lemma 6 applied to $v = r_0 / \|r_0\|$. Hence we have the bound from Lemma 6: $\|r_1\|^2 \le (1 - \mu^2/\|A\|^2)\, \|r_0\|^2$. This amounts to a single step of a variant of the classical Richardson iteration. Repeating this Richardson iteration yields residuals $\tilde{r}_k$ satisfying

$$\|\tilde{r}_k\|^2 \le \left(1 - \frac{\mu^2}{\|A\|^2}\right)^k \|r_0\|^2.$$

The norm of the residual after $k$ Richardson steps is bounded above by this convergence rate and bounded below by the norm of the GMRES residual:

$$\|r_k\| \le \|\tilde{r}_k\| \le \left(1 - \frac{\mu^2}{\|A\|^2}\right)^{k/2} \|r_0\|. \qquad\square$$
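The single-step argument can be checked numerically. In this sketch (assumed setup: a random matrix whose symmetric part is exactly the identity, so $\mu = 1$), one Galerkin-optimal Richardson step contracts the residual by at least the factor $(1 - \mu^2/\|A\|^2)^{1/2}$, and the step size satisfies the lower bound from Lemma 6.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
S = rng.standard_normal((n, n))
S = (S - S.T) / 2.0                    # skew-symmetric part
A = np.eye(n) + S                      # Hermitian part of A is I, so mu = 1
mu = 1.0
A_norm = np.linalg.norm(A, 2)          # spectral norm ||A||

r0 = rng.standard_normal(n)            # initial residual
Ar = A @ r0
t = (r0 @ Ar) / (Ar @ Ar)              # least-squares (Galerkin) step size
r1 = r0 - t * Ar                       # one Richardson step

rate = np.sqrt(1.0 - mu**2 / A_norm**2)
shrink = np.linalg.norm(r1) / np.linalg.norm(r0)
```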



  • [1] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Rev. Mod. Phys., 74:47–97, Jan 2002.
  • [2] A. Berman and R. J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York, 1979. Reprinted by SIAM, Philadelphia, 1994.
  • [3] Daniel Boley, Alejandro Buendia, and Golshan Golnari. Random walk Laplacian and network centrality measures. arxiv.org/abs/1808.02912, 2018.
  • [4] Daniel Boley, Gyan Ranjan, and Zhi-Li Zhang. Commute times for a directed graph using an asymmetric Laplacian. Lin. Alg. & Appl., 435:224–242, 2011.
  • [5] Alejandro Buendia and Daniel Boley. Optimized graph-based trust mechanisms using hitting times. In AAMAS Intl Workshop on Trust in Agent Societies, May 2017.
  • [6] Fan Chung. Laplacians and the Cheeger inequality for directed graphs. Annals of Combinatorics, 9(1):1–19, April 2005.
  • [7] Fan R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
  • [8] Michael B. Cohen, Jon Kelner, John Peebles, Richard Peng, Aaron Sidford, and Adrian Vladu. Faster algorithms for computing the stationary distribution, simulating random walks, and more. In IEEE 57th Annual Symp. on Found. Comput. Sci. (FOCS), pages 583–592, Oct 2016.
  • [9] Michael B. Cohen, Jonathan Kelner, Rasmus Kyng, John Peebles, Richard Peng, Anup B. Rao, and Aaron Sidford. Solving directed Laplacian systems in nearly-linear time through sparse LU factorizations. arxiv.org/abs/1811.10722, 2018.
  • [10] M. Eiermann and O.G. Ernst. Geometric aspects of the theory of Krylov subspace methods. Acta Numer., 10:251–312, 2001.
  • [11] Howard C. Elman. Iterative Methods for Large Sparse Nonsymmetric Systems of Linear Equations. PhD thesis, Yale University, New Haven, 1982.
  • [12] François Fouss, Alain Pirotte, Jean-Michel Renders, and Marco Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge and Data Engineering, 19(3):355–369, 2007.
  • [13] David Gleich, Leonid Zhukov, and Pavel Berkhin. Fast parallel PageRank: A linear system approach. In WWW 2005, 2005.
  • [14] Golshan Golnari, Zhi-Li Zhang, and Daniel Boley. Markov fundamental tensor and its applications to network analysis. Linear Algebra and Appl., 564:126–158, 2019.
  • [15] Charles M. Grinstead and J. Laurie Snell. Introduction to Probability. American Mathematical Society, 2nd edition, 2006. www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/book.html.
  • [16] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991.
  • [17] Wayne Joubert. On the convergence behavior of the restarted GMRES algorithm for solving nonsymmetric linear systems. Num. Lin. Alg. Appl., 1(5):427–447, 1994.
  • [18] J.G. Kemeny and J.L. Snell. Finite Markov Chains. Springer-Verlag, 1976.
  • [19] Yanhua Li and Zhi-Li Zhang. Random walks on digraphs: A theoretical framework for estimating transmission costs in wireless routing. In The 29th IEEE Conference on Computer Communications (IEEE INFOCOM 2010), March 2010.
  • [20] Yanhua Li and Zhi-Li Zhang. Random walks on digraphs, the generalized digraph Laplacian and the degree of asymmetry. In 7th Workshop on Algorithms and Models for Webgraphs WAW'10, Dec 13-17 2010. (co-located with WINE'10), www.cs.umn.edu/~zhzhang/Papers/Yanhua-Li-WAW10.pdf.
  • [21] Jörg Liesen and Petr Tichý. The field of values bound on ideal GMRES. arxiv.org/abs/1211.5969, 2012.
  • [22] Brandon Liu, David Parkes, and Sven Seuken. Personalized hitting time for informative trust mechanisms despite sybils. In Int’l Conf. Auto. Agents & Multiagent Sys. (AAMAS), 2016.
  • [23] J. Norris. Markov Chains. Cambridge Univ. Press, 1997.
  • [24] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. TR SIDL-WP-1999-0120, Computer Systems Laboratory, Stanford Univ., 1998. ilpubs.stanford.edu:8090/422/.
  • [25] M. Richardson, R. Agrawal, and P. Domingos. Trust management for the semantic web. In ISWC, 2003. (Data from https://snap.stanford.edu/data/soc-Epinions1.html).
  • [26] Y. Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2nd edition, 2003.
  • [27] Y. Saad and M. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. and Stat. Comput., 7:856–869, 1986.
  • [28] D. A. Spielman and S.-H. Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM J Matrix Anal, 35:835–885, 2014.
  • [29] G. W. Stewart. Simultaneous iteration for computing invariant subspaces of non-hermitian matrices. Numer. Math., 25:123–136, 1976.