Similarity-Aware Spectral Sparsification by Edge Filtering

11/14/2017
by Zhuo Feng, et al.
Michigan Technological University

In recent years, spectral graph sparsification techniques that can compute ultra-sparse graph proxies have been extensively studied for accelerating various numerical and graph-related applications. Prior nearly-linear-time spectral sparsification methods first extract a low-stretch spanning tree of the original graph to form the backbone of the sparsifier, and then recover small portions of spectrally-critical off-tree edges to the spanning tree to significantly improve the approximation quality. However, it is not clear how many off-tree edges should be recovered to achieve a desired spectral similarity level within the sparsifier. Motivated by recent graph signal processing techniques, this paper proposes a similarity-aware spectral graph sparsification framework that leverages an efficient off-tree edge filtering scheme to construct spectral sparsifiers with a guaranteed spectral similarity (relative condition number) level. An iterative graph densification framework and a generalized eigenvalue stability checking scheme are introduced to facilitate efficient and effective filtering of off-tree edges even for highly ill-conditioned problems. The proposed method has been validated using various kinds of graphs obtained from public-domain sparse matrix collections relevant to VLSI CAD, finite element analysis, as well as social and data networks frequently studied in machine learning and data mining applications.


1 Introduction

Spectral methods are playing increasingly important roles in many graph and numerical applications [22], such as scientific computing [20], numerical optimization [3], data mining [14], graph analytics [10], machine learning [7], graph signal processing [16], and VLSI computer-aided design [23, 9]. For example, classical spectral graph partitioning (data clustering) algorithms embed original graphs into low-dimensional space using the first few nontrivial eigenvectors of graph Laplacians and subsequently perform graph partitioning (data clustering) on the low-dimensional graphs to obtain high-quality solutions [14].

Recent spectral graph sparsification research [17, 2, 19, 14, 4, 12] allows computing nearly-linear-sized subgraphs (sparsifiers) that can robustly preserve the spectrum (i.e., eigenvalues and eigenvectors) of the original graph's Laplacian, which immediately leads to a series of theoretically nearly-linear-time numerical and graph algorithms for solving sparse matrices, graph-based semi-supervised learning (SSL), spectral graph partitioning (data clustering), and max-flow problems [11, 19, 3, 20]. For example, sparsified circuit networks allow for developing more scalable computer-aided design (CAD) algorithms for large VLSI systems [9, 23]; sparsified social (data) networks enable more efficient understanding and analysis of large social (data) networks [22]; and sparsified matrices can be immediately leveraged to accelerate the solution of large linear systems of equations [24]. To this end, a spectral sparsification algorithm leveraging an edge sampling scheme that sets sampling probabilities proportional to edge effective resistances (in the original graph) has been proposed in [17].

A practically-efficient, nearly-linear-complexity spectral graph sparsification algorithm has recently been introduced in [9], which first extracts a "spectrally critical" spanning-tree subgraph as a backbone of the sparsifier, and subsequently recovers a small portion of dissimilar "spectrally critical" off-tree edges to the spanning tree. However, in many scientific computing and graph-related applications, it is important to compute spectral graph sparsifiers of a desired spectral similarity level: introducing too few edges may lead to a poor approximation of the original graph, whereas too many edges can result in high computational complexity. For example, when using a preconditioned conjugate gradient (PCG) solver to solve a symmetric diagonally dominant (SDD) matrix for multiple right-hand-side (RHS) vectors, the PCG solver should converge to a good solution as quickly as possible, which usually requires the sparsifier (preconditioner) to be highly spectrally similar to the original problem; on the other hand, in many graph partitioning tasks, only the Fiedler vector (the first nontrivial eigenvector) of the graph Laplacian is needed [18], so even a sparsifier with much lower spectral similarity will suffice.

This paper introduces a similarity-aware spectral graph sparsification framework that leverages efficient spectral off-tree edge embedding and filtering schemes to construct spectral sparsifiers with guaranteed spectral similarity. The contributions of this work are summarized as follows:

  1. We present a similarity-aware spectral graph sparsification framework by leveraging spectral off-tree edge embedding and filtering schemes that have been motivated by recent graph signal processing techniques [16].

  2. An iterative graph densification procedure is proposed to incrementally improve the approximation of the sparsifier, which enables flexible trade-offs between the complexity and the spectral similarity of the sparsified graph.

  3. Extensive experiments have been conducted to validate the proposed method in various numerical and graph-related applications, such as solving sparse SDD matrices, and spectral graph partitioning, as well as simplification of large social and data networks.

2 Spectral Graph Sparsification

Consider a graph G = (V, E, w) with V denoting the vertex (data point) set of the graph, E denoting the edge set of the graph, and w denoting a weight (similarity) function that assigns positive weights to all edges. The graph Laplacian L_G of G is an SDD matrix defined as follows:

L_G(i, j) = −w_ij if (i, j) ∈ E;  L_G(i, i) = Σ_{(i,k)∈E} w_ik;  and L_G(i, j) = 0 otherwise.   (1)

Spectral graph sparsification [19] aims to preserve the original graph spectrum within ultra-sparse subgraphs (graph sparsifiers), which allows preserving not only cuts in the graph but also eigenvalues and eigenvectors of the original graph Laplacian, distances (e.g., effective resistances) between vertices, low-dimensional graph embeddings, etc. Two graphs G and P are said to be σ-spectrally similar if for all real vectors x their quadratic forms satisfy:

x^T L_P x / σ ≤ x^T L_G x ≤ σ x^T L_P x.   (2)

Define the relative condition number to be κ(L_G, L_P) = λ_max / λ_min, where λ_max and λ_min denote the largest and smallest nonzero generalized eigenvalues satisfying:

L_G u_i = λ_i L_P u_i,   (3)

with u_i denoting the generalized eigenvector associated with λ_i. It can further be shown that κ(L_G, L_P) ≤ σ², which indicates that a smaller relative condition number corresponds to a higher spectral similarity. Consequently, σ² can simply be used as an upper bound of the relative condition number.
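As a toy illustration of these definitions (not the paper's algorithm), the relative condition number of a small sparsifier can be checked directly with dense linear algebra by restricting both Laplacians to the complement of their shared all-ones nullspace; the helper names below are ours:

```python
import numpy as np
from scipy.linalg import eigh, null_space

def laplacian(n, edges):
    """Dense graph Laplacian from a weighted edge list (i, j, w)."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def relative_condition_number(LG, LP):
    """Generalized eigenvalues of (LG, LP) on the subspace orthogonal
    to the all-ones vector, and their ratio kappa = lmax / lmin."""
    n = LG.shape[0]
    V = null_space(np.ones((1, n)))          # orthonormal basis of 1-perp
    lam = eigh(V.T @ LG @ V, V.T @ LP @ V, eigvals_only=True)
    return lam[0], lam[-1], lam[-1] / lam[0]

# triangle graph G vs. its spanning-path sparsifier P: the off-tree
# edge (0, 2) has tree-path effective resistance 2, so lmax = 1 + 2 = 3
LG = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
LP = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0)])
lmin, lmax, kappa = relative_condition_number(LG, LP)
```

Since P is a subgraph of G with identical edge weights, every generalized eigenvalue is at least 1; here the nonzero spectrum is {1, 3} and κ = 3.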

3 Similarity-Aware Spectral Sparsification By Edge Filtering

3.1 Overview of Our Approach

The proposed method for similarity-aware spectral sparsification of undirected graphs is summarized as follows. For a given input graph, the algorithm flow involves the following key procedures: (a) low-stretch spanning tree [8, 1] extraction based on the original graph Laplacian; (b) spectral (generalized eigenvalue) embedding and filtering of off-tree edges by leveraging the recent spectral perturbation analysis framework [9]; (c) incremental sparsifier improvement (graph densification) by gradually adding small portions of dissimilar off-tree edges to the spanning tree. Fig. 1 shows the spectral drawings [10] of an airfoil graph [6] as well as its spectrally-similar subgraph computed by the proposed similarity-aware spectral sparsification algorithm.

In the rest of this paper, we assume that G is a weighted, undirected, and connected graph, whereas P is its sparsifier. To simplify our analysis, we assume the edge weights in the sparsifier remain the same as the original ones, though edge re-scaling schemes [19] can be applied to further improve the approximation. The descending eigenvalues of L_P⁺ L_G are denoted by λ_max = λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n ≥ 0, where L_P⁺ denotes the Moore–Penrose pseudoinverse of L_P.

Figure 1: Two spectrally-similar airfoil graphs.

3.2 Spectral Embedding of Off-Tree Edges

It has been shown that there are not too many large generalized eigenvalues for spanning-tree sparsifiers [21]: L_T⁺ L_G has at most k generalized eigenvalues greater than st_T(G)/k, where st_T(G) denotes the total stretch of the spanning-tree subgraph T with respect to the original graph G. Recent research shows that every undirected graph has a low-stretch spanning tree (LSST) such that [8, 1]:

st_T(G) = O(m log n · log log n),   (4)

where m = |E| and n = |V|. As a result, it is possible to construct an ultra-sparse yet spectrally similar sparsifier by recovering only a small portion of important off-tree edges to the spanning tree: for example, σ-similar spectral sparsifiers with only a small number of off-tree edges can be computed efficiently using the spectral-perturbation-based method [9].
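For intuition, the tree/off-tree split can be reproduced with off-the-shelf tools; here a maximum-weight spanning tree (obtained by negating a minimum-spanning-tree routine) merely stands in for a true low-stretch spanning tree [8, 1]:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# toy weighted graph: 5 nodes, 6 undirected edges
n = 5
rows = [0, 0, 1, 1, 2, 3]
cols = [1, 2, 2, 3, 3, 4]
wts  = [4.0, 1.0, 2.0, 5.0, 3.0, 1.0]
A = csr_matrix((wts + wts, (rows + cols, cols + rows)), shape=(n, n))

# maximum-weight spanning tree via the standard negation trick
T = -minimum_spanning_tree(-A)
tree_edges = set(zip(*T.nonzero()))
off_tree = [(i, j) for i, j in zip(rows, cols)
            if (i, j) not in tree_edges and (j, i) not in tree_edges]
```

The n − 1 = 4 tree edges form the backbone of the sparsifier; the two remaining off-tree edges are the candidates for spectral ranking and recovery.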

To identify important off-tree edges, the following generalized eigenvalue perturbation analysis is considered [9]:

L_G (u_i + δu_i) = (λ_i + δλ_i)(L_P + δL_P)(u_i + δu_i),   (5)

where a perturbation matrix δL_P is applied for the inclusion of extra off-tree edges into L_P and results in perturbed generalized eigenvalues λ_i + δλ_i and eigenvectors u_i + δu_i. The key to effective spectral sparsification is to identify the off-tree edges that will result in the greatest reduction of the dominant generalized eigenvalues. To this end, the following scheme for embedding generalized eigenvalues into each off-tree edge is adopted in this work [9]:
Step 1: Start with an initial random vector x = Σ_i α_i u_i, where u_i are the L_P-orthogonal generalized eigenvectors of L_P⁺ L_G that satisfy u_i^T L_P u_i = 1, and u_i^T L_P u_j = 0 for i ≠ j;
Step 2: Perform t-step generalized power iterations with x to obtain x_t = (L_P⁺ L_G)^t x; x_t will be a good approximation of the dominant eigenvectors;
Step 3: Compute the Laplacian quadratic form of δL_P,max with x_t:

x_t^T δL_P,max x_t = Σ_{(p,q)∈E∖E_T} w_pq (x_t(p) − x_t(q))² = Σ_{(p,q)∈E∖E_T} Q_{p,q},   (6)

where δL_P,max denotes the perturbation of L_P including all off-tree edges, e_pq denotes the vector with the p-th element being 1, the q-th element being −1, and all other elements being 0, and Q_{p,q} denotes the edge Joule heat of the off-tree edge (p, q). The amplitude of x_t^T δL_P,max x_t reflects the spectral similarity between graphs G and P: a larger value indicates larger dominant generalized eigenvalues and thus lower spectral similarity. More importantly, (6) allows embedding generalized eigenvalues into the Laplacian quadratic form of each off-tree edge and ranking each off-tree edge according to its edge Joule heat (spectral criticality): recovering the off-tree edges with the largest Q_{p,q} will most significantly decrease the largest generalized eigenvalues. In practice, a small number of generalized power iterations will suffice for spectral edge embedding purposes.
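The three steps above can be sketched with dense linear algebra (here `np.linalg.pinv` stands in for the fast spanning-tree/preconditioned solver used in [9], so this is for illustration on small graphs only; helper names are ours):

```python
import numpy as np

def lap(n, edges):
    """Dense graph Laplacian from a weighted edge list (i, j, w)."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def edge_joule_heat(LG, LP, off_tree, t=2, seed=0):
    """t-step generalized power iteration x <- pinv(LP) @ LG @ x,
    then each off-tree edge's Joule heat w * (x[p] - x[q])**2."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(LG.shape[0])
    x -= x.mean()                        # drop the all-ones component
    LPp = np.linalg.pinv(LP)
    for _ in range(t):
        x = LPp @ (LG @ x)
        x -= x.mean()
    return {(p, q): w * (x[p] - x[q]) ** 2 for p, q, w in off_tree}

tree = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
off  = [(0, 2, 1.0), (0, 3, 1.0)]
heat = edge_joule_heat(lap(4, tree + off), lap(4, tree), off)
```

Sorting `heat` in descending order yields the spectral ranking of the off-tree edges.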

3.3 “Spectrally-Unique” Off-Tree Edges

To simplify the following analysis, we define a "spectrally unique" off-tree edge to be one that connects vertices p and q and only impacts a single large generalized eigenvalue λ_{p,q}. Then the truncated version of (6) including the top k dominant "spectrally unique" off-tree edges for fixing the top k largest eigenvalues of L_P⁺ L_G can be expressed as follows:

(7)

Since each "spectrally unique" off-tree edge only impacts one generalized eigenvalue, its Joule heat can be expressed according to (6), which leads to:

(8)

Then the effective resistance of edge (p, q) in P becomes:

(9)

which immediately leads to:

(10)

Since the stretch of off-tree edge (p, q) is computed as its edge weight multiplied by the effective resistance of the tree path between p and q, (10) also indicates that for "spectrally unique" off-tree edges the eigenvalue λ_{p,q} is directly determined by the edge stretch. Consequently, the key off-tree edges identified by (6) or (10) will have the largest stretch values and will therefore most significantly impact the largest eigenvalues of L_P⁺ L_G. (10) can also be considered a randomized version of the edge stretch that is further scaled up by an eigenvalue-dependent factor.
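The stretch itself is easy to compute once the tree Laplacian's pseudoinverse is available, since the tree-path effective resistance between p and q equals L_T⁺(p, p) + L_T⁺(q, q) − 2 L_T⁺(p, q) (a dense sketch; helper names are ours):

```python
import numpy as np

def tree_stretch(LT, off_tree):
    """Stretch of each off-tree edge (p, q, w): the edge weight times
    the effective resistance of the tree path between p and q."""
    LTp = np.linalg.pinv(LT)
    return {(p, q): w * (LTp[p, p] + LTp[q, q] - 2 * LTp[p, q])
            for p, q, w in off_tree}

# unit-weight path 0-1-2-3 as the spanning tree
LT = np.array([[ 1., -1.,  0.,  0.],
               [-1.,  2., -1.,  0.],
               [ 0., -1.,  2., -1.],
               [ 0.,  0., -1.,  1.]])
st = tree_stretch(LT, [(0, 2, 1.0), (0, 3, 1.0)])
```

With unit weights the stretch equals the tree-path length: 2 for edge (0, 2) and 3 for edge (0, 3).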

3.4 Spectral Sparsification as A Graph Filter

Although (6) and (10) provide a spectral ranking for each off-tree edge, it is not clear how many off-tree edges should be recovered to the spanning tree for achieving a desired spectral similarity level. To this end, we introduce a simple yet effective spectral off-tree edge filtering scheme motivated by recent graph signal processing techniques [16]. To more efficiently analyze signals on general undirected graphs, graph signal processing techniques have been extensively studied recently [16]. There is a clear analogy between traditional signal processing based on classical Fourier analysis and graph signal processing: 1) the signals at different time points in classical Fourier analysis correspond to the signals at different nodes in an undirected graph; 2) the more slowly oscillating functions in time domain correspond to the graph Laplacian eigenvectors associated with lower eigenvalues and more slowly varying (smoother) components across the graph. A comprehensive review of fundamental signal processing operations, such as filtering, translation, modulation, dilation, and down-sampling to the graph setting has been provided in [16].

Spectral sparsification aims to maintain the simplest subgraph sufficient for preserving the slowly-varying or "low-frequency" signals on graphs, and can therefore be regarded as a "low-pass" graph filter. In other words, spectrally sparsified graphs will preserve the eigenvectors associated with low eigenvalues more accurately than those associated with high eigenvalues, and thus will retain "low-frequency" graph signals sufficiently well, but not highly-oscillating (signal) components, due to the missing edges.

In practice, preserving the spectral (structural) properties of the original graph within the spectral sparsifier is key to the design of many fast numerical and graph-related algorithms [17, 11, 3, 20]. For example, when using a spectral sparsifier as a preconditioner in preconditioned conjugate gradient (PCG) iterations, the convergence rate for achieving a desired accuracy level only depends on the spectral similarity (relative condition number), while in spectral graph partitioning and data clustering tasks only the first few eigenvectors associated with the smallest nontrivial eigenvalues of the graph Laplacian are needed [18, 14].

3.5 Off-Tree Edge Filtering with Joule Heat

To recover only the off-tree edges that are most critical for achieving the desired spectral similarity level, we propose the following scheme for truncating "spectrally unique" off-tree edges based on each edge's Joule heat. For a spanning-tree preconditioner, since there will be at most k generalized eigenvalues greater than st_T(G)/k, the following simple yet nearly worst-case generalized eigenvalue distribution can be assumed:

λ_i ≈ st_T(G) / i,  for i = 1, 2, …   (11)

To most economically select the top-k "spectrally unique" off-tree edges that will dominantly impact the top-k largest generalized eigenvalues, the following sum of quadratic forms (Joule heat levels) can be computed based on (10) by performing t-step generalized power iterations with multiple random vectors:

(12)

The goal is to select the top-k "spectrally unique" off-tree edges for fixing the top-k largest generalized eigenvalues such that the resulting upper bound of the relative condition number becomes λ_max^(k) / λ_min^(k), where λ_max^(k) and λ_min^(k) denote the largest and smallest generalized eigenvalues after adding the top-k "spectrally unique" off-tree edges. Then we have:

(13)

When multiple random vectors are used for computing (12), the estimated Joule heat levels are expected to concentrate, which allows us to define the normalized edge Joule heat for the k-th "spectrally unique" off-tree edge through the following simplifications:

(14)

The key idea of the proposed similarity-aware spectral sparsification is to leverage the normalized Joule heat (14) as a threshold for filtering off-tree edges: only the off-tree edges with normalized Joule heat values greater than the threshold will be selected for inclusion into the spanning tree to achieve the desired spectral similarity (σ) level. Although the above scheme is derived for filtering "spectrally unique" off-tree edges, general off-tree edges can also be filtered using similar strategies. Since adding the off-tree edges with the largest Joule heat to the subgraph will mainly impact the largest generalized eigenvalues but not the smallest ones, we will assume λ_min remains approximately unchanged, and use the following edge truncation scheme for filtering general off-tree edges: an off-tree edge will be included into the sparsifier if its normalized Joule heat value is greater than the threshold determined by:

(15)

where the threshold is determined by the desired spectral similarity level of the sparsifier, and the maximum Joule heat of all off-tree edges is computed by (6) with multiple initial random vectors.
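The filtering step itself is a one-liner once the Joule heat values are available; in this sketch the threshold is an explicit parameter rather than the σ-derived value of (15):

```python
def filter_off_tree_edges(heat, threshold):
    """Keep off-tree edges whose Joule heat, normalized by the maximum
    heat over all off-tree edges, exceeds the threshold."""
    hmax = max(heat.values())
    return sorted(e for e, h in heat.items() if h / hmax > threshold)

# heats 4.0, 1.0, 0.2 normalize to 1.0, 0.25, 0.05
kept = filter_off_tree_edges({(0, 2): 4.0, (1, 3): 1.0, (2, 4): 0.2},
                             threshold=0.2)
```

Raising the threshold (i.e., asking for a lower similarity level) keeps fewer off-tree edges, which is exactly the trade-off described above.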

3.6 Estimation of Extreme Eigenvalues

To apply the above spectral off-tree edge filtering scheme, we need to compute the threshold in (15), which further requires estimating the extreme generalized eigenvalues λ_max and λ_min of L_P⁺ L_G. In this work, we propose the following efficient methods for computing these extreme generalized eigenvalues.

3.6.1 Estimating λmax via Power Iterations

Since generalized power iterations converge at a geometric rate determined by the separation of the two largest generalized eigenvalues, the error of the estimated eigenvalue will decrease quickly when λ_2/λ_1 is small. It has been shown that the largest eigenvalues of L_P⁺ L_G are well separated from each other [21], which leads to very fast convergence of generalized power iterations for estimating λ_max. To achieve scalable performance of power iterations, we can adopt recent graph-theoretic algebraic multigrid (AMG) methods for solving the sparsified Laplacian matrix [13, 24].
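A dense sketch of this estimator (with `np.linalg.pinv` standing in for the AMG solve of the sparsified Laplacian):

```python
import numpy as np

def lambda_max(LG, LP, iters=50, seed=0):
    """Largest generalized eigenvalue of (LG, LP) via power iteration
    on pinv(LP) @ LG, finished with a generalized Rayleigh quotient."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(LG.shape[0])
    x -= x.mean()                        # stay orthogonal to all-ones
    LPp = np.linalg.pinv(LP)
    for _ in range(iters):
        x = LPp @ (LG @ x)
        x -= x.mean()
        x /= np.linalg.norm(x)
    return (x @ LG @ x) / (x @ LP @ x)

# triangle graph G vs. its spanning-path sparsifier P: true lmax = 3
LG = np.array([[ 2., -1., -1.], [-1.,  2., -1.], [-1., -1.,  2.]])
LP = np.array([[ 1., -1.,  0.], [-1.,  2., -1.], [ 0., -1.,  1.]])
est = lambda_max(LG, LP)
```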

3.6.2 Estimating λmin via Node Coloring

Since the smallest eigenvalues of L_P⁺ L_G are crowded together [21], using (shifted) inverse power iterations may not be efficient due to the extremely slow convergence rate. To the best of our knowledge, none of the existing eigenvalue decomposition methods can efficiently compute λ_min.

This work exploits the following Courant–Fischer theorem for generalized eigenvalue problems:

λ_min = min over x ≠ 0, x ⊥ 1 of (x^T L_G x) / (x^T L_P x),   (16)

where x is required to be orthogonal to the all-one vector. (16) indicates that if we can find a vector x that minimizes the ratio between the quadratic forms of the original and sparsified Laplacians, λ_min can subsequently be computed. By restricting the values in x to be only 1 or −1, which can be considered as assigning one of two colors to each node in graphs G and P, the following simplifications can be made:

λ_min ≤ min over x ∈ {1, −1}^|V| of (x^T L_G x) / (x^T L_P x),   (17)

which will always allow estimating an upper bound for λ_min. To this end, we first initialize all nodes with value 1 and subsequently try to find a node p whose value, when flipped to −1, minimizes the ratio between the quadratic forms. Since flipping a single node p makes each quadratic form equal to four times the weighted degree of p, this yields:

λ_min ≤ min over p ∈ V of d_G(p) / d_P(p),   (18)

where d_G(p) and d_P(p) denote the weighted degrees of node p in G and P, respectively. The above procedure for estimating λ_min only requires finding the node with the smallest node degree ratio and thus can be easily implemented and efficiently performed even for very large graph problems. Our results for real-world graphs show that the proposed method is highly efficient and can reasonably estimate the smallest generalized eigenvalues when compared with existing generalized eigenvalue methods [15].
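Because the quadratic form of a Laplacian with a single node p flipped to −1 (all other nodes at +1) equals four times that node's weighted degree, the bound reduces to a scan over the Laplacian diagonals (a sketch; helper names are ours):

```python
import numpy as np

def lambda_min_upper_bound(LG, LP):
    """Node-coloring bound: flipping one node p to -1 gives quadratic
    forms 4*deg_G(p) and 4*deg_P(p), so the smallest weighted-degree
    ratio (read off the Laplacian diagonals) upper-bounds lambda_min."""
    return float(np.min(np.diag(LG) / np.diag(LP)))

# triangle graph G vs. its spanning-path sparsifier P
LG = np.array([[ 2., -1., -1.], [-1.,  2., -1.], [-1., -1.,  2.]])
LP = np.array([[ 1., -1.,  0.], [-1.,  2., -1.], [ 0., -1.,  1.]])
bound = lambda_min_upper_bound(LG, LP)
```

Here the bound is 1 (attained at the middle node), which happens to match the true λ_min for this pair.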

3.7 Iterative Graph Densification

To achieve more effective edge filtering for similarity-aware spectral graph sparsification, we propose to iteratively recover off-tree edges to the sparsifier through an incremental graph densification procedure. Each densification iteration adds a small portion of "filtered" off-tree edges to the latest spectral sparsifier, and the spectral similarity is estimated to determine whether more off-tree edges are needed. Each graph densification iteration includes the following steps:

  1. Update the subgraph Laplacian matrix as well as its solver by leveraging recent graph-theoretic algebraic multigrid methods [13, 24];

  2. Estimate the spectral similarity by computing λ_max and λ_min using the methods described in Section 3.6;

  3. If the spectral similarity is not satisfactory, continue with the following steps; otherwise, terminate the subgraph densification procedure.

  4. Perform t-step generalized power iterations with multiple random vectors to compute the sum of Laplacian quadratic forms in (12);

  5. Rank and filter each off-tree edge according to its normalized Joule heat value using the threshold in (15);

  6. Check the similarity of each selected off-tree edge and only add dissimilar edges to the latest sparsifier.
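The loop above can be sketched end-to-end on a toy graph; dense linear algebra replaces the LSST extraction and the AMG solvers of [13, 24], the similarity test uses exact generalized eigenvalues, and a single hottest edge is added per iteration:

```python
import numpy as np
from scipy.linalg import eigh, null_space

def lap(n, edges):
    """Dense graph Laplacian from a weighted edge list (i, j, w)."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def gen_eig_extremes(LG, LP):
    """Smallest/largest generalized eigenvalues on the 1-perp subspace."""
    V = null_space(np.ones((1, LG.shape[0])))
    lam = eigh(V.T @ LG @ V, V.T @ LP @ V, eigvals_only=True)
    return lam[0], lam[-1]

def densify(n, tree, off_tree, sigma=2.0, t=2, seed=0, max_iter=10):
    """Add the hottest off-tree edge per iteration until the relative
    condition number drops below sigma**2 (or edges run out)."""
    rng = np.random.default_rng(seed)
    sparsifier, remaining = list(tree), list(off_tree)
    LG = lap(n, tree + off_tree)
    for _ in range(max_iter):
        LP = lap(n, sparsifier)
        lmin, lmax = gen_eig_extremes(LG, LP)
        if lmax / lmin <= sigma ** 2 or not remaining:
            break
        x = rng.standard_normal(n)       # t-step generalized power iteration
        x -= x.mean()
        LPp = np.linalg.pinv(LP)
        for _ in range(t):
            x = LPp @ (LG @ x)
            x -= x.mean()
        heat = [(w * (x[p] - x[q]) ** 2, (p, q, w)) for p, q, w in remaining]
        hottest = max(heat)[1]
        sparsifier.append(hottest)
        remaining.remove(hottest)
    return sparsifier, lmax / lmin

tree = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
off  = [(0, 3, 1.0), (0, 2, 1.0)]
sp, kappa = densify(4, tree, off, sigma=1.0)
```

With the (unachievably strict) target sigma = 1, the loop recovers both off-tree edges and terminates once the sparsifier equals the original graph, at which point κ = 1.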

4 Experimental results

The proposed spectral sparsification algorithm has been implemented in C++ (https://sites.google.com/mtu.edu/zhuofeng-graphspar). The test cases used in this paper have been selected from a wide variety of matrices that have been used in circuit simulation, finite element analysis, machine learning, and data mining applications. If the original matrix is not a graph Laplacian, it is converted into one by setting each edge weight to the absolute value of the corresponding nonzero entry in the lower triangular part of the matrix; if edge weights are not available in the original matrix file, a unit edge weight is assigned to all edges. All of our experiments have been conducted using a single CPU core of a computing platform running 64-bit RHEL 7.2 with a 12-core CPU.

4.1 Estimation of Extreme Eigenvalues

Table 1: Results of extreme eigenvalue estimations for the test cases fe_rotor, pdb1HYS, bcsstk36, brack2, and raefsky3.

In Table 1, the extreme generalized eigenvalues λ_max and λ_min estimated by the proposed methods (Section 3.6) are compared with the ones computed by the "eigs" function in Matlab for sparse matrices from [6], and the relative errors are also shown. λ_max is estimated using fewer than ten generalized power iterations.

We also illustrate the results of spectral edge ranking and filtering according to the Joule heat levels computed by a one-step generalized power iteration using (6) in Fig. 2 for two sparse matrices from [6]. The thresholds of normalized edge Joule heat values required for spectral edge filtering are labeled using red dashed lines. It is observed in Fig. 2 that there is a sharp drop in the top normalized edge Joule heat values, which indicates that there are not many large eigenvalues of L_P⁺ L_G in either case and agrees well with the prior theoretical analysis [21].

Figure 2: Spectral edge ranking and filtering by normalized Joule heat of off-tree edges for two test cases [6], with the top off-tree edges highlighted in red rectangles.

4.2 A Scalable Sparse SDD Matrix Solver

The spectral sparsifier obtained by the proposed similarity-aware algorithm is also leveraged as a preconditioner in a PCG solver. The RHS input vector is generated randomly and the solver is set to converge to a prescribed accuracy level for all test cases. Table 2 reports, for each graph, the numbers of nodes and edges in the original graph, followed by the sparsifier edge density, the number of PCG iterations required for converging to the desired accuracy level, and the total graph sparsification time, for each of two spectral similarity levels. As observed in all test cases, there are very clear trade-offs between graph density, computation time, and spectral similarity for all spectral sparsifiers extracted using the proposed method: sparsifiers with higher spectral similarity (smaller relative condition number) allow converging to the required solution accuracy level in much fewer PCG iterations, but need to retain more edges in the subgraphs and thus require longer time to compute (sparsify).

Graphs        |V|     |E|     |Es|/|V|  iters  time   |Es|/|V|  iters  time
G3_circuit    1.6E6   3.0E6   1.11      21     20s    1.05      37     8s
thermal2      1.2E6   3.7E6   1.14      20     23s    1.06      36     9s
ecology2      1.0E6   2.0E6   1.14      20     16s    1.06      40     5s
tmt_sym       0.7E6   2.2E6   1.21      19     16s    1.14      38     4s
paraboli_fem  0.5E6   1.6E6   1.22      18     16s    1.09      38     3s
Table 2: Results of the iterative SDD matrix solver at two spectral similarity levels (left: higher similarity; right: lower similarity).
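The solver setup can be sketched with SciPy's CG, using an exact factorization of the (regularized) sparsifier Laplacian as the preconditioner; the tiny diagonal shift removes the shared nullspace and stands in for the grounding used by practical SDD solvers:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, cg

def pcg_iterations(LG, LP, b, reg=1e-8):
    """CG on the regularized SDD system, preconditioned by the
    sparsifier Laplacian; a callback counts the iterations."""
    n = LG.shape[0]
    A = LG + reg * np.eye(n)
    piv = lu_factor(LP + reg * np.eye(n))
    M = LinearOperator((n, n), matvec=lambda r: lu_solve(piv, r))
    it = [0]
    x, info = cg(A, b, M=M,
                 callback=lambda xk: it.__setitem__(0, it[0] + 1))
    return x, info, it[0]

# triangle graph G preconditioned by its spanning-path sparsifier P
LG = np.array([[ 2., -1., -1.], [-1.,  2., -1.], [-1., -1.,  2.]])
LP = np.array([[ 1., -1.,  0.], [-1.,  2., -1.], [ 0., -1.,  1.]])
b = np.array([1.0, 0.0, -1.0])
x, info, nit = pcg_iterations(LG, LP, b)
```

Better sparsifiers (smaller relative condition number) reduce the iteration count at the price of a denser preconditioner, which is exactly the trade-off reported in Table 2.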

4.3 A Scalable Spectral Graph Partitioner

It has been shown that by applying only a few inverse power iterations, the approximate Fiedler vector corresponding to the smallest nonzero eigenvalue of the (normalized) graph Laplacian matrix can be obtained for computing high-quality graph partitioning solutions [20]. Therefore, using the spectral sparsifiers computed by the proposed spectral sparsification algorithm can immediately accelerate the PCG solver within inverse power iterations, leading to scalable performance for graph partitioning problems [20]. In fact, if the spectral sparsifier is already a good approximation of the original graph, its Fiedler vector can be directly used for partitioning the original graph.

We implement the accelerated spectral graph partitioning algorithm and test it with sparse matrices from [6] and several 2D mesh graphs synthesized with random edge weights. As shown in Table 3, the graphs associated with the sparse matrices have been partitioned into two pieces using the sign-cut method [18] according to the approximate Fiedler vectors computed by a few steps of inverse power iterations. The direct solver [5] and the preconditioned iterative solver are invoked within each inverse power iteration for updating the approximate Fiedler vectors. The sign ratio denotes the ratio of nodes assigned positive and negative signs according to the approximate Fiedler vector, and "Rel.Err." denotes the relative error of the proposed iterative solver compared to the direct solver, measured by the fraction of nodes assigned different signs by the two approximate Fiedler vectors. The reported solution times (excluding sparsification time) and memory costs are given for the direct and iterative methods, respectively. We extract sparsifiers with the same spectral similarity level for all test cases.

Test Cases    |V|     sign ratio   direct: time (mem)   iterative: time (mem)   Rel.Err.
G3_circuit 1.6E6 1.35 52.3s (2.3G) 7.6s (0.3G) 2.2E-2
thermal2 1.2E6 1.00 13.0s (0.9G) 3.0s (0.2G) 6.8E-4
ecology2 1.0E6 1.03 12.1s (0.7G) 3.4s (0.2G) 8.9E-3
tmt_sym 0.7E6 0.99 10.2s (0.6G) 1.9s (0.1G) 2.1E-2
paraboli_fem 0.5E6 0.98 8.8s (0.4G) 2.4s (0.1G) 3.9E-2
mesh_1M 1.0E6 1.01 10.2s (0.7G) 1.7s (0.2G) 3.3E-3
mesh_4M 4.5E6 0.99 49.6s (3.0G) 8.2s (0.7G) 7.5E-3
mesh_9M 9.0E6 0.99 138.5s (6.9G) 13.3s (1.5G) 7.8E-4
Table 3: Results of spectral graph partitioning.
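The partitioning procedure can be sketched as inverse power iteration followed by a sign cut (a toy version, with `np.linalg.pinv` standing in for the PCG solve on the sparsifier):

```python
import numpy as np

def lap(n, edges):
    """Dense graph Laplacian from a weighted edge list (i, j, w)."""
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

def fiedler_sign_cut(L, iters=100, seed=0):
    """Approximate Fiedler vector via inverse power iteration on
    pinv(L), then partition nodes by the sign of its entries."""
    rng = np.random.default_rng(seed)
    Lp = np.linalg.pinv(L)
    x = rng.standard_normal(L.shape[0])
    x -= x.mean()                        # stay orthogonal to all-ones
    for _ in range(iters):
        x = Lp @ x
        x -= x.mean()
        x /= np.linalg.norm(x)
    return x >= 0

# two unit-weight triangles joined by a weak bridge edge
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
         (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0),
         (2, 3, 0.1)]
part = fiedler_sign_cut(lap(6, edges))
```

The sign cut recovers the two triangles as the two partitions, cutting only the weak bridge edge.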

4.4 Sparsification of Other Complex networks

Test Cases       |V|     |E|     sparsification time   eigs time: original (sparsified)
fe_tooth 7.8E4 4.5E5 3.0s 14.5s (2.7s)
appu 1.4E4 9.2E5 5.4s 2,400s (15s)
coAuthorsDBLP 3.0E5 1.0E6 7.2s 2,047s (36s)
auto 4.5E5 3.3E6 29.0s N/A (54s)
RCV-80NN 1.9E5 1.2E7 46.5s N/A (170s)
Table 4: Results of complex network sparsification.

As shown in Table 4, a few finite element, protein, data, and social networks have been spectrally sparsified using the proposed similarity-aware method. The table reports the total time for extracting each sparsifier, the ratio of the largest generalized eigenvalues before and after adding off-tree edges into the spanning-tree sparsifier, and the time for computing the first ten eigenvectors of the original (sparsified) graph Laplacians using the "eigs" function in Matlab. Since spectral sparsifiers can well approximate the spectral (structural) properties of the original graph, the sparsified graphs can be leveraged for accelerating many numerical and graph-related tasks. For example, spectral clustering (partitioning) using the original "RCV-80NN" (80-nearest-neighbor) graph cannot be performed on our server due to insufficient memory, while it takes only a few minutes using the sparsified graph.

5 Conclusions

For the first time, this paper introduces a similarity-aware spectral graph sparsification framework that can be immediately leveraged to develop fast numerical and graph-related algorithms. Motivated by recent graph signal processing concepts, an iterative graph densification procedure based on spectral embedding and filtering of off-tree edges has been proposed for extracting ultra-sparse yet spectrally-similar graph sparsifiers, which enables flexible trade-offs between the complexity and spectral similarity of the sparsified graph in numerical and graph-related applications. Extensive experimental results have confirmed the effectiveness and scalability of an iterative matrix solver and a spectral graph partitioning algorithm for a variety of large-scale, real-world graph problems, such as VLSI and finite element analysis problems, as well as data and social networks.

6 Acknowledgments

This work is supported in part by the National Science Foundation under Grants CCF-1350206 and CCF-1618364.

References

  • [1] I. Abraham and O. Neiman. Using petal-decompositions to build a low stretch spanning tree. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing (STOC), pages 395–406. ACM, 2012.
  • [2] J. Batson, D. Spielman, and N. Srivastava. Twice-Ramanujan Sparsifiers. SIAM Journal on Computing, 41(6):1704–1721, 2012.
  • [3] P. Christiano, J. Kelner, A. Madry, D. Spielman, and S. Teng. Electrical flows, laplacian systems, and faster approximation of maximum flow in undirected graphs. In Proc. ACM STOC, pages 273–282, 2011.
  • [4] M. B. Cohen, J. Kelner, J. Peebles, R. Peng, A. B. Rao, A. Sidford, and A. Vladu. Almost-linear-time algorithms for Markov chains and new spectral primitives for directed graphs. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 410–419. ACM, 2017.
  • [5] T. Davis. CHOLMOD: sparse supernodal Cholesky factorization and update/downdate. [Online]. Available: http://www.cise.ufl.edu/research/sparse/cholmod/, 2008.
  • [6] T. Davis and Y. Hu. The university of florida sparse matrix collection. ACM Trans. on Math. Soft. (TOMS), 38(1):1, 2011.
  • [7] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844–3852, 2016.
  • [8] M. Elkin, Y. Emek, D. Spielman, and S. Teng. Lower-stretch spanning trees. SIAM Journal on Computing, 38(2):608–628, 2008.
  • [9] Z. Feng. Spectral graph sparsification in nearly-linear time leveraging efficient spectral perturbation analysis. In Design Automation Conference (DAC), 2016 53rd ACM/EDAC/IEEE, pages 1–6. IEEE, 2016.
  • [10] Y. Koren. On spectral graph drawing. In International Computing and Combinatorics Conference, pages 496–508. Springer, 2003.
  • [11] I. Koutis, G. Miller, and R. Peng. Approaching Optimality for Solving SDD Linear Systems. In Proc. IEEE FOCS, pages 235–244, 2010.
  • [12] Y. T. Lee and H. Sun. An SDP-based Algorithm for Linear-sized Spectral Sparsification. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, pages 678–687, New York, NY, USA, 2017. ACM.
  • [13] O. Livne and A. Brandt. Lean algebraic multigrid (LAMG): Fast graph Laplacian linear solver. SIAM Journal on Scientific Computing, 34(4):B499–B522, 2012.
  • [14] R. Peng, H. Sun, and L. Zanetti. Partitioning well-clustered graphs: Spectral clustering works. In Proceedings of The 28th Conference on Learning Theory (COLT), pages 1423–1455, 2015.
  • [15] Y. Saad. Numerical Methods for Large Eigenvalue Problems: Revised Edition, volume 66. Siam, 2011.
  • [16] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.
  • [17] D. Spielman and N. Srivastava. Graph sparsification by effective resistances. SIAM Journal on Computing, 40(6):1913–1926, 2011.
  • [18] D. Spielman and S. Teng. Spectral partitioning works: Planar graphs and finite element meshes. In Foundations of Computer Science (FOCS), 1996. Proceedings., 37th Annual Symposium on, pages 96–105. IEEE, 1996.
  • [19] D. Spielman and S. Teng. Spectral sparsification of graphs. SIAM Journal on Computing, 40(4):981–1025, 2011.
  • [20] D. Spielman and S. Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM Journal on Matrix Analysis and Applications, 35(3):835–885, 2014.
  • [21] D. Spielman and J. Woo. A note on preconditioning by low-stretch spanning trees. arXiv preprint arXiv:0903.2816, 2009.
  • [22] S.-H. Teng. Scalable algorithms for data and network analysis. Foundations and Trends® in Theoretical Computer Science, 12(1–2):1–274, 2016.
  • [23] Z. Zhao and Z. Feng. A spectral graph sparsification approach to scalable vectorless power grid integrity verification. In Proceedings of the 54th Annual Design Automation Conference 2017, page 68. ACM, 2017.
  • [24] Z. Zhao, Y. Wang, and Z. Feng. SAMG: Sparsified Graph Theoretic Algebraic Multigrid for Solving Large Symmetric Diagonally Dominant (SDD) Matrices. In Proceedings of the 36th International Conference on Computer-Aided Design (ICCAD). ACM, 2017.