1 Introduction
Spectral methods are playing increasingly important roles in many graph and numerical applications [22], such as scientific computing [20], numerical optimization [3], data mining [14], graph analytics [10], machine learning [7], graph signal processing [16], and VLSI computer-aided design [23, 9]. For example, classical spectral graph partitioning (data clustering) algorithms embed original graphs into low-dimensional space using the first few nontrivial eigenvectors of graph Laplacians and subsequently perform graph partitioning (data clustering) on the low-dimensional graphs to obtain high-quality solutions [14]. Recent spectral graph sparsification research [17, 2, 19, 14, 4, 12] allows computing nearly-linear-sized subgraphs (sparsifiers) that can robustly preserve the spectrum (i.e., eigenvalues and eigenvectors) of the original graph's Laplacian, which immediately leads to a series of theoretically nearly-linear-time numerical and graph algorithms for solving sparse matrices, graph-based semi-supervised learning (SSL), spectral graph partitioning (data clustering), and max-flow problems [11, 19, 3, 20]. For example, sparsified circuit networks allow for developing more scalable computer-aided design (CAD) algorithms for large VLSI systems [9, 23]; sparsified social (data) networks enable more efficient understanding and analysis of large social (data) networks [22]; sparsified matrices can be immediately leveraged to accelerate the solution of large linear systems of equations [24]. To this end, a spectral sparsification algorithm leveraging an edge sampling scheme that sets sampling probabilities proportional to edge effective resistances (of the original graph) has been proposed in [17].
A practically-efficient, nearly-linear complexity spectral graph sparsification algorithm has recently been introduced in [9], which first extracts a "spectrally critical" spanning-tree subgraph as a backbone of the sparsifier, and subsequently recovers a small portion of dissimilar "spectrally critical" off-tree edges to the spanning tree. However, in many scientific computing and graph-related applications, it is important to compute spectral graph sparsifiers of a desired spectral similarity level: retaining too few edges may lead to a poor approximation of the original graph, whereas retaining too many edges can result in high computational complexity. For example, when using a preconditioned conjugate gradient (PCG) solver to solve a symmetric diagonally dominant (SDD) matrix for multiple right-hand-side (RHS) vectors, it is hoped that the PCG solver will converge to a good solution as quickly as possible, which usually requires the sparsifier (preconditioner) to be highly spectrally-similar to the original problem; on the other hand, in many graph partitioning tasks only the Fiedler vector (the first nontrivial eigenvector) of the graph Laplacian is needed [18], so even a sparsifier with much lower spectral similarity will suffice. This paper introduces a similarity-aware spectral graph sparsification framework that leverages efficient spectral off-tree edge embedding and filtering schemes to construct spectral sparsifiers with guaranteed spectral similarity. The contributions of this work are summarized as follows:

We present a similarity-aware spectral graph sparsification framework by leveraging spectral off-tree edge embedding and filtering schemes motivated by recent graph signal processing techniques [16].

An iterative graph densification procedure is proposed to incrementally improve the approximation of the sparsifier, which enables flexible trade-offs between the complexity and the spectral similarity of the sparsified graph.

Extensive experiments have been conducted to validate the proposed method in various numerical and graph-related applications, such as solving sparse SDD matrices, spectral graph partitioning, and simplification of large social and data networks.
2 Spectral Graph Sparsification
Consider a graph $G=(V,E,w)$ with $V$ denoting the vertex (data point) set of the graph, $E$ denoting the edge set of the graph, and $w$ denoting a weight (similarity) function that assigns positive weights to all edges. The graph Laplacian $L_G$ of $G$ is an SDD matrix defined as follows:
$L_G(p,q)=\begin{cases}-w(p,q) & \text{if } (p,q)\in E,\\ \sum_{(p,t)\in E} w(p,t) & \text{if } p=q,\\ 0 & \text{otherwise,}\end{cases}$  (1)
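As a concrete illustration of (1), the sketch below builds a dense Laplacian from a weighted edge list (a NumPy illustration of ours, not the paper's C++ implementation; the helper name is hypothetical):

```python
import numpy as np

def graph_laplacian(n, edges):
    """Dense Laplacian of a weighted undirected graph.

    edges: iterable of (p, q, w) with 0-based vertices and w > 0.
    Off-diagonals hold -w(p, q); each diagonal entry holds the weighted
    node degree, so all row sums are zero and the matrix is SDD.
    """
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w
        L[q, p] -= w
        L[p, p] += w
        L[q, q] += w
    return L

# A 3-node unit-weight path graph:
L = graph_laplacian(3, [(0, 1, 1.0), (1, 2, 1.0)])
```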
Spectral graph sparsification [19] aims to preserve the original graph spectrum within ultra-sparse subgraphs (graph sparsifiers), which allows preserving not only cuts in the graph but also eigenvalues and eigenvectors of the original graph Laplacian, distances (e.g., effective resistances) between vertices, low-dimensional graph embeddings, etc. Two graphs $G$ and $P$ are said to be $\sigma$-spectrally similar if for all real vectors $x$ their quadratic forms satisfy:
$\frac{x^\top L_P x}{\sigma} \le x^\top L_G x \le \sigma\, x^\top L_P x.$  (2)
Define the relative condition number to be $\kappa(L_G,L_P)=\lambda_{\max}/\lambda_{\min}$, where $\lambda_{\max}$ and $\lambda_{\min}$ denote the largest and smallest generalized eigenvalues satisfying:
$L_G u = \lambda L_P u,$  (3)
with $u$ denoting the generalized eigenvector of $(L_G,L_P)$. It can be further shown that $\kappa(L_G,L_P)\le\sigma^2$, which indicates that a smaller relative condition number $\kappa$ or similarity parameter $\sigma$ corresponds to a higher spectral similarity. Obviously, we can simply use $\sigma^2$ to denote the upper bound of the relative condition number.
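For small graphs, the generalized eigenvalues in (3) and the relative condition number can be checked directly; the sketch below (a made-up toy pair, not from the paper) uses `scipy.linalg.eigh`. Since both Laplacians share the all-one null space, a rank-1 shift toward that vector makes the pencil definite without disturbing the remaining eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w; L[q, p] -= w
        L[p, p] += w; L[q, q] += w
    return L

# G: 4-node unit cycle; P: a spanning path of G (its "sparsifier").
LG = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])
LP = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])

# Shift both by the rank-1 projector onto the all-one vector: the shared
# null space maps to eigenvalue 1 and all other eigenvalues are unchanged.
J = np.ones((4, 4)) / 4
lam = eigh(LG + J, LP + J, eigvals_only=True)
kappa = lam.max() / lam.min()   # relative condition number
```

For this pair, the single off-tree edge (3,0) sees a tree effective resistance of 3, so $\lambda_{\max}=1+3=4$ while $\lambda_{\min}=1$, giving $\kappa=4$.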
3 Similarity-Aware Spectral Sparsification by Edge Filtering
3.1 Overview of Our Approach
The proposed method for similarity-aware spectral sparsification of undirected graphs is summarized as follows. For a given input graph, the algorithm flow involves the following key procedures: (a) low-stretch spanning tree [8, 1] extraction based on the original graph Laplacian; (b) spectral (generalized eigenvalue) embedding and filtering of off-tree edges by leveraging the recent spectral perturbation analysis framework [9]; (c) incremental sparsifier improvement (graph densification) by gradually adding small portions of dissimilar off-tree edges to the spanning tree. Fig. 1 shows the spectral drawings [10] of an airfoil graph [6] as well as its spectrally-similar subgraph computed by the proposed similarity-aware spectral sparsification algorithm.
In the rest of this paper, we assume that $G$ is a weighted, undirected, and connected graph, whereas $P$ is its sparsifier. To simplify our analysis, we assume the edge weights in the sparsifier remain the same as the original ones, though edge rescaling schemes [19] can be applied to further improve the approximation. The descending generalized eigenvalues of $L_P^+ L_G$ are denoted by $\lambda_{\max}=\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n=\lambda_{\min}$, where $L_P^+$ denotes the Moore-Penrose pseudoinverse of $L_P$.
3.2 Spectral Embedding of Off-Tree Edges
It has been shown that there are not too many large generalized eigenvalues for spanning-tree sparsifiers [21]: $L_P^+ L_G$ has at most $k$ generalized eigenvalues greater than $\mathrm{st}_P(G)/k$, where $\mathrm{st}_P(G)$ denotes the total stretch of the spanning-tree subgraph $P$ with respect to the original graph $G$. Recent research shows that every undirected graph has a low-stretch spanning tree (LSST) such that [8, 1]:
$\mathrm{st}_P(G)=O(m\log n \log\log n),$  (4)
where $m=|E|$ and $n=|V|$. As a result, it is possible to construct an ultra-sparse yet spectrally-similar sparsifier by recovering only a small portion of important off-tree edges to the spanning tree: for example, $\sigma$-similar spectral sparsifiers with $O(m\log n\log\log n/\sigma^2)$ off-tree edges can be computed efficiently using the perturbation-based method of [9].
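SciPy provides no low-stretch spanning tree routine, so as a rough stand-in the sketch below takes a maximum-weight spanning tree (a minimum spanning tree over reciprocal weights), a common practical heuristic that keeps the heaviest edges in the backbone but does not carry the stretch guarantee of (4); the helper name is ours:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def backbone_spanning_tree(n, edges):
    """Maximum-weight spanning tree as a cheap stand-in for an LSST.

    Running a minimum spanning tree on reciprocal weights 1/w keeps the
    heaviest (most "similar") edges in the tree backbone.
    """
    rows = [p for p, q, w in edges]
    cols = [q for p, q, w in edges]
    inv_w = [1.0 / w for p, q, w in edges]
    A = csr_matrix((inv_w, (rows, cols)), shape=(n, n))
    T = minimum_spanning_tree(A).tocoo()   # keeps the n-1 smallest 1/w edges
    return [(int(p), int(q)) for p, q in zip(T.row, T.col)]

# A 4-cycle where edge (1, 2) is much weaker than the others:
tree = backbone_spanning_tree(4, [(0, 1, 4.0), (1, 2, 1.0),
                                  (2, 3, 4.0), (3, 0, 4.0)])
```

The weak edge (1, 2) is the one left out of the backbone, so it becomes an off-tree candidate for the later embedding and filtering steps.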
To identify important off-tree edges, the following generalized eigenvalue perturbation analysis is considered [9]:
$L_G u_i = \lambda_i L_P u_i, \qquad L_G\left(u_i+\delta u_i\right)=\left(\lambda_i+\delta\lambda_i\right)\left(L_P+\delta L_P\right)\left(u_i+\delta u_i\right),$  (5)
where the perturbation matrix $\delta L_P$ accounts for the inclusion of extra off-tree edges into $L_P$ and results in perturbed generalized eigenvalues and eigenvectors $\lambda_i+\delta\lambda_i$ and $u_i+\delta u_i$ for $i=1,\dots,n$, respectively. The key to effective spectral sparsification is to identify the off-tree edges that will result in the greatest reduction of the dominant generalized eigenvalues. To this end, the following scheme for embedding generalized eigenvalues into each off-tree edge is adopted in this work [9]:
Step 1: Start with an initial random vector $h_0=\sum_{i=1}^{n}\alpha_i u_i$, where $u_i$ are the $L_P$-orthogonal generalized eigenvectors of $L_P^+ L_G$ that satisfy $u_i^\top L_P u_i=1$, and $u_i^\top L_P u_j=0$ for $i\ne j$;
Step 2: Perform $t$-step generalized power iterations with $h_0$ to obtain $h_t=\left(L_P^+ L_G\right)^t h_0$; $h_t$ will be a good approximation of the dominant generalized eigenvectors;
Step 3: Compute the Laplacian quadratic form $Q(h_t)$ for $\delta L_{P,\max}$ with $h_t$:
$Q(h_t)=h_t^\top\, \delta L_{P,\max}\, h_t=\sum_{(p,q)\in E\setminus E_P} w(p,q)\left(e_{pq}^\top h_t\right)^2=\sum_{(p,q)\in E\setminus E_P} Q_{pq}(h_t),$  (6)
where $\delta L_{P,\max}$ denotes the perturbation of $L_P$ including all off-tree edges, $e_{pq}\in\mathbb{R}^{n}$ denotes the vector with the $p$-th element being $1$, the $q$-th element being $-1$, and all other elements being $0$, and $Q_{pq}(h_t)$ denotes the edge Joule heat of the off-tree edge $(p,q)$. The amplitude of $Q(h_t)$ reflects the spectral similarity between graphs $G$ and $P$: a larger $Q(h_t)$ indicates a greater $\lambda_{\max}$ and thus lower spectral similarity. More importantly, (6) allows embedding generalized eigenvalues into the Laplacian quadratic form of each off-tree edge and ranking each off-tree edge according to its edge Joule heat (spectral criticality): recovering the off-tree edges with the largest $Q_{pq}(h_t)$ will most significantly decrease the largest generalized eigenvalues. In practice, a small number of generalized power iterations will suffice for the purpose of spectral edge embedding.
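Steps 1-3 can be sketched as follows (a NumPy toy of ours, not the paper's implementation; the dense pseudoinverse stands in for the fast Laplacian solve [13, 24], and the example graph is made up):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w; L[q, p] -= w
        L[p, p] += w; L[q, q] += w
    return L

def offtree_joule_heat(LG, LP_solve, offtree_edges, t=2, seed=0):
    """Embed generalized eigenvalues into off-tree edges, following (6).

    LP_solve(b) should apply the pseudoinverse of the sparsifier
    Laplacian (a fast solver in practice); t generalized power
    iterations align h_t with the dominant generalized eigenvectors.
    """
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(LG.shape[0])
    h -= h.mean()                  # stay orthogonal to the all-one vector
    for _ in range(t):
        h = LP_solve(LG @ h)
        h -= h.mean()
    # Edge Joule heat: Q_pq(h_t) = w(p,q) * (h_p - h_q)^2
    return {(p, q): w * (h[p] - h[q]) ** 2 for p, q, w in offtree_edges}

# Toy example: a 4-node path tree plus two off-tree edges.
tree = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
off = [(0, 3, 1.0), (0, 2, 1.0)]
LG, LP = laplacian(4, tree + off), laplacian(4, tree)
heat = offtree_joule_heat(LG, lambda b: np.linalg.pinv(LP) @ b, off, t=8)
```

Edge (0, 3) spans a longer tree path (stretch 3) than edge (0, 2) (stretch 2), so it receives the larger Joule heat, matching the stretch-based ranking discussed below.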
3.3 "Spectrally-Unique" Off-Tree Edges
To simplify the following analysis, we define a "spectrally-unique" off-tree edge to be one that connects vertices $p_i$ and $q_i$ and only impacts a single large generalized eigenvalue $\lambda_i$. Then the truncated version of (6) including the top-$k$ dominant "spectrally-unique" off-tree edges for fixing the top-$k$ largest eigenvalues of $L_P^+ L_G$ can be expressed as follows:
$Q_k(h_t)=\sum_{i=1}^{k}\alpha_i^2\lambda_i^{2t}\, w(p_i,q_i)\left(e_{p_i q_i}^\top u_i\right)^2.$  (7)
Since each "spectrally-unique" off-tree edge only impacts one generalized eigenvalue, we can express $\lambda_i$ through the quadratic form of edge $(p_i,q_i)$ according to (6), which leads to:
$\lambda_i \approx 1 + w(p_i,q_i)\left(e_{p_i q_i}^\top u_i\right)^2.$  (8)
Then the effective resistance of edge $(p_i,q_i)$ in $P$ becomes:
$R^{\mathrm{eff}}_{p_i q_i}=e_{p_i q_i}^\top L_P^+\, e_{p_i q_i},$  (9)
which immediately leads to:
$\lambda_i \approx 1 + w(p_i,q_i)\, R^{\mathrm{eff}}_{p_i q_i}.$  (10)
Since the stretch of off-tree edge $(p_i,q_i)$ is computed by $\mathrm{st}_P(p_i,q_i)=w(p_i,q_i)\,R^{\mathrm{eff}}_{p_i q_i}$, (10) also indicates that $\lambda_i\approx 1+\mathrm{st}_P(p_i,q_i)$ holds for "spectrally-unique" off-tree edges. Consequently, the key off-tree edges identified by (6) or (10) will have the largest stretch values and therefore most significantly impact the largest eigenvalues of $L_P^+ L_G$. The edge Joule heat in (6) can thus be considered a randomized version of the edge stretch, further scaled up by a factor of $\alpha_i^2\lambda_i^{2t}$.
3.4 Spectral Sparsification as a Graph Filter
Although (6) and (10) provide a spectral ranking for each off-tree edge, it is not clear how many off-tree edges should be recovered to the spanning tree for achieving a desired spectral similarity level. To this end, we introduce a simple yet effective spectral off-tree edge filtering scheme motivated by recent graph signal processing techniques [16]. To more efficiently analyze signals on general undirected graphs, graph signal processing techniques have been extensively studied in recent years [16]. There is a clear analogy between traditional signal processing based on classical Fourier analysis and graph signal processing: 1) the signals at different time points in classical Fourier analysis correspond to the signals at different nodes in an undirected graph; 2) the more slowly oscillating functions in the time domain correspond to the graph Laplacian eigenvectors associated with lower eigenvalues, whose components vary more slowly (smoothly) across the graph. A comprehensive review of fundamental signal processing operations, such as filtering, translation, modulation, dilation, and downsampling in the graph setting, has been provided in [16].
Spectral sparsification aims to maintain the simplest subgraph sufficient for preserving the slowly-varying or "low-frequency" signals on graphs, and therefore can be regarded as a "low-pass" graph filter. In other words, such spectrally sparsified graphs will preserve the eigenvectors associated with low eigenvalues more accurately than those associated with high eigenvalues, and thus will retain "low-frequency" graph signals sufficiently well, but not highly-oscillating (signal) components, due to the missing edges.
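This low-pass behavior is easy to see numerically; in the toy comparison below (our example, not from the paper), a sparsifier that drops one cycle edge preserves the quadratic form, i.e. the "energy", of a smooth graph signal far more accurately than that of a highly oscillating one:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w; L[q, p] -= w
        L[p, p] += w; L[q, q] += w
    return L

n = 8
cycle = [(i, (i + 1) % n, 1.0) for i in range(n)]   # original graph G
path = cycle[:-1]                                    # "sparsifier": drop one edge
LG, LP = laplacian(n, cycle), laplacian(n, path)

smooth = np.cos(2 * np.pi * np.arange(n) / n)   # low-frequency signal
rough = (-1.0) ** np.arange(n)                  # highest-frequency signal

def rel_err(x):
    """Relative change of the Laplacian quadratic form under sparsification."""
    return abs(x @ LG @ x - x @ LP @ x) / (x @ LG @ x)

err_smooth, err_rough = rel_err(smooth), rel_err(rough)
```

For this example the smooth signal's energy changes by under 4%, while the oscillating signal loses 12.5% of its energy across the removed edge.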
In practice, preserving the spectral (structural) properties of the original graph within the spectral sparsifier is key to the design of many fast numerical and graph-related algorithms [17, 11, 3, 20]. For example, when using a spectral sparsifier as a preconditioner in preconditioned conjugate gradient (PCG) iterations, the convergence rate depends only on the spectral similarity (or relative condition number) for achieving a desired accuracy level, while in spectral graph partitioning and data clustering tasks only the first few eigenvectors associated with the smallest nontrivial eigenvalues of the graph Laplacian are needed [18, 14].
3.5 Off-Tree Edge Filtering with Joule Heat
To recover only the off-tree edges that are most critical for achieving the desired spectral similarity level, we propose the following scheme for truncating "spectrally-unique" off-tree edges based on each edge's Joule heat. For a spanning-tree preconditioner, since there will be at most $k$ generalized eigenvalues greater than $\mathrm{st}_P(G)/k$, the following simple yet nearly worst-case generalized eigenvalue distribution can be assumed:
$\lambda_i \approx \frac{\mathrm{st}_P(G)}{i}, \qquad i=1,2,\dots$  (11)
To most economically select the top-$k$ "spectrally-unique" off-tree edges that will dominantly impact the top-$k$ largest generalized eigenvalues, the following sum of quadratic forms (Joule heat levels) can be computed based on (6) by performing $t$-step generalized power iterations with multiple random vectors $h_0^{(r)}$:
$\sum_{r=1}^{R} Q\!\left(h_t^{(r)}\right)=\sum_{r=1}^{R}\sum_{(p,q)\in E\setminus E_P} Q_{pq}\!\left(h_t^{(r)}\right).$  (12)
The goal is to select the top-$k$ "spectrally-unique" off-tree edges for fixing the top-$k$ largest generalized eigenvalues such that the resulting upper bound of the relative condition number becomes $\tilde{\lambda}_{\max}/\tilde{\lambda}_{\min}$, where $\tilde{\lambda}_{\max}$ and $\tilde{\lambda}_{\min}$ denote the largest and smallest eigenvalues of $L_P^+ L_G$ after adding the top-$k$ "spectrally-unique" off-tree edges. Then we have:
$\tilde{\lambda}_{\max}\approx\lambda_{k+1}\approx\frac{\mathrm{st}_P(G)}{k+1}.$  (13)
When using multiple random vectors $h_0^{(r)}$ for computing (12), it is expected that the random coefficients $\alpha_i^{(r)}$ average out across vectors, which allows us to define the normalized edge Joule heat $\theta_i$ for the $i$-th "spectrally-unique" off-tree edge through the following simplifications:
$\theta_i=\frac{Q_{p_i q_i}(h_t)}{Q_{\max}}\approx\left(\frac{\lambda_i}{\lambda_{\max}}\right)^{2t+1}.$  (14)
The key idea of the proposed similarity-aware spectral sparsification is to leverage the normalized Joule heat (14) as a threshold for filtering off-tree edges: only the off-tree edges with normalized Joule heat values greater than the threshold will be selected for inclusion into the spanning tree for achieving the desired spectral similarity ($\sigma$) level. Although the above scheme is derived for filtering "spectrally-unique" off-tree edges, general off-tree edges also can be filtered using similar strategies. Since adding the off-tree edges with the largest Joule heat to the subgraph will mainly impact the largest generalized eigenvalues but not the smallest ones, we will assume $\tilde{\lambda}_{\min}\approx\lambda_{\min}$, and use the following edge truncation scheme for filtering general off-tree edges: off-tree edge $(p,q)$ will be included into the sparsifier if its normalized Joule heat value is greater than the threshold determined by:
$\frac{Q_{pq}(h_t)}{Q_{\max}}>\theta_\sigma=\left(\frac{\sigma^2\lambda_{\min}}{\lambda_{\max}}\right)^{2t+1},$  (15)
where $\theta_\sigma$ denotes the threshold for achieving the $\sigma$ spectral similarity level in the sparsifier, and $Q_{\max}$ denotes the maximum Joule heat of all off-tree edges computed by (6) with multiple initial random vectors.
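Given estimates of the extreme eigenvalues, the filtering rule becomes a one-liner. The sketch below (our naming, and it assumes the threshold takes the form $\theta_\sigma=(\sigma^2\lambda_{\min}/\lambda_{\max})^{2t+1}$ as derived here) keeps an off-tree edge only when its normalized Joule heat clears the similarity-dependent threshold:

```python
def filter_offtree_edges(heat, lam_max, lam_min, sigma, t):
    """Similarity-aware edge filter.

    heat: dict mapping off-tree edge -> Joule heat from (6).
    A looser similarity target (larger sigma) raises the threshold,
    so fewer off-tree edges are recovered into the spanning tree.
    """
    q_max = max(heat.values())
    theta_sigma = (sigma ** 2 * lam_min / lam_max) ** (2 * t + 1)
    return [e for e, q in heat.items() if q / q_max > theta_sigma]
```

For example, with `lam_max = 10`, `lam_min = 1`, and `t = 1`, a tight target `sigma = 2` keeps both of the edges in `{(0, 3): 8.0, (0, 2): 1.0}`, while a loose target `sigma = 3` keeps only the hottest edge.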
3.6 Estimation of Extreme Eigenvalues
To apply the above spectral off-tree edge filtering scheme, we need to compute the threshold $\theta_\sigma$ in (15), which further requires estimating the extreme generalized eigenvalues $\lambda_{\max}$ and $\lambda_{\min}$ of $L_P^+ L_G$. In this work, we propose the following efficient methods for computing these extreme generalized eigenvalues.
3.6.1 Estimating $\lambda_{\max}$ via Power Iterations
Since generalized power iterations converge at a geometric rate determined by the separation $\lambda_2/\lambda_{\max}$ of the two largest generalized eigenvalues, the error of the estimated eigenvalue will decrease quickly when $\lambda_2/\lambda_{\max}$ is small. It has been shown that the largest eigenvalues of $L_P^+ L_G$ are well separated from each other [21], which thus leads to very fast convergence of generalized power iterations for estimating $\lambda_{\max}$. To achieve scalable performance of power iterations, we can adopt recent graph-theoretic algebraic multigrid (AMG) methods for solving the sparsified Laplacian matrix [13, 24].
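A minimal sketch of this estimator follows (dense NumPy stand-ins of ours; a real implementation would use the AMG solver [13, 24] in place of the pseudoinverse):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w; L[q, p] -= w
        L[p, p] += w; L[q, q] += w
    return L

def estimate_lam_max(LG, LP, LP_solve, iters=10, seed=0):
    """Generalized power iteration for lambda_max of (L_G, L_P).

    Each step applies L_P^+ L_G; the generalized Rayleigh quotient of
    the iterate converges to lambda_max from below at a geometric rate
    set by lambda_2 / lambda_max.
    """
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(LG.shape[0])
    h -= h.mean()
    for _ in range(iters):
        h = LP_solve(LG @ h)
        h -= h.mean()
        h /= np.linalg.norm(h)
    return (h @ LG @ h) / (h @ LP @ h)

# 4-node cycle G vs. its spanning path P: lambda_max = 1 + R_eff(0,3) = 4.
LG = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])
LP = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])
lam_max = estimate_lam_max(LG, LP, lambda b: np.linalg.pinv(LP) @ b, iters=20)
```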
3.6.2 Estimating $\lambda_{\min}$ via Node Coloring
Since the smallest eigenvalues of $L_P^+ L_G$ are crowded together [21], using (shifted) inverse power iterations may not be efficient due to the extremely slow convergence rate. To the best of our knowledge, none of the existing eigenvalue decomposition methods can efficiently compute $\lambda_{\min}$.
This work exploits the following Courant-Fischer theorem for generalized eigenvalue problems:
$\lambda_{\min}=\min_{x\ne 0}\frac{x^\top L_G x}{x^\top L_P x},$  (16)
where $x$ is also required to be orthogonal to the all-one vector. (16) indicates that if we can find a vector $x$ that minimizes the ratio between the quadratic forms of the original and sparsified Laplacians, $\lambda_{\min}$ can be subsequently computed. By restricting the values in $x$ to be only $0$ or $1$, which can be considered as assigning one of two colors to each node in graphs $G$ and $P$, the following simplifications can be made:
$\lambda_{\min}\le\frac{x^\top L_G x}{x^\top L_P x}=\frac{\sum_{(p,q)\in E} w(p,q)\left(x_p-x_q\right)^2}{\sum_{(p,q)\in E_P} w(p,q)\left(x_p-x_q\right)^2}, \qquad x\in\{0,1\}^n,$  (17)
which will always allow estimating an upper bound of $\lambda_{\min}$. To this end, we first initialize all nodes with value $0$ and subsequently try to find a node $p$ such that the ratio between quadratic forms is minimized when its value is flipped to $1$:
$\lambda_{\min}\le\min_{p\in V}\frac{d_G(p)}{d_P(p)},$  (18)
where $d_G(p)$ and $d_P(p)$ denote the weighted degrees of node $p$ in $G$ and $P$, respectively.
The above procedure for estimating $\lambda_{\min}$ only requires finding the node with the smallest node degree ratio and thus can be easily implemented and efficiently performed for even very large graph problems. Our results for real-world graphs show that the proposed method is highly efficient and can reasonably estimate the smallest generalized eigenvalues when compared with existing generalized eigenvalue methods [15].
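The single-node coloring bound reduces to a scan over diagonal (weighted-degree) ratios; a sketch with hypothetical naming:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w; L[q, p] -= w
        L[p, p] += w; L[q, q] += w
    return L

def estimate_lam_min(LG, LP):
    """Upper-bound lambda_min via single-node colorings.

    Coloring a single node p (x = indicator of p) makes both quadratic
    forms equal that node's weighted degree, which sits on the Laplacian
    diagonal, so the smallest degree ratio d_G(p) / d_P(p) bounds
    lambda_min from above.
    """
    return float(np.min(np.diag(LG) / np.diag(LP)))

# Same cycle-vs-path pair: the true lambda_min is 1, and so is the bound.
LG = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])
LP = laplacian(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])
lam_min_bound = estimate_lam_min(LG, LP)
```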
3.7 Iterative Graph Densification
To achieve more effective edge filtering for similarity-aware spectral graph sparsification, we propose to iteratively recover off-tree edges to the sparsifier through an incremental graph densification procedure. Each densification iteration adds a small portion of "filtered" off-tree edges to the latest spectral sparsifier, while the spectral similarity is estimated to determine whether more off-tree edges are needed. Each graph densification iteration includes the following steps:

Estimate the spectral similarity by computing $\lambda_{\max}$ and $\lambda_{\min}$ using the methods described in Section 3.6;

If the spectral similarity is not satisfactory, continue with the following steps; otherwise, terminate the subgraph densification procedure.

Perform $t$-step generalized power iterations with multiple random vectors to compute the sum of Laplacian quadratic forms in (12);

Rank and filter each off-tree edge according to its normalized Joule heat value using the threshold in (15);

Check the similarity of each selected off-tree edge and only add dissimilar edges to the latest sparsifier.
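Putting the pieces together, the densification loop might look as follows on a toy graph (a dense NumPy sketch under the threshold form assumed above; it omits the per-edge dissimilarity check of the last step and is not the paper's C++ implementation):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w; L[q, p] -= w
        L[p, p] += w; L[q, q] += w
    return L

def densify(n, tree_edges, offtree_edges, sigma, max_iters=10, t=2, seed=0):
    """Iteratively add filtered off-tree edges until kappa <= sigma**2."""
    LG = laplacian(n, tree_edges + offtree_edges)
    kept, remaining = list(tree_edges), list(offtree_edges)
    rng = np.random.default_rng(seed)
    for _ in range(max_iters):
        LP = laplacian(n, kept)
        LP_pinv = np.linalg.pinv(LP)
        h = rng.standard_normal(n)
        h -= h.mean()
        for _ in range(t):                       # generalized power iterations
            h = LP_pinv @ (LG @ h)
            h -= h.mean()
        lam_max = (h @ LG @ h) / (h @ LP @ h)    # Rayleigh-quotient estimate
        lam_min = np.min(np.diag(LG) / np.diag(LP))  # node-coloring bound
        if lam_max / lam_min <= sigma ** 2 or not remaining:
            break                                # similar enough already
        heat = {e: e[2] * (h[e[0]] - h[e[1]]) ** 2 for e in remaining}
        q_max = max(heat.values())
        theta = (sigma ** 2 * lam_min / lam_max) ** (2 * t + 1)
        add = [e for e in remaining if heat[e] / q_max > theta]
        if not add:                              # always make progress
            add = [max(remaining, key=lambda e: heat[e])]
        kept += add
        remaining = [e for e in remaining if e not in add]
    return kept

tree = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
off = [(0, 3, 1.0), (0, 2, 1.0)]
loose = densify(4, tree, off, sigma=3.0)   # loose target: tree suffices
tight = densify(4, tree, off, sigma=1.05)  # tight target: recovers all edges
```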
4 Experimental results
The proposed spectral sparsification algorithm has been implemented in C++; the code is available at https://sites.google.com/mtu.edu/zhuofenggraphspar. The test cases used in this paper have been selected from a wide variety of matrices that have been used in circuit simulation, finite element analysis, machine learning, and data mining applications. If the original matrix is not a graph Laplacian, it is converted into a graph Laplacian by setting each edge weight to the absolute value of the corresponding nonzero entry in the lower triangular part of the matrix; if edge weights are not available in the original matrix file, a unit weight is assigned to all edges. All experiments have been conducted using a single CPU core of a computing platform running 64-bit RHEL 7.2 with a 12-core CPU.
4.1 Estimation of Extreme Eigenvalues
Table 1 test cases: fe_rotor, pdb1HYS, bcsstk36, brack2, raefsky3.
In Table 1, the extreme generalized eigenvalues ($\lambda_{\max}$ and $\lambda_{\min}$) estimated by the proposed methods (Section 3.6) are compared with the ones computed by the "eigs" function in Matlab for sparse matrices from [6], and the relative errors are also shown. $\lambda_{\max}$ is estimated using fewer than ten generalized power iterations.
We also illustrate the results of spectral edge ranking and filtering according to the Joule heat levels computed by one-step generalized power iteration using (6) in Fig. 2 for two sparse matrices from [6]. The thresholds of normalized edge Joule heat values required for spectral edge filtering are labeled using red dashed lines. It is observed in Fig. 2 that there is a sharp drop in the top normalized edge Joule heat values, which indicates that there are not many large eigenvalues of $L_P^+ L_G$ in either case and agrees well with the prior theoretical analysis [21].
4.2 A Scalable Sparse SDD Matrix Solver
The spectral sparsifier obtained by the proposed similarity-aware algorithm is also leveraged as a preconditioner in a PCG solver. The RHS input vector is generated randomly and the solver is set to converge to a fixed accuracy level for all test cases. In Table 2, $|V|$ and $|E|$ denote the numbers of nodes and edges in the original graph, whereas the remaining columns report, for two spectral similarity levels, the relative density of the sparsifier, the number of PCG iterations required for converging to the desired accuracy level, and the total graph sparsification time. As observed in all test cases, there are very clear trade-offs between graph density, computation time, and spectral similarity for all spectral sparsifiers extracted using the proposed method: sparsifiers with higher spectral similarities (smaller $\sigma$) allow converging to the required solution accuracy level in far fewer PCG iterations, but need to retain more edges in the subgraphs and thus require longer sparsification time.
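The PCG setup can be sketched as follows (SciPy's `cg` on a small made-up SDD system of ours; the Laplacians are grounded at node 0 to make them nonsingular, and a dense factorization of the sparsifier stands in for the fast sparsified-Laplacian solve):

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w; L[q, p] -= w
        L[p, p] += w; L[q, q] += w
    return L

n = 50
cycle = [(i, (i + 1) % n, 1.0) for i in range(n)]
tree = cycle[:-1]                       # spanning-path "sparsifier"

# Ground node 0 so both Laplacians become nonsingular SDD systems.
A = laplacian(n, cycle)[1:, 1:]
P = laplacian(n, tree)[1:, 1:]

rng = np.random.default_rng(0)
b = rng.standard_normal(n - 1)

# Preconditioner: apply P^{-1} (a Cholesky/AMG solve in practice).
M = LinearOperator(A.shape, matvec=lambda r: np.linalg.solve(P, r))
x, info = cg(A, b, M=M)
```

Here `info == 0` signals convergence to the default relative tolerance; counting matrix-vector products via the `callback` argument shows the iteration count shrinking as the sparsifier gets spectrally closer to the original graph.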
Graphs  |V|  |E|  density  PCG its.  time  density  PCG its.  time
G3_circuit  1.6E6  3.0E6  1.11  21  20s  1.05  37  8s
thermal2  1.2E6  3.7E6  1.14  20  23s  1.06  36  9s
ecology2  1.0E6  2.0E6  1.14  20  16s  1.06  40  5s
tmt_sym  0.7E6  2.2E6  1.21  19  16s  1.14  38  4s
paraboli_fem  0.5E6  1.6E6  1.22  18  16s  1.09  38  3s
4.3 A Scalable Spectral Graph Partitioner
It has been shown that by applying only a few inverse power iterations, the approximate Fiedler vector that corresponds to the smallest nonzero eigenvalue of the (normalized) graph Laplacian matrix can be obtained for computing high-quality graph partitioning solutions [20]. Therefore, using the spectral sparsifiers computed by the proposed spectral sparsification algorithm can immediately accelerate the PCG solver within the inverse power iterations, leading to scalable performance for graph partitioning problems [20]. In fact, if the spectral sparsifier is already a good approximation of the original graph, its Fiedler vector can be directly used for partitioning the original graph.
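Inverse power iteration for the approximate Fiedler vector and the sign cut can be sketched as follows (a dense toy version of ours; the solve inside each iteration is where the sparsifier-preconditioned PCG would go at scale):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for p, q, w in edges:
        L[p, q] -= w; L[q, p] -= w
        L[p, p] += w; L[q, q] += w
    return L

def approx_fiedler(L, iters=50, seed=0):
    """Inverse power iteration on L^+ converges to the eigenvector of
    the smallest nonzero eigenvalue (the Fiedler vector)."""
    Lp = np.linalg.pinv(L)      # replace with a preconditioned solve at scale
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(L.shape[0])
    for _ in range(iters):
        x -= x.mean()           # project out the all-one null space
        x = Lp @ x
        x /= np.linalg.norm(x)
    return x

# Two unit-weight triangles joined by a single bridge edge (2, 3):
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
         (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0), (2, 3, 1.0)]
x = approx_fiedler(laplacian(6, edges))
cut = np.sign(x)                # sign cut: two clusters
```

The sign cut recovers the natural two-cluster structure, splitting the graph across the bridge edge.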
We implement the accelerated spectral graph partitioning algorithm and test it with sparse matrices from [6] and several 2D mesh graphs synthesized with random edge weights. As shown in Table 3, the graphs associated with the sparse matrices have been partitioned into two pieces using the sign-cut method [18] according to the approximate Fiedler vectors computed by a few steps of inverse power iterations. A direct solver [5] and the proposed preconditioned iterative solver are invoked within each inverse power iteration for updating the approximate Fiedler vectors. The "ratio" column denotes the ratio of nodes assigned positive and negative signs according to the approximate Fiedler vector, and "Rel.Err." denotes the relative error of the proposed solver compared to the direct solver, measured by the fraction of nodes whose signs differ between the two approximate Fiedler vectors. The remaining columns report the total solution time (excluding sparsification time) and memory cost of the direct and iterative methods, respectively. We extract sparsifiers at the same spectral similarity level for all test cases.
Test Cases  |V|  ratio  direct: time (mem)  iterative: time (mem)  Rel.Err.
G3_circuit  1.6E6  1.35  52.3s (2.3G)  7.6s (0.3G)  2.2E-2
thermal2  1.2E6  1.00  13.0s (0.9G)  3.0s (0.2G)  6.8E-4
ecology2  1.0E6  1.03  12.1s (0.7G)  3.4s (0.2G)  8.9E-3
tmt_sym  0.7E6  0.99  10.2s (0.6G)  1.9s (0.1G)  2.1E-2
paraboli_fem  0.5E6  0.98  8.8s (0.4G)  2.4s (0.1G)  3.9E-2
mesh_1M  1.0E6  1.01  10.2s (0.7G)  1.7s (0.2G)  3.3E-3
mesh_4M  4.5E6  0.99  49.6s (3.0G)  8.2s (0.7G)  7.5E-3
mesh_9M  9.0E6  0.99  138.5s (6.9G)  13.3s (1.5G)  7.8E-4
4.4 Sparsification of Other Complex Networks
Test Cases  |V|  |E|  sparsification time  eigs time: original (sparsified)
fe_tooth  7.8E4  4.5E5  3.0s  14.5s (2.7s)
appu  1.4E4  9.2E5  5.4s  2,400s (15s)
coAuthorsDBLP  3.0E5  1.0E6  7.2s  2,047s (36s)
auto  4.5E5  3.3E6  29.0s  N/A (54s)
RCV80NN  1.9E5  1.2E7  46.5s  N/A (170s)
As shown in Table 4, a few finite element, protein, data, and social networks have been spectrally sparsified using the proposed similarity-aware method. The table reports the total time for extracting each sparsifier, as well as the time for computing the first ten eigenvectors of the original (sparsified) graph Laplacians using the "eigs" function in Matlab; the ratio of the largest generalized eigenvalues before and after adding off-tree edges into the spanning-tree sparsifier is also measured. Since spectral sparsifiers can well approximate the spectral (structural) properties of the original graph, the sparsified graphs can be leveraged for accelerating many numerical and graph-related tasks. For example, spectral clustering (partitioning) using the original "RCV80NN" (80-nearest-neighbor) graph cannot be performed on our server due to memory limitations, while it takes only a few minutes using the sparsified one.
5 Conclusions
For the first time, this paper introduces a similarity-aware spectral graph sparsification framework that can be immediately leveraged to develop fast numerical and graph-related algorithms. Motivated by recent graph signal processing concepts, an iterative graph densification procedure based on spectral embedding and filtering of off-tree edges has been proposed for extracting ultra-sparse yet spectrally-similar graph sparsifiers, which enables flexible trade-offs between the complexity and spectral similarity of the sparsified graph in numerical and graph-related applications. Extensive experimental results have confirmed the effectiveness and scalability of the proposed method within an iterative matrix solver and a spectral graph partitioning algorithm for a variety of large-scale, real-world graph problems, such as VLSI and finite element analysis problems, as well as data and social networks.
6 Acknowledgments
This work is supported in part by the National Science Foundation under Grants CCF-1350206 and CCF-1618364.
References

[1] I. Abraham and O. Neiman. Using petal-decompositions to build a low stretch spanning tree. In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing (STOC), pages 395-406. ACM, 2012.
[2] J. Batson, D. Spielman, and N. Srivastava. Twice-Ramanujan sparsifiers. SIAM Journal on Computing, 41(6):1704-1721, 2012.
[3] P. Christiano, J. Kelner, A. Madry, D. Spielman, and S. Teng. Electrical flows, Laplacian systems, and faster approximation of maximum flow in undirected graphs. In Proc. ACM STOC, pages 273-282, 2011.
[4] M. B. Cohen, J. Kelner, J. Peebles, R. Peng, A. B. Rao, A. Sidford, and A. Vladu. Almost-linear-time algorithms for Markov chains and new spectral primitives for directed graphs. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 410-419. ACM, 2017.
[5] T. Davis. CHOLMOD: sparse supernodal Cholesky factorization and update/downdate. [Online]. Available: http://www.cise.ufl.edu/research/sparse/cholmod/, 2008.
[6] T. Davis and Y. Hu. The University of Florida sparse matrix collection. ACM Trans. on Math. Soft. (TOMS), 38(1):1, 2011.
[7] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844-3852, 2016.
[8] M. Elkin, Y. Emek, D. Spielman, and S. Teng. Lower-stretch spanning trees. SIAM Journal on Computing, 38(2):608-628, 2008.
[9] Z. Feng. Spectral graph sparsification in nearly-linear time leveraging efficient spectral perturbation analysis. In Proceedings of the 53rd ACM/EDAC/IEEE Design Automation Conference (DAC), pages 1-6. IEEE, 2016.
[10] Y. Koren. On spectral graph drawing. In International Computing and Combinatorics Conference, pages 496-508. Springer, 2003.
[11] I. Koutis, G. Miller, and R. Peng. Approaching optimality for solving SDD linear systems. In Proc. IEEE FOCS, pages 235-244, 2010.
[12] Y. T. Lee and H. Sun. An SDP-based algorithm for linear-sized spectral sparsification. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 678-687, New York, NY, USA, 2017. ACM.
[13] O. Livne and A. Brandt. Lean algebraic multigrid (LAMG): Fast graph Laplacian linear solver. SIAM Journal on Scientific Computing, 34(4):B499-B522, 2012.
[14] R. Peng, H. Sun, and L. Zanetti. Partitioning well-clustered graphs: Spectral clustering works. In Proceedings of the 28th Conference on Learning Theory (COLT), pages 1423-1455, 2015.
[15] Y. Saad. Numerical Methods for Large Eigenvalue Problems: Revised Edition, volume 66. SIAM, 2011.
[16] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83-98, 2013.
[17] D. Spielman and N. Srivastava. Graph sparsification by effective resistances. SIAM Journal on Computing, 40(6):1913-1926, 2011.
[18] D. Spielman and S. Teng. Spectral partitioning works: Planar graphs and finite element meshes. In Proceedings of the 37th Annual Symposium on Foundations of Computer Science (FOCS), pages 96-105. IEEE, 1996.
[19] D. Spielman and S. Teng. Spectral sparsification of graphs. SIAM Journal on Computing, 40(4):981-1025, 2011.
[20] D. Spielman and S. Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM Journal on Matrix Analysis and Applications, 35(3):835-885, 2014.
[21] D. Spielman and J. Woo. A note on preconditioning by low-stretch spanning trees. arXiv preprint arXiv:0903.2816, 2009.
[22] S.-H. Teng. Scalable algorithms for data and network analysis. Foundations and Trends in Theoretical Computer Science, 12(1-2):1-274, 2016.
[23] Z. Zhao and Z. Feng. A spectral graph sparsification approach to scalable vectorless power grid integrity verification. In Proceedings of the 54th Annual Design Automation Conference (DAC), page 68. ACM, 2017.
[24] Z. Zhao, Y. Wang, and Z. Feng. SAMG: Sparsified graph-theoretic algebraic multigrid for solving large symmetric diagonally dominant (SDD) matrices. In Proceedings of the 36th International Conference on Computer-Aided Design (ICCAD). ACM, 2017.