A Posteriori Error Estimates for Multilevel Methods for Graph Laplacians

07/01/2020 ∙ by Xiaozhe Hu, et al. ∙ Tufts University ∙ Penn State University

In this paper, we study a posteriori error estimators which aid multilevel iterative solvers for linear systems of graph Laplacians. In earlier works, such estimates were computed by solving a perturbed global optimization problem, which could be computationally expensive. We propose a novel strategy to compute these estimates by constructing a Helmholtz decomposition on the graph based on a spanning tree and the corresponding cycle space. To compute the error estimator, we efficiently solve a linear system on the spanning tree and then a least-squares problem on the cycle space. As we show, such an estimator has nearly-linear computational complexity for sparse graphs under certain assumptions. Numerical experiments are presented to demonstrate the efficacy of the proposed method.


1 Introduction

Graphs are frequently employed to model networks in social science, energy, and biological applications [5, 18, 26]. In many cases, the application of graphs involves solving large-scale linear systems of graph Laplacians [37, 20, 52, 46, 33]. In addition, in the numerical solution of partial differential equations (PDEs), the stiffness matrices arising from finite-element or finite-difference discretizations also have the form of graph Laplacians, as discussed in [54]. Therefore, it is important to develop efficient and robust methods for solving graph Laplacians.

For large-scale graph Laplacians, direct methods suffer from expensive computational cost. Iterative methods, such as the algebraic multigrid (AMG) methods which originated in [9], are often applied to solve the linear systems (see also [53] and the references therein for a recent survey on AMG methods). In practice, the AMG method achieves optimal computational complexity for many applications, including solving graph Laplacians [34, 29, 30, 4, 11, 21, 40].

As is well known, an efficient AMG method should damp high-frequency error using relaxations/smoothers and eliminate low-frequency error using coarse-grid corrections. The latter requires the "smoothed" error to be transferred to and represented on coarse levels accurately. Many different coarsening strategies have been developed based on good estimations of the error, for example, the classical AMG [9, 6, 15], the smoothed aggregation AMG [16, 13, 17], the bootstrap AMG [8, 7], and the unsmoothed aggregation AMG [42, 39, 34, 29, 49, 11, 10]. Thus, efficient, reliable, and computable a posteriori error estimation is at the core of developing robust AMG methods.

The idea of an a posteriori error estimator is to devise an algorithm which provides a computable estimate of the true error. Our approach borrows several ideas from the finite-element (FE) literature (equilibrated error estimators [2, 28, 50, 1] and functional a posteriori error estimators [41, 24, 48, 44]). In [54], the authors derived, for the first time, an a posteriori error estimator for solving graph Laplacians based on the functional a posteriori error estimation framework. The technique was used to predict the error of approximation from coarse grids for multilevel unsmoothed aggregation AMG, and the estimator is computed by solving a perturbed global optimization problem. Such an approach provides an accurate error estimator. However, it could be computationally expensive, which affects the efficiency of the resulting adaptive AMG method. In this work, we propose a novel a posteriori error estimator and an efficient algorithm to reduce the computational cost, which could lead to the efficient construction of the multilevel hierarchy for AMG. Roughly speaking, this is achieved by computationally exploiting the Helmholtz decomposition on the graph, and there are two main steps for computing our proposed a posteriori error estimator:

  1. solving a linear system on a spanning tree of the graph to get the curl-free component of the Helmholtz decomposition, which provides the corresponding part of the error estimate,

  2. solving a constrained minimization problem in the cycle space of the graph to obtain the div-free component of the Helmholtz decomposition, which provides the remaining part of the error estimate.

The first step can be done in linear time by Gaussian elimination with a special ordering [51, 45]. Exactly solving the constrained minimization in the second step might be computationally expensive. Therefore, we propose to solve it approximately by applying several steps of a relaxation scheme, such as the Schwarz method. Overall, our approach gives an accurate a posteriori error estimate in nearly optimal time for certain types of sparse graphs, which is verified by our numerical experiments.

The rest of this paper is organized as follows. In Section 2 we review background on graphs and graph Laplacians, along with some previous results in [54]. The main algorithm to compute the a posteriori error estimates is stated in Section 3. We present and analyze some numerical experiments in Section 4. Finally, in Section 5 we summarize the main contributions and list some future work.

2 Background and Notations

In this section, we define the necessary notation and the a posteriori error estimates for solving graph Laplacians. We also recall some previous results about a posteriori error estimates in the graph setting as presented in [54].

2.1 Graph and Graph Laplacian

Consider an undirected weighted graph G = (V, E, ω), where V is the set of vertices, E is the edge set, and ω is the edge weight set. Here, the weights are assumed to be positive, i.e., ω_e > 0 for all e ∈ E. For unweighted graphs, we take all the edge weights to be 1. To each edge e we assign an orientation that determines its "head" (denoted by h(e)) and "tail" (denoted by t(e)), though the graph itself is not directed. We fix this arbitrary choice of the orientations for later usage.

Denote n = |V| and m = |E|. Let R^n and R^m be the vertex space and the edge space, respectively. The inner products on the vertex space and the edge space are defined as:

(u, v) := Σ_{i ∈ V} u_i v_i for u, v ∈ R^n,  and  (E, F) := Σ_{e ∈ E} E_e F_e for E, F ∈ R^m.

The weighted graph Laplacian matrix L can be defined via the bilinear form:

(Lu, v) := Σ_{e ∈ E} ω_e (u_{h(e)} − u_{t(e)})(v_{h(e)} − v_{t(e)}), for u, v ∈ R^n.

Associated with the graph are the discrete gradient operator (or edge-node incidence matrix) B ∈ R^{m×n} and the edge weight matrix W ∈ R^{m×m}. They are defined as the following: for each edge e with predetermined "head" h(e) and "tail" t(e),

(Bu)_e := u_{h(e)} − u_{t(e)},  W := diag(ω_{e_1}, …, ω_{e_m}).    (1)

The adjoint of B, denoted by B^T, is the discrete divergence operator (or node-edge incidence matrix) on the graph,

(B^T F)_i := Σ_{e: h(e) = i} F_e − Σ_{e: t(e) = i} F_e.    (2)

By direct computation, we have the following identity,

(Lu, v) = (W B u, B v).

Thus, we can write L = B^T W B. Based on this definition of the graph Laplacian L, it is straightforward to verify that,

L = D − A,

where D = diag(d_1, …, d_n) with d_i = Σ_{e: i ∈ e} ω_e is the weighted degree matrix, and A is the weighted adjacency matrix with A_{uv} = ω_{(u,v)} if (u, v) ∈ E and A_{uv} = 0 otherwise.
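These identities are straightforward to check numerically. The following is a small self-contained sketch (our own illustration in Python with NumPy, not code from the paper; the triangle graph, its orientations, and the weights are arbitrary choices):

```python
import numpy as np

# Triangle graph on vertices {0, 1, 2} with oriented edges (head, tail)
# and (arbitrary) positive weights.
edges = [(1, 0), (2, 1), (2, 0)]
weights = np.array([1.0, 2.0, 3.0])
n, m = 3, len(edges)

# Discrete gradient (edge-node incidence) B: (Bu)_e = u_{h(e)} - u_{t(e)}.
B = np.zeros((m, n))
for e, (h, t) in enumerate(edges):
    B[e, h], B[e, t] = 1.0, -1.0

W = np.diag(weights)          # diagonal edge-weight matrix
L = B.T @ W @ B               # graph Laplacian

# Equivalent form L = D - A with weighted degrees D and adjacency A.
A = np.zeros((n, n))
for (h, t), wt in zip(edges, weights):
    A[h, t] = A[t, h] = wt
D = np.diag(A.sum(axis=1))

assert np.allclose(L, D - A)
assert np.allclose(L @ np.ones(n), 0.0)   # constants span the null space of L
```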

In addition to the vertex space and the edge space, another important space associated with a graph G is the so-called cycle space, denoted by 𝒞, which is defined as (see [3] for more details),

𝒞 := { c ∈ R^m : B^T c = 0 } = ker(B^T).    (3)

Each cycle on the graph corresponds to an element in the cycle space 𝒞. To be more specific, if we assign the cycle a predetermined orientation (either clockwise or counterclockwise), the associated vector c ∈ 𝒞 is defined as the following: c_e = 1 if edge e belongs to the cycle and its orientation agrees with that of the cycle, c_e = −1 if edge e belongs to the cycle with the opposite orientation, and c_e = 0 otherwise.

Besides its definition (3), we can also characterize the cycle space by its basis. As discussed in [27], the cycle space of a simple connected graph has dimension m − n + 1. Note that there is more than one way to find a basis of the cycle space (see the survey paper [27]). For planar graphs, due to Euler's formula, there are exactly m − n + 1 bounded faces and each face is bounded by a cycle. It can be shown that those face cycles are linearly independent and, therefore, they form a cycle basis, which is usually referred to as the face cycle basis. For more general graphs, a commonly used cycle basis is the so-called fundamental cycle basis, which is induced by a spanning tree. To construct the fundamental cycle basis for a given spanning tree T of a graph G, for each edge e = (u, v) that does not belong to the tree, i.e., e ∈ E \ E_T, define the cycle consisting of e together with the path from vertex v to vertex u on the tree T; this is a cycle on the graph with a predetermined orientation. Since the spanning tree has n − 1 edges, there are m − n + 1 such cycles. It can be shown that they are linearly independent [27] and, therefore, form a cycle basis.
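The construction of the fundamental cycle basis can be sketched as follows (an illustrative Python/NumPy implementation under our own conventions, not the authors' code: BFS spanning tree rooted at vertex 0, edges given as oriented (head, tail) pairs):

```python
import numpy as np
from collections import deque

def fundamental_cycle_basis(n, edges):
    """Return cycle vectors c in R^m with B^T c = 0, one per off-tree edge.

    edges: oriented (head, tail) pairs; the graph is assumed connected.
    """
    m = len(edges)
    adj = {v: [] for v in range(n)}
    for e, (h, t) in enumerate(edges):
        adj[h].append((t, e))
        adj[t].append((h, e))
    # BFS spanning tree rooted at 0; remember the tree edge reaching each vertex.
    parent_edge = {0: None}
    tree = set()
    q = deque([0])
    while q:
        v = q.popleft()
        for u, e in adj[v]:
            if u not in parent_edge:
                parent_edge[u] = e
                tree.add(e)
                q.append(u)

    def flow_from_root(v):
        # Unit flow from the root to vertex v along tree edges (signed by orientation).
        c = np.zeros(m)
        while parent_edge[v] is not None:
            e = parent_edge[v]
            h, t = edges[e]
            c[e] = 1.0 if h == v else -1.0   # +1 if the edge points into v
            v = t if h == v else h
        return c

    basis = []
    for e, (h, t) in enumerate(edges):
        if e in tree:
            continue
        c = np.zeros(m)
        c[e] = 1.0                                   # unit flow t -> h on the off-tree edge
        c += flow_from_root(t) - flow_from_root(h)   # close the cycle along the tree
        basis.append(c)
    return basis
```

For the triangle graph with oriented edges (1, 0), (2, 1), (2, 0), this yields the single cycle vector (1, 1, −1), and one can verify that B^T c = 0 for every basis vector.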

In Fig. 1, we give a simple example of the fundamental cycle basis. The tree in Fig. 1(b) is a spanning tree of the graph in Fig. 1(a). First, one off-tree edge is added back (see Fig. 1(c)), which results in the first cycle. By comparing the orientation of the cycle with the orientations of its edges, the vector representation of the cycle is obtained. Similarly, by adding the other off-tree edge back, we have the second cycle (see Fig. 1(d)). Together they form a cycle basis.

Figure 1: Fundamental cycle basis. (a) original graph; (b) spanning tree; (c) adding an off-tree edge back to get the first cycle; (d) adding the other off-tree edge back to get the second cycle.

2.2 Previous Results on a Posteriori Error Estimators

We are interested in solving the following linear system of graph Laplacians:

Lx = b,    (4)

by some iterative method. After k iterations we get an approximate solution y. If we could somehow compute the current error e = x − y, then the true solution would easily be obtained as x = y + e. In practice, the true error is not computable because x is unknown, so we instead seek an accurate estimate of e and use it to improve the current approximation. Furthermore, an accurate estimate of the error gives insight into the performance of the iterative method. For example, in the AMG methods, such an estimate approximates the so-called "smooth error", which is responsible for the slow convergence of the AMG methods, and can be used to improve the AMG algorithm adaptively. This leads to the adaptive AMG methods [14, 36, 38, 12, 22], which have been an active research direction in the past two decades.

Since our a posteriori estimator is motivated by the a posteriori error estimator developed in [54], we recall the main results and algorithms presented in [54] and start with the following fundamental lemma, which relates the error to the computed approximate solution.

Lemma 1

Let x be the solution to (4). Then, for arbitrary y ∈ R^n, the following inequality holds for all E ∈ R^m:

||x − y||_L ≤ ||By − W^{-1}E||_W + C_P ||b − B^T E||,    (5)

where C_P is the Poincaré constant of the graph Laplacian L. Here, ||u||_L := (Lu, u)^{1/2} for u ∈ R^n and ||F||_W := (WF, F)^{1/2} for F ∈ R^m.

For a fixed y, denote the right-hand side of (5) by:

η(y, E) := ||By − W^{-1}E||_W + C_P ||b − B^T E||.

This naturally provides an a posteriori error estimator for the error if y is an approximate solution, i.e., ||x − y||_L ≤ η(y, E). Moreover, by minimizing the right-hand side of (5) with respect to E, we can obtain an accurate estimator. To solve the minimization problem efficiently, in [54], an upper bound of η² was introduced as follows,

η²(y, E) ≤ M(y, E; β) := (1 + β) ||By − W^{-1}E||²_W + (1 + 1/β) C_P² ||b − B^T E||²,

where β > 0 is arbitrary. An accurate estimator can then be obtained by computing min_{E, β} M(y, E; β). In [54], an alternating process is applied to minimize M with respect to E (with the techniques proposed in [31]) and β iteratively, as summarized in Algorithm 1 (see [54] for details):

1:procedure [E, β] = MinimizeBound(y, b)
2:     for k = 1, …, max_iter do
3:         compute E^{(k)} = argmin_E M(y, E; β^{(k−1)}).
4:         compute β^{(k)} = argmin_β M(y, E^{(k)}; β).
5:     end for
6:end procedure
Algorithm 1 Alternating Process for Minimizing M(y, E; β)

Although the approach developed in [54] provides a reliable error estimator, the corresponding computational cost might be expensive due to the iterative minimization of M in Steps 3 and 4 of Algorithm 1. In order to improve the accuracy of the a posteriori error estimator as well as the efficiency of computing it, in this paper, we develop a novel technique for estimating the error based on (5) and design a fast algorithm to compute the error estimator.

3 Efficient Algorithm for Computing a Posteriori Error Estimator

In this section, we present the derivation of the a posteriori error estimator, followed by a discussion of an efficient algorithm for computing it based on the Helmholtz decomposition on graphs. The result is a tighter error bound than the one proposed in [54] that can be implemented efficiently.

3.1 The Error Estimator

Our design of an a posteriori error estimator is motivated by (5). For a given b, we define the space W_b := { E ∈ R^m : B^T E = b }. If we choose E ∈ W_b, then the second term on the right-hand side of (5) vanishes and only the first term remains. If we minimize this remaining term with respect to E ∈ W_b, we immediately get an accurate estimate. We summarize this in the following theorem.

Theorem 1

Let x be the solution to (4). Then, for any y ∈ R^n, we have,

||x − y||_L = min_{E ∈ W_b} ||By − W^{-1}E||_W.    (6)

To prove Theorem 1, we will make use of the next lemma, which was first proposed in [43]:

Lemma 2

Let x be the solution to (4). Then, for any y ∈ R^n and any E ∈ W_b, the following identity holds:

||x − y||²_L + ||Bx − W^{-1}E||²_W = ||By − W^{-1}E||²_W.

Now we are ready to prove Theorem 1.

Proof

We first show that ||x − y||_L ≤ min_{E ∈ W_b} ||By − W^{-1}E||_W. It follows from Lemma 2 that, for any E ∈ W_b,

||x − y||²_L ≤ ||By − W^{-1}E||²_W.

Since the inequality holds for any E ∈ W_b, we have:

||x − y||_L ≤ min_{E ∈ W_b} ||By − W^{-1}E||_W.

To show the other direction, note that E* := W B x ∈ W_b, since B^T E* = B^T W B x = L x = b, and that,

||By − W^{-1}E*||_W = ||B(y − x)||_W = ||x − y||_L.

This completes the proof.

From Theorem 1, we observe that

||x − y||_L ≤ ||By − W^{-1}E||_W

for any E ∈ W_b. This motivates us to define the following computable quantity,

η(y, E) := ||By − W^{-1}E||_W, E ∈ W_b.    (7)

If y is the approximate solution to (4), η(y, E) gives an a posteriori estimator of the true error for any choice of E ∈ W_b. If E is the minimizer of the right-hand side of (6), then η(y, E) = ||x − y||_L. Of course, computing this minimizer exactly would be computationally expensive and, therefore, the rest of this section focuses on approximately solving the constrained minimization problem (6), so that we can obtain a reasonably good E for the a posteriori error estimator η(y, E) while keeping the total computational cost low.

3.2 Efficient Evaluation of the Error Estimator

Our approach is to solve the minimization problem

min_{E ∈ W_b} ||By − W^{-1}E||_W    (8)

based on the Helmholtz decomposition of E on the graph, i.e., E = E_φ + E_ψ, where E_φ (which is curl-free) satisfies B^T E_φ = b and E_ψ ∈ 𝒞 (which is div-free) satisfies B^T E_ψ = 0, so that E ∈ W_b. In particular, we first find an E_φ by solving a graph Laplacian on a spanning tree of the graph. Then, for the given E_φ, the minimization problem (8) becomes,

min_{E_ψ ∈ 𝒞} ||By − W^{-1}(E_φ + E_ψ)||_W.

Solving this constrained minimization problem exactly would give the true minimizer and thus, theoretically, the true error, which is overkill for the purpose of an a posteriori error estimator. In practice, we only need to solve it approximately, since as long as E_φ + E_ψ ∈ W_b, η will provide an upper bound of the error, which can be used as an error estimator. Note that this approximation is subject to an inevitable trade-off: the error estimator will approximate the true error very accurately if we solve the optimization problem almost exactly at an expensive computational cost; alternatively, we accept a less tight error estimator at a cheap cost.

3.2.1 Compute E_φ

For any E ∈ W_b, we have, by definition,

B^T E = b.    (9)

Since B^T is the discrete divergence operator on the graph, the solution to the above equation is not unique and is difficult to compute in general. However, we just need to find one particular solution E_φ. Here, based on a spanning tree T of the graph G, we present an approach with optimal complexity, i.e., O(n) computational cost.

For a given spanning tree T, we look for an E_φ that satisfies (9) but has nonzero entries only on the edges that belong to the spanning tree T. In this case, we can rewrite (9) as:

B_T^T E_T + B_{E\T}^T E_{E\T} = b,    (10)

where B_T^T is the discrete divergence operator that acts on the edges in the tree T, and B_{E\T}^T is the discrete divergence operator on the edges that are in the graph but not in the tree T. Since E_{E\T} = 0 by construction, from (10) we have,

B_T^T E_T = b.    (11)

Therefore, once we solve (11), we can assemble E_φ by adding back zero entries on the edges that are in the graph but not in the tree T. Note that equation (11) is defined only on the spanning tree T. We can first solve

L_T x_T = b,    (12)

where L_T is the graph Laplacian of the tree T. With the fact that L_T = B_T^T W_T B_T, where W_T and B_T are the diagonal edge weight matrix and the discrete gradient operator on the tree T, respectively, (12) can be rewritten as follows,

B_T^T (W_T B_T x_T) = b.    (13)

Comparing (11) and (13), we naturally have,

E_T = W_T B_T x_T,

and can thereafter assemble the full E_φ. The procedure for computing E_φ is summarized in Algorithm 2.

1:procedure [E_φ] = Compute-E_φ(G, b)
2:     Build the spanning tree T from G.
3:     Solve L_T x_T = b, where L_T = B_T^T W_T B_T.
4:     Compute E_T = W_T B_T x_T.
5:     Assemble E_φ as E_φ|_T = E_T and E_φ|_{E\T} = 0.
6:end procedure
Algorithm 2 Compute E_φ

The main computational cost of Algorithm 2 comes from Step 3, i.e., solving the linear system (12). As discussed in [45, 51], it takes linear time to solve (12) on the tree T if we order the vertices from the leaves to the root and apply Gaussian elimination. Additionally, the matrix-vector multiplication in Step 4 has O(n) complexity for sparse graphs. Therefore, the overall complexity of Algorithm 2 is O(n).
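The leaf-to-root elimination behind Step 3 can be sketched as follows (our own illustrative Python implementation, not the authors' code; it assumes a connected tree, a compatible right-hand side with sum(b) = 0, and normalizes the solution by x_root = 0):

```python
import numpy as np

def solve_tree_laplacian(n, tree_edges, weights, b):
    """Solve L_T x = b on a tree by leaf-to-root elimination in O(n) time.

    tree_edges: oriented (head, tail) pairs forming a spanning tree on n vertices;
    weights[e] is the weight of tree edge e; b must satisfy sum(b) == 0.
    Returns x with the normalization x[0] = 0 (vertex 0 is the root).
    """
    b = np.asarray(b, dtype=float).copy()
    adj = {v: [] for v in range(n)}
    for e, (h, t) in enumerate(tree_edges):
        adj[h].append((t, e))
        adj[t].append((h, e))
    # Root-first ordering by DFS; parents always precede their children.
    order, parent, stack = [0], {0: (None, None)}, [0]
    while stack:
        v = stack.pop()
        for u, e in adj[v]:
            if u not in parent:
                parent[u] = (v, e)
                order.append(u)
                stack.append(u)
    # Forward elimination: fold each vertex's equation into its parent
    # (children are processed before parents in reversed order).
    for v in reversed(order[1:]):
        p, _ = parent[v]
        b[p] += b[v]          # eliminating x_v leaves  w_e (x_v - x_p) = b_v^acc
    # Back substitution from the root down.
    x = np.zeros(n)
    for v in order[1:]:
        p, e = parent[v]
        x[v] = x[p] + b[v] / weights[e]
    return x
```

For a weighted path 0–1–2 with edges (1, 0), (2, 1), weights (1, 2), and b = (1, 0, −1), this returns x = (0, −1, −1.5); E_T = W_T B_T x_T then gives the tree part of E_φ.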

3.2.2 Compute E_ψ

For a given E_φ, we need to solve the following constrained minimization problem,

min_{E_ψ ∈ 𝒞} ||By − W^{-1}(E_φ + E_ψ)||_W.    (14)

The difficulty here is that we need to satisfy the constraint E_ψ ∈ 𝒞 exactly when we compute an approximate E_ψ to get the approximate value of the error. Our approach is to explicitly build the basis of the cycle space as discussed in Section 2 and transform the constrained minimization problem (14) into an unconstrained minimization problem. Based on the cycle basis {c_i}, we write E_ψ = Σ_{i ∈ I} α_i c_i for any E_ψ ∈ 𝒞, where I is the index set of the cycle basis. Denote C = [c_1, …, c_{m−n+1}] and α = (α_1, …, α_{m−n+1})^T. Thus, the minimization problem (14) becomes,

min_α ||By − W^{-1}(E_φ + Cα)||_W.    (15)

This is an unconstrained least-squares problem and we can solve it with standard approaches. Moreover, the approximate solution E_ψ = Cα is guaranteed to belong to the cycle space. Solving (15) exactly would eventually give us the exact error. This step, however, has a computational complexity comparable to solving the original problem (4). Therefore, we solve it approximately via a few steps of simple relaxation schemes, for example, the Schwarz method (see [47, 19]). First, we decompose the cycle space into the following subspaces:

𝒞 = 𝒞_1 + 𝒞_2 + ⋯ + 𝒞_J.    (16)

Note that 𝒞_i ∩ 𝒞_j is not necessarily empty. Then we solve the minimization problem (15) in each of the subspaces 𝒞_i. That is, for i = 1, 2, …, J, we compute:

E_ψ^{(i)} = E_ψ^{(i−1)} + argmin_{ξ ∈ 𝒞_i} ||By − W^{-1}(E_φ + E_ψ^{(i−1)} + ξ)||_W,    (17)

where E_ψ^{(i−1)} is the approximation to E_ψ after solving (17) in the first i − 1 subspaces, with E_ψ^{(0)} = 0.

To keep a modest cost in computing the error estimator, we run only a few iterations of the Schwarz method. Later, in Section 4, we will show that the error estimator computed with an E_ψ approximated by only a few Schwarz iterations is indeed accurate enough to capture the true error. The steps to compute E_ψ are summarized in Algorithm 3:

1:procedure E_ψ = Compute-E_ψ(E_φ, y)
2:     Build the cycle basis {c_i}
3:     Given initial guess E_ψ = 0,
4:     for k = 1, …, max_iter do
5:         for i = 1, …, J do ▷ iterate over each subdomain
6:             ξ_i = argmin_{ξ ∈ 𝒞_i} ||By − W^{-1}(E_φ + E_ψ + ξ)||_W.
7:             E_ψ ← E_ψ + ξ_i
8:         end for
9:     end for
10:     return E_ψ.
11:end procedure
Algorithm 3 Compute E_ψ Approximately

In Algorithm 3, the cost of one step of the Schwarz method depends on the number of subspaces, i.e., J, and the cost of solving (17) in each subspace. In this paper we choose the following overlapping subspace decomposition: the i-th subspace is the span of the basis vectors of the cycles incident with the vertex i,

𝒞_i = span{ c_j : cycle c_j passes through vertex i }.    (18)

Since there are n vertices, we have J = n subspaces. For the sparse graphs considered here, if solving the minimization problem (17) on each subspace has computational complexity at most O(1), then the overall computational cost of each iteration of the Schwarz method is O(n) for the subspace decomposition (18), which assures a low computational cost of the proposed estimator. For general graphs, proper choices of the cycle basis and the subspace decomposition are needed to keep the computational cost nearly optimal, which is a subject of our ongoing research.
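One sweep of this block relaxation can be sketched as follows (an illustrative Python/NumPy version under our own conventions, not the authors' code; `C_blocks` groups the cycle-basis vectors by subspace, and the routine returns both the cycle-space correction and the resulting value of the estimator):

```python
import numpy as np

def schwarz_cycle_relaxation(C_blocks, w, r, sweeps=3):
    """Approximately minimize ||r - W^{-1} C a||_W over the cycle-space
    coefficients a by multiplicative Schwarz (block Gauss-Seidel) sweeps.

    C_blocks: list of (m x k_i) arrays holding the cycle vectors of each
    subspace; w: edge weights (diagonal of W); r: the residual By - W^{-1} E_phi.
    Returns the correction E_psi = C a and the value ||r - W^{-1} E_psi||_W.
    """
    winv = 1.0 / w
    s = r.astype(float).copy()        # current residual By - W^{-1}(E_phi + E_psi)
    E_psi = np.zeros_like(s)
    # Small local Gram matrices C_i^T W^{-1} C_i (SPD: basis vectors are independent).
    grams = [Ci.T @ (winv[:, None] * Ci) for Ci in C_blocks]
    for _ in range(sweeps):
        for Ci, Gi in zip(C_blocks, grams):
            delta = np.linalg.solve(Gi, Ci.T @ s)   # local normal equations
            E_psi += Ci @ delta
            s -= winv * (Ci @ delta)
    return E_psi, np.sqrt(s @ (w * s))
```

With a single subspace, the first sweep already solves the local normal equations exactly; e.g., for one cycle c = (1, 1, −1)^T with weights (1, 2, 3) and r = (1, 0, 0)^T, one sweep gives E_ψ = (6/11)·c and estimator value sqrt(5/11).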

3.2.3 Overall Algorithm

Now we are ready to present the overall Algorithm 4, which (approximately) solves the minimization problem (15) and computes the a posteriori error estimate for solving the graph Laplacian system (4).

1:procedure η = ErrorEstimates(G, b, y)
2:     [E_φ] = Compute-E_φ(G, b).
3:     E_ψ = Compute-E_ψ(E_φ, y).
4:     η = ||By − W^{-1}(E_φ + E_ψ)||_W ▷ Compute the value of the estimator
5:     return η.
6:end procedure
Algorithm 4 Compute error estimator η

In Algorithm 4, Step 2, which computes E_φ, takes O(n) operations for any graph. Step 3, which computes E_ψ, has O(n) complexity for sparse graphs, since the minimization problem (15) is solved approximately with a few steps of the Schwarz method. As a result, the overall computational complexity of Algorithm 4 is O(n) for a sparse graph G.

To make the a posteriori error estimator more useful, especially for developing adaptive AMG methods for solving graph Laplacians [54, 32, 35], we need to localize it. Since,

η² = ||By − W^{-1}E||²_W = Σ_{e ∈ E} ω_e |(By − W^{-1}E)_e|²,

we can localize the error estimator on each edge as follows,

η_e² := ω_e |(By − W^{-1}E)_e|², for each e ∈ E.    (19)

We comment that the above localized error estimator is obtained for free in practice, since By − W^{-1}E is available from the computation of the global error estimator (see Step 4 in Algorithm 4).
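Once the edge residual By − W^{-1}E is available, the localized quantities in (19) are a single elementwise operation (a minimal sketch of our own, assuming W is stored as the vector w of edge weights):

```python
import numpy as np

def localized_estimator(B, w, y, E):
    """Per-edge contributions eta_e^2 = w_e * ((By)_e - E_e / w_e)^2,
    so that the global estimator satisfies eta^2 = sum_e eta_e^2."""
    s = B @ y - E / w          # the edge residual By - W^{-1}E
    return w * s**2            # one entry per edge
```

In particular, if E = W B y, the residual vanishes and every local contribution is zero; large entries flag the edges on which the error concentrates.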

This localized error estimator (19) can then be used to design adaptive AMG methods. For example, it can be utilized to generate coarse aggregations that approximate the fine aggregates (vertices) accurately [54], or to generate approximations to the level sets of the error for the path-cover adaptive AMG method [25].

4 Numerical Results

In this section, we present numerical experiments to demonstrate the efficiency of the a posteriori error estimator.

4.1 Test on 2D Uniform Grid

We first test the performance of the algorithm on the unweighted graph Laplacian of 2D uniform triangular grids, which corresponds to solving a Poisson equation on a 2D square domain with Neumann boundary conditions. Uniform triangular grids of different sizes are used. We set the approximate solution y and obtain the a posteriori error estimator with Algorithm 4, in which the minimization problem (15) is solved approximately with several iterations of the overlapping Schwarz method. We use the face cycle basis that corresponds to the small triangles in the grid (cycle length 3). With this choice of cycle basis, each of the decomposed subspaces in (18) has dimension at most 6, since there are at most six cycles incident with a given vertex. The low dimension of the subspaces ensures that solving Eq. (17) on each subspace costs O(1) operations, and thus the cost of one iteration of the Schwarz method remains O(n).

In Table 1, we report the true error ||x − y||_L and the a posteriori error estimator η on graph Laplacian systems of different scales. The efficiency index η / ||x − y||_L is also reported to show the efficiency of the error estimator. From Table 1, we observe that the CPU time for one iteration of the Schwarz method grows linearly as the size of the graph Laplacian systems increases. The error estimator gradually approaches the true error when we increase the number of Schwarz iterations.

 ||x−y||_L | 1 iter: η   eff  time | 3 iters: η   eff  time | 5 iters: η   eff  time
 1.73      | 2.25  1.30  0.03      | 1.99  1.15  0.04       | 1.91  1.10  0.06
 1.73      | 2.67  1.55  0.05      | 2.28  1.32  0.11       | 2.16  1.25  0.16
 1.73      | 3.36  1.95  0.14      | 2.76  1.60  0.37       | 2.56  1.48  0.62
 1.72      | 4.43  2.57  0.53      | 3.51  2.03  1.40       | 3.20  1.86  2.31
 1.72      | 6.01  3.49  1.92      | 4.66  2.71  5.64       | 4.19  2.43  9.53
Table 1: Efficiency of the error estimator on graph Laplacian systems on uniform triangular grids of different sizes. For each run we report the estimator η, the efficiency index eff = η / ||x−y||_L, and the CPU time (in seconds). The value of the estimator is computed by solving (15) approximately with 1, 3, and 5 iterations of the overlapping Schwarz method.

More importantly, we would like to know whether the localized error estimator (19) approximates the true error on each edge accurately, since localized estimation is the key to an effective coarsening scheme in adaptive AMG. We take the weighted graph Laplacian of the uniform grid, with the approximate solution y obtained by three iterations of the Gauss–Seidel method with a random initial guess. We compute the error estimator using three iterations of the Schwarz method to solve the minimization problem in Algorithm 4. In Fig. 2, we plot the difference between the true error and the error estimator on each edge. On most of the edges the error estimator captures the true error well, since the difference is close to zero.

Figure 2: Difference between the true error and the error estimator on each edge.

4.2 Tests on “Real World” Graphs

In this section, we test the proposed error estimator on some real-world graphs from the SuiteSparse Matrix Collection [23]. We pre-process the undirected graphs by extracting the largest connected component of each graph and deleting self-loops. For each of these graphs, if an original edge weight is negative, we take its absolute value.

 ID   | n    | m     | Problem Type             | Type | ||x−y||_L | η    | η/||x−y||_L
 8    | 292  | 958   | Least Squares Problem    | u    | 1.74      | 1.75 | 1.00
 1196 | 1879 | 5525  | Circuit Simulation       | w    | 2.71      | 2.71 | 1.00
 22   | 5300 | 8271  | Power Network            | u    | 5.82      | 5.82 | 1.00
 1614 | 2048 | 4034  | Electromagnetics Problem | w    | 0.47      | 0.50 | 1.07
 33   | 1423 | 16342 | Structural Problem       | w    | 14.5      | 19.7 | 1.36
 791  | 8205 | 58681 | Acoustic Problem         | w    | 23.8      | 37.7 | 1.58
Table 2: Efficiency of the error estimator on graph Laplacian systems arising from real-world applications. The value of the estimator is computed by solving (15) approximately with 3 iterations of the Schwarz method. The graph types tested are unweighted (u) and weighted (w).

In Table 2, we summarize the basic information of the graphs and the performance of the error estimator. In our setting, x is the exact solution for a problem with an arbitrarily chosen right-hand side b. The approximate solution y is obtained by three iterations of the Gauss–Seidel method with this right-hand side. To compute the error estimator, we first use breadth-first search to find a spanning tree and then construct the spanning-tree-induced fundamental cycle basis. Finally, we apply three Schwarz iterations to solve the minimization problem in Algorithm 4 and compute the overall error estimator. As we can see from the results, for real-world graphs of different sizes, structures, and densities, the error estimator approximates the true error well in all cases, which demonstrates the effectiveness of our proposed algorithm for computing the a posteriori error estimator.

5 Conclusions

In this paper, we proposed an a posteriori error estimator for solving linear systems of graph Laplacians. A novel approach is devised that reduces the cost of computing such an estimator compared with existing approaches and can be nearly linear in time for sparse graphs. Our approach is based on the Helmholtz decomposition on graphs: it solves a linear system on a spanning tree and (approximately) solves a minimization problem in the cycle space of the graph.

For ongoing and future work, we plan to incorporate this error estimator into adaptive AMG coarsening schemes. For example, the estimates can be used as an approximation to the level sets of the error for the path-cover adaptive AMG proposed in [25].

References

  • [1] M. Ainsworth and J. Oden, A posteriori error estimation in finite element analysis, Computer Methods in Applied Mechanics and Engineering, 142 (1997), pp. 1 – 88.
  • [2] I. Babuška and W. C. Rheinboldt, A-posteriori error estimates for the finite element method, International Journal for Numerical Methods in Engineering, 12 (1978), pp. 1597–1615.
  • [3] B. Bollobás, Modern graph theory, vol. 184, Springer Science & Business Media, 2013.
  • [4] M. Bolten, S. Friedhoff, A. Frommer, M. Heming, and K. Kahl, Algebraic multigrid methods for Laplacians of graphs, Linear Algebra Appl., 434 (2011), pp. 2225–2243, https://doi.org/10.1016/j.laa.2010.11.008, http://dx.doi.org/10.1016/j.laa.2010.11.008.
  • [5] S. P. Borgatti, A. Mehra, D. J. Brass, and G. Labianca, Network analysis in the social sciences, Science, 323 (2009), pp. 892–895.
  • [6] A. Brandt, Multiscale scientific computation: Review 2001, in Multiscale and Multiresolution Methods: Theory and Applications, T. J. Barth, T. F. Chan, and R. Haimes, eds., 1 (2001), pp. 1–96.
  • [7] A. Brandt, J. Brannick, K. Kahl, and I. Livshits, Bootstrap algebraic multigrid: status report, open problems, and outlook, Numer. Math. Theory Methods Appl., 8 (2015), pp. 112–135, https://doi.org/10.4208/nmtma.2015.w06si, http://dx.doi.org/10.4208/nmtma.2015.w06si.
  • [8] A. Brandt, J. J. Brannick, K. Kahl, and I. Livshits, Bootstrap AMG, SIAM J. Scientific Computing, 33 (2011), pp. 612–632.
  • [9] A. Brandt, S. F. McCormick, and J. W. Ruge, Algebraic multigrid (AMG) for automatic multigrid solutions with application to geodetic computations, tech. report, Inst. for Computational Studies, Fort Collins, CO, October 1982.
  • [10] J. Brannick, Y. Chen, J. Kraus, and L. Zikatanov, An algebraic multigrid method based on matching in graphs, in Domain Decomposition Methods in Science and Engineering XX, R. Bank, M. Holst, and J. Xu, eds., Lecture Notes in Computational Science and Engineering, 7 2013, pp. 143–150.
  • [11] J. Brannick, Y. Chen, J. Kraus, and L. Zikatanov, Algebraic multilevel preconditioners for the graph laplacian based on matching in graphs, SIAM Journal on Numerical Analysis, 51 (2013), pp. 1805–1827.
  • [12] M. Brezina, R. Falgout, S. MacLachlan, T. Manteuffel, S. McCormick, and J. Ruge, Adaptive smoothed aggregation multigrid, SIAM Rev., 47 (2005), pp. 317–346, https://doi.org/10.1137/050626272, http://dx.doi.org/10.1137/050626272.
  • [13] M. Brezina, R. Falgout, S. Maclachlan, T. Manteuffel, S. Mccormick, and J. Ruge, Adaptive smoothed aggregation (SA) multigrid, SIAM Rev., 47 (2005), pp. 317–346, https://doi.org/http://dx.doi.org/10.1137/050626272.
  • [14] M. Brezina, R. Falgout, S. MacLachlan, T. Manteuffel, S. McCormick, and J. Ruge, Adaptive algebraic multigrid, SIAM Journal on Scientific Computing, 27 (2006), pp. 1261–1286.
  • [15] M. Brezina, R. Falgout, S. MacLachlan, T. Manteuffel, S. McCormick, and J. Ruge, Adaptive algebraic multigrid (amg), SIAM J. Sci. Comp., 27 (2006), pp. 1261–1286.
  • [16] M. Brezina, R. Falgout, S. MacLachlan, T. A. Manteuffel, S. F. McCormick, and J. Ruge, Adaptive smoothed aggregation (SA), SIAM J. Sci. Comp., 25 (2004), pp. 1896–1920.
  • [17] M. Brezina, T. Manteuffel, S. MCormick, J. Ruge, and G. Sanders, Towards adaptive smoothed aggregation (sa) for nonsymmetric problems, SIAM Journal on Scientific Computing, 32 (2010), pp. 14–39.
  • [18] E. Bullmore and O. Sporns, Complex brain networks: graph theoretical analysis of structural and functional systems, Nature Reviews Neuroscience, 10 (2009), pp. 186–198.
  • [19] L. Chen, X. Hu, and S. Wise, Convergence analysis of the fast subspace descent method for convex optimization problems, Mathematics of Computation, (2020), p. 1, https://doi.org/10.1090/mcom/3526.
  • [20] C. Colley, J. Lin, X. Hu, and S. Aeron, Algebraic multigrid for least squares problems on graphs with applications to hodgerank, in 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), May 2017, pp. 627–636, https://doi.org/10.1109/IPDPSW.2017.163.
  • [21] P. D’Ambra and P. S. Vassilevski, Adaptive amg with coarsening based on compatible weighted matching, Computing and Visualization in Science, 16 (2013), pp. 59–76.
  • [22] P. D’Ambra and P. S. Vassilevski, Adaptive AMG with coarsening based on compatible weighted matching, Comput. Vis. Sci., 16 (2013), pp. 59–76, https://doi.org/10.1007/s00791-014-0224-9, http://dx.doi.org/10.1007/s00791-014-0224-9.
  • [23] T. A. Davis and Y. Hu, The University of Florida sparse matrix collection, ACM Trans. Math. Softw., 38 (2011), https://doi.org/10.1145/2049662.2049663.
  • [24] P. Destuynder and B. Métivet, Explicit error bounds in a conforming finite element method, Mathematics of Computation, 68 (1999), pp. 1379–1396.
  • [25] X. Hu, J. Lin, and L. T. Zikatanov, An adaptive multigrid method based on path cover, SIAM J. Scientific Computing, 41 (2018), pp. S220–S241.
  • [26] I. Gutman and N. Trinajstić, Graph theory and molecular orbitals. Total π-electron energy of alternant hydrocarbons, Chemical Physics Letters, 17 (1972), pp. 535–538.
  • [27] T. Kavitha, C. Liebchen, K. Mehlhorn, D. Michail, R. Rizzi, T. Ueckerdt, and K. A. Zweig, Cycle bases in graphs characterization, algorithms, complexity, and applications, Computer Science Review, 3 (2009), pp. 199 – 243.
  • [28] D. W. Kelly, J. P. De S. R. Gago, O. C. Zienkiewicz, and I. Babuska, A posteriori error analysis and adaptive processes in the finite element method: Part i—error analysis, International Journal for Numerical Methods in Engineering, 19 (1983), pp. 1593–1619.
  • [29] H. Kim, J. Xu, and L. Zikatanov, A multigrid method based on graph matching for convection-diffusion equations, Numerical Linear Algebra with Applications, 10 (2003), pp. 181–195.
  • [30] I. Koutis, G. L. Miller, and D. Tolliver, Combinatorial preconditioners and multilevel solvers for problems in computer vision and image processing, Computer Vision and Image Understanding, 115 (2011), pp. 1638–1646.

  • [31] J. K. Kraus and S. K. Tomar, Algebraic multilevel iteration method for lowest order Raviart–Thomas space and applications, International Journal for Numerical Methods in Engineering, 86 (2011), pp. 1175–1196, https://doi.org/10.1002/nme.3103.
  • [32] D. Krishnan, R. Fattal, and R. Szeliski, Efficient preconditioning of Laplacian matrices for computer graphics, ACM Transactions on Graphics, 32 (2013).
  • [33] J. Lin, L. J. Cowen, B. Hescott, and X. Hu, Computing the diffusion state distance on graphs via algebraic multigrid and random projections, Numerical Linear Algebra with Applications, 25 (2018), p. e2156.
  • [34] O. Livne and A. Brandt, Lean algebraic multigrid (LAMG): fast graph Laplacian linear solver, SIAM Journal on Scientific Computing, 34 (2012), pp. B499–B522.
  • [35] O. E. Livne and A. Brandt, Lean algebraic multigrid (LAMG): fast graph Laplacian linear solver, SIAM Journal on Scientific Computing, 34 (2012), pp. B499–B522.
  • [36] S. P. MacLachlan, J. D. Moulton, and T. P. Chartier, Robust and adaptive multigrid methods: comparing structured and algebraic approaches, Numer. Linear Algebra Appl., 19 (2012), pp. 389–413, https://doi.org/10.1002/nla.837.
  • [37] R. Merris, Laplacian matrices of graphs: a survey, Linear Algebra and its Applications, 197-198 (1994), pp. 143–176, https://doi.org/10.1016/0024-3795(94)90486-3.
  • [38] A. Nägel, R. D. Falgout, and G. Wittum, Filtering algebraic multigrid and adaptive strategies, Comput. Vis. Sci., 11 (2008), pp. 159–167, https://doi.org/10.1007/s00791-007-0066-9.
  • [39] A. Napov and Y. Notay, An algebraic multigrid method with guaranteed convergence rate, SIAM Journal on Scientific Computing, 34 (2012), pp. A1079–A1109.
  • [40] A. Napov and Y. Notay, An efficient multigrid method for graph Laplacian systems, Electronic Transactions on Numerical Analysis, 45 (2016), pp. 201–218.
  • [41] R. H. Nochetto and A. Veeser, Primer of Adaptive Finite Element Methods, Springer Berlin Heidelberg, Berlin, Heidelberg, 2012, pp. 125–225.
  • [42] Y. Notay, An aggregation-based algebraic multigrid method, Electronic Transactions on Numerical Analysis, 37 (2010), pp. 123–146, http://www.emis.ams.org/journals/ETNA/vol.37.2010/pp123-146.dir/pp123-146.pdf.
  • [43] W. Prager and J. L. Synge, Approximations in elasticity based on the concept of function space, Quarterly of Applied Mathematics, 5 (1947), pp. 241–269, http://www.jstor.org/stable/43633616.
  • [44] S. I. Repin and S. K. Tomar, Guaranteed and robust error bounds for nonconforming approximations of elliptic problems, IMA Journal of Numerical Analysis, 31 (2010), pp. 597–615.
  • [45] D. J. Rose, R. E. Tarjan, and G. S. Lueker, Algorithmic aspects of vertex elimination on graphs, SIAM J. Comput., 5 (1976), pp. 266–283.
  • [46] G. Rücker, Network meta-analysis, electrical networks and graph theory, Research Synthesis Methods, 3 (2012), pp. 312–324.
  • [47] X.-C. Tai and J. Xu, Global and uniform convergence of subspace correction methods for some convex optimization problems, Mathematics of Computation, 71 (2002), pp. 105–124.
  • [48] S. Tomar and S. Repin, Efficient computable error bounds for discontinuous Galerkin approximations of elliptic problems, Journal of Computational and Applied Mathematics, 226 (2009), pp. 358–369. Special Issue: Large scale scientific computations.
  • [49] J. Urschel, J. Xu, X. Hu, and L. Zikatanov, A cascadic multigrid algorithm for computing the Fiedler vector of graph Laplacians, Journal of Computational Mathematics, 33 (2015), pp. 209–226, https://doi.org/10.4208/jcm.1412-m2014-0041.
  • [50] R. Verfürth, A posteriori error estimates for nonlinear problems. Finite element discretizations of elliptic equations, Mathematics of Computation, 62 (1994), pp. 445–475.
  • [51] P. S. Vassilevski and L. T. Zikatanov, Commuting projections on graphs, Numerical Linear Algebra with Applications, 21 (2014), pp. 297–315, https://doi.org/10.1002/nla.1872.
  • [52] K. Q. Weinberger, F. Sha, Q. Zhu, and L. K. Saul, Graph Laplacian regularization for large-scale semidefinite programming, in Proceedings of the 19th International Conference on Neural Information Processing Systems, NIPS’06, Cambridge, MA, USA, 2006, MIT Press, pp. 1489–1496.
  • [53] J. Xu and L. Zikatanov, Algebraic multigrid methods, Acta Numerica, 26 (2017), pp. 591–721, https://doi.org/10.1017/S0962492917000083.
  • [54] W. Xu and L. T. Zikatanov, Adaptive aggregation on graphs, Journal of Computational and Applied Mathematics, 340 (2018), pp. 718–730.