Maximizing the Number of Spanning Trees in a Connected Graph

04/09/2018 ∙ by Huan Li, et al. ∙ Rensselaer Polytechnic Institute and Fudan University

We study the problem of maximizing the number of spanning trees in a connected graph by adding at most k edges from a given candidate edge set. We give both algorithmic and hardness results for this problem:

- We give a greedy algorithm that, using submodularity, obtains an approximation ratio of (1 - 1/e - ϵ) in the exponent of the number of spanning trees for any ϵ > 0 in time Õ(m ϵ^-1 + (n + q) ϵ^-3), where m and q are the numbers of edges in the original graph and the candidate edge set, respectively. Our running time is optimal with respect to the input size up to logarithmic factors, and substantially improves upon the O(n^3) running time of the previously proposed greedy algorithm with approximation ratio (1 - 1/e) in the exponent. Notably, the independence of our running time from k is novel compared to conventional top-k selections on graphs, which usually run in Ω(mk) time. A key ingredient of our greedy algorithm is a routine for maintaining effective resistances under edge additions in an online-offline hybrid setting.

- We show the exponential inapproximability of this problem by proving that there exists a constant c > 0 such that it is NP-hard to approximate the optimum number of spanning trees in the exponent within (1 - c). This inapproximability result follows from a reduction from the minimum path cover problem in undirected graphs, whose hardness in turn follows from the constant inapproximability of the Traveling Salesman Problem (TSP) with distances 1 and 2. Thus, the approximation ratio of our algorithm is also optimal up to a constant factor in the exponent. To our knowledge, this is the first hardness of approximation result for maximizing the number of spanning trees in a graph, or equivalently, by Kirchhoff's matrix-tree theorem, maximizing the determinant of an SDDM matrix.


1 Introduction

We study the problem of maximizing the number of spanning trees in a weighted connected graph $G$ by adding at most $k$ edges from a given candidate edge set. By Kirchhoff’s matrix-tree theorem [Kir47], the number of spanning trees in $G$ equals the determinant of a minor of the graph Laplacian $L_G$. Thus, an equivalent problem is to maximize the determinant of a minor of $L_G$, or, more generally, to maximize the determinant of an SDDM matrix. The problem of maximizing the number of spanning trees, and the related problem of maximizing the determinant of an SDDM matrix, have applications in a wide variety of problem domains. We briefly review some of these applications below.
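For concreteness, here is a minimal Python sketch (our own illustration, not code from the paper; the helper names are ours) of Kirchhoff's matrix-tree theorem: the spanning tree count of a small graph is obtained as the determinant of a Laplacian minor.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Build the weighted graph Laplacian from (u, v, weight) triples."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def spanning_tree_count(n, weighted_edges):
    """Kirchhoff's matrix-tree theorem: delete one row and column, take the determinant."""
    L = laplacian(n, weighted_edges)
    return np.linalg.det(L[1:, 1:])

# An unweighted 4-cycle has exactly 4 spanning trees.
cycle = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(round(spanning_tree_count(4, cycle)))  # 4
```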

In robotics, the problem of maximizing the number of spanning trees has been applied to graph-based Simultaneous Localization and Mapping (SLAM). In graph-based SLAM [TM06], each vertex corresponds to a robot’s pose or position, and edges correspond to relative measurements between poses. The graph is used to estimate the most likely pose configurations. Since measurements can be noisy, a larger number of measurements results in a more accurate estimate. The problem of selecting which measurements to add to a SLAM pose graph to most improve the estimate has been recast as the problem of selecting the edges to add to the graph that maximize the number of spanning trees [KHD15, KHD16, KSHD16a, KSHD16b]. We note that the complexity of the estimation problem increases with the number of measurements, and so sparse, well-connected pose graphs are desirable [DK06]. Thus, one expects $k$ to be moderately sized with respect to the number of vertices.

In network science, the number of spanning trees has been studied as a measure of reliability in communication networks, where reliability is defined as the probability that every pair of vertices can communicate [Myr96]. Thus, network reliability can be improved by adding edges that most increase the number of spanning trees [FL01]. The number of spanning trees has also been used as a predictor of the spread of information in social networks [BAE11], with a larger number of spanning trees corresponding to better information propagation.

In the field of cooperative control, the log-number of spanning trees has been shown to capture the robustness of linear consensus algorithms. Specifically, the log-number of spanning trees quantifies the network entropy, a measure of how well the agents in the network maintain agreement when subject to external stochastic disturbances [SM14, dBCM15, ZEP11]. Thus, the problem of selecting which edges to add to the network graph to optimize robustness is equivalent to the log-number of spanning trees maximization problem [SM18, ZSA13]. Finally, the log-determinant of an SDDM matrix has also been used directly as a measure of controllability in more general linear dynamical systems [SCL16]. In this paper, we provide an approximation algorithm for maximizing the log-number of spanning trees of a connected graph by adding at most $k$ edges.

1.1 Our Results

Let $G = (V, E, w)$ denote an undirected graph with $n$ vertices and $m$ edges, where $w$ denotes the edge weight function. For another graph $H$ with edges supported on a subset of $V$, we write “$G$ plus $H$” or $G + H$ to denote the graph obtained by adding all edges in $H$ to $G$.

Let $L_G$ denote the Laplacian matrix of a graph $G$. The effective resistance between two vertices $u$ and $v$ is given by
$$R_G(u, v) = (e_u - e_v)^\top L_G^{\dagger} (e_u - e_v),$$
where $e_u$ denotes the $u$-th standard basis vector and $L_G^{\dagger}$ denotes the Moore-Penrose inverse of $L_G$.

The weight of a spanning tree $T$ in $G$ is defined as $w(T) = \prod_{e \in T} w(e)$, and the weighted number of spanning trees in $G$ is defined as the sum of the weights of all spanning trees, denoted by $\mathcal{T}_w(G)$. By Kirchhoff’s matrix-tree theorem [Kir47], the weighted number of spanning trees equals the determinant of the minor of the graph Laplacian obtained by deleting the row and column of an arbitrary vertex $v$: $\mathcal{T}_w(G) = \det(L_{-v})$.

In this paper, we study the problem of maximizing the weighted number of spanning trees in a connected graph by adding at most $k$ edges from a given candidate edge set. We give a formal description of this problem below.

Problem 1 (Number of Spanning Trees Maximization (NSTM)).

Given a connected undirected graph $G = (V, E)$, a candidate edge set $Q$ of $q$ edges, an edge weight function $w$, and an integer $k \le q$, add at most $k$ edges from $Q$ to $G$ so that the weighted number of spanning trees in the resulting graph is maximized. Namely, the goal is to find a set $P \subseteq Q$ of at most $k$ edges such that $\mathcal{T}_w(G + P)$ is maximized.

Algorithmic Results.

Our main algorithmic result is an algorithm that solves Problem 1 with an approximation factor of $(1 - 1/e - \epsilon)$ in the exponent of the number of spanning trees in nearly-linear time, as described by the following theorem:

Theorem 1.1.

There is an algorithm $\textsc{NSTMaximize}(G, Q, w, \epsilon, k)$, which takes a connected graph $G$ with $n$ vertices and $m$ edges, an edge set $Q$ of $q$ edges, an edge weight function $w$, a real number $\epsilon > 0$, and an integer $k \le q$, and returns an edge set $P \subseteq Q$ of at most $k$ edges in time $\tilde{O}(m\epsilon^{-1} + (n + q)\epsilon^{-3})$. With high probability, the following statement holds:
$$\log \mathcal{T}_w(G + P) \ge \left(1 - 1/e - \epsilon\right) \log \mathcal{T}_w(G + P^*),$$
where $P^*$ denotes an optimum solution.

The running time of NSTMaximize is independent of the number $k$ of edges to add to $G$, and it is optimal with respect to the input size up to logarithmic factors. This running time substantially improves upon the $O(n^3)$ running time of the previous greedy algorithm [KSHD16b], as well as the running time of its direct acceleration via fast effective resistance approximation [SS11, DKP17], where the latter becomes quadratic when $k$ is $\Omega(n)$. Moreover, the independence of our running time from $k$ is novel compared to conventional top-$k$ selections on graphs, which usually run in $\Omega(mk)$ time, such as the ones in [BBCL14, Yos14, MTU16, LPS18]. We briefly review these top-$k$ selections in Section 1.3.

A key ingredient of the algorithm NSTMaximize is a routine AddAbove that, given a sequence of edges and a threshold, sequentially adds to the graph any edge whose effective resistance (up to a $(1 \pm \epsilon)$ error) is above the threshold at the time the edge is processed. The routine AddAbove runs in time nearly linear in the total number of edges in the graph and the edge sequence. The performance of AddAbove is characterized in the following lemma:

Lemma 1.2.

There is a routine $\textsc{AddAbove}(G, s, w, \theta, \epsilon, k)$, which takes a connected graph $G$ with $n$ vertices and $m$ edges, an edge sequence $s = (s_1, \ldots, s_\ell)$, an edge weight function $w$, real numbers $\theta > 0$ and $\epsilon > 0$, and an integer $k$, performs a sequence of edge additions to $G$, and returns the set of edges that have been added (at most $k$ of them). The routine AddAbove runs in time nearly linear in $m + \ell$. With high probability, there exist estimates $\tilde{R}_1, \ldots, \tilde{R}_\ell$ satisfying
$$(1 - \epsilon)\, R_{G_{i-1}}(s_i) \le \tilde{R}_i \le (1 + \epsilon)\, R_{G_{i-1}}(s_i)$$
for all $i$, such that AddAbove has the same return value as the following procedure, where $G_0 = G$: for $i \leftarrow 1$ to $\ell$ do

       if $w(s_i) \cdot \tilde{R}_i \ge \theta$ and fewer than $k$ edges have been added so far then
             $G_i \leftarrow G_{i-1} + s_i$
      else
            $G_i \leftarrow G_{i-1}$

Return the set of edges in $G_\ell$ but not in $G$. The routine AddAbove can be seen as a hybrid of online and offline processing techniques. The routine is provided a specific edge sequence as input, as is typical in offline graph algorithms. However, the routine does not know what operation should be performed on an edge (i.e., whether the edge should be added to the graph) until the edge is processed, in an online fashion. The routine thus has to alternately update the graph and query effective resistances. This routine may be of independent interest in areas including dynamic algorithms and data streams.
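For intuition about the semantics that AddAbove must reproduce, here is a naive Python reference version of the sequential procedure in the lemma (a hedged sketch with our own helper names): it recomputes exact effective resistances via a dense pseudoinverse after every addition, so it runs far slower than the nearly-linear routine described above, but it makes the add-if-above-threshold behavior explicit.

```python
import numpy as np

def effective_resistance(L, u, v):
    """Exact effective resistance via the Moore-Penrose pseudoinverse."""
    Lp = np.linalg.pinv(L)
    return Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]

def add_above_naive(L, edge_sequence, weights, threshold, k):
    """Scan the edge sequence once; add any edge whose weight times (current)
    effective resistance clears the threshold, adding at most k edges in total."""
    L = L.copy()
    added = []
    for (u, v), w in zip(edge_sequence, weights):
        if len(added) >= k:
            break
        if w * effective_resistance(L, u, v) >= threshold:
            L[u, u] += w; L[v, v] += w
            L[u, v] -= w; L[v, u] -= w
            added.append((u, v))
    return added, L
```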

Hardness Results.

To further show that the approximation ratio of the algorithm NSTMaximize is also nearly optimal, we prove the following theorem, which indicates that Problem 1 is exponentially inapproximable:
Theorem 1.3.
There is a constant $c > 0$ such that, given an instance of Problem 1, it is NP-hard to find an edge set $P \subseteq Q$ with $|P| \le k$ satisfying
$$\log \mathcal{T}_w(G + P) \ge (1 - c)\, \log \mathcal{T}_w(G + P^*),$$
where $P^*$ is an optimum solution as defined in Theorem 1.1.
The proof of Theorem 1.3 follows from Lemma 1.5. By the same lemma, we can also state the inapproximability of Problem 1 in the following alternative form:
Theorem 1.4.
There is a constant $c > 0$ such that, given an instance of Problem 1, it is NP-hard to find an edge set $P \subseteq Q$ with $|P| \le k$ satisfying
where $P^*$ is an optimum solution as defined in Theorem 1.1.
Theorem 1.3 implies that the approximation ratio of NSTMaximize is optimal up to a constant factor in the exponent. To our knowledge, this is the first hardness of approximation result for maximizing the number of spanning trees in a graph, or equivalently, maximizing the determinant of an SDDM matrix (a graph Laplacian minor). In proving Theorem 1.3, we give a reduction from the minimum path cover problem in undirected graphs, whose hardness follows from the constant inapproximability of the traveling salesman problem (TSP) with distances 1 and 2. The idea behind our reduction is to consider a special family of graphs, each of which equals a star graph plus an arbitrary graph supported on its leaves. Let $G$ be a graph equal to a star $S$ plus a subgraph $H$ supported on $S$’s leaves. We can construct an instance of Problem 1 from $G$ by letting the original graph be the star $S$, the candidate edge set be the edge set of $H$, and the number of edges to add be chosen accordingly (see Section 4).
We give an example of such an instance in Figure 1.
Figure 1: An instance of Problem 1 constructed from $G$, which equals a star graph $S$ plus a graph $H$ supported on its leaves. The vertex in the center is the central vertex of the star. All red edges and green edges belong to the candidate edge set, where the red edges denote a possible selection of size $k$.

We then show in the following lemma that for two such instances, one whose underlying graph $H$ has a Hamiltonian path on the leaves (path cover number 1) and one whose $H$ has no small path cover, the optimum numbers of spanning trees differ by a constant factor in the exponent.
Lemma 1.5.
Let $G$ be an unweighted graph equal to a star $S$ plus a subgraph $H$ of $S$ supported on $S$’s leaves. For any constant , there exists an absolute constant such that, if $H$ does not have any path cover with , then
holds for any $P$ with $|P| \le k$. Here $F_n$ denotes the fan graph with $n$ triangles (i.e., a star plus a path supported on its leaves).
We remark that our reduction uses only simple graphs with all edge weights equal to 1. Thus, Problem 1 is exponentially inapproximable even for unweighted graphs without self-loops or multi-edges.

1.2 Ideas and Techniques

Algorithms.

By the matrix determinant lemma [Har97], the weighted number of spanning trees is multiplied by $1 + w(u, v)\, R_G(u, v)$ upon the addition of an edge $(u, v)$ with weight $w(u, v)$. The submodularity of $\log \mathcal{T}_w(G + \cdot)$ then follows immediately from Rayleigh’s monotonicity law [Sto87]. This indicates that one can use a simple greedy algorithm [NWF78] that picks the edge with the highest weighted effective resistance iteratively for $k$ rounds to achieve a $(1 - 1/e)$-approximation in the exponent. By computing effective resistances approximately in nearly-linear time [SS11, DKP17], one can implement this greedy algorithm with $k$ rounds of nearly-linear-time computation and obtain a $(1 - 1/e - \epsilon)$-approximation. To avoid searching for the edge with maximum effective resistance, one can invoke another greedy algorithm proposed in [BV14], which maintains a geometrically decreasing threshold and sequentially picks any edge with effective resistance above the threshold. However, since the latter part of this greedy algorithm requires the recomputation of effective resistances after each edge addition, it still needs running time proportional to $k$. Thus, our task reduces to performing the sequential updates faster.
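The multiplicative update given by the matrix determinant lemma is easy to check numerically. The sketch below (our own example, using exact dense linear algebra) verifies that adding an edge $(u, v)$ of weight $w$ multiplies the weighted spanning tree count by $1 + w \cdot R_G(u, v)$.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def tree_count(L):
    return np.linalg.det(L[1:, 1:])              # matrix-tree theorem

def eff_res(L, u, v):
    Lp = np.linalg.pinv(L)
    return Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]

base = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 1.0)]   # a small connected graph
u, v, w = 0, 2, 1.5                                           # one candidate edge
L = laplacian(4, base)
lhs = tree_count(laplacian(4, base + [(u, v, w)]))            # count after the addition
rhs = tree_count(L) * (1 + w * eff_res(L, u, v))              # matrix determinant lemma
print(np.isclose(lhs, rhs))                                   # True
```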
We note that for a specific threshold, the ordering in which we perform the sequential updates does not affect our overall approximation. Thus, by picking an arbitrary ordering of the edges in the candidate set $Q$, we can transform this seemingly online task of processing edges sequentially into an online-offline hybrid setting. While we do not know whether to add an edge until the time we process it, we do know the order in which the edges will be processed. We perform divide-and-conquer on the edge sequence, while alternately querying effective resistances and updating the graph. The idea is that if we are dealing with a short interval of the edge sequence, instead of working with the entire graph, we can work with a graph whose size is proportional to the length of the interval and that preserves the effective resistances of the edges in the interval. As we are querying effective resistances only for candidate edges, the equivalent graph for an interval can be obtained by taking the Schur complement onto the endpoints of the edges in it. This can be done in nearly-linear time in the graph size using the approximate Schur complement routine in [DKP17]. Specifically, in the first step of the divide-and-conquer, we split the edge sequence into two halves $s^{(1)}$ and $s^{(2)}$.
We note the following: (1) edge additions in $s^{(2)}$ do not affect effective resistance queries in $s^{(1)}$; (2) an effective resistance query in $s^{(2)}$ is affected by (a) edge additions in $s^{(1)}$, and (b) edge additions in $s^{(2)}$ which are performed before the specific query. Since $s^{(1)}$ is completely independent of $s^{(2)}$, we can handle queries and updates in $s^{(1)}$ by recursing on its Schur complement. We then note that edge additions in $s^{(1)}$ are performed entirely before queries in $s^{(2)}$, and thus can be seen as offline modifications with respect to $s^{(2)}$. Moreover, all queries in $s^{(2)}$ are affected by the same set of modifications from $s^{(1)}$. We thus account for the total contribution of $s^{(1)}$ to $s^{(2)}$ by computing the Schur complement onto the endpoints of $s^{(2)}$ in the graph updated by the edge additions made while processing $s^{(1)}$. In doing so, we have addressed (2a) for all queries in $s^{(2)}$, and thus have made $s^{(2)}$ independent of $s^{(1)}$. This indicates that we can process $s^{(2)}$ by also recursing on its Schur complement. We keep recursing until the interval contains only one edge, at which point we directly query the edge’s effective resistance and decide whether to add it to the graph. Essentially, our algorithm computes the effective resistance of each edge by eliminating the entire rest of the graph, while heavily re-using previous eliminations. This gives a nearly-linear time routine for performing sequential updates; a sketch of the recursion appears below, and details can be found in Section 3.1.
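The recursion can be prototyped with exact Schur complements and dense linear algebra, as in the following sketch (our own code and naming; the paper's nearly-linear routine replaces the exact Schur complements below with the sparse approximate ones of [DKP17] and manages the error parameters carefully).

```python
import numpy as np

def schur(L, keep):
    """Exact Schur complement of the Laplacian L onto the index set `keep`."""
    drop = [i for i in range(L.shape[0]) if i not in keep]
    if not drop:
        return L[np.ix_(keep, keep)].copy()
    A = L[np.ix_(keep, keep)]
    B = L[np.ix_(keep, drop)]
    C = L[np.ix_(drop, keep)]
    D = L[np.ix_(drop, drop)]
    return A - B @ np.linalg.solve(D, C)

def eff_res(L, i, j):
    """Exact effective resistance via the pseudoinverse (fine for small matrices)."""
    Lp = np.linalg.pinv(L)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

def add_above_dc(L, verts, edges, threshold, added, k):
    """Divide-and-conquer pass over `edges` (triples (u, v, w) with labels in `verts`):
    add any edge whose w * effective resistance clears `threshold`, at most k in total.
    L is the Laplacian of the current graph restricted (by Schur complement) to `verts`."""
    pos = {v: i for i, v in enumerate(verts)}
    if len(edges) == 1:
        (u, v, w), = edges
        if len(added) < k and w * eff_res(L, pos[u], pos[v]) >= threshold:
            added.append((u, v, w))
        return added
    mid = len(edges) // 2
    for half in (edges[:mid], edges[mid:]):
        # Endpoints of this half; everything else is eliminated exactly.
        V_half = sorted({x for (u, v, _) in half for x in (u, v)})
        L_half = schur(L, [pos[x] for x in V_half])
        n_before = len(added)
        add_above_dc(L_half, V_half, half, threshold, added, k)
        # Reflect the edges selected inside the recursion back into L, so that
        # the second half sees all additions made while processing the first half.
        for (u, v, w) in added[n_before:]:
            i, j = pos[u], pos[v]
            L[i, i] += w; L[j, j] += w
            L[i, j] -= w; L[j, i] -= w
    return added
```

Calling add_above_dc(L, list(range(n)), candidate_edges, theta, [], k) produces the same selection as a naive sequential pass (the Laplacian passed in is updated in place with the selected edges), but each effective resistance query is answered on a Schur complement whose size is proportional to the length of the current interval.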

Hardness.

A key step in our reduction is to show the connection between the minimum path cover and Problem 1. To this end, we consider an instance of Problem 1 in which the original graph is a star graph $S$, the candidate edge set forms an underlying graph supported on $S$’s leaves, and the number of edges to add is fixed accordingly (see Section 4). We show that for two instances whose underlying graphs have sufficiently different path cover numbers, their optimum numbers of spanning trees differ exponentially. Consider any set $P$ consisting of $k$ edges from the candidate set, and any path cover $\mathcal{P}$ of the underlying graph using only edges in $P$. Clearly, $|\mathcal{P}|$ is greater than or equal to the minimum path cover number of the underlying graph. If $P$ forms a Hamiltonian path in the underlying graph, $\mathcal{T}(S + P)$ can be explicitly calculated [MEM14] and equals
(1)
When the path cover number of the underlying graph is more than one, $\mathcal{T}(S + P)$ can be expressed as a product of a sequence of effective resistances and $\mathcal{T}(S + \mathcal{P})$. Specifically, for an arbitrary ordering $e_1, \ldots, e_t$ of the edges in $P$ but not in the path cover $\mathcal{P}$, we define a graph sequence by $G_0 = S + \mathcal{P}$ and $G_i = G_{i-1} + e_i$ for $1 \le i \le t$.
By the matrix determinant lemma, we can write the number of spanning trees in $S + P$ as
$$\mathcal{T}(S + P) = \mathcal{T}(S + \mathcal{P}) \cdot \prod_{i=1}^{t} \left(1 + R_{G_{i-1}}(e_i)\right). \quad (2)$$
Note that we omit edge weights here since we are dealing with unweighted graphs.
Let $\ell_j$ denote the number of edges in the $j$-th path of $\mathcal{P}$. Since all paths in $\mathcal{P}$ are disjoint, $\mathcal{T}(S + \mathcal{P})$ can be expressed as
(3)
where $S_t$ denotes the star graph with $t$ leaves, and the second equality follows from (1).
When the path cover number of the underlying graph is large, we show that the number of spanning trees in $S + P$ is exponentially smaller than in the Hamiltonian path case. Let $\mathcal{P}_{\mathrm{short}}$ denote the set of paths in $\mathcal{P}$ with small lengths, and let $p$ denote the number of paths in $\mathcal{P}_{\mathrm{short}}$. Then, $\mathcal{T}(S + P)$ is exponentially smaller for the following reasons. First, by (1) and (3), $\mathcal{T}(S + \mathcal{P})$ is smaller than the Hamiltonian path count by at least a constant multiplicative factor for each short path. Second, the effective resistances between the endpoints of the edges in $P$ but not in the path cover $\mathcal{P}$ are bounded and hence cannot compensate for this factor. Third, $p$ is large by Markov’s inequality, which ensures that the accumulated factor is exponential. This leads to the exponential drop of $\mathcal{T}(S + P)$. We defer our proof details to Section 4.
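A minimal numeric illustration (our own, not from the paper) of the phenomenon behind Lemma 1.5: starting from a star plus a Hamiltonian path on its leaves (the fan graph) and breaking the path into more and more pieces shrinks the spanning tree count by a roughly constant factor per break, i.e., exponentially in the path cover number.

```python
import numpy as np

def tree_count(n, edges):
    """Spanning tree count via the matrix-tree theorem (unit edge weights)."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

leaves = 12
hub = leaves                                        # vertex `leaves` is the star's center
star = [(hub, i) for i in range(leaves)]
path = [(i, i + 1) for i in range(leaves - 1)]      # a Hamiltonian path on the leaves

for breaks in range(4):
    removed = {3 * j + 2 for j in range(breaks)}    # break the path into more pieces
    pieces = [e for i, e in enumerate(path) if i not in removed]
    print(breaks, tree_count(leaves + 1, star + pieces))
# Prints 46368, 20672, 9216, 4096: roughly a constant factor lost per break.
```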

1.3 Related Work

Maximizing the Number of Spanning Trees.

There has been limited previous algorithmic study of the problem of maximizing the number of spanning trees in a graph. Problem 1 was also studied in [KSHD16b]. That work proposed a greedy algorithm which, by computing effective resistances exactly, achieves an approximation factor of $(1 - 1/e)$ in the exponent of the number of spanning trees in $O(n^3)$ time. As far as we are aware, the hardness of Problem 1 has not been studied in any previous work. A related problem, that of identifying, given $n$ and $m$, the graphs with $n$ vertices and $m$ edges that have the maximum number of spanning trees, has also been studied. However, most solutions found for this problem are for either sparse graphs [BLS91, Wan94] or dense graphs [Shi74, Kel96, GM97, PBS98, PR02]. Of particular note, a regular complete multipartite graph has been shown to have the maximum number of spanning trees among all simple graphs with the same numbers of vertices and edges [Che81].

Maximizing Determinants.

Maximizing the determinant of a positive semidefinite (PSD) matrix is a problem that has been extensively studied in the theory of computation. For selecting a principal minor of a PSD matrix with the maximum determinant under certain constraints, [Kha95, ÇM09, SEFM15, NS16, ESV17] gave algorithms for approximating the optimum solution. [SEFM15, Nik15] also studied the related problem of finding a simplex of maximum volume inside a given convex hull, which can be reduced to the former problem under a cardinality constraint. For finding the principal submatrix of a positive semidefinite matrix with the largest determinant, [Nik15] gave an approximation algorithm. On the hardness side, all of these problems have been shown to be exponentially inapproximable [Kou06, ÇM13, SEFM15].
The problem studied in [Nik15] can also be stated as follows: given vectors $v_1, \ldots, v_q \in \mathbb{R}^n$ and an integer $k$, find a subset $S$ of cardinality $k$ so that the product of the largest eigenvalues of the matrix $\sum_{i \in S} v_i v_i^\top$ is maximized. [Nik15] gave a polynomial-time approximation algorithm for this problem. When $k \le n$, the problem is equivalent to maximizing the determinant of the matrix formed by the selected vectors. [SX18] showed improved approximation guarantees for this regime; moreover, they showed that for sufficiently large $k$, one can obtain a near-optimal approximation. Using the algorithms in [Nik15, SX18], we can obtain an approximation to a problem of independent interest but different from Problem 1: select at most $k$ edges from a candidate edge set to add to an empty graph so that the number of spanning trees is maximized. In contrast, in Problem 1, we seek to add edges to a graph that is already connected. Thus, their algorithms do not directly apply to Problem 1. In [ALSW17a, ALSW17b], the authors also studied this problem. They gave an algorithm that, for sufficiently large $k$, gives a $(1 + \epsilon)$-approximation. Their algorithm first computes a fractional solution using convex optimization and then rounds the fractional solution to integers using spectral sparsification. Since spectral approximation is preserved under edge additions, their algorithm can be applied to Problem 1 to obtain an approximation of the same quality. However, their algorithm needs $k$ to be at least on the order of the number of vertices spanned by the candidate edge set (which is natural in real-world datasets [KGS11, KHD16]). In contrast, $k$ can be arbitrarily smaller than $n$ in our setting. We remark that both our setting of adding edges to a connected graph and the scenario where $k$ can be arbitrarily smaller than $n$ have been used in previous works on graph optimization problems such as maximizing the algebraic connectivity of a graph [KMST10, NXC10, GB06]. We also remark that the algorithms in [Nik15, SX18, ALSW17a, ALSW17b] all need to solve a convex relaxation, which takes polynomial time, in contrast to our nearly-linear running time; efficiency is crucial in applications [KHD16, KSHD16a, SCL16, SM18]. We also note that [DPPR17] gave an algorithm that approximates the determinant of an SDDM matrix to a small relative error. In our algorithm, we are able to maximize the determinant in nearly-linear time without computing it.

Fast Computation of Effective Resistances.

Fast computation of effective resistances has various applications in sparsification [SS11, ADK16, LS17], sampling random spanning trees [DKP17, DPPR17, Sch18], and solving linear systems [KLP16]. [SS11, KLP16] gave approximation routines that, using fast Laplacian solvers [ST14, CKM14], compute effective resistances for all edges to within multiplicative errors in nearly-linear time. [CGP18] presents an algorithm that computes the effective resistances of all edges to within multiplicative errors. For computing effective resistances for a given set of vertex pairs, [DKP17] gave a routine that, using a divide-and-conquer based on Schur complement approximation [KS16], computes approximate effective resistances between the given pairs of vertices. [DKP17] also used a divide-and-conquer approach to sample random spanning trees in dense graphs faster. For maintaining approximations to all-pair effective resistances of a fully-dynamic graph, [DGGP18] gave a data structure with low expected amortized update and query time. In [DPPR17], the authors combined the divide-and-conquer idea and their determinant-preserving sparsification to further accelerate random spanning tree sampling in dense graphs. A subset of the authors of this paper (Li and Zhang) recently [LZ18] used a divide-and-conquer approach to compute, for every edge $e$, the sum of effective resistances between all vertex pairs in the graph in which $e$ is deleted. Our routine for performing fast sequential updates in Section 3.1 is motivated by these divide-and-conquer methods and is able to cope with an online-offline hybrid setting.

Top-$k$ Selections on Graphs.

Conventional top-$k$ selections on graphs that rely on submodularity usually run in $\Omega(mk)$ time, where $m$ is the number of edges. Here, we give a few examples. In [BBCL14], the authors studied the problem of maximizing the spread of influence through a social network. Specifically, they studied the problem of finding a set of $k$ initial seed vertices in a network so that, under the independent cascade model [KKT03] of network diffusion, the expected number of vertices reachable from the seeds is maximized. Using hypergraph sampling, the authors gave a greedy algorithm that achieves a $(1 - 1/e - \epsilon)$-approximation. [Yos14, MTU16] studied the problem of finding a vertex set $S$ with maximum betweenness centrality subject to the constraint $|S| \le k$. Both algorithms in [Yos14, MTU16] are based on sampling shortest paths. To obtain a $(1 - 1/e - \epsilon)$-approximation, their algorithms need at least $\Omega(mk)$ running time according to Theorem 2 of [MTU16]. Under an additional assumption on the maximum betweenness centrality among all sets of $k$ vertices, the algorithm in [MTU16] is able to obtain a solution with the same approximation ratio in less time. A subset of the authors of this paper (Li, Yi, and Zhang) and Peng and Shan recently [LPS18] studied the problem of finding a set $S$ of $k$ vertices so that the quantity
is minimized. Here, the relevant term for a vertex $v$ equals, in the graph in which $S$ is identified as a single new vertex, the effective resistance between $v$ and the new vertex. By computing marginal gains for all vertices in a way similar to the effective resistance estimation routine in [SS11], the authors achieved their approximation guarantee in $\Omega(mk)$ time.
We remark that there are algorithms for maximizing submodular functions that use only a nearly-linear number of evaluations of the objective function [BV14, EN17]. However, in many practical scenarios, evaluating the objective function or the marginal gain is expensive. Thus, directly applying those algorithms usually requires superlinear or even quadratic running time.

Acknowledgements

We thank Richard Peng, He Sun, and Chandra Chekuri for very helpful discussions.

2 Preliminaries

2.1 Graphs, Laplacians, and Effective Resistances

Let $G = (V, E, w)$ be a positively weighted undirected graph, where $V$ and $E$ are respectively the vertex set and the edge set of the graph, and $w$ is the weight function. Let $n = |V|$ and $m = |E|$. The Laplacian matrix $L$ of $G$ is given by
$$L(u, v) = \begin{cases} -w(u, v) & \text{if } u \sim v, \\ \sum_{z \sim u} w(u, z) & \text{if } u = v, \\ 0 & \text{otherwise,} \end{cases}$$
where we write $u \sim v$ iff $(u, v) \in E$. We will use $L$ and $L_G$ interchangeably when the context is clear.
If we assign an arbitrary orientation to each edge of $G$, we obtain a signed edge-vertex incidence matrix $B$ of graph $G$ defined as
$$B(e, v) = \begin{cases} 1 & \text{if } v \text{ is } e\text{'s head}, \\ -1 & \text{if } v \text{ is } e\text{'s tail}, \\ 0 & \text{otherwise.} \end{cases}$$
Let $W$ be an $m \times m$ diagonal matrix in which $W(e, e) = w(e)$. Then we can express $L$ as $L = B^\top W B$. It follows that a quadratic form of $L$ can be written as
$$x^\top L x = \sum_{(u, v) \in E} w(u, v) \left(x(u) - x(v)\right)^2.$$
It is then observed that $L$ is positive semidefinite and has exactly one zero eigenvalue if $G$ is a connected graph. If we let $b_e$ be the column of $B^\top$ corresponding to edge $e$, we can then write $L$ as a sum of rank-1 matrices: $L = \sum_{e \in E} w(e)\, b_e b_e^\top$.
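The factorization $L = B^\top W B$, the quadratic form, and the rank-one decomposition are easy to sanity-check numerically; the following is a small sketch with our own variable names.

```python
import numpy as np

edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 3.0), (2, 3, 1.5)]   # (u, v, weight)
n, m = 4, len(edges)

# Signed edge-vertex incidence matrix under an arbitrary orientation (u -> v).
B = np.zeros((m, n))
for idx, (u, v, _) in enumerate(edges):
    B[idx, u], B[idx, v] = 1.0, -1.0

W = np.diag([w for _, _, w in edges])
L = B.T @ W @ B                                                # L = B^T W B

# Rank-one decomposition: L = sum_e w(e) * b_e b_e^T.
L_sum = sum(w * np.outer(B[idx], B[idx]) for idx, (_, _, w) in enumerate(edges))

# Quadratic form: x^T L x = sum_e w(e) (x(u) - x(v))^2.
x = np.random.randn(n)
quad = sum(w * (x[u] - x[v]) ** 2 for u, v, w in edges)

print(np.allclose(L, L_sum), np.isclose(x @ L @ x, quad))      # True True
```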
The Laplacian matrix is related to the number of spanning trees by Kirchhoff’s matrix-tree theorem [Kir47], which expresses $\mathcal{T}_w(G)$ using any principal minor of $L$ obtained by deleting one vertex. We denote by $L_{-v}$ the principal submatrix derived from $L$ by removing the row and column corresponding to vertex $v$. Since the removal of any vertex leads to the same result, we will usually remove the vertex with index $n$. Thus, we write Kirchhoff’s matrix-tree theorem as
$$\mathcal{T}_w(G) = \det\left(L_{-n}\right). \quad (4)$$
The effective resistance between any pair of vertices can be defined by a quadratic form of the Moore-Penrose inverse of the Laplacian matrix [KR93].
Definition 2.1.
Given a connected graph $G$ with Laplacian matrix $L$, the effective resistance between any two vertices $u$ and $v$ is defined as
$$R_G(u, v) \stackrel{\mathrm{def}}{=} (e_u - e_v)^\top L^{\dagger} (e_u - e_v).$$
For two matrices $A$ and $B$, we write $A \preceq B$ to denote $x^\top A x \le x^\top B x$ for all vectors $x$. If the Laplacians of two connected graphs $G$ and $H$ satisfy $L_G \preceq L_H$, then $R_G(u, v) \ge R_H(u, v)$ for all vertices $u$ and $v$.
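A quick numeric check of Definition 2.1 and the Loewner-order fact (our own sketch): the second graph below is the first one plus an extra edge, so its Laplacian dominates in the Loewner order and its effective resistances are no larger.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def eff_res(L, u, v):
    e = np.zeros(L.shape[0]); e[u], e[v] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e            # (e_u - e_v)^T L^dagger (e_u - e_v)

G_edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
H_edges = G_edges + [(0, 2, 1.0)]               # H = G plus an edge, so L_G <= L_H
LG, LH = laplacian(4, G_edges), laplacian(4, H_edges)

print(all(eff_res(LH, u, v) <= eff_res(LG, u, v) + 1e-12
          for u in range(4) for v in range(u + 1, 4)))          # True
```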

2.2 Submodular Functions

We next give the definitions of monotone and submodular set functions. For conciseness, we use $S + u$ to denote $S \cup \{u\}$.
Definition 2.2 (Monotonicity).
A set function $f : 2^U \to \mathbb{R}$ is monotone if $f(S) \le f(T)$ holds for all $S \subseteq T \subseteq U$.
Definition 2.3 (Submodularity).
A set function $f : 2^U \to \mathbb{R}$ is submodular if $f(S + u) - f(S) \ge f(T + u) - f(T)$ holds for all $S \subseteq T \subseteq U$ and $u \in U \setminus T$.

2.3 Schur Complements

Let $F$ and $C$ be a partition of the vertex set $V$, which means $F \cup C = V$ and $F \cap C = \emptyset$. Then, we decompose the Laplacian $L$ into blocks using $F$ and $C$ as the block indices:
$$L = \begin{pmatrix} L_{FF} & L_{FC} \\ L_{CF} & L_{CC} \end{pmatrix}.$$
The Schur complement of $G$, or $L$, onto $C$ is defined as
$$\mathrm{SC}(G, C) \stackrel{\mathrm{def}}{=} L_{CC} - L_{CF} L_{FF}^{-1} L_{FC},$$
and we will use $\mathrm{SC}(G, C)$ and $\mathrm{SC}(L, C)$ interchangeably.
The Schur complement preserves the effective resistances between vertices in $C$.
Fact 2.4.
Let $C$ be a subset of the vertices of a graph $G$. Then for any vertices $u, v \in C$, we have
$$R_G(u, v) = R_{\mathrm{SC}(G, C)}(u, v).$$
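Fact 2.4 can be checked directly with dense linear algebra; the sketch below (our own helpers) eliminates the vertices outside $C$ exactly and compares effective resistances before and after.

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def schur(L, keep):
    """Exact Schur complement of L onto the vertex subset `keep`."""
    drop = [i for i in range(L.shape[0]) if i not in keep]
    A, B = L[np.ix_(keep, keep)], L[np.ix_(keep, drop)]
    C, D = L[np.ix_(drop, keep)], L[np.ix_(drop, drop)]
    return A - B @ np.linalg.solve(D, C)

def eff_res(L, i, j):
    Lp = np.linalg.pinv(L)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

L = laplacian(5, [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 4, 1.0), (4, 0, 3.0)])
C = [0, 2, 3]                                   # keep these vertices; eliminate 1 and 4
S = schur(L, C)                                 # Laplacian of a graph supported on C
# Effective resistances between vertices of C are preserved (vertex 0 -> index 0, etc.).
print(np.isclose(eff_res(L, 0, 2), eff_res(S, 0, 1)),
      np.isclose(eff_res(L, 2, 3), eff_res(S, 1, 2)))   # True True
```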

3 Nearly-Linear Time Approximation Algorithm

By the matrix determinant lemma [Har97], we have
$$\det\left(L_{-n} + w(u, v)\, b_{u,v} b_{u,v}^\top\right) = \det\left(L_{-n}\right) \left(1 + w(u, v)\, b_{u,v}^\top L_{-n}^{-1} b_{u,v}\right).$$
Thus, by Kirchhoff’s matrix-tree theorem, we can write the increase of $\log \mathcal{T}_w$ upon the addition of an edge $(u, v)$ as
$$\log \mathcal{T}_w(G + (u, v)) - \log \mathcal{T}_w(G) = \log\left(1 + w(u, v)\, R_G(u, v)\right),$$
which immediately implies the submodularity of $\log \mathcal{T}_w(G + \cdot)$ by Rayleigh’s monotonicity law [Sto87].
Lemma 3.1.
The set function $S \mapsto \log \mathcal{T}_w(G + S)$ is monotone and submodular.
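Lemma 3.1 can be verified by brute force on a small instance (our own sketch): enumerate all subsets of a tiny candidate edge set and check monotonicity and diminishing marginal returns of $S \mapsto \log \mathcal{T}_w(G + S)$.

```python
import itertools
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def log_trees(n, edges):
    return np.linalg.slogdet(laplacian(n, edges)[1:, 1:])[1]   # log T_w via matrix-tree

n = 5
base = [(i, (i + 1) % n, 1.0) for i in range(n)]               # a 5-cycle
cand = [(0, 2, 1.0), (1, 3, 1.0), (1, 4, 2.0), (0, 3, 1.0)]    # candidate edges

def f(S):                                                      # f(S) = log T_w(G + S)
    return log_trees(n, base + [cand[i] for i in S])

def subsets(items):
    return itertools.chain.from_iterable(
        itertools.combinations(items, r) for r in range(len(items) + 1))

ground = range(len(cand))
ok = True
for S in map(set, subsets(ground)):
    for T in map(set, subsets(ground)):
        if not S <= T:
            continue
        ok &= f(T) >= f(S) - 1e-9                                # monotonicity
        for e in set(ground) - T:
            ok &= f(S | {e}) - f(S) >= f(T | {e}) - f(T) - 1e-9  # diminishing returns
print(ok)   # True
```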
Thus, one can obtain a $(1 - 1/e)$-approximation (in the exponent) for Problem 1 by a simple greedy algorithm that, in each of $k$ iterations, selects the edge that yields the largest effective resistance times edge weight [NWF78]. Our algorithm is based on another greedy algorithm for maximizing a submodular function, proposed in [BV14]. Instead of selecting the edge with the highest effective resistance in each iteration, the algorithm maintains a geometrically decreasing threshold and sequentially selects any edge with effective resistance above the threshold. The idea behind this greedy algorithm is that one can always pick an edge whose marginal gain is within a $(1 - \epsilon)$ factor of the highest. In doing so, the algorithm is able to obtain a $(1 - 1/e - \epsilon)$-approximation using a nearly-linear number of marginal value evaluations. We give this algorithm in Algorithm 1.

Input: $G$: a connected graph. $Q$: a candidate edge set with $|Q| = q$. $w$: an edge weight function. $\epsilon$: an error parameter. $k$: number of edges to add.
Output: $P$: a subset of $Q$ with at most $k$ edges.
1  $P \leftarrow \emptyset$ and initialize the threshold $\theta$
2  while $\theta$ has not dropped below its final value and $|P| < k$ do
3        forall $e = (u, v) \in Q \setminus P$ do
4              if $w(e) \cdot R_{G + P}(u, v) \ge \theta$ and $|P| < k$ then
5                    $P \leftarrow P \cup \{e\}$
6        $\theta \leftarrow (1 - \epsilon)\, \theta$
7  return $P$
Algorithm 1: $\textsc{GreedyTh}(G, Q, w, \epsilon, k)$

The performance of the algorithm GreedyTh is characterized in the following theorem.
Theorem 3.2.
The algorithm $\textsc{GreedyTh}(G, Q, w, \epsilon, k)$ takes a connected graph $G$ with $n$ vertices and $m$ edges, an edge set $Q$ of $q$ edges, an edge weight function $w$, a real number $\epsilon > 0$, and an integer $k \le q$, and returns an edge set $P \subseteq Q$ of at most $k$ edges. The algorithm computes effective resistances for a nearly-linear number of pairs of vertices, and uses a nearly-linear number of additional arithmetic operations. The returned set $P$ satisfies
$$\log \mathcal{T}_w(G + P) \ge \left(1 - 1/e - \epsilon\right) \log \mathcal{T}_w(G + P^*),$$
where $P^*$ denotes an optimum solution.
A natural idea for accelerating the algorithm GreedyTh is to compute effective resistances approximately, instead of exactly, using the routines in [SS11, DKP17]. To this end, we develop the following lemma, which shows that to obtain a multiplicative approximation of $\log\left(1 + w(e)\, R(e)\right)$,
it suffices to compute a multiplicative approximation of $w(e)\, R(e)$. We note that a statement of this kind can be derived from the fact that the relevant function is Lipschitz; since we are using $(1 \pm \epsilon)$-approximations, we give an alternative proof.
Lemma 3.3.
For any non-negative scalars $a$ and $b$ such that
$$(1 - \epsilon)\, a \le b \le (1 + \epsilon)\, a,$$
the following statement holds:
$$(1 - \epsilon) \log(1 + a) \le \log(1 + b) \le (1 + \epsilon) \log(1 + a).$$
Proof.
Since and , we only need to prove the -approximation between and . Let be the positive root of the equation
Then, we have for and for .
For , we have
For , we have
By using the effective resistance approximation routine in [DKP17], one can pick an edge with effective resistance above the threshold up to a $(1 \pm \epsilon)$ error. Therefore, by an analysis similar to that of Algorithm 1 of [BV14], one can obtain a $(1 - 1/e - \epsilon)$-approximation whose running time contains a factor of $k$. The reason for this factor of $k$ is that one has to recompute the effective resistances whenever an edge is added to the graph. To make the running time independent of $k$, we need a faster algorithm for performing the sequential updates, i.e., the inner loop of Algorithm 1.
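The inequality in Lemma 3.3 can be spot-checked numerically (our own sketch; checking only the two extreme estimates $b = (1 \pm \epsilon) a$ suffices, by monotonicity of the logarithm).

```python
import numpy as np

rng = np.random.default_rng(0)
ok = True
for _ in range(10000):
    a = rng.exponential(5.0)                    # plays the role of w(e) * R(e) >= 0
    eps = rng.uniform(0.0, 0.5)
    for b in ((1 - eps) * a, (1 + eps) * a):    # the two extreme (1 +/- eps)-estimates
        lo, hi = (1 - eps) * np.log1p(a), (1 + eps) * np.log1p(a)
        ok &= lo - 1e-12 <= np.log1p(b) <= hi + 1e-12
print(ok)   # True
```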

3.1 Routine for Faster Sequential Edge Additions

We now use the ideas stated in Section 1.2 to perform the sequential updates in the inner loop of Algorithm 1 in nearly-linear time. We use a routine from [DKP17] to compute approximate Schur complements:
Lemma 3.4.
There is a routine that takes a Laplacian $L$ corresponding to a graph $G$ with $n$ vertices and $m$ edges, a vertex set $C \subseteq V$, and real numbers $\epsilon > 0$ and $\delta > 0$, and returns a sparse graph Laplacian $\widetilde{L}$ supported on $C$. With probability at least $1 - \delta$, $\widetilde{L}$ satisfies
$$(1 - \epsilon)\, \mathrm{SC}(L, C) \preceq \widetilde{L} \preceq (1 + \epsilon)\, \mathrm{SC}(L, C).$$
The routine runs in time nearly linear in $m$.
We give the routine for performing fast sequential updates in Algorithm 2.

Input: $G$: a connected graph. $s$: an edge sequence of $\ell$ edges. $w$: an edge weight function. $\theta$: a threshold. $\epsilon$: an error parameter. $k$: number of edges to add.
Output: $P$: a subset of $s$ with at most $k$ edges.
1  Let $\ell = |s|$, and let $L$ be the Laplacian matrix of $G$.
2  if $\ell = 1$ then
3        Compute an estimate $\tilde{R}$ of the effective resistance of the single edge $s_1$ by inverting the (small) Laplacian $L$.
4        if $w(s_1) \cdot \tilde{R} \ge \theta$ and fewer than $k$ edges have been added then
5              return $\{s_1\}$
6        else
7              return $\emptyset$
8  else
9        Divide $s$ into two intervals $s^{(1)} = (s_1, \ldots, s_{\lceil \ell/2 \rceil})$ and $s^{(2)} = (s_{\lceil \ell/2 \rceil + 1}, \ldots, s_\ell)$, and let $C_1$ and $C_2$ be the respective sets of endpoints of edges in $s^{(1)}$ and $s^{(2)}$.
10      $P_1 \leftarrow$ the result of recursing on the approximate Schur complement of $L$ onto $C_1$ (computed by the routine of Lemma 3.4) with the sequence $s^{(1)}$.
11      Update the graph Laplacian by adding the edges in $P_1$ to $L$.
12      $P_2 \leftarrow$ the result of recursing on the approximate Schur complement of the updated Laplacian onto $C_2$ with the sequence $s^{(2)}$.
13      return $P_1 \cup P_2$
Algorithm 2: $\textsc{AddAbove}(G, s, w, \theta, \epsilon, k)$
The performance of AddAbove is characterized in Lemma 1.2.
Proof of Lemma 1.2.
We first prove the correctness of the lemma by induction on $\ell$. When $\ell = 1$, the routine executes the base case of Algorithm 2. Lemma 3.4 guarantees that the approximate Schur complement used in this call satisfies
which implies