Graph sparsification is a procedure that, given a graph $G$, returns another graph $H$, typically with much fewer edges, that approximately preserves some characteristics of $G$. Graph sparsification originated from the study of combinatorial graph algorithms related to cuts and flows [BK96, EGIN97]. Many different notions of graph sparsification have been extensively studied; for instance, spanners [Che86] approximately preserve pairwise distances, whereas cut sparsifiers approximately preserve the sizes of all cuts [BK96]. Spielman and Teng [ST14, ST11b] defined spectral sparsification, a notion that is strictly stronger than cut sparsification.
Spectral sparsifiers have found numerous applications to graph algorithms. They are key to fast solvers for Laplacian linear systems [ST14, ST11b, KMP14, KMP11]. Recently, they have been used as the sole graph-theoretic primitive in graph algorithms including solving linear systems [PS14, KLP16], sampling random spanning trees [DKP17], measuring edge centrality [LZ18, LPS18], etc.
For an undirected, weighted graph $G = (V, E, w)$, we recall that the Laplacian $L_G$ of $G$ is the unique symmetric matrix such that for all $x \in \mathbb{R}^V$ we have
$$x^\top L_G x = \sum_{(u,v) \in E} w_{uv} (x_u - x_v)^2.$$
For two positive scalars $a, b$, we write $a \approx_\epsilon b$ if $e^{-\epsilon} b \le a \le e^{\epsilon} b$. We say the graph $H$ is an $\epsilon$-spectral sparsifier of $G$ if, for all $x \in \mathbb{R}^V$,
$$x^\top L_H x \approx_\epsilon x^\top L_G x. \qquad (1)$$
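To make these definitions concrete, the following small self-contained sketch (with illustrative function names of our choosing, not from the paper's pseudocode) builds the Laplacian of a toy weighted graph and checks the quadratic-form identity above directly:

```python
# Sketch: build the Laplacian of a small weighted graph and verify that
# x^T L x equals sum over edges of w_uv * (x_u - x_v)^2.

def laplacian(n, edges):
    """edges: list of (u, v, w). Returns L as a dense list of lists."""
    L = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        L[u][u] += w
        L[v][v] += w
        L[u][v] -= w
        L[v][u] -= w
    return L

def quad_form(L, x):
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))

edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 3.0), (2, 3, 1.0)]
L = laplacian(4, edges)
x = [0.5, -1.0, 2.0, 0.0]

# The two sides of the identity agree.
direct = sum(w * (x[u] - x[v]) ** 2 for u, v, w in edges)
assert abs(quad_form(L, x) - direct) < 1e-9

# Restricting x to 0/1 indicator vectors recovers cut values:
# the cut separating {0, 1} from {2, 3} has weight 1 + 3 = 4.
assert abs(quad_form(L, [1, 1, 0, 0]) - 4.0) < 1e-9
```

The last assertion illustrates why spectral sparsification (all $x \in \mathbb{R}^V$) is at least as strong as cut sparsification ($x \in \{0,1\}^V$).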
Restricting the above definition only to vectors $x \in \{0,1\}^V$, one obtains cut sparsifiers. For a graph $G$ with $n$ vertices and $m$ edges, Spielman and Teng gave the first algorithm for constructing spectral sparsifiers with $\widetilde{O}(n/\epsilon^2)$ edges (the $\widetilde{O}(\cdot)$ notation hides $\mathrm{polylog}(n)$ factors). Spielman and Srivastava [SS11] proved that one could construct a sparsifier with $O(n \log n / \epsilon^2)$ edges for $G$ by independently sampling edges with probabilities proportional to their leverage scores in $G$. Finally, Batson, Spielman, and Srivastava [BSS12] proved that one could construct sparsifiers with $O(n/\epsilon^2)$ edges, and that this is optimal even for constructing sparsifiers for the complete graph. Recently, Carlson et al. [CKST17] proved a more general lower bound, showing that one needs $\Omega(n \log n / \epsilon^2)$ bits to store any data structure that can approximately compute the sizes of all cuts in $G$.
Given the tight upper and lower bounds, it is natural to guess at this point that our understanding of graph sparsification is essentially complete. However, numerous recent works have, surprisingly, brought to attention several aspects that we do not yet seem to understand.
Are our bounds tight if we relax the requirement in Equation (1) to hold only for a fixed unknown vector $x$, with high probability? Andoni et al. [ACK16] define such an object to be a spectral sketch. They also construct a data structure (not a graph) with $\widetilde{O}(n/\epsilon^{1.6})$ space that is a spectral sketch for vectors $x \in \{0,1\}^V$, even though $\Omega(n/\epsilon^2)$ space is a lower bound if one must answer correctly for all such $x$. Building on their work, Jambulapati and Sidford [JS18] showed how to construct such data structures that can answer queries for any fixed $x \in \mathbb{R}^V$ with high probability. A natural question remains open: do there exist graphs that are spectral sketches with $o(n/\epsilon^2)$ edges?
What if we only want to preserve the effective resistances between all pairs of vertices? (The effective resistance between a pair $u, v$ is the voltage difference between $u$ and $v$ if we consider the graph as an electrical network, where every edge of weight $w_e$ is a resistor of resistance $1/w_e$, and we send one unit of current from $u$ to $v$.) Dinitz, Krauthgamer, and Wagner [DKW15] define such a graph as a resistance sparsifier of $G$, and show their existence for regular expanders with sufficiently high degree. They conjecture that every graph admits an $\epsilon$-resistance sparsifier with $\widetilde{O}(n/\epsilon)$ edges.
An $\epsilon$-spectral sparsifier preserves weighted vertex degrees up to a factor of $e^{\epsilon}$. Do there exist spectral sparsifiers that exactly preserve weighted degrees? Dinitz et al. [DKW15] also explicitly pose a related question: does every dense regular expander contain a sparse regular expander?
What about sparsification for directed graphs? The above sparsification notions and algorithms are difficult to generalize to directed graphs. Cohen et al. [CKP16] developed a notion of sparsification for Eulerian directed graphs (directed graphs with all vertices having in-degree equal to out-degree), and gave the first almost-linear time algorithms for building such sparsifiers (an algorithm is said to be almost-linear time if it runs in $m^{1+o(1)}$ time on graphs with $m$ edges). However, their algorithm is based on expander decomposition, and is not as versatile as the importance-sampling based sparsification of undirected graphs [SS11]. Is there an easier approach to sparsifying Eulerian directed graphs?
Could the above improved guarantees lead to even faster algorithms for some of these problems? Two problems of significant interest include estimating determinants [DKP17] and sampling random spanning trees [DPPR17, Sch17].
1.1 Our Contributions
In this paper, we develop a framework for graph sparsification based on a new graph-theoretic tool we call short cycle decomposition. Informally, a short cycle decomposition of a graph is a decomposition of its edges into a sparse subgraph and a collection of short edge-disjoint cycles. We use our framework to give affirmative answers to all the challenges in graph sparsification discussed in the previous section. Specifically:
We show that every graph $G$ has a graph $H$ with $\widetilde{O}(n/\epsilon)$ edges that is an $\epsilon$-spectral sketch for $G$. The existence of such graphical spectral sketches was not known before. Moreover, we give an almost-linear time algorithm to construct an $\epsilon$-spectral sketch with $\widetilde{O}(n/\epsilon)$ edges. In addition, $H$ also sketches the quadratic form of the pseudoinverse: $x^\top L_H^{+} x \approx_\epsilon x^\top L_G^{+} x$ for any fixed $x$, with high probability.
We show every graph has an $\epsilon$-resistance sparsifier with $\widetilde{O}(n/\epsilon)$ edges, affirmatively answering the question raised by Dinitz et al. [DKW15]. We also give an almost-linear time algorithm to construct $\epsilon$-resistance sparsifiers with $\widetilde{O}(n/\epsilon)$ edges.
We show that every graph has an $\epsilon$-spectral sparsifier with $\widetilde{O}(n/\epsilon^2)$ edges that exactly preserves the weighted degrees of all vertices. It follows that every dense regular expander contains a sparse (weighted) regular expander. Before our work, it was not known whether there exist sparse degree-preserving sparsifiers (even for cut sparsifiers).
We show that short cycle decompositions can be used for constructing sparse spectral approximations of Eulerian directed graphs, under the notion of spectral approximation given by Cohen et al. [CKP16] (see Section 3.2 for the definition). We show that short cycle decompositions suffice for sparsifying Eulerian directed graphs, and prove that every Eulerian directed graph has a spectral approximation with $\widetilde{O}(n/\epsilon^2)$ edges.
We build on our spectral sketches to give an algorithm for estimating the effective resistances of all edges, up to a factor of $1 \pm \epsilon$, in $m^{1+o(1)} \epsilon^{-1.5}$ time. The previous best results for this problem were $\widetilde{O}(m/\epsilon^2)$ [SS11] and $\widetilde{O}(n^2/\epsilon)$ [JS18].
Incorporating this result into the work of Durfee et al. [DPPR17] gives an $n^{15/8+o(1)} \epsilon^{-7/4}$ time algorithm for approximating the determinant of a graph Laplacian (rather, of $L_G$ after deleting the last row and column, which equals the number of spanning trees in the graph), up to a factor of $1 \pm \epsilon$. The previous best algorithm for this problem ran in $\widetilde{O}(n^2/\epsilon^2)$ time [DKP17].
As a key component of all our results, we present efficient algorithms for constructing short cycle decompositions. From a bird's eye view, the key advantage provided by short cycle decompositions for all the above results is that they allow us to sample edges in a coordinated manner, so as to preserve weighted vertex degrees exactly.
An $(\hat{m}, L)$-short cycle decomposition of an unweighted undirected graph $G$ decomposes $G$ into several edge-disjoint cycles, each of length at most $L$, and at most $\hat{m}$ edges not in these cycles.
The existence of such a decomposition with $\hat{m} = 2n$ and $L = 2\log n$ is a simple observation. We repeatedly remove vertices of degree at most 2 from the graph, along with their incident edges (removing at most $2n$ edges in total). If the remaining graph had no cycle of length at most $2\log n$, a breadth-first search tree of depth $\log n$ starting from any remaining vertex would contain more than $n$ vertices, a contradiction. This can be implemented as an $O(mn)$ time algorithm to find a $(2n, 2\log n)$-short cycle decomposition, which in turn implies a similar running time for all the existential results above. Finding such decompositions faster is a core component of this paper: we give an algorithm that constructs an $(n^{1+o(1)}, n^{o(1)})$-short cycle decomposition of a graph in $m^{1+o(1)}$ time.
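The peeling argument above can be turned into runnable (if unoptimized) code. The sketch below, with names of our own choosing, repeatedly moves edges incident to degree-at-most-2 vertices into the leftover set, then extracts a cycle via BFS parent pointers. It illustrates the decomposition but does not explicitly enforce the $2\log n$ length bound, which follows from the argument above once the minimum degree is at least 3:

```python
# Sketch of the naive short cycle decomposition: peel low-degree
# vertices, then repeatedly peel off a cycle found by BFS.
from collections import deque

def short_cycle_decomposition(n, edges):
    """Returns (cycles, leftover): edge-disjoint cycles (as vertex lists)
    plus the edges removed while peeling degree <= 2 vertices."""
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cycles, leftover = [], []

    def peel():  # remove vertices of degree <= 2 until min degree >= 3
        stack = [u for u in adj if len(adj[u]) <= 2]
        while stack:
            u = stack.pop()
            for v in list(adj[u]):
                adj[u].discard(v)
                adj[v].discard(u)
                leftover.append((u, v))
                if len(adj[v]) <= 2:
                    stack.append(v)

    peel()
    while True:
        src = next((u for u in adj if adj[u]), None)
        if src is None:
            return cycles, leftover
        parent = {src: None}
        q = deque([src])
        cyc = None
        while q and cyc is None:
            u = q.popleft()
            for v in adj[u]:
                if v == parent[u]:
                    continue
                if v in parent:  # non-tree edge: closes a cycle
                    up, x = [], u
                    while x is not None:
                        up.append(x)
                        x = parent[x]
                    vp, x = [], v
                    while x is not None:
                        vp.append(x)
                        x = parent[x]
                    common = set(up) & set(vp)
                    i = next(k for k, y in enumerate(up) if y in common)
                    j = next(k for k, y in enumerate(vp) if y in common)
                    cyc = up[:i + 1] + vp[:j][::-1]
                    break
                parent[v] = u
                q.append(v)
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):  # delete the cycle's edges
            adj[a].discard(b)
            adj[b].discard(a)
        cycles.append(cyc)
        peel()
```

For example, on the complete graph $K_5$ the routine returns edge-disjoint cycles plus leftover edges that together partition all 10 edges.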
Organization. To keep this section brief, we defer the formal definitions and theorem statements to the overview of the work (Section 3), after defining a few necessary preliminaries in Section 2. We start with degree-preserving spectral sparsifiers in Section 4, and then give the algorithm for sparsification of Eulerian directed graphs (Section 5). Next, we present the construction of spectral-sketches and resistance sparsifiers in Section 6, followed by our algorithm for estimating effective resistances for all edges in Section 7. Finally, we give our almost-linear time algorithm for constructing a short cycle decomposition in Section 8.
A square symmetric matrix $M$ is positive semi-definite (PSD), denoted $M \succeq 0$, if for all $x$ we have $x^\top M x \ge 0$. For two matrices $A, B$, we write $A \succeq B$ if $A - B \succeq 0$, or equivalently, $x^\top A x \ge x^\top B x$ for all $x$.
For $\epsilon \ge 0$ and two positive real numbers $a, b$, we write $a \approx_\epsilon b$ to express $e^{-\epsilon} b \le a \le e^{\epsilon} b$. Observe that $a \approx_\epsilon b$ if and only if $b \approx_\epsilon a$. For two PSD matrices $A, B$, we write $A \approx_\epsilon B$ if $x^\top A x \approx_\epsilon x^\top B x$ for all $x$.
For any PSD matrices $A$, $B$, and $C$, if we have $A \approx_{\epsilon_1} B$ and $B \approx_{\epsilon_2} C$, then $A \approx_{\epsilon_1 + \epsilon_2} C$.
For two graphs $G$ and $H$, we often abuse notation to write $G \approx_\epsilon H$ to mean $L_G \approx_\epsilon L_H$, and $G \succeq H$ to mean $L_G \succeq L_H$.
For any PSD matrix $M$, we let $M^{+}$ denote the Moore-Penrose pseudoinverse of $M$. Thus, if $M$ has nonzero eigenvalues $\lambda_1, \ldots, \lambda_r$ with corresponding unit-norm eigenvectors $u_1, \ldots, u_r$, we have $M = \sum_{i=1}^{r} \lambda_i u_i u_i^\top$ and $M^{+} = \sum_{i=1}^{r} \lambda_i^{-1} u_i u_i^\top$. Similarly, we have $M^{1/2} = \sum_{i=1}^{r} \lambda_i^{1/2} u_i u_i^\top$ and $M^{+/2} = \sum_{i=1}^{r} \lambda_i^{-1/2} u_i u_i^\top$.
Our notion of approximation is preserved under pseudoinverses:
For any PSD matrices $A, B$ with the same null space, and any error $\epsilon \ge 0$, we have $A \approx_\epsilon B$ if and only if $A^{+} \approx_\epsilon B^{+}$.
For any vertex $u$, we let $e_u$ denote the vector whose $u$-th coordinate is 1 and all other coordinates are 0. We let $b_{uv} = e_u - e_v$. For any edge $(u,v)$ in a connected graph $G$, the effective resistance of $(u,v)$ is defined as $R_{\mathrm{eff}}(u,v) = b_{uv}^\top L_G^{+} b_{uv}$. For a directed graph $\vec{G}$, its directed Laplacian $L_{\vec{G}}$ can be defined analogously, with each directed edge of weight $w$ contributing $w$ to the diagonal entry of its tail and $-w$ to one off-diagonal entry (see Section 5 for the precise convention).
All logarithms throughout the paper are with base 2. Unless mentioned otherwise, we assume that our input graph $G$ has $m$ edges and $n$ vertices. Throughout the paper, we consider graphs with positive integral weights on the edges. Whenever we say the weights are poly-bounded, we assume they are bounded by $\mathrm{poly}(n)$. The expression with high probability means with probability larger than $1 - 1/\mathrm{poly}(n)$.
There are four major approaches to date towards graph sparsification: expander partitioning [ST11b, ACK16, JS18], importance sampling [BK96, SS11, KLP12], potential function based [BSS12, ALO15, LS15, LS17], and spanner based, which uses sampling via matrix concentration [KP12, Kou14, KPPS17]. A survey of these approaches can be found in [BSST13].
We present a framework for graph sparsification built on short cycle decompositions that merges several ideas from the importance-sampling and spanner-based approaches. Before giving an overview of the results in our paper, we first present an alternative algorithm for the classic graph sparsification result of Spielman and Srivastava [SS11]. This will be quite useful, since our algorithms for constructing degree-preserving sparsifiers and for sparsifying Eulerian directed graphs build immediately on this algorithm, and degree-preserving sparsification is a key idea underlying all our remaining results.
Say we have a graph $G$ with $n$ vertices and $m$ edges. We start by expressing $L_G = \sum_{e} w_e b_e b_e^\top$, where $b_e = e_u - e_v$ for edge $e = (u, v)$. We can re-write this as
$$\sum_{e} w_e \left( L_G^{+/2} b_e \right) \left( L_G^{+/2} b_e \right)^\top = \Pi,$$
where $\Pi$ is the projection orthogonal to the all-ones vector. Given a subset $F$ of edges, we draw a random graph $\widetilde{G}$ as follows: independently for every edge $e \in F$, we include it in $\widetilde{G}$ with probability $1/2$ and weight $2 w_e$; otherwise, we delete the edge. All edges not in $F$ are included in $\widetilde{G}$ with their original weight. Observe that the expectation of $L_{\widetilde{G}}$ is $L_G$.
It follows from standard concentration results for sums of matrix random variables that if for each edge $e \in F$ the norm $w_e \, b_e^\top L_G^{+} b_e$ is bounded (up to constants) by $\epsilon^2/\log n$, then with high probability, $\widetilde{G} \approx_\epsilon G$.
Now, observe that $w_e \, b_e^\top L_G^{+} b_e = w_e \cdot R_{\mathrm{eff}}(e)$ (this is defined as the leverage score of the edge $e$). A simple trace argument implies that the leverage scores sum to $n - 1$, and hence at least half the edges have leverage score at most $2n/m$. Letting these edges with low leverage score be the set $F$ of edges we toss random coins for, we obtain that with high probability $\widetilde{G} \approx_\epsilon G$, provided $m \ge \Omega(n \log n / \epsilon^2)$. Moreover, in expectation, $\widetilde{G}$ has at most $\frac{3}{4} m$ edges.
We can repeat the above sparsification roughly $\log m$ times, at each step sparsifying the graph just obtained, to go down to $\widetilde{O}(n/\epsilon^2)$ edges. By Fact 2.1, the final approximation error is given by the sum of the errors at each sparsification step. Since the number of edges is going down geometrically, the error per step is increasing geometrically, and hence the total is dominated by the error at the last step, yielding that the final graph is an $O(\epsilon)$-spectral sparsifier for $G$.
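As an illustration, here is a self-contained (and deliberately slow, dense-algebra) sketch of one such halving step. The real algorithm uses fast approximate resistance estimates; here we compute exact leverage scores by Gaussian elimination on a grounded Laplacian. All names are ours:

```python
# Sketch: one edge-halving step of importance sampling, with exact
# leverage scores computed by dense Gaussian elimination.
import random

def laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        L[u][u] += w; L[v][v] += w
        L[u][v] -= w; L[v][u] -= w
    return L

def solve_grounded(L, b):
    """Solve L y = b with y[n-1] fixed to 0, by Gauss-Jordan elimination
    on the reduced (n-1) x (n-1) system."""
    n = len(L)
    m = n - 1
    A = [L[i][:m] + [b[i]] for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col and A[r][col] != 0.0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[i][m] / A[i][i] for i in range(m)] + [0.0]

def leverage_score(n, edges, e_idx):
    u, v, w = edges[e_idx]
    b = [0.0] * n
    b[u], b[v] = 1.0, -1.0
    y = solve_grounded(laplacian(n, edges), b)
    return w * (y[u] - y[v])  # w_e * R_eff(e)

def halving_step(n, edges, rng):
    """Keep high-leverage edges; flip a fair coin (doubling the weight
    of survivors) for the low-leverage ones."""
    taus = [leverage_score(n, edges, i) for i in range(len(edges))]
    thresh = 2.0 * sum(taus) / len(edges)
    out = []
    for (u, v, w), t in zip(edges, taus):
        if t > thresh:
            out.append((u, v, w))
        elif rng.random() < 0.5:
            out.append((u, v, 2.0 * w))
    return out

# On K4, each edge has leverage score 1/2, and they sum to n - 1 = 3.
k4 = [(i, j, 1.0) for i in range(4) for j in range(i + 1, 4)]
taus = [leverage_score(4, k4, i) for i in range(len(k4))]
assert abs(sum(taus) - 3.0) < 1e-6
sparser = halving_step(4, k4, random.Random(0))
```

The trace identity checked at the end (leverage scores sum to $n-1$) is exactly the averaging argument used in the text.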
In order to implement this algorithm efficiently, we need to estimate effective resistances for the edges. For the above algorithm, constant-factor estimates of the effective resistances suffice (at the cost of changing the constants involved). Spielman and Srivastava [SS11] showed that one can obtain constant-factor estimates for all the edges together in $\widetilde{O}(m)$ time, resulting in a total running time of $\widetilde{O}(m)$ for the above sparsification algorithm.
3.1 Degree Preserving Spectral Sparsification
Now, we adapt the above algorithm to leverage a short-cycle decomposition of the graph. Short cycles permit us to sample correlated edges in the graph while still keeping each random sample small in spectral norm. We first use this approach to construct degree-preserving spectral sparsifiers.
We first formally define a degree-preserving sparsifier.
Definition 3.1 (Degree-Preserving Sparsifier).
A graph $H$ is said to be a degree-preserving $\epsilon$-sparsifier of $G$ if
1. $H \approx_\epsilon G$, and
2. every vertex has the same weighted degree in $G$ and $H$, i.e., for every vertex $u$, $\sum_v w^{H}_{uv} = \sum_v w^{G}_{uv}$.
Given the above algorithm for usual graph sparsification, the main obstacle is that at each sparsification step, the weighted degrees are not preserved. This is where we require our key tool, a short cycle decomposition, which we now formally define.
For an undirected unweighted graph $G$, we say that a collection of edge-disjoint cycles $C_1, \ldots, C_t$ in $G$ is an $(\hat{m}, L)$-short cycle decomposition if each $C_i$ is a cycle of length at most $L$, and at most $\hat{m}$ edges of $G$ are not in any of the cycles.
Assuming that we have an efficient algorithm for constructing an $(\hat{m}, L)$-short cycle decomposition of any given graph, we show the following theorem.
Given an $(\hat{m}, L)$-short cycle decomposition algorithm, every undirected graph with poly-bounded weights has a degree-preserving $\epsilon$-sparsifier with $\widetilde{O}(n/\epsilon^2)$ edges. The algorithm DegreePreservingSparsify, given our short cycle decomposition algorithm, takes in a graph $G$, runs in almost-linear time, and returns a degree-preserving $\epsilon$-sparsifier of $G$ with $\widetilde{O}(n/\epsilon^2)$ edges.
The following is a brief description of our degree-preserving sparsification algorithm.
Assume first that our graph $G$ is an unweighted graph given to us as a union of edge-disjoint cycles of even length. We sample a random graph $\widetilde{G}$ as follows: for each cycle independently, we index the edges in order, starting from an arbitrary vertex, and perform the following correlated sampling procedure: with probability $1/2$ we keep only the even-indexed edges with weight 2, and with probability $1/2$ we keep only the odd-indexed edges with weight 2 (see Figure 1). Observe that $\widetilde{G}$ has half as many edges as $G$, and exactly the same weighted degrees as $G$. In order to apply matrix concentration, we need to ensure that for each cycle $C$, the norm $\| L_G^{+/2} L_C L_G^{+/2} \|_2$ is at most (up to constants) $\epsilon^2/\log n$, where $L_C$ is the Laplacian of the cycle $C$. This norm is easily upper bounded by the total leverage score of the edges of $C$.
If instead $G$ is an arbitrary unweighted graph, we move all the edges with leverage score more than twice the average to $\widetilde{G}$. Again, by averaging, we still have at least $m/2$ edges remaining. Now, we greedily pick a bipartition of the vertices of $G$ such that at least half the remaining edges cross the cut. We add all the non-crossing edges to $\widetilde{G}$. Now, we utilize an $(\hat{m}, L)$-short cycle decomposition of the graph on the crossing edges. Thus, all but $\hat{m}$ of these edges are partitioned into cycles of length at most $L$. Observe that since all the cycle edges cross the bipartition, the cycles must be even; at least $m/4 - \hat{m}$ edges lie in such even cycles, each cycle with total leverage score bounded by $2nL/m$. Now, independently for each cycle, we pick even or odd edges with probability $1/2$ each and add them to $\widetilde{G}$ with weight 2. Assuming $m$ is sufficiently large relative to $nL\log n/\epsilon^2$, the resulting graph $\widetilde{G}$ has a constant fraction fewer edges (plus the $\hat{m}$ leftover edges), the same weighted degrees as $G$, and satisfies $\widetilde{G} \approx_\epsilon G$ with high probability.
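The correlated even/odd sampling step can be sketched as follows (illustrative names; the algorithm applies this independently to every even cycle of the decomposition), and one can check directly that weighted degrees are preserved exactly:

```python
# Sketch: correlated sampling on an even cycle. Keep either the even-
# or the odd-indexed edges, each with probability 1/2, at weight 2.
import random

def sample_even_cycle(cycle_vertices, rng):
    """cycle_vertices: [v0, ..., v_{k-1}] with k even; the cycle's edges
    are (v_i, v_{i+1 mod k}). Returns the kept weighted edges."""
    k = len(cycle_vertices)
    assert k % 2 == 0, "correlated sampling needs an even cycle"
    parity = rng.randrange(2)
    return [(cycle_vertices[i], cycle_vertices[(i + 1) % k], 2.0)
            for i in range(k) if i % 2 == parity]

def weighted_degrees(n, edges):
    deg = [0.0] * n
    for u, v, w in edges:
        deg[u] += w
        deg[v] += w
    return deg

# Demo on a 6-cycle: every vertex keeps weighted degree exactly 2.
cyc = list(range(6))
orig = [(i, (i + 1) % 6, 1.0) for i in range(6)]
samp = sample_even_cycle(cyc, random.Random(1))
assert weighted_degrees(6, orig) == weighted_degrees(6, samp)
```

Each vertex of an even cycle is incident to exactly one even-indexed and one odd-indexed edge, which is why either choice preserves its weighted degree.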
Note that re-framing the original sparsification as an algorithm for reducing the number of edges by a constant fraction is crucial here: we are only able to reduce the number of edges in a cycle by half. Further, the cycle decomposition of the graph will necessarily change with every iteration, and must be recomputed.
For starting with a weighted graph $G$ with poly-bounded weights, we can use the binary representation of the edge weights to split each edge into $O(\log n)$ edges, each with a weight that is a power of 2. Now, repeating the above procedure as before, we can construct a degree-preserving $\epsilon$-sparsifier for $G$ with roughly $\widetilde{O}(nL/\epsilon^2)$ edges. Using the naive length-$O(\log n)$ short cycle decomposition, this gives roughly $\widetilde{O}(n/\epsilon^2)$ edges.
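The weight-splitting step is elementary: an edge of integral weight $w$ becomes one parallel edge per set bit of $w$, hence at most $\lfloor \log_2 w \rfloor + 1$ parallel edges, whose weights sum back to $w$. A sketch:

```python
def split_by_binary(w):
    """Split integer weight w >= 1 into powers of two, one per set bit
    of its binary representation."""
    assert w >= 1
    return [1 << i for i in range(w.bit_length()) if (w >> i) & 1]

# An edge of weight 13 = 0b1101 becomes parallel edges 1, 4, and 8.
assert split_by_binary(13) == [1, 4, 8]
assert sum(split_by_binary(1000)) == 1000
```

Since each weight class is a power of 2, the unweighted procedure can then be run on each class separately.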
3.2 Sparsification of Eulerian Directed Graphs
Now, we can take a very similar approach to sparsifying Eulerian directed graphs. This is a primitive introduced in [CKP16], and is at the core of recent developments in fast solvers for linear systems in directed Laplacians [CKP16, CKP17, Kyn17]. In contrast to undirected graphs, it has been significantly more challenging to give an appropriate notion of approximation for directed graphs (see Section 5 for the definition of the Laplacian of a directed graph $\vec{G}$). Cohen et al. [CKP17] showed that for the purpose of solving linear systems in Eulerian directed graphs, one such useful notion is to say $\vec{H}$ $\epsilon$-approximates $\vec{G}$ if
$$\left\| L_{G}^{+/2} \left( L_{\vec{H}} - L_{\vec{G}} \right) L_{G}^{+/2} \right\|_2 \le \epsilon,$$
where $G$ is the undirectification of $\vec{G}$, i.e., the underlying undirected graph of $\vec{G}$ with edge weights halved. In the case where $\vec{G}$ is Eulerian, $L_G = \frac{1}{2} \left( L_{\vec{G}} + L_{\vec{G}}^\top \right)$.
The key obstacle in sparsifying Eulerian directed graphs is to sample directed subgraphs that remain Eulerian, since independent sampling cannot provide such precise control on the degrees. The work of Cohen et al. [CKP17] fixed this locally by modifying the diagonal of the sampled Laplacian in order to make the sampled graph Eulerian. This approach induces an error on the order of the change to the diagonal, measured against the diagonal out-degree matrix of $\vec{G}$. In order for this error to be small relative to $L_G$, the graph must be an expander. Hence the need for expander partitioning in their approach.
However, as we saw above, a short cycle decomposition allows us to perform correlated sampling on edges with precise control on the degrees. For sampling directed graphs, consider a single cycle where the edges may have arbitrary directions (see Figure 2). With probability $1/2$ we keep only the edges pointing in the clockwise direction, and with probability $1/2$ we keep only the edges pointing in the anti-clockwise direction. In either case, we double the weights of the kept edges. Observe that for each vertex, the difference between its outgoing and incoming degrees is preserved exactly. Hence, if we started with an Eulerian directed graph, we end up with an Eulerian directed graph. Moreover, in expectation, we only keep half the edges of the cycle.
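The direction-respecting analogue of the even/odd sampling can be sketched as follows. One fair coin is flipped per cycle (exposed here as the `clockwise` argument, which the algorithm would set by an unbiased coin flip), and the per-vertex difference of weighted out- and in-degrees is preserved exactly. Names are illustrative:

```python
# Sketch: correlated sampling on a directed cycle with arbitrarily
# oriented arcs. Keep, at weight 2, either the arcs agreeing with the
# cycle's traversal order ("clockwise") or the arcs opposing it.

def sample_directed_cycle(arcs, cycle_order, clockwise):
    """arcs: list of (u, v) whose underlying edges form the cycle
    cycle_order[0] - cycle_order[1] - ... - cycle_order[-1] - back."""
    k = len(cycle_order)
    pos = {v: i for i, v in enumerate(cycle_order)}
    kept = []
    for u, v in arcs:
        agrees = pos[v] == (pos[u] + 1) % k  # arc follows traversal order
        if agrees == clockwise:
            kept.append((u, v, 2.0))
    return kept

def net_degrees(n, arcs):
    """Weighted out-degree minus in-degree, per vertex."""
    net = [0.0] * n
    for a in arcs:
        u, v = a[0], a[1]
        w = a[2] if len(a) > 2 else 1.0
        net[u] += w
        net[v] -= w
    return net

# A 4-cycle with mixed orientations: both coin outcomes preserve the
# out-minus-in degree of every vertex exactly.
arcs = [(0, 1), (2, 1), (2, 3), (0, 3)]
for cw in (True, False):
    kept = sample_directed_cycle(arcs, [0, 1, 2, 3], cw)
    assert net_degrees(4, kept) == net_degrees(4, arcs)
```

Since every vertex's out-minus-in degree is unchanged, an Eulerian input (all such differences zero) yields an Eulerian sample.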
We can now basically follow the algorithm for degree-preserving sparsification. We treat the graph as undirected for all steps except sampling edges from a cycle. In particular, the cycle decomposition is found in the corresponding undirected graph. Using the above approach for sampling edges from each cycle, we obtain an Eulerian directed graph that has a constant fraction fewer edges in expectation. Since the matrices involved are no longer symmetric, we invoke concentration bounds for rectangular matrices to obtain $\left\| L_{G}^{+/2} \left( L_{\vec{H}} - L_{\vec{G}} \right) L_{G}^{+/2} \right\|_2 \le \epsilon$ with high probability.
Now, repeating this sparsification procedure, and observing that this notion of approximation error also composes, we obtain an Eulerian directed graph that $\epsilon$-approximates $\vec{G}$ with roughly $\widetilde{O}(nL/\epsilon^2)$ edges. Again, using the naive cycle decomposition, this is $\widetilde{O}(n/\epsilon^2)$ edges.
Given our short cycle decomposition algorithm, for every Eulerian directed graph $\vec{G}$ we can find, in almost-linear time, an Eulerian directed graph $\vec{H}$ with $\widetilde{O}(n/\epsilon^2)$ edges that $\epsilon$-approximates $\vec{G}$.
This shows the existence of sparsifiers for Eulerian graphs with fewer edges than the nearly-linear sized ones constructed by Cohen et al. [CKP17]. More importantly, it shows that approaches based on importance sampling, which work well on undirected graphs, can work in the more general directed setting as well. However, the high cost of computing short cycle decompositions in this paper means this does not yet lead to faster asymptotic running times in the applications; we believe this is an interesting direction for future work.
3.3 Graphical Spectral Sketches and Resistance Sparsifiers
We define a graphical spectral sketch as follows:
Definition 3.5 (Graphical Spectral Sketch).
Given a graph $G$, a distribution $\mathcal{H}$ over random graphs is said to be a graphical $\epsilon$-spectral sketch for $G$ if, for any fixed vector $x$, with high probability over the sample $H \sim \mathcal{H}$, we have $x^\top L_H x \approx_\epsilon x^\top L_G x$.
For constructing graphical spectral sketches, we closely follow the approach of Jambulapati and Sidford [JS18] and Andoni et al. [ACK16]. However, to construct sketches which are graphical, we use an approach similar to the degree-preserving sparsification algorithm. Our result is as follows:
Given our short cycle decomposition algorithm, every undirected graph $G$ with $n$ vertices and $m$ edges has a graphical $\epsilon$-spectral sketch with $\widetilde{O}(n/\epsilon)$ edges. The algorithm SpectralSketch, given $G$, runs in almost-linear time and with high probability returns a graphical $\epsilon$-spectral sketch of $G$ with $\widetilde{O}(n/\epsilon)$ edges. In addition, both these graphical sketches satisfy that for any fixed $x$, with high probability over the sample $H$, we have $x^\top L_H^{+} x \approx_\epsilon x^\top L_G^{+} x$, where $L^{+}$ denotes the Moore-Penrose pseudoinverse of $L$.
The key idea in [JS18] and [ACK16] is to focus on an expander $G$, and for each vertex $u$ with degree $d_u$, sample roughly $\epsilon^{-1}$ of its incident edges and add them to $H$ after scaling their weights up accordingly (if $d_u$ is below this threshold, we add all the edges of $u$ to $H$), for a total of $\widetilde{O}(n/\epsilon)$ edges. Firstly, observe that this means there will be vertices whose weighted degree changes by more than a $1 \pm \epsilon$ factor. This is not good enough to preserve the quadratic form up to $1 \pm \epsilon$, even for the vectors $x = e_u$. They get around this by explicitly storing the diagonal degree matrix $D_G$ of $G$, using $O(n)$ extra space. For a fixed vector $x$, they consider the estimator $x^\top D_G x - x^\top A_H x$, where $A_H$ is the adjacency matrix of the sampled graph. Its expectation is easily seen to be $x^\top L_G x$.
They prove that its standard deviation is bounded in terms of the quadratic form $x^\top D_G x$. For an expander $G$ with conductance $\phi$, Cheeger's inequality (see Lemma 6.5) gives that $x^\top D_G x \le O(\phi^{-2}) \cdot x^\top L_G x$ for vectors $x$ in the relevant subspace, so the standard deviation is small relative to $x^\top L_G x$.
In order to construct an estimator for general graphs, they invoke expander partitioning [ST11b], which guarantees that in any graph $G$ we can find disjoint vertex-induced pieces such that each piece is contained in an expander (a well-connected subgraph, formally defined in Section 6.2), and at least half the edges are contained within such pieces. Applying this $O(\log n)$ times recursively, and combining the above estimators for each piece, Jambulapati and Sidford [JS18] obtain an estimator with standard deviation $O(\epsilon) \cdot x^\top L_G x$.
The above sketch is not a graph, since sampling edges does not preserve degrees exactly. Hence, our degree-preserving sparsification algorithm presents a natural approach to converting it into a graphical sketch. We aim to reduce the edge count by a constant factor without incurring too much variance in the quadratic form (and then repeat this process $O(\log n)$ times). We apply expander decomposition, and within each piece, add all edges incident on low-degree vertices to $\widetilde{G}$. On the remaining graph, as before, we find a bipartition and a cycle decomposition, and independently pick odd/even edges in the cycles with double the weight. This reduces the number of edges by a constant factor. Since we preserve weighted degrees exactly, an analysis similar to the above gives that for a fixed vector $x$, the standard deviation in $x^\top L_{\widetilde{G}} x$ is small relative to $x^\top L_G x$. Repeating this process $O(\log n)$ times gives us a graph with $\widetilde{O}(n/\epsilon)$ edges. Averaging several such sketches, and applying concentration bounds, we obtain a graphical $\epsilon$-spectral sketch of $G$.
The fact that we have a graph $H$ allows us to reason about the quadratic form of its pseudoinverse $L_H^{+}$. We first argue that $H$ is an $O(1)$-spectral sparsifier of $G$ by showing that the probabilities with which we sample edges to form $H$ are good upper bounds on (appropriate rescalings of) effective resistances. This follows because any edge whose endpoints have large degrees, and which is contained in an expander with large expansion, has small effective resistance.
A simple, but somewhat surprising, argument (Lemma 6.8) gives that if $H$ is a graphical $\epsilon$-spectral sketch for $G$, and also an $O(1)$-spectral sparsifier of $G$, then for any fixed vector $x$, with high probability, it also preserves the quadratic form of the pseudoinverse, i.e., $x^\top L_H^{+} x \approx_{O(\epsilon)} x^\top L_G^{+} x$.
Picking $x = e_u - e_v$ for every pair of vertices $u, v$, and taking a union bound, we obtain that with high probability, the effective resistance between every pair is preserved up to a $1 \pm O(\epsilon)$ factor. This means that $H$ is a resistance sparsifier for $G$ with high probability. Again, the naive cycle decomposition gives the existence of resistance sparsifiers with $\widetilde{O}(n/\epsilon)$ edges.
Corollary 3.7 (Resistance Sparsifiers).
Every undirected graph $G$ on $n$ vertices has an $\epsilon$-resistance sparsifier with $\widetilde{O}(n/\epsilon)$ edges. The algorithm SpectralSketch, given $G$, runs in almost-linear time and with high probability returns an $\epsilon$-resistance sparsifier of $G$ with $\widetilde{O}(n/\epsilon)$ edges.
3.4 Estimating Effective-Resistances
The effective resistance of an edge is a fundamental quantity. It and its extensions have a variety of connections in the analysis of networks [SM07, Sar10], combinatorics [Lov93, DFGX18] and the design of better graph algorithms [CKM11, MST15, Sch17].
While the effective resistance of an edge can be computed to high accuracy using linear system solvers, doing so for all edges leads to a quadratic running time. On the other hand, the many algorithmic applications of resistances have motivated studies of efficient algorithms for estimating all resistances. There have been two main approaches for estimating effective resistances to date: random projections [SS11, KLP12] and recursive invocations of sparsified Gaussian elimination [DKP17]. Both of them lead to running times with an $\epsilon^{-2}$ dependence for producing estimates of the resistances of all edges of a graph.
A recent result by Musco et al. [MNS18] demonstrated the unlikelihood of high-accuracy algorithms (with an $\epsilon^{-\delta}$ dependency for some small constant $\delta > 0$) for estimating the resistances of all edges. On the other hand, the running time of the determinant estimation algorithm for Laplacians by Durfee et al. [DPPR17] hinges on this dependency. The running time bottleneck of this algorithm is the estimation of effective resistances of edges, but to a multiplicative error that is polynomially small in $n$. Both methods for estimating resistances described above [SS11, DKP17] give running times of $\widetilde{\Omega}(n^2)$ in this setting. Practical studies involving the random projection method for estimating resistances [Sar10, MGKT15] also demonstrate that the $\epsilon^{-2}$ factor in the runtime of such methods translates to solving $\Theta(\epsilon^{-2})$ linear systems for an $\epsilon$ error. Such high overhead has been a major limitation in applying effective resistances to analyzing networks.
A key advantage of our graphical sketches and resistance sparsifiers is that, because the resulting objects remain graphs, they can be substituted into the intermediate states of the sparsified Gaussian elimination approach for computing graph sparsifiers [DKP17]. That approach gives a reduction from computing effective resistances to computing approximate Schur complements, which are partial states of Gaussian elimination. Incorporating our spectral sketches in place of generic graph sparsification algorithms with $\epsilon^{-2}$ dependencies gives our main algorithmic result.
Given any undirected graph $G$ with $n$ vertices and $m$ edges, any set of vertex pairs, and any error $\epsilon$, we can with high probability compute $(1 \pm \epsilon)$-approximations to the effective resistances between all of these pairs in $m^{1+o(1)} \epsilon^{-1.5}$ time.
This is the first routine for estimating effective resistances on sparse graphs that obtains an $\epsilon$ dependence better than $\epsilon^{-2}$. In the dense case, an $\widetilde{O}(n^2/\epsilon)$ time result was shown by Jambulapati and Sidford [JS18], but it relies on $n$ linear system solves, one per column of the matrix.
We obtain this result via two key reductions:
The recursive approximate Gaussian elimination approach from [DKP17] utilizes the fact that effective resistances are preserved under Gaussian elimination. As this recursion has depth $O(\log n)$, our guarantees for $\epsilon$-spectral sketches imply that it suffices to work with sketches of the graphs produced by Gaussian elimination. However, the Schur complement of a very sparse graph, such as a star, may have $\Omega(n^2)$ edges. Even if we eliminate an independent set of vertices, each with roughly average degree, in our spectral sketches with $\widetilde{O}(n/\epsilon)$ edges, we may still end up with far more edges than we can afford. Thus, we need to directly compute spectral sketches of Schur complements without first constructing the dense graph explicitly.
The work of Kyng et al. [KLP16] builds fast solvers for Laplacian systems via approximate Cholesky factorization. As a key step, they reduce computing approximate Schur complements to implicitly sparsifying a sum of product-weighted cliques (a product-weighted clique on a vertex set $S$ has a weight vector $z$, with the edge $(u,v)$ having weight $z_u z_v$). Assuming we start with a spectral sketch, we know that the graph has total degree $\widetilde{O}(n/\epsilon)$; this implies that the total number of vertices involved in these product-weighted cliques is also $\widetilde{O}(n/\epsilon)$. Thus, our goal becomes designing an algorithm for implicitly building spectral sketches of product-weighted cliques that runs in time almost-linear in the total number of vertices of these cliques.
Our algorithm works with these weighted cliques in time dependent on their representation size, namely the total number of vertices, rather than the number of edges. We do so by working with bi-cliques, instead of edges, as the basic unit. Our algorithm then follows the expander-partitioning based scheme for producing spectral sketches, as in previous works on graph sketches with sub-$\epsilon^{-2}$ dependencies [ACK16, JS18]. This requires showing that the representation as bi-cliques interacts well with both weights and graph partitions. Then, on each well-connected piece, we sample matchings from each bi-clique.
This results in each vertex in the bi-clique representation contributing $\widetilde{O}(1/\epsilon)$ edges to the intermediate sketch. As we are running such routines on the outputs of spectral sketches, the total number of vertices in these cliques is $\widetilde{O}(n/\epsilon)$, giving a total edge count of $\widetilde{O}(n/\epsilon^2)$. On this graph, we can now explicitly compute another spectral sketch of size $\widetilde{O}(n/\epsilon)$.
An additional complication is that computing an expander decomposition using Lemma 6.7 requires examining all the edges of a graph, which in our case is cost-prohibitive. We resolve this by computing these decompositions on a sparse constant-error approximation of the graph instead.
Incorporating this spectral sketch of Schur complements back into [DPPR17] gives the first sub-quadratic time algorithm for estimating the determinant of a graph Laplacian with the last row and column removed. This value has a variety of natural interpretations, including the number of spanning trees in the graph. Note that while the determinant may be exponentially large, the result in [DPPR17] is stable with variable-precision floating point numbers.
Given any graph Laplacian $L$ on $n$ vertices and $m$ edges, and any error $\epsilon$, we can produce an estimate of the determinant of $L$ with the last row/column removed, up to a factor of $1 \pm \epsilon$, in $n^{15/8+o(1)} \epsilon^{-7/4}$ time.
Note that the removal of the last row/column is necessary and standard, due to $L$ being low rank. Details on this algorithm, and the specific connections with [DPPR17], are in Appendix B. We remark that this algorithm does not, however, speed up the random spanning tree generation portion of [DPPR17], due to its reliance on finer variance bounds that require sampling more edges. That spanning tree sampling algorithm, however, is superseded by the recent breakthrough result of Schild [Sch17].
3.5 Almost-Linear Time Short Cycle Decomposition
The bottleneck in the performance of all the algorithms outlined above is the computation of short cycle decompositions (Definition 1.1). The simple existence proof from Section 1.1 can be implemented to find a short cycle decomposition in $O(mn)$ time (see Section 8 for pseudocode and proof).
The algorithm NaiveCycleDecomposition, given an undirected unweighted graph $G$, returns a $(2n, 2\log n)$-short cycle decomposition of $G$ in $O(mn)$ time.
While the above algorithm gives us near-optimal cycle length and number of remaining edges (consider the wheel graph with $n$ spokes, each spoke replaced by a path of length $\log n$: this graph has $O(n \log n)$ vertices and edges, and girth $\Omega(\log n)$; moreover, Lubotzky, Phillips, and Sarnak [LPS88] constructed explicit Ramanujan graphs that are 4-regular, and hence have $2n$ edges, with girth $\Omega(\log n)$), we were unable to obtain an almost-linear time algorithm using shortest-path trees. The main obstacle is that updating shortest-path trees is expensive under edge deletions.
Possible Approaches via Spanners. Another approach is to try spanners. The existence of a short cycle decomposition is a direct consequence of spanners. A key result by Awerbuch and Peleg [AP90] states that every unweighted graph $G$ has a subgraph $H$ with $O(n)$ edges such that for every edge $(u,v)$ in $G$, its endpoints are within distance $O(\log n)$ in $H$. Thus, every edge of $G$ not in $H$ lies in a cycle of length $O(\log n)$. We can remove this cycle and repeat.
Thus, another approach for generating this decomposition is via dynamic, or even decremental, spanner data structures [BS08, BR11, BKS12]. While these data structures allow for polylogarithmic amortized time per update, they are randomized, and crucially, they only work against oblivious adversaries: the update sequence needs to be fixed before the data structure samples its randomness. To the best of our understanding, in each of these results, the choice of cycle edges would depend on the randomness, so their guarantees cannot be used for constructing short cycle decompositions. The only deterministic dynamic spanner algorithm we are aware of is the work of Bodwin and Krinninger [BK16]; however, it has overheads polynomial in $n$ in the spanner size and running time.
Possible Approaches via Oblivious Routings. Another possible approach to finding short cycles is via oblivious routings: a routing of an edge $e$ that does not use $e$ itself immediately gives a cycle involving $e$. Since there exist oblivious routings that route all edges of $G$ in $G$ with small congestion, the average length of the resulting cycles cannot be too large.
Recent works, especially those related to approximate maximum flow, have given several efficient constructions of oblivious routing schemes [Rac08, RST14, Mad10, She13, KLOS14, Pen16]. However, such routings only allow us to route a single commodity in nearly-linear time. Using current techniques, routing arbitrary demands on an expander with low congestion seems to require routing each demand separately, which is too slow. On the other hand, on more limited topologies, it is known how to route each demand in sub-linear time [Val82]. Such routings that use only local information have been studied as myopic routing [GSY17], but we are not aware of such results with provable guarantees.
Our Construction. As an important technical part of this paper, we give an almost-linear-time algorithm for constructing a short cycle decomposition of a graph.
The algorithm ShortCycleDecomposition, given an undirected unweighted graph $G$ with $n$ vertices and $m$ edges, returns a short cycle decomposition of $G$, with cycle length $n^{o(1)}$ and $n^{1+o(1)}$ remaining edges, in $m^{1+o(1)}$ time.
Our construction of short cycle decompositions is inspired by oblivious routings, and uses properties of random walks on expanders. It can be viewed as extending previous works that utilize the behavior of electrical flows [KM11, KLOS14], but we take advantage of the much more local nature of random walks. This use of random walks to reduce graphs to fewer vertices is in part motivated by their use in the construction of data structures for dynamically maintaining effective resistances, involving a subset of the authors [DGGP18]. It also has similarities with the leader election algorithm for connectivity on well-connected graphs in a recent independent work by Assadi et al. [ASW18].
Say we have an expander graph $G$ with conductance $\phi$. We know that random walks of length roughly $\phi^{-2} \log n$ mix well in $G$. Pick the set $S$ of vertices of largest degree whose total degree is at least half the total degree of $G$. For every edge leaving $S$, starting from its other endpoint we take a random walk of the mixing length. Since $S$ carries at least half the stationary measure, this random walk hits $S$ again with constant probability. Thus, if we take $O(\log n)$ independent random walks per edge, at least one of them will hit $S$ again with high probability. This gives a short cycle in $G / S$ ($G$ with $S$ contracted to a single vertex). Since these are independent random walks, Chernoff bounds imply that the maximum congestion on any edge is small. Thus, we can greedily pick a large set of cycles in $G / S$ that are edge-disjoint.
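The walk-based cycle-finding step might look as follows (an illustrative sketch under our assumptions: `adj` is an adjacency list of the expander, `S` the high-degree set, and `walk_len` and `trials` stand in for the $\phi^{-2}\log n$ and $O(\log n)$ bounds):

```python
import random

def cycle_via_random_walks(adj, S, walk_len, trials, rng):
    """For each edge (u, v) leaving S, run up to `trials` independent
    random walks of length `walk_len` from v; a walk that re-enters S
    closes a cycle in G / S (the graph with S contracted). Returns the
    first such cycle, as a vertex list starting and ending inside S."""
    boundary = [(u, v) for u in S for v in adj[u] if v not in S]
    for u, v in boundary:
        for _ in range(trials):
            path, x = [u, v], v
            for _ in range(walk_len):
                x = rng.choice(adj[x])
                path.append(x)
                if x in S:
                    return path
    return None
```

The full construction additionally tracks edge congestion across all walks so that a large edge-disjoint subset of the discovered cycles can be kept.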
Now, we just need to connect these cycles within $S$. We define a new graph $G'$ on the vertices of $S$, with one edge for every cycle found in $G / S$, connecting the cycle's two endpoints in $S$, and recurse on $G'$. With $O(\log n)$ levels of recursion (since $|S| \le n/2$), and using the naive cycle decomposition at the base case, we find a short cycle decomposition of this graph, and can then expand it to a cycle decomposition of $G$ using the cycles in $G / S$. Each level of recursion increases the cycle length by a factor of the walk length.
There is a key obstacle here: this approach really needs expanders, not pieces contained in expanders, as in the expander decomposition of Spielman and Teng [ST11a]. Instead, we use a recent result of Nanongkai and Saranurak [NS17] that guarantees the pieces themselves are expanders, at the cost of achieving only conductance $n^{-o(1)}$ and a running time of $m^{1+o(1)}$. A careful trade-off of parameters allows us to bound the recursion depth, resulting in an $(n^{1+o(1)}, n^{o(1)})$-short cycle decomposition in $m^{1+o(1)}$ time.
Cycle Decomposition Algorithms in the Following Sections.
In the following sections, we assume CycleDecomposition is an algorithm that takes as input an unweighted graph $G$ with $n$ vertices and $m$ edges, runs in time at most $T_{\mathrm{CD}}(m)$, and returns an $(\hat{m}, L)$-short cycle decomposition of $G$. Further, we assume that $T_{\mathrm{CD}}$ is superadditive, i.e., it satisfies
$$T_{\mathrm{CD}}(m_1) + T_{\mathrm{CD}}(m_2) \le T_{\mathrm{CD}}(m_1 + m_2)$$
for all $m_1, m_2 \ge 0$. Since $\hat{m}$ and $L$ will remain the same throughout these sections, we will simply write $\hat{m}$ and $L$ instead of $\hat{m}(m)$ and $L(m)$.
4 Degree-Preserving Spectral Sparsification
In this section, we describe an efficient algorithm for constructing degree-preserving spectral sparsifiers, proving Theorem 3.3.
The algorithm will use a short cycle decomposition, and sparsify each cycle independently, keeping either its odd-indexed or its even-indexed edges, each with doubled weight.
We will bound the error in this distribution via matrix Chernoff bounds [Tro12], and recursively apply this sparsification procedure until our graph achieves low edge count.
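The per-cycle sampling step can be sketched as follows. This is a minimal sketch of the unweighted, even-length case only (Algorithm 1 itself also handles weights and odd-length cycles):

```python
import random

def sample_cycle(cycle_edges, rng):
    """Degree-preserving sampling of an even-length cycle: with probability
    1/2 keep the even-indexed edges, otherwise the odd-indexed ones, each
    at double weight. Around the cycle every vertex touches one edge of
    each parity, so it retains exactly one incident edge at weight 2 --
    weighted degrees are preserved exactly, and the expected graph equals
    the original cycle."""
    assert len(cycle_edges) % 2 == 0, "sketch covers even cycles only"
    parity = rng.randrange(2)
    return [(u, v, 2.0) for i, (u, v) in enumerate(cycle_edges)
            if i % 2 == parity]
```

Note that either outcome halves the number of edges on the cycle while exactly preserving every weighted degree, which is why iterating the procedure drives the edge count down without disturbing degrees.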
The following theorem is the main result of this section.
Given a graph $G$ with integer, polynomially bounded edge weights, an error parameter $\epsilon$, and a cycle decomposition routine CycleDecomposition, the algorithm DegreePreservingSparsify (described in Algorithm 1) returns a graph $H$, with an edge count governed by $\hat{m}$, $L$, and $\epsilon$, such that all vertices have the same weighted degrees in $G$ and $H$, and, with high probability, $H$ is an $\epsilon$-spectral sparsifier of $G$.
The algorithm DegreePreservingSparsify runs in time
We first prove Theorem 3.3 by plugging NaiveCycleDecomposition and ShortCycleDecomposition into Theorem 4.1 and evaluating the resulting bounds. It is easy to check that their runtimes satisfy the superadditivity assumption.
Proof of Theorem 3.3.
Note that DegreePreservingSparsify always returns a graph $H$ with the same weighted degrees as $G$, such that, with high probability, $H$ is an $\epsilon$-spectral sparsifier of $G$. Using either NaiveCycleDecomposition or ShortCycleDecomposition as the algorithm CycleDecomposition, we obtain the following guarantees:
Using NaiveCycleDecomposition: DegreePreservingSparsify runs in time, and returns an with edges.
Using ShortCycleDecomposition: DegreePreservingSparsify runs in time, and returns an with edges.
Thus we have our theorem. ∎
In order to prove Theorem 4.1, we first prove the following lemma about the effect of sampling the cycles independently. It is a direct consequence of matrix concentration bounds.
Let $\mathcal{D}_1, \ldots, \mathcal{D}_k$ be independent distributions over graphs containing at most $s$ edges each, and let their expectations be
$$G_i = \mathbb{E}_{H_i \sim \mathcal{D}_i}[H_i],$$
and define their sum to be $G = \sum_i G_i$. For any error parameter $\epsilon \in (0, 1)$ such that the maximum leverage score of any edge in the support of any $\mathcal{D}_i$ with respect to $G$ is bounded above by $\rho = O\!\left(\frac{\epsilon^2}{s \log n}\right)$, the random graph
$$H = \sum_i H_i,$$
with $H_i$ drawn independently from $\mathcal{D}_i$, satisfies with high probability
$$(1 - \epsilon)\, L_G \preceq L_H \preceq (1 + \epsilon)\, L_G.$$
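For concreteness, the leverage scores appearing in the lemma can be computed directly on small graphs via the pseudoinverse (a dense illustration of the definition, not an algorithm used in the paper):

```python
import numpy as np

def leverage_scores(n, edges, weights):
    """Leverage score of edge e = (u, v) with weight w_e w.r.t. the graph
    Laplacian L: tau_e = w_e * b_e^T L^+ b_e, where b_e = 1_u - 1_v and
    L^+ is the Moore-Penrose pseudoinverse. Dense O(n^3) computation."""
    B = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        B[i, u], B[i, v] = 1.0, -1.0
    L = B.T @ (weights[:, None] * B)          # L = B^T W B
    Lp = np.linalg.pinv(L)
    return weights * np.einsum('ij,jk,ik->i', B, Lp, B)
```

In a connected graph these scores sum to $n - 1$; for instance, every edge of the unweighted complete graph $K_n$ has leverage score $2/n$.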
This is a corollary of the matrix Chernoff bounds from [Tro12], which state that for a sequence of independent random PSD matrices $X_1, \ldots, X_k$ of dimension $n \times n$ such that $\lambda_{\max}\!\big(\sum_i \mathbb{E}[X_i]\big) \le \mu_{\max}$, we have for any $\delta \ge 0$,
$$\Pr\Big[\lambda_{\max}\Big(\sum_i X_i\Big) \ge (1 + \delta)\,\mu_{\max}\Big] \;\le\; n \cdot \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu_{\max}/R},$$
where $R$ is such that for each $i$ we have $\lambda_{\max}(X_i) \le R$ almost surely.
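A quick numerical illustration of this concentration (our own toy example, not from the paper): take i.i.d. rank-one PSD samples whose expectations sum to the identity, and observe that the extreme eigenvalues of the sum stay near $\mu_{\max} = 1$, with fluctuations governed by $R$.

```python
import numpy as np

# Toy matrix-Chernoff check: X_i = (d / k) * e_j e_j^T with j uniform in
# {0, ..., d-1}, so E[sum_i X_i] = I_d, mu_max = 1, and ||X_i|| = d / k = R.
rng = np.random.default_rng(0)
d, k = 5, 5000
counts = np.bincount(rng.integers(0, d, size=k), minlength=d)
eigs = (d / k) * counts  # the sum is diagonal, so these are its eigenvalues
lam_max, lam_min = eigs.max(), eigs.min()
# With R = 0.001, the deviations |lam - 1| are on the order of a few percent.
```

Shrinking $R$ (larger $k$) tightens the concentration, mirroring how the lemma's leverage-score bound controls $R$ for the graph sampling above.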
To prove Lemma 4.2, we set $X_i = L_G^{+/2} L_{H_i} L_G^{+/2}$, $\mu_{\max} = 1$, and $R = s\rho$. Then,
$$\lambda_{\max}(X_i) \le \operatorname{tr}(X_i) = \sum_{e \in H_i} w_e\, b_e^{\top} L_G^{+} b_e \le s\rho,$$
where the last inequality follows since the number of edges in $H_i$ is at most $s$, and $w_e\, b_e^{\top} L_G^{+} b_e$ is the definition of the leverage score of any edge $e$ in the support of $\mathcal{D}_i$ w.r.t. $G$. Note that our edge leverage scores are bounded above by $\rho$. Now, we can set $\delta = \epsilon$ to get
$$\Pr\Big[\lambda_{\max}\Big(\sum_i X_i\Big) \ge 1 + \epsilon\Big] \;\le\; n \cdot e^{-\Omega(\epsilon^2 / (s\rho))}.$$
Similarly, we bound the other direction, and by the union bound, we obtain with high probability: