1 Introduction
Graph decomposition is a useful algorithmic primitive with various applications. The general framework is to remove a few edges so that the remaining components have nice properties, and then specific problems are solved independently in each component. Several types of graph decomposition results have been studied in the literature. The most relevant to this work are low diameter graph decompositions and expander decompositions. We refer the reader to Section 2 for notation and definitions.
Low Diameter Graph Decompositions:
Given a weighted undirected graph and a parameter , a low diameter graph decomposition algorithm seeks to partition the vertex set into sets with the following two properties:

Each component has bounded shortest path diameter, i.e. , where is the shortest path distance between and using the edge weight .

There are not too many edges between the sets , i.e. , where is the “distortion” that depends on the input graph.
This widely studied [LS93, KPR93, Bar96, LS10, AGG14]
primitive (and its generalization to decomposition into padded partitions) has been very useful in designing approximation algorithms
[CCC98, CKR01, FHRT03, FHL08, KR11, BFK11, LOT14]. This approach is particularly effective when the input graph is of bounded genus or minor-free, in which case [LS10] and [AGG14]. For these special graphs, this primitive can be used to prove constant flow-cut gaps [KPR93], to prove tight bounds on the Laplacian spectrum [BLR10, KLPT09], and to obtain constant factor approximation algorithms for NP-hard problems [BFK11, AL17]. However, there are graphs for which the distortion is necessarily $\Omega(\log n)$, where $n$ is the number of vertices, and this translates to an $\Omega(\log n)$ factor loss in applying this approach to general graphs. For example, in a hypercube, if we only delete a small constant fraction of edges, some remaining components will have diameter $\Omega(\log n)$.
Expander Decompositions:
Given an undirected graph and a parameter , an expander decomposition algorithm seeks to partition the vertex set into sets with the following two properties.

Each component is an expander, i.e. , where is the conductance of the induced subgraph ; see Section 2 for the definition of conductance.

There are not too many edges between the sets , i.e. , where is a parameter depending on the graph and .
This decomposition is also well studied [KVV04, ST11, ABS10, OT14], and has proved useful in solving Laplacian equations, approximating Unique Games, and designing clustering algorithms. It is of natural interest to minimize the parameter . Similar to the low diameter partitioning case, there are graphs where . For example, in a hypercube, if we delete a small constant fraction of edges, some remaining components will have conductance $O(1/\log n)$.
Motivations:
In some applications, we cannot afford an $\Omega(\log n)$ factor loss in the approximation ratio. One motivating example is the Unique Games problem. It is known that Unique Games can be solved effectively in graphs with constant conductance [AKK08] and more generally in graphs with low threshold rank [Kol11, GS11, BRS11], and in graphs with constant diameter [GT06]. Some algorithms for Unique Games on general graphs are based on graph decomposition results that remove a small constant fraction of edges so that the remaining components are of low threshold rank [ABS10] or of low diameter [AL17], but the $\Omega(\log n)$ factor loss in the decomposition is the bottleneck of these algorithms. This leads us to the question of finding a property that is closely related to low diameter and high expansion, so that every graph admits a decomposition into components with such a property without an $\Omega(\log n)$ factor loss.
Effective Resistance Diameter:
The property that we consider in this paper is having low effective resistance diameter. We interpret the graph as an electrical circuit by viewing every edge $e$ as a resistor with resistance $1/w_e$, where $w_e$ is the weight of $e$. The effective resistance distance between the vertices and is then the potential difference between and when injecting a unit of electric flow into the circuit from the vertex and removing it out of the circuit from the vertex . We define the maximum effective resistance distance over all pairs of vertices
as the effective resistance diameter of . Both the properties of low diameter and of high expansion have the property of low effective resistance diameter as a common denominator: The effective resistance distance is upper bounded by the shortest path distance for any graph, and so every low diameter component has low effective resistance diameter. Also, a regular graph with constant expansion has effective resistance diameter $O(1/d)$, where $d$ is the degree [BK89, CRR97], and so an expander graph also has low effective resistance diameter. See Section 2 for more details.
In this paper, we study the connection between effective resistance and graph conductance. Roughly speaking, we show that if all sets have mild expansion (see Theorem 1), then the effective resistance diameter is small. We use this observation to design a graph partitioning algorithm to decompose a graph into clusters with effective resistance diameter at most the inverse of the average degree (up to constant losses) while removing only a constant fraction of edges. This shows that although we cannot partition a graph into expanders by removing a constant fraction of edges, we can partition it into components that satisfy the "electrical properties" of expanders.
Applications of Effective Resistance:
Besides the motivation from the Unique Games problem, we believe that effective resistance is a natural property to be investigated on its own. The effective resistance distance between two vertices has many useful probabilistic interpretations, such as the commute time [CRR97], the cover time [Mat88]
, and the probability of an edge being in a random spanning tree
[Kir47]. See Section 2 for more details. Recently, the concept of effective resistance has found surprising applications in spectral sparsification [SS11], in computing maximum flows [CKM11], in finding thin trees [AO15], and in generating random spanning trees [KM09, MST15, DKP17]. The recent algorithms for generating a random spanning tree are closely related to our work. Kelner and Madry [KM09] showed how to sample a random spanning tree in $\widetilde{O}(m\sqrt{n})$ time, where $m$ is the number of edges, faster than the worst case cover time (see Section 2). A crucial ingredient of their algorithm is the low diameter graph decomposition technique, which they use to ensure that the resulting components have small cover time. In subsequent work, Madry, Straszak and Tarnawski [MST15] improved the time complexity to $\widetilde{O}(m^{4/3})$ by working with the effective resistance metric instead of the shortest path metric. Indeed, their technique of reducing the effective resistance diameter is similar to ours, even though it cannot recover our result.
1.1 Our Results
Our main technical result is the following connection between effective resistance and graph partitioning.
Theorem 1.
Let be a weighted graph with weights . Suppose for any set with we have
(mild expansion) 
for some and . Then, for any pair of vertices , we have
(resistance bound) 
where is the weighted degree of .
In [CRR97], Chandra et al. proved that a regular graph with constant expansion has effective resistance diameter $O(1/d)$, where $d$ is the degree. They also proved that the effective resistance diameter of a $d$-dimensional grid is $O_d(1)$ when $d \geq 3$, even though it is a poor expander. Theorem 1 can be seen as a common generalization of these two results, using the mild expansion condition as a unifying assumption. Chandra et al. [CRR97] also showed that the effective resistance diameter of a $2$-dimensional grid is $\Theta(\log n)$. Note that for a grid, for any square. This shows that the mild expansion assumption of the theorem cannot be weakened in the sense that if for some sets , then may grow as a function of .
The proof of Theorem 1 also provides an efficient algorithm to find such a sparse cut. The high-level idea is to prove that if all level sets of the
electric potential vector satisfy the
mild expansion condition, then the potential difference between and must be small, i.e., is small. Combining with a fast Laplacian solver [ST14], we show that the existence of a pair of vertices with high effective resistance distance implies the existence of a sparse cut which can be found in nearly linear time.
Corollary 2.
Let be a weighted undirected graph. If for all , then for any , there is a subset of vertices such that
Furthermore, the set can be found in time .
Using Corollary 2 repeatedly, we can prove the following graph decomposition result.
Theorem 3 (Main).
Given a weighted undirected graph , and a large enough parameter , there is an algorithm with time complexity that finds a partition satisfying
(loss bound) 
and
(resistance bound) 
for all .
Let $G$ be a $d$-regular unweighted graph. Theorem 3 implies that it is possible to remove a constant fraction of the edges of $G$ and decompose $G$ into components with effective resistance diameter $O(1/d)$. Note that regular expanders with constant conductance have the least effective resistance diameter, $\Theta(1/d)$, among all $d$-regular graphs. So, even though it is impossible to decompose regular graphs into graphs with constant expansion while removing only a constant fraction of edges, we can find a decomposition with analogous "electrical properties".
We can also view Theorem 3 as a generalization of the following result: any $d$-regular graph can be decomposed into $\Omega(d)$-edge-connected subgraphs by removing only a constant fraction of edges. This is because if the effective resistance diameter of an unweighted graph is at most $1/k$, then the graph must be $k$-edge-connected. Recall that a graph is $k$-edge-connected if the size of every cut in that graph is at least $k$.
2 Preliminaries
In this section, we first define the notation used in this paper, and then review the background on effective resistance, Laplacian solvers, and graph expansion in the following subsections.
Given an undirected graph and a subset of vertices , we use the notation for the set of edges with both endpoints in , i.e. . We write for the complement of with respect to , i.e. . The variables and stand for the number of vertices and the edges of the graph respectively, i.e. and . We use the notation for the edge boundary of , i.e. . For a graph with weights , we write for the weighted degree of . For , the volume of is defined to be . When the graph is clear in the context we may drop the subscript in all aforementioned notation.
Scalar functions and vectors are typed in bold, i.e. , or . For a subset , the notation stands for the sum of the weights of all edges in , i.e. . The th canonical basis vector is denoted by . Matrices are typed in serif, i.e. .
Time complexities are given in asymptotic notation. We employ the notation to hide polylogarithmic factors in , i.e. . We use the notation for asymptotic inequalities, i.e. ; and the notation for asymptotic equalities, i.e. .
2.1 Electric Flow, Electric Potential, and Effective Resistance
Let be a given graph with nonnegative edge weights . The notion of an electric flow arises when one interprets the graph as an electrical network where every edge $e$ represents a resistor with resistance $1/w_e$.
We fix an arbitrary orientation of the edges and define a unit flow in this network as a function (where for we define ) satisfying the following:
where is the set of edges having as the head in our orientation, and is the set of edges having as tail. Let be an oriented edge. The flow has to obey Ohm’s law
(Ohm’s law) 
for some vector which we call the potential vector. The electrical flow between the vertices and is the unit flow that satisfies 2.1 and Ohm’s law.
The electrical energy of a flow is defined as the following quantity,
(electrical energy) 
It is known that the electric flow between and is the unit flow with minimal electrical energy. The effective resistance between the vertices and is the potential difference between the vertices and induced by this flow, i.e. . It is known that the potential difference between and equals the energy of this flow. This is often referred to as Thomson's principle.
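As a numerical sanity check of the identities above, the following minimal sketch (assuming numpy; all variable names are ours) computes the unit electric flow on a unit-weight triangle via Ohm's law and verifies that its electrical energy equals the potential difference between the terminals:

```python
import numpy as np

# Triangle with unit weights.
edges = [(0, 1), (1, 2), (0, 2)]
w = np.ones(3)
B = np.zeros((3, 3))                    # signed edge-vertex incidence matrix
for i, (a, b) in enumerate(edges):
    B[i, a], B[i, b] = 1.0, -1.0
L = B.T @ np.diag(w) @ B                # weighted Laplacian
chi = np.array([1.0, -1.0, 0.0])        # unit demand: inject at 0, extract at 1
phi = np.linalg.pinv(L) @ chi           # potentials solving L phi = chi
f = np.diag(w) @ B @ phi                # Ohm's law: f_e = w_e * (potential drop)
energy = np.sum(f ** 2 / w)             # electrical energy of the flow
# energy = potential difference = effective resistance (here 2/3 by series/parallel)
assert abs(energy - (phi[0] - phi[1])) < 1e-9
```

By the series/parallel rules, the direct unit resistor in parallel with the two-hop path of resistance 2 gives effective resistance 2/3, which the computation reproduces.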
The electric potential vector and the effective resistance are known to have the following closed form expressions: Let be the weighted adjacency matrix of , i.e. the matrix satisfying , and the weighted degree matrix, i.e. the diagonal matrix satisfying . The (weighted) Laplacian is defined to be the matrix
$L := D - A$ (weighted Laplacian)
It is wellknown that this is a symmetric positive semidefinite matrix. We will take
as the spectral decomposition of , where
are the eigenvalues of
sorted in increasing order. It is easy to verify and further it can be shown that this is the only vector (up to scaling) satisfying this when is connected. This means if is connected, the matrix is invertible in the subspace perpendicular to . This inversion will be done by the matrix , the socalled MoorePenrose pseudoinverse of defined by(pseudoinverse of ) 
Let be the unit electric flow vector. It can be verified that the electric potential – i.e. the vector satisfying for all – satisfies the equation
(2.1) 
In particular, this implies the following closed form expression for the effective resistance:
$R_{\mathrm{eff}}(u,v) = (\mathbf{e}_u - \mathbf{e}_v)^{\top} L^{+} (\mathbf{e}_u - \mathbf{e}_v)$ (effective resistance)
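This closed form is easy to evaluate directly on small graphs. The following sketch (assuming numpy; the function name `effective_resistance` is illustrative, not from the paper) computes the quadratic form in the Laplacian pseudoinverse; a Laplacian solver would replace the dense pseudoinverse at scale:

```python
import numpy as np

def effective_resistance(W, u, v):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v) for weight matrix W."""
    L = np.diag(W.sum(axis=1)) - W        # weighted Laplacian L = D - A
    L_pinv = np.linalg.pinv(L)            # Moore-Penrose pseudoinverse
    chi = np.zeros(W.shape[0])
    chi[u], chi[v] = 1.0, -1.0            # e_u - e_v
    return float(chi @ L_pinv @ chi)

# Resistances add in series: a unit-weight path on 4 vertices gives R_eff = 3.
W_path = np.zeros((4, 4))
for i in range(3):
    W_path[i, i + 1] = W_path[i + 1, i] = 1.0
```

On the unit-weight path, `effective_resistance(W_path, 0, 3)` returns 3, matching the series rule for resistors.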
It can be verified that this defines a metric on the vertex set of [KR93], as we have

if and only if .

for all .

for all .
Further, by routing the unit flow along a shortest path, we see that the shortest path metric dominates the effective resistance metric, i.e. for all pairs of vertices.
It is known that the commute time distance between and – the expected number of steps a random walk starting from the vertex needs to visit the vertex and then return to – is twice the total edge weight times the effective resistance distance [CRR97]. Also, the effective resistance of an edge equals the probability of this edge being contained in a uniformly sampled random spanning tree [Kir47]. A well-known result of Matthews [Mat88] relates the effective resistance diameter to the cover time of the graph – the expected number of steps a random walk needs to visit all the vertices of . Aldous [Ald90] and Broder [Bro89] have shown that simulating a random walk until every vertex has been visited allows one to sample a uniformly random spanning tree of the graph.
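The Aldous-Broder observation translates into a very short procedure: walk until every vertex has been seen, and keep the edge by which each vertex was first entered. A minimal sketch (names are ours, not the paper's):

```python
import random

def aldous_broder(adj, n, seed=None):
    """Sample a uniformly random spanning tree by simulating a random walk
    until all n vertices are visited; adj maps each vertex to its neighbor list."""
    rng = random.Random(seed)
    root = rng.randrange(n)
    visited = {root}
    tree = set()
    u = root
    while len(visited) < n:
        v = rng.choice(adj[u])
        if v not in visited:
            visited.add(v)
            tree.add((u, v))   # the first-entry edge of v joins the tree
        u = v
    return tree
```

The walk's running time is the cover time of the graph, which is exactly why the effective resistance diameter (via Matthews' bound) matters for these sampling algorithms.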
2.2 Solving Laplacian Systems
For our algorithmic results, it will be important to be able to compute electric potentials, and effective resistances quickly. We will do this by appealing to Equation (2.1) and the definition of the effective resistance. Both of these equations require us to solve a Laplacian system. Fortunately, it is known that these systems can be solved in nearly linear time [ST14, KMP10, KMP11, KOSA13, CKM14, KS16].
Lemma 4 (The Spielman-Teng Solver, [ST14]).
Let a (weighted) Laplacian matrix , a righthand side vector , and an accuracy parameter be given. Then, there is a randomized algorithm which takes time and produces a vector that satisfies
(accuracy guarantee) 
with constant probability, where .
For our purposes it will suffice to pick inversely polynomial in the size of the graph in the unweighted case, and in the weighted case.
Extending the ideas of Kyng and Sachdeva [KS16], Durfee et al. [DKP17] show that it is possible to compute approximations for effective resistances between a set of given pairs efficiently.
Lemma 5.
Let be a weighted graph, an accuracy parameter, and . There is an time algorithm which returns numbers for all satisfying
This lemma will aid us in computing fast approximations for furthest points in the effective resistance metric. For our purposes, we only need to pick as a small enough constant, i.e. . Similar guarantees can also be obtained using the ideas of Spielman and Srivastava [SS11].
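The Spielman-Srivastava idea mentioned above can be sketched concretely: each vertex is embedded as a column of $Z = Q W^{1/2} B L^{+}$ for a random Johnson-Lindenstrauss projection $Q$, and squared distances between columns approximate effective resistances for all pairs at once. The following is a minimal illustration (assuming numpy; the function name and parameters are ours), using a dense pseudoinverse where a nearly linear time Laplacian solver would be used at scale:

```python
import numpy as np

def approx_resistances(edges, weights, n, k=200, seed=0):
    """Return a k x n embedding Z with R_eff(u, v) ~ ||Z[:, u] - Z[:, v]||^2."""
    m = len(edges)
    B = np.zeros((m, n))                            # signed incidence matrix
    for i, (a, b) in enumerate(edges):
        B[i, a], B[i, b] = 1.0, -1.0
    w = np.asarray(weights, dtype=float)
    L = B.T @ (w[:, None] * B)                      # weighted Laplacian
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((k, m)) / np.sqrt(k)    # JL projection of edge space
    return Q @ (np.sqrt(w)[:, None] * B) @ np.linalg.pinv(L)
```

The guarantee is the standard JL one: with $k = O(\log n / \epsilon^2)$ columns, all pairwise resistances are preserved up to a $(1 \pm \epsilon)$ factor with high probability.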
2.3 Conductance
For a graph with nonnegative edge weights , we define the conductance of a set as
(conductance of a set) 
The conductance of the graph is then defined as
(conductance of a graph) 
It is well-known [Che70, AM85] that the conductance of the graph is controlled by the spectral gap (second smallest eigenvalue) $\lambda_2$ of the normalised Laplacian matrix , i.e.
$\lambda_2 / 2 \leq \phi(G) \leq \sqrt{2\lambda_2}$ (Cheeger's inequality)
Appealing to the closed form formula for the effective resistance it can be verified that the spectral gap of the (unnormalised) Laplacian controls the effective resistance distance, i.e.
By an easy application of Cheeger’s inequality we see that the expansion controls the effective resistance as well, i.e.
Indeed, Theorem 1 and Corollary 2 will improve upon this bound.
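Both quantities in Cheeger's inequality are directly computable on small graphs. A minimal sketch (assuming numpy; function names are ours) that evaluates the conductance of a set and the normalised spectral gap:

```python
import numpy as np

def conductance(W, S):
    """phi(S) = w(boundary of S) / min(vol(S), vol(complement)) for weight matrix W."""
    n = W.shape[0]
    mask = np.zeros(n, dtype=bool)
    mask[list(S)] = True
    boundary = W[mask][:, ~mask].sum()          # weight crossing the cut
    return boundary / min(W[mask].sum(), W[~mask].sum())

def spectral_gap(W):
    """Second smallest eigenvalue of the normalised Laplacian I - D^{-1/2} W D^{-1/2}."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    N = np.eye(W.shape[0]) - d_inv_sqrt @ W @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(N))[1]
```

On the 4-cycle, for instance, the spectral gap is 1 and the conductance of a half cut is 1/2, so both sides of Cheeger's inequality are tight or near tight.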
3 From Well Separated Points to Sparse Cuts
In this section, we are going to prove Theorem 1 and Corollary 2. As previously mentioned, we will prove that if all the level sets of the potential vector have mild expansion, the effective resistance cannot be high.
See 1
Proof.
In the following let be a unit electric flow from to , and be the corresponding vector of potentials where we assume without loss of generality that . We direct our attention to the following threshold sets
Then, we have
Using Ohm’s law, we can rewrite this into
(3.1) 
where is the potential difference along the endpoints of the edge . Normalizing this, we get
(3.2) 
Now, set . Restricted to the set of edges , is a probability distribution, and the LHS of (3.2) corresponds to the expected potential drop when edges are sampled with respect to the probability distribution , i.e. we have
Then, by Markov's inequality, we get a set such that

all edges satisfy

, equivalently
Using the observation that the endpoint of an edge that is not contained in should have potential at least , we obtain
Assuming , using the mild expansion property, we have . So, from above we get
where in the first inequality we also used that increases as decreases. Now, iterating this procedure times we obtain
(3.3) 
as increases as decreases. We set , then . Inductively define
Then, using the inequality (3.3), we have
(3.4) 
Note that we can run the above procedure as long as . Therefore, for some , we must have
Therefore,
By a similar argument (sending flow from to ), we see that more than half of the vertices have potential smaller than
Combining these two bounds, we obtain
where the equality follows since the flow is a unit flow. ∎
Remark 6.
For our proof to go through, we do not need the mild expansion condition to be satisfied by all cuts. It suffices to have it satisfied only by the threshold cuts of the electric potential.
For computational purposes, it will be important to show that our argument is robust to small perturbations in the potentials, i.e. we need to show that the proof will still go through when we are working with threshold cuts with respect to a vector which is close to the electric potential vector , rather than working with the potential vector directly. We will show this in Appendix A, Theorem 13.
3.1 Finding the Sparse Cuts Algorithmically
Next we prove Corollary 2.
See 2
Proof.
First, we prove the existence of . Let such that
(3.5) 
We choose the parameters as follows:
(3.6) 
Then, by Theorem 1, there must be a threshold set of the potential vector corresponding to sending one unit of electrical flow from to such that
where the last inequality follows from our assumption that for all . This proves the first part of the corollary.
It remains to devise a near linear time algorithm to find the set . First, suppose that we are given the optimum pair of vertices satisfying (3.5). Using the Spielman-Teng solver (Lemma 4), we can compute the potential vector corresponding to sending one unit of electrical flow from to in time . We can then sort the vertices by their potential values in time . Finally, we simply go over the sorted list and find the least expanding level set. This can be done in time in total, since getting from (resp. from ) can be done by considering the edges incident to .
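The sweep over the sorted list can be sketched as follows (names are illustrative): vertices are processed in potential order, and the cut weight and volume of the growing prefix are maintained incrementally by examining only the edges incident to the vertex being added:

```python
def least_conducting_level_set(order, adj, degree):
    """Sweep over threshold (level) sets of a potential vector.

    order: vertices sorted by potential; adj maps a vertex to (neighbor, weight)
    pairs; degree maps a vertex to its weighted degree. Returns the best
    conductance among proper prefixes and the corresponding level set."""
    total_vol = sum(degree.values())
    in_S, cut, vol = set(), 0.0, 0.0
    best, best_set = float("inf"), None
    for u in order[:-1]:                     # proper subsets only
        in_S.add(u)
        vol += degree[u]
        for v, w in adj[u]:                  # each incident edge flips in/out of the cut
            cut += -w if v in in_S else w
        phi = cut / min(vol, total_vol - vol)
        if phi < best:
            best, best_set = phi, set(in_S)
    return best, best_set
```

Each edge is touched twice over the whole sweep, which is what gives the claimed total running time after sorting.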
It remains to find such an optimal pair of vertices satisfying (3.5). Instead, we find a pair of vertices such that , which is enough for our purposes as this only causes a constant factor loss in the conductance of .
Lemma 7.
Let be a weighted graph. In time , one can compute a pair of vertices satisfying
Proof.
By the triangle inequality for effective resistances, we have the following inequality for any :
(3.7) 
Thus, we fix a . Applying Lemma 5 (with ), we get the numbers which multiplicatively approximate within a factor . Let . By combining the inequality (3.7) with
we obtain for some . The algorithm consists of an application of Lemma 5 with , and a linear scan for finding the maximum. Hence, the time bound follows. ∎
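The argument in Lemma 7 works for any metric: fixing an anchor vertex and taking the farthest point from it gives a 2-approximation to the diameter by the triangle inequality. A minimal sketch (names are ours):

```python
def far_pair(dist, vertices, a):
    """2-approximate farthest pair in a metric: by the triangle inequality,
    d(u, v) <= d(u, a) + d(a, v) <= 2 * max_x d(a, x), so the farthest point
    from any fixed anchor a is within a factor 2 of the true diameter."""
    b = max(vertices, key=lambda x: dist(a, x))
    return a, b
```

In the algorithm, `dist` is the (approximate) effective resistance distance supplied by Lemma 5, so only a single batch of distances from one anchor is needed.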
4 Low Effective Resistance Diameter Graph Decomposition
In this section we prove Theorem 3.
See 3
Proof.
Let be the target effective resistance diameter and be the target sum of the weights of edges that we are going to cut. We will write the algorithm in terms of , and we will optimize for these parameters later in the proof. Note that is the number of vertices of the original graph , and it is fixed throughout the execution of the following algorithm.
Algorithm 9 (Effective Resistance Partitioning).
Input  A graph , and parameters . 

Output  A partition of . 

If there is a vertex such that , then delete all the edges incident to . Repeat this step until there are no such vertices in the remaining graph .

Use Lemma 7 to find vertices such that .

If , return .

Otherwise, find the cut with by invoking Corollary 2, with minimum degree at least and .

Call the algorithm recursively on and .

Return the union of the outputs of both recursive calls.
First of all, by construction, every set in the output partition satisfies . It is not hard to see that the running time is , as the most expensive step of the above algorithm takes time , and we make at most recursive calls.
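The recursive structure of Algorithm 9 can be sketched at a high level as follows. This is only an outline under stated assumptions: the helper callables `find_far_pair`, `resistance`, and `sparse_cut` stand in for Lemma 7, Lemma 5, and Corollary 2 respectively, and the low-degree pruning step is omitted for brevity:

```python
def resistance_partition(G, r_target, find_far_pair, resistance, sparse_cut):
    """Sketch of the recursive effective resistance partitioning.

    G exposes G.vertices and an induced-subgraph operation G.induced(S);
    the three helpers are assumed to be backed by a Laplacian solver."""
    s, t = find_far_pair(G)
    if resistance(G, s, t) <= r_target:
        return [set(G.vertices)]            # resistance diameter already small
    S = sparse_cut(G, s, t)                 # low-conductance level set
    rest = set(G.vertices) - S
    return (resistance_partition(G.induced(S), r_target,
                                 find_far_pair, resistance, sparse_cut)
            + resistance_partition(G.induced(rest), r_target,
                                   find_far_pair, resistance, sparse_cut))
```

Since every recursive call removes at least one vertex from one side, the recursion depth and call count are bounded by the number of vertices, matching the running time accounting above.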
It remains to calculate the sum of the weights of all edges that we cut. Note that we cut edges either when a vertex has low degree or when we find a low conductance set. We classify the cut edges into two types as follows:

Edges where is cut as an incident edge of a vertex with .

The rest of the edges, i.e., edges where for some where .
We observe that we remove edges of type (i) at most times, because each such removal isolates a vertex of . So, the sum of the weights of edges of type (i) that we cut is at most . It remains to bound the sum of the weights of edges of type (ii) that we cut.
We use an amortization argument: let stand for the number of tokens charged to an edge. We assume that for each edge , the number of tokens is initially set to . Every time we make a cut of type (ii), we assume without loss of generality that and we modify the number of tokens as follows
(4.1) 
By definition, after the termination of the algorithm, we have
(4.2) 
Therefore, to bound the total weight of type (ii) edges that are cut, it is enough to show that no edge is charged with too many tokens provided is large enough.
Claim 10.
If , we will have for all edges after the termination of the algorithm.
Proof.
Fix an edge . Let be the increment of due to a cut . We have
(4.3) 
where is chosen as in (3.6) in the proof of Corollary 2, so that the last inequality holds. Since the minimum degree is at least by Step (1) of the algorithm, we have
The minimum degree condition also implies that . Note that the denominator of the rightmost term of (4.3) is nonnegative as long as , which holds when .
Let be the set for which was charged for the last time, and in general be the th last set for which was charged. We write to denote the increment in due to .
Note that by (4.1) we have for all . Furthermore, since for all , we have
(4.4) 
for all . Therefore, using (4.3) and (4.4), we can write
where the last inequality assumes that . As argued before, the minimum degree condition implies that every vertex is of degree at least and thus . Therefore, by the geometric sum formula, we have
Plugging the value of and setting , we conclude that
∎
5 Conclusions and Open Problems
We have shown that we can decompose a graph into components of bounded effective resistance diameter while losing only a small number of edges. There are a few questions which arise naturally from this work.

Can the decomposition in Theorem 3 be computed in near linear time? Is this decomposition useful in generating a random spanning tree?

For the Unique Games Conjecture, Theorem 3 implies that we can restrict our attention to graphs with bounded effective resistance diameter. Can we solve Unique Games instances better in such graphs? More generally, are there some natural and nontrivial problems that can be solved effectively in graphs of bounded effective resistance diameter?

Is there a generalization of Theorem 1 to multipartitioning, i.e. does the existence of vertices with high pairwise effective resistance distance help us in finding a partitioning of the graph where every cut is very sparse?

Theorem 1 says that a small-set expander has bounded effective resistance diameter. Is it possible to strengthen Theorem 3 to show that every graph can be decomposed into small-set expanders? This may be used to show that the Small-Set Expansion Conjecture and the Unique Games Conjecture are equivalent, depending on the quantitative bounds.
Acknowledgements
We would like to thank Hong Zhou for helpful discussions and anonymous referees for their useful suggestions.
References
 [ABS10] Sanjeev Arora, Boaz Barak, and David Steurer. Subexponential algorithms for unique games and related problems. In 51st Annual Symposium on Foundations of Computer Science, pages 563–572, 2010.

 [AGG14] Ittai Abraham, Cyril Gavoille, Anupam Gupta, Ofer Neiman, and Kunal Talwar. Cops, robbers, and threatening skeletons: padded decomposition for minor-free graphs. In 46th Annual Symposium on Theory of Computing, pages 79–88, 2014.
 [AKK08] Sanjeev Arora, Subhash Khot, Alexandra Kolla, David Steurer, Madhur Tulsiani, and Nisheeth K. Vishnoi. Unique games on expanding constraint graphs are easy. In 40th Annual Symposium on Theory of Computing, pages 21–28, 2008.
 [AL17] Vedat Levi Alev and Lap Chi Lau. Approximating unique games using low diameter graph decomposition. In APPROX/RANDOM 2017, pages 18:1–18:15, 2017.
 [Ald90] David Aldous. The random walk construction of uniform spanning trees and uniform labelled trees. SIAM J. Discrete Math., 3(4):450–465, 1990.
 [AM85] Noga Alon and V. D. Milman. $\lambda_1$, isoperimetric inequalities for graphs, and superconcentrators. J. Comb. Theory, Ser. B, 38(1):73–88, 1985.
 [AO15] Nima Anari and Shayan Oveis Gharan. Effectiveresistancereducing flows, spectrally thin trees, and asymmetric TSP. In 56th Annual Symposium on Foundations of Computer Science, pages 20–39, 2015.
 [Bar96] Yair Bartal. Probabilistic approximations of metric spaces and its algorithmic applications. In 37th Annual Symposium on Foundations of Computer Science, pages 184–193, 1996.
 [BFK11] Nikhil Bansal, Uriel Feige, Robert Krauthgamer, Konstantin Makarychev, Viswanath Nagarajan, Joseph Naor, and Roy Schwartz. Min-max graph partitioning and small set expansion. In 52nd Annual Symposium on Foundations of Computer Science, pages 17–26, 2011.
 [BK89] Andrei Z Broder and Anna R Karlin. Bounds on the cover time. Journal of Theoretical Probability, 2(1):101–120, 1989.
 [BLR10] Punyashloka Biswal, James R. Lee, and Satish Rao. Eigenvalue bounds, spectral partitioning, and metrical deformations via flows. J. ACM, 57(3):13:1–13:23, 2010.
 [Bro89] Andrei Z. Broder. Generating random spanning trees. In 30th Annual Symposium on Foundations of Computer Science, pages 442–447, 1989.
 [BRS11] Boaz Barak, Prasad Raghavendra, and David Steurer. Rounding semidefinite programming hierarchies via global correlation. In 52nd Annual Symposium on Foundations of Computer Science, pages 472–481, 2011.
 [CCC98] Moses Charikar, Chandra Chekuri, ToYat Cheung, Zuo Dai, Ashish Goel, Sudipto Guha, and Ming Li. Approximation algorithms for directed steiner problems. In 9th Annual Symposium on Discrete Algorithms, pages 192–200, 1998.
 [Che70] Jeff Cheeger. A lower bound for the smallest eigenvalue of the Laplacian. Problems in Analysis, pages 195–199, 1970.
 [CKM11] Paul Christiano, Jonathan A. Kelner, Aleksander Madry, Daniel A. Spielman, and ShangHua Teng. Electrical flows, laplacian systems, and faster approximation of maximum flow in undirected graphs. In 43rd Symposium on Theory of Computing, pages 273–282, 2011.
 [CKM14] Michael B. Cohen, Rasmus Kyng, Gary L. Miller, Jakub W. Pachocki, Richard Peng, Anup B. Rao, and Shen Chen Xu. Solving SDD linear systems in nearly $m \log^{1/2} n$ time. In 46th Annual Symposium on Theory of Computing, pages 343–352, 2014.
 [CKR01] Gruia Călinescu, Howard J. Karloff, and Yuval Rabani. Approximation algorithms for the 0extension problem. In 12th Annual Symposium on Discrete Algorithms, pages 8–16, 2001.
 [CRR97] Ashok K. Chandra, Prabhakar Raghavan, Walter L. Ruzzo, Roman Smolensky, and Prasoon Tiwari. The electrical resistance of a graph captures its commute and cover times. Computational Complexity, 6(4):312–340, 1997.
 [DKP17] David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, and Sushant Sachdeva. Sampling random spanning trees faster than matrix multiplication. In 49th Annual Symposium on Theory of Computing, pages 730–742, 2017.
 [FHL08] Uriel Feige, MohammadTaghi Hajiaghayi, and James R. Lee. Improved approximation algorithms for minimum weight vertex separators. SIAM J. Comput., 38(2):629–657, 2008.
 [FHRT03] Jittat Fakcharoenphol, Chris Harrelson, Satish Rao, and Kunal Talwar. An improved approximation algorithm for the 0extension problem. In 14th Annual Symposium on Discrete Algorithms, pages 257–265, 2003.
 [GS11] Venkatesan Guruswami and Ali Kemal Sinop. Lasserre hierarchy, higher eigenvalues, and approximation schemes for graph partitioning and quadratic integer programming with PSD objectives. In 52nd Annual Symposium on Foundations of Computer Science, pages 482–491, 2011.
 [GT06] Anupam Gupta and Kunal Talwar. Approximating unique games. In 17th Annual Symposium on Discrete Algorithms, pages 99–106, 2006.
 [Kir47] Gustav Kirchhoff. Ueber die auflösung der gleichungen, auf welche man bei der untersuchung der linearen vertheilung galvanischer ströme geführt wird. Annalen der Physik, 148(12):497–508, 1847.
 [KLPT09] Jonathan A. Kelner, James R. Lee, Gregory N. Price, and ShangHua Teng. Higher eigenvalues of graphs. In 50th Annual Symposium on Foundations of Computer Science, pages 735–744, 2009.
 [KM09] Jonathan A. Kelner and Aleksander Madry. Faster generation of random spanning trees. In 50th Annual Symposium on Foundations of Computer Science, pages 13–21, 2009.
 [KMP10] Ioannis Koutis, Gary L. Miller, and Richard Peng. Approaching optimality for solving SDD linear systems. In 51st Annual Symposium on Foundations of Computer Science, pages 235–244, 2010.
 [KMP11] Ioannis Koutis, Gary L. Miller, and Richard Peng. A nearly $m \log n$ time solver for SDD linear systems. In 52nd Annual Symposium on Foundations of Computer Science, pages 590–598, 2011.
 [Kol11] Alexandra Kolla. Spectral algorithms for unique games. Computational Complexity, 20(2):177–206, 2011.
 [KOSA13] Jonathan A. Kelner, Lorenzo Orecchia, Aaron Sidford, and Zeyuan Allen Zhu. A simple, combinatorial algorithm for solving SDD systems in nearlylinear time. In 45th Annual Symposium on Theory of Computing, pages 911–920, 2013.
 [KPR93] Philip Klein, Serge A. Plotkin, and Satish Rao. Excluded minors, network decomposition, and multicommodity flow. In 25th Annual Symposium on Theory of Computing, pages 682–690, 1993.
 [KR93] Douglas J Klein and Milan Randić. Resistance distance. Journal of mathematical chemistry, 12(1):81–95, 1993.
 [KR11] Robert Krauthgamer and Tim Roughgarden. Metric clustering via consistent labeling. Theory of Computing, 7(1):49–74, 2011.
 [KS16] Rasmus Kyng and Sushant Sachdeva. Approximate gaussian elimination for laplacians  fast, sparse, and simple. In 57th Annual Symposium on Foundations of Computer Science, pages 573–582, 2016.
 [KVV04] Ravi Kannan, Santosh Vempala, and Adrian Vetta. On clusterings: Good, bad and spectral. J. ACM, 51(3):497–515, 2004.
 [LOT14] James R. Lee, Shayan Oveis Gharan, and Luca Trevisan. Multiway spectral partitioning and higherorder cheeger inequalities. J. ACM, 61(6):37:1–37:30, 2014.
 [LS93] Nathan Linial and Michael E. Saks. Low diameter graph decompositions. Combinatorica, 13(4):441–454, 1993.
 [LS10] James R. Lee and Anastasios Sidiropoulos. Genus and the geometry of the cut graph. In 21st Annual Symposium on Discrete Algorithms, pages 193–201, 2010.
 [Mat88] Peter Matthews. Covering problems for brownian motion on spheres. The Annals of Probability, pages 189–199, 1988.
 [MST15] Aleksander Madry, Damian Straszak, and Jakub Tarnawski. Fast generation of random spanning trees and the effective resistance metric. In 26th Annual ACMSIAM Symposium on Discrete Algorithms, pages 2019–2036, 2015.
 [OT14] Shayan Oveis Gharan and Luca Trevisan. Partitioning into expanders. In 25th Annual ACMSIAM Symposium on Discrete Algorithms, pages 1256–1266, 2014.
 [SS11] Daniel A. Spielman and Nikhil Srivastava. Graph sparsification by effective resistances. SIAM J. Comput., 40(6):1913–1926, 2011.
 [ST11] Daniel A. Spielman and ShangHua Teng. Spectral sparsification of graphs. SIAM J. Comput., 40(4):981–1025, 2011.
 [ST14] Daniel A. Spielman and ShangHua Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM J. Matrix Analysis Applications, 35(3):835–885, 2014.
Appendix A Robustness of the Proof of Theorem 1
We avoided the issue of picking the accuracy parameter for the Laplacian solver we used in Corollary 2. Here, we want to show that the proof is robust to small perturbations in the potentials, i.e. using a Laplacian solver to estimate the potential vector by a vector , additively within an accuracy of , we can still recover our sparse cut. We first start by noting that is implied by the stronger inequality,
(A.1) 
We will show that if is polynomially small in the input data, we can still find a sparse cut. Our plan is as follows.

We will show that using the mild expansion of the threshold sets of the vector , we can still prove upper bounds on the effective resistance (Theorem 13).
A.1 Eigenvalue Bound
We start with a simple eigenvalue bound that will be used to bound the accuracy needed.
Claim 11.
For any connected weighted graph , we have
(A.2) 
Proof.
For any connected weighted graph , we have the following conductance bound,
which implies
by Cheeger’s inequality. Note that
(A.3) 
where the last equality follows by a change of variables . This implies that
(A.4) 
Using , we obtain . Combining everything, we get
(A.5) 
hence proving the claim. ∎
A.2 Picking the Laplacian Solver Accuracy
For , the Spielman-Teng solver in Lemma 4 produces a vector such that
Letting be the  electric potential vector, this becomes
Using the definition of the norm, we have