1 Introduction
In this paper, we study the (unweighted) minimum $k$-cut problem: given an undirected graph $G = (V, E)$ and an integer $k$, we want to delete the minimum number of edges to split the graph into at least $k$ connected components. Throughout the paper, let $\lambda$ denote this minimum number of edges. Note that the $k$-cut problem generalizes the global minimum cut problem, which is the special case $k = 2$.
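To make the objective concrete, the following brute-force sketch (ours, for intuition only; exponential in the number of vertices) evaluates the $k$-cut objective by assigning each vertex one of $k$ labels and counting crossing edges:

```python
from itertools import product

def min_k_cut(n, edges, k):
    """Brute-force minimum k-cut: try every assignment of the n vertices
    to k groups, require all k groups nonempty, and count crossing edges.
    Deleting the crossing edges leaves at least k connected components."""
    best = None
    for labels in product(range(k), repeat=n):
        if len(set(labels)) < k:
            continue  # fewer than k nonempty groups
        cut = sum(1 for u, v in edges if labels[u] != labels[v])
        if best is None or cut < best:
            best = cut
    return best

# Two triangles joined by a bridge: the global min cut (k = 2) is the bridge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
```

On this example, `min_k_cut(6, edges, 2)` returns 1 (the bridge alone) and `min_k_cut(6, edges, 3)` returns 3 (the bridge plus two edges of one triangle).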
For fixed constant $k$, the first polynomial-time algorithm for this problem is due to Goldschmidt and Hochbaum [GH94], who designed an algorithm running in $n^{O(k^2)}$ time. Subsequently, Karger and Stein [KS96] showed that their (recursive) randomized contraction algorithm solves the problem in $\tilde{O}(n^{2(k-1)})$ time. This was later essentially matched by a deterministic algorithm of Thorup [Tho08] based on tree packing, which runs in $\tilde{O}(n^{2k})$ time.
These algorithms remained the state of the art until a few years ago, when new progress was established on the problem [GLL18, GLL19, Li19], culminating in the $n^{(1+o(1))k}$-time algorithm of Gupta, Harris, Lee, and Li [GHLL20], which is, surprisingly enough, just the original Karger-Stein recursive contraction algorithm with an improved analysis. The $n^{(1+o(1))k}$-time algorithm also works for weighted graphs, and they show by a reduction from max-weight clique that their algorithm is asymptotically optimal, assuming the popular conjecture that max-weight $k$-clique cannot be solved faster than $n^{(1-o(1))k}$ time. However, whether the algorithm is optimal for unweighted graphs was left open; indeed, the (unweighted) $k$-clique problem can be solved in $n^{(\omega/3)k}$ time through fast matrix multiplication. (As standard, we define $\omega$ as the smallest constant such that two $n \times n$ matrices can be multiplied in $n^{\omega+o(1)}$ time. The best bound known is $\omega < 2.3729$ [AW21], although $\omega = 2$ is widely believed.) Hence, the time complexity of unweighted minimum $k$-cut was left open, and it was unclear whether the right answer was $n^{(1-o(1))k}$, or $n^{(\omega/3)k}$, or somewhere in between.
In this paper, we make partial progress on this last question by showing that for simple graphs, the right answer is asymptotically bounded away from $n^k$:
Theorem 1.1.
There is an absolute constant $\epsilon > 0$ such that the minimum $k$-cut problem on simple graphs can be solved in $n^{(1-\epsilon)k + O(1)}$ time.
In fact, we give evidence that $n^{(\omega/3)k}$ may indeed be the right answer (assuming the popular conjecture that $k$-clique cannot be solved any faster). This is discussed more in the statement of Theorem 1.3.
1.1 Our Techniques
Our high-level strategy mimics that of Li [Li19], in that we make use of the Kawarabayashi-Thorup graph sparsification technique on simple graphs, but our approach differs by exploiting matrix multiplication-based methods as well. Below, we describe these two techniques and how we apply them.
Kawarabayashi-Thorup Graph Sparsification
Our first algorithmic ingredient is the (vertex) graph sparsification technique of Kawarabayashi and Thorup [KT18], originally developed to solve the deterministic minimum cut problem on simple graphs. At a high level, the sparsification process contracts the input graph into one that has a much smaller number of edges and vertices, such that any non-trivial minimum cut is preserved. Here, non-trivial means that the minimum cut does not have just a singleton vertex on one side. More recently, Li [Li19] generalized the Kawarabayashi-Thorup sparsification to also preserve non-trivial minimum $k$-cuts (those without any singleton vertices as components), which led to an $n^{(1+o(1))k}$-time minimum $k$-cut algorithm on simple graphs. The contracted graph has $\tilde{O}(kn/\delta)$ vertices, where $\delta$ is the minimum degree of the graph. If $\delta$ is large enough, say $\delta \ge n^{\epsilon_0}$ for some constant $\epsilon_0 > 0$, then this is $\tilde{O}(kn^{1-\epsilon_0})$ vertices, and running the algorithm of [GHLL20] on the contracted graph already gives $n^{(1-\epsilon_0)(1+o(1))k}$ time. At the other extreme, if there are many vertices of degree less than $n^{\epsilon_0}$, then $\lambda \le (k-1)n^{\epsilon_0}$, since we can take $k-1$ of these low-degree vertices as singleton components of a cut. We then employ an exact algorithm for minimum $k$-cut that runs in $\lambda^{O(k)} n^{O(1)}$ time (such an algorithm has been shown to exist, as we will discuss further when stating Theorem 1.3), which is $n^{O(\epsilon_0 k)}$ time in this regime. For the middle ground, where $\delta$ is not large but there are only a few vertices of low degree, we can modify the Kawarabayashi-Thorup sparsification in [Li19] to produce a graph on a comparably small number of vertices instead, which is enough. This concludes the case when there are no singleton components of the minimum cut.
Matrix Multiplication
What if the minimum $k$-cut has components that are singleton vertices? If all but one component is a singleton, then we can use a matrix multiplication-based algorithm similar to Nešetřil and Poljak's algorithm for $k$-clique [NP85], which runs in $n^{(\omega/3)k + O(1)}$ time. Thus, the main difficulty is to handle minimum cuts where some components are singletons, but not many. The following definition will be at the core of all our discussions for the rest of this paper.
Definition 1.2 (Border and Islands).
Given a cut $X$ with exactly $s$ singleton components, we denote the singleton components as $\{v_1\}, \ldots, \{v_s\}$ and denote the other components $C_1, \ldots, C_{k-s}$. A border of $X$ is a cut obtained by merging some singleton components into larger components. More precisely, a border is defined by a subset $S \subseteq [s]$ and a function $f : S \to [k-s]$. Given $S$ and $f$, we let $C'_j = C_j \cup \{v_i : i \in S,\ f(i) = j\}$; then the border is the cut $X'$ defined by the components $C'_1, \ldots, C'_{k-s}$, together with the unmerged singleton components $\{v_i\}$ where $i \notin S$. The set of vertices $\{v_i : i \in S\}$, corresponding to the merged singleton components, is called the islands.
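As an illustration (a hypothetical helper, not part of the paper's algorithms), the border can be computed directly from $S$ and $f$; since merging an island into a component only removes cut edges, a border is never larger than the original cut:

```python
def border_of(singletons, components, S, f):
    """Build the border cut of Definition 1.2: for each i in S, merge the
    singleton v_i into component C_{f(i)}; unmerged singletons stay put.
    `singletons` is a list of vertices, `components` a list of vertex sets,
    `S` a set of indices into `singletons`, and `f` a dict from S to
    component indices."""
    comps = [set(C) for C in components]
    for i in S:
        comps[f[i]].add(singletons[i])
    comps += [{singletons[i]} for i in range(len(singletons)) if i not in S]
    return comps

def cut_size(parts, edges):
    """Number of edges crossing between different parts of a partition."""
    label = {v: j for j, part in enumerate(parts) for v in part}
    return sum(1 for u, v in edges if label[u] != label[v])
```

For the path 0-1-2 with cut $\{\{0,1\},\{2\}\}$, merging the singleton 2 into $\{0,1\}$ yields a border of size 0, smaller than the original cut of size 1.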
Given this definition, our main technical contribution is as follows: we show that if the cut $X$ has exactly $s$ singleton components, then we can first apply Kawarabayashi-Thorup sparsification to compute a contracted graph of small size that preserves some border of $X$. We then use the algorithm of [GHLL20] on the contracted graph to discover a border, which will succeed with probability inverse-polynomial in the size of the contracted graph. Finally, we run a matrix multiplication-based algorithm to locate the islands in an additional $n^{(\omega/3)s + O(1)}$ time. Altogether, the runtime becomes $n^{(1-\epsilon)k + O(1)}$ for some constant $\epsilon > 0$. We summarize our discussion with the following theorem, which is the central result of this paper.
Theorem 1.3.
Suppose there exists an algorithm that takes in a simple, unweighted graph $G$, and returns its minimum $k$-cut in time $\lambda^{O(k)} n^{O(1)}$. Then we can compute a minimum $k$-cut of a simple, unweighted graph in $n^{(1-\epsilon)k + O(1)}$ time for some absolute constant $\epsilon > 0$.
Recently, Lokshtanov, Saurabh, and Surianarayanan [LSS20] showed an algorithm for exact minimum $k$-cut that runs in time $\lambda^{O(k)} n^{O(1)}$. Combining their result with Theorem 1.3, we obtain a minimum $k$-cut algorithm that runs in time $n^{(1-\epsilon)k + O(1)}$ for some constant $\epsilon > 0$. We further note that we use their algorithm in a black-box manner, which means that if one could derive an exact algorithm with a better constant in the exponent, then our algorithm would have a correspondingly improved runtime.
2 Main Algorithm
In this section, we discuss our algorithm in detail. Given a simple, unweighted graph $G$, we first run an approximate $k$-cut algorithm to determine the magnitude of $\lambda$. If $\lambda$ is small, then we can run the exact algorithm on $G$ and output its result. Otherwise, we apply Lemma 2.2, which is a modified version of Kawarabayashi-Thorup sparsification [KT18] for $k$-cuts. These modifications, discussed in Section 3, will give us a graph on a small number of vertices that preserves at least one border for every minimum $k$-cut of $G$. Now we fix any minimum $k$-cut $X$ of $G$, and fix its border $X'$ specified by Lemma 2.2. For every possible value of $s$, we run Lemma 2.3 to discover $X'$ with high probability.
Once we have found the border, locating the islands is simple. In Section 5, we present a slight variant of Nešetřil and Poljak's clique algorithm [NP85] that solves the following problem in $n^{(\omega/3)s + O(1)}$ time.
Definition 2.1 (Island Problem).
Given a graph $G$ and an integer $s$, find the optimal cut which has exactly $s$ singleton components.
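A brute-force reference for this objective (ours, exponential, for checking small cases) makes the definition concrete: every edge touching an island is cut, including edges between two islands:

```python
from itertools import combinations

def island_cut_value(edges, S):
    """Cut value when each vertex of S becomes its own singleton component:
    every edge with at least one endpoint in S is cut, including edges
    between two islands."""
    S = set(S)
    return sum(1 for u, v in edges if u in S or v in S)

def min_islands(n, edges, s):
    """Brute-force reference for the Island Problem."""
    return min(island_cut_value(edges, S) for S in combinations(range(n), s))
```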
This enables us to recover the minimum $k$-cut from the border by guessing the number of islands in each non-singleton component specified by the border, and finding them independently. The total runtime is $n^{(\omega/3)s + O(1)}$, since the number of islands in any non-singleton component is at most the total number of islands $s$. This proves Theorem 1.3.
Our methods are summarized in the following algorithm. Note that for the initial approximation step, various algorithms can be used.
2.1 Analysis
Our analysis is divided into three parts, each corresponding to one section of the algorithm. The first part concerns the Kawarabayashi-Thorup sparsification, and the following lemma is proved in Section 3.
Lemma 2.2.
For any simple graph, we can compute in polynomial time a partition $\mathcal{P}$ of $V$ such that $|\mathcal{P}| = \tilde{O}(kn/\delta)$ and the following holds:

For any minimum $k$-cut $X$ with exactly $s$ singleton components, there exists $S \subseteq [s]$ and a function $f : S \to [k-s]$ such that the border of $X$ defined by $S$ and $f$, namely $X'$, agrees with the partition $\mathcal{P}$. In other words, all edges of $X'$ go between pairs of parts of $\mathcal{P}$. Moreover, we have $|X'| \le |X| = \lambda$.
Contracting each part of $\mathcal{P}$ into a single vertex, we obtain a graph on $\tilde{O}(kn/\delta)$ vertices that preserves $X'$.
Next, we describe and analyze the algorithm that computes the border. The following lemma is proved in Section 4.
Lemma 2.3.
Fix an integer $j$ and a parameter $\lambda'$, and consider a $j$-cut $X'$ of size at most $\lambda'$. There is a randomized algorithm that computes a list of cuts such that, with high probability, $X'$ is listed as one of the cuts.
Finally, we present and analyze the algorithm that extends the border by computing the missing islands in each nonsingleton component. The following lemma is proved in Section 5.
Lemma 2.4.
There is a deterministic algorithm that solves the Island Problem in $n^{(\omega/3)s + O(1)}$ time.
With these three lemmas in hand, we now analyze Algorithm 1.
Fix a minimum $k$-cut $X$. The initial Kawarabayashi-Thorup sparsification takes polynomial time by Lemma 2.2, and the border $X'$ is preserved by the partition and has size at most $\lambda$. For the correct guess of $s$, Lemma 2.3 detects $X'$ with high probability among the computed list of cuts. Finally, for the cut $X'$, the Island Discovery Algorithm extends it to a minimum $k$-cut in $n^{(\omega/3)s + O(1)}$ time. The total running time is dominated by either the border-detection step or the island-discovery step, whichever has the larger exponent; the remaining terms are negligible. This concludes the analysis of Algorithm 1 and the proof of Theorem 1.3.
3 Kawarabayashi-Thorup Sparsification
In this section, we prove the following Kawarabayashi-Thorup sparsification theorem for any simple graph. Rather than view it as a vertex sparsification process where groups of vertices are contracted, we work with the grouping of vertices itself, which is a partition of the vertex set. We use parts to denote the vertex sets of the partition, to distinguish them from the components of a cut.
Most of the arguments in this section originate from Kawarabayashi and Thorup’s original paper [KT18], though we find it more convenient to follow the presentations of [GLL21] and [Li19].
See Lemma 2.2.
3.1 Regularization Step
We first “regularize” the graph to obey a few natural conditions, which is done at no asymptotic cost to the number of clusters. In particular, we ensure that $m \le O(\lambda n)$, i.e., there are not too many edges, and $\delta \ge \lambda/(k-1)$, i.e., the minimum degree is comparable to the size of the cut.
Nagamochi-Ibaraki sparsification.
First, we show that we can freely assume $m \le O(\lambda n)$ through an initial graph sparsification step due to Nagamochi and Ibaraki; the specific theorem statement here is from [Li19].
Theorem 3.1 (Nagamochi and Ibaraki [NI92], Theorem 3.3 in [Li19]).
Given a simple graph $G$ and a parameter $\lambda$, there is a polynomial-time algorithm that computes a subgraph $G'$ with at most $\lambda n$ edges such that all cuts of size at most $\lambda$ are preserved. More formally, for all cuts $X$ satisfying $|E_G(X)| \le \lambda$, we have $E_{G'}(X) = E_G(X)$.
Compute a constant-factor approximation $\tilde{\lambda}$ of $\lambda$ in polynomial time [LSS20], apply Theorem 3.1 with parameter $O(\tilde{\lambda})$, and replace $G$ with the returned graph $G'$. This allows us to assume $m \le O(\lambda n)$ henceforth.
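The certificate behind Theorem 3.1 can be sketched as follows. This simplified variant (ours, not the single-scan algorithm of Nagamochi and Ibaraki) takes the union of $\lambda$ edge-disjoint maximal spanning forests: at most $\lambda(n-1)$ edges remain, and any cut of size at most $\lambda$ keeps all its crossing edges, since a discarded edge would have $\lambda$ edge-disjoint paths between its endpoints among the kept edges.

```python
def sparsify(n, edges, lam):
    """Union of `lam` edge-disjoint maximal spanning forests (a simplified
    Nagamochi-Ibaraki-style certificate). A discarded edge (u, v) has its
    endpoints connected in every forest, so every cut of size <= lam
    retains all of its crossing edges."""
    remaining = list(edges)
    kept = []
    for _ in range(lam):
        parent = list(range(n))  # union-find for the current forest

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        forest, rest = [], []
        for u, v in remaining:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                forest.append((u, v))
            else:
                rest.append((u, v))
        kept += forest
        remaining = rest
    return kept
```

For example, on two triangles joined by a bridge, the bridge lies on a cut of size 1 and therefore always survives `sparsify(..., 1)`.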
Lower bound the minimum degree.
Next, we would like to ensure that the graph has minimum degree comparable to $\lambda$. While there exists a vertex of degree less than $\lambda/(k-1)$, declare that vertex a trivial part in the final partition, and remove it from $G$. We claim that we can remove at most $k-2$ such vertices; otherwise, $k-1$ of these vertices, taken as singleton components together with the rest of the graph, form a $k$-cut of size less than $\lambda$, contradicting the value of the minimum $k$-cut. We have thus removed at most $k-2$ vertices. The remaining task is to compute a partition of the remaining graph, which has minimum degree at least $\lambda/(k-1)$. We then add a singleton set for each of the vertices removed, which adds at most $k-2$ extra parts; this is negligible since we aim for many more parts in total.
3.2 Kawarabayashi-Thorup Sparsification
It remains to prove the following lemma, which is Lemma 2.2 with the additional assumptions $m \le O(\lambda n)$ and $\delta \ge \lambda/(k-1)$.
Lemma 3.2.
Suppose we are given a simple graph $G$ with $m \le O(\lambda n)$ and $\delta \ge \lambda/(k-1)$. Then, we can compute a partition $\mathcal{P}$ of $V$ such that $|\mathcal{P}| = \tilde{O}(kn/\delta)$ and the following holds:

For any minimum $k$-cut $X$ with exactly $s$ singleton components, there exists $S \subseteq [s]$ and a function $f : S \to [k-s]$ such that the border of $X$ defined by $S$ and $f$, namely $X'$, agrees with the partition $\mathcal{P}$. In other words, all edges of $X'$ go between pairs of parts of $\mathcal{P}$. Moreover, we have $|X'| \le |X| = \lambda$.
Our treatment follows closely from Appendix B of [GLL21].
Expander decomposition preliminaries.
We first introduce the concept of the conductance of a graph, as well as an expander, defined below.
Definition 3.3 (Conductance).
Given a graph $G = (V, E)$, a set $\emptyset \ne S \subsetneq V$ has conductance
$$\Phi_G(S) = \frac{|E(S, V \setminus S)|}{\min\{\mathrm{vol}(S),\, \mathrm{vol}(V \setminus S)\}}$$
in the graph $G$, where $\mathrm{vol}(S) = \sum_{v \in S} \deg(v)$. The conductance of the graph $G$ is the minimum conductance of a set $\emptyset \ne S \subsetneq V$ in $G$.
Definition 3.4.
For any parameter $\phi > 0$, a graph is a $\phi$-expander if its conductance is at least $\phi$.
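As a small sanity check (our illustrative helper, not used by the algorithm), conductance can be computed directly from the definition:

```python
def conductance(n, edges, S):
    """Conductance of a vertex set S per Definition 3.3:
    |E(S, V \\ S)| / min(vol(S), vol(V \\ S)), where vol is the sum of degrees."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    S = set(S)
    crossing = sum(1 for u, v in edges if (u in S) != (v in S))
    vol_S = sum(deg[v] for v in S)
    return crossing / min(vol_S, sum(deg) - vol_S)
```

On two triangles joined by a bridge, the set consisting of one triangle has one crossing edge and volume 7 on each side, so its conductance is $1/7$.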
The following is a well-known result about decomposing a graph into expanders, for which we provide an easy proof below for convenience.
Theorem 3.5 (Expander Decomposition).
For any graph $G$ with $m$ edges and a parameter $\phi > 0$, there exists a partition $V = V_1 \cup \cdots \cup V_\ell$ of the vertex set such that:

For all $i \in [\ell]$, the induced subgraph $G[V_i]$ is a $\phi$-expander.

The number of edges between different parts $V_i$ is $O(\phi m \log n)$.
The partitioning algorithm.
To compute the partition $\mathcal{P}$, we execute the same algorithm from Section B of [GLL21], except that we add an additional step 4. Throughout the algorithm, we fix the parameter $\phi$ as in [GLL21].

Compute an expander decomposition with parameter $\phi$, and let $V_1, \ldots, V_\ell$ be the resulting partition of $V$.

Initialize the set $R = \emptyset$, and initialize $A_i := V_i$ for each $i \in [\ell]$. While there exists some $i$ and a vertex $v \in A_i$ satisfying $\deg_{A_i}(v) < \frac{2}{5} \deg_G(v)$, i.e., vertex $v$ loses more than a $3/5$ fraction of its degree when restricted to the current $A_i$, remove $v$ from $A_i$ and add it to $R$. The set $R$ is called the set of singleton vertices. Note that some $A_i$ can become empty after this procedure. At this point, we call each nonempty $A_i$ a cluster of the graph. This procedure is called the trimming step in [KT18].

Initialize the set $B = \{v \in A_i : \deg_{A_i}(v) \le \frac{1}{2} \deg_G(v)\}$, i.e., for each $i$ and vertex $v \in A_i$ that loses at least a $1/2$ fraction of its degree when restricted to $A_i$, add $v$ to $B$ (but do not remove it from $A_i$ yet). Then, add $B$ to the singletons (i.e., update $R := R \cup B$) and define the core of a cluster $A_i$ as $K_i := A_i \setminus B$. For a given core $K$, let $A(K)$ denote the cluster whose core is $K$. This procedure is called the shaving step in [KT18].

For each core $K_i$ with at most $\lambda/\delta$ vertices, we shatter the core by adding $K_i$ to the singletons (i.e., update $R := R \cup K_i$) and updating $K_i := \emptyset$. This is the only additional step relative to [GLL21].

Suppose there are $r$ nonempty cores, and reorder them so that $K_1, \ldots, K_r$ are precisely the nonempty cores. The final partition of $V$ is $\mathcal{P} = \{K_1, \ldots, K_r\} \cup \{\{v\} : v \in R\}$. In other words, we take each nonempty core as its own set in the partition, and add each vertex $v \in R$ as a singleton set. We call each nonempty core a core in the partition, and each vertex $v \in R$ a singleton in the partition.
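The trimming and shaving steps above can be sketched as follows (our simplification; the degree thresholds `alpha` and `beta` are illustrative and follow the presentation above):

```python
def trim(vertices, adj, deg_G, alpha=2 / 5):
    """Trimming step sketch: repeatedly delete any vertex that retains
    less than an alpha fraction of its global degree inside the set."""
    A = set(vertices)
    changed = True
    while changed:
        changed = False
        for v in list(A):
            inside = sum(1 for u in adj[v] if u in A)
            if inside < alpha * deg_G[v]:
                A.remove(v)
                changed = True
    return A

def shave(A, adj, deg_G, beta=1 / 2):
    """Shaving step sketch: a single pass keeping only vertices with more
    than a beta fraction of their global degree inside A; the result is
    the core of the cluster."""
    return {v for v in A if sum(1 for u in adj[v] if u in A) > beta * deg_G[v]}
```

For instance, a vertex attached to a clique by a single edge but with most of its neighbors outside the cluster is removed by trimming, while the clique itself survives both steps.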
The lemmas below are stated identically to those in [GLL21], so we omit the proofs and direct interested readers to [GLL21].
Lemma 3.6 (Lemma B.11 of [GLL21]).
Let $\phi$ be the parameter fixed above. For each nonempty cluster $A$ and a subset $T \subseteq A$ satisfying $|E(T, A \setminus T)| \le \lambda$, we have either $\mathrm{vol}(T) \le O(\lambda/\phi)$ or $\mathrm{vol}(A \setminus T) \le O(\lambda/\phi)$.
The lemma below from [GLL21] is true for the algorithm without step 4.
Lemma 3.7 (Corollary B.9 of [GLL21]).
Suppose we skip step 4 of the algorithm. Then, there are $\tilde{O}(n/\delta)$ many sets in the partition $\mathcal{P}$.
Clearly, adding step 4 increases the number of parts by a factor of at most $\lambda/\delta$, so we obtain the following corollary.
Corollary 3.8.
There are $\tilde{O}(n\lambda/\delta^2)$ many sets in the partition $\mathcal{P}$.
Since $\lambda \le (k-1)\delta$ by the assumption of Lemma 3.2, we have $\lambda/\delta \le k-1$, so this fulfills the bound $|\mathcal{P}| = \tilde{O}(kn/\delta)$ of Lemma 3.2. For the rest of this section, we prove the remaining property.
The following lemma is a combination of Lemma B.12 of [GLL21] and Lemma 16 of [Li19], and we provide a proof for completeness.
Lemma 3.9.
Let $\phi$ be the parameter fixed above. For any nonempty core $K$ and any minimum $k$-cut $X$ of size at most $\lambda$, there is exactly one component $C^*$ of $X$ satisfying $\mathrm{vol}(C^* \cap A(K)) > \frac{1}{2} \mathrm{vol}(A(K))$, and any other component that is non-singleton must be disjoint from $K$. Moreover, each vertex $v \in K$ has at least $\delta/4$ neighbors in $C^*$.
Proof.
We first show that $\mathrm{vol}(A(K)) \ge \delta^2/2$. Since $K$ is nonempty, each vertex $v \in K$ has at least $\frac{1}{2}\deg_G(v) \ge \delta/2$ neighbors in $A(K)$ by the shaving step, so $|A(K)| \ge \delta/2$, and hence $\mathrm{vol}(A(K)) \ge \delta^2/2$ by the assumption on the minimum degree.
By Lemma 3.6, each component $C$ must satisfy $\mathrm{vol}(C \cap A(K)) \le O(\lambda/\phi)$ or $\mathrm{vol}(A(K) \setminus C) \le O(\lambda/\phi)$, and the latter implies that $\mathrm{vol}(C \cap A(K)) > \frac{1}{2}\mathrm{vol}(A(K))$, which only one component can satisfy. Moreover, one such component must exist since otherwise, $\mathrm{vol}(A(K)) = \sum_C \mathrm{vol}(C \cap A(K)) \le k \cdot O(\lambda/\phi) < \mathrm{vol}(A(K))$, a contradiction. Therefore, all but one component $C^*$ satisfy $\mathrm{vol}(C \cap A(K)) \le O(\lambda/\phi)$.
Next, each vertex $v \in K$ has at least $\frac{1}{2}\deg_G(v)$ neighbors in $A(K)$, and at most $O(k\lambda/\phi)$ of them in total can go to components $C \ne C^*$. This leaves at least $\frac{1}{2}\deg_G(v) - O(k\lambda/\phi)$ neighbors in $C^*$, which is at least $\delta/4$ since $\deg_G(v) \ge \delta$ and $\phi$ is chosen so that $k\lambda/\phi \le \delta/4$.
We now show that if $C$ is non-singleton and $C \ne C^*$, then $C$ is disjoint from $K$. Suppose otherwise; then, any vertex $v \in C \cap K$ has at least $\delta/4$ neighbors in $C^*$ as before. If we move $v$ from $C$ to $C^*$, then the result is still a $k$-cut since $C$ is non-singleton. Moreover, the edges from $v$ to $C \setminus \{v\}$ are newly cut, and the edges from $v$ to $C^*$ are saved. The former is at most $O(k\lambda/\phi)$, and the latter at least $\delta/4$. Since $k\lambda/\phi \le \delta/4$ by the choice of $\phi$, the new cut is smaller than the old one, a contradiction. ∎
Finally, we prove the main property of Lemma 3.2.
Lemma 3.10.
For any minimum $k$-cut $X$ with exactly $s$ singleton components, there exists $S \subseteq [s]$ and a function $f : S \to [k-s]$ such that the border of $X$ defined by $S$ and $f$, namely $X'$, agrees with the partition $\mathcal{P}$. In other words, all edges of $X'$ go between pairs of parts of $\mathcal{P}$. Moreover, we have $|X'| \le |X| = \lambda$.
Proof.
Enumerate the singleton components as $\{v_1\}, \ldots, \{v_s\}$. Let $S \subseteq [s]$ be the set of indices $i$ such that $v_i$ is contained in a part $P \in \mathcal{P}$ that has more vertices than just $v_i$ (i.e., $|P| > 1$). For every such $i$, the part $P$ is a core $K$, and we must have $|K| > \lambda/\delta$, since otherwise it would have been shattered into singletons in step 4 of the algorithm. So there must be a non-singleton component of the minimum cut intersecting $K$ (otherwise $K$ would consist of more than $\lambda/\delta$ singleton components, each contributing at least $\delta$ cut edges, exceeding $\lambda$), and this component is unique by Lemma 3.9: it must be the component $C^*$ from Lemma 3.9. We define $f(i)$ to be the index of this component.
As we have argued in the previous paragraph, the border $X'$ defined by $S$ and $f$ agrees with the partition $\mathcal{P}$. It remains to show that $|X'| \le |X|$. For each $i \in S$, by Lemma 3.9, the vertex $v_i$ has at least $\delta/4$ neighbors in the component $C_{f(i)}$, so merging $v_i$ with $C_{f(i)}$ decreases the cut value by at least $\delta/4$. It follows that the border has size at most $|X| - |S| \cdot \delta/4 \le \lambda$, which meets the claimed bound. ∎
With Lemma 3.10, this concludes the proof of Lemma 3.2.
4 Finding the Border
In this section, we develop an algorithm to compute the border. The main lemma is the following, where $X'$ represents the border we wish to find. See Lemma 2.3.
Our algorithm follows Karger’s contraction algorithm, stated below, and its analysis from [GHLL20].
The key lemma we use is the following from [GHLL20].
Lemma 4.1 (Lemma 17 of [GHLL20]).
The algorithm sets its contraction threshold as in [GHLL20], and by Lemma 4.1, any cut of size at most $\lambda'$ survives lines 1 to 4 of the Contraction Algorithm with the probability guaranteed by the lemma. The final enumeration step is then tuned to the parameter $j$, and $X'$ is output with probability at least the product of these success probabilities. Repeating the algorithm proportionally many times, we can output a list of cuts that contains $X'$ with high probability.
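For intuition, one run of the random contraction phase can be sketched as follows (our simplified version; the actual algorithm of [GHLL20] chooses the stopping point and repetition count carefully):

```python
import random

def contract_once(n, edges, target):
    """One run of random contraction down to `target` super-vertices;
    returns the induced partition of the original vertex set."""
    parent = list(range(n))  # union-find over super-vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    alive = n
    while alive > target:
        u, v = random.choice(edges)  # edges inside a super-vertex are skipped
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            alive -= 1
    groups = {}
    for w in range(n):
        groups.setdefault(find(w), set()).add(w)
    return list(groups.values())
```

A fixed small cut survives a single run with inverse-polynomial probability, so repeating the run sufficiently many times recovers it with high probability.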
5 Finding the Islands
In this section, we prove the following lemma. See Lemma 2.4.
We present an algorithm for the Island Problem which is a variant of Nešetřil and Poljak's clique algorithm [NP85]. Given an input graph $G$, we want to find the optimal $s$ vertices to cut off from $G$. Note that this is similar to finding a minimum-weight clique in $G$, except that we need to take into account the edges from the islands to the remaining giant component in $G$. We first consider the case where $s$ is divisible by 3.
Claim 5.1.
Algorithm 3 returns an optimal cut with $s$ islands with probability at least $(m+1)^{-9}$.
Proof.
We first note that, given the nine parameters (the number of edges within each of the three groups, between each pair of groups, and from each group to the rest of the graph), the weight of the returned cut is exactly the sum of these nine quantities. In other words, the nine parameters precisely specify the weight of the returned cut. Therefore, if we guess the parameters correctly, our algorithm will return $s$ vertices that give the minimum cut with $s$ islands. Each parameter has at most $m+1$ possible values, so there are at most $(m+1)^9$ possible combinations of values for the nine parameters, which means we guess correctly with probability at least $(m+1)^{-9}$. The rest of the algorithm is a standard triangle detection algorithm using matrix multiplication, which has runtime $n^{(\omega/3)s + O(1)}$. ∎
If $s$ is not divisible by 3, we can add up to two isolated vertices to the graph and reduce to the case where $s$ is divisible by 3. This increases the runtime by at most a polynomial factor. Now note that our algorithm above can easily be made deterministic by going over all possible combinations of the nine parameters instead of guessing them. This proves Lemma 2.4.
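The group-splitting structure can be sketched as follows (our illustrative version; it enumerates the three groups directly, whereas the actual algorithm encodes consistent pairs of groups as a graph and uses fast matrix multiplication to find the "triangle" in $n^{(\omega/3)s}$ time):

```python
from itertools import combinations

def islands_by_triangle(n, edges, s):
    """Sketch of the Nešetřil-Poljak-style search (s divisible by 3):
    a candidate island set is a 'triangle' of three disjoint s/3-subsets.
    Here we enumerate the triples directly for clarity."""
    assert s % 3 == 0
    g = s // 3
    subsets = [set(c) for c in combinations(range(n), g)]
    best = None
    for A in subsets:
        for B in subsets:
            if A & B:
                continue
            for C in subsets:
                if C & (A | B):
                    continue
                islands = A | B | C
                # cut value: every edge with an endpoint among the islands
                val = sum(1 for u, v in edges if u in islands or v in islands)
                if best is None or val < best:
                    best = val
    return best
```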
Acknowledgements
The authors would like to thank Anupam Gupta for many constructive discussions and comments.
References
 [AW21] Josh Alman and Virginia Vassilevska Williams. A refined laser method and faster matrix multiplication. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 522–539. SIAM, 2021.
 [GH94] Olivier Goldschmidt and Dorit S. Hochbaum. A polynomial algorithm for the k-cut problem for fixed k. Math. Oper. Res., 19(1):24–37, 1994.
 [GHLL20] Anupam Gupta, David G. Harris, Euiwoong Lee, and Jason Li. Optimal bounds for the k-cut problem. arXiv preprint arXiv:2005.08301, 2020.
 [GLL18] Anupam Gupta, Euiwoong Lee, and Jason Li. Faster exact and approximate algorithms for k-cut. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 113–123. IEEE, 2018.
 [GLL19] Anupam Gupta, Euiwoong Lee, and Jason Li. The number of minimum k-cuts: Improving the Karger-Stein bound. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 229–240, 2019.
 [GLL21] Anupam Gupta, Euiwoong Lee, and Jason Li. The connectivity threshold for dense graphs. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 89–105. SIAM, 2021.
 [KS96] David R. Karger and Clifford Stein. A new approach to the minimum cut problem. Journal of the ACM (JACM), 43(4):601–640, 1996.
 [KT18] Ken-ichi Kawarabayashi and Mikkel Thorup. Deterministic edge connectivity in near-linear time. Journal of the ACM (JACM), 66(1):1–50, 2018.
 [Li19] Jason Li. Faster minimum kcut of a simple graph. In 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), pages 1056–1077. IEEE, 2019.
 [LSS20] Daniel Lokshtanov, Saket Saurabh, and Vaishali Surianarayanan. A parameterized approximation scheme for min k-cut. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 798–809. IEEE, 2020.
 [NI92] Hiroshi Nagamochi and Toshihide Ibaraki. Computing edgeconnectivity in multigraphs and capacitated graphs. SIAM J. Discrete Math., 5(1):54–66, 1992.
 [NP85] Jaroslav Nešetřil and Svatopluk Poljak. On the complexity of the subgraph problem. Commentationes Mathematicae Universitatis Carolinae, 26(2):415–419, 1985.
 [SV95] Huzur Saran and Vijay V. Vazirani. Finding k cuts within twice the optimal. SIAM Journal on Computing, 24(1):101–108, 1995.
 [Tho08] Mikkel Thorup. Minimum k-way cuts via deterministic greedy tree packing. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing (STOC), pages 159–166. ACM, 2008.