In his seminal work in 1996, Karger [8] showed how to find the minimum cut of an edge-weighted undirected graph with $n$ vertices and $m$ edges in $O(m\log^3 n)$ time. The first step of his algorithm is a procedure that, given an undirected edge-weighted graph $G$, produces in near-linear time a collection of $O(\log n)$ spanning trees of $G$ such that w.h.p. the minimum cut 1- or 2-respects some tree in the collection. That is, one of the trees is such that at most two of its edges cross the minimum cut (these edges are said to determine the cut). The minimum cut is then found by examining each tree $T$ of the collection and finding the minimum cut that 1- or 2-respects $T$. Since the minimum cut that 1-respects $T$ can be easily found in linear time [8, Lemma 5.1], the main challenge is to find the minimum cut that 2-respects $T$.
Karger showed that the minimum cut that 2-respects a given tree $T$ can be found in $O(m\log^2 n)$ time. This was very recently improved in two independent works: in [5, 6] we obtained an $O(m\log n)$ deterministic algorithm (we also showed in [5] that the first step of producing the collection of spanning trees can be performed, using a randomized algorithm, in $O(m\log n)$ time, leading to an $O(m\log^2 n)$ time randomized algorithm for min cut), and in [10] Mukhopadhyay and Nanongkai obtained an alternative randomized algorithm. These two results use different techniques. Even though the bound of [5, 6] dominates that of [10] for the entire range of graph densities, the approach of [10] has two notable advantages: (1) it can be extended to other models, namely to find the minimum cut using cut queries, or using small space and few passes in a streaming algorithm, and (2) the approach of [10] can be seen as a reduction from min cut to geometric two-dimensional orthogonal range counting/sampling/reporting data structures. Therefore, special cases or future improvements of such data structures will imply improvements to min cut. For example, for the special case of unweighted undirected graphs, [10] use an improved data structure for orthogonal range counting [3] and range rank/select [2] (that are then used to design an improved data structure for orthogonal range reporting and, finally, orthogonal range sampling). This yields a faster randomized algorithm for finding a 2-respecting min cut in unweighted graphs, and hence a faster algorithm for min cut in such graphs.
In this paper we show how to simplify the algorithm of Mukhopadhyay and Nanongkai, improve its running time to $O(m\log n)$, and turn it deterministic. By Karger’s reasoning, this then implies a randomized min cut algorithm working in $O(m\log^2 n)$ time. In fact, one can also apply his speedup that exploits the fact that 1-respecting cuts can be found faster than 2-respecting cuts; as explained in [5, Section 4], by appropriately tweaking the parameters we can obtain an even faster randomized min cut algorithm. Interestingly, with our improvement, the reduction to the geometric data structure is now a clean black-box reduction to just orthogonal range counting (no sampling/reporting is required). This allows us to obtain the following new results: (1) a faster randomized algorithm for min cut in weighted graphs with $m=\Omega(n^{1+\epsilon})$ for any fixed $\epsilon>0$ (this dominates all previous results for such dense graphs), and (2) a faster randomized algorithm for min cut in unweighted graphs.
In Section 2 we describe the algorithm of Mukhopadhyay and Nanongkai [10]. Our description uses slightly different terminology than [10], but all the ideas described in Section 2 are taken from [10]. In Section 3 we describe our simplification of [10], and in Section 4 we show how to use it to achieve faster algorithms for unweighted graphs and for dense graphs.
2 The Algorithm of Mukhopadhyay and Nanongkai 
In this section we describe the algorithm of Mukhopadhyay and Nanongkai [10] for finding the pair of tree edges $\{e,f\}$ determining the minimum cut (observe that the cut determined by $\{e,f\}$ is unique and consists of all edges $(u,v)$ of $G$ such that the $u$-to-$v$ path in $T$ contains exactly one of $e,f$).
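To make this definition concrete, the following brute-force sketch (with illustrative helper names, not from the paper) computes the cut determined by a pair of tree edges directly from the definition above:

```python
def path_edges(parent, u, v):
    """Set of tree edges (as frozensets) on the u-to-v path in a rooted tree."""
    anc = set()
    x = u
    while x is not None:          # all ancestors of u, including u itself
        anc.add(x)
        x = parent[x]
    edges, x = set(), v
    while x not in anc:           # climb from v up to the lowest common ancestor
        edges.add(frozenset((x, parent[x])))
        x = parent[x]
    lca, x = x, u
    while x != lca:               # climb from u up to the lowest common ancestor
        edges.add(frozenset((x, parent[x])))
        x = parent[x]
    return edges

def two_respecting_cut(parent, graph_edges, e, f):
    """All graph edges whose tree path contains exactly one of the tree edges e, f."""
    e, f = frozenset(e), frozenset(f)
    return [uv for uv in graph_edges
            if len({e, f} & path_edges(parent, *uv)) == 1]
```

For example, on the tree with edges $(1,2),(1,3),(2,4),(2,5)$, the cut determined by $(2,4)$ and $(2,5)$ consists of exactly the graph edges with one endpoint in $\{4,5\}$.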
The algorithm begins by partitioning the tree $T$ into a set of edge-disjoint paths (called heavy paths [11]) such that any root-to-leaf path in $T$ intersects at most $O(\log n)$ heavy paths.
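A standard way to produce such a decomposition (in the style of Sleator and Tarjan [11]) is to let every node continue its path through the child with the largest subtree. A minimal sketch, with illustrative names:

```python
def heavy_paths(children, root):
    """Decompose a rooted tree into heavy paths: each node extends its path
    through the child with the largest subtree, so every root-to-leaf path
    intersects O(log n) heavy paths."""
    # iterative DFS order, then subtree sizes bottom-up
    size, order, stack = {}, [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children.get(v, []))
    for v in reversed(order):
        size[v] = 1 + sum(size[c] for c in children.get(v, []))
    paths, stack = [], [root]
    while stack:
        path, v = [], stack.pop()
        while True:               # extend along heavy children
            path.append(v)
            kids = children.get(v, [])
            if not kids:
                break
            heavy = max(kids, key=lambda c: size[c])
            stack.extend(c for c in kids if c != heavy)  # light children start new paths
            v = heavy
        paths.append(path)
    return paths
```

On the tree $1\to\{2,3\}$, $2\to\{4,5\}$, $4\to\{6\}$ this yields the heavy path $1,2,4,6$ plus the single-node paths $3$ and $5$.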
2.1 Two edges in the same heavy path
Consider first the case where the minimum cut is determined by two edges of the same heavy path $P$. Finding these two edges then boils down to finding the smallest element in the $\ell\times\ell$ matrix $M$, where $\ell$ is the length of $P$ and $M[i,j]$ is the weight of the cut determined by the $i$’th and the $j$’th edges of $P$. An important contribution of Mukhopadhyay and Nanongkai is the observation that the matrix $M$ is a partial Monge matrix. That is, for any $i<i'$ and $j<j'$ for which the entries are defined, it holds that $M[i,j]+M[i',j'] \ge M[i,j']+M[i',j]$ (Mukhopadhyay and Nanongkai reversed the order of rows, so in their presentation the condition was $M[i,j]+M[i',j'] \le M[i,j']+M[i',j]$). They then describe an algorithm that finds the smallest element in $M$ by inspecting only $O(\ell\log\ell)$ entries of $M$. Instead, one could use the faster algorithm by Klawe and Kleitman [9] that requires only $O(\ell\,\alpha(\ell))$ inspections (where $\alpha$ is the inverse-Ackermann function). Lemma 1 below shows that each inspection can be done in $O(\log n)$ time. Thus, in $O(\ell\,\alpha(\ell)\log n)$ time one can find the minimum cut determined by two edges of $P$. Since the heavy paths are edge-disjoint, doing this for all heavy paths takes $O(n\,\alpha(n)\log n)$ time overall.
2.2 Two edges in different heavy paths
Now consider the case where the minimum cut is determined by two edges belonging to different heavy paths. Another significant insight of Mukhopadhyay and Nanongkai is that there is no need to check every pair of paths but only a small subset of interesting path pairs, as explained next. Let $\mathrm{cut}(e,f)$ denote the weight of the cut determined by edges $e$ and $f$. Let $S_e$ denote the subtree of $T$ rooted at the lower (i.e., further from the root) endpoint of $e$. If $S_e \subseteq S_f$ then we say that $e$ is a descendant of $f$. If $e$ is not a descendant of $f$ and $f$ is not a descendant of $e$ then we say that $e$ and $f$ are independent.
An edge $e$ is said to be cross-interested in an edge $f$ if $C(S_e, S_f) > C(S_e)/2$, where $C(S_e,S_f)$ is the total weight of edges between $S_e$ and $S_f$ and $C(S_e)=C(S_e, V\setminus S_e)$ is the total weight of edges between $S_e$ and the rest of the graph. That is, $e$ is cross-interested in $f$ if more than half the edge weight going out of $S_e$ goes into $S_f$. Observe that if the minimum cut is determined by independent edges $e$ and $f$ then $e$ must be cross-interested in $f$ (and vice versa), because otherwise $\mathrm{cut}(e,f) = C(S_e)+C(S_f)-2C(S_e,S_f) \ge C(S_f)$ (i.e. the cut determined by the single edge $f$ has no larger weight, a contradiction). This means that there is no need to check every pair of independent edges, only ones that are cross-interested in each other. It is easy to see that for any tree-edge $e$, all the edges that $e$ is cross-interested in form a single path in $T$ going down from the root to some node.
An edge $e$ is said to be down-interested in a descendant edge $f$ if $C(S_f, V\setminus S_e) > C(S_e)/2$, where $C(S_f, V\setminus S_e)$ is the total weight of edges between $S_f$ and $V\setminus S_e$. That is, $e$ is down-interested in $f$ if more than half the edge weight going out of $S_e$ originates in $S_f$. Observe that if the minimum cut is determined by edges $e$ and $f$ where $e$ is a descendant of $f$, then $f$ must be down-interested in $e$, because otherwise $\mathrm{cut}(e,f) = C(S_e)+C(S_f)-2C(S_e, V\setminus S_f) \ge C(S_e)$ (again, the cut determined by a single edge has no larger weight, a contradiction). For convenience, we define that $e$ is down-interested in all of its ancestor edges. This means that we only need to check pairs of descendant edges that are down-interested in each other. Furthermore, for any tree-edge $e$, all the edges that $e$ is down-interested in form a single path in $T$ going down from the root to some node.
A third important realization of Mukhopadhyay and Nanongkai is that a geometric range searching data structure of Chazelle [4] can be used to efficiently determine whether an edge $e$ is interested in an edge $f$. This is described in the following lemma.
Lemma 1. Given a graph $G$ and a spanning tree $T$, we can construct in $O(m\log n)$ time a data structure that, given any two tree edges $e,f$, can report in $O(\log n)$ time (1) the value $\mathrm{cut}(e,f)$, (2) whether $e$ is cross-interested in $f$, and (3) whether $e$ is down-interested in $f$.
Proof. In $O(n)$ time we construct a data structure that can answer lowest common ancestor queries on $T$ in constant time [7]. For every node $v$, let $\mathit{post}(v)$ denote the visiting time of $v$ in a postorder traversal of $T$ and let $\mathit{low}(v)$ denote the minimum visiting time of a node in the subtree of $T$ rooted at $v$. Let $w(v)$ be the total weight of edges with exactly one endpoint in the subtree of $T$ rooted at $v$. As also done by Karger [8], in a bottom-up fashion (in linear time) we compute $\mathit{post}(v)$, $\mathit{low}(v)$, and $w(v)$ for every $v$. We map each edge $(x,y)$ of $G$ with $\mathit{post}(x)<\mathit{post}(y)$ to the point $(\mathit{post}(x),\mathit{post}(y))$ in the two-dimensional plane. On this set of points we construct Chazelle’s 2D orthogonal range searching data structure [4]. This data structure is constructed in $O(m\log n)$ time and can report in $O(\log n)$ time the total weight of all points in any given axis-aligned rectangle.
Consider any two tree edges $e$ and $f$. Let $u$ and $v$ be the lower endpoints of $e$ and $f$, respectively. Note that $S_e$ consists of exactly the nodes $x$ with $\mathit{post}(x)\in[\mathit{low}(u),\mathit{post}(u)]$, and similarly for $S_f$. Consider first the case that $e$ and $f$ are independent, and assume w.l.o.g. that $\mathit{post}(u)<\mathit{low}(v)$. Deciding whether $e$ is cross-interested in $f$ reduces to computing $C(S_e,S_f)$, which is obtained by a range query to the rectangle $[\mathit{low}(u),\mathit{post}(u)]\times[\mathit{low}(v),\mathit{post}(v)]$. The value $\mathrm{cut}(e,f)$ is computed as $w(u)+w(v)-2C(S_e,S_f)$.
Now consider the case that $f$ is a descendant of $e$. Then deciding whether $e$ is down-interested in $f$ reduces to computing $C(S_f, V\setminus S_e)$, which is obtained as the sum of the answers to the rectangles $[0,\mathit{low}(u)-1]\times[\mathit{low}(v),\mathit{post}(v)]$ and $[\mathit{low}(v),\mathit{post}(v)]\times[\mathit{post}(u)+1,n-1]$. The value $\mathrm{cut}(e,f)$ is computed as $w(u)+w(v)-2C(S_f, V\setminus S_e)$.
Finally, if $e$ is a descendant of $f$, then we always report that $e$ is down-interested in $f$. The value $\mathrm{cut}(e,f)$ is computed (symmetrically to the above) as $w(u)+w(v)-2C(S_e, V\setminus S_f)$. ∎
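The reduction in the proof above can be sketched as follows. This is a toy version with illustrative names: a naive scan stands in for Chazelle’s structure, so the queries are correct but take linear rather than logarithmic time.

```python
def postorder_times(children, root):
    """post[v]: postorder visiting time; low[v]: minimum time in v's subtree,
    so the subtree of v occupies exactly the interval [low[v], post[v]]."""
    post, low, clock = {}, {}, 0
    stack = [(root, False)]
    while stack:
        v, done = stack.pop()
        if done:
            post[v] = clock
            clock += 1
            low[v] = min([post[v]] + [low[c] for c in children.get(v, [])])
        else:
            stack.append((v, True))
            stack.extend((c, False) for c in children.get(v, []))
    return post, low

def edge_points(graph_edges, post):
    """Map each graph edge (x, y, w) to the point (post[x], post[y]) with
    post[x] < post[y].  One point per vertex pair (parallel edges would have
    to be accumulated)."""
    return {(min(post[x], post[y]), max(post[x], post[y])): w
            for x, y, w in graph_edges}

def rect_sum(points, x1, x2, y1, y2):
    """Total weight inside [x1,x2] x [y1,y2] -- a naive O(m) stand-in for
    Chazelle's structure, which answers the same query in O(log m) time."""
    return sum(w for (x, y), w in points.items()
               if x1 <= x <= x2 and y1 <= y <= y2)
```

For two independent subtrees whose postorder intervals are $[\mathit{low}(u),\mathit{post}(u)]$ and $[\mathit{low}(v),\mathit{post}(v)]$ (the former preceding the latter), a single `rect_sum` call returns the cross weight $C(S_e,S_f)$, exactly as in the independent case of the proof.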
Interesting path pairs.
Recall that the goal is to find the two tree-edges that determine the minimum cut, and that in the present case these edges belong to different heavy paths. A tree-edge $e$ is said to be interested in a heavy path $P$ if it is cross-interested or down-interested in some edge of $P$. Notice that, by the above, any tree-edge is interested in only $O(\log n)$ heavy paths. Define a pair of heavy paths $(P,P')$ to be an interesting pair if $P$ has an edge interested in $P'$ and $P'$ has an edge interested in $P$.
Notice that the number of interesting pairs is only $O(n\log n)$. However, Mukhopadhyay and Nanongkai do not identify all the interesting pairs. Instead, they apply a complicated random sampling scheme in order to find the best pair with high probability. This sampling makes their algorithm randomized and dominates its running time. In Section 3.1 we show how to replace the random sampling step with a much simpler deterministic algorithm that finds all interesting pairs of paths. Our algorithm is also faster, taking $O(n\log^2 n)$ time. Then, for each interesting pair $(P,P')$, we (conceptually) contract all tree-edges except those in $P$ that are interested in $P'$ and those in $P'$ that are interested in $P$, and run the solution from Section 2.1 on the resulting paths. This last step is very similar to the corresponding step in [10]. We explain it in detail in Section 3.2.
3 The Simplification
For every edge $e$, let $P_c(e)$ (resp. $P_d(e)$) denote the path in $T$ consisting of all the edges that $e$ is cross-interested (resp. down-interested) in. The path $P_c(e)$ (resp. $P_d(e)$) starts at the root and terminates at some node denoted $t_c(e)$ (resp. $t_d(e)$). For every $e$, we compute $t_c(e)$ and $t_d(e)$ in $O(\log^2 n)$ time. In contrast to [10], we do this deterministically by using a centroid decomposition.
3.1 Finding interesting path pairs
A node $c$ is a centroid of $T$ if every connected component of $T\setminus c$ consists of at most $n/2$ nodes. The centroid decomposition of $T$ is defined recursively by first choosing a centroid $c$ and then recursing on every connected component of $T\setminus c$. We assume $T$ is a binary tree (we can replace a node of degree $d$ with a binary tree of size $O(d)$ whose internal edges have infinite weight and whose edges incident to leaves have their original weight). We also assume we have a centroid decomposition of $T$ (a centroid of a tree can be computed in linear time, so the entire decomposition is computed in $O(n\log n)$ time). To compute $t_c(e)$, consider the (at most) three tree edges incident to the centroid node $c$. Using Lemma 1, we check in $O(\log n)$ time whether $e$ is cross-interested in each of them. From this we can deduce in which connected component of $T\setminus c$ the node $t_c(e)$ lies, and we continue recursively there. Since the recursion depth is $O(\log n)$, we find $t_c(e)$ after $O(\log^2 n)$ time, so overall we spend $O(n\log^2 n)$ time. We compute $t_d(e)$ similarly (querying Lemma 1 for down-interested rather than cross-interested edges).
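The descent above presupposes a centroid decomposition. A minimal sketch of its construction (illustrative names; plain recursion rather than the linear-time centroid-finding described in the text):

```python
def centroid_decomposition(adj):
    """Recursively pick a centroid (every component of T - c has <= |T|/2
    nodes) and recurse on the components.  Returns the top centroid and
    parent pointers in the centroid tree."""
    removed, cparent = set(), {}

    def subtree_sizes(root):
        size, order, par = {}, [], {root: None}
        stack = [root]
        while stack:
            v = stack.pop()
            order.append(v)
            for u in adj[v]:
                if u != par[v] and u not in removed:
                    par[u] = v
                    stack.append(u)
        for v in reversed(order):
            size[v] = 1 + sum(size[u] for u in adj[v]
                              if u != par[v] and u not in removed)
        return size, par, len(order)

    def find_centroid(root):
        size, par, total = subtree_sizes(root)
        v = root
        while True:   # walk towards the unique too-heavy component, if any
            heavy = [u for u in adj[v]
                     if u != par[v] and u not in removed and size[u] > total // 2]
            if not heavy:
                return v
            v = heavy[0]

    def decompose(root, cpar):
        c = find_centroid(root)
        cparent[c] = cpar
        removed.add(c)
        for u in adj[c]:
            if u not in removed:
                decompose(u, c)
        return c

    top = decompose(next(iter(adj)), None)
    return top, cparent
```

On the path $1-2-3-4-5$ the top centroid is $3$, and each half decomposes recursively; the recursion depth of the decomposition is $O(\log n)$.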
3.2 Checking interesting path pairs
For each interesting pair of heavy paths $(P,P')$, we will store a list of the edges of $P$ that are interested in $P'$ and vice versa. Recall that, since each edge is interested in $O(\log n)$ heavy paths, the number of interesting pairs of heavy paths is only $O(n\log n)$. Moreover, the total length of all the lists is also $O(n\log n)$.
We first show how to compute a list of all interesting pairs of heavy paths. By going over all the edges of $T$, we prepare for each heavy path $P$ a list of all the heavy paths $P'$ s.t. an edge of $P$ is interested in an edge of $P'$. The total size of all these lists is $O(n\log n)$ and they can be computed in $O(n\log^2 n)$ time using Lemma 1. We then sort these lists (according to some canonical order on the heavy paths). Then, for every $P$ we go over all heavy paths $P'$ that $P$ is interested in. For each such $P'$ we determine in $O(\log n)$ time whether $P'$ is also interested in $P$ using binary search on the list of $P'$. Thus we construct the lists of all interesting pairs in $O(n\log^2 n)$ total time.
For each interesting pair of paths $(P,P')$, we construct a list of the edges of $P$ that are interested in $P'$ and vice versa as follows. We go over the edges $e$ of $T$. Let $P$ be the heavy path containing $e$. For each heavy path $P'$ that intersects $P_c(e)$ or $P_d(e)$, if $(P,P')$ is an interesting pair, we add $e$ to the list of the pair $(P,P')$. This takes $O(\log^2 n)$ time per edge, since there are $O(\log n)$ such paths $P'$ and each is checked with a binary search, so the total time to construct all these lists is $O(n\log^2 n)$. Finally, we sort the edges on each list, in $O(n\log^2 n)$ total time.
For each interesting pair of paths $(P,P')$, let $E_P$ (resp. $E_{P'}$) denote the set of edges of $P$ (resp. $P'$) that are interested in $P'$ (resp. $P$). We find the minimum cut determined by pairs of edges $(e,f)$ with $e\in E_P$ and $f\in E_{P'}$ in a single batch as follows. We assume that either $E_{P'}$ is a descendant of $E_P$ (i.e. all edges in $E_{P'}$ are descendants of all edges in $E_P$) or that $E_{P'}$ is independent of $E_P$ (i.e. no edge in $E_{P'}$ is a descendant of an edge in $E_P$, and vice versa). Otherwise, if $E_{P'}$ is a descendant of one part of $E_P$ and independent of another part, then we just split $E_P$ into two parts and handle each separately. We think of $E_P$ as being ordered root-wards. If $E_{P'}$ is a descendant of $E_P$ then we order $E_{P'}$ root-wards, and if $E_{P'}$ is independent of $E_P$ then we order it leaf-wards. Let $M$ be the $|E_P|\times|E_{P'}|$ matrix where $M[i,j]$ is the weight of the cut determined by the $i$’th edge of $E_P$ and the $j$’th edge of $E_{P'}$. We observe that the matrix $M$ is a Monge matrix (rather than partial Monge). That is, the Monge condition $M[i,j]+M[i',j'] \ge M[i,j']+M[i',j]$ holds for all $i<i'$ and $j<j'$ (and not only on a partial domain as in Section 2.1). This means that instead of the Klawe–Kleitman algorithm [9] we can use the SMAWK algorithm [1], which finds the smallest entry in $M$ (after reversing the order of the rows) by inspecting only a linear number of entries of $M$ (i.e. without the additional inverse-Ackermann factor). Using Lemma 1 for each inspection, this takes $O((|E_P|+|E_{P'}|)\log n)$ time. Since the sum of $|E_P|+|E_{P'}|$ over all interesting pairs of paths is $O(n\log n)$, the overall time is $O(n\log^2 n)$.
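For reference, SMAWK computes the row minima of a totally monotone matrix, such as one satisfying the $\le$ Monge condition (which is what the $\ge$ condition above becomes after the row reversal). A compact sketch, with the matrix given implicitly as a lookup function:

```python
def smawk(rows, cols, lookup):
    """Row minima of an implicitly given totally monotone matrix, e.g. one
    satisfying the Monge condition M[i,j] + M[i',j'] <= M[i,j'] + M[i',j].
    Returns {row: column of its minimum} using O(|rows| + |cols|) lookups."""
    result = {}

    def solve(rows, cols):
        if not rows:
            return
        # REDUCE: discard columns that cannot contain any row minimum,
        # keeping at most len(rows) candidates
        stack = []
        for c in cols:
            while stack:
                r = rows[len(stack) - 1]
                if lookup(r, c) >= lookup(r, stack[-1]):
                    break
                stack.pop()
            if len(stack) < len(rows):
                stack.append(c)
        # recurse on every other row
        solve(rows[1::2], stack)
        # INTERPOLATE: each remaining row's minimum lies between the minima
        # of its neighbouring (already solved) rows
        j = 0
        for i in range(0, len(rows), 2):
            row = rows[i]
            stop = result[rows[i + 1]] if i + 1 < len(rows) else stack[-1]
            best = stack[j]
            while stack[j] != stop:
                j += 1
                if lookup(row, stack[j]) < lookup(row, best):
                    best = stack[j]
            result[row] = best

    solve(list(rows), list(cols))
    return result
```

For instance, $M[i,j]=(i-j)^2$ is Monge (a convex function of $i-j$), and its row minima lie on the diagonal. The global minimum is then the smallest of the row minima.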
The proof that $M$ is Monge appears in [10, Claim 3.5]. We give one here for completeness.
Claim 2. $M[i,j]+M[i',j'] \ge M[i,j']+M[i',j]$ for any $i<i'$ and $j<j'$.
Proof. Recall that the order of edges in $E_P$ is root-wards and that the order of edges in $E_{P'}$ is root-wards if $E_{P'}$ is a descendant of $E_P$ and leaf-wards if $E_{P'}$ is independent of $E_P$. With this order in mind, let $e$ and $e'$ denote the $i$’th and $i'$’th edges of $E_P$ and let $f$ and $f'$ denote the $j$’th and $j'$’th edges of $E_{P'}$.
Let $X_1,\ldots,X_5$ denote the five connected components obtained from $T$ after removing these four edges, where $X_2$ is the component lying between $e$ and $e'$ and $X_4$ is the component lying between $f$ and $f'$. See Figure 1.
Let $W$ denote the total weight of all edges of $G$ between $X_2$ and $X_4$. Notice that $M[i,j]+M[i',j'] = M[i,j']+M[i',j] + 2W$, and since $W\ge 0$ we get that $M[i,j]+M[i',j'] \ge M[i,j']+M[i',j]$. ∎
4 Unweighted Graphs and Dense Graphs
The main advantage of the approach of Mukhopadhyay and Nanongkai [10] is that for restricted graph families one can plug in range counting/reporting structures with faster construction time.
4.1 Unweighted graphs
For unweighted graphs (with parallel edges), [10] used a two-dimensional orthogonal range counting structure with faster preprocessing [3] and a data structure of [2] to devise a two-dimensional orthogonal range sampling/reporting data structure with faster preprocessing. They plugged these improved data structures into their algorithm for 2-respecting min cut (multiply the resulting bound by another $O(\log n)$ factor for the running time of the resulting min cut algorithm). We show that an analogous speedup can be applied to our simplification. In fact, we only need the following range counting structure:
Lemma 3 ([3]). Given $n$ points in the 2D plane, we can construct in $O(n\sqrt{\log n})$ time a range counting structure with $O(\log n/\log\log n)$ query time.
We use Lemma 3 instead of Chazelle’s structure in the proof of Lemma 1. This decreases the overall running time of the 2-respecting algorithm: the preprocessing now takes $O(m\sqrt{\log n})$ time and each of the $O(n\log n)$ inspections takes $O(\log n/\log\log n)$ time. If $m = O(n^2)$, we use this algorithm directly. Otherwise, we replace the unweighted graph by a new weighted graph with only $O(n^2)$ edges (by collapsing parallel unweighted edges into a single weighted edge) and run the previous algorithm on it. This gives us a faster deterministic algorithm for 2-respecting min cut, and hence a faster randomized algorithm for min cut, for unweighted undirected graphs.
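The collapsing step is straightforward; a minimal sketch:

```python
from collections import defaultdict

def collapse_parallel_edges(edges):
    """Replace parallel unweighted edges by one weighted edge per vertex
    pair (its weight is the multiplicity), leaving at most n(n-1)/2 edges."""
    weight = defaultdict(int)
    for u, v in edges:
        weight[(min(u, v), max(u, v))] += 1
    return sorted((u, v, w) for (u, v), w in weight.items())
```

Any cut in the collapsed weighted graph has exactly the same weight as the corresponding cut in the original multigraph.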
4.2 Dense weighted graphs
We now present another speedup that can be applied to dense weighted graphs with $m=\Omega(n^{1+\epsilon})$, for any fixed $\epsilon>0$. For such graphs, we obtain a faster algorithm for min cut. We need the following structure:
Lemma 4. For any fixed $\epsilon>0$, given $m$ weighted points in the $[n]\times[n]$ grid, we can construct in $O(m+n)$ time a data structure that reports the total weight of all points in any given axis-aligned rectangle in $O(n^{\epsilon})$ time.
Proof. It is enough to construct a structure capable of reporting the total weight of all points in a quadrant $[x,n]\times[y,n]$ (the answer for a general rectangle then follows by inclusion–exclusion). We use the standard approach of decomposing a 2D query into a number of 1D queries.
We start by designing a 1D structure storing a set of weighted numbers (weighted points in 1D) from $[n]$ that can be constructed in $O(\log n/\log b)$ time per stored number and returns the total weight of all numbers in $[y,n]$ in $O(b\log n/\log b)$ time, where $b$ is a degree parameter fixed later. Consider a complete tree of degree $b$ over the set of leaves $[n]$. Note that the depth of this tree is $O(\log n/\log b)$. We construct and store the subtree induced by the leaves that belong to the stored set. This takes $O(\log n/\log b)$ time and space per number and, assuming that the input is given sorted, we can assume that the children of each node are sorted. Each node stores the total weight of all numbers corresponding to its leaves. Then, to find the total weight of numbers in $[y,n]$, we traverse the tree starting from the root. Let $v$ be the current node. We scan the (at most $b$) children of $v$ from right to left and, as long as all leaves in their subtrees correspond to numbers from $[y,n]$, we add their stored total weight to the answer. Then, if the interval of the leaves corresponding to the next child intersects $[y,n]$ (but is not entirely contained in it), we recurse on that child. Overall, there are at most $O(\log n/\log b)$ steps, each taking $O(b)$ time, so the query takes $O(b\log n/\log b)$ time overall.
Our 2D structure for a set of weighted points from $[n]\times[n]$ uses the same idea. We consider a complete tree of degree $b$ on the $x$-coordinates. We construct and store the subtree induced by the $x$-coordinates of the points. At each node $v$ of this tree we store a 1D structure responsible for all the points whose $x$-coordinate corresponds to a leaf in the subtree of $v$. Overall, each point is stored at $O(\log n/\log b)$ nodes. By first sorting all the points in $O(m+n)$ time with radix sort we can assume that the points received by every 1D structure are sorted, and construct all the 1D structures in $O(m\log n/\log b)$ total time.
Then, a query is decomposed into queries to the 1D structures by proceeding as above: we descend from the root, scanning the children of the current node from right to left, issuing a 1D query to every child corresponding to an interval completely contained in $[x,n]$, and then possibly recursing on the next child if its interval intersects $[x,n]$. Each 1D query takes $O(b\log n/\log b)$ time, and there are $O(b\log n/\log b)$ queries, so overall the query time is $O((b\log n/\log b)^2)$. By adjusting $b$ (say, $b=n^{\epsilon/2}$) we obtain the lemma. ∎
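A toy version of the 1D structure from the proof, with dense level-by-level arrays standing in for the sparse induced subtree (purely for illustration; `RangeSum1D` is an illustrative name):

```python
class RangeSum1D:
    """Complete b-ary tree over the leaves [0, n): every node stores the
    total weight of the numbers below it, so a suffix query [y, n) adds up
    O(b) sibling totals on each of the O(log n / log b) levels."""

    def __init__(self, n, b, weighted_numbers):
        self.b = b
        depth = 1
        while b ** depth < n:       # integer-safe depth computation
            depth += 1
        leaves = [0] * (b ** depth)
        for x, w in weighted_numbers:
            leaves[x] += w
        # dense levels, leaf level first (stand-in for the sparse subtree)
        self.levels = [leaves]
        while len(self.levels[-1]) > 1:
            cur = self.levels[-1]
            self.levels.append([sum(cur[i:i + b])
                                for i in range(0, len(cur), b)])

    def suffix(self, y):
        """Total weight of the stored numbers in [y, n)."""
        total, idx = self.levels[0][y], y
        for level in self.levels[:-1]:
            block_end = idx - idx % self.b + self.b
            total += sum(level[idx + 1: block_end])  # full siblings right of the path
            idx //= self.b
        return total
```

The 2D structure of the proof then hangs one such 1D structure at every node of an analogous $b$-ary tree over the $x$-coordinates.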
By replacing Chazelle’s structure with Lemma 4, we obtain an algorithm for 2-respecting min cut whose running time is dominated by the $O(n\log n)$ inspections, each taking $O(n^{\epsilon})$ time. Because $m=\Omega(n^{1+\epsilon})$, by adjusting $\epsilon$ this implies an $O(m\log n)$-time algorithm for min cut for any constant $\epsilon>0$.
-  Alok Aggarwal, Maria M. Klawe, Shlomo Moran, Peter W. Shor, and Robert E. Wilber. Geometric applications of a matrix-searching algorithm. Algorithmica, 2(1):195–208, 1987.
-  Maxim A. Babenko, Pawel Gawrychowski, Tomasz Kociumaka, and Tatiana Starikovskaya. Wavelet trees meet suffix trees. In 26th SODA, pages 572–591, 2015.
-  Timothy M. Chan and Mihai Patrascu. Counting inversions, offline orthogonal range counting, and related problems. In 21st SODA, pages 161–173, 2010.
-  Bernard Chazelle. A functional approach to data structures and its use in multidimensional searching. SIAM J. Comput., 17(3):427–462, 1988.
-  Pawel Gawrychowski, Shay Mozes, and Oren Weimann. Minimum cut in $O(m\log^2 n)$ time. CoRR, abs/1911.01145, 2019.
-  Pawel Gawrychowski, Shay Mozes, and Oren Weimann. Minimum cut in $O(m\log^2 n)$ time. In 47th ICALP, pages 57:1–57:15, 2020.
-  Dov Harel and Robert E. Tarjan. Fast algorithms for finding nearest common ancestors. SIAM J. Comput., 13(2):338–355, 1984.
-  David R. Karger. Minimum cuts in near-linear time. J. ACM, 47(1):46–76, 2000. Announced at STOC 1996.
-  Maria M. Klawe and Daniel J. Kleitman. An almost linear time algorithm for generalized matrix searching. SIAM J. Discrete Math., 3(1):81–97, 1990.
-  Sagnik Mukhopadhyay and Danupon Nanongkai. Weighted min-cut: Sequential, cut-query and streaming algorithms. In 52nd STOC, pages 496–509, 2020.
-  Daniel D. Sleator and Robert Endre Tarjan. A data structure for dynamic trees. J. Comput. Syst. Sci., 26(3):362–391, 1983.