1 Introduction
Connectivity-related problems are some of the most well-studied problems in graph theory and algorithms, and have been thoroughly investigated in the literature. Given a directed graph $G=(V,E)$ with $n$ vertices and $m$ edges (we sometimes use arcs when referring to directed edges, or nodes instead of vertices), perhaps the most fundamental such problem is to compute a minimum $st$-cut, i.e., a set of edges of minimum cardinality whose removal ensures that $t$ is not reachable from $s$ in $G$. This minimum $st$-cut problem is well known to be equivalent to maximum $st$-flow, as they have the exact same value [FF62]. Currently, the fastest algorithms for this problem run in time $\tilde{O}(m\sqrt{n})$ [LS14] and (faster for sparse graphs) $\tilde{O}(m^{10/7}U^{1/7})$ [Mąd16], where $U$ is the maximum edge capacity (aka weight); throughout, the $\tilde{O}(\cdot)$ notation hides polylogarithmic factors.
The central problem of study in this paper is All-Pairs Min-Cut (also known as All-Pairs Max-Flow), where the input is a digraph $G=(V,E)$ and the goal is to compute the minimum $st$-cut value for all pairs $s,t \in V$. All our graphs have unit edge/vertex capacities (aka uncapacitated), in which case the value of the minimum $st$-cut is just the maximum number of disjoint paths from $s$ to $t$ (aka edge/vertex connectivity), by Menger's theorem [Men27]. We consider a few variants: vertex capacities vs. edge capacities (the folklore reduction where each vertex is replaced by two vertices connected by an edge shows that in all our problems, vertex capacities are no harder, and perhaps easier, than edge capacities; notice that this is only true for directed graphs), reporting only the value vs. the cut itself (a witness), and a general digraph vs. a directed acyclic graph (DAG). For all these variants, we are interested in the bounded version (aka bounded min-cuts, hence the title of the paper), where the algorithm needs to find which minimum $st$-cuts have value less than a given parameter $k$, and report only those. Put differently, the goal is to compute, for every pair $s,t \in V$, the minimum between $k$ and the actual minimum $st$-cut value. Nonetheless, some of our results (the lower bounds) are of interest even without this restriction.
The time complexity of these problems should be compared against the fundamental special case that lies at their core, the Transitive Closure problem (aka All-Pairs Reachability), which is known to be time-equivalent to Boolean Matrix Multiplication and, in some sense, to Triangle Detection [WW18]. This is the case $k=1$, and it can be solved in time $O(\min\{mn, n^{\omega}\})$, where $\omega < 2.373$ is the matrix-multiplication exponent [CW90, LG14, Vas12]; the latter term is asymptotically better for dense graphs, but it is not combinatorial. (Combinatorial is an informal term describing algorithms that do not rely on fast matrix-multiplication algorithms, which are infamous for being impractical; see [AW14, ABW15] for further discussion.) This time bound is conjectured to be optimal for Transitive Closure, which can be viewed as a conditional lower bound for All-Pairs Min-Cut; but can we achieve this time bound algorithmically, or is All-Pairs Min-Cut a harder problem?
The naive strategy for solving All-Pairs Min-Cut is to execute a minimum $st$-cut algorithm $n^2$ times. For not-too-dense graphs, there is a faster randomized algorithm of Cheung, Lau, and Leung [CLL13] that runs in time $O(m^{\omega})$. For smaller $k$, some better bounds are known. First, observe that a minimum $st$-cut of value less than $k$ can be found via $k$ iterations of the Ford-Fulkerson algorithm [FF62] in time $O(km)$, which gives a total bound of $O(n^2 km)$. Another randomized algorithm of [CLL13] runs in better time, but it works only on DAGs; notice that the latter bound matches the running time of Transitive Closure if the graphs are sparse enough. For the case $k=2$, Georgiadis et al. [GGI17] achieved the same running time as Transitive Closure up to subpolynomial factors in all settings, by devising two deterministic algorithms, one running in $O(mn)$ time and one based on matrix multiplication.
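To make the Ford-Fulkerson observation concrete, here is a minimal Python sketch (the graph representation and names are ours, not the paper's): it runs at most $k$ augmenting-path iterations on a unit-capacity digraph, so it either returns the exact minimum $st$-cut value (when that value is below $k$) or stops after certifying that the value is at least $k$, spending $O(km)$ time per pair.

```python
from collections import defaultdict, deque

def bounded_min_cut(arcs, s, t, k):
    """Minimum s-t cut value in a unit-capacity digraph, capped at k.
    Runs at most k augmenting-path iterations of Ford-Fulkerson, so the
    total work is O(k*m).  arcs is a list of (u, v) pairs; parallel
    arcs are allowed and add capacity."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v in arcs:
        cap[u][v] += 1
        cap[v][u] += 0          # make sure the reverse residual arc exists
    flow = 0
    for _ in range(k):
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:     # BFS in the residual graph
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                  # exact value, already < k
        v = t
        while parent[v] is not None:     # push one unit along the path
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1
    return flow                          # == k: the true value is >= k
```

Running this once for every ordered pair of vertices is exactly the naive $O(n^2 km)$ strategy mentioned above.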
Other than the lower bound from Transitive Closure, the main previously known result is from [KT18], which showed that under the Strong Exponential Time Hypothesis (SETH) (these lower bounds hold even under the weaker assumption that the Orthogonal Vectors problem requires quadratic time), All-Pairs Min-Cut requires, up to subpolynomial factors, time in uncapacitated digraphs of any edge density, and even in the simpler case of (unit) vertex capacities and of DAGs. As a function of $k$, their lower bound becomes [KT18]. Combining the two, we have a conditional lower bound of .
Related Work.
There are many other results related to our problem; let us mention a few. Other than DAGs, the problem has also been considered in the special cases of planar digraphs [ACZ98, ŁNSWN12], sparse digraphs, and digraphs with bounded treewidth [ACZ98].
In undirected graphs, the problem was studied extensively following the seminal work of Gomory and Hu [GH61] in 1961, which introduced a representation of all-pairs min-cuts via a weighted tree, commonly called a Gomory-Hu tree, and further showed how to compute it using $n-1$ executions of maximum $st$-flow. Bhalgat et al. [BHKP07] designed an algorithm that computes a Gomory-Hu tree of an uncapacitated undirected graph in $\tilde{O}(mn)$ time, and this upper bound was recently improved [AKT19]. The case of bounded min-cuts (small $k$) in undirected graphs was studied by Hariharan et al. [HKP07], motivated in part by applications in practical scenarios. The fastest running time for this problem is achieved by combining results from [HKP07] and [BHKP07] [Pan16]. On the negative side, there is a conditional lower bound for All-Pairs Min-Cut in sparse capacitated digraphs [KT18], and very recently, a similar lower bound was shown for undirected graphs with vertex capacities [AKT19].
1.1 Our Contribution
The goal of this work is to reduce the gaps in our understanding of the All-Pairs Min-Cut problem (see Table 1 for a list of known and new results). In particular, we are motivated by three high-level questions. First, how large can $k$ be while keeping the time complexity the same as Transitive Closure? Second, could the problem be solved in cubic time (or faster) in all settings? Currently, no lower bound is known even in the hardest settings of the problem (capacitated, dense, general graphs). And third, can the actual cuts (witnesses) be reported in the same amount of time it takes to report only their values? Some of the previous techniques, such as those of [CLL13], cannot do that.
New Algorithms.
Our first result is a randomized algorithm that solves the bounded version of All-Pairs Min-Cut in a digraph with unit vertex capacities. This upper bound is only a small factor away from that of Transitive Closure, and thus matches it up to polynomial factors for any $k$. Moreover, any polynomial-factor improvement over our upper bound would imply a breakthrough for Transitive Closure (and many other problems). Our algorithm builds on the network-coding method of [CLL13], and in effect adapts this method to the easier setting of vertex capacities, to achieve a better running time than what is known for unit edge capacities. This algorithm is actually more general: given a digraph with unit vertex capacities and two subsets $S, T \subseteq V$, it computes, for all pairs $(s,t) \in S \times T$, the minimum $st$-cut value if this value is less than $k$. We overview these results in Section 3.1, with full details in Section 5.
Three weaknesses of this algorithm and the ones by Cheung et al. [CLL13] are that they do not return the actual cuts, they are randomized, and they are not combinatorial. Our next set of algorithmic results deals with these issues. More specifically, we present two deterministic algorithms for DAGs with unit edge (or vertex) capacities that compute, for every pair $s,t \in V$, an actual minimum $st$-cut if its value is less than $k$. The first algorithm is combinatorial (i.e., it does not involve matrix multiplication); the second algorithm can be faster on dense DAGs. These algorithms extend the results of Georgiadis et al. [GGI17], which matched the running time of Transitive Closure up to subpolynomial factors, from just $k=2$ to larger values of $k$. We give an overview of these algorithms in Section 3.2, and the formal results are Theorems 7.3 and 7.9.
New Lower Bounds.
Finally, we present conditional lower bounds for our problem, the bounded version of All-Pairs Min-Cut. As a result, we identify new settings where the problem is harder than Transitive Closure, and provide the first evidence that the problem cannot be solved in cubic time. Technically, the main novelty here is a reduction from the 4-Clique problem. It implies lower bounds that apply to the basic setting of DAGs with unit vertex capacities, and therefore immediately apply also to more general settings, such as edge capacities, capacitated inputs, and general digraphs; in fact, they improve over previous lower bounds [AWY18, KT18] in all these settings. (It is unclear whether our new reduction can be combined with the ideas in [AKT19] to improve the lower bounds in the seemingly easier case of undirected graphs with vertex capacities.) We prove the following theorem in Section 4.
Theorem 1.1.
If for some fixed constant and any $k$, the bounded version of All-Pairs Min-Cut can be solved on DAGs with unit vertex capacities in time , then 4-Clique can be solved in time for some .
Moreover, if for some fixed constant and any $k$ that version of All-Pairs Min-Cut can be solved combinatorially in time , then 4-Clique can be solved combinatorially in time for some .
To appreciate the new bounds, consider first the case $k=n$, which is equivalent to not restricting $k$. The previous lower bound, under SETH, is , and ours is larger by a factor of . For combinatorial algorithms, our lower bound is , which is essentially the largest possible lower bound one can prove without a major breakthrough in fine-grained complexity. This is because the naive algorithm for All-Pairs Min-Cut is to invoke an algorithm for Max-Flow $n^2$ times, hence a lower bound larger than this for our problem would imply the first nontrivial lower bound for minimum $st$-cut. The latter is perhaps the biggest open question in fine-grained complexity, and in fact many experts believe that near-linear time algorithms for minimum $st$-cut do exist, and can even be considered "combinatorial" in the sense that they do not involve the infamous inefficiencies of fast matrix multiplication. If such algorithms for minimum $st$-cut do exist, then our lower bound is tight.
Our lower bound shows that as $k$ grows, the time complexity of bounded All-Pairs Min-Cut exceeds that of Transitive Closure by polynomial factors. The lower bound is super-cubic whenever $k$ is large enough.
| Time | Input | Output | Reference |
| --- | --- | --- | --- |
| deterministic | digraphs | cuts ($k \le 2$ only) | [GGI17] |
| deterministic | digraphs | cuts | [FF62] |
| randomized | digraphs | cut values | [CLL13] |
| randomized | digraphs | cut values | [CLL13] |
| randomized, vertex capacities | digraphs | cut values | Theorem 5.2 |
| deterministic | DAGs | cuts | Theorem 7.3 |
| deterministic | DAGs | cuts | Theorem 7.9 |
| based on Transitive Closure | DAGs | cut values | |
| based on SETH | DAGs | cut values | [KT18] |
| based on 4-Clique | DAGs | cut values | Theorem 1.1 |
2 Preliminaries
We start with some terminology and well-known results on graphs and cuts. Next, we briefly introduce the main algebraic tools that are used throughout the paper. We note that although we are interested in solving the bounded All-Pairs Min-Cut problem, where we wish to find the all-pairs min-cuts of size less than $k$, for the sake of simpler notation we compute the min-cuts of size at most $k$ (instead of less than $k$), solving this way the $(k+1)$-bounded All-Pairs Min-Cut problem.
Directed graphs.
The input of our problem consists of an integer $k$ and a directed graph (digraph for short) $G=(V,E)$ with $n$ vertices and $m$ arcs. Every arc $e=(u,v)$ consists of a tail $u$ and a head $v$. By $G[S]$, we denote the subgraph of $G$ induced by the set of vertices $S$, formally $G[S]=(S, E \cap (S \times S))$. By $N^{\mathrm{out}}(v)$, we denote the out-neighborhood of $v$, consisting of all the heads of the arcs leaving $v$, and we denote by $\deg^{\mathrm{out}}(v)$ the number of outgoing arcs from $v$. All our results extend to multidigraphs, where each pair of vertices can be connected by multiple (parallel) arcs. For parallel arcs, we always refer to each arc individually, as if each arc had a unique identifier; so whenever we refer to a set of arcs, we refer to the set of their unique identifiers, i.e., without collapsing parallel arcs, as in a multiset.
Flows and cuts.
We follow the notation used by Ford and Fulkerson [FF62]. Let $G=(V,E)$ be a digraph, where each arc $e$ has a nonnegative capacity $c(e)$. For a pair of vertices $s$ and $t$, an $st$-flow of $G$ is a function $f$ on $E$ such that $0 \le f(e) \le c(e)$ for every arc $e$, and for every vertex $v \in V \setminus \{s,t\}$ the incoming flow equals the outgoing flow, i.e., $\sum_{(u,v) \in E} f(u,v) = \sum_{(v,w) \in E} f(v,w)$. If $G$ has vertex capacities as well, then $f$ must also satisfy $\sum_{(u,v) \in E} f(u,v) \le c(v)$ for every $v \in V \setminus \{s,t\}$, where $c(v)$ is the capacity of $v$. The value of the flow is defined as the net flow leaving $s$. We denote the existence of a path from $s$ to $t$ by $s \to t$, and the lack of such a path by $s \not\to t$. Any set $C \subseteq E$ is an $st$-cut if $s \not\to t$ in $G \setminus C$; $C$ is a minimal cut if no proper subset of $C$ is an $st$-cut. For an $st$-cut $C$, we say that its source side is the set of vertices reachable from $s$ in $G \setminus C$, and its target side is the set of vertices that reach $t$ in $G \setminus C$; we also refer to the source side and the target side as reachable and reaching, respectively. An $st$-mincut is a minimal cut of minimum size, and a set of $st$-cuts of size at most $k$ is called a set of $k$-bounded $st$-cuts. Vertex cuts are defined analogously.
Order of cuts.
An $st$-cut $C_1$ is later (respectively, earlier) than an $st$-cut $C_2$ if and only if the source side of $C_1$ contains the source side of $C_2$ (resp. the target side of $C_1$ contains the target side of $C_2$). Note that those relations are not necessarily complementary if the cuts are not minimal (see Figure 1 for an example). We make these inequalities strict whenever the inclusions are proper. We compare a cut $C$ and an arc $e$ by saying that $e$ is earlier than $C$ whenever both endpoints of $e$ are in the source side of $C$; additionally, the non-strict version of this relation includes the case where $e \in C$. The 'later' relations between an arc and a cut follow by symmetry. We refer to Figure 2 for illustrations. This partial order of cuts also allows us to define cuts that are extremal with respect to all other $st$-cuts, in the following sense:
Definition 2.1 (latest cuts [Mar06]).
An $st$-cut is latest (resp. earliest) if and only if there is no later (resp. earlier) $st$-cut of smaller or equal size.
Informally speaking, a cut is latest if we would have to cut through more arcs whenever we would like to cut off fewer vertices. This naturally extends the definition of the latest $st$-mincut as used by Ford and Fulkerson [FF62, Section 5]. The notion of latest cuts was first introduced by Marx [Mar06] (under the name of important cuts) in the context of fixed-parameter tractable algorithms for multi(way) cut problems. Since we need both earliest and latest cuts, we do not refer to latest cuts as important cuts. Additionally, we use the term extremal cuts to refer to the union of earliest and latest cuts.
We will now briefly recap the framework of Cheung et al. [CLL13], as we will modify it later for our purposes.
3 Overview of Our Algorithmic Approach
3.1 Randomized Algorithms on General Graphs
In the framework of [CLL13], edges are encoded as vectors, so that the vector of each edge $e$ is a random linear combination of the vectors corresponding to the edges incoming to the tail of $e$. One can compute all these vectors for the whole graph simultaneously using some matrix manipulations. The bottleneck is that one has to invert a certain matrix with an entry for each pair of edges. Just reading the matrix that is output by the inversion requires $\Omega(m^2)$ time, since most entries in the inverted matrix are expected to be nonzero even if the graph is sparse.
To overcome this barrier, while using the same framework, we define the encoding vectors on the nodes rather than the edges. We show that this is sufficient for the vertex-capacitated setting. Then, instead of inverting a large matrix, we need to compute the rank of certain submatrices, which becomes the new bottleneck. When $k$ is small enough, this turns out to lead to a significant speedup compared to the running time in [CLL13].
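For intuition, here is a toy Python sketch of the underlying network-coding principle, stated for unit edge capacities on a DAG (the paper's algorithm instead places vectors on nodes and computes ranks of suitable submatrices; all names, the field size, and the graph representation here are our own assumptions): edges out of the source carry independent random vectors, every other edge carries a random combination of the vectors entering its tail, and with high probability the rank of the vectors entering $t$ equals the $st$-max-flow.

```python
import random

P = (1 << 31) - 1          # a prime; all arithmetic is over GF(P)

def rank_mod_p(rows):
    """Rank of a list of vectors over GF(P), by Gaussian elimination."""
    rows = [list(r) for r in rows]
    if not rows:
        return 0
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], P - 2, P)
        rows[rank] = [x * inv % P for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(x - f * y) % P for x, y in zip(rows[i], rows[rank])]
        rank += 1
        if rank == len(rows):
            break
    return rank

def coding_max_flow(edges, topo, s, t):
    """Estimate the s-t max-flow of a unit-edge-capacity DAG via random
    linear network coding.  edges: list of (u, v); topo: a topological
    order of the vertices."""
    d = sum(1 for u, v in edges if u == s)     # vectors live in GF(P)^d
    pos = {v: i for i, v in enumerate(topo)}
    order = sorted(range(len(edges)), key=lambda e: pos[edges[e][0]])
    vec = {}                                   # edge index -> vector
    for e in order:
        u, v = edges[e]
        if u == s:                             # independent random vector
            vec[e] = [random.randrange(P) for _ in range(d)]
        else:                                  # random combination of inputs
            vec[e] = [0] * d
            for f in range(len(edges)):
                if edges[f][1] == u:
                    c = random.randrange(P)
                    vec[e] = [(x + c * y) % P for x, y in zip(vec[e], vec[f])]
    sink = [vec[e] for e in range(len(edges)) if edges[e][1] == t]
    return rank_mod_p(sink)
```

The failure probability is at most a small multiple of $1/P$, which is negligible for a 31-bit prime.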
3.2 Deterministic Algorithms with Witnesses on DAGs
Here we deal with the problem of computing certificates for the bounded All-Pairs Min-Cut problem. Our contribution here is twofold. We first prove some properties of the structure of the earliest cuts and of the latest cuts, which might be of independent interest. This gives us some crucial insights into the structure of the cuts, and allows us to develop an algorithmic framework for solving the $k$-bounded All-Pairs Min-Cut problem. As a second contribution, we exploit our new algorithmic framework in two different ways, leading to two new algorithms, one combinatorial and one based on matrix multiplication.
Let $G$ be a DAG. Consider some arbitrary pair of vertices $s$ and $t$, and any $st$-cut $C$. For every intermediate vertex $v$, $C$ must be either an $sv$-cut or a $vt$-cut (otherwise, some path from $s$ through $v$ to $t$ would survive). The knowledge of all $sv$-mincuts and all $vt$-mincuts does not convey enough information for computing an $st$-mincut of size at most $k$ quickly, as illustrated in Figure 3. However, we are able to compute an $st$-mincut by processing all the earliest $sv$-cuts and all the latest $vt$-cuts of size at most $k$. We build our approach around this insight. We note that the characterization that we develop is particularly useful, as it has been shown that the number of all earliest/latest $st$-cuts of size at most $k$ can be upper bounded by $4^k$, independently of the size of the graph.
For a more precise formulation of how to recover a mincut (or extremal cuts) from cuts to and from intermediate vertices, consider the following. Let $(E_1, E_2)$ be an arc split, that is, a partition of the arc set with the property that any path in $G$ consists of a (possibly empty) sequence of arcs from $E_1$ followed by a (possibly empty) sequence of arcs from $E_2$ (see Definition 6.6). Assume that for each vertex $v$ we know all the earliest $sv$-cuts using arcs of $E_1$ and all the latest $vt$-cuts using arcs of $E_2$. We show that a set of arcs that contains as a subset, for every $v$, one such earliest $sv$-cut or one such latest $vt$-cut, is an $st$-cut. Moreover, we show that the sets of arcs with the above property include all the latest $st$-cuts. Hence, in order to identify all latest $st$-cuts, it is sufficient to identify all sets with that property. We next describe how we use these structural properties to compute all extremal cuts.
We formulate the following combinatorial problem over families of sets, which is independent of graphs and cuts, and which we use to compute all extremal cuts. The input is a collection of set families $\mathcal{F}_1, \dots, \mathcal{F}_r$, where each family consists of a bounded number of sets, and each set contains at most $k$ elements from a universe $U$. The goal is to compute all minimal subsets $S \subseteq U$ for which, for every $i$, there exists a set $A \in \mathcal{F}_i$ such that $A \subseteq S$. We refer to this problem as Witness Superset. To create an instance of the Witness Superset problem for a pair $s,t$, we take as families the earliest $sv$-cuts and the latest $vt$-cuts, one family per intermediate vertex $v$. Informally speaking, the solution to this instance picks all sets of arcs that cover at least one earliest or one latest cut for every vertex. In a postprocessing step, we filter the solution to the Witness Superset instance in order to extract all the latest $st$-cuts. We follow an analogous process to compute all the earliest $st$-cuts.
Algorithmic framework.
We next define a common algorithmic framework for solving the $k$-bounded All-Pairs Min-Cut problem, as follows. We pick a partition $(V_1, V_2)$ of the vertices such that there is no arc from $V_2$ to $V_1$. Such a partition can be trivially computed from a topological order of the input DAG. Consider the sets of arcs inside $V_1$, inside $V_2$, and crossing from $V_1$ to $V_2$.
- First, we recursively solve the problem in the subgraph induced by $V_1$ and in the subgraph induced by $V_2$. The recursion returns without doing any work whenever the graph is a singleton vertex.
- Second, for each pair of vertices $s \in V_1$ and $t \in V_2$ such that $s$ has an outgoing crossing arc, we solve the corresponding instance of Witness Superset. Notice that the only nonempty earliest cuts for such a pair involve the crossing arcs.
- Finally, for each remaining pair of vertices $s \in V_1$ and $t \in V_2$, we solve the corresponding instance of Witness Superset.
The Witness Superset problem can be solved naively as follows. Assume that, for each vertex $v$ that is both reachable from $s$ and reaches $t$, we have the family consisting of all relevant earliest $sv$-cuts and latest $vt$-cuts. We can identify all sets of arcs that contain at least one cut from each family by trying every combination of one cut per family, but this yields an algorithm with superpolynomial running time. However, we speed up this naive procedure by applying some judicious pruning, achieving a better running time that is polynomial for small $k$. In the following, we sketch the two algorithms that we develop for solving the $k$-bounded All-Pairs Min-Cut problem efficiently.
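The naive combination step can be sketched as follows (a hypothetical helper of our own, not the paper's pruned algorithm): try every choice of one set per family, keep the unions of size at most $k$, and filter them down to the inclusion-minimal ones.

```python
from itertools import product

def witness_superset(families, k):
    """Naive Witness Superset: given families of sets, return all
    inclusion-minimal sets S of size at most k that contain at least
    one member of every family.  Tries every choice of one
    representative set per family, hence superpolynomial in general."""
    candidates = set()
    for choice in product(*families):
        s = frozenset().union(*choice)     # union of the chosen sets
        if len(s) <= k:
            candidates.add(s)
    # keep only the inclusion-minimal candidates
    return [s for s in candidates
            if not any(t < s for t in candidates)]
```

The number of choices is the product of the family sizes, which is exactly what the pruning in our actual algorithms avoids.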
Iterative division.
For the first algorithm, we process the vertices in reverse topological order. When processing a vertex $s$, we let $V_1 = \{s\}$ and let $V_2$ be the set of vertices that appear after $s$ in the topological order. Notice that $V_1$ has a trivial structure, and we already know all latest cuts within $V_2$ from the previous iterations. In this case, we present an algorithm for solving each instance of the Witness Superset problem in time proportional, up to factors depending only on $k$, to the number of arcs leaving $s$. We invoke this algorithm for each pair $s, t$ such that $t$ is reachable from $s$, which bounds the time for processing $s$, and hence the total running time, accordingly.
Recursive division.
For the second algorithm, we recursively partition the set of vertices evenly into sets $V_1$ and $V_2$ at each level of the recursion. We first recursively solve the problem in $V_1$ and in $V_2$. Second, we solve the instances of Witness Superset for all pairs of vertices $s \in V_1$ and $t \in V_2$. Notice that the number of vertices that are both reachable from $s$ in $V_1$ and reach $t$ in $V_2$ can be as high as $\Omega(n)$; this implies that even constructing all instances of the Witness Superset problem, over all pairs, takes cubic time. To overcome this barrier, we take advantage of the power of fast matrix multiplication by applying it to suitably defined matrices of binary codes (codewords). At a very high level, this approach was used by Fischer and Meyer [FM71] in their $O(n^{\omega})$-time algorithm for transitive closure in DAGs; there, the binary codes were of size 1, indicating whether there exists an arc between two vertices.
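For intuition, here is a minimal sketch of the Fischer-Meyer divide-and-conquer pattern that the recursive algorithm generalizes (our own code, assuming the DAG's adjacency matrix is given in topological order, so that it is strictly upper-triangular): recurse on the two halves and stitch them together with a single matrix product, since every path from the first half into the second half crosses exactly once.

```python
import numpy as np

def transitive_closure(A):
    """Reflexive-transitive closure of a DAG whose vertices are
    topologically ordered (A is a strictly upper-triangular 0/1 or
    boolean adjacency matrix).  Split the vertices into halves X and Y,
    recurse, then combine: a path from X to Y is a closure step in X,
    one crossing arc, and a closure step in Y."""
    n = A.shape[0]
    if n <= 1:
        return np.eye(n, dtype=bool) | A.astype(bool)
    h = n // 2
    C = np.zeros((n, n), dtype=bool)
    C[:h, :h] = transitive_closure(A[:h, :h])
    C[h:, h:] = transitive_closure(A[h:, h:])
    # one matrix product stitches the two halves together
    C[:h, h:] = (C[:h, :h].astype(int) @ A[:h, h:].astype(int)
                 @ C[h:, h:].astype(int)) > 0
    return C
```

In our setting, the 0/1 entries are replaced by codewords and the scalar AND/OR operations by bitwise operations on those codewords.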
Algebraic framework.
In order to use coordinate-wise Boolean matrix multiplication with the entries of the matrices being codewords, we first encode all earliest and all latest cuts using binary codes. The bitwise Boolean multiplication of such matrices with binary codes in their entries allows a serial combination of both $sv$-cuts and $vt$-cuts based on AND operations, and thus allows us to construct a solution based on the OR of pairwise AND operations. We show that superimposed codes are suitable in our case, i.e., binary codes where sets are represented as the bitwise OR of the codewords of their objects, and small sets are guaranteed to be encoded uniquely. Superimposed codes provide a unique representation for sets of at most $k$ elements from a universe of size $N$ with codewords of length $O(k^2 \log N)$. In this setting, the union of sets translates naturally to the bitwise OR of their codewords.
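A random construction along these lines can be sketched as follows (the parameters and names are our own; classical explicit constructions achieve the stated $O(k^2 \log N)$ length): each object's codeword has each bit set independently with probability $1/(k+1)$, a set is encoded as the bitwise OR of its members' codewords, and an object is decoded as present iff its codeword is covered by the OR.

```python
import random

def superimposed_code(universe, k, seed=0):
    """Random superimposed code: each object gets a binary codeword in
    which each bit is set independently with probability 1/(k+1).  With
    length O(k^2 log n), the OR of any <= k codewords covers no other
    object's codeword w.h.p., so sets of size <= k decode uniquely."""
    rng = random.Random(seed)
    n = max(2, len(universe))
    L = 4 * k * k * n.bit_length()          # codeword length
    return {x: sum(1 << i for i in range(L) if rng.random() <= 1 / (k + 1))
            for x in universe}

def encode(code, s):
    """A set is represented by the bitwise OR of its members' codewords."""
    w = 0
    for x in s:
        w |= code[x]
    return w

def decode(code, word):
    """Recover the encoded set: x is declared present iff its codeword
    is fully covered by the OR (over-reports with tiny probability)."""
    return {x for x, c in code.items() if c & word == c}
```

Note how the union of two sets is simply the OR of their encodings, which is what makes these codes compatible with Boolean matrix products.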
Tensor product of codes.
To achieve our bounds, we compose several identical superimposed codes into a new binary code, so that encoding set families with it enables us to solve the corresponding instances of Witness Superset. Our composition has the cost of an exponential increase in the length of the code. Let $A_1, \dots, A_d$ be the set family that we wish to encode, and let $c_1, \dots, c_d$ be their superimposed codewords in the form of 0/1 vectors. We construct a $d$-dimensional array whose entry $(i_1, \dots, i_d)$ is 1 iff $c_j[i_j] = 1$ for each $j$. In other words, the resulting code is the tensor product of all the superimposed codewords. This construction creates enough redundancy so that enough information on the structure of the set families is preserved; furthermore, we can extract the encoded information from the bitwise OR of several codewords. The length of the resulting code is the length of a single superimposed codeword raised to the power of the upper bound on the allowed number of sets in each encoded set family. In our case, this results in only a logarithmic dependency on $n$ at the price of a doubly-exponential dependency on $k$, thus making the problem tractable for small values of $k$.
From slices to Witness Superset.
Finally, we show how Witness Superset can be solved using tensor products of superimposed codes. Consider the notion of cutting a codeword of dimension $d$ with an axis-parallel hyperplane, obtaining a codeword of dimension $d-1$; we call this resulting shorter codeword a slice of the original codeword. A slice of a tensor product is a tensor product of one dimension less, or an empty set, and a slice of a bitwise OR of tensor products is likewise a bitwise OR of tensor products (of one dimension less). Thus, taking a slice of the bitwise OR of the encoding of families of sets is equivalent to removing a particular set from some families, dropping some other families completely, and then encoding the remaining, reduced families. We can therefore design a nondeterministic algorithm which, at each step of the recursion, picks slices, one slice for each element of the solution we want to output, and then recurses on the bitwise OR of those slices, reducing the dimension by one in the process. This is always possible, since each element that belongs to a particular solution of Witness Superset satisfies one of the following: either it has a witnessing slice, and thus it is preserved in the solution to the recursive call; or it is dense enough in the input so that it is a member of every solution, and we can detect this situation by scanning the diagonal of the input codeword. This nondeterministic approach is then made deterministic by simply considering every possible choice of slices at each step of the recursion. This does not increase the complexity of the decoding procedure substantially, since the total cost is still only doubly-exponential in $k$.
4 Reducing 4-Clique to All-Pairs Min-Cut
In this section we prove Theorem 1.1 by showing new reductions from the 4-Clique problem to bounded All-Pairs Min-Cut with unit vertex capacities. These reductions yield conditional lower bounds that are much higher than previous ones, which were based on SETH, and they always produce DAGs. Throughout this section, we often use the term nodes for vertices.
Definition 4.1 (The 4-Clique Problem).
Given a 4-partite graph $G = (A \cup B \cup C \cup D, E)$ with $|A| = |B| = |C| = |D| = n$, decide whether there are four nodes $a \in A$, $b \in B$, $c \in C$, $d \in D$ that form a clique.
This problem is equivalent to the standard formulation of 4-Clique (without the restriction to 4-partite graphs). The currently known running times are achieved using matrix multiplication [EG04] and combinatorially [Yu18]. The Clique Conjecture [ABW15] hypothesizes that the current clique algorithms are optimal. Usually, when the Clique Conjecture is used, it is enough to assume that the current algorithms are optimal for every clique size $k$ that is a multiple of 3, where the known running times are $O(n^{\omega k/3})$ [NP85] and combinatorially [Vas09]; see, e.g., [ABBK17, ABW15, BW17, Cha15, LWW18]. However, we will need the stronger assumption that one cannot improve the current algorithms for 4-Clique by any polynomial factor. This stronger form was previously used by Bringmann, Grønlund, and Larsen [BGL17].
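For reference, the trivial baseline for the 4-partite formulation is the following brute-force check (a helper of our own, shown only to pin down the problem statement); the stronger assumption above says that, combinatorially, no algorithm beats this $O(n^4)$ scan by a polynomial factor.

```python
from itertools import product

def four_clique_4partite(A, B, C, D, E):
    """Brute-force 4-partite 4-Clique: are there a in A, b in B, c in C,
    d in D with all six pairs present in the edge set E?  E is a set of
    frozenset edges of the undirected graph G."""
    adj = lambda x, y: frozenset((x, y)) in E
    for a, b, c, d in product(A, B, C, D):
        if (adj(a, b) and adj(a, c) and adj(a, d) and
                adj(b, c) and adj(b, d) and adj(c, d)):
            return True
    return False
```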
4.1 Reduction to the Unbounded Case
We start with a reduction for the unbounded case (equivalent to $k = n$), that is, we reduce to All-Pairs Min-Cut with unit node capacities (abbreviated APMVC, for All-Pairs Minimum Vertex-Cut). Later (in Section 4.2) we will enhance the construction in order to bound $k$.
Lemma 4.2.
Suppose APMVC on $n$-node DAGs with unit node capacities can be solved in time $T(n)$. Then 4-Clique on $n$-node graphs can be solved in time $O(T(n) + M(n))$, where $M(n)$ is the time to multiply two $n \times n$ Boolean matrices.
To illustrate the usage of this lemma, observe that a combinatorial algorithm for APMVC that is polynomially faster than the stated bound would imply a combinatorial algorithm with similar running time for 4-Clique.
Proof.
Given a 4-partite graph $G$ as input for the 4-Clique problem, the graph $G'$ is constructed as follows. The node set of $G'$ is the same as that of $G$, and we abuse notation and refer to it as if it is partitioned into $A$, $B$, $C$, and $D$. Thinking of $A$ as the set of sources and $D$ as the set of sinks, the proof will focus on the number of node-disjoint paths from nodes $a \in A$ to nodes $d \in D$. The edges of $G'$ are defined in a more special way; see Figure 4 for an illustration.
- (A to B) For every $a \in A$ and $b \in B$ such that $\{a,b\} \in E$, add to $G'$ a directed edge $(a,b)$.
- (B to C) For every $b \in B$ and $c \in C$ such that $\{b,c\} \in E$, add to $G'$ a directed edge $(b,c)$.
- (C to D) For every $c \in C$ and $d \in D$ such that $\{c,d\} \in E$, add to $G'$ a directed edge $(c,d)$.
The definition of the edges of $G'$ will continue shortly. So far, edges in $G'$ correspond to edges in $G$, and there is a (directed) path $a \to b \to c \to d$ if and only if the three (undirected) edges $\{a,b\}, \{b,c\}, \{c,d\}$ exist in $G$. In the rest of the construction, our goal is to make this 3-hop path contribute to the final flow if and only if $\{a,b,c,d\}$ is a clique in $G$ (i.e., all six edges exist, not only those three). Towards this end, additional edges are introduced that make this 3-hop path useless in case $\{a,c\}$ or $\{b,d\}$ are not also edges in $G$. This allows "checking" for five of the six edges in the clique, rather than just three. The sixth edge, $\{a,d\}$, is easy to "check".
- (A to C) For every $a \in A$ and $c \in C$ such that $\{a,c\} \notin E$, add to $G'$ a directed edge $(a,c)$.
- (B to D) For every $b \in B$ and $d \in D$ such that $\{b,d\} \notin E$, add to $G'$ a directed edge $(b,d)$.
This completes the construction of $G'$. Note that these additional edges imply that there is a 2-hop path $a \to c \to d$ in $G'$ iff $\{a,c\} \notin E$ and $\{c,d\} \in E$, and similarly, there is a 2-hop path $a \to b \to d$ in $G'$ iff $\{a,b\} \in E$ and $\{b,d\} \notin E$. For nodes $a \in A$ and $d \in D$, we will be interested in the total number of such 2-hop paths from $a$ to $d$.
We now argue that if an APMVC algorithm is run on $G'$, enough information is received to solve 4-Clique on $G$ by spending only an additional postprocessing stage of $O(n^2)$ time.
Claim 4.3.
Let $a \in A$ and $d \in D$ be nodes with $\{a,d\} \in E$. If the edge $\{a,d\}$ does not participate in a 4-clique in $G$, then the node connectivity from $a$ to $d$ in $G'$ is exactly the number of 2-hop paths from $a$ to $d$ in $G'$, and otherwise it is strictly larger.
Proof of Claim 4.3.
We start by observing that all paths from $a$ to $d$ in $G'$ have either two or three hops.
Assume now that there is a clique $\{a, b, c, d\}$ in $G$, and let us exhibit a set $P$ of node-disjoint paths from $a$ to $d$ whose size exceeds the number of 2-hop paths. For every 2-hop path $a \to c' \to d$ with $c' \in C$, add it to $P$; for every 2-hop path $a \to b' \to d$ with $b' \in B$, add it to $P$. So far, all these paths are clearly node-disjoint. Then, add the 3-hop path $a \to b \to c \to d$ to $P$. This path is node-disjoint from the rest because $(a,c)$ is not an edge of $G'$ (because $\{a,c\} \in E$) and $(b,d)$ is not an edge of $G'$ (because $\{b,d\} \in E$).
Next, assume that no nodes $b \in B$ and $c \in C$ complete a clique with $a, d$. Then for every set $P$ of node-disjoint paths from $a$ to $d$, there is a set of 2-hop node-disjoint paths from $a$ to $d$ that has the same size. To see this, let $a \to b \to c \to d$ be some 3-hop path in $P$. Since $\{a,b,c,d\}$ is not a clique in $G$ and $\{a,b\}, \{b,c\}, \{c,d\}, \{a,d\}$ are edges in $G$, we conclude that either $\{a,c\} \notin E$ or $\{b,d\} \notin E$. If $\{a,c\} \notin E$, then $(a,c)$ is an edge in $G'$ and the 3-hop path can be replaced with the 2-hop path $a \to c \to d$ (by skipping $b$), and one remains with a set of node-disjoint paths of the same size. Similarly, if $\{b,d\} \notin E$, then $(b,d)$ is an edge in $G'$ and the 3-hop path can be replaced with the 2-hop path $a \to b \to d$. This can be done for all 3-hop paths. Finally, note that the number of node-disjoint 2-hop paths from $a$ to $d$ equals the total number of 2-hop paths from $a$ to $d$ (each uses a distinct middle node), and this completes the proof of Claim 4.3. ∎
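The construction and the claim can be checked mechanically on small instances. The sketch below is our own code: the non-edge conditions for the A-to-C and B-to-D arcs follow the reconstruction above, and vertex connectivity is computed by the standard node-splitting reduction to unit-capacity max-flow.

```python
from collections import deque
from itertools import product

def build_reduction(A, B, C, D, E):
    """Build the digraph G' of the reduction.  E is a set of frozenset
    edges of the 4-partite graph G; forward arcs mirror edges of G,
    while the A-to-C and B-to-D arcs mirror NON-edges of G."""
    adj = lambda x, y: frozenset((x, y)) in E
    arcs = []
    arcs += [(a, b) for a, b in product(A, B) if adj(a, b)]
    arcs += [(b, c) for b, c in product(B, C) if adj(b, c)]
    arcs += [(c, d) for c, d in product(C, D) if adj(c, d)]
    arcs += [(a, c) for a, c in product(A, C) if not adj(a, c)]
    arcs += [(b, d) for b, d in product(B, D) if not adj(b, d)]
    return arcs

def vertex_connectivity(arcs, s, t):
    """Max number of internally node-disjoint s-t paths, via the
    node-splitting reduction to unit-capacity max-flow (BFS augmenting
    paths); s and t themselves are not split."""
    side = lambda v, io: v if v in (s, t) else (io, v)
    cap = {}
    for v in {x for a in arcs for x in a} - {s, t}:
        cap[(('in', v), ('out', v))] = 1          # unit node capacity
    for u, v in arcs:
        cap[(side(u, 'out'), side(v, 'in'))] = 1
    res = {}
    for (u, v), c in cap.items():                 # residual adjacency
        res.setdefault(u, {})[v] = c
        res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:              # augment by one unit
            u = parent[v]
            res[u][v] -= 1
            res[v][u] += 1
            v = u
        flow += 1
```

On a clique instance the connectivity strictly exceeds the 2-hop path count, and on a non-clique instance the two coincide, matching Claim 4.3.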
Computing the estimates.
To complete the reduction, observe that the 2-hop path counts can be computed for all pairs using two matrix multiplications. To count the paths through $B$, multiply the two 0/1 matrices that indicate, respectively, the edges of $G'$ from $A$ to $B$ and from $B$ to $D$; the $(a,d)$ entry of the product is exactly the number of 2-hop paths $a \to b \to d$. The paths through $C$ are counted analogously, multiplying the matrices that indicate the edges of $G'$ from $A$ to $C$ and from $C$ to $D$.
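The counting step is one integer matrix product per intermediate part; a minimal sketch (names are ours):

```python
import numpy as np

def two_hop_counts(M1, M2):
    """For 0/1 matrices M1 (rows indexed by A, columns by an
    intermediate part X) and M2 (rows by X, columns by D), the integer
    product counts, for every pair (a, d), the 2-hop paths a -> x -> d.
    This is the 'two matrix multiplications' step of the reduction."""
    return M1.astype(np.int64) @ M2.astype(np.int64)
```

One product counts the paths through $B$ and a second, analogous product counts the paths through $C$.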
After having these estimates and computing APMVC on $G'$, it can be decided whether $G$ contains a 4-clique in $O(n^2)$ time as follows. Go through all edges $\{a,d\} \in E$ with $a \in A$ and $d \in D$, and decide whether the edge participates in a 4-clique by comparing its 2-hop path count to the node connectivity from $a$ to $d$ in $G'$. By the above claim, an edge whose connectivity exceeds its 2-hop path count is found if and only if there is a 4-clique in $G$. The total running time is as claimed, which completes the proof of Lemma 4.2. ∎
4.2 Reduction to the Bounded Case
Next, we exploit a certain versatility of the reduction and adapt it to ask only about min-cut values (aka node connectivities) that are smaller than $k$. In other words, we reduce to the bounded version of All-Pairs Min-Cut with unit node capacities (abbreviated $k$-APMVC, for $k$-bounded All-Pairs Minimum Vertex-Cut). Our lower bound improves on the conjectured lower bound for Transitive Closure as long as $k$ is large enough.
Lemma 4.4.
Suppose $k$-APMVC on $n$-node DAGs with unit node capacities can be solved in time $T(n,k)$. Then 4-Clique on $n$-node graphs can be solved in time $O((n/k)^2 \cdot T(n,k) + M(n))$, where $M(n)$ is the time to multiply two $n \times n$ Boolean matrices.
Proof of Lemma 4.4.
Given a 4-partite graph $G$ as in the definition of the 4-Clique problem, several graphs are constructed in a way that is similar to the previous reduction, and an algorithm for $k$-APMVC is called on each of these graphs. Assume w.l.o.g. that $k$ divides $n$, and partition the sets $B$ and $C$ arbitrarily into sets $B_1, \dots, B_{n/k}$ and $C_1, \dots, C_{n/k}$ of size $k$ each. For each pair of integers $i, j \in \{1, \dots, n/k\}$, generate one graph $G_{i,j}$ by restricting the attention to the nodes of $B_i$ and $C_j$ and looking for a clique only there.
Let us fix a pair $i, j$ and describe the construction of $G_{i,j}$. To simplify the description, let us omit the subscripts, referring to this graph as $G'$, and think of $G$ as having four parts $A, B, C, D$, where $B$ and $C$ are in fact $B_i$ and $C_j$ and are therefore smaller: $|B| = |C| = k$.
The nodes in $G'$ are partitioned into four sets, where some of the sets are the same as in $G$. For the nodes of the remaining sets, multiple copies are created in $G'$: for every relevant integer and each such node of $G$, a corresponding copy is added to $G'$.
To define the edges, partition the nodes in $B$ and $C$ arbitrarily into sets of equal size. Now, the edges are defined in a similar way to the previous proof, except that each copy is connected only to the nodes of its corresponding subset. More formally:
- (A to B) For every pair such that , add to $G'$ a directed edge.
- (B to C) For every pair such that , add to $G'$ a directed edge.
- (C to D) For every pair such that , add to $G'$ a directed edge.
- (A to C) For every pair such that , add to $G'$ a directed edge.
- (B to D) For every pair such that , add to $G'$ a directed edge.
This completes the construction of $G'$. The arguments for correctness follow the same lines as in the previous proof. For nodes $a$ and $d$, we again consider the number of 2-hop paths from $a$ to $d$ in $G'$.
Claim 4.5.
Let $a$ and $d$ be nodes with $\{a,d\} \in E$. If the edge $\{a,d\}$ does not participate in a clique in $G$ together with any nodes of $B$ and $C$, then the node connectivity from $a$ to $d$ in $G'$ is exactly the number of 2-hop paths from $a$ to $d$ in $G'$, and otherwise it is strictly larger.
Proof of Claim 4.5.
The proof is very similar to the one in the previous reduction.
We start by observing that all paths from $a$ to $d$ in $G'$ have either two or three hops.
For the first direction, assuming that there is a clique $\{a, b, c, d\}$ in $G$ with $b \in B$ and $c \in C$, we show a set $P$ of node-disjoint paths from $a$ to $d$ whose size exceeds the number of 2-hop paths. For every 2-hop path from $a$ to $d$ through a node of $B$ or of $C$, add it to $P$. So far, all these paths are clearly node-disjoint. Then, add the 3-hop path $a \to b \to c \to d$ to $P$. This path is node-disjoint from the rest because $(a,c)$ is not an edge of $G'$ (because $\{a,c\} \in E$) and $(b,d)$ is not an edge of $G'$ (because $\{b,d\} \in E$).
For the second direction, assume that there do not exist nodes $b \in B$ and $c \in C$ that complete a clique with $a, d$. In this case, for every set $P$ of node-disjoint paths from $a$ to $d$, there is a set of 2-hop node-disjoint paths from $a$ to $d$ that has the same size. To see this, let $a \to b \to c \to d$ be some 3-hop path in $P$. Since $\{a,b,c,d\}$ is not a clique in $G$ and $\{a,b\}, \{b,c\}, \{c,d\}, \{a,d\}$ are edges in $G$, it follows that either $\{a,c\} \notin E$ or $\{b,d\} \notin E$. If $\{a,c\} \notin E$, then $(a,c)$ is an edge in $G'$ and the 3-hop path can be replaced with the 2-hop path $a \to c \to d$ (by skipping $b$), and one remains with a set of node-disjoint paths of the same size. Similarly, if $\{b,d\} \notin E$, then $(b,d)$ is an edge in $G'$ and the 3-hop path can be replaced with the 2-hop path $a \to b \to d$. This can be done for all 3-hop paths. Finally, note that the number of node-disjoint 2-hop paths from $a$ to $d$ equals the total number of 2-hop paths, and this completes the proof of Claim 4.5. ∎
This claim implies that, in order to determine whether a pair $a, d$ participates in a clique in $G$, it is enough to check whether the node connectivity from $a$ to $d$ in $G'$ equals the number of 2-hop paths from $a$ to $d$. Note that the latter coincides with the corresponding count from the previous reduction.
Computing the estimates.
To complete the reduction, observe that the values