1 Introduction
The bottleneck path problem is a graph optimization problem that asks for a path between two vertices with the maximum flow, where the flow of a path is defined as the minimum capacity of the edges on that path. The bottleneck path problem can be seen as a mathematical formulation of many network routing problems, e.g. finding the route with the maximum transmission speed between two nodes in a network, and it has many other applications such as digital compositing [FGA98]. It is also an important building block of other algorithms, such as the improved Ford-Fulkerson algorithm [EK72, FF56] and the k-splittable flow algorithm [BKS02]. The minimax path problem, which finds the path that minimizes the maximum weight on it, is symmetric to the bottleneck path problem and thus has the same time complexity.
1.1 Our Results
In a directed graph $G=(V,E)$, we consider the single-source bottleneck path (SSBP) problem, which finds the bottleneck paths from a given source node $s$ to all other vertices. In the comparison-based model, the previous best time bound for SSBP is the traditional Dijkstra's algorithm [Dij59] with Fibonacci heap [FT87], which runs in $O(m + n\log n)$ time. Some progress has been made for slight variants of the SSBP problem: when the graph is undirected, SSBP is reducible to minimum spanning tree [Hu61], and thus can be solved in randomized linear time [KKT95]; for the single-source single-destination bottleneck path (st-BP) problem in directed graphs, Gabow and Tarjan [GT88] showed that it can be solved in $O(m\log^* n)$ time, and this bound was subsequently improved by Chechik et al. [CKT16] to $O(m\beta(m,n))$. However, until now no algorithm has been known to be better than Dijkstra's algorithm for SSBP in directed graphs. And as noted in [FT87], Dijkstra's algorithm can be used to sort $n$ numbers, so a "sorting barrier" of $\Omega(n\log n)$ prevents us from finding a more efficient implementation of Dijkstra's algorithm.
In this paper, we present an algorithm for SSBP that overcomes the sorting barrier. Our main result is stated in the following theorem:
Theorem 1.
Let $G=(V,E)$ be a directed graph with edge weights $w : E \to \mathbb{R}$. In the comparison-based model, SSBP can be solved in $O(m\sqrt{\log n})$ expected time.
1.2 Related Works
A "sorting barrier" seemed to exist for the Minimum Spanning Tree problem (MST) for many years [Bor, Jar30, Kru56], but it was eventually broken by $O(m\log\log n)$-time algorithms [Yao75, CT76]. Fredman and Tarjan [FT87] gave an $O(m\beta(m,n))$ time algorithm by introducing the Fibonacci heap. The current best time bounds for MST include the randomized linear time algorithm by Karger et al. [KKT95], Chazelle's $O(m\alpha(m,n))$ time deterministic algorithm [Cha00] and Pettie and Ramachandran's optimal approach [PR02].
The single-source single-destination version of the bottleneck path (st-BP) problem is known to be equivalent to the Bottleneck Spanning Tree (BST) problem (see [CKT16]). In the bottleneck spanning tree problem, we want to find a spanning tree rooted at the source node minimizing the maximum edge weight in it. For undirected graphs, st-BP can be reduced to the MST problem. For directed graphs, Dijkstra's algorithm [Dij59] gives an $O(m + n\log n)$ time solution using Fibonacci heap [FT87]. Then Gabow and Tarjan [GT88] gave an $O(m\log^* n)$ time algorithm based on recursively splitting edges into levels. Recently, Chechik et al. [CKT16] improved the time complexity of BST and st-BP to randomized $O(m\beta(m,n))$ time, where $\beta(m,n) = \min\{k \ge 1 : \log^{(k)} n \le m/n\}$. All these algorithms are in the comparison-based model. For the word RAM model, a linear-time algorithm has been found by Chechik et al. [CKT16].
For the all-pairs version of the bottleneck path (APBP) problem, we can sort all the edges and use Dijkstra searches to obtain an $O(mn)$ time bound. For dense graphs, it has been shown that APBP can be solved in truly subcubic time. Shapira et al. [SYZ07] gave an $O(n^{2.575})$ time APBP algorithm on vertex-weighted graphs. Then Vassilevska et al. [VWY07] showed that APBP for edge-weighted graphs can be solved in $O(n^{2+\omega/3})$ time based on computing the (max, min)-product of real matrices, which was then improved by Duan and Pettie [DP09] to $O(n^{(3+\omega)/2})$. Here $\omega$ is the exponent in the time bound for fast matrix multiplication [CW90, Wil12].
2 Preliminaries
For a directed graph $G=(V,E)$, we denote by $w(e)$ the weight of an edge $e \in E$. Unless stated otherwise, we use the symbol $n$ to denote the number of nodes and $m$ to denote the number of edges in $G$.
2.1 Bottleneck Path Problems
The capacity of a path is defined to be the minimum weight among its traversed edges, i.e., if a path traverses $e_1, e_2, \dots, e_\ell$, then the capacity of the path is $\min_{1 \le i \le \ell} w(e_i)$. For any $s, t \in V$, a path from $s$ to $t$ with maximum capacity is called a bottleneck path from $s$ to $t$, and we denote this maximum capacity by $b(s,t)$.
Definition 2.
The Single-Source Bottleneck Path (SSBP) problem is: Given a directed graph $G=(V,E)$ with weight function $w$ and a source $s \in V$, output $b(s,v)$ for every $v \in V$, which is the maximum path capacity among all the paths from $s$ to $v$.
We use the triple $(G, w, s)$ to denote an SSBP instance with graph $G$, weight function $w$ and source node $s$.
It is more convenient to present our algorithm on a slight variant of the SSBP problem, which we call the Arbitrary-Source Bottleneck Path with Initial Capacity (ASBPIC) problem. We assume that the weight of an edge is either a real number or infinitely large ($+\infty$). We say an edge $e$ is unrestricted if $w(e) = +\infty$; otherwise we say the edge is restricted. In the ASBPIC problem, an initial capacity $c(v)$ is given for every node $v$, and the capacity of a path is redefined to be the minimum of the initial capacity of the starting node and the minimum edge weight on the path, i.e., if the path starts at the node $u$ and traverses $e_1, \dots, e_\ell$, then its capacity is $\min\{c(u), \min_{1 \le i \le \ell} w(e_i)\}$. For any $v \in V$, a path ended with $v$ with maximum capacity is called a bottleneck path ended with $v$, and we denote this maximum capacity by $b(v)$.
Definition 3.
The Arbitrary-Source Bottleneck Path with Initial Capacity (ASBPIC) problem is: Given a directed graph $G=(V,E)$ with weight function $w$ and initial capacity function $c$, output $b(v)$ for every $v \in V$, which is the maximum path capacity among all the paths ended with $v$.
We use the triple $(G, w, c)$ to denote an ASBPIC instance with graph $G$, weight function $w$, and initial capacity function $c$.
Note that ASBPIC and SSBP are equivalent under linear-time reductions. Given an ASBPIC instance $(G, w, c)$, we construct a new graph $G'$ from $G$ by adding a new node $s$ which has an outgoing edge of weight $c(v)$ to every node $v$ having $c(v) > -\infty$; then it suffices to find bottleneck paths from $s$ to all other nodes in $G'$. On the other hand, an SSBP instance $(G, w, s)$ can be easily reduced to the ASBPIC instance $(G, w, c)$, where $c(s) = +\infty$ and $c(v) = -\infty$ for all $v \ne s$.
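Both reductions are straightforward to code. The sketch below (function names are ours, not the paper's) represents unrestricted weights by `float('inf')` and absent initial capacities by `float('-inf')`; edges are `(u, v, w)` triples over nodes `0..n-1`:

```python
def ssbp_to_asbpic(n, edges, s):
    """SSBP (G, w, s) -> ASBPIC (G, w, c): the source gets infinite
    initial capacity, every other node gets capacity -inf."""
    c = [float('-inf')] * n
    c[s] = float('inf')
    return n, edges, c

def asbpic_to_ssbp(n, edges, c):
    """ASBPIC (G, w, c) -> SSBP: add a super-source s = n with an edge
    of weight c(v) to every node v with c(v) > -inf; bottleneck values
    from s in the new graph equal b(v) in the ASBPIC instance."""
    s = n
    new_edges = list(edges)
    for v in range(n):
        if c[v] > float('-inf'):
            new_edges.append((s, v, c[v]))
    return n + 1, new_edges, s
```

Both directions only scan the nodes and edges once, matching the linear-time claim.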
2.2 Dijkstra’s Algorithm for SSBP and ASBPIC
SSBP can be easily solved using a variant of Dijkstra's algorithm [Dij59]. In this algorithm, each node is in one of three states: unsearched, active, or scanned. We associate each node $v$ with a label $d(v)$, which is the maximum path capacity among all the paths from $s$ to $v$ whose nodes other than $v$ are all scanned.
Initially, all the nodes are unsearched except $s$, which is active, and we set $d(s) = +\infty$ and $d(v) = -\infty$ for all $v \ne s$. We repeat the following step, which we call the Dijkstra step, until no node is active:

Select an active node $u$ with maximum label $d(u)$ and mark $u$ as scanned. For every outgoing edge $(u,v)$ of $u$, update the label of $v$ by
$$d(v) \leftarrow \max\{d(v),\ \min\{d(u),\ w(u,v)\}\} \qquad (1)$$
and mark $v$ as active if $v$ is unsearched.
We use a priority queue to maintain the order of labels of active nodes. This algorithm runs in $O(m + n\log n)$ time when a Fibonacci heap [FT87] is used.
The algorithm introduced above can also be adapted to solve ASBPIC. The only change needed is that in the initial stage all nodes are active and $d(v) = c(v)$ for every $v$. The resulting algorithm again runs in $O(m + n\log n)$ time. We shall call these two algorithms Dijkstra's algorithm for SSBP and Dijkstra's algorithm for ASBPIC, or simply call either of them Dijkstra's algorithm when no confusion can arise.
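As a concrete reference, here is a short Python sketch of the Dijkstra variant for ASBPIC described above. It uses the standard binary heap from `heapq`, so it runs in O(m log n) time rather than the O(m + n log n) obtained with a Fibonacci heap:

```python
import heapq

def bottleneck_dijkstra(n, edges, c):
    """ASBPIC via a max-label Dijkstra variant.
    edges: list of (u, v, w); c: initial capacity per node.
    Returns b[v] = max over paths ending at v of
    min(c(start), edge weights on the path)."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    d = list(c)                       # labels; all nodes start active
    heap = [(-d[v], v) for v in range(n)]
    heapq.heapify(heap)               # min-heap over negated labels
    scanned = [False] * n
    while heap:
        negd, u = heapq.heappop(heap)
        if scanned[u] or -negd < d[u]:
            continue                  # stale heap entry
        scanned[u] = True
        for v, w in adj[u]:
            cand = min(d[u], w)       # capacity through u then edge (u, v)
            if cand > d[v]:           # update rule (1)
                d[v] = cand
                heapq.heappush(heap, (-d[v], v))
    return d
```

Running it on the SSBP reduction (source capacity $+\infty$, all other capacities $-\infty$) solves SSBP directly.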
2.3 Weak and Strong Connectivity in Directed Graph
We also need some definitions about connectivity in graph theory in this paper. A directed graph is said to be weakly-connected if it becomes a connected undirected graph when all of its directed edges are replaced by undirected edges. A directed graph is said to be strongly-connected if every node can be reached from every other node. A weakly (or strongly) connected component is defined to be a maximal weakly (or strongly) connected subgraph.
3 Intuitions for SSBP
If all the edge weights are integers in $\{1, \dots, m\}$, then SSBP can be solved in $O(m+n)$ time using Dijkstra's algorithm with a bucket queue. If the edge weights are not necessarily small integers but all the edges given to us are already sorted by weight, then we can replace the edge weights by their ranks and use Dijkstra's algorithm with a bucket queue to solve the problem in $O(m+n)$ time. However, edges are not sorted in general. If we sort the edges directly, then a cost of $\Omega(m\log m)$ time is unavoidable in a comparison-based model, which is more expensive than the running time of Dijkstra's algorithm.
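The rank-replacement idea can be made concrete as follows. This sketch (our own illustration) sorts the edges once, relabels each weight by its rank, and then runs the bucket-queue variant of Dijkstra's algorithm from the highest bucket down; the initial sort is exactly the Ω(m log m) cost that the rest of the paper works to avoid:

```python
def ssbp_by_ranks(n, edges, s):
    """Bucket-queue bottleneck Dijkstra after replacing weights by ranks.
    The sort dominates (O(m log m)); the search itself is linear once
    the ranks are known."""
    order = sorted(range(len(edges)), key=lambda i: edges[i][2])
    rank = {}
    for r, i in enumerate(order, start=1):
        rank[i] = r                    # rank 0 is reserved for "no path yet"
    adj = [[] for _ in range(n)]
    for i, (u, v, w) in enumerate(edges):
        adj[u].append((v, rank[i], w))
    top = len(edges) + 1               # label of the source, above all ranks
    d = [0] * n                        # labels are ranks
    best_w = [float('-inf')] * n       # actual bottleneck weights
    d[s], best_w[s] = top, float('inf')
    buckets = [[] for _ in range(top + 1)]
    buckets[top].append(s)
    scanned = [False] * n
    for k in range(top, 0, -1):        # scan buckets from highest label down
        for u in buckets[k]:
            if scanned[u] or d[u] != k:
                continue               # stale bucket entry
            scanned[u] = True
            for v, r, w in adj[u]:
                nk = min(d[u], r)      # rank-valued update rule
                if nk > d[v]:
                    d[v] = nk
                    best_w[v] = min(best_w[u], w)
                    buckets[nk].append(v)
    return best_w
```

Because ranks and weights are order-consistent, the rank-valued labels select the same paths as the weight-valued ones.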
Our algorithm is inspired by previous works on the single-source single-destination bottleneck path problem (st-BP): the $O(m\log^* n)$ time algorithm by Gabow and Tarjan [GT88] and the $O(m\beta(m,n))$ time algorithm by Chechik et al. [CKT16]. Gabow and Tarjan's algorithm for st-BP consists of several stages. Let $b^*$ be the capacity of a bottleneck path from $s$ to $t$. Initially, we know that $b^*$ is in the interval $(-\infty, +\infty)$. In each stage, we narrow the interval of possible values of $b^*$. Assume that $b^*$ is known to be in the range $(a, b]$. Let $m'$ be the number of edges with weights in the range $(a, b]$ and let $k$ be a parameter. By applying the median-finding algorithm [BFP73] repeatedly, we choose $k-1$ thresholds to split $(a, b]$ into $k$ subintervals such that each subinterval contains $O(m'/k)$ edge weights. Gabow and Tarjan then show that locating which subinterval contains $b^*$ can be done by incremental search. Finally, the $O(m\log^* n)$ running time bound is achieved by setting $k$ appropriately at each stage.
The algorithm by Chechik et al. is based on the framework of Gabow and Tarjan's algorithm, but instead of selecting the thresholds by repeated median-finding, in this algorithm we select the thresholds by randomly sampling $k-1$ edge weights and sorting them in $O(k\log k)$ time. These thresholds partition the edges evenly with high probability, but computing the partition explicitly would require $O(m'\log k)$ time. Then they show that we can actually locate which subinterval contains $b^*$ without computing the explicit partition. The time bound for the overall algorithm is again obtained by setting $k$ appropriately at each stage.

We adapt Chechik et al.'s framework for the st-BP problem to the SSBP problem. Our SSBP algorithm actually works on an equivalent problem called ASBPIC. In ASBPIC, there is no fixed source but every node has an initial capacity, and for every destination $v$ we need to compute the capacity of a bottleneck path ended with $v$ (see Section 2.1 for details). Instead of locating the subinterval for a single destination $t$, our algorithm locates the subintervals for all destinations. Thus we adopt a divide-and-conquer methodology. At each recursion, we follow the idea from Chechik et al. [CKT16] to randomly sample thresholds. Then we split the nodes into levels $V_1, \dots, V_k$, where the $i$th level contains the nodes $v$ that have $b(v)$ in the $i$th subinterval. For each level of nodes, we compute $b(v)$ for every $v$ in the level by reducing to an ASBPIC instance on a subgraph consisting of all the nodes in the level and some of the edges connecting them, which is then solved recursively. We set $k$ to be fixed in all recursive calls, and the maximum recursion depth is $O(\log m/\log k)$ with high probability.
The split algorithm becomes the key part of our algorithm. Note that at each recursion, we should reduce or avoid the use of operations that cost $O(\log k)$ time per node or per edge (e.g., binary searching for the subinterval containing a particular edge weight). This is because, for example, if we execute an $O(\log k)$ time operation for each edge at each recursion, then the overall time cost is $O(m\log m)$, which means no improvement compared with the previous $O(m + n\log n)$ time Dijkstra's algorithm. Surprisingly, we can design an algorithm such that whenever we execute an $O(\log k)$ time operation, we can always find one edge that does not appear in any subsequent recursive calls. Thus the total time complexity for such operations is $O(m\log k)$, which gives us some room to obtain a better time complexity by adjusting the parameter $k$.
4 Our Algorithm
Our algorithm for SSBP is as follows: Given an SSBP instance $(G, w, s)$, we first reduce the SSBP problem to an ASBPIC instance $(G, w, c)$, and then use a recursive algorithm (Figure 1) to solve the ASBPIC problem. The reduction is done by setting $c(s) = +\infty$ and $c(v) = -\infty$ for all $v \ne s$, as described in the preliminaries.
For convenience, we assume that in the original graph, all edge weights are distinct. This assumption can be easily removed.
A high-level description of our recursive algorithm for ASBPIC is shown in Figure 1. For two sets $A$ and $B$, $A \uplus B$ stands for the union of $A$ and $B$ under the assumption that $A \cap B = \emptyset$. We use $R$ to denote the set of restricted edges in $G$, and similarly we use $R_i$ to denote the set of restricted edges in $G_i$ for each ASBPIC instance $I_i = (G_i, w_i, c_i)$. When the thresholds $\lambda_1 < \cdots < \lambda_{k-1}$ are present (with the conventions $\lambda_0 = -\infty$ and $\lambda_k = +\infty$), we define the index of a value $x$ to be the unique index $i$ such that $\lambda_{i-1} \le x < \lambda_i$, and we denote it by $\mathit{idx}(x)$; for $x = +\infty$ we define $\mathit{idx}(x) = k$. Note that all the subgraphs $G_1, \dots, G_k$ at Line 12 are disjoint. We denote by $\mathit{ex}(G)$ the total number of edges in $G$ that do not appear in any recursive call made at Line 12. For an edge $(u,v)$, if $u$ and $v$ belong to different levels, then we say that $(u,v)$ is cross-level. If $u$ and $v$ belong to the same level $i$ and $\mathit{idx}(w(u,v)) < i$, then we say that $(u,v)$ is below-level; conversely, if $\mathit{idx}(w(u,v)) > i$ then we say that $(u,v)$ is above-level.
Besides the problem instance of ASBPIC, our algorithm requires an additional integer parameter $k \ge 2$, which is fixed throughout all recursive calls (we set $k = 2^{\lceil\sqrt{\log n}\rceil}$ in Section 4.1). The value of the parameter $k$ does not affect the correctness of our algorithm, but it controls the number of recursive calls made at each recursion.
At each recursion, our algorithm first checks whether $G$ contains only one weakly-connected component. If not, then our algorithm calls itself to compute $b(v)$ in each weakly-connected component recursively. Now we can assume that $G$ is weakly-connected (so $m \ge n-1$).
If the number of restricted edges is at most one, we claim that we can compute $b(v)$ for all $v \in V$ in linear time. The specific algorithm will be introduced in Section 4.2.
Lemma 4.
ASBPIC can be solved in $O(m+n)$ time if there is at most one restricted edge.
If the number of restricted edges is more than one, then our algorithm first samples $\min\{k-1, |R|\}$ distinct edges from $R$ uniformly at random and sorts them by weight; that is, if the number of restricted edges is more than $k-1$, then we sample $k-1$ distinct restricted edges and sort them; otherwise we just sort all the restricted edges. Let $\lambda_i$ be the weight of the sampled edge with rank $i$ ($1 \le i \le k-1$), and let $\lambda_0 = -\infty$, $\lambda_k = +\infty$.
Next, we split $V$ into $k$ levels of nodes $V_1, \dots, V_k$, where the $i$th level of nodes is $V_i = \{v \in V : \mathit{idx}(b(v)) = i\}$. The basic idea of the split algorithm is: we run Dijkstra's algorithm for ASBPIC on the graph produced by mapping every edge weight and initial capacity in $G$ to their indices $\mathit{idx}(w(e))$ and $\mathit{idx}(c(v))$, and we obtain the final label value $d(v)$ for each node (remember that $d(v)$ is the label of $v$ in Dijkstra's algorithm). It is easy to show that the final label value $d(v)$ equals $\mathit{idx}(b(v))$, so the nodes can be easily split into levels according to their final labels. The specific split algorithm will be introduced in Section 4.3. The time complexity for a single splitting is given below. In Theorem 9 we show that this implies that the total time cost for splitting is $O((m+n)\log n/\log k + m\log k)$.
Lemma 5.
Splitting $V$ into levels at Line 11 can be done in $O(m + n + \mathit{ex}(G)\log k)$ time (recall that $\mathit{ex}(G)$ is the number of edges that do not appear in any subsequent recursive calls).
Finally, for every level $i$, we compute $b(v)$ for the nodes in this level by reducing to a new ASBPIC instance $I_i = (G_i, w_i, c_i)$, where $G_i$ is a subgraph of $G$ consisting of all the nodes in $V_i$ and some of the edges that connect them. We solve each new instance by a recursive call. The construction of $I_i$ is as follows:

$G_i = (V_i, E_i)$, where $V_i$ is the set of nodes at level $i$ in $G$, and $E_i$ is the set of edges which connect two nodes at level $i$ and are not below-level, i.e., $E_i = \{(u,v) \in E : u, v \in V_i,\ \mathit{idx}(w(u,v)) \ge i\}$;

For any $e \in E_i$, $w_i(e) = +\infty$ if $e$ is above-level; otherwise $w_i(e) = w(e)$;

For any $v \in V_i$, $c_i(v) = \max\{c(v),\ \max\{w(u,v) : (u,v) \in E,\ u \in V_j \text{ for some } j > i\}\}$.
Lemma 6.
We can construct all the new ASBPIC instances $I_1, \dots, I_k$ in $O(m+n)$ time. For $v \in V_i$, the value of $b(v)$ in the instance $I_i$ exactly equals the value of $b(v)$ in the instance $I = (G, w, c)$.
Proof.
We can construct these instances for all $i$ by linearly scanning all the nodes and edges, which takes $O(m+n)$ time. We prove the correctness by transforming $I$ into $I_i$ step by step, while preserving the values of $b(v)$ for all $v \in V_i$.
By definition, $\lambda_{i-1} \le b(v) < \lambda_i$ for all $v \in V_i$. We can delete all the nodes at levels less than $i$ and delete all the edges with weight less than $\lambda_{i-1}$, since no bottleneck path ended with a node in $V_i$ can traverse them. Also, for every edge with weight at least $\lambda_i$, we can replace the edge weight with $+\infty$, since such a weight is certainly not the minimum in any bottleneck path ended with a node in $V_i$.
For every edge $(u,v)$ where $u \in V_j$ with $j > i$ and $v \in V_i$, the edge weight must be less than $\lambda_i$, since otherwise $b(v) \ge \min\{b(u), w(u,v)\} \ge \lambda_i$ leads to a contradiction. Thus contracting all the nodes in $V_{i+1} \cup \cdots \cup V_k$ to a single node with infinite initial capacity is a transformation preserving the values of $b(v)$ for all $v \in V_i$. Finally, our construction follows by taking $c_i(v)$ to be the maximum between the weights of the incoming edges from the contracted node and the initial capacity $c(v)$, for every $v \in V_i$. ∎
Remark 4.1.
In the subsequent recursive calls on $I_1, \dots, I_k$, neither cross-level nor below-level edges appear, and all the above-level edges become unrestricted. Also, it is easy to see that $\mathit{ex}(G)$ is just the total number of cross-level and below-level edges (recall that $\mathit{ex}(G)$ is the number of edges that do not appear in any subsequent recursive calls).
4.1 Running Time
First we analyze the maximum recursion depth. The following lemma shows that randomly sampled thresholds evenly partition the restricted edges with high probability.
Lemma 7.
Let $e_1, \dots, e_N$ be the restricted edges sorted by their weights in $G$. Let $f_1, \dots, f_{k-1}$ be random edges sampled from $\{e_1, \dots, e_N\}$ such that $w(f_1) < w(f_2) < \cdots < w(f_{k-1})$. Let $\lambda_i = w(f_i)$ for $1 \le i \le k-1$, and $\lambda_0 = -\infty$, $\lambda_k = +\infty$. Let $N_i = |\{j : \lambda_{i-1} \le w(e_j) < \lambda_i\}|$. Then for every $i$, $N_i = O((N/k)\log N)$ holds with probability $1 - N^{-\Omega(1)}$.
Proof.
Let $t = c(N/k)\log N$ for a sufficiently large constant $c$. If $N_i > t$, then there exists an edge $e_j$ such that $e_j$ is chosen but none of $e_{j+1}, \dots, e_{j+t}$ is chosen. Note that when $j$ is given, this event happens with probability at most $(1 - t/N)^{k-2} \le e^{-\Omega(tk/N)} = N^{-\Omega(c)}$. By the union bound over all possible $i$ and $j$, the failure probability is $N^{-\Omega(1)}$,
which completes the proof. ∎
For our purposes it is enough to analyze the case that $N \ge k$. The following lemma gives a bound for the maximum recursion depth using Lemma 7.
Lemma 8.
Let $n$ be the number of nodes at the top level of recursion. For any $k \ge 2$, the maximum recursion depth is $O(\log n/\log k)$ with probability $1 - n^{-\Omega(1)}$.
Proof.
It is not hard to see that the total number of recursive calls of our main algorithm is polynomially bounded. Applying Lemma 7 together with the union bound over all recursive calls, we know that with probability at least $1 - n^{-\Omega(1)}$, after every split with $|R| \ge k$, the number of restricted edges in $G_i$ is at most $O((|R|/k)\log|R|)$ for every $i$. Thus after $O(\log n/\log k)$ levels of recursion, every ASBPIC instance has at most one restricted edge, and such instances are directly solved at Line 7. ∎
The overall time complexity of our algorithm is given by the following theorem:
Theorem 9.
For any $2 \le k \le m$, with probability $1 - n^{-\Omega(1)}$, our main algorithm shown in Figure 1 runs in $O((m+n)\log n/\log k + m\log k)$ time.
Proof.
Let $G$ be the graph and $R$ the set of restricted edges of the current recursive call. First we show that the running time of each recursive call is $O(m + n + (\mathit{ex}(G)+1)\log k)$, up to contributions that total $O(m\log k)$ over the whole algorithm.
In each recursive call of our algorithm, the time cost for sorting at Line 9 is $O(\min\{k,|R|\}\log k)$. For the sampled edge with rank $i$, either the level $V_i$ is not empty, or this edge is cross-level, below-level, or above-level. Let $k_1$ be the number of sampled edges in the former case, and $k_2$ be the number of sampled edges in the latter case. For the former case, note that we only run the split algorithm on weakly-connected graphs, so there are at least $k_1 - 1$ cross-level edges, which implies $k_1 \le \mathit{ex}(G) + 1$. For the latter case, every such sampled edge becomes unrestricted or does not appear in the subsequent recursive calls; since an edge can become unrestricted or disappear only once during the whole algorithm, the total contribution of this case over all recursive calls is $O(m\log k)$.
By Lemma 5, the split algorithm runs in $O(m + n + \mathit{ex}(G)\log k)$ time in each recursive call. All other parts of our algorithm run in linear time. Note that the recursion depth is $O(\log n/\log k)$ with probability $1 - n^{-\Omega(1)}$, that the instances at each recursion depth are disjoint, and that each edge is counted by $\mathit{ex}$ at most once over the whole algorithm. We can complete the proof by adding the time cost of all the recursive calls together. ∎
Finally, we can get our main result by setting $k = 2^{\lceil\sqrt{\log n}\rceil}$.
Theorem 10.
SSBP can be solved in $O(m\sqrt{\log n})$ time with high probability.
Remark 4.2.
The above time bound also holds for the expected running time. This can be easily derived from the fact that the worst-case running time is polynomially bounded while the failure probability is $n^{-\Omega(1)}$.
In the rest of this section, we introduce the algorithm for ASBPIC with at most one restricted edge and the split algorithm.
4.2 Algorithm for the Graph with at most One Restricted Edge
We introduce our algorithm for the graph with at most one restricted edge in the following two lemmas.
Lemma 11.
For a given ASBPIC instance $(G, w, c)$, if there is no restricted edge in $G$, then the values of $b(v)$ for all $v \in V$ can be computed in linear time.
Proof.
$G$ contains only unrestricted edges, so for every $v$, $b(v)$ is just equal to the maximum value of $c(u)$ among all the nodes $u$ that can reach $v$ (including $v$ itself). If $u$ and $v$ are in the same strongly-connected component, then $b(u) = b(v)$. Thus we can use Tarjan's algorithm [Tar72] to contract every strongly-connected component to a single node. The initial capacity of a contracted node is the maximum of $c(u)$ over all $u$ in the component, and the weight of an edge between contracted nodes is the maximum among the edges connecting the two components. Then a Dijkstra-style scan on the resulting DAG in topological order takes linear time. ∎
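A minimal Python sketch of this contraction argument, using Kosaraju's SCC algorithm in place of Tarjan's (both run in linear time; Kosaraju conveniently discovers components in topological order of the condensation):

```python
def max_capacity_reachability(n, edges, c):
    """All edges unrestricted: b(v) = max c(u) over all u that can reach v.
    Kosaraju SCC condensation, then propagate maxima along the
    condensation DAG in topological order."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    # First pass: iterative DFS postorder on G.
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        stack, seen[s] = [(s, iter(adj[s]))], True
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(u)
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(adj[nxt])))
    # Second pass: DFS on the reverse graph in reverse postorder labels
    # the SCCs; components come out in topological order of G's condensation.
    comp, ncomp = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        stack, comp[s] = [s], ncomp
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = ncomp
                    stack.append(v)
        ncomp += 1
    # Propagate the component maxima along the condensation DAG.
    best = [float('-inf')] * ncomp
    members = [[] for _ in range(ncomp)]
    for v in range(n):
        best[comp[v]] = max(best[comp[v]], c[v])
        members[comp[v]].append(v)
    for cid in range(ncomp):
        for u in members[cid]:
            for v in adj[u]:
                if comp[v] != cid:
                    best[comp[v]] = max(best[comp[v]], best[cid])
    return [best[comp[v]] for v in range(n)]
```

Each of the three phases is a single linear scan, matching the lemma's bound.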
Lemma 12.
For a given ASBPIC instance $(G, w, c)$, if there is exactly one restricted edge in $G$, then the values of $b(v)$ for all $v \in V$ can be computed in linear time.
Proof.
Let $e^* = (x, y)$ be the only restricted edge in $G$. There are two kinds of paths in $G$:

Paths that do not traverse $e^*$. We remove $e^*$ from $G$ and use the algorithm in Lemma 11 to get $b(v)$ for every node $v$.

Paths that traverse $e^*$. Note that $b(x)$ obtained in the previous step is the maximum capacity of a path to $x$ through only unrestricted edges. We then update $b(v) \leftarrow \max\{b(v), \min\{b(x), w(e^*)\}\}$ for every node $v$ that can be reached from $y$.
We output the values of $b(v)$ after these two kinds of updates; then all the paths have been taken into account. ∎
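The two steps above can be sketched in Python as follows. The BFS-style fixpoint used here is a simple stand-in for the linear-time routine of Lemma 11, and all names are our own:

```python
from collections import deque

def asbpic_one_restricted(n, edges, c):
    """Lemma 12 sketch. `edges` holds (u, v, w) with at most one finite
    w (the restricted edge); the rest have w = inf.  Step 1 ignores the
    restricted edge; step 2 pushes min(b(x), w) to everything reachable
    from y, where (x, y, w) is the restricted edge."""
    INF = float('inf')
    restricted = [e for e in edges if e[2] < INF]
    assert len(restricted) <= 1
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        if w == INF:
            adj[u].append(v)

    def propagate(b, queue):
        # Push maxima over unrestricted edges until a fixpoint is reached.
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if b[u] > b[v]:
                    b[v] = b[u]
                    queue.append(v)

    # Step 1: initial capacities over unrestricted edges only.
    b = list(c)
    propagate(b, deque(range(n)))
    # Step 2: account for paths using the single restricted edge.
    if restricted:
        x, y, w = restricted[0]
        cap = min(b[x], w)
        if cap > b[y]:
            b[y] = cap
            propagate(b, deque([y]))
    return b
```

Reusing the restricted edge a second time can never increase a path's capacity, so one round of each step suffices.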
4.3 Split
Now we introduce the split algorithm at Line 11 of our main algorithm. As before, we use the notation $\mathit{idx}(x)$ for the index of a value $x$, and $d(v)$ is the label of $v$ in Dijkstra's algorithm. The goal of this procedure is to split $V$ into $k$ levels $V_1, \dots, V_k$, where $V_i = \{v \in V : \mathit{idx}(b(v)) = i\}$. We need to show that this can be done in $O(m + n + \mathit{ex}(G)\log k)$ time, where $\mathit{ex}(G)$ is the total number of edges in $G$ that do not appear in any $G_i$.
A straightforward approach to achieve this goal is to use Dijkstra's algorithm as described in Section 2.2. We map all the edge weights and initial capacities to their indices using binary searches, and run Dijkstra's algorithm for ASBPIC. The output is exactly $\mathit{idx}(b(v))$ for every $v$. However, this approach is rather inefficient: evaluating the index of a given value requires $O(\log k)$ time in a comparison-based model, thus in total this algorithm needs $O((m+n)\log k)$ time to compute indices, and this does not meet our requirement for the running time.
The major bottleneck of the above algorithm is the inefficient index evaluations. Our algorithm overcomes this bottleneck by reducing the number of index evaluations for both edge weights and initial capacities to at most $O(\mathit{ex}(G) + n/\log k)$.
4.3.1 Index Evaluation for Edge Weights
First we introduce our idea for reducing the number of index evaluations for edge weights. Recall that in Dijkstra's algorithm for ASBPIC we maintain a label $d(v)$ for every $v$. In every Dijkstra step, we extract an active node $u$ with the maximum label, and for each outgoing edge $(u,v)$ we compute $\min\{d(u), \mathit{idx}(w(u,v))\}$ to update $d(v)$. In the straightforward approach we evaluate every $\mathit{idx}(w(u,v))$ using binary search, but actually this is a big waste:

If $w(u,v) \ge \lambda_{d(u)-1}$, then $\min\{d(u), \mathit{idx}(w(u,v))\} = d(u)$, so there is no need to evaluate $\mathit{idx}(w(u,v))$.

If $w(u,v) < \lambda_{d(u)-1}$, then $\min\{d(u), \mathit{idx}(w(u,v))\} = \mathit{idx}(w(u,v)) < d(u)$, so we do need to evaluate $\mathit{idx}(w(u,v))$. However, it can be shown that $(u,v)$ is then either a cross-level edge or a below-level edge, so $(u,v)$ will not appear in any subsequent recursive calls.
Using the method discussed above, we can reduce the number of index evaluations for edge weights in Dijkstra's algorithm to at most $\mathit{ex}(G)$. Lemma 14 gives a formal proof of this.
4.3.2 Index Evaluation for Initial Capacities
Now we introduce our idea for reducing the number of index evaluations for initial capacities. Recall that in Dijkstra's algorithm, we need to initialize the label $d(v)$ to $\mathit{idx}(c(v))$ for each $v$, and maintain a priority queue over the labels of all active nodes. If we evaluate every $\mathit{idx}(c(v))$ directly, we have to pay a time cost of $O(n\log k)$.
In our split algorithm, we first find a spanning tree $T$ of $G$ after replacing all the directed edges with undirected edges. Then we partition the tree into $g$ edge-disjoint (but not necessarily node-disjoint) subtrees $T_1, \dots, T_g$, each of size $\Theta(\log k)$. Theorem 13 shows that such a partition can be found in $O(n)$ time, and the proof is given in Appendix A.
Theorem 13.
Given a tree $T$ with $n$ nodes and an integer $z \le n$, there exists a linear time algorithm that partitions $T$ into edge-disjoint subtrees $T_1, \dots, T_g$ such that the number of nodes in each subtree is in the range $[z, 3z]$.
We form $g$ groups of nodes $S_1, \dots, S_g$, where $S_j$ is the group of nodes that are in the $j$th subtree $T_j$. During the run of Dijkstra's algorithm, we divide the active nodes in each group into two kinds:

Updated node. This kind of node has already been updated by Dijkstra's update rule (1), which means $d(v) = \min\{d(u), \mathit{idx}(w(u,v))\}$ for some $u$ after a previous update. The value of $\mathit{idx}(w(u,v))$ is evaluated according to Section 4.3.1, so the value of $d(v)$ is readily known. We store such nodes in buckets to maintain the order of their labels.

Initializing node. This kind of node has not yet been updated by Dijkstra's update rule (1), so $d(v) = \mathit{idx}(c(v))$. However, we do not store the value of $d(v)$ explicitly. For each group $S_j$, we maintain the set of initializing nodes in $S_j$. We only compute the value of $\mathit{idx}(c(v))$ when $c(v)$ is the maximum in its group, and use buckets to maintain the maximum values from all groups. Since each group has size $O(\log k)$, the maximum can be found in $O(\log k)$ time by brute-force search.
At each iteration of Dijkstra's algorithm, we extract the active node $v$ with the maximum label among the updated nodes and the group maxima of the initializing nodes, and mark it as scanned. In the case that the maximum node $v$ is an initializing node in group $S_j$, we remove $v$ from $S_j$ and compute $\mathit{idx}(c(v'))$ for the new maximum node $v'$ in $S_j$. However, if we computed this value directly using an index evaluation, then we would suffer a total cost of $O(n\log k)$, which is rather inefficient.
The idea for speeding this up is to check whether the new maximum still falls in the subinterval currently being scanned before performing an index evaluation; this check takes $O(1)$ time. We only actually evaluate $\mathit{idx}(c(v'))$ after the Dijkstra step has scanned all nodes of the current level. In this way, we can always ensure that the number of index evaluations in a group never exceeds the number of different final label values in this group. Indeed, it can be shown that if there are $q$ different final label values in a group $S_j$, then there must be at least $q-1$ cross-level edges in $T_j$, which implies that the number of index evaluations for initial capacities is no greater than $\mathit{ex}(G) + g$. (Remember $g$ is the number of groups in the partition.) Lemma 15 gives a formal proof of this.
4.3.3 The $O(m + n + \mathit{ex}(G)\log k)$ Split Algorithm
Now we are ready to introduce our split algorithm in full detail. Pseudocode is shown in Figure 2. During the search at Lines 5–21, the bucket structure may contain groups whose maximum nodes are not at the current level, e.g., when the maximum node in a group is deleted at Line 17 and the index of the new maximum node in that group has not been evaluated yet. We have the following observations:

For all $v$, $d(v)$ is non-decreasing, and at the end we have $d(v) = \mathit{idx}(b(v))$.

Only at Lines 3, 13 and 21 do we need to evaluate the index of $c(v)$ for a node $v$ or the index of an edge weight. Each index evaluation costs $O(\log k)$ time.

The numbers of executions of the while loops at Lines 10–18 and Lines 19–21 are bounded by $O(m+n)$.

The number of times we enter the for loop at Lines 7–9 can be bounded by the number of index evaluations at Lines 3 and 21. Each iteration costs $O(\log k)$ time.
Our algorithm is an implementation of Dijkstra’s algorithm, so the correctness is obvious. The running time analysis is based on the following lemmas:
Lemma 14.
If we evaluate $\mathit{idx}(w(u,v))$ at Line 13, then the edge $(u,v)$ will not be in any recursive call made at Line 12.
Proof.
We evaluate $\mathit{idx}(w(u,v))$ only if $w(u,v) < \lambda_{d(u)-1}$, so $d(v) = \mathit{idx}(w(u,v)) < d(u)$ right after Line 13. Since $u$ has already been scanned, $d(u)$ here is just its final value $\mathit{idx}(b(u))$. Note that $d(v)$ never decreases. If finally $d(v) > \mathit{idx}(w(u,v))$, then $\mathit{idx}(w(u,v))$ is smaller than both $\mathit{idx}(b(u))$ and $\mathit{idx}(b(v))$, thus $(u,v)$ is a below-level edge or a cross-level edge. If finally $d(v) = \mathit{idx}(w(u,v))$, then $\mathit{idx}(b(v)) < \mathit{idx}(b(u))$ and $(u,v)$ is a cross-level edge. ∎
Lemma 15.
At Lines 3 and 21, if we evaluate $\mathit{idx}(c(v))$ for $q$ nodes in some group $S_j$, then the number of different final label values in $S_j$ is at least $q$. Thus, the number of edges in the subtree $T_j$ corresponding to $S_j$ that do not appear in any recursive call made at Line 12 is at least $q-1$.
Proof.
At Line 21, the extracted node must have its label less than the level currently being scanned; otherwise, it should have been extracted at Line 11 before the extraction at Line 20, which is impossible. Also note that the levels scanned by the algorithm are non-increasing over time, so successive evaluations within a group happen at distinct levels. Thus, if $\mathit{idx}(c(v))$ is evaluated for one node of $S_j$ at Line 3 and for $q-1$ nodes extracted from $S_j$ at Line 21, then the corresponding labels lie in $q$ distinct ranges, which implies that the number of different final values of $d$ in $S_j$ is at least $q$.
Now suppose we remove all the cross-level edges in $T_j$, i.e., remove all the edges in $T_j$ whose endpoints have different final values of $d$. Then the tree $T_j$ decomposes into at least $q$ components, since there are at least $q$ different final values of $d$ in $S_j$. Thus there are at least $q-1$ cross-level edges in $T_j$. ∎
Finally, we can derive the time bound for our split algorithm.
Proof of Lemma 5.
By Lemma 14, the number of index evaluations at Line 13 is at most $\mathit{ex}(G)$. Let $q_j$ be the number of index evaluations at Lines 3 and 21 for nodes in the group $S_j$. Then by Lemma 15, $\sum_j (q_j - 1) \le \mathit{ex}(G)$. Thus the total number of index evaluations in our split algorithm is bounded by $\mathit{ex}(G) + \sum_j q_j \le 2\,\mathit{ex}(G) + g$, which costs $O((\mathit{ex}(G)+g)\log k) = O(\mathit{ex}(G)\log k + n)$ time.
At Line 3, Lines 7–9 and Line 21, we need to do a brute-force search within a given group, and each search costs $O(\log k)$ time. Note that the number of brute-force searches can be bounded by twice the number of index evaluations at Lines 3 and 21, so the total time cost for brute-force search is $O((\mathit{ex}(G)+g)\log k) = O(\mathit{ex}(G)\log k + n)$.
It can be easily seen that all other parts of our split algorithm run in linear time, so the overall time complexity is $O(m + n + \mathit{ex}(G)\log k)$. ∎
5 Discussion
We give an improved algorithm solving SSBP in $O(m\sqrt{\log n})$ time, which is faster than sorting for sparse graphs. This algorithm is a breakthrough compared to the traditional Dijkstra's algorithm with Fibonacci heap. Some open questions remain. Can our algorithm for SSBP be further improved to $O(m\beta(m,n))$, the time complexity of the current best algorithm for st-BP? Can our ideas yield an algorithm for SSBP that runs faster than Dijkstra's algorithm in the word RAM model?
References
 [BFP73] Manuel Blum, Robert W. Floyd, Vaughan R. Pratt, Ronald L. Rivest, and Robert Endre Tarjan. Time bounds for selection. J. Comput. Syst. Sci., 7(4):448–461, 1973.
 [BKS02] Georg Baier, Ekkehard Köhler, and Martin Skutella. On the k-splittable flow problem. In European Symposium on Algorithms, pages 101–113. Springer, 2002.
 [Bor] O. Borůvka. O jistém problému minimálním. Práca Moravské Přírodovědecké Společnosti 3, 1926.
 [Cha00] B. Chazelle. A minimum spanning tree algorithm with inverse-Ackermann type complexity. J. ACM, 47(6):1028–1047, 2000.
 [CKT16] Shiri Chechik, Haim Kaplan, Mikkel Thorup, Or Zamir, and Uri Zwick. Bottleneck paths and trees and deterministic graphical games. In LIPIcs–Leibniz International Proceedings in Informatics, volume 47. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2016.
 [CT76] D. Cheriton and R. E. Tarjan. Finding minimum spanning trees. SIAM J. Comput., 5:724–742, 1976.
 [CW90] Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 9(3):251 – 280, 1990. Computational algebraic complexity editorial.
 [Dij59] Edsger W Dijkstra. A note on two problems in connexion with graphs. Numerische mathematik, 1(1):269–271, 1959.
 [DP09] Ran Duan and Seth Pettie. Fast algorithms for (max, min)-matrix multiplication and bottleneck shortest paths. In Proceedings of the twentieth annual ACM-SIAM symposium on Discrete algorithms, pages 384–391. Society for Industrial and Applied Mathematics, 2009.
 [EK72] Jack Edmonds and Richard M Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM), 19(2):248–264, 1972.
 [FF56] L. R. Ford and D. R. Fulkerson. Maximal flow through a network. Canadian Journal of Mathematics, 8(0):399–404, January 1956.
 [FGA98] Elena Fernandez, Robert Garfinkel, and Roman Arbiol. Mosaicking of aerial photographic maps via seams defined by bottleneck shortest paths. Oper. Res., 46(3):293–304, March 1998.
 [Fre85] Greg N Frederickson. Data structures for online updating of minimum spanning trees, with applications. SIAM Journal on Computing, 14(4):781–798, 1985.
 [FT87] Michael L Fredman and Robert Endre Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. Journal of the ACM (JACM), 34(3):596–615, 1987.
 [GT88] Harold N Gabow and Robert E Tarjan. Algorithms for two bottleneck optimization problems. Journal of Algorithms, 9(3):411–417, 1988.
 [Hu61] T. C. Hu. The maximum capacity route problem. Operations Research, 9(6):898–900, 1961.
 [Jar30] Vojtěch Jarník. O jistém problému minimálním. Práce Moravské Přírodovědecké Společnosti, 6:57–63, 1930.
 [KKT95] David R Karger, Philip N Klein, and Robert E Tarjan. A randomized linear-time algorithm to find minimum spanning trees. Journal of the ACM (JACM), 42(2):321–328, 1995.
 [Kru56] Joseph B Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical society, 7(1):48–50, 1956.
 [PR02] S. Pettie and V. Ramachandran. An optimal minimum spanning tree algorithm. J. ACM, 49(1):16–34, 2002.
 [SYZ07] Asaf Shapira, Raphael Yuster, and Uri Zwick. All-pairs bottleneck paths in vertex weighted graphs. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, pages 978–985. Society for Industrial and Applied Mathematics, 2007.
 [Tar72] Robert Tarjan. Depthfirst search and linear graph algorithms. SIAM journal on computing, 1(2):146–160, 1972.

 [VWY07] Virginia Vassilevska, Ryan Williams, and Raphael Yuster. All-pairs bottleneck paths for general graphs in truly subcubic time. In Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, pages 585–589. ACM, 2007.
 [Wil12] Virginia Vassilevska Williams. Multiplying matrices faster than Coppersmith-Winograd. In Proceedings of the 44th symposium on Theory of Computing, STOC '12, pages 887–898, New York, NY, USA, 2012. ACM.
 [Yao75] A. C. Yao. An algorithm for finding minimum spanning trees. Info. Proc. Lett., 4(1):21–23, 1975.
Appendix A Tree Partition Algorithm
Now we introduce the tree partition algorithm used in our ASBPIC algorithm. There are many ways to do this; the tree partition algorithm we introduce here is a slight variant of the topological partition algorithm used in Frederickson's algorithm [Fre85].
Proof of Theorem 13
Proof.
The core procedure of our tree partition algorithm is a recursive function Partition, shown in Figure 3. Partition takes a tree as input and partitions the tree by calling itself on subtrees. Using simple induction, it can be shown that at any time from Line 3 to Line 9, the maintained node set always induces a connected subgraph of the tree, thus the induced subgraph is indeed a subtree. Every time Partition reports a group at Line 7, we say that the subtree induced by the reported set is chosen as a subtree candidate. The return value of Partition is a set of nodes which induces a subtree whose edges do not appear in any subtree candidate.
To produce a tree partition for the spanning tree $T$ of $G$, we pass $T$ as the input to Partition. We collect the groups reported by Partition in order, and then merge the last group with the set of nodes returned by Partition, or regard the returned set as a new group if no group has been reported. In the former case, let $r$ be the root node at the moment the last group is reported; then $r$ must be contained in both the last group and the returned set, so after the merge the last group can still induce a subtree of $T$.
The running time of this algorithm is obviously $O(n)$. Now we turn to the correctness. Since the subtrees are clearly edge-disjoint and every edge is contained in exactly one subtree, we only need to show that the size of each group is in the required range. It is easy to see that the set of nodes returned at Line 9 has size at most $z$, so the size of each subtree candidate is in the range $(z, 2z]$. If in the end there is no subtree candidate, then the returned set is regarded as the only group in the partition. If there exists at least one subtree candidate, then the size of the merged last group is in the range $(z, 3z]$. This completes the proof. ∎
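The accumulate-and-emit recursion can be sketched in Python as follows. This is a simplified stand-in for the Partition procedure of Figure 3, not a verbatim transcription: each reported group has size in $(z, 2z]$, and the final leftover is merged into the last group, giving the kind of $[z, 3z]$ bounds used in the proof:

```python
def partition_tree(adj, root, z):
    """Partition a tree into edge-disjoint (not necessarily node-disjoint)
    subtrees.  adj: undirected adjacency list; each reported group induces
    a connected subtree.  Recursive; for very deep trees an iterative
    rewrite (or a raised recursion limit) would be needed."""
    groups = []

    def rec(u, parent):
        hold = [u]                     # nodes currently accumulated at u
        for v in adj[u]:
            if v == parent:
                continue
            hold.extend(rec(v, u))     # attach child's leftover via edge (u, v)
            if len(hold) > z:          # emit a subtree candidate of size (z, 2z]
                groups.append(hold)
                hold = [u]             # u stays as the shared attachment point
        return hold                    # leftover subtree of size <= z

    leftover = rec(root, None)
    if groups:                         # merge leftover into the last group
        groups[-1] = groups[-1] + [x for x in leftover if x not in groups[-1]]
    else:                              # whole tree was smaller than a group
        groups.append(leftover)
    return groups
```

Each tree edge ends up inside exactly one group, while a node like the attachment point `u` may be shared between neighboring groups, matching the edge-disjoint but not node-disjoint property.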