1 Introduction
There are many classical problems in P whose time complexities have not been improved over the past decades. For some such problems, recent studies of "Hardness in P" have provided evidence of why obtaining faster algorithms is difficult. For instance, Vassilevska Williams and Williams [33] and Abboud, Grandoni and Vassilevska Williams [1] showed that many problems, including Minimum Weight Cycle, Replacement Paths, and Radius, are equivalent to All-Pairs Shortest Paths (APSP) under subcubic reductions; that is, if one of them admits a subcubic-time algorithm, then all of them do.
One of the approaches to bypass this difficulty is to analyze the running time by introducing another measure, called a parameter, in addition to the input size. In the theory of parameterized complexity, a problem with a parameter k is called fixed-parameter tractable (FPT) if it can be solved in f(k)·poly(n) time for some function f that does not depend on the input size n. While the main aim of this theory is to provide fine-grained analysis of NP-hard problems, it is also useful for problems in P. For instance, a simple dynamic programming can solve Maximum Matching in 2^{O(tw)}·m time, where m is the number of edges and tw is a famous graph parameter called treewidth, which intuitively measures how much a graph looks like a tree (see Section 2 for the definition). Therefore, it runs in linear time for any graph of constant treewidth, which is faster than the current best O(m√n) time for the general case [5, 31, 15].
When working on NP-hard problems, we can only expect a superpolynomial (usually exponential) function f in the running time of FPT algorithms (unless P = NP). On the other hand, for problems in P, it might be possible to obtain an FPT algorithm in which f is a polynomial. Such an algorithm is called fully polynomial FPT. For instance, Fomin, Lokshtanov, Pilipczuk, Saurabh and Wrochna [11] obtained an O(tw^4·n log n)-time (randomized) algorithm for Maximum Matching and left as an open problem whether a similar running time is possible for Weighted Matching. In contrast to the 2^{O(tw)}·m-time dynamic programming, this algorithm is faster than the current best general-case algorithm already whenever tw^4 = o(√n / log n). In general, for a problem with the current best time complexity O(n^c), the goal is to design an algorithm running in O(k^{c′}·n^{c″}) time for some small constants c′ and c″ < c, where k is the parameter. Such an algorithm is faster than the current best general-case algorithm already for inputs of k = O(n^{(c − c″)/c′}). On the negative side, Abboud, Vassilevska Williams and Wang [2] showed that Diameter and Radius do not admit such improved algorithms under some plausible assumptions. In this paper, we give new or improved fully polynomial FPT algorithms for several classical graph problems. In particular, we solve the above open problem for Weighted Matching.
Our approach.
Before describing our results, we first give a short review of existing work on fully polynomial FPT algorithms parameterized by treewidth and explain our approach. There are roughly three types of approaches in the literature. The first approach is to use a polynomial-time dynamic programming on a tree-decomposition, which has been mainly used for problems related to shortest paths [7, 27, 4, 32]. The second approach is to use the fast Gaussian elimination for matrices of small treewidth developed by Fomin et al. [11]; the above-mentioned O(tw^4·n log n)-time algorithm for Maximum Matching was obtained by this approach. The third approach is to apply a divide-and-conquer method exploiting the existence of small balanced separators. This approach was first used for planar graphs by Lipton and Tarjan [21]: using the existence of size-O(√n) balanced separators, they obtained an O(n^{1.5})-time algorithm for Maximum Matching and an O(n^{1.5} log n)-time algorithm for Weighted Matching on planar graphs. For graphs of bounded treewidth, Akiba, Iwata and Yoshida [3] obtained an O(tw^2·n log n)-time algorithm for 2-hop Cover, which is a problem of constructing a distance oracle, and Fomin et al. [11] obtained an O(tw^2·n)-time* algorithm for Vertex-Disjoint Paths. (*While the running time shown in [11] is O(tw^2·n), we can easily see that it also runs in O(tw·m) time. Because m ≤ tw·n holds for any graph of treewidth tw, the latter is never worse than the former. Note that n in the running times of the other algorithms cannot be replaced by m/tw in general; e.g., we cannot bound the running time of the Gaussian elimination by O(tw·m′), where m′ is the number of nonzero elements.) We obtain fully polynomial FPT algorithms for a wide range of problems by using this third approach. Our key observation is that, when using the divide-and-conquer approach, another graph parameter called treedepth is more powerful than treewidth.
A graph of treewidth tw admits a set S of at most tw + 1 vertices, called a balanced separator, such that each connected component of G − S contains at most n/2 vertices. In both of the above-mentioned divide-and-conquer algorithms for graphs of bounded treewidth by Akiba et al. [3] and Fomin et al. [11], after the algorithm recursively computes a solution for each connected component of G − S, it constructs a solution for G by merging them. Because the balance guarantee bounds the depth of the recursive calls by O(log n), the total running time becomes the per-level merging cost multiplied by O(log n).
Here, we observe that, by using treedepth, this kind of divide-and-conquer algorithm can be simplified and the analysis can be improved. Treedepth is a graph parameter which has been studied under various names [29, 19, 6, 25]. A graph has treedepth at most d if and only if there exists an elimination forest of depth d (see Section 2 for the precise definitions of treedepth and the elimination forest). An important property of treedepth is that any connected graph of treedepth d can be divided into connected components of treedepth at most d − 1 by removing a single vertex, namely the root of its elimination tree. Therefore, if there exists an O(m)-time or O(m + n log n)-time incremental algorithm, which constructs a solution for G[U ∪ {v}] from a solution for G[U], we can solve the problem in O(d·m) or O(d(m + n log n)) time, respectively. Now, the only thing to do is to develop such an incremental algorithm for each problem. We present a detailed discussion of this framework in Section 3. Because any graph of treewidth tw has treedepth at most O(tw log n) [24], these running times can also be bounded by O(tw·m log n) or O(tw(m + n log n) log n). Therefore, our analysis using treedepth is never worse than the existing divide-and-conquer results directly using treewidth. On the other hand, there are infinitely many graphs whose treedepth has asymptotically the same bound as their treewidth. For instance, if every subgraph on n′ vertices admits a balanced separator of size O(n′^c) for some constant c > 0 (e.g., c = 1/2 for minor-free graphs), both the treewidth and the treedepth are O(n^c). Hence, for such graphs, the time complexity using treedepth is truly better than that using treewidth.
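As a concrete illustration of the relation between the two parameters (our own toy example, not taken from the paper), consider a path on n vertices: its treewidth is 1, while its treedepth is ⌈log2(n + 1)⌉, attained by the balanced elimination tree built below.

```python
import math

# Balanced elimination tree of the path v_lo, ..., v_hi: root it at the
# midpoint and recurse on both halves. Every path edge (i, i+1) then
# joins an ancestor-descendant pair, so this is a valid elimination
# tree, and its depth is ceil(log2(n + 1)).
def build(lo, hi, parent, par=None):
    if lo > hi:
        return
    mid = (lo + hi) // 2
    parent[mid] = par
    build(lo, mid - 1, parent, mid)
    build(mid + 1, hi, parent, mid)

def depth(parent):
    def d(v):
        return 0 if v is None else 1 + d(parent[v])
    return max(d(v) for v in parent)

n = 15
parent = {}
build(0, n - 1, parent)
assert depth(parent) == math.ceil(math.log2(n + 1))  # treedepth of P_15 is 4
```

This matches the O(tw log n) bound above with tw = 1, and the bound is tight for paths.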
Our results.
Problem | Previous result | Our result
Maximum Matching | O(tw^4·n log n) (randomized) [11] | O(td·m)
Weighted Matching | Open problem [11] | O(td(m + n log n))
Negative Cycle Detection | O(tw^2·n) [27] | O(td(m + n log n))
Minimum Weight Cycle | — | O(td(m + n log n))
Replacement Paths | — | O(td(m + n log n))
2-hop Cover | O(tw^2·n log n) [3] | O(td(m + n log n))
Table 1 shows our results and the comparison to the existing results on fully polynomial FPT algorithms parameterized by treewidth. The formal definition of each problem is given in Section 4. Because obtaining an elimination forest of the lowest depth is NP-hard, we assume that an elimination forest is given as an input, and the parameter d for our results is the depth of the given elimination forest. Similarly, for the existing results, the parameter tw is the width of the given tree-decomposition. Note that, because a tree-decomposition of width tw can be converted into an elimination forest of depth O(tw log n) in linear time [29], we can always replace the factor d in our running times by O(tw log n).
The first polynomial-time algorithms for Maximum Matching and Weighted Matching were obtained by Edmonds [10], and the current fastest algorithms run in O(m√n) time [5, 31, 15] and O(n(m + n log n)) time [5], respectively. Fomin et al. [11] obtained the O(tw^4·n log n)-time randomized algorithm for Maximum Matching by using an algebraic method and the fast computation of Gaussian elimination. They left as an open problem whether a similar running time is possible for Weighted Matching. The general-case algorithms for these problems compute a maximum matching by iteratively finding an augmenting path, and therefore, they are already incremental. Thus, we can easily obtain an O(d·m)-time algorithm for Maximum Matching and an O(d(m + n log n))-time algorithm for Weighted Matching. Note that the divide-and-conquer algorithms for planar matching by Lipton and Tarjan [21] also use this augmenting-path approach, and our result can be seen as an extension to bounded-treedepth graphs. Our algorithm for Maximum Matching is always faster* than the one by Fomin et al. and is faster than the general-case algorithm already when d = o(√n). Our algorithm for Weighted Matching settles the open problem and is faster than the general-case algorithm already when d = o(n). (*Note that for any graph of treewidth or treedepth k, we have m = O(kn).)
The current fastest algorithm for Negative Cycle Detection is the classical O(nm)-time Bellman-Ford algorithm. Planken et al. [27] obtained an O(tw^2·n)-time algorithm by using a Floyd-Warshall-like dynamic programming. In this paper, we give an O(d(m + n log n))-time algorithm. While the algorithm by Planken et al. is faster than the general-case algorithm only when tw = o(√m), our algorithm achieves a faster running time already when d = o(n), provided m = Ω(n log n).
Both Minimum Weight Cycle (or Girth) and Replacement Paths are subcubic-equivalent to APSP [33]. A naive algorithm can solve both problems in O(n(m + n log n)) time by running Dijkstra's algorithm from every vertex, or in O(nm) time in the unweighted case. For Minimum Weight Cycle of directed graphs, an improved O(nm)-time algorithm was recently obtained by Orlin and Sedeño-Noda [26]. For Replacement Paths, Malik et al. [22] obtained an O(m + n log n)-time algorithm for undirected graphs, and Roditty and Zwick [28] obtained an Õ(m√n)-time algorithm for unweighted graphs. For the general case, Gotthilf and Lewenstein [16] obtained an O(nm + n^2 log log n)-time algorithm, and there exists an Ω(m√n)-time lower bound in the path-comparison model [18] (whenever m = O(n√n)) [17]. In this paper, we give an O(d(m + n log n))-time algorithm for each of these problems, which is faster than the general-case algorithm already when d = o(n). This result shows the following contrast to the known results of "Hardness in P": Radius is also subcubic-equivalent to APSP [1], but it cannot be solved in a similar running time under some plausible assumptions [2].
A 2-hop cover [8] is a data structure for efficiently answering distance queries. Akiba et al. [3] obtained an O(tw^2·n log n)-time algorithm for constructing a 2-hop cover answering each distance query in O(tw log n) time. In this paper, we give an O(d(m + n log n))-time algorithm for constructing a 2-hop cover answering each distance query in O(d) time.
Related work.
Coudert, Ducoffe and Popa [9] have developed fully polynomial FPT algorithms using several other graph parameters including clique-width. In contrast to treedepth, their parameters are not polynomially bounded by treewidth, and therefore, their results do not imply fully polynomial FPT algorithms parameterized by treewidth. Mertzios, Nichterlein and Niedermeier [23] have obtained a fully polynomial FPT algorithm for Maximum Matching parameterized by the feedback edge number (m − n + 1 when the graph is connected) by giving a linear-time kernel.
2 Preliminaries
Let G = (V, E) be a directed or undirected graph, where V is the set of vertices of G and E is the set of edges of G. When the graph is clear from the context, we use n to denote the number of vertices and m to denote the number of edges. All the graphs in this paper are simple (i.e., they have neither multiple edges nor self-loops). Let U ⊆ V be a subset of vertices. We denote by E(U) the set of edges whose endpoints are both in U, and denote by G[U] the subgraph induced by U (i.e., G[U] = (U, E(U))).
A tree decomposition of a graph G is a pair (T, {X_t}_{t ∈ V(T)}) of a tree T and a collection of bags X_t ⊆ V satisfying the following two conditions.

1. For each edge (u, v) ∈ E, there exists some t ∈ V(T) such that u, v ∈ X_t.

2. For each vertex v ∈ V, the set {t ∈ V(T) : v ∈ X_t} induces a connected subtree of T.

The width of (T, {X_t}) is the maximum of |X_t| − 1, and the treewidth of a graph is the minimum width among all possible tree decompositions.
An elimination forest of a graph G is a rooted forest F on the same vertex set V such that, for every edge (u, v) ∈ E, one of u and v is an ancestor of the other in F. The depth of F is the maximum number of vertices on a path from a root to a leaf in F. The treedepth of a graph is the minimum depth among all possible elimination forests. Treewidth and treedepth are strongly related, as the following lemma shows.

Lemma 1 ([29]).

Given a tree decomposition of G of width tw, an elimination forest of depth O(tw log n) can be computed in linear time. In particular, any graph of treewidth tw has treedepth O(tw log n).
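These definitions can be checked mechanically. The following small Python sketch (our own illustration, not code from the paper) verifies the ancestor-descendant condition for a candidate elimination forest given as a parent map.

```python
# An elimination forest, given as a parent map (roots have parent None),
# is valid iff every edge joins an ancestor-descendant pair.
def ancestors(parent, v):
    seen = set()
    while v is not None:
        seen.add(v)
        v = parent[v]
    return seen

def is_elimination_forest(edges, parent):
    return all(u in ancestors(parent, v) or v in ancestors(parent, u)
               for u, v in edges)

def forest_depth(parent):
    # depth = maximum number of vertices on a root-to-leaf path
    return max(len(ancestors(parent, v)) for v in parent)

# The 4-cycle 1-2-3-4-1 has treedepth 3: root at vertex 1, and hang the
# remaining path 2-3-4 below it with 3 as the subtree root.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
parent = {1: None, 3: 1, 2: 3, 4: 3}
assert is_elimination_forest(cycle, parent) and forest_depth(parent) == 3
```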
3 Divide-and-conquer framework
In this section, we propose a divide-and-conquer framework that is applicable to a wide range of problems parameterized by treedepth.
Theorem 1.
Let G = (V, E) be a graph and let f be a function defined on subsets of V. Suppose that f(∅) can be computed in constant time and that we have the following two algorithms Increment and Union, whose time complexity is expressed by a function T(n, m) satisfying T(n_1, m_1) + T(n_2, m_2) ≤ T(n_1 + n_2, m_1 + m_2).

Increment(U, f(U), v). Given a set U ⊆ V, its value f(U), and a vertex v ∉ U, this algorithm computes the value f(U ∪ {v}) in T(n′, m′) time, where n′ and m′ are the numbers of vertices and edges of G[U ∪ {v}].

Union(U_1, …, U_k, f(U_1), …, f(U_k)). Given disjoint sets U_1, …, U_k such that G has no edges between U_i and U_j for any i ≠ j, and their values f(U_1), …, f(U_k), this algorithm computes the value f(U_1 ∪ ⋯ ∪ U_k) in O(∑_i T(n_i, m_i)) time, where n_i and m_i are the numbers of vertices and edges of G[U_i].

Then, for a given elimination forest of G of depth d, we can compute the value f(V) in O(d · T(n, m)) time.
Proof.
Algorithm 1 describes our divide-and-conquer algorithm. We prove that, for any set U ⊆ V and any elimination forest F of G[U] of depth d, the procedure correctly computes the value f(U) in O(d · T(n_U, m_U)) time by induction on the size of U, where n_U and m_U are the numbers of vertices and edges of G[U].
The claim trivially holds when U = ∅. For a nonempty set U, let T_1, …, T_k be the connected trees of F (k = 1 if G[U] is connected). For each i, let U_i be the set of vertices of T_i. From the definition of the elimination forest, G[U] has no edges between U_i and U_j for any i ≠ j. For each i, we compute the value f(U_i) as follows. Let r_i be the root of T_i. By removing r_i from T_i, we obtain an elimination forest of G[U_i ∖ {r_i}] of depth at most d − 1. Therefore, by the induction hypothesis, we can correctly compute the value f(U_i ∖ {r_i}) in O((d − 1) · T(n_i, m_i)) time. Then, by applying Increment(U_i ∖ {r_i}, f(U_i ∖ {r_i}), r_i), we obtain the value f(U_i) in T(n_i, m_i) time. Because ∑_i n_i ≤ n_U and ∑_i m_i ≤ m_U hold and T is superadditive, the total running time of these computations is O(d · T(n_U, m_U)). Finally, by applying the algorithm Union, we obtain the value f(U) in O(∑_i T(n_i, m_i)) = O(T(n_U, m_U)) time. ∎
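To make the control flow of this recursion concrete, here is a small Python sketch (our own toy instantiation, not code from the paper): f(U) merely counts the edges of G[U], so that Increment and Union become one-liners while the recursion over the elimination forest stays exactly as in the proof.

```python
# Sketch of the divide-and-conquer of Theorem 1. `roots`/`children`
# describe the elimination forest; `increment` and `union` are the
# problem-specific routines. Here f(U) counts the edges of G[U], a toy
# choice made only to keep the control flow testable.
edges = [(1, 2), (2, 3), (3, 4)]            # the path 1-2-3-4
roots, children = [2], {2: [1, 3], 3: [4]}  # elimination tree of depth 3

def increment(U, fU, v):
    # add vertex v: count the new edges between v and U
    new = sum(1 for a, b in edges
              if (a == v and b in U) or (b == v and a in U))
    return (U | {v}, fU + new)

def union(parts):
    # disjoint components with no edges in between: just merge
    U = set().union(*(p[0] for p in parts)) if parts else set()
    return (U, sum(p[1] for p in parts))

def solve(r):
    # solve the component below root r, then put r back (Increment)
    U, fU = union([solve(c) for c in children.get(r, [])])
    return increment(U, fU, r)

V, m = union([solve(r) for r in roots])
assert (V, m) == ({1, 2, 3, 4}, 3)
```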
Note that the algorithm Union is trivial in most applications; the only nontrivial case in this paper appears in Section 4.5. From the relation between treedepth and treewidth (Lemma 1), we obtain the following corollary.
Corollary 1.
Under the same assumption as in Theorem 1, for a given tree decomposition of G of width tw, we can compute the value f(V) in O(tw log n · T(n, m)) time.
4 Applications
4.1 Maximum matching
For an undirected graph G = (V, E), a matching of G is a subset M ⊆ E such that no two edges in M share a vertex. In this section, we prove the following theorem.
Theorem 2.
Given an undirected graph G and its elimination forest of depth d, we can compute a maximum-size matching in O(d·m) time.
As mentioned in the introduction, we use the augmenting-path approach, which is also used for planar matching [21]. Let M be a matching. A vertex not incident to any edge of M is called exposed. An alternating path is a (simple) path whose edges are alternately out of and in M. An alternating path connecting two different exposed vertices is called an augmenting path. If there exists an augmenting path P, by taking the symmetric difference M △ E(P), where E(P) is the set of edges in P, we can construct a matching of size |M| + 1. In fact, M is a maximum-size matching if and only if there exist no augmenting paths. Edmonds [10] developed the first polynomial-time algorithm for computing an augmenting path by introducing the notion of blossoms, and an O(m)-time algorithm for a single augmentation was given by Gabow and Tarjan [14].
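The symmetric-difference step can be checked on a tiny example. The sketch below (our own illustration) covers only the augmentation step; Edmonds' blossom search itself is beyond this snippet.

```python
# Augmenting along a path: M ^ E(P) swaps the matched and unmatched
# edges of P, increasing the matching size by one. Edges are stored as
# frozensets so that (u, v) and (v, u) compare equal.
def augment(matching, path_edges):
    M = {frozenset(e) for e in matching}
    P = {frozenset(e) for e in path_edges}
    return M ^ P  # symmetric difference

# Path a-b-c-d with M = {bc}: a and d are exposed, so (a, b, c, d) is an
# augmenting path; after augmenting, M = {ab, cd}.
M = augment([("b", "c")], [("a", "b"), ("b", "c"), ("c", "d")])
assert M == {frozenset("ab"), frozenset("cd")}
```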
Lemma 2 ([14]).
Given an undirected graph G and its matching M, we can either compute a matching of size |M| + 1 or correctly conclude that M is a maximum-size matching in O(m) time.
For U ⊆ V, we define f(U) as a function that returns a maximum-size matching of G[U]. We now give the algorithms Increment and Union.
Increment(U, M, v).
Because the size of a maximum matching of G[U ∪ {v}] is at most the size of a maximum matching of G[U] plus one, we can compute a maximum matching of G[U ∪ {v}] from M in O(m) time by a single application of Lemma 2.
Union(U_1, …, U_k, M_1, …, M_k).
Because there exist no edges between U_i and U_j for any i ≠ j, we can construct a maximum matching of G[U_1 ∪ ⋯ ∪ U_k] just by taking the union M_1 ∪ ⋯ ∪ M_k.
4.2 Weighted matching
Let G = (V, E) be an undirected graph with an edge-weight function w : E → R. The weight of a matching M, denoted by w(M), is simply defined as the total weight of the edges in M. A matching M of G is called perfect if G has no exposed vertices (or equivalently, |M| = n/2). A perfect matching is called a maximum-weight perfect matching if it has the maximum weight among all perfect matchings of G. We can easily see that other variants of weighted matching problems can be reduced to the problem of finding a maximum-weight perfect matching even when parameterized by treedepth (see Appendix A). In this section, we prove the following theorem.
Theorem 3.
Given an edge-weighted undirected graph G admitting at least one perfect matching and its elimination forest of depth d, we can compute a maximum-weight perfect matching in O(d(m + n log n)) time.
In our algorithm, we use an O(n(m + n log n))-time primal-dual algorithm by Gabow [12]. In this primal-dual algorithm, we keep a pair of a matching M and dual variables (y, z), where z is defined on a laminar* collection Ω of odd-size subsets of V, and y and z are functions y : V → R and z : Ω → R, satisfying the following conditions, where yz(u, v) denotes y(u) + y(v) + ∑_{B ∈ Ω : u, v ∈ B} z(B):

(1) yz(u, v) ≥ w(u, v) for every edge (u, v) ∈ E;

(2) yz(u, v) = w(u, v) for every edge (u, v) ∈ M;

(3) z(B) ≥ 0 for every B ∈ Ω, and every B ∈ Ω with z(B) > 0 contains exactly ⌊|B|/2⌋ edges of M.

(*A collection Ω of subsets of a ground set is called laminar if, for any X, Y ∈ Ω, one of X ∩ Y = ∅, X ⊆ Y, or Y ⊆ X holds. When Ω is laminar, we have |Ω| = O(n).)
From the duality theory (see, e.g., [13]), a perfect matching M is a maximum-weight perfect matching if and only if there exist dual variables (y, z) satisfying the above conditions. Gabow [12] obtained the O(n(m + n log n))-time algorithm by iteratively applying the following lemma.
Lemma 3 ([12]).
Given an edge-weighted undirected graph G and a pair of a matching M and dual variables (y, z) satisfying the conditions (1)–(3), we can either compute a pair of a matching of cardinality |M| + 1 and dual variables satisfying the conditions (1)–(3) or correctly conclude that M is a maximum-size matching* in O(m + n log n) time. (*Note that when M is not a perfect matching, this does not imply that M has the maximum weight among all the maximum-size matchings.)
For U ⊆ V, we define f(U) as a function that returns a pair of a maximum-size matching of G[U] and dual variables satisfying the conditions (1)–(3). We now give the algorithms Increment and Union.
Increment(U, (M, y, z), v).
Let C be a value satisfying C + y(u) ≥ w(u, v) for every edge (u, v) incident to v in G[U ∪ {v}]. Let y′ be a function defined as y′(v) = C and y′(u) = y(u) for u ∈ U. In the subgraph G[U ∪ {v}], the pair of the matching M and the dual variables (y′, z) satisfies the conditions (1)–(3). Therefore, we can apply Lemma 3. If M is a maximum-size matching of G[U ∪ {v}], we return M and (y′, z). Otherwise, we obtain a matching M′ of size |M| + 1 and dual variables (y″, z″) satisfying the conditions (1)–(3). Because the cardinality of a maximum-size matching of G[U ∪ {v}] is at most the cardinality of a maximum-size matching of G[U] plus one, the obtained M′ is a maximum-size matching of G[U ∪ {v}]. Therefore, we can return M′ and (y″, z″).
Union(U_1, …, U_k, (M_1, y_1, z_1), …, (M_k, y_k, z_k)).
Because there exist no edges between U_i and U_j for any i ≠ j, we can simply return the pair of the maximum-size matching M_1 ∪ ⋯ ∪ M_k and the dual variables (y, z) defined as y(u) = y_i(u) for u ∈ U_i and z(B) = z_i(B) for B ∈ Ω_i.
4.3 Negative cycle detection and potentials
Let G = (V, E) be a directed graph with an edge-weight function w : E → R. For a function p : V → R, we define an edge-weight function w_p as w_p(u, v) = w(u, v) + p(u) − p(v). If w_p(u, v) is nonnegative for all edges (u, v), p is called a potential on G.
Lemma 4 ([30]).
There exists a potential on G if and only if G has no negative cycles.
In this section, we prove the following theorem.
Theorem 4.
Given an edge-weighted directed graph G and its elimination forest of depth d, we can compute either a potential or a negative cycle in O(d(m + n log n)) time.
Suppose that we have a potential p. Because w_p is nonnegative, we can compute a shortest-path tree rooted at a given vertex s under w_p in O(m + n log n) time by Dijkstra's algorithm. For any path from s to a vertex t, its length under w_p is exactly its length under w plus the constant p(s) − p(t). Therefore, the obtained tree is also a shortest-path tree under w. Thus, we obtain the following corollary.
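This reweighting argument is the same trick as in Johnson's algorithm; the following self-contained sketch (toy graph and potential chosen by us) shows the translation in both directions.

```python
import heapq

# Dijkstra under the reduced weights w_p(u,v) = w(u,v) + p[u] - p[v],
# which are nonnegative when p is a potential; translating back gives
# the shortest-path distances under the original weights w.
def shortest_paths(n, edges, p, s):
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        wp = w + p[u] - p[v]
        assert wp >= 0, "p is not a potential"
        adj[u].append((v, wp))
    dist = [float("inf")] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, wp in adj[u]:
            if d + wp < dist[v]:
                dist[v] = d + wp
                heapq.heappush(pq, (dist[v], v))
    # every s-v path length changes by the same constant p[s] - p[v]
    return [d - p[s] + p[v] if d < float("inf") else d
            for v, d in enumerate(dist)]

# toy graph with a negative edge but no negative cycle
edges = [(0, 1, 2), (1, 2, -1), (0, 2, 2)]
assert shortest_paths(3, edges, [0, 2, 1], 0) == [0, 2, 1]
```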
Corollary 2.
Given an edge-weighted directed graph G without negative cycles, a vertex s, and its elimination forest of depth d, we can compute a shortest-path tree rooted at s in O(d(m + n log n)) time.
For U ⊆ V, we define f(U) as a function that returns either a potential on G[U] or a negative cycle contained in G[U]. We now give the algorithms Increment and Union.
Increment(U, f(U), v).
If f(U) is a negative cycle, we return it. Otherwise, let p be the obtained potential on G[U], and let G′ be the graph obtained from G[U ∪ {v}] by removing all the edges incoming to v. Let C be a value satisfying C + w(v, u) ≥ p(u) for every edge (v, u) leaving v, and let p′ be a function defined as p′(v) = C and p′(u) = p(u) for u ∈ U. Because v has no incoming edges in G′, p′ is a potential on G′. Therefore, we can compute a shortest-path tree rooted at v under w_{p′} in O(m + n log n) time by Dijkstra's algorithm. Let R be the set of vertices reachable from v in G′ and let d(u) be the shortest-path distance from v to u ∈ R under w. If there exists an edge (u, v) such that u ∈ R and d(u) + w(u, v) < 0, G[U ∪ {v}] contains a negative cycle starting from v, going to u along the shortest-path tree, and coming back to v via the edge (u, v). Otherwise, let c be a value satisfying c ≤ w(u, u′) + p(u) − d(u′) for every edge (u, u′) with u ∉ R and u′ ∈ R. Then, we return the function q defined as q(u) = d(u) if u ∈ R and q(u) = p(u) − c if u ∉ R.
Claim 1.
The function q is a potential on G[U ∪ {v}].
Proof.
For every edge (u, u′) of G[U ∪ {v}], we have w(u, u′) + q(u) − q(u′) ≥ 0: if u, u′ ∈ R, this is the triangle inequality d(u′) ≤ d(u) + w(u, u′), which also covers the edges entering v because we have verified d(u) + w(u, v) ≥ 0 = d(v); if u, u′ ∉ R, it holds because p is a potential and the constant c cancels; and if u ∉ R and u′ ∈ R, it holds by the choice of c. Note that there are no edges from R to (U ∪ {v}) ∖ R. ∎
Union(U_1, …, U_k, f(U_1), …, f(U_k)).
If at least one of f(U_1), …, f(U_k) is a negative cycle, we return it. Otherwise, we return the potential p defined as p(u) = p_i(u) for u ∈ U_i, where p_i is the potential obtained for G[U_i].
4.4 Minimum weight cycle
In this section, we prove the following theorem.
Theorem 5.
Given a nonnegatively edge-weighted undirected or directed graph G and its elimination forest of depth d, we can compute a minimum-weight cycle in O(d(m + n log n)) time.
Note that when the graph is undirected, a closed walk of length two using the same edge twice is not considered a cycle. Therefore, we cannot simply reduce the undirected version to the directed version by replacing each undirected edge by two directed edges of opposite directions.
Let G be the input graph with an edge-weight function w. For U ⊆ V, we define f(U) as a function that returns a minimum-weight cycle of G[U]. We describe Increment and Union below.
Increment(U, C, v).
Because we already have a minimum-weight cycle C of G[U], we only need to find a minimum-weight cycle passing through v. First, we construct a shortest-path tree of G[U ∪ {v}] rooted at v and let d(u) denote the shortest-path distance from v to u.
When the graph is undirected, we find an edge (x, y) not contained in the shortest-path tree minimizing d(x) + w(x, y) + d(y). If this weight is at least the weight of C, we return C. Otherwise, we return the closed walk starting from v, going to x along the shortest-path tree, jumping to y through the edge (x, y), and coming back to v along the shortest-path tree. Note that this walk always forms a cycle: otherwise, the two tree paths would share a vertex other than v, which induces a cycle contained in G[U] of weight smaller than that of C, a contradiction.
We can prove the correctness of this algorithm as follows. Let W be the weight of the closed walk obtained by the algorithm and let C′ be any cycle passing through v. Let v = u_0, u_1, …, u_ℓ = v be the vertices on C′ in order. Because a tree contains no cycles, C′ contains an edge (u_i, u_{i+1}) not contained in the shortest-path tree. Therefore, the weight of C′ is at least d(u_i) + w(u_i, u_{i+1}) + d(u_{i+1}) ≥ W.
When the graph is directed, we find an edge (u, v) entering v with the minimum d(u) + w(u, v). If this weight is at least the weight of C, we return C. Otherwise, we return the cycle starting from v, going to u along the shortest-path tree, and coming back to v through the edge (u, v).
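The directed case can be prototyped directly (a self-contained toy sketch; the example graph is our own, and only the cycle weight is computed, not the cycle itself).

```python
import heapq

# Minimum-weight directed cycle through s (nonnegative weights): run
# Dijkstra from s, then close the cycle with the best edge back into s,
# i.e., minimize d(s, u) + w(u, s) over the edges (u, s).
def min_cycle_through(n, edges, s):
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [float("inf")] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return min((dist[u] + w for u, v, w in edges if v == s),
               default=float("inf"))

edges = [(0, 1, 1), (1, 2, 1), (2, 0, 1), (1, 0, 5)]
assert min_cycle_through(3, edges, 0) == 3  # the triangle 0->1->2->0
```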
Union(U_1, …, U_k, C_1, …, C_k).
We return a cycle of the minimum weight among C_1, …, C_k.
4.5 Replacement paths
Fix two vertices s and t. For an edge-weighted directed graph G and an edge e, we denote the length of a shortest s-t path avoiding e by d(e). In this section, we prove the following theorem.
Theorem 6.
Given an edge-weighted directed graph G, a shortest path P from s to t, and an elimination forest of depth d, we can compute d(e) for all edges e on P in O(d(m + n log n)) time.
Let s = v_0, v_1, …, v_ℓ = t be the vertices on the given shortest path P in order. For each i, we denote the length of the prefix (v_0, …, v_i) by p_i and the length of the suffix (v_i, …, v_ℓ) by q_i. These can be precomputed in linear time.
Let V_P = {v_0, …, v_ℓ}. For 0 ≤ j, k ≤ ℓ and a set U ⊆ V, we denote by D_U(j, k) the length of a shortest path from v_j to v_k whose internal vertices all belong to U ∖ V_P, and we write D(j, k) for D_V(j, k). For convenience, we define D_U(j, k) = ∞ when no such path exists. We use the following lemma.
Lemma 5.
For any i, the value d((v_i, v_{i+1})) is the minimum of p_j + D(j, k) + q_k over all pairs (j, k) with j ≤ i < k.
Proof.
Any s-t path avoiding (v_i, v_{i+1}) in G can be written as, for some j ≤ i < k, a concatenation of a path P_1 from s to v_j, a path P_2 from v_j to v_k whose internal vertices avoid V_P, and a path P_3 from v_k to t: take as v_j the last visited vertex of index at most i, and as v_k the first vertex of V_P visited afterwards. Because P is a shortest path in G, we can replace P_1 by the prefix (v_0, …, v_j), P_2 by a shortest path attaining D(j, k), and P_3 by the suffix (v_k, …, v_ℓ) without increasing the length. Therefore, the lemma holds. ∎
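Before the divide-and-conquer algorithm, it is useful to keep the brute-force baseline in mind: remove each path edge and rerun Dijkstra. The sketch below is this naive routine (our own baseline, not the algorithm of this section), against which the lemma's decomposition can be checked.

```python
import heapq

# Naive replacement paths: for each edge on the shortest s-t path,
# delete that single edge and rerun Dijkstra from s.
def replacement_paths(n, edges, path):
    s, t = path[0], path[-1]
    out = []
    for i in range(len(path) - 1):
        banned = (path[i], path[i + 1])
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            if (u, v) != banned:
                adj[u].append((v, w))
        dist = [float("inf")] * n
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        out.append(dist[t])
    return out

# shortest path 0->1->2 of length 2, with a direct detour 0->2 of weight 4
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 4)]
assert replacement_paths(3, edges, [0, 1, 2]) == [4, 4]
```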
We want to define f(U) as a function that returns a list of the values r_U(i) = min{p_j + D_U(j, k) + q_k : j ≤ i < k, v_j, v_k ∈ U} for all i; however, we cannot do so because the length ℓ of this list is not bounded by the number of vertices of G[U]. Instead, we define f(U) as a function that returns a list of r_U(i) for all i with v_i ∈ U. This succinct representation has enough information because, for any i, we have r_U(i) = r_U(i′) for the largest i′ ≤ i with v_{i′} ∈ U (or r_U(i) = ∞ when no such i′ exists). We describe Increment and Union below.
Increment(U, f(U), v).
By running Dijkstra's algorithm from v and a reversed Dijkstra's algorithm toward v, we can compute the lengths a(j) of a shortest path from v_j to v and b(k) of a shortest path from v to v_k, whose internal vertices belong to (U ∪ {v}) ∖ V_P, for all j and k with v_j, v_k ∈ U ∪ {v}, in O(m + n log n) time. We define A(i) = min{p_j + a(j) : j ≤ i} and B(i) = min{b(k) + q_k : k > i}. By a standard dynamic programming, we can compute A(i) and B(i) for all i with v_i ∈ U ∪ {v} in linear time.

From Lemma 5, r_{U ∪ {v}}(i) is attained by some detour from v_j to v_k with j ≤ i < k. If this detour passes through v, its length is A(i) + B(i), and otherwise, it avoids v and we have r_{U ∪ {v}}(i) = r_U(i). Therefore, we can compute r_{U ∪ {v}}(i) by taking the minimum of r_U(i) and A(i) + B(i).
Union(U_1, …, U_k, f(U_1), …, f(U_k)).
Let U = U_1 ∪ ⋯ ∪ U_k. Because there exist no edges between U_a and U_b for any a ≠ b, every detour is contained in a single U_a, and therefore, from Lemma 5, we have r_U(i) = min_a r_{U_a}(i). For efficiently computing r_U(i) for all i with v_i ∈ U, we process the indices i in increasing order as follows.

For each a, we maintain a value t_a so that t_a = r_{U_a}(i) always holds for the current index i. Initially, these values are set to ∞. We use a heap for computing and updating the minimum of t_1, …, t_k in O(log k) time per update.