# On the Power of Tree-Depth for Fully Polynomial FPT Algorithms

There are many classical problems in P whose time complexities have not been improved over the past decades. Recent studies of "Hardness in P" have revealed that, for several such problems, the current fastest algorithm is the best possible under some complexity assumptions. To bypass this difficulty, Fomin et al. (SODA 2017) introduced the concept of fully polynomial FPT algorithms. For a problem with the current best time complexity O(n^c), the goal is to design an algorithm running in k^O(1)·n^c' time for a parameter k and a constant c' < c. In this paper, we investigate the complexity of graph problems in P parameterized by tree-depth, a graph parameter related to tree-width. We show that a simple divide-and-conquer method can solve many graph problems, including Weighted Matching, Negative Cycle Detection, Minimum Weight Cycle, Replacement Paths, and 2-hop Cover, in O(td·m) time or O(td·(m + n log n)) time, where td is the tree-depth of the input graph. Because any graph of tree-width tw has tree-depth at most (tw+1)·log_2 n, our algorithms also run in O(tw·m·log n) time or O(tw·(m + n log n)·log n) time. These results match or improve the previous best algorithms parameterized by tree-width. In particular, we settle an open problem posed by Fomin et al.: the existence of a fully polynomial FPT algorithm for Weighted Matching parameterized by tree-width.

## 1 Introduction

There are many classical problems in P whose time complexities have not been improved over the past decades. For some of these problems, recent studies of "Hardness in P" have provided evidence of why obtaining faster algorithms is difficult. For instance, Vassilevska Williams and Williams [33] and Abboud, Grandoni and Vassilevska Williams [1] showed that many problems, including Minimum Weight Cycle, Replacement Paths, and Radius, are equivalent to All Pairs Shortest Paths (APSP) under subcubic reductions; that is, if one of them admits a subcubic-time algorithm, then all of them do.

One approach to bypassing this difficulty is to analyze the running time by introducing another measure, called a parameter, in addition to the input size. In the theory of parameterized complexity, a problem with a parameter k is called fixed parameter tractable (FPT) if it can be solved in f(k)·poly(n) time for some function f that does not depend on the input size n. While the main aim of this theory is to provide fine-grained analysis of NP-hard problems, it is also useful for problems in P. For instance, a simple dynamic programming can solve Maximum Matching in 2^O(tw)·m time, where m is the number of edges and tw is a famous graph parameter called tree-width, which intuitively measures how much a graph looks like a tree (see Section 2 for the definition). Therefore, it runs in linear time for any graph of constant tree-width, which is faster than the current best O(m√n) time for the general case [5, 31, 15].

When working on NP-hard problems, we can only expect a superpolynomial (or usually exponential) function f(k) in the running time of FPT algorithms (unless the parameter k is exponential in the input size). On the other hand, for problems in P, it might be possible to obtain a k^O(1)·n^O(1)-time FPT algorithm. Such an algorithm is called fully polynomial FPT. For instance, Fomin, Lokshtanov, Pilipczuk, Saurabh and Wrochna [11] obtained an O(tw^4·n log^2 n)-time (randomized) algorithm for Maximum Matching and left as an open problem whether a similar running time is possible for Weighted Matching. In contrast to the 2^O(tw)·m-time dynamic programming, this algorithm is faster than the current best general-case algorithm already for graphs of polylogarithmic tree-width. In general, for a problem with the current best time complexity O(n^c), the goal is to design an algorithm running in O(k^{c_1}·n^{c_2}) time for some constants c_1 and c_2 < c. Such an algorithm is faster than the current best general-case algorithm already for inputs of k = o(n^{(c−c_2)/c_1}). On the negative side, Abboud, Vassilevska Williams and Wang [2] showed that Diameter and Radius do not admit 2^{o(tw)}·n^{2−ε}-time algorithms under some plausible assumptions. In this paper, we give new or improved fully polynomial FPT algorithms for several classical graph problems. In particular, we solve the above open problem for Weighted Matching.

##### Our approach.

Before describing our results, we first give a short review of existing work on fully polynomial FPT algorithms parameterized by tree-width and explain our approach. There are roughly three types of approaches in the literature. The first approach is to use a polynomial-time dynamic programming on a tree-decomposition, which has been mainly used for problems related to shortest paths [7, 27, 4, 32]. The second approach is to use the fast Gaussian elimination for matrices of small tree-width developed by Fomin et al. [11]. The above-mentioned randomized algorithm for Maximum Matching was obtained by this approach. The third approach is to apply a divide-and-conquer method exploiting the existence of small balanced separators. This approach was first used for planar graphs by Lipton and Tarjan [21]. Using the existence of O(√n)-size balanced separators, they obtained an O(n^{3/2})-time algorithm for Maximum Matching and an O(n^{3/2} log n)-time algorithm for Weighted Matching for planar graphs. For graphs of bounded tree-width, Akiba, Iwata and Yoshida [3] obtained a fully polynomial FPT algorithm for 2-hop Cover, which is a problem of constructing a distance oracle, and Fomin et al. [11] obtained an O(tw·m)-time algorithm for Vertex-disjoint Paths. (While the running time shown in [11] is stated in terms of n, the algorithm also runs in O(tw·m) time; because m ≤ tw·n holds for any graph of tree-width tw, the latter bound is never worse than the former. Note that n in the running time of other algorithms cannot be replaced by m/tw in general; e.g., we cannot bound the running time of the Gaussian elimination in terms of the number of non-zero elements.) We obtain fully polynomial FPT algorithms for a wide range of problems by using this third approach. Our key observation is that, when using the divide-and-conquer approach, another graph parameter called tree-depth is more powerful than tree-width.

A graph of tree-width tw admits a set S of at most tw+1 vertices, called a balanced separator, such that each connected component of G−S contains at most n/2 vertices. In both of the above-mentioned divide-and-conquer algorithms for graphs of bounded tree-width by Akiba et al. [3] and Fomin et al. [11], after the algorithm recursively computes a solution for each connected component of G−S, it constructs a solution for G from these partial solutions. Because each component contains at most half of the vertices, the depth of the recursive calls is bounded by O(log n), and the total running time becomes the cost of this combining step times O(log n).

Here, we observe that, by using tree-depth, this kind of divide-and-conquer algorithm can be simplified and the analysis can be improved. Tree-depth is a graph parameter which has been studied under various names [29, 19, 6, 25]. A graph has tree-depth at most d if and only if it admits an elimination forest of depth d. See Section 2 for the precise definitions of tree-depth and elimination forests. An important property of tree-depth is that any connected graph of tree-depth d can be divided into connected components of tree-depth at most d−1 by removing a single vertex (the root of an elimination tree). Therefore, if there exists an O(m)-time or O(m + n log n)-time incremental algorithm, which constructs a solution for G[X ∪ {x}] from a solution for G[X], we can solve the problem in O(td·m) time or O(td·(m + n log n)) time, respectively. Now, the only thing to do is to develop such an incremental algorithm for each problem. We present a detailed discussion of this framework in Section 3. Because any graph of tree-width tw has tree-depth at most (tw+1)·log_2 n [24], the running time can also be bounded by O(tw·m·log n) or O(tw·(m + n log n)·log n). Therefore, our analysis using tree-depth is never worse than the existing results directly using tree-width. On the other hand, there are infinitely many graphs whose tree-depth has asymptotically the same bound as tree-width. For instance, if every k-vertex subgraph admits a balanced separator of size O(k^c) for some constant c < 1 (e.g., c = 1/2 for H-minor free graphs), both the tree-width and the tree-depth are O(n^c). Hence, for such graphs, the time complexity using tree-depth is truly better than that using tree-width.
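To make the log n gap between the two parameters concrete, consider the n-vertex path: its tree-width is 1, while its tree-depth equals ⌈log_2(n+1)⌉. A minimal sketch of this computation (the recurrence removes the middle vertex, which is the root of an optimal elimination tree of the path):

```python
def path_tree_depth(n):
    """Tree-depth of the n-vertex path P_n.

    Removing the middle vertex (the root of an optimal elimination tree)
    splits P_n into two paths with at most ceil((n - 1) / 2) = n // 2
    vertices each, giving td(P_n) = 1 + td(P_{n // 2}).
    """
    if n <= 0:
        return 0
    return 1 + path_tree_depth(n // 2)
```

This evaluates to ⌈log_2(n+1)⌉, e.g., depth 3 for the 7-vertex path, so the log_2 n factor in the bound above cannot be avoided in general.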

##### Our results.

Table 1 shows our results and the comparison to the existing results on fully polynomial FPT algorithms parameterized by tree-width. The formal definition of each problem is given in Section 4. Because obtaining an elimination forest of the lowest depth is NP-hard, we assume that an elimination forest is given as an input, and the parameter d for our results is the depth of the given elimination forest. Similarly, for the existing results, the parameter tw is the width of the given tree-decomposition. Note that, because a tree-decomposition of width tw can be converted into an elimination forest of depth (tw+1)·log_2 n in linear time [29], we can always replace the factor d in our running time by O(tw·log n).

The first polynomial-time algorithms for Maximum Matching and Weighted Matching were obtained by Edmonds [10], and the current fastest algorithms run in O(m√n) time [5, 31, 15] and O(n(m + n log n)) time [5], respectively. Fomin et al. [11] obtained their randomized algorithm for Maximum Matching by using an algebraic method and the fast computation of Gaussian elimination. They left as an open problem whether a similar running time is possible for Weighted Matching. The general-case algorithms for these problems compute a maximum matching by iteratively finding an augmenting path, and therefore, they are already incremental. Thus, we can easily obtain an O(d·m)-time algorithm for Maximum Matching and an O(d·(m + n log n))-time algorithm for Weighted Matching. Note that the divide-and-conquer algorithm for planar matching by Lipton and Tarjan [21] also uses this augmenting-path approach, and our result can be seen as an extension to bounded tree-depth graphs. Our algorithm for Maximum Matching is always faster than the one by Fomin et al. (note that m ≤ kn for any graph of tree-width or tree-depth k) and is faster than the general-case algorithm already when d = o(√n). Our algorithm for Weighted Matching settles the open problem and is faster than the general-case algorithm already when d = o(n).

The current fastest algorithm for Negative Cycle Detection is the classical O(nm)-time Bellman–Ford algorithm. Planken et al. [27] obtained an O(tw^2·n)-time algorithm by using a Floyd–Warshall-like dynamic programming. In this paper, we give an O(d·(m + n log n))-time algorithm. While the algorithm by Planken et al. is faster than the general-case algorithm only when tw = o(√m), our algorithm achieves a faster running time already when d = o(n).

Both Minimum Weight Cycle (or Girth) and Replacement Paths are subcubic-equivalent to APSP [33]. A naive algorithm can solve both problems in O(nm) time or O(n(m + n log n)) time. For Minimum Weight Cycle of directed graphs, an improved O(nm)-time algorithm was recently obtained by Orlin and Sedeño-Noda [26]. For Replacement Paths, Malik et al. [22] obtained an O(m + n log n)-time algorithm for undirected graphs, and Roditty and Zwick [28] obtained an Õ(m√n)-time algorithm for unweighted graphs. For the general case, Gotthilf and Lewenstein [16] obtained an O(mn + n^2 log log n)-time algorithm, and there exists an Ω(m√n)-time lower bound in the path-comparison model [18] (whenever m = O(n√n)) [17]. In this paper, we give an O(d·(m + n log n))-time algorithm for each of these problems, which is faster than the general-case algorithm already when d = o(n). This result shows the following contrast to the known results of "Hardness in P": Radius is also subcubic-equivalent to APSP [1] but it cannot be solved in a similar running time under some plausible assumptions [2].

2-hop cover [8] is a data structure for efficiently answering distance queries. Akiba et al. [3] obtained a fully polynomial FPT algorithm parameterized by tree-width that constructs a 2-hop cover answering each distance query in O(tw·log n) time. In this paper, we give an O(d·(m + n log n))-time algorithm for constructing a 2-hop cover answering each distance query in O(d) time.

##### Related work.

Coudert, Ducoffe and Popa [9] have developed fully polynomial FPT algorithms using several other graph parameters including clique-width. In contrast to tree-depth, their parameters are not polynomially bounded by tree-width, and therefore, their results do not imply fully polynomial FPT algorithms parameterized by tree-width. Mertzios, Nichterlein and Niedermeier [23] have obtained a fully polynomial FPT algorithm for Maximum Matching parameterized by the feedback edge number k (k = m − n + 1 when the graph is connected) by giving a linear-time kernel.

## 2 Preliminaries

Let G = (V, E) be a directed or undirected graph, where V is the set of vertices of G and E is the set of edges of G. When the graph is clear from the context, we use n to denote the number of vertices and m to denote the number of edges. All the graphs in this paper are simple (i.e., they have no multiple edges nor self-loops). Let X ⊆ V be a subset of vertices. We denote by E[X] the set of edges whose endpoints are both in X and denote by G[X] the subgraph induced by X (i.e., G[X] = (X, E[X])).

A tree decomposition of a graph G is a pair (T, {B_t}) of a tree T and a collection of bags B_t ⊆ V satisfying the following two conditions.

• For each edge uv ∈ E, there exists some t such that u, v ∈ B_t.

• For each vertex v ∈ V, the set {t ∣ v ∈ B_t} is non-empty and induces a connected subtree in T.

The width of the tree decomposition is the maximum of |B_t| − 1 and the tree-width tw(G) of G is the minimum width among all possible tree decompositions.

An elimination forest F of a graph G is a rooted forest on the same vertex set V such that, for every edge uv ∈ E, one of u and v is an ancestor of the other in F. The depth of F is the maximum number of vertices on a path from a root to a leaf in F. The tree-depth td(G) of a graph G is the minimum depth among all possible elimination forests. Tree-width and tree-depth are strongly related, as the following lemma shows.

###### Lemma 1 ([24, 29]).

For any graph G, the following holds.

 tw(G) + 1 ≤ td(G) ≤ (tw(G) + 1)·log_2 n.

Moreover, given a tree decomposition of width tw, we can construct an elimination forest of depth (tw + 1)·log_2 n in linear time.
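For instance, on the 7-vertex path 0–1–⋯–6, the balanced tree rooted at vertex 3 (children 1 and 5, grandchildren 0, 2, 4, 6) is an elimination forest of depth 3 = ⌈log_2 8⌉, matching the upper bound for tw = 1. A small sketch for checking the defining property (the `parent`-map representation is an assumption for illustration):

```python
def ancestors(parent, v):
    """Vertices on the path from v up to its root (inclusive)."""
    chain = []
    while v is not None:
        chain.append(v)
        v = parent[v]
    return chain

def is_elimination_forest(parent, edges):
    """Check the defining property: every edge of the graph connects a
    vertex to one of its ancestors in the forest, i.e., no edge crosses
    between two different branches."""
    anc = {v: set(ancestors(parent, v)) for v in parent}
    return all(u in anc[v] or v in anc[u] for u, v in edges)

def forest_depth(parent):
    """Maximum number of vertices on a root-to-leaf path."""
    return max(len(ancestors(parent, v)) for v in parent)
```

A star rooted at vertex 0, by contrast, is not an elimination forest of the path: the edge 1–2 connects two different branches.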

## 3 Divide-and-conquer framework

In this section, we propose a divide-and-conquer framework that can be applied to a wide range of problems parameterized by tree-depth.

###### Theorem 1.

Let G = (V, E) be a graph and let f be a function defined on subsets of V. Suppose that f(∅) can be computed in constant time and that we have the following two algorithms Increment and Union with time complexity T(n, m).

• Increment(X, f(X), x). Given a set X ⊆ V, its value f(X), and a vertex x ∉ X, this algorithm computes the value f(X ∪ {x}) in T(n, m) time.

• Union((X_1, f(X_1)), …, (X_c, f(X_c))). Given disjoint sets X_1, …, X_c such that G has no edges between X_i and X_j for any i ≠ j, and their values f(X_1), …, f(X_c), this algorithm computes the value f(X_1 ∪ ⋯ ∪ X_c) in T(n, m) time.

Then, for a given elimination forest of G of depth d, we can compute the value f(V) in O(d·T(n, m)) time.

###### Proof.

Algorithm 1 describes our divide-and-conquer algorithm. We prove that, for any set X ⊆ V and any elimination forest F of G[X] of depth d, the procedure correctly computes the value f(X) in O(d·T(n, m)) time by induction on the size of X.

The claim trivially holds when X = ∅. For a non-empty set X, let T_1, …, T_c be the connected trees of F (c = 1 if G[X] is connected). For each i, let X_i be the set of vertices of T_i. From the definition of the elimination forest, G has no edges between X_i and X_j for any i ≠ j. For each i, we compute the value f(X_i) as follows. Let x_i be the root of T_i. By removing x_i from T_i, we obtain an elimination forest of G[X_i ∖ {x_i}] of depth at most d − 1. Therefore, by the induction hypothesis, we can correctly compute the value f(X_i ∖ {x_i}) in O((d − 1)·T(n_i, m_i)) time, where n_i and m_i denote the numbers of vertices and edges of G[X_i]. Then, by applying Increment(X_i ∖ {x_i}, f(X_i ∖ {x_i}), x_i), we obtain the value f(X_i) in T(n_i, m_i) time. Because ∑_i n_i ≤ n and ∑_i m_i ≤ m hold, the total running time of these computations is O(d·T(n, m)). Finally, by applying the algorithm Union, we obtain the value f(X) in T(n, m) time. ∎
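The recursion above can be sketched as follows; the callbacks `increment` and `union` stand for the problem-specific algorithms of Theorem 1, and the toy instantiation in the test computes f(X) = |E[X]| (the number of induced edges), for which Increment just counts the neighbors of the new vertex inside X:

```python
def solve(roots, children, increment, union, f_empty):
    """Divide-and-conquer over an elimination forest (sketch of Theorem 1).

    For each tree of the forest, recurse on the root's children (they form
    an elimination forest, of smaller depth, of the component minus its
    root), apply Increment for the root, and merge the trees with Union.
    """
    results = []
    for r in roots:
        # Elimination forest of X_r \ {r} is given by r's children.
        X, fX = solve(children[r], children, increment, union, f_empty)
        results.append((X | {r}, increment(X, fX, r)))
    if not results:
        return set(), f_empty
    sets, values = zip(*results)
    return set().union(*sets), union(values)
```

On the 7-vertex path with the balanced elimination tree rooted at vertex 3, this returns the full vertex set together with the edge count 6.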

Note that the algorithm Union is trivial in most applications; the only non-trivial case in this paper appears in Section 4.5. From the relation between tree-depth and tree-width (Lemma 1), we obtain the following corollary.

###### Corollary 1.

Under the same assumption as in Theorem 1, for a given tree decomposition of G of width tw, we can compute the value f(V) in O(tw·log n·T(n, m)) time.

## 4 Applications

### 4.1 Maximum matching

For an undirected graph G = (V, E), a matching of G is a subset M ⊆ E such that no two edges in M share a vertex. In this section, we prove the following theorem.

###### Theorem 2.

Given an undirected graph G and its elimination forest of depth d, we can compute a maximum-size matching in O(d·m) time.

As mentioned in the introduction, we use the augmenting-path approach, which is also used for planar matching [21]. Let M be a matching. A vertex not incident to M is called exposed. An M-alternating path is a (simple) path whose edges are alternately out of and in M. An M-alternating path connecting two different exposed vertices is called an M-augmenting path. If there exists an M-augmenting path P, by taking the symmetric difference M △ E(P), where E(P) is the set of edges in P, we can construct a matching of size |M| + 1. In fact, M is a maximum-size matching if and only if there exist no M-augmenting paths. Edmonds [10] developed the first polynomial-time algorithm for computing an M-augmenting path by introducing the notion of blossom, and a linear-time algorithm was given by Gabow and Tarjan [14].
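The symmetric-difference step can be sketched in a few lines (edges are represented as frozensets so that orientation does not matter):

```python
def augment(matching, path_edges):
    """Replace M by M ^ E(P) for an M-augmenting path P.

    Along the path, unmatched edges become matched and matched edges
    become unmatched; since both endpoints of P were exposed, the result
    is again a matching and has exactly one more edge than M.
    """
    return matching ^ {frozenset(e) for e in path_edges}
```

For instance, on the path 0–1–2–3 with M = {12}, the augmenting path (0, 1, 2, 3) yields the matching {01, 23}.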

###### Lemma 2 ([14]).

Given an undirected graph G and its matching M, we can either compute a matching of size |M| + 1 or correctly conclude that M is a maximum-size matching in O(m) time.

For X ⊆ V, we define f(X) as a function that returns a maximum-size matching of G[X]. We now give the algorithms Increment and Union.

#### Increment(X, f(X), x).

Because the size of a maximum matching of G[X ∪ {x}] is at most the size of a maximum matching of G[X] plus one, we can compute a maximum matching of G[X ∪ {x}] in O(m) time by a single application of Lemma 2.

#### Union((X_1, f(X_1)), …, (X_c, f(X_c))).

Because there exist no edges between X_i and X_j for any i ≠ j, we can construct a maximum matching of G[X_1 ∪ ⋯ ∪ X_c] just by taking the union of f(X_1), …, f(X_c).

###### Proof of Theorem 2.

The algorithm Increment correctly computes f(X ∪ {x}) in O(m) time and the algorithm Union correctly computes f(X_1 ∪ ⋯ ∪ X_c) in O(m) time. Therefore, from Theorem 1, we can compute a maximum-size matching of G in O(d·m) time. ∎
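To illustrate the Increment step, here is a minimal alternating-DFS search for an augmenting path starting at the newly added exposed vertex x. This simplification is only guaranteed to find augmenting paths on bipartite graphs; the general case needs the blossom machinery behind Lemma 2, so treat this as an illustrative sketch rather than the algorithm of the theorem:

```python
def try_augment(adj, match, x):
    """Search for an augmenting path from the exposed vertex x and, if one
    is found, apply it to `match` (a dict storing both directions of each
    matched edge). Correct for bipartite graphs; general graphs
    additionally require blossom contraction."""
    def dfs(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # Either v is exposed (the path ends here) or we try to
            # re-match v's current partner further along an alternating path.
            if match.get(v) is None or dfs(match[v], visited):
                match[v] = u
                match[u] = v
                return True
        return False
    return dfs(x, {x})
```

Calling this once per newly added vertex grows the matching by at most one edge each time, mirroring the incremental use of Lemma 2.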

### 4.2 Weighted matching

Let G = (V, E) be an undirected graph with an edge-weight function w: E → ℝ. The weight of a matching M, denoted by w(M), is simply defined as the total weight of the edges in M. A matching M of G is called perfect if G has no exposed vertices (or equivalently, |M| = n/2). A perfect matching is called a maximum-weight perfect matching if it has the maximum weight among all perfect matchings of G. We can easily see that other variants of weighted matching problems can be reduced to the problem of finding a maximum-weight perfect matching even when parameterized by tree-depth (see Appendix A). In this section, we prove the following theorem.

###### Theorem 3.

Given an edge-weighted undirected graph G admitting at least one perfect matching and its elimination forest of depth d, we can compute a maximum-weight perfect matching in O(d·(m + n log n)) time.

In our algorithm, we use an O(n·(m + n log n))-time primal-dual algorithm by Gabow [12]. In this primal-dual algorithm, we keep a pair of a matching M and dual variables (y, z), where Ω is a laminar collection of odd-size subsets of V (a collection Ω of subsets of a ground set is called laminar if, for any X, Y ∈ Ω, one of X ∩ Y = ∅, X ⊆ Y, or Y ⊆ X holds; when Ω is laminar, we have |Ω| = O(n)), and y and z are functions y: V → ℝ and z: Ω → ℝ, satisfying the following conditions:

 ŷz(uv) := y(u) + y(v) + ∑_{B ∈ Ω : u,v ∈ B} z(B) ≥ w(uv) for every uv ∈ E, (1)
 ŷz(uv) = w(uv) for every uv ∈ M, (2)
 |{uv ∈ M ∣ u, v ∈ B}| = ⌊|B|/2⌋ for every B ∈ Ω. (3)

From the duality theory (see, e.g., [13]), a perfect matching M is a maximum-weight perfect matching if and only if there exist dual variables (y, z) satisfying the above conditions. Gabow [12] obtained the O(n·(m + n log n))-time algorithm by iteratively applying the following lemma.

###### Lemma 3 ([12]).

Given an edge-weighted undirected graph G and a pair of a matching M and dual variables (y, z) satisfying the conditions (1)–(3), we can either compute a pair of a matching M′ of cardinality |M| + 1 and dual variables (y′, z′) satisfying the conditions (1)–(3) or correctly conclude that M is a maximum-size matching (note that when M is not a perfect matching, this does not imply that M has the maximum weight among all the maximum-size matchings) in O(m + n log n) time.

For X ⊆ V, we define f(X) as a function that returns a pair of a maximum-size matching of G[X] and dual variables satisfying the conditions (1)–(3). We now give the algorithms Increment and Union.

#### Increment(X, f(X), x).

Let W be a value satisfying W + y(u) ≥ w(xu) for every edge xu ∈ E with u ∈ X. Let y′ be a function defined as y′(x) = W and y′(u) = y(u) for u ∈ X. In the subgraph G[X ∪ {x}], the pair of the matching M and the dual variables (y′, z) satisfies the conditions (1)–(3). Therefore, we can apply Lemma 3. If M is a maximum-size matching of G[X ∪ {x}], we return M and (y′, z). Otherwise, we obtain a matching M′ of size |M| + 1 and dual variables (y′′, z′) satisfying the conditions (1)–(3). Because the cardinality of a maximum-size matching of G[X ∪ {x}] is at most the cardinality of a maximum-size matching of G[X] plus one, the obtained M′ is a maximum-size matching of G[X ∪ {x}]. Therefore, we can return M′ and (y′′, z′).

#### Union((X_1, f(X_1)), …, (X_c, f(X_c))).

Because there exist no edges between X_i and X_j for any i ≠ j, we can simply return the pair of the maximum-size matching obtained by taking the union M_1 ∪ ⋯ ∪ M_c and the dual variables (y, z) defined as Ω = Ω_1 ∪ ⋯ ∪ Ω_c, y(u) = y_i(u) for u ∈ X_i, and z(B) = z_i(B) for B ∈ Ω_i.

###### Proof of Theorem 3.

The algorithm Increment runs in O(m + n log n) time and the algorithm Union runs in O(n) time. Therefore, from Theorem 1, we can compute f(V) in O(d·(m + n log n)) time. From the duality theory, the perfect matching obtained by computing f(V) is a maximum-weight perfect matching of G. ∎

### 4.3 Negative cycle detection and potentials

Let G = (V, E) be a directed graph with an edge-weight function w: E → ℝ. For a function p: V → ℝ, we define an edge-weight function w_p as w_p(uv) := w(uv) + p(u) − p(v). If w_p(uv) becomes non-negative for all edges uv, p is called a potential on G.

###### Lemma 4 ([30]).

There exists a potential on G if and only if G has no negative cycles.

In this section, we prove the following theorem.

###### Theorem 4.

Given an edge-weighted directed graph G and its elimination forest of depth d, we can compute either a potential or a negative cycle in O(d·(m + n log n)) time.

Suppose that we have a potential p. Because w_p is non-negative, we can compute a shortest-path tree rooted at a given vertex s under w_p in O(m + n log n) time by Dijkstra's algorithm. For any path from s to a vertex v, its length under w_p is exactly the length under w plus the constant p(s) − p(v). Therefore, the obtained tree is also a shortest-path tree under w. Thus, we obtain the following corollary.
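This reweighting argument is the same trick as in Johnson's algorithm. A sketch, assuming the graph is given as an adjacency list together with a weight map (a hypothetical representation chosen for illustration):

```python
import heapq

def dijkstra_with_potential(adj, w, p, s):
    """Shortest paths from s under the reduced weights
    w_p(u, v) = w(u, v) + p(u) - p(v), which are non-negative when p is
    a potential. Since every s-v path is shifted by the same constant
    p(s) - p(v), adding p(v) - p(s) at the end recovers the distances
    under the original weights w."""
    dist = {s: 0.0}
    heap = [(0.0, s)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v in adj[u]:
            nd = d + w[(u, v)] + p[u] - p[v]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    # Undo the reweighting.
    return {v: d - p[s] + p[v] for v, d in dist.items()}
```

Note that the shortest-path trees under w_p and under w coincide; only the distance values need the ±p correction.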

###### Corollary 2.

Given an edge-weighted directed graph G without negative cycles, a vertex s, and its elimination forest of depth d, we can compute a shortest-path tree rooted at s in O(d·(m + n log n)) time.

For X ⊆ V, we define f(X) as a function that returns either a potential on G[X] or a negative cycle contained in G[X]. We now give the algorithms Increment and Union.

#### Increment(X, f(X), x).

If f(X) is a negative cycle, we return it. Otherwise, f(X) is a potential p on G[X]. Let G′ be the graph obtained from G[X ∪ {x}] by removing all the edges incoming to x. Let P be a value satisfying P ≥ p(v) − w(xv) for every edge xv ∈ E with v ∈ X. Let p′ be a function defined as p′(x) = P and p′(v) = p(v) for v ∈ X. Because x has no incoming edges in G′, p′ is a potential on G′. Therefore, we can compute a shortest-path tree rooted at x under w_{p′} in O(m + n log n) time by Dijkstra's algorithm. Let R be the set of vertices reachable from x in G′ and let d(v) denote the shortest-path distance from x under w_{p′}. If there exists an edge vx ∈ E such that v ∈ R and d(v) + w_{p′}(vx) < 0, G[X ∪ {x}] contains a negative cycle starting from x, going to v along the shortest-path tree, and coming back to x via the edge vx. Otherwise, let D be a value satisfying D ≥ d(v) − w_{p′}(uv) for every edge uv ∈ E with u ∈ X ∖ R and v ∈ R ∪ {x}. Then, we return a function p′′ defined as p′′(v) = p′(v) + d(v) if v ∈ R ∪ {x} and p′′(v) = p′(v) + D if v ∈ X ∖ R.

###### Claim 1.

The function p′′ is a potential on G[X ∪ {x}].

###### Proof.

For every edge uv ∈ E[X ∪ {x}], we have

 w_{p′′}(uv) = w_{p′}(uv) + d(u) − d(v) ≥ 0 if u, v ∈ R ∪ {x},
 w_{p′′}(uv) = w_{p′}(uv) + D − d(v) ≥ 0 if u ∈ X ∖ R, v ∈ R ∪ {x},
 w_{p′′}(uv) = w_{p′}(uv) + D − D ≥ 0 if u, v ∈ X ∖ R.

Note that there are no edges from R ∪ {x} to X ∖ R. ∎

#### Union((X_1, f(X_1)), …, (X_c, f(X_c))).

If at least one of f(X_1), …, f(X_c) is a negative cycle, we return it. Otherwise, each f(X_i) is a potential p_i on G[X_i], and we return the potential p defined as p(v) = p_i(v) for v ∈ X_i.

###### Proof of Theorem 4.

The algorithm Increment correctly computes f(X ∪ {x}) in O(m + n log n) time and the algorithm Union correctly computes f(X_1 ∪ ⋯ ∪ X_c) in O(n) time. Therefore, from Theorem 1, we can compute f(V), i.e., either a potential on G or a negative cycle contained in G, in O(d·(m + n log n)) time. ∎

### 4.4 Minimum weight cycle

In this section, we prove the following theorem.

###### Theorem 5.

Given a non-negative edge-weighted undirected or directed graph G and its elimination forest of depth d, we can compute a minimum-weight cycle in O(d·(m + n log n)) time.

Note that when the graph is undirected, a closed walk of length two using the same edge twice is not considered a cycle. Therefore, we cannot simply reduce the undirected version to the directed version by replacing each undirected edge with two directed edges of opposite directions.

Let G = (V, E) be the input graph with an edge-weight function w: E → ℝ≥0. For X ⊆ V, we define f(X) as a function that returns a minimum-weight cycle of G[X]. We describe Increment and Union below.

#### Increment(X, f(X), x).

Because we already have a minimum-weight cycle C of G[X], we only need to find a minimum-weight cycle passing through x. First, we construct a shortest-path tree of G[X ∪ {x}] rooted at x by Dijkstra's algorithm and let d(v) denote the shortest-path distance from x.

When the graph is undirected, we find an edge uv not contained in the shortest-path tree minimizing d(u) + w(uv) + d(v). If this weight is at least the weight of C, we return C. Otherwise, we return the cycle starting from x, going to u along the shortest-path tree, jumping to v through the edge uv, and coming back to x along the shortest-path tree. Note that this always forms a cycle because otherwise, it would induce a cycle contained in G[X] that has a smaller weight than C, which is a contradiction.

We can prove the correctness of this algorithm as follows. Let W be the weight of the cycle obtained by the algorithm and let C′ be any cycle passing through x. Let x = v_0, v_1, …, v_k = x be the vertices on C′ in order. Because a tree contains no cycles, there exists an edge v_i v_{i+1} on C′ not contained in the shortest-path tree. Therefore, the weight of C′ is at least d(v_i) + w(v_i v_{i+1}) + d(v_{i+1}) ≥ W.

When the graph is directed, we find an edge vx ∈ E entering x minimizing d(v) + w(vx). If this weight is at least the weight of C, we return C. Otherwise, we return the cycle starting from x, going to v along the shortest-path tree, and coming back to x through the edge vx.
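For the directed case, the Increment step thus amounts to one Dijkstra run from x followed by a scan of the edges entering x. A self-contained sketch (the graph representation is illustrative):

```python
import heapq

def min_cycle_through(adj, w, x):
    """Minimum weight of a directed cycle through x, assuming
    non-negative edge weights: run Dijkstra from x, then close the
    cycle with the best edge v -> x."""
    dist = {x: 0.0}
    heap = [(0.0, x)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + w[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    # Close the cycle with an edge entering x.
    best = float("inf")
    for (u, v), wt in w.items():
        if v == x and u != x and u in dist:
            best = min(best, dist[u] + wt)
    return best
```

The returned value would then be compared against the weight of the stored cycle C, exactly as described above.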

#### Union((X_1, f(X_1)), …, (X_c, f(X_c))).

We return a cycle of the minimum weight among f(X_1), …, f(X_c).

###### Proof of Theorem 5.

The algorithm Increment correctly computes f(X ∪ {x}) in O(m + n log n) time and the algorithm Union correctly computes f(X_1 ∪ ⋯ ∪ X_c) in O(n) time. Therefore, from Theorem 1, we can compute a minimum-weight cycle in O(d·(m + n log n)) time. ∎

### 4.5 Replacement paths

Fix two vertices s and t. For an edge-weighted directed graph G and an edge e, we denote the length of the shortest s–t path avoiding e by d_{G−e}(s, t). In this section, we prove the following theorem.

###### Theorem 6.

Given an edge-weighted directed graph G, a shortest s–t path P, and its elimination forest of depth d, we can compute d_{G−e}(s, t) for all edges e on P in O(d·(m + n log n)) time.

Let s = v_0, v_1, …, v_k = t be the vertices on the given shortest path P in order. For each i, we denote the length of the prefix (v_0, …, v_i) by pre(i) and the length of the suffix (v_i, …, v_k) by suf(i). These values can be precomputed in linear time.

For indices a < b, we call a path from v_a to v_b that avoids all the edges of P between v_a and v_b a detour, and we denote the length of a shortest detour from v_a to v_b by d′(v_a, v_b). For convenience, we define d′(v_a, v_b) = ∞ when no detour from v_a to v_b exists. We use the following lemma.

###### Lemma 5.

For any i, the value d_{G−v_iv_{i+1}}(s, t) is the minimum of pre(a) + d′(v_a, v_b) + suf(b) for a ≤ i < b.

###### Proof.

Any s–t path avoiding the edge v_iv_{i+1} in G can be written as, for some a ≤ i < b, a concatenation of an s–v_a path Q_1, a v_a–v_b path Q_2 that is a detour, and a v_b–t path Q_3. Because P is a shortest path in G, we can replace Q_1 by the prefix (v_0, …, v_a), Q_2 by a shortest detour from v_a to v_b, and Q_3 by the suffix (v_b, …, v_k) without increasing the length. Therefore, the lemma holds. ∎

We want to define f(X) as a function that returns the list of the shortest detour lengths within G[X] for all pairs (a, b); however, we cannot do so because the length of this list is not bounded by O(n). Instead, we define f(X) as a function that returns, for each index i, the minimum of pre(a) + d′(v_a, v_b) + suf(b) over all a ≤ i < b such that the detour is contained in G[X]. This succinct representation has enough information because, by Lemma 5, these minima are exactly the contributions of G[X] to the answers d_{G−v_iv_{i+1}}(s, t). We describe Increment and Union below.

#### Increment(X, f(X), x).

By running Dijkstra’s algorithm twice, we can compute and for all in time. For , we define and . By a standard dynamic programming, we can compute and for all with in time.

From Lemma 5, holds for some . If holds, we have , and otherwise, we have . Therefore, we can compute by taking the minimum of and .
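The two running minimizations can be sketched as follows; the array names `best_in`/`best_out` and the input format are illustrative, not the paper's notation:

```python
def detour_minima(pre, suf, d_to_x, d_from_x):
    """Sketch of the dynamic programming in the Increment step: for each
    path index i, combine the cheapest way to leave the path at some v_a
    (a <= i) and reach x with the cheapest way to return from x to some
    v_b (b > i). `pre[a]`/`suf[b]` are prefix/suffix lengths along P;
    `d_to_x[a]`/`d_from_x[b]` are distances from v_a to x and from x to
    v_b (infinity if unreachable)."""
    k = len(pre)
    INF = float("inf")
    best_in = [INF] * k   # best_in[i]  = min_{a <= i} pre[a] + d_to_x[a]
    best_out = [INF] * k  # best_out[i] = min_{b > i}  d_from_x[b] + suf[b]
    for i in range(k):
        best_in[i] = min(best_in[i - 1] if i else INF, pre[i] + d_to_x[i])
    for i in range(k - 2, -1, -1):
        best_out[i] = min(best_out[i + 1], d_from_x[i + 1] + suf[i + 1])
    # Candidate replacement length for edge (v_i, v_{i+1}) via x.
    return [best_in[i] + best_out[i] for i in range(k - 1)]
```

Each output entry would then be merged with the corresponding value stored in f(X) by taking the minimum.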

#### Union((X_1, f(X_1)), …, (X_c, f(X_c))).

Let X = X_1 ∪ ⋯ ∪ X_c. Because there exist no edges between X_i and X_j for any i ≠ j, every detour contained in G[X] is contained in G[X_i] for some i. Therefore, from Lemma 5, the value of f(X) for each index is the minimum of the corresponding values of f(X_1), …, f(X_c). For efficiently computing these minima for all indices, we proceed as follows in increasing order of the index.

For each i, we maintain a value so that the invariant always holds. Initially, these values are set to ∞. We use a heap for computing and updating in