The goal of parameterized complexity is to rein in the combinatorial explosion present in NP-hard problems with the help of a secondary parameter. This leads us to the search for fixed-parameter tractable (FPT) algorithms, i.e., algorithms with running time $f(k) \cdot n^{c}$, where $n$ is the input size, $k$ is the secondary parameter, $f$ is a computable function, and $c$ is a constant. There are several books giving a broad overview of parameterized complexity [11, 13, 14, 30]. One of the success stories of parameterized complexity is a graph parameter called treewidth. A large swath of graph problems admit FPT-algorithms when parameterized by treewidth, as witnessed by, amongst other things, Courcelle's theorem. However, the function $f$ resulting from Courcelle's theorem is non-elementary. Thus, a natural goal is to find algorithms with a smaller, or ideally minimal, dependence on the treewidth in the running time, i.e., algorithms where $f$ is as small as possible. Problems only involving local constraints usually permit a single-exponential dependence on the treewidth $tw$ in the running time, i.e., time $\mathcal{O}^*(\alpha^{tw})$ for some small constant $\alpha$, by means of dynamic programming on tree decompositions [1, 33, 34, 35] (the $\mathcal{O}^*$-notation hides polynomial factors in the input size). For many of these problems we also know the optimal base $\alpha$ if we assume the strong exponential-time hypothesis (SETH). For a long time a single-exponential running time seemed to be out of reach for problems involving global constraints, in particular for connectivity constraints. This changed when Cygan et al. introduced the Cut&Count technique, which allowed them to obtain single-exponential-time algorithms for many graph problems involving connectivity constraints. Again, many of the resulting running times can be shown to be optimal assuming SETH.
The issue with treewidth-based algorithms is that dynamic programming on tree decompositions seems to inherently require exponential space. In particular, Chen et al. devised a model for single-pass dynamic programming algorithms on tree decompositions and showed that such algorithms require exponential space for Vertex Cover and 3-Coloring. Algorithms requiring exponential time and exponential space usually run out of available space before they hit their time limit. Hence, it is desirable to reduce the space requirement while maintaining the running time. As discussed, this seems implausible for treewidth. Instead, we consider a different, but related, parameter called treedepth. Treedepth is a slightly larger parameter than treewidth and of great importance in the theory of sparse graphs [27, 28, 29]. It has been studied under several names such as minimum elimination tree height, ordered chromatic number, and vertex ranking. Fürer and Yu established an explicit link between treedepth and tree decompositions, namely that treedepth is obtained by minimizing, over all nice tree decompositions, the maximum number of forget nodes on a root-to-leaf path (see the cited work for a definition). Many problems parameterized by treedepth allow branching algorithms on elimination forests, also called treedepth decompositions, that match the running time of the treewidth-algorithms, replacing the dependence on treewidth by treedepth, while only requiring polynomial space [9, 18, 31].
The Cut&Count technique reduces problems with connectivity constraints to counting problems of certain cuts, called consistent cuts. We show that for several connectivity problems the associated counting problem implied by the Cut&Count technique can be solved in time $\mathcal{O}^*(\alpha^{d})$ and polynomial space, where $\alpha$ is a problem-dependent constant and $d$ is the depth of a given elimination forest. Furthermore, the base $\alpha$ matches the base in the running time of the corresponding treewidth-algorithm. Concretely, given an elimination forest of depth $d$ for a graph $G$ we prove the following results:
Connected Vertex Cover, Feedback Vertex Set, and Steiner Tree can be solved in time $\mathcal{O}^*(3^{d})$ and polynomial space.
Connected Dominating Set and Connected Odd Cycle Transversal can be solved in time $\mathcal{O}^*(4^{d})$ and polynomial space.
The Cut&Count technique leads to randomized algorithms as it relies on the Isolation Lemma. At the cost of a worse base in the running time, Bodlaender et al. present a generic method, called the rank-based approach, to obtain deterministic single-exponential-time algorithms for connectivity problems parameterized by treewidth; the rank-based approach is also able to solve counting variants of several connectivity problems. Fomin et al. use matroid tools to, amongst other results, reobtain the deterministic running times of the rank-based approach. In a follow-up paper, Fomin et al. manage to improve several of the deterministic running times using their matroid tools. Multiple papers adapt the Cut&Count technique and the rank-based approach to graph parameters other than treewidth. Bergougnoux and Kanté apply the rank-based approach to obtain single-exponential-time algorithms for connectivity problems parameterized by cliquewidth. The same authors generalize this approach, incurring a loss in the running time, to a wider range of parameters including rankwidth and mim-width. Pino et al. use the Cut&Count technique and the rank-based approach to obtain fast deterministic and randomized algorithms for connectivity problems parameterized by branchwidth.
Lokshtanov and Nederlof present a framework using algebraic techniques, such as Fourier, Möbius, and zeta transforms, to reduce the space usage of certain dynamic programming algorithms from exponential to polynomial. Fürer and Yu adapt this framework to the setting where the underlying set (or graph) is dynamic instead of static, in particular for performing dynamic programming along the bags of a tree decomposition, and obtain a polynomial-space algorithm for counting perfect matchings whose running time is single-exponential in the depth $d$ of a given elimination forest. Using the same approach, Belbasi and Fürer design an algorithm counting the number of Hamiltonian cycles in time $\mathcal{O}^*((4k)^{d})$, where $k$ is the width and $d$ the depth of a given tree decomposition, and polynomial space. Furthermore, they also present an algorithm for the traveling salesman problem with the same running time, but requiring pseudopolynomial space.
We describe the preliminary definitions and notations in Section 2. In Section 3 we first discuss the Cut&Count setup and give a detailed exposition for Connected Vertex Cover. Afterwards, we explain what general changes can occur for the other problems and then discuss the remaining problems Feedback Vertex Set, Connected Dominating Set, Steiner Tree, and Connected Odd Cycle Transversal. We conclude in Section 4.
Let $G = (V, E)$ be an undirected graph. We denote the number of vertices by $n = |V|$ and the number of edges by $m = |E|$. For a vertex set $X \subseteq V$, we denote by $G[X]$ the subgraph of $G$ that is induced by $X$. The open neighborhood of a vertex $v$ is given by $N(v) = \{u \in V : \{u, v\} \in E\}$, whereas the closed neighborhood is given by $N[v] = N(v) \cup \{v\}$. We extend these notations to sets $X \subseteq V$ by setting $N[X] = \bigcup_{v \in X} N[v]$ and $N(X) = N[X] \setminus X$. Furthermore, we denote by $\mathrm{cc}(G)$ the number of connected components of $G$.
A cut of a set $X \subseteq V$ is a pair $(X_L, X_R)$ with $X_L \cap X_R = \emptyset$ and $X_L \cup X_R = X$; we also use the notation $X = X_L \,\dot\cup\, X_R$. We refer to $X_L$ and $X_R$ as the left and right side of the cut, respectively. Note that either side may be empty, although usually the left side is nonempty.
For two integers $a$ and $b$ we write $a \equiv b$ to indicate equality modulo 2, i.e., $a$ is even if and only if $b$ is even. We use Iverson's bracket notation: for a predicate $p$, we have that $[p]$ is $1$ if $p$ is true and $0$ otherwise. For a function $f$, we denote by $f[v \mapsto \alpha]$ the function that agrees with $f$ everywhere except at $v$, which is mapped to $\alpha$; if $v$ is not in the domain of $f$, the domain is extended by $v$. By $\mathbb{Z}_2$ we denote the field of two elements. For a field or ring $R$, we denote by $R[Z_1, \ldots, Z_t]$ the ring of polynomials in the indeterminates $Z_1, \ldots, Z_t$ with coefficients in $R$. With $\mathcal{O}^*$ we hide polynomial factors, i.e., $\mathcal{O}^*(f(n)) = \mathcal{O}(f(n) \cdot \mathrm{poly}(n))$. For a natural number $n$, we denote by $[n]$ the set of integers from $1$ to $n$.
An elimination forest of an undirected graph $G$ is a rooted forest $T$ on the vertex set of $G$ such that for every edge $\{u, v\}$ of $G$ either $u$ is an ancestor of $v$ in $T$ or $v$ is an ancestor of $u$ in $T$. The depth of a rooted forest is the largest number of nodes on a path from a root to a leaf. The treedepth of $G$ is the minimum depth over all elimination forests of $G$ and is denoted by $\mathrm{td}(G)$.
We slightly extend the notation for elimination forests used by Pilipczuk and Wrochna. For a rooted forest $T$ and a node $v$ we denote by $\mathrm{tree}[v]$ the set of nodes in the subtree rooted at $v$, including $v$. By $\mathrm{tail}[v]$ we denote the set of all ancestors of $v$, including $v$. Furthermore, we define $\mathrm{tree}(v) = \mathrm{tree}[v] \setminus \{v\}$, $\mathrm{tail}(v) = \mathrm{tail}[v] \setminus \{v\}$, and $\mathrm{broom}[v] = \mathrm{tail}[v] \cup \mathrm{tree}[v]$. By $\mathrm{children}(v)$ we denote the children of $v$.
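To make the notation concrete, the following small Python sketch (the function names `tail_closed`, `tree_closed`, and `broom_closed` are ours, not the paper's) computes these sets from parent pointers of a rooted forest:

```python
# A small sketch of the elimination-forest notation; parent[v] is the
# parent of node v, or None for roots.

def tail_closed(parent, v):
    """tail[v]: all ancestors of v, including v itself."""
    result = []
    while v is not None:
        result.append(v)
        v = parent[v]
    return result

def tree_closed(parent, v):
    """tree[v]: all nodes in the subtree rooted at v, including v."""
    return [u for u in parent if v in tail_closed(parent, u)]

def broom_closed(parent, v):
    """broom[v] = tail[v] ∪ tail[v]'s subtree, i.e. tail[v] ∪ tree[v]."""
    return sorted(set(tail_closed(parent, v)) | set(tree_closed(parent, v)))

# Path a - b - c rooted at a, so tail[c] = {c, b, a} and tree[b] = {b, c}.
parent = {"a": None, "b": "a", "c": "b"}
```

For the path above, `broom_closed(parent, "b")` returns the whole vertex set, since every vertex is either an ancestor or a descendant of `b`.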
Note that an elimination forest of a connected graph consists only of a single tree.
2.3 Isolation Lemma
A function $\mathbf{w} \colon U \to \mathbb{Z}$ isolates a set family $\mathcal{F} \subseteq 2^U$ if there is a unique $S' \in \mathcal{F}$ with $\mathbf{w}(S') = \min_{S \in \mathcal{F}} \mathbf{w}(S)$, where for subsets $X$ of $U$ we define $\mathbf{w}(X) = \sum_{u \in X} \mathbf{w}(u)$.
Lemma 2.3 (Isolation Lemma).
Let $\mathcal{F} \subseteq 2^U$ be a nonempty set family over a universe $U$. Let $N \in \mathbb{N}$ and for each $u \in U$ choose a weight $\mathbf{w}(u) \in [N]$ uniformly and independently at random. Then $\Pr[\mathbf{w} \text{ isolates } \mathcal{F}] \geq 1 - |U| / N$.
When counting objects modulo 2, the Isolation Lemma allows us to avoid unwanted cancellations by ensuring, with high probability, that there is a unique solution. In our applications, we will choose $N = 2|U|$ so that we obtain an error probability of at most $1/2$.
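As an illustration, the following Python sketch (the helper `isolates` and the example family are ours, not from the paper) empirically checks the guarantee of the Isolation Lemma on a tiny universe:

```python
import random

def isolates(weights, family):
    """True iff the minimum total weight is attained by a unique set."""
    totals = [sum(weights[u] for u in S) for S in family]
    return totals.count(min(totals)) == 1

# Universe of size 4 and a family with many ties under unit weights.
universe = range(4)
family = [frozenset({0, 1}), frozenset({2, 3}), frozenset({0, 2})]

random.seed(0)
trials = 1000
hits = sum(
    isolates({u: random.randint(1, 2 * len(universe)) for u in universe}, family)
    for _ in range(trials)
)
# The Isolation Lemma bounds the failure probability by |U|/N = 1/2 when
# N = 2|U|, so the empirical rate hits/trials should be at least about 1/2.
```

Note that under unit weights all three sets tie, so no weight function with ties can isolate; the random weights break such ties with high probability.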
In this section $G$ always refers to a connected undirected graph. For the sake of a self-contained presentation, we state the required results for the Cut&Count technique again, mostly following the presentation of Cygan et al. Our approach only differs from that of Cygan et al. in the counting sub-procedure.
We begin by describing the Cut&Count setup and then present the counting sub-procedure for Connected Vertex Cover. Afterwards we explain how to adapt the counting sub-procedure for the other problems. Our exposition is the most detailed for Connected Vertex Cover, whereas the analogous parts of the other problems will not be discussed in such detail.
Suppose that we want to solve a problem on $G$ involving connectivity constraints; then we can make the following general definitions. The solutions to our problem are subsets of a universe $U$, which is related to $G$. Let $\mathcal{S} \subseteq 2^U$ denote the set of solutions; we want to determine whether $\mathcal{S}$ is empty or not. The Cut&Count technique consists of two parts:
The Cut part: We relax the connectivity constraints to obtain a set $\mathcal{R} \supseteq \mathcal{S}$ of possibly connected solutions. The set $\mathcal{Q}$ will contain pairs consisting of a candidate solution $X \in \mathcal{R}$ and a consistent cut of $X$, which is defined in Definition 3.1.
The Count part: We compute $|\mathcal{Q}|$ modulo 2 using a sub-procedure. The consistent cuts are defined so that non-connected candidate solutions cancel, because they are consistent with an even number of cuts. Hence, only connected candidates remain.
If $|\mathcal{S}|$ is even, then this approach does not work, because the connected solutions would cancel out as well when counting modulo 2. To circumvent this difficulty, we employ the Isolation Lemma (Lemma 2.3). By sampling a weight function $\mathbf{w} \colon U \to [2|U|]$, we can instead count pairs with a fixed weight, and it is likely that there is a weight $w$ with a unique solution if a solution exists at all. Formally, we compute $|\mathcal{Q}_w|$ modulo 2 for every possible weight $w$, where $\mathcal{Q}_w = \{(X, C) \in \mathcal{Q} : \mathbf{w}(X) = w\}$, instead of computing $|\mathcal{Q}|$ modulo 2.
Definition 3.1.
A cut $(V_L, V_R)$ of an undirected graph $G = (V, E)$ is consistent if $u \in V_L$ and $v \in V_R$ implies $\{u, v\} \notin E$. A consistently cut subgraph of $G$ is a pair $(X, (X_L, X_R))$ such that $X \subseteq V$ and $(X_L, X_R)$ is a consistent cut of $G[X]$. For $V' \subseteq V$, we denote the set of consistently cut subgraphs of $G[V']$ by $\mathcal{C}(G[V'])$.
To ensure that connected solutions are not compatible with an even number of consistent cuts, we will usually force a single vertex to the left side of the consistent cut. This results in the following fundamental property of consistent cuts.
Lemma 3.2.
Let $v_* \in V$ and let $X \subseteq V$ be a subset of vertices such that $v_* \in X$. The number of consistently cut subgraphs $(X, (X_L, X_R))$ such that $v_* \in X_L$ is equal to $2^{\mathrm{cc}(G[X]) - 1}$.
By the definition of a consistently cut subgraph we have for every connected component $C$ of $G[X]$ that either $C \subseteq X_L$ or $C \subseteq X_R$. The connected component $C$ that contains $v_*$ must satisfy $C \subseteq X_L$, and for all other connected components we have 2 choices. Hence, we obtain $2^{\mathrm{cc}(G[X]) - 1}$ different consistently cut subgraphs with $v_* \in X_L$. ∎
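The statement of Lemma 3.2 can be checked by brute force on small examples; the following Python sketch (all names are ours) enumerates the cuts directly:

```python
from itertools import combinations

def components(vertices, edges):
    """Connected components of (vertices, edges), via graph traversal."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def consistent_cuts_with_forced_vertex(X, edges, forced):
    """Count cuts (X_L, X_R) of X with forced in X_L and no edge across."""
    X = sorted(X)
    sub = [(u, v) for u, v in edges if u in X and v in X]
    count = 0
    for r in range(len(X) + 1):
        for right in combinations(X, r):
            left = set(X) - set(right)
            if forced in left and all((u in left) == (v in left) for u, v in sub):
                count += 1
    return count

# G[X] has two components: a triangle {0, 1, 2} and an edge {3, 4}.
X = {0, 1, 2, 3, 4}
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
cc = len(components(X, edges))
# Lemma 3.2 predicts 2^(cc - 1) = 2 consistent cuts with vertex 0 on the left.
```

The component containing the forced vertex is pinned to the left side, while each of the remaining components can go to either side, which is exactly the $2^{\mathrm{cc}(G[X]) - 1}$ count.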
With Lemma 3.2 we can distinguish disconnected candidates from connected candidates by determining the parity of the number of consistent cuts for the respective candidate. We determine this number not for a single candidate, but rather the total over all candidates with a fixed weight. Corollary 3.3 encapsulates the Cut&Count technique for treedepth.
Let $\mathcal{S}_w$ and $\mathcal{Q}_w$ be such that the following two properties hold for every weight function $\mathbf{w} \colon U \to [2|U|]$ and target weight $w$:
1. $|\mathcal{S}_w| \equiv |\mathcal{Q}_w| \pmod{2}$.
2. There is an algorithm $\mathcal{A}$ accepting weights $\mathbf{w}$, a target weight $w$, and an elimination forest $T$, such that $\mathcal{A}(\mathbf{w}, w, T) \equiv |\mathcal{Q}_w| \pmod{2}$.
Then Algorithm 1 returns false if $\mathcal{S}$ is empty and true with probability at least $1/2$ otherwise.
If $\mathcal{S}$ is empty, then $\mathcal{A}(\mathbf{w}, w, T) \equiv |\mathcal{Q}_w| \equiv |\mathcal{S}_w| \equiv 0 \pmod{2}$ for all choices of $\mathbf{w}$, $w$, and $T$ by properties 1 and 2, hence Algorithm 1 returns false. If $\mathcal{S}$ is nonempty, then by the Isolation Lemma the sampled weight function isolates $\mathcal{S}$ with probability at least $1/2$, in which case there is a target weight $w$ with $|\mathcal{S}_w| = 1$ and Algorithm 1 returns true. ∎
We will use the same definitions as Cygan et al. for $\mathcal{S}$ and $\mathcal{Q}$, hence it follows from their proofs that Condition 1 in Corollary 3.3 is satisfied. Our contribution is to provide the counting procedure for problems parameterized by treedepth.
Given the sets $\mathcal{S}$, $\mathcal{R}$, and $\mathcal{Q}$, and a weight function $\mathbf{w} \colon U \to [2|U|]$, we will define for every weight $w$ the sets $\mathcal{S}_w = \{X \in \mathcal{S} : \mathbf{w}(X) = w\}$, $\mathcal{R}_w = \{X \in \mathcal{R} : \mathbf{w}(X) = w\}$, and $\mathcal{Q}_w = \{(X, C) \in \mathcal{Q} : \mathbf{w}(X) = w\}$.
3.2 Connected Vertex Cover
Connected Vertex Cover
Input: An undirected graph $G = (V, E)$ and an integer $k$.
Question: Is there a set $X \subseteq V$, $|X| \leq k$, such that $G[X]$ is connected and $X$ is a vertex cover of $G$, i.e., $e \cap X \neq \emptyset$ for all $e \in E$?
In the considered problems, one usually seeks a solution of size at most $k$. For convenience, we choose to look for a solution of size exactly $k$ and solve the other case in the obvious way. We define the objects needed for Cut&Count in the setting of Connected Vertex Cover. We let $U = V$ and define the candidate solutions by $\mathcal{R} = \{X \subseteq V : |X| = k \text{ and } X \text{ is a vertex cover of } G\}$, and the solutions are given by $\mathcal{S} = \{X \in \mathcal{R} : G[X] \text{ is connected}\}$.
To ensure that a connected solution is consistent with an odd number of cuts, we choose a vertex $v_*$ that is always forced to the left side of the cut (cf. Lemma 3.2). As we cannot be sure that there is a minimum connected vertex cover containing $v_*$, we take an edge $\{u_1, u_2\} \in E$ and run Algorithm 1 once for $v_* = u_1$ and once for $v_* = u_2$; every vertex cover must contain $u_1$ or $u_2$. Hence, for a fixed choice of $v_*$, we define the candidate-cut-pairs by $\mathcal{Q} = \{(X, (X_L, X_R)) \in \mathcal{C}(G) : X \in \mathcal{R} \text{ and } v_* \in X_L\}$. We must check that these definitions satisfy the requirements of Corollary 3.3.
Lemma 3.4.
Let $\mathbf{w} \colon V \to [2|V|]$ be a weight function, and let $\mathcal{S}_w$ and $\mathcal{Q}_w$ be as defined above. Then we have for every $w$ that $|\mathcal{Q}_w| \equiv |\{X \in \mathcal{S}_w : v_* \in X\}| \pmod{2}$.
Lemma 3.2 implies that $|\mathcal{Q}_w| = \sum_{X \in \mathcal{R}_w,\, v_* \in X} 2^{\mathrm{cc}(G[X]) - 1}$. Hence, $|\mathcal{Q}_w| \equiv |\{X \in \mathcal{R}_w : v_* \in X \text{ and } \mathrm{cc}(G[X]) = 1\}| = |\{X \in \mathcal{S}_w : v_* \in X\}| \pmod{2}$. ∎
Next, we describe the procedure for Connected Vertex Cover.
Given a connected graph $G$, a vertex $v_*$, an integer $k$, a weight function $\mathbf{w} \colon V \to [2|V|]$, and an elimination forest $T$ of $G$ of depth $d$, we can determine $|\mathcal{Q}_w|$ modulo 2 for every weight $w$ in time $\mathcal{O}^*(3^{d})$ and polynomial space. In particular, Algorithm 2 determines $|\mathcal{Q}_w|$ modulo 2 for a specified target weight $w$ in the same time and space.
For the discussion of the algorithm, it is convenient to drop the cardinality constraint in $\mathcal{R}$ and $\mathcal{Q}$ and to define these sets for every induced subgraph of $G$. Hence, we define for every induced subgraph $G'$ of $G$ the set $\mathcal{R}(G') = \{X \subseteq V(G') : X \text{ is a vertex cover of } G'\}$ and the set $\mathcal{Q}(G') = \{(X, (X_L, X_R)) : X \in \mathcal{R}(G'),\ (X_L, X_R) \text{ is a consistent cut of } G'[X], \text{ and } v_* \in X_L \text{ whenever } v_* \in V(G')\}$.
Similar to Pilipczuk and Wrochna, our algorithm will compute a multivariate polynomial in the formal variables $Z_W$ and $Z_S$, where the coefficient of the monomial $Z_W^{w} Z_S^{s}$ is the cardinality of $\{(X, (X_L, X_R)) \in \mathcal{Q}(G) : \mathbf{w}(X) = w \text{ and } |X| = s\}$ modulo 2, i.e., the formal variables track the weight and size of candidate solutions. In particular, the coefficient of $Z_W^{w} Z_S^{k}$ is $|\mathcal{Q}_w|$ modulo 2 for every $w$. Polynomials act as an appropriate data structure, because addition and multiplication of polynomials naturally update the weight and size trackers correctly.
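One possible realization of this data structure (a sketch under our own naming conventions, not the paper's implementation): a polynomial over $\mathbb{Z}_2$ in two variables can be stored sparsely as the set of exponent pairs whose coefficient is 1, so that addition is symmetric difference and multiplication adds exponents.

```python
# A polynomial over GF(2) in the formal variables Z_W (weight) and Z_S
# (size), stored as the set of exponent pairs (w, s) with coefficient 1.

def poly_add(p, q):
    """Addition over GF(2) is symmetric difference of the monomial sets."""
    return p ^ q

def poly_mul(p, q):
    """Multiplication: exponents add, coefficients accumulate modulo 2."""
    result = set()
    for (w1, s1) in p:
        for (w2, s2) in q:
            result ^= {(w1 + w2, s1 + s2)}  # toggling realizes mod-2 addition
    return result

ONE = frozenset({(0, 0)})  # the constant polynomial 1
p = {(2, 1), (3, 1)}       # Z_W^2 Z_S + Z_W^3 Z_S
q = {(2, 1)}               # Z_W^2 Z_S
```

Here `poly_mul(p, q)` yields the monomial set of $Z_W^{4} Z_S^{2} + Z_W^{5} Z_S^{2}$, and adding a polynomial to itself gives the zero polynomial, as expected over $\mathbb{Z}_2$.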
The output polynomial is computed by a branching algorithm (see Algorithm 2) that starts at the root of the elimination forest and proceeds downwards to the leaves. At every vertex $v$ we branch into several states, denoted $\mathbf{1}_L$, $\mathbf{1}_R$, and $\mathbf{0}$. The interpretation of the states $\mathbf{1}_L$ and $\mathbf{1}_R$ is that the vertex $v$ is inside the vertex cover and the subscript denotes to which side of the consistent cut it belongs. Vertices that do not belong to the vertex cover have state $\mathbf{0}$.
For each vertex $v$ there are multiple subproblems on $\mathrm{tree}[v]$. When solving a subproblem, we need to take into account the choices that we have already made, i.e., the branching decisions for the ancestors of $v$. At each vertex $v$ we compute two different types of polynomials, which correspond to two different kinds of partial solutions: those that are subsets of $\mathrm{tree}(v)$ and respect the choices made on $\mathrm{tail}[v]$, and those that are subsets of $\mathrm{tree}[v]$ and respect the choices made on $\mathrm{tail}(v)$. Distinguishing these two types of partial solutions is important when $v$ has multiple children in $T$. Formally, the previous branching decisions are described by assignments $f \colon \mathrm{tail}[v] \to \{\mathbf{1}_L, \mathbf{1}_R, \mathbf{0}\}$ or $f \colon \mathrm{tail}(v) \to \{\mathbf{1}_L, \mathbf{1}_R, \mathbf{0}\}$ from $\mathrm{tail}[v]$ or $\mathrm{tail}(v)$ to the states, respectively.
For every vertex $v$ and assignment $f \colon \mathrm{tail}[v] \to \{\mathbf{1}_L, \mathbf{1}_R, \mathbf{0}\}$ we define the partial solutions at $v$, but excluding $v$, that respect $f$ by
$$\mathcal{Q}_v(f) = \{(X, (X_L, X_R)) \in \mathcal{C}(G[\mathrm{tree}(v)]) : (X \cup f^{-1}(\{\mathbf{1}_L, \mathbf{1}_R\}),\ (X_L \cup f^{-1}(\mathbf{1}_L),\ X_R \cup f^{-1}(\mathbf{1}_R))) \in \mathcal{Q}(G[\mathrm{broom}[v]])\}.$$
So, $\mathcal{Q}_v(f)$ consists of consistently cut subgraphs of $G[\mathrm{tree}(v)]$ that are extended by $f$ to valid candidate-cut-pairs for $G[\mathrm{broom}[v]]$, meaning that $X \cup f^{-1}(\{\mathbf{1}_L, \mathbf{1}_R\})$ is a vertex cover of $G[\mathrm{broom}[v]]$ and $(X_L \cup f^{-1}(\mathbf{1}_L), X_R \cup f^{-1}(\mathbf{1}_R))$ is a consistent cut of it.
Very similarly, for every vertex $v$ and assignment $f \colon \mathrm{tail}(v) \to \{\mathbf{1}_L, \mathbf{1}_R, \mathbf{0}\}$ we define the partial solutions at $v$, possibly including $v$, that respect $f$ by
$$\mathcal{Q}_{[v]}(f) = \{(X, (X_L, X_R)) \in \mathcal{C}(G[\mathrm{tree}[v]]) : (X \cup f^{-1}(\{\mathbf{1}_L, \mathbf{1}_R\}),\ (X_L \cup f^{-1}(\mathbf{1}_L),\ X_R \cup f^{-1}(\mathbf{1}_R))) \in \mathcal{Q}(G[\mathrm{broom}[v]])\}.$$
Thus, for the root $r$ of $T$ we have $\mathcal{Q}_{[r]}(\emptyset) = \mathcal{Q}(G)$.
We keep track of the partial solutions $\mathcal{Q}_v(f)$ and $\mathcal{Q}_{[v]}(f)$ using polynomials which we define now. For every vertex $v$ and assignment $f \colon \mathrm{tail}[v] \to \{\mathbf{1}_L, \mathbf{1}_R, \mathbf{0}\}$ we will compute a polynomial $P_v(f) \in \mathbb{Z}_2[Z_W, Z_S]$ where
$$P_v(f) = \sum_{(X, (X_L, X_R)) \in \mathcal{Q}_v(f)} Z_W^{\mathbf{w}(X)} Z_S^{|X|} \pmod{2}.$$
Similarly, for every vertex $v$ and assignment $f \colon \mathrm{tail}(v) \to \{\mathbf{1}_L, \mathbf{1}_R, \mathbf{0}\}$ we will compute a polynomial $P_{[v]}(f) \in \mathbb{Z}_2[Z_W, Z_S]$ where
$$P_{[v]}(f) = \sum_{(X, (X_L, X_R)) \in \mathcal{Q}_{[v]}(f)} Z_W^{\mathbf{w}(X)} Z_S^{|X|} \pmod{2}.$$
Algorithm 2 computes the polynomial $P_{[r]}(\emptyset)$, where $r$ is the root of $T$, and extracts the appropriate coefficient of $Z_W^{w} Z_S^{k}$. To compute $P_{[r]}(\emptyset)$ we employ recurrences for $P_v(f)$ and $P_{[v]}(f)$. We proceed by describing the recurrence for $P_v(f)$.
In the case that $v$ is a leaf node in $T$, i.e., $\mathrm{tree}(v) = \emptyset$, we can compute $P_v(f)$ by
$$P_v(f) = \left[(f^{-1}(\{\mathbf{1}_L, \mathbf{1}_R\}),\ (f^{-1}(\mathbf{1}_L), f^{-1}(\mathbf{1}_R))) \in \mathcal{Q}(G[\mathrm{tail}[v]])\right], \quad (1)$$
which checks whether the assignment $f$ induces a valid partial solution. This is the only step in which we explicitly ensure that we are computing only vertex covers; in all other steps this will not be required. If $v$ is not a leaf, then $P_v(f)$ is computed by the recurrence
$$P_v(f) = \prod_{u \in \mathrm{children}(v)} P_{[u]}(f). \quad (2)$$
Finally, $P_{[v]}(f)$ is computed by branching on the state of $v$:
$$P_{[v]}(f) = Z_W^{\mathbf{w}(v)} Z_S \cdot P_v(f[v \mapsto \mathbf{1}_L]) + Z_W^{\mathbf{w}(v)} Z_S \cdot P_v(f[v \mapsto \mathbf{1}_R]) + P_v(f[v \mapsto \mathbf{0}]). \quad (3)$$
We will now prove the correctness of Equations 1 through 3. First of all, observe that when $\{u, u'\}$ is an edge of $G[\mathrm{tail}[v]]$ but $f(u) = f(u') = \mathbf{0}$, then no partial solution respects $f$, as the edge would be uncovered; similarly, no partial solution respects $f$ when $f(u) = \mathbf{1}_L$ and $f(u') = \mathbf{1}_R$ for an edge $\{u, u'\}$, as the cut would not be consistent. This property is ensured by Equation 1 and preserved by the recurrences Equation 2 and Equation 3. To see that Equation 1 is correct, notice that when $v$ is a leaf node in $T$ we have that $\mathrm{tree}(v) = \emptyset$ and hence the only consistently cut subgraph of $G[\mathrm{tree}(v)]$ is $(\emptyset, (\emptyset, \emptyset))$. Therefore, we only need to verify whether this is a valid partial solution in $\mathcal{Q}_v(f)$, which reduces to the predicate on the right-hand side of Equation 1.
For Equations 2 and 3, we have to establish bijections between the objects counted on either side of the respective equation and argue that size and weight are updated correctly. We proceed by proving the correctness of Equation 2, which is the only equation where the proof of correctness requires the special properties of elimination forests. We consider any $(X, (X_L, X_R)) \in \mathcal{Q}_v(f)$. We can uniquely partition $X$ into subsets of $\mathrm{tree}[u]$ for $u \in \mathrm{children}(v)$ by setting $X^u = X \cap \mathrm{tree}[u]$. Furthermore, by setting $X^u_L = X_L \cap \mathrm{tree}[u]$ and $X^u_R = X_R \cap \mathrm{tree}[u]$ we obtain $(X^u, (X^u_L, X^u_R)) \in \mathcal{Q}_{[u]}(f)$, because we are only restricting the vertex cover and consistent cut to the induced subgraph $G[\mathrm{broom}[u]]$ of $G[\mathrm{broom}[v]]$. Vice versa, any combination of partial solutions $(X^u, (X^u_L, X^u_R)) \in \mathcal{Q}_{[u]}(f)$, one for each $u \in \mathrm{children}(v)$, yields a partial solution in $\mathcal{Q}_v(f)$, as there are no edges between $\mathrm{tree}[u]$ and $\mathrm{tree}[u']$ for $u \neq u' \in \mathrm{children}(v)$ by the properties of an elimination forest. Since the sets $\mathrm{tree}[u]$, $u \in \mathrm{children}(v)$, partition $\mathrm{tree}(v)$, we obtain the size and weight of $X$ by summing over the sizes and weights of the sets $X^u$ respectively. Hence, these values are updated correctly by polynomial multiplication.
It remains to prove the correctness of Equation 3. This time, consider any $(X, (X_L, X_R)) \in \mathcal{Q}_{[v]}(f)$. Now, there are three possible cases depending on the state of $v$ in this partial solution.
If $v \in X_L$, then we claim that $(X', (X'_L, X_R)) \in \mathcal{Q}_v(f[v \mapsto \mathbf{1}_L])$, where $X' = X \setminus \{v\}$ and $X'_L = X_L \setminus \{v\}$. This is true due to the identities $X' \cup (f[v \mapsto \mathbf{1}_L])^{-1}(\{\mathbf{1}_L, \mathbf{1}_R\}) = X \cup f^{-1}(\{\mathbf{1}_L, \mathbf{1}_R\})$, and $X'_L \cup (f[v \mapsto \mathbf{1}_L])^{-1}(\mathbf{1}_L) = X_L \cup f^{-1}(\mathbf{1}_L)$, and $X_R \cup (f[v \mapsto \mathbf{1}_L])^{-1}(\mathbf{1}_R) = X_R \cup f^{-1}(\mathbf{1}_R)$, which mean that this implicitly defined mapping preserves the vertex cover and the consistent cut in the predicates of $\mathcal{Q}_{[v]}(f)$ and $\mathcal{Q}_v(f[v \mapsto \mathbf{1}_L])$. Vice versa, any partial solution in $\mathcal{Q}_v(f[v \mapsto \mathbf{1}_L])$ can be extended to such a partial solution in $\mathcal{Q}_{[v]}(f)$ by adding $v$ to $X'$ and $X'_L$. Since $\mathbf{w}(X) = \mathbf{w}(X') + \mathbf{w}(v)$ and $|X| = |X'| + 1$, multiplication by $Z_W^{\mathbf{w}(v)} Z_S$ updates size and weight correctly.
If $v \in X_R$, the proof is analogous to case 1.
If $v \notin X$, then we have that $(X, (X_L, X_R)) \in \mathcal{Q}_v(f[v \mapsto \mathbf{0}])$, where we use that $X \subseteq \mathrm{tree}(v)$ since $v \notin X$. Vice versa, any $(X, (X_L, X_R)) \in \mathcal{Q}_v(f[v \mapsto \mathbf{0}])$ must also be in $\mathcal{Q}_{[v]}(f)$. Since $X$ does not change, we do not need to update size or weight and do not multiply by further formal variables in this case.
If $v = v_*$, then Equation 3 simplifies to $P_{[v]}(f) = Z_W^{\mathbf{w}(v)} Z_S \cdot P_v(f[v \mapsto \mathbf{1}_L])$, because $v_*$ must belong to the left side of the cut and hence only the first case occurs. Note that by establishing these bijections in the proofs of correctness, we have actually shown that Equations 1 through 3 are also correct when working in $\mathbb{Z}[Z_W, Z_S]$ instead of $\mathbb{Z}_2[Z_W, Z_S]$.
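For concreteness, the recurrences above can be prototyped as a brute-force branching procedure. The following self-contained Python sketch (function and state names are ours, not Algorithm 2's pseudocode) computes the output polynomial for a small example; polynomials are sets of exponent pairs with coefficients in GF(2), as in the data structure discussed earlier.

```python
# States: 'L'/'R' = in the cover, left/right side of the cut; 'O' = out.

def solve(graph, parent, root, weights, v_star):
    children = {v: [] for v in parent}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)

    def mul(p, q):  # polynomial multiplication over GF(2)
        r = set()
        for a in p:
            for b in q:
                r ^= {(a[0] + b[0], a[1] + b[1])}
        return r

    def valid(assign):  # base case: vertex cover + consistent cut on tail[v]
        for u, su in assign.items():
            for x in graph[u]:
                if x in assign:
                    if su == 'O' and assign[x] == 'O':
                        return False  # uncovered edge
                    if {su, assign[x]} == {'L', 'R'}:
                        return False  # edge across the cut
        return True

    def p_excl(v, assign):  # P at v, state of v already fixed in assign
        if not children[v]:
            return {(0, 0)} if valid(assign) else set()
        result = {(0, 0)}
        for c in children[v]:  # product over the children
            result = mul(result, p_incl(c, assign))
        return result

    def p_incl(v, assign):  # branch on the state of v
        total = set()
        for s in (['L'] if v == v_star else ['L', 'R', 'O']):
            sub = p_excl(v, {**assign, v: s})
            if s != 'O':  # taking v into the cover updates weight and size
                sub = {(w + weights[v], k + 1) for w, k in sub}
            total ^= sub
        return total

    return p_incl(root, {})

# Triangle 0-1-2 with elimination tree 0 -> 1 -> 2 and forced vertex 0.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
parent = {0: None, 1: 0, 2: 1}
result = solve(graph, parent, 0, {0: 1, 1: 2, 2: 4}, v_star=0)
# The connected vertex covers containing vertex 0 are {0,1} (weight 3),
# {0,2} (weight 5), and {0,1,2} (weight 7), each consistent with exactly
# one cut, so result == {(3, 2), (5, 2), (7, 3)}.
```

The monomials of size $2$ witness that a connected vertex cover of size $2$ exists. Note that this sketch memoizes nothing, which is exactly the point: it runs with recursion depth linear in the forest depth and stores only a constant number of polynomials per call.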
Time and Space Analysis.
We finish the proof by discussing the time and space requirements. Observe that the coefficients of our polynomials are in $\mathbb{Z}_2$ and hence can be added and multiplied in constant time. Furthermore, all considered polynomials consist of at most polynomially many monomials, as the weight and size of a candidate solution are polynomial in $n$. Therefore, we can add and multiply the polynomials in polynomial time and hence compute the recurrences Equation 1, Equation 2, and Equation 3 in polynomial time. Every polynomial $P_v(f)$ and $P_{[v]}(f)$ is computed at most once, because $P_v(f)$ is only called by $P_{[v]}(f')$ where $f$ is an extension of $f'$, i.e., $f = f'[v \mapsto s]$ for some state $s$, and $P_{[v]}(f)$ is only called by $P_u(f)$ where $u$ is the parent of $v$. Hence, the recurrences only make disjoint calls and no polynomial is computed more than once. For a fixed vertex $v$ there are at most $3^{d}$ choices for $f$. Thus, Algorithm 2 runs in time $\mathcal{O}^*(3^{d})$ for elimination forests of depth $d$. Finally, Algorithm 2 requires only polynomial space, because it has a recursion depth of $\mathcal{O}(d)$ and every recursive call needs to store at most a constant number of polynomials, which require, by the previous discussion, only polynomial space each. ∎
There is a Monte-Carlo algorithm that, given an elimination forest of depth $d$ for a graph $G$, solves Connected Vertex Cover on $G$ in time $\mathcal{O}^*(3^{d})$ and polynomial space. The algorithm cannot give false positives and may give false negatives with probability at most $1/2$.
We remark that calling Algorithm 2 for each target weight $w$ (as in Algorithm 1) would redundantly compute the polynomial $P_{[r]}(\emptyset)$ several times, although it suffices to compute $P_{[r]}(\emptyset)$ once and then look up the appropriate coefficient depending on $w$.
If one is interested in solving Weighted Connected Vertex Cover, then it is straightforward to adapt our approach to polynomially-sized weights: instead of using $Z_S$ to track the size of the vertex covers, we let it track their cost and change recurrence Equation 3 accordingly.
3.3 Adapting to Other Problems
The high-level structure of the counting procedure for the other problems is very similar to that of Algorithm 2 for Connected Vertex Cover. One possible difference is that we might have to consider solutions over a more complicated universe than just the vertex set $V$. Also, we might want to track more data about the partial solutions and hence use more than just two formal variables for the polynomials. Both of these changes occur for Feedback Vertex Set, which is presented in the next section. The equation for the base case (cf. Equation 1) and the recurrence for $P_{[v]}(f)$ (cf. Equation 3) are also problem-dependent.
Time and Space Analysis.
The properties that we require of the polynomials and equations in the time and space analysis, namely that the equations can be evaluated in polynomial time and every polynomial is computed at most once, remain true by the same arguments as for Connected Vertex Cover. The running time essentially results from the number of computed polynomials, which increases when we use more states for the vertices. Again denoting the set of states by $\Sigma$, we obtain a running time of $\mathcal{O}^*(|\Sigma|^{d})$ on elimination forests of depth $d$. The space analysis also remains valid, because the recursion depth remains $\mathcal{O}(d)$ and for each call we need to store only a constant number of polynomials, each using at most polynomial space.
3.4 Feedback Vertex Set
Feedback Vertex Set
Input: An undirected graph $G = (V, E)$ and an integer $k$.
Question: Is there a set $Y \subseteq V$, $|Y| \leq k$, such that $G[V \setminus Y]$ is a forest?
Feedback Vertex Set differs from the other problems in that we do not have a positive connectivity requirement, but a negative connectivity requirement, i.e., we need to ensure that the remaining graph is badly connected in the sense that it contains no cycles. Cygan et al.  approach this via the well-known Lemma 3.7.
A graph with $n$ vertices and $m$ edges is a forest if and only if it has at most $n - m$ connected components.
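Lemma 3.7 translates directly into a test; a minimal Python sketch (names ours):

```python
def is_forest_by_component_count(vertices, edges):
    """Lemma 3.7: a graph with n vertices and m edges is a forest
    iff it has at most n - m connected components."""
    parent = {v: v for v in vertices}
    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    cc = len({find(v) for v in vertices})
    return cc <= len(vertices) - len(edges)

# A path on three vertices is a forest; a triangle is not.
```

The bound is tight for forests, where the component count equals $n - m$ exactly; any extra edge pushes $n - m$ below the component count.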
Applying Lemma 3.7 requires that we count how many vertices and edges remain after deleting a set $Y$ from $G$. We do not need to count exactly how many connected components remain, it suffices to enforce that there are not too many connected components. We will achieve this, like Cygan et al., by the use of marker vertices. In this case, our solutions are pairs $(X, M)$ with $M \subseteq X \subseteq V$, where we interpret $G[X]$ as the forest that remains after removing a feedback vertex set and the marked vertices are represented by the set $M$. To bound the number of connected components, we want that every connected component of $G[X]$ contains at least one marked vertex. By forcing the marked vertices to the left side of the cut, we ensure that candidates where $G[X]$ has a connected component not containing a marked vertex, in particular those with more than $|M|$ connected components, cancel modulo 2. The formal definitions are $\mathcal{R} = \{(X, M) : M \subseteq X \subseteq V \text{ and } |X| = n - k\}$, and $\mathcal{S} = \{(X, M) \in \mathcal{R} : G[X] \text{ is a forest and every connected component of } G[X] \text{ contains a vertex of } M\}$, and $\mathcal{Q} = \{((X, M), (X_L, X_R)) : (X, M) \in \mathcal{R} \text{ and } (X_L, X_R) \text{ is a consistent cut of } G[X] \text{ with } M \subseteq X_L\}$.
Since our solutions are pairs of two vertex sets, we need a larger universe to make the Isolation Lemma (Lemma 2.3) work. We use $U = V \times \{\mathbf{F}, \mathbf{M}\}$, hence a weight function $\mathbf{w} \colon U \to [2|U|]$ assigns two different weights $\mathbf{w}((v, \mathbf{F}))$ and $\mathbf{w}((v, \mathbf{M}))$ to a vertex $v$ depending on whether $v$ is marked or not. To make these definitions compatible with Corollary 3.3 we associate to each pair $(X, M)$ the set $((X \setminus M) \times \{\mathbf{F}\}) \cup (M \times \{\mathbf{M}\})$, which also allows us to extend the weight function to such pairs, i.e., $\mathbf{w}((X, M)) = \sum_{v \in X \setminus M} \mathbf{w}((v, \mathbf{F})) + \sum_{v \in M} \mathbf{w}((v, \mathbf{M}))$.
Lemma 3.8.
Let $(X, M)$ be such that $M \subseteq X \subseteq V$. The number of consistently cut subgraphs $(X, (X_L, X_R))$ such that $M \subseteq X_L$ is equal to $2^{\mathrm{cc}_M(G[X])}$, where $\mathrm{cc}_M(G[X])$ is the number of connected components of $G[X]$ that do not contain any vertex from $M$.
For a consistently cut subgraph $(X, (X_L, X_R))$ with $M \subseteq X_L$, any connected component of $G[X]$ that contains a vertex of $M$ must be completely contained in $X_L$. For all other connected components $C$ of $G[X]$, namely those counted by $\mathrm{cc}_M(G[X])$, we have that either $C \subseteq X_L$ or $C \subseteq X_R$. Thus, we arrive at the claimed number $2^{\mathrm{cc}_M(G[X])}$ of consistently cut subgraphs with $M \subseteq X_L$. ∎
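Lemma 3.8 can likewise be verified by brute force on small instances; in the following Python sketch (names ours), each component without a marker contributes a factor of 2:

```python
from itertools import combinations

def count_cuts_with_markers_left(X, edges, M):
    """Count consistent cuts (X_L, X_R) of X with all of M inside X_L."""
    X = sorted(X)
    sub = [(u, v) for u, v in edges if u in X and v in X]
    total = 0
    for r in range(len(X) + 1):
        for right in combinations(X, r):
            left = set(X) - set(right)
            if M <= left and all((u in left) == (v in left) for u, v in sub):
                total += 1
    return total

# G[X] has three components {0,1}, {2,3}, {4}; only vertex 0 is marked, so
# two components carry no marker and Lemma 3.8 predicts 2^2 = 4 cuts.
X = {0, 1, 2, 3, 4}
edges = [(0, 1), (2, 3)]
M = {0}
```

In contrast to Lemma 3.2, the exponent is $\mathrm{cc}_M$ rather than $\mathrm{cc} - 1$: all marked components are pinned to the left, and only the unmarked components remain free.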
To apply Lemma 3.7, we need to distinguish candidates by the number of edges and markers, in addition to the weight, hence we make the following definitions for every weight $w$ and integers $A$ and $B$:
$$\mathcal{R}_w^{A,B} = \{(X, M) \in \mathcal{R} : \mathbf{w}((X, M)) = w,\ |E(G[X])| = A,\ |M| = B\},$$
and analogously $\mathcal{S}_w^{A,B}$ and $\mathcal{Q}_w^{A,B}$, by restricting $\mathcal{S}$ and $\mathcal{Q}$ in the same way.
Lemma 3.9.
Let $\mathbf{w} \colon U \to [2|U|]$ be a weight function, and $\mathcal{S}_w^{A,B}$ and $\mathcal{Q}_w^{A,B}$ as defined above. Then we have for every $w$ and $A$, with $B = n - k - A$, that $|\mathcal{S}_w^{A,B}| \equiv |\mathcal{Q}_w^{A,B}| \pmod{2}$.
Lemma 3.8 implies that every pair $(X, M) \in \mathcal{R}$ is consistent with exactly $2^{\mathrm{cc}_M(G[X])}$ cuts $(X_L, X_R)$ with $M \subseteq X_L$. Hence, we have for all $w$ and $A$ that
$$|\mathcal{Q}_w^{A,B}| = \sum_{(X, M) \in \mathcal{R}_w^{A,B}} 2^{\mathrm{cc}_M(G[X])} \equiv |\{(X, M) \in \mathcal{R}_w^{A,B} : \mathrm{cc}_M(G[X]) = 0\}| \pmod{2}.$$
We certainly have $\mathcal{S}_w^{A,B} \subseteq \{(X, M) \in \mathcal{R}_w^{A,B} : \mathrm{cc}_M(G[X]) = 0\}$ by definition of $\mathcal{S}_w^{A,B}$. To see the other direction of the inclusion for $B = n - k - A$, observe that $\mathrm{cc}(G[X]) \geq |X| - |E(G[X])|$ for all $X \subseteq V$, and hence a pair $(X, M) \in \mathcal{R}_w^{A,B}$ with $\mathrm{cc}_M(G[X]) = 0$ must satisfy
$$n - k - A = B = |M| \geq \mathrm{cc}(G[X]) \geq |X| - |E(G[X])| = n - k - A.$$
Finally, Lemma 3.7 implies that $G[X]$ is a forest and this finishes the other direction of the inclusion. Thus, we have that $|\mathcal{S}_w^{A,B}| \equiv |\mathcal{Q}_w^{A,B}| \pmod{2}$. ∎
Note that by Lemma 3.7 a Feedback Vertex Set instance has a solution if and only if there is a choice of $w$ and $A$ such that $\mathcal{S}_w^{A,\, n - k - A}$ is nonempty.
Given a connected graph $G$, an integer $k$, a weight function $\mathbf{w} \colon V \times \{\mathbf{F}, \mathbf{M}\} \to [4|V|]$, and an elimination forest $T$ of $G$ of depth $d$, we can determine $|\mathcal{Q}_w^{A,B}|$ modulo 2 for every combination of $w$, $A$, and $B$ in time $\mathcal{O}^*(3^{d})$ and polynomial space.
Again, we drop the cardinality constraints from $\mathcal{R}$ and $\mathcal{Q}$ and define for induced subgraphs the variants