    Saving Space by Dynamic Algebraization Based on Tree Decomposition: Minimum Dominating Set

An algorithm is presented that solves the Minimum Dominating Set problem exactly in polynomial space, based on dynamic programming over a tree decomposition. A direct application of dynamic programming over a tree decomposition would result in an exponential-space algorithm, but we use zeta transforms to obtain a polynomial-space algorithm in exchange for a moderate increase in running time. This framework was pioneered by Lokshtanov and Nederlof (2010) and adapted to a dynamic setting by Fürer and Yu (2017). Our space-efficient algorithm is a parameterized algorithm with tree-depth and treewidth as parameters. The naive algorithm for Minimum Dominating Set runs in $O^*(2^n)$ time. Most previous work has focused on time complexity, but space optimization is a crucial aspect of algorithm design, since in several scenarios space is a more valuable resource than time. Our parameterized algorithm runs in $O^*(3^d)$ time, and its space complexity is $O(nk)$, where $d$ is the depth and $k$ is the width of the given tree decomposition. We observe that Reed's 1992 algorithm for constructing a tree decomposition of a graph uses only polynomial space. So, even if the tree decomposition is not given, we still obtain an efficient polynomial-space algorithm. There are other polynomial-space algorithms for this problem, but they are not efficient for graphs of small tree-depth.

Authors

01/21/2019


1 Introduction

Improving the running time of exponential-time algorithms for finding exact solutions of NP-complete problems like Minimum Dominating Set (MDS) has been studied for almost half a century. In the beginning the emphasis was on running time, but gradually some research started addressing space optimization as well. In 1977, Tarjan and Trojanowski improved the running time of Maximum Independent Set to $O(2^{n/3})$ using polynomial space. The Independent Set problem is strongly connected to Dominating Set, since an independent set is a minimal dominating set if and only if it is a maximal independent set. We should mention, however, that a dominating set is not necessarily an independent set. So we look at results on the Maximum Independent Set problem as well, since they give some ideas about the dominating set problem. Then in 1986, Jian improved Tarjan's algorithm and gave a better running time ($O(2^{0.304n})$), again using polynomial space. There is a trade-off between running time and space complexity. In the mentioned paper, Jian states that there is a faster algorithm by Berman, but this time using exponential space (based on a private discussion between Jian and Berman).
Going back to the dominating set problem, in 2005 Grandoni gave an algorithm which runs in $O(1.9053^n)$ using polynomial space. Before this paper, the best algorithm for the dominating set problem was the trivial one which checks all possible subsets. He gives an algorithm for the minimum set cover problem (MSC) whose running time is exponential in the dimension of the problem (in the set cover problem, the dimension is $d=|\mathcal{U}|+|\mathcal{S}|$, where $\mathcal{U}$ is the universe and $\mathcal{S}$ is a collection of subsets of $\mathcal{U}$; the goal is to find a minimum-size subcollection of $\mathcal{S}$ that covers all elements of $\mathcal{U}$). Since the minimum dominating set problem can be converted to a minimum set cover problem by setting $\mathcal{U}=V$ and $\mathcal{S}=\{N[v] : v\in V\}$, this yields the $O(1.9053^n)$ bound for MDS. Grandoni also gives an algorithm with a better running time for MDS. It is likewise based on converting the dominating set problem to a set cover problem, but this time it uses exponential space and runs in $O(1.8021^n)$. Grandoni uses a simple recursive algorithm for MSC and MDS, together with dynamic programming, which saves the answers to the partial problems. We present a framework to convert such a dynamic programming algorithm into an algorithm which uses only polynomial space. Our algorithm does not save the results of the smaller problems; it computes them again whenever they are needed.
In 2005, Fomin, Grandoni and Kratsch introduced an approach named “measure and conquer”, based on the “branch and reduce” approach given by Davis and Putnam in 1960. In the branch and reduce approach, there are reduction rules which simplify the problem, and branching rules which are applied when the problem cannot be reduced further and several subproblems have to be solved. Measure and conquer is mostly used to measure subproblems more accurately and give tighter bounds. They chose Grandoni's algorithm mentioned above and, by choosing a better measure for the subproblems, showed that its time complexity is actually $O(1.5137^n)$ using exponential space, and $O(1.5263^n)$ using polynomial space. This shows that in some cases exponential recursive algorithms are not measured very well, and by measuring them accurately we may obtain a better running time for the same algorithm. They also gave a lower bound on the time complexity of this particular algorithm (not of the problem itself).
Later, in 2008, van Rooij and Bodlaender gave two faster algorithms for the dominating set problem, both using polynomial space. Like the previous work by Grandoni, they took advantage of the minimum set cover problem and formulated MDS as a set cover instance. In this paper they modified the measure and conquer method so that it not only analyzes the running time more precisely but also drives modifications of the algorithm itself. For this purpose, they added a group of rules to the previous ones which tell when the algorithm should be improved.
In 2010, Lokshtanov and Nederlof gave a framework turning dynamic programming algorithms for several problems that use exponential space into ones which use only polynomial space. They used the discrete Fourier transform (DFT) and the zeta transform to replace the complicated convolution operation by a pointwise multiplication, which is simpler. They gave conditions under which dynamic programming algorithms can be turned into algorithms with almost the same time complexity using much less space.
Then, in 2017, Fürer and Yu combined this idea with tree decompositions, taking Perfect Matching as a case study. Unlike Lokshtanov and Nederlof, they handled a dynamic underlying set, and they introduced the zeta transform for dynamic underlying sets. We take a similar approach to MDS.

1.1 Previous works

We reviewed some of the previous works on MDS in the previous section. Here we gather them in a table and compare their running times and space complexities.

| Author(s) | Year | Time Complexity | Space Complexity | Comments |
|---|---|---|---|---|
| Trivial enumeration | – | $O^*(2^n)$ | polynomial | checks all subsets |
| Tarjan and Trojanowski | 1977 | $O(2^{n/3})$ | polynomial | MIS |
| Grandoni | 2005 | $O(1.9053^n)$ | polynomial | |
| Grandoni | 2005 | $O(1.8021^n)$ | exponential | |
| Fomin et al. | 2005 | $O(1.5263^n)$ | polynomial | measure and conquer |
| van Rooij and Bodlaender | 2008 | $O(1.5134^n)$ | polynomial | |
| van Rooij and Bodlaender | 2008 | – | polynomial | |

2 Notations

In this section we recall the problem and fix the notation we use later. Some of the definitions in this section follow the notation of the “Parameterized Algorithms” book.

Definition 1

Closed Neighbourhood of a Subset of Vertices: Let $X\subseteq V$ be a subset of vertices of a given graph $G=(V,E)$. Then the closed neighbourhood of $X$ is defined as below:

  $N[X]=\bigcup_{v\in X}N[v],$

where $N[v]$ is the closed neighbourhood of $v$ (that is, $v$ together with all its neighbours).

Definition 2

Dominating Set: A subset of vertices $D\subseteq V$ is a dominating set of a given graph $G=(V,E)$ if $N[D]=V$.
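As a small, self-contained illustration (the function and graph names are ours, not the paper's), the defining condition $N[D]=V$ can be checked directly from Definitions 1 and 2:

```python
def closed_neighborhood(graph, X):
    """N[X]: the vertices of X together with all their neighbours."""
    result = set(X)
    for v in X:
        result.update(graph[v])
    return result

def is_dominating_set(graph, D):
    """D is a dominating set iff N[D] covers every vertex of the graph."""
    return closed_neighborhood(graph, D) == set(graph)

# The path 1-2-3-4 as an adjacency dictionary.
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

For $P_4$, the set $\{2,3\}$ is dominating, while the single vertex $\{2\}$ is not, since vertex 4 stays uncovered.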

2.1 Tree Decomposition

For a given graph $G=(V,E)$, a tree decomposition of $G$ is a tree $T$ such that each node $x$ in $T$ is associated with a set $B_x$ (called the bag of $x$) of vertices of $G$, and $T$ has the following properties:

• The union of all bags is equal to $V$. In other words, for each $v\in V$, there exists at least one bag $B_x$ containing $v$.

• For every edge $\{u,v\}\in E$, there exists a node $x$ such that $\{u,v\}\subseteq B_x$.

• For any nodes $x,y$, and any node $z$ belonging to the path connecting $x$ and $y$ in $T$, $B_x\cap B_y\subseteq B_z$.
The width of a tree decomposition is the size of its largest bag minus one. The treewidth of a graph $G$ is the minimum width over all tree decompositions of $G$, denoted $tw(G)$. We use the letter $k$ for the treewidth of the graph we are working on. In 1987, Arnborg et al. showed that constructing a tree decomposition of minimum width is an NP-hard problem. In our case we do not need an optimal tree decomposition, as a near-optimal tree decomposition also works. A linear-time algorithm finding a minimum-width tree decomposition of a graph whose treewidth is bounded by a constant was given by Bodlaender. Bodlaender et al. gave an approximation algorithm; improved approximation algorithms can be found in [1, 2, 7], and the approximation ratio was further improved to $O(\sqrt{\log tw})$ by Feige et al. It is well known that every tree decomposition can be modified in polynomial time into a nice tree decomposition with the same width and size.
We use the notion of a modified nice tree decomposition, defined by Fürer and Yu, with a small modification: instead of introducing all edges in one node, we introduce them one by one. This helps us understand the procedure better. We also require leaf nodes to have empty bags.
We use $T(G)$, or simply $T$, to denote a nice tree decomposition of a graph $G$.
For a node $x$ in $T$, let $T_x$ be the subtree rooted at $x$.
We define $G_x=(V_x,E_x)$ to be the graph with $V_x$ being the union of all bags in $T_x$ and $E_x$ being the set of all edges introduced in $T_x$. With the traditional method of using the subgraphs induced by the already introduced vertices, i.e., by automatically including all edges, we would obtain an undesired exponential blow-up in the running time.
We take advantage of solving partial problems of different sizes and then solve the main problem from the partial ones. In contrast to previous works, here for solving partial problems we do not have only two sets. We maintain a partition of the bag into three sets:

• “Dominating (Completed) set”: In each partial problem, this set is the set of vertices which are in the dominating set. We denote this set by $C$ (Completed).

• “Dominated set”: In each partial problem, this set is the set of vertices which are not contained in the dominating set $C$, and are dominated by vertices in $C$. We denote this set by $D$ (Dominated).

• “Waiting set”: In each partial problem, this set is the set of vertices which are not contained in the dominating set $C$, and are not dominated by it (neither dominating nor dominated “yet”). We denote this set by $W$ (Waiting to be dominated).

For each node $x$ (with bag $B_x$), we define a function $f:B_x\to\{C,D,W\}$. This function tells, for each vertex in $B_x$, in which of the sets $C$, $D$ or $W$ it lies. If $v\in B_x$, then:

• If $f(v)=C$, it means that $v\in C$.

• If $f(v)=D$, it means that $v\in D$.

• If $f(v)=W$, it means that $v\in W$.

We follow a bottom-up approach: we start from the leaves, and in each step we first solve the problem for the child(ren) and then use this computation for the parent node.
The underlying reason why we have the third option (the set $W$) is that some vertices of a bag which are in $W$ may only be dominated at higher nodes (in the bottom-up approach). For a function $f$ over $B_x$, let us define $d_x(f)$ to be the minimum size of a set $S\subseteq V_x$ such that:

• $S\cap B_x=f^{-1}(C)$, i.e., $S$ is consistent with the assignment $f$ on the bag.

• All vertices of $V_x\setminus f^{-1}(W)$ are either in $S$ or adjacent to a vertex in $S$. That is, every vertex which is not labeled $W$ is either dominating (in $S$) or dominated by some vertex of $S$.

If there is no such set for $f$, we assign $d_x(f)=+\infty$. Our goal is to compute $d_r(f)$ for the root $r$ of our tree decomposition $T$ (whose bag is empty, so $f$ is the empty assignment). We start from the leaves and handle each node after handling its child (or children) first. Before introducing the recursive formulas for each kind of node in $T$, let us define a specific variant of $f$: for $v\in B_x$ and $p\in\{C,D,W\}$, we define $f_{v\to p}$ in this way:

  $f_{v\to p}(x)=\begin{cases}f(x)&\text{if }x\neq v,\\ p&\text{if }x=v.\end{cases}$  (1)

We also use the restriction of $f$ to a subset: for a function $f$ over $B_x$ and a subset $Y\subseteq B_x$, we denote the restriction of $f$ to $Y$ by $f|_Y$.

Definition 3

For a universe $U$ and a ring $R$, the set $R[2^U]$ is the set of all functions $f:2^U\to R$.

Now we review the definition of the Zeta and Möbius Transforms.
Zeta Transform: The zeta transform of a function $f\in R[2^U]$ is defined by:

  $\zeta f[Y]=\sum_{X\subseteq Y}f[X].$  (2)

Möbius Transform/Inversion: The Möbius transform of a function $f\in R[2^U]$ is defined to be:

  $\mu f[Y]=\sum_{X\subseteq Y}(-1)^{|Y\setminus X|}f[X].$  (3)
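To make the pair of transforms concrete, here is a direct implementation over a small universe, with sets represented as frozensets (this is deliberately naive; the fast zeta transform would use $O(2^{|U|}\,|U|)$ ring operations). It follows Eqs. (2) and (3) literally:

```python
from itertools import combinations

def subsets(S):
    """All subsets of S as frozensets."""
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def zeta(f, universe):
    """(zeta f)[Y] = sum of f[X] over all X that are subsets of Y."""
    return {Y: sum(f[X] for X in subsets(Y)) for Y in subsets(universe)}

def moebius(f, universe):
    """(mu f)[Y] = sum over X subset of Y of (-1)^{|Y \\ X|} f[X]."""
    return {Y: sum((-1) ** len(Y - X) * f[X] for X in subsets(Y))
            for Y in subsets(universe)}
```

Applying `moebius` after `zeta` (or vice versa) returns the original table, which is exactly the inversion property stated in Theorem 3.1 below.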

We will see later that in the recursion formula for join nodes we need the union product $f*_u g$.

Definition 4

Given $f,g\in R[2^U]$ and $X\subseteq U$, the Union Product of $f$ and $g$, denoted $f*_u g$, is defined as:

  $(f*_u g)[X]=\sum_{X_1\cup X_2=X}f[X_1]\,g[X_2].$  (4)

3 Saving Space Using Zeta and Möbius Transforms

In 2010, Lokshtanov and Nederlof used the zeta and Möbius transforms to save space on some problems like the unweighted Steiner Tree problem. Then in 2017, Fürer and Yu also used this approach in a dynamic setting, based on tree decompositions, for the Perfect Matching problem.
Before going into depth, let us review a few theorems.

Theorem 3.1

([17, 18]) The Möbius transform is the inverse of the zeta transform, i.e., given $f\in R[2^U]$ and $Y\subseteq U$, $\mu(\zeta f)[Y]=f[Y]$.

Theorem 3.2

() Applying the zeta transform to a union product results in a simple pointwise multiplication of the zeta transforms, i.e., given $f,g\in R[2^U]$ and $X\subseteq U$,

  $\zeta(f*_u g)[X]=\big((\zeta f)\odot(\zeta g)\big)[X],$  (5)

where the operation $\odot$ is pointwise multiplication.
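A numerical sanity check of Theorem 3.2 (our own toy values; brute force, so only sensible for tiny universes): we compute a union product directly via Eq. (4) and confirm that its zeta transform equals the pointwise product of the individual zeta transforms:

```python
from itertools import combinations

def subsets(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def zeta(f, universe):
    """(zeta f)[Y] = sum of f[X] over subsets X of Y (Eq. 2)."""
    return {Y: sum(f[X] for X in subsets(Y)) for Y in subsets(universe)}

def union_product(f, g, universe):
    """(f *_u g)[X] = sum over X1 union X2 = X of f[X1] * g[X2] (Eq. 4)."""
    h = {X: 0 for X in subsets(universe)}
    for X1 in subsets(universe):
        for X2 in subsets(universe):
            h[X1 | X2] += f[X1] * g[X2]
    return h
```

The union product needs all pairs of subsets, while after the transform only one multiplication per point remains; this is the whole reason for working in the zeta-transformed world.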

All of the previous works which used either the DFT or the zeta transform have one central property in common: the recursive formula at join nodes can be expressed using a union product operation. The union product and the subset convolution are complicated operations in comparison to pointwise multiplication or pointwise addition. That is why taking the zeta transform of a formula containing union products makes the computation easier. As noted in Theorem 3.2, taking the zeta transform of such a term results in a term with pointwise multiplications, which are much easier to handle than the union product. After doing the computation over the zeta-transformed values, we can apply the Möbius transform to the outcome to get the final result (based on Theorem 3.1). In other words, the direct computation has a mirror image using the zeta transforms of the intermediate values instead of the original ones. While the direct computation keeps track of exponentially many intermediate values, the computation over the zeta-transformed values partitions into exponentially many branches which can be executed one after another. We show later that this approach only moderately affects the exponential factor in the time complexity, while improving the space complexity to polynomial instead of exponential.

3.1 Counting the dominating sets of all sizes

For each node $x$ (of any kind) we introduce a polynomial to count the dominating sets of all sizes as follows:

  $P^C_x[D]=\sum_j a^C_{x,j}[D]\,y^j,$  (6)

where $a^C_{x,j}[D]$ is the number of dominating sets of size $j$ in $G_x$ whose intersection with the bag is $C$, and where $D$ is the set of all dominated vertices (excluding vertices in $C$) in the bag $B_x$. We apply the zeta transform to the coefficients of this polynomial (as functions of $D$).
Now, we show how to compute $(\zeta a^C_{x,j})[D]$ for each kind of node.

• Leaf node. Assume $x$ is a leaf node; then $B_x=\emptyset$. So $a^{\emptyset}_{x,j}[\emptyset]=0$ for all $j\neq 0$, since we do not have vertices in the bag, and $a^{\emptyset}_{x,0}[\emptyset]=1$, since there is an empty dominating set of size $0$. This implies that:

  $(\zeta a^{\emptyset}_{x,j})[\emptyset]=\begin{cases}0&\text{if }j\neq 0,\\ 1&\text{if }j=0.\end{cases}$  (7)
• Introduce vertex node. Assume $x$ is an introduce vertex node and let $x'$ be the child of $x$ such that $B_x=B_{x'}\cup\{v\}$. Then, if $v\notin D$, the dominating sets of any size are the same for both subtrees ($G_x$ and $G_{x'}$). Otherwise ($v\in D$), there is no dominating set in $G_x$ dominating $v$, since $v$ is isolated in $G_x$, which means $a^C_{x,j}[D]=0$:

  $a^C_{x,j}[D]=\begin{cases}a^{C\setminus\{v\}}_{x',j}[D]&\text{if }v\notin D,\\ 0&\text{if }v\in D.\end{cases}$  (8)

Now, applying the zeta transform to the equation above: if $v\in D$, then $(\zeta a^C_{x,j})[D]=(\zeta a^C_{x,j})[D\setminus\{v\}]$, and otherwise:

  $(\zeta a^C_{x,j})[D]=\sum_{Y\subseteq D}a^C_{x,j}[Y]=\sum_{Y\subseteq D}a^{C\setminus\{v\}}_{x',j}[Y]=(\zeta a^{C\setminus\{v\}}_{x',j})[D].$  (9)
• Forget node. Assume $x$ is a forget node and let $x'$ be the child of $x$ such that $B_x=B_{x'}\setminus\{v\}$. Then $v$ should be either in $C$ or in $D$, so:

  $a^C_{x,j}[D]=a^C_{x',j}[D\cup\{v\}]+a^{C\cup\{v\}}_{x',j}[D].$  (10)

Now, by applying the zeta transform we have:

  $(\zeta a^C_{x,j})[D]=\sum_{Y\subseteq D}a^C_{x,j}[Y]=\sum_{Y\subseteq D}\big(a^C_{x',j}[Y\cup\{v\}]+a^{C\cup\{v\}}_{x',j}[Y]\big)=\big((\zeta a^C_{x',j})[D\cup\{v\}]-(\zeta a^C_{x',j})[D]\big)+(\zeta a^{C\cup\{v\}}_{x',j})[D].$  (11)
• Join node. Assume that $x$ is a join node and let $x'$ and $x''$ be the children of $x$; we know that $B_x=B_{x'}=B_{x''}$.
Then, if a vertex is in the dominating set ($v\in C$), it is in the dominating set in $G_{x'}$ and $G_{x''}$ as well, and vice versa ($C=C'=C''$, where $C'$ and $C''$ are the intersections of the dominating sets of $G_{x'}$ and $G_{x''}$ with the bag). Moreover, if a vertex is dominated in $G_{x'}$ or $G_{x''}$ (or both), then it is dominated in $G_x$. So $D=D'\cup D''$, where $D'$ (resp. $D''$) is the dominated part of the bag in $G_{x'}$ (resp. $G_{x''}$).
In order to compute $a^C_{x,j}[D]$ from the precomputed $a^C_{x',j'}$ and $a^C_{x'',j''}$ values, we have the recursion below (the $+|C|$ accounts for the vertices of $C$ being counted in both children):

  $a^C_{x,j}[D]=\sum_{j'+j''=j+|C|}\ \sum_{D'\cup D''=D}a^C_{x',j'}[D']\cdot a^C_{x'',j''}[D'']=\sum_{j'+j''=j+|C|}\big(a^C_{x',j'}*_u a^C_{x'',j''}\big)[D].$  (12)

By applying the zeta transform, we will have the equation below:

  $(\zeta a^C_{x,j})[D]=\sum_{j'+j''=j+|C|}(\zeta a^C_{x',j'})[D]\cdot(\zeta a^C_{x'',j''})[D].$  (13)
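The equivalence of Eqs. (12) and (13) can be checked numerically on toy coefficient tables (ours, with the $+|C|$ size shift dropped for simplicity, i.e., $C=\emptyset$): the direct side couples a convolution over sizes with a union product over bag subsets, while the transformed side is a plain sum of pointwise products:

```python
from itertools import combinations

def subsets(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def zeta_at(f, D):
    """Evaluate (zeta f)[D] = sum of f[Y] over Y subset of D."""
    return sum(f[Y] for Y in subsets(D))

U = frozenset({1, 2})
J = 3  # sizes 0..J
# Toy coefficient tables standing in for the children's values a^C_{x',j'}[D'].
f = {j: {Y: (j + 1) * (len(Y) + 1) for Y in subsets(U)} for j in range(J + 1)}
g = {j: {Y: (j + 2) * (2 * len(Y) + 1) for Y in subsets(U)} for j in range(J + 1)}

def join_direct(j, D):
    """Eq. (12) with C empty: size convolution combined with a union product."""
    total = 0
    for jp in range(j + 1):
        for D1 in subsets(D):
            for D2 in subsets(D):
                if D1 | D2 == D:
                    total += f[jp][D1] * g[j - jp][D2]
    return total

def join_zeta(j, D):
    """Eq. (13): after the zeta transform the union product is pointwise."""
    return sum(zeta_at(f[jp], D) * zeta_at(g[j - jp], D) for jp in range(j + 1))
```

Taking the zeta transform of the direct recursion reproduces `join_zeta`, which is what lets the algorithm evaluate a join node at a single set $D$ instead of over all pairs of subsets.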
• Introduce edge node. Assume that $x$ is an introduce edge node introducing the edge $\{u,v\}$.

Definition 5

An auxiliary leaf node $x''$ is a leaf node which has some vertices in its bag (at least $u$ and $v$) and is used to introduce an edge (regular leaf nodes have empty bags). It has only one introduced edge. We use auxiliary leaf nodes to introduce edges; this helps us handle introduce edge nodes.

We convert the introduce edge node $x$ to a join node such that $x$ has two children $x'$ and $x''$, where $B_x=B_{x'}=B_{x''}$ but the edge sets are different. In the auxiliary leaf node $x''$, only the edge $\{u,v\}$ is introduced and all other vertices are isolated in $G_{x''}$; the subtree $T_{x''}$ has only one node ($x''$ itself). On the other hand, all other edges introduced so far are present in $G_{x'}$, except $\{u,v\}$. We saw how to handle join nodes, so we can handle introduce edge nodes as well. Note that this branching does not affect our running time, since the branching is not balanced: one branch consists of a single auxiliary leaf node, which can be handled very easily and has no children. Now, let us see how to handle an auxiliary leaf node.

• Auxiliary leaf node. Assume $x$ is an auxiliary leaf node with the single edge $\{u,v\}$; then we have four cases, depending on $D$. In all cases, if $C\cap D\neq\emptyset$, then $a^C_{x,j}[D]=0$, so we do not consider this condition further and assume $C\cap D=\emptyset$ in all cases:

• If $D=\emptyset$, then

  $a^C_{x,j}[\emptyset]=1.$  (14)

Any given set $C$ dominates the empty set $D=\emptyset$. As a reminder, if $C\cap D\neq\emptyset$, then $a^C_{x,j}[D]$ is zero; we do not repeat this case from now on and only consider the cases where $C\cap D=\emptyset$.

• If $D=\{u,v\}$, then $a^C_{x,j}[D]=0$, since dominating $u$ forces $v$ to be in the dominating set and, on the other hand, dominating $v$ forces $u$ to be in the dominating set. This means $\{u,v\}\subseteq C$, but we know that $C$ and $D$ are disjoint sets, so this is impossible. Therefore, there is no dominating set dominating $D$.

• If $D$ is either $\{u\}$ or $\{v\}$, then

  $a^C_{x,j}[D]=\begin{cases}1&\text{if }\{u,v\}\setminus D\subseteq C,\\ 0&\text{otherwise.}\end{cases}$  (15)

Without loss of generality, assume that $D=\{v\}$; then $C$ can be any subset of $B_x$ including $u$ (the only way to dominate $v$) and excluding $v$ ($C$ and $D$ are disjoint sets).

• And finally, if $D\not\subseteq\{u,v\}$, then we cannot dominate the elements of $D\setminus\{u,v\}$, because they are isolated and they themselves cannot be in $C$ (remember we assumed $C$ and $D$ are disjoint, and if they are not, we have already set $a^C_{x,j}[D]$ to zero). So, $a^C_{x,j}[D]=0$.

Now, let us compute the zeta transforms of the above cases:

• In case $D=\emptyset$:

  $(\zeta a^C_{x,j})[D]=(\zeta a^C_{x,j})[\emptyset]=a^C_{x,j}[\emptyset]=1.$  (16)

• In case $D=\{u,v\}$:

  $(\zeta a^C_{x,j})[D]=a^C_{x,j}[D]+a^C_{x,j}[\{u\}]+a^C_{x,j}[\{v\}]+a^C_{x,j}[\emptyset]=\begin{cases}0+1+0+1=2&\text{if }C=\{v\},\\ 0+0+1+1=2&\text{if }C=\{u\},\\ 0+0+0+1=1&\text{otherwise.}\end{cases}$  (17)

• In case $D=\{u\}$ or $D=\{v\}$:

  $(\zeta a^C_{x,j})[D]=a^C_{x,j}[D]+a^C_{x,j}[\emptyset]=\begin{cases}1+1=2&\text{if }\{u,v\}\setminus D\subseteq C,\\ 0+1=1&\text{otherwise.}\end{cases}$  (18)

• In case $D\not\subseteq\{u,v\}$:

  $(\zeta a^C_{x,j})[D]=\Big(\sum_{Y\subseteq D:\,Y\setminus\{u,v\}\neq\emptyset}a^C_{x,j}[Y]\Big)+(\zeta a^C_{x,j})[D\cap\{u,v\}]=(\zeta a^C_{x,j})[D\cap\{u,v\}],$  (19)

which reduces to one of the above cases.

Looking at Eq. 13, we see that at join nodes we have an $n$-fold branching over the sizes $j'$, which is not desired. When computing $(\zeta a^C_{x,j})[D]$ for a fixed $C$ and $D$, we always want to compute the whole vector $(\zeta a^C_{x,j'})[D]$ for all $j'$ and store it. So for all fixed $C$ and $D$, we store an array of size $n+1$, where $n=|V|$. Now, even though we have $n$ branches, we do not need to recompute them each time. Instead, we can compute them once and store them in an array of size $n+1$, which is linear. We only have one copy of this array, because we handle each fixed $C$ and $D$ one after another.

3.2 Finding The Minimum Dominating Set

In this section we are going to find the size of a minimum dominating set of a given graph $G$.
As mentioned before, assume we have a modified nice tree decomposition $T$ with root $r$, where $B_r=\emptyset$. Our goal is to compute

  $\operatorname*{arg\,min}_{0\le j\le n}\big\{\,a^{\emptyset}_{r,j}[\emptyset]>0\,\big\},$

where $n=|V|$.
We start from $j=0$ and increment $j$ while $a^{\emptyset}_{r,j}[\emptyset]=0$.
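For reference (and for testing the algebraic machinery on small inputs), the quantity $\operatorname{arg\,min}_j\{a^{\emptyset}_{r,j}[\emptyset]>0\}$ is just the minimum dominating set size, which a brute-force sketch (our own, exponential time) computes by trying sizes $j=0,1,2,\dots$ in the same increasing order as Algorithm 1:

```python
from itertools import combinations

def closed_neighborhood(graph, X):
    """N[X]: the vertices of X together with all their neighbours."""
    result = set(X)
    for v in X:
        result.update(graph[v])
    return result

def minimum_dominating_set_size(graph):
    """Smallest j such that some vertex subset of size j dominates the graph."""
    vertices = list(graph)
    for j in range(len(vertices) + 1):
        for candidate in combinations(vertices, j):
            if closed_neighborhood(graph, candidate) == set(graph):
                return j  # first hit is minimal, since j increases
```

For example, the path on four vertices and the cycle on six vertices both have minimum dominating sets of size 2, while a star is dominated by its center alone.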

Theorem 3.3

Algorithm 1 outputs the size of a minimum dominating set correctly.

Proof

First of all, Algorithm 1 starts from $j=0$ and checks whether there is a dominating set of size $j$ dominating everything in the tree, increasing $j$ step by step. In the while loop, it calls Algorithm 2 to compute $(\zeta a^{\emptyset}_{r,j})[\emptyset]$, which here equals $a^{\emptyset}_{r,j}[\emptyset]$.
The while loop ends as soon as Algorithm 1 finds the first $j$ making $a^{\emptyset}_{r,j}[\emptyset]$ non-zero. As we explained before, $a^{\emptyset}_{r,j}[\emptyset]$ counts the possible dominating sets of size $j$ in the subtree rooted at $r$. Here $B_r=\emptyset$ and the subtree is our original tree. So if $a^{\emptyset}_{r,j}[\emptyset]>0$, it means we have a dominating set of size $j$ for the whole tree, and Algorithm 1 returns $j$.
Now, let us see how Algorithm 2 computes $(\zeta a^C_{x,j})[D]$.
This algorithm gets a subtree rooted at $x$, a set $D$ that we want to dominate, and a subset $C$ of the bag of $x$, which is the intersection of the sought dominating set in $G_x$ (which dominates $D$) with the bag of $x$.

As we explained before, keeping track of the number of such sets of size $j$ directly would lead us to a complicated convolution operation at join nodes. Therefore, we keep track of the zeta transforms of all computed values instead of the original ones. This is what saves space. Assuming the tree decomposition is given, we traverse it computing $(\zeta a^C_{x,j})[D]$ instead of $a^C_{x,j}[D]$. The procedure for computing the zeta transforms is exactly as shown in the previous section.

In this step, the algorithm checks the type of $x$. Here we explain the cases:

• $x$ is a leaf node: Then the only possible $D$ is the empty set, and the empty set dominates it. So there is one way to dominate it, and $(\zeta a^{\emptyset}_{x,0})[\emptyset]=1$.

• $x$ is an introduce vertex node: Suppose $x'$ is the only child of $x$ and $B_x=B_{x'}\cup\{v\}$. As we showed previously, $(\zeta a^C_{x,j})[D]=(\zeta a^{C\setminus\{v\}}_{x',j})[D]$ if $v\notin D$. Otherwise the terms containing $v$ vanish and the value reduces to $(\zeta a^C_{x,j})[D\setminus\{v\}]$.

• $x$ is a forget node: Suppose $x'$ is the only child of $x$ and $B_x=B_{x'}\setminus\{v\}$. As shown before, $(\zeta a^C_{x,j})[D]=(\zeta a^C_{x',j})[D\cup\{v\}]-(\zeta a^C_{x',j})[D]+(\zeta a^{C\cup\{v\}}_{x',j})[D]$.

• $x$ is a join node: Suppose $x'$ and $x''$ are the children of $x$ and $B_x=B_{x'}=B_{x''}$. As shown before, $(\zeta a^C_{x,j})[D]=\sum_{j'+j''=j+|C|}(\zeta a^C_{x',j'})[D]\cdot(\zeta a^C_{x'',j''})[D]$.

• $x$ is an auxiliary leaf node: There are four cases (in all cases, if $C\cap D\neq\emptyset$, then $a^C_{x,j}[D]=0$):

• Case 1: $D=\emptyset$:

  $(\zeta a^C_{x,j})[D]=1.$  (20)

• Case 2: $D=\{u,v\}$:

  $(\zeta a^C_{x,j})[D]=\begin{cases}2&\text{if }C=\{v\}\text{ or }C=\{u\},\\ 1&\text{otherwise.}\end{cases}$  (21)

• Case 3: $D=\{u\}$ or $\{v\}$:

  $(\zeta a^C_{x,j})[D]=\begin{cases}2&\text{if }\{u,v\}\setminus D\subseteq C,\\ 1&\text{otherwise.}\end{cases}$  (22)

• Case 4: $D\not\subseteq\{u,v\}$:

  $(\zeta a^C_{x,j})[D]=(\zeta a^C_{x,j})[D\cap\{u,v\}].$  (23)

Starting from the leaf nodes, we compute only one strand at a time, continuing the path up to the root; we showed above how parent values are computed from their child(ren).

4 Analyzing Time and Space Complexity of the Algorithm

In the previous sections, we described our algorithm. Now it is time to analyze the time and space complexity of the algorithm given above. As explained beforehand, we have an exact algorithm using only polynomial space.

Theorem 4.1

Given a tree decomposition of a graph $G$, Algorithm 1 outputs the size of a minimum dominating set in $O^*(3^d)$ time using $O(nk)$ space, where $k$ is the width and $d$ is the depth of the tree decomposition.

Proof

We know that the size of any dominating set of a graph $G$ is at most $n$. So the while loop in Algorithm 1 runs at most $j^*+1\le n+1$ times, where $j^*$ is the size of a minimum dominating set. In each iteration, we call the function $(\zeta a^C_{x,j})[D]$ recursively. This function is computed by Algorithm 2. So we need to look at the running time of Algorithm 2:
At each node of the tree decomposition, we have at most three branches, and this happens at forget nodes (Eq. 11). On the other hand, there are at most $d$ forget nodes on any root-to-leaf path of the tree. Thus, the total running time of Algorithm 2 is $O^*(3^d)$. So finally, the total running time of Algorithm 1 is $O^*(3^d)$.
Space complexity: We do not store the intermediate values and only compute one strand at a time; thus the space complexity is $O(nk)$.

5 Extension

So far, we saw how to find a minimum dominating set when a tree decomposition is given. Now we describe how to find the size of a minimum dominating set using polynomial space when only the graph itself is given.

Remark 1

() In 1992, B. Reed gave an approximate vertex separator algorithm that yields an algorithm determining whether a given graph has a tree decomposition of width at most $k$, and finding a tree decomposition of width $O(k)$ if it exists. The running time of this algorithm is $O(n\log n)$ for any fixed $k$, using polynomial space.

It is not mentioned there that the space complexity of the algorithm is polynomial, but this algorithm saves only the results for each strand and thus has polynomial space complexity.
So, even if the tree decomposition is not given, we can still use Reed's algorithm to find a tree decomposition of width $O(k)$. We can start from $k=1$ and double $k$ (an exponential search) as long as the algorithm says there is no tree decomposition of width $k$. At some point the answer is yes, and the algorithm outputs a tree decomposition whose width is within a constant factor of the treewidth. Now we can use this output to compute the size of a minimum dominating set of the original graph by Algorithm 1.
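The search over widths can be sketched as follows; `width_test` is a hypothetical stand-in for one run of Reed's algorithm (our own interface, not from his paper), assumed to answer whether a tree decomposition of the queried width exists:

```python
def smallest_feasible_width(width_test):
    """Double k until width_test(k) succeeds, then binary-search the
    smallest feasible k in (k/2, k]. Uses O(log k*) oracle calls; the
    oracle itself (Reed's algorithm, in our setting) uses polynomial space."""
    k = 1
    while not width_test(k):  # monotone: feasible for every k >= treewidth
        k *= 2
    lo, hi = k // 2 + 1, k
    while lo < hi:
        mid = (lo + hi) // 2
        if width_test(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Since the feasibility of a width is monotone in $k$, the doubling phase overshoots the answer by at most a factor of two, and the binary search then pins it down exactly with logarithmically many oracle calls.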
We should also mention that the same framework can be applied to the Maximum Independent Set (MIS) problem to convert an exponential-space dynamic programming algorithm into a parameterized algorithm, taking the tree-depth as its parameter and using only polynomial space. On the other hand, if we look closer at the Maximum Independent Set problem and write down the recursions, we see that applying the zeta transform saves space but sacrifices a large amount of time. The good news about MIS is that, if we solve it by recursion on a tree decomposition, it already has polynomial space complexity, so we do not need to apply the zeta transform.
A good open problem is whether this framework works for the Hamiltonian Cycle problem. That problem is much more complicated because its nature is different: in the previous problems, only pairs of vertices and edges (individually) mattered, whereas there one has to keep track of all possible disjoint paths.
In general, any graph problem (and even a problem not on graphs, if it can be converted to one) whose join-node recursion involves a convolution is a candidate for our framework.

6 Conclusion

We applied the dynamic algebraization approach to the Minimum Dominating Set problem to give a space-efficient dynamic programming algorithm using polynomial space. Our algorithm runs in $O^*(3^d)$ time using $O(nk)$ space, where $d$ and $k$ are the depth and the width of the tree decomposition, respectively. Even if the tree decomposition is not given, as mentioned in Remark 1, we can construct a tree decomposition with sufficiently small treewidth using polynomial space and then apply our algorithm. Again, the space complexity is polynomial, and for some function $f$, the running time $f(k)\cdot n^{O(1)}$ is parameterized by the treewidth $k$ or the tree-depth $d$.
The essential part is to do the computation on the zeta-transformed mirror image of the tree decomposition to save space. As in the work of Fürer and Yu, it is important to introduce the edges in auxiliary leaves to avoid an exponential blow-up.

References

•  Eyal Amir. Efficient approximation for triangulation of minimum treewidth. In

Proceedings of the Seventeenth conference on Uncertainty in artificial intelligence

, pages 7–15. Morgan Kaufmann Publishers Inc., 2001.
•  Eyal Amir. Approximation algorithms for treewidth. Algorithmica, 56(4):448–479, 2010.
•  Stefan Arnborg, D. G. Corneil, and A. Proskurowski. Complexity of finding embeddings in a k-tree. SIAM J. Algebraic and Discrete Methods, 8:277–284, 1987.
•  Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto. Fourier meets möbius: fast subset convolution. CoRR, abs/cs/0611101, 2006.
•  Hans L Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM Journal on computing, 25(6):1305–1317, 1996.
•  Hans L Bodlaender, John R Gilbert, Hjálmtyr Hafsteinsson, and Ton Kloks. Approximating treewidth, pathwidth, frontsize, and shortest elimination tree. Journal of Algorithms, 18(2):238–255, 1995.
•  Vincent Bouchitté, Dieter Kratsch, Haiko Müller, and Ioan Todinca. On treewidth approximations. Discrete Applied Mathematics, 136(2):183–196, 2004.
•  Marek Cygan, Fedor V Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized algorithms, volume 3. Springer, 2015.
•  M. Davis and H. Putnam. A computing procedure for quantification theory. J. ACM, 7:201–215, April, 1960.
•  Uriel Feige, MohammadTaghi Hajiaghayi, and James R Lee. Improved approximation algorithms for minimum weight vertex separators. SIAM Journal on Computing, 38(2):629–657, 2008.
•  Fomin, Grandoni, and Kratsch. Measure and conquer: Domination – A case study. In ICALP: Annual International Colloquium on Automata, Languages and Programming, 2005.
•  Martin Fürer and Huiwen Yu. Space Saving by Dynamic Algebraization Based on Tree-Depth. Theory of Computing Systems, 61(2):283–304, 2017.
•  Fabrizio Grandoni. A note on the complexity of minimum dominating set. J. Discrete Algorithms, 4(2):209–214, 2006.
•  T. Jian. An $O(2^{0.304n})$ algorithm for solving maximum independent set problem. IEEE Transactions on Computers, C-35(9):847–851, 1986.
•  D. Lokshtanov and J. Nederlof. Saving space by algebraization. In L. J. Schulman, editor, Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC, pages 321–330. ACM, 2010.
•  Bruce A Reed. Finding approximate separators and computing tree width quickly. In Proceedings of the twenty-fourth annual ACM symposium on Theory of computing, pages 221–228. ACM, 1992.
•  Gian-Carlo Rota. On the foundations of combinatorial theory I. Theory of Möbius functions. Probability Theory and Related Fields, 2(4):340–368, 1964.
•  Richard P. Stanley. Enumerative Combinatorics, vol. 1. Vol. 49 of Cambridge Studies in Advanced Mathematics, 1997.
•  Robert Endre Tarjan and Anthony E. Trojanowski. Finding a maximum independent set. SIAM Journal on Computing, 6:537–546, 1977.
•  Johan M. M. van Rooij and Hans L. Bodlaender. Design by measure and conquer: A faster exact algorithm for dominating set. CoRR, abs/0802.2827, 2008.