    A Space-efficient Parameterized Algorithm for the Hamiltonian Cycle Problem by Dynamic Algebraization

An NP-hard graph problem may be intractable for general graphs, but it can be efficiently solvable via dynamic programming for graphs with bounded width (or depth, or some other structural parameter). Dynamic programming is a well-known approach used to find exact solutions for NP-hard graph problems based on tree decompositions. It has been shown that for many connectivity problems there exist algorithms running in time linear in the number of vertices and single exponential in the width (depth, or other parameter) of a given tree decomposition. Employing dynamic programming on a tree decomposition usually requires exponential space. In 2010, Lokshtanov and Nederlof introduced an elegant framework to avoid exponential space by algebraization. Later, Fürer and Yu modified the framework so that it even works when the underlying set is dynamic, thus applying it to tree decompositions. In this work, we design space-efficient algorithms to solve the Hamiltonian Cycle and the Traveling Salesman problems, using polynomial space while the time complexity is only slightly increased. This might be inevitable, since we are reducing the space usage from an exponential amount (in the dynamic programming solution) to polynomial. We give an algorithm to solve Hamiltonian Cycle in time O((4w)^d nM(n log n)) using O(dn log n) space, where M(r) is the time complexity of multiplying two integers, each of which is represented by at most r bits. Then, we solve the more general Traveling Salesman problem in time O((4w)^d poly(n)) using space O(Wdn log n), where w and d are the width and the depth of the given tree decomposition, and W is the sum of the weights. Furthermore, this algorithm also counts the number of Hamiltonian cycles.

11/28/2017


1 Introduction

Dynamic programming (DP) is widely used to avoid recomputing subproblems. It may decrease the time complexity, but it uses auxiliary space to store intermediate values, and this auxiliary space may grow to be exponential in the size of the input. This means both the running time and the space complexity are exponential for some algorithms solving NP-complete problems. Space complexity is a crucial aspect of algorithm design, because we typically run out of space before running out of time. To address this issue, Lokshtanov and Nederlof introduced a framework which works on a static underlying set. The problems they considered were Subset Sum, Knapsack, Traveling Salesman (in time O*(2^n) using polynomial space), Weighted Steiner Tree, and Weighted Set Cover. They use DFTs, zeta transforms, and Möbius transforms [19, 20], taking advantage of the fact that working on zeta (or discrete Fourier) transformed values is significantly easier, since the subset convolution operation converts to a pointwise multiplication. In all their settings, the input is a set or a graph, which means the underlying set for the subproblems is static. Fürer and Yu changed this approach by modifying a dynamic programming algorithm applied to a tree decomposition (instead of the graph itself). The resulting algorithm uses only polynomial space, and the running time does not increase drastically. By working with tree decompositions, they obtain parameterized algorithms which are exponential in the tree-depth and linear in the number of vertices. If the tree decomposition has bounded width, the algorithm is both fast and space-efficient. In this setting, the underlying set is no longer static, because they are working with different bags of vertices. They show that algebraization helps to save space even if the underlying set is dynamic. They consider perfect matchings in their paper. In recent years, there have been several results in this field where algebraic tools are used to save space when DP algorithms are applied to NP-hard problems. In 2018, Pilipczuk and Wrochna applied a similar approach to solve the Minimum Dominating Set problem. Although they did not directly use these algebraic tools, their approach is similar, achieving time single exponential in the tree-depth with polynomial space.
We have to mention that there is no general method to automatically transform a dynamic programming solution into a polynomial space solution at the cost of replacing the treewidth parameter by the tree-depth in the running time.
One of the interesting NP-hard problems in graph theory is the Hamiltonian Cycle problem, which seems harder than many other graph problems. We are given a graph and we want to find a cycle visiting each vertex exactly once. The naive deterministic algorithm for the Hamiltonian Cycle problem and for the more general Traveling Salesman problem runs in time O*(n!) using polynomial space (the O* notation hides the polynomial factors of the expression). Later, deterministic DP and inclusion-exclusion algorithms for these two problems running in time O*(2^n) using exponential space were given in [13, 15, 2]. The existence of a deterministic algorithm for Hamiltonian Cycle running in time O*((2−ε)^n), for a fixed ε > 0, is still an open problem. There are some randomized algorithms which run in time O*(c^n) for a fixed c < 2, like the one given by Björklund, Kaski, and Koutis. Although there is no improvement in the deterministic running time, there are some results on parameterized algorithms. In 2011, Cygan et al. designed a parameterized algorithm for the Hamiltonian Cycle problem which runs in time O*(4^tw), where tw is the treewidth; they also presented a randomized algorithm for planar graphs running in subexponential time. In 2015, Bodlaender et al. introduced two deterministic single exponential time algorithms for Hamiltonian Cycle: one based on pathwidth, running in time Õ((2+√2)^pw n) (the Õ notation hides the logarithmic factors of the expression), and the other based on treewidth, where pw is the pathwidth. The authors also solve the Traveling Salesman problem in single exponential time in the pathwidth if a path decomposition is given, and in single exponential time in the treewidth otherwise, where the bases of the exponents depend on the matrix multiplication exponent ω. One of the best known upper bounds for ω is 2.373. They do not consider the space complexity of their algorithm, and as far as we checked, it uses exponential space.
Recently, Curticapean et al. showed that, assuming SETH, there is no positive ε such that the problem of counting the number of Hamiltonian cycles can be solved in time O*((6−ε)^pw), where pw is the width of the given path decomposition of the graph. They show this tight lower bound via matrix rank.

2 Preliminaries

In this section, we review the notation that we use later.

2.1 Tree Decomposition

A tree decomposition of a graph G = (V, E) is a tree T such that each node x in T is associated with a set Bx (called the bag of x) of vertices in V, and T has the following properties:

• The union of all bags is equal to V. In other words, for each v ∈ V, there exists at least one node x with Bx containing v.

• For every edge {u, v} ∈ E, there exists a node x such that u, v ∈ Bx.

• For any nodes x, y, and any node z belonging to the path connecting x and y in T, Bx ∩ By ⊆ Bz.

The width of a tree decomposition is the size of its largest bag minus one. The treewidth of a graph G is the minimum width over all tree decompositions of G, denoted tw(G). In the following, we use the letter w for the treewidth. Arnborg et al. showed that constructing a tree decomposition of minimum width is an NP-hard problem, but there are approximation algorithms for finding near-optimal tree decompositions [7, 9, 5]. In 1996, Bodlaender introduced a linear time algorithm to find a minimum-width tree decomposition if the treewidth is bounded by a constant.
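As a sanity check, the three properties above can be verified directly for a candidate decomposition. The following sketch is illustrative (the adjacency-set representation of the graph and the tree is our own choice, not notation from this paper):

```python
def is_tree_decomposition(graph, tree, bags):
    """Check the three tree-decomposition properties.

    graph: dict mapping each vertex to its set of neighbors.
    tree:  dict mapping each decomposition node to its set of adjacent
           nodes (assumed to form a tree).
    bags:  dict mapping each decomposition node to its bag (set of vertices).
    """
    vertices = set(graph)
    # Property 1: every vertex of the graph appears in some bag.
    if set().union(*bags.values()) != vertices:
        return False
    # Property 2: every edge is contained in at least one bag.
    for u in graph:
        for v in graph[u]:
            if not any({u, v} <= bag for bag in bags.values()):
                return False
    # Property 3 (coherence): for each vertex, the nodes whose bags
    # contain it induce a connected subtree of the tree.
    for v in vertices:
        nodes = {x for x, bag in bags.items() if v in bag}
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            stack.extend(y for y in tree[x] if y in nodes)
        if seen != nodes:
            return False
    return True
```

For example, the path a–b–c with bags {a, b} and {b, c} on a two-node tree passes all three checks.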
To simplify many application algorithms, the notion of a nice tree decomposition has been introduced, which has the following properties. The tree is rooted, and every node in a nice tree decomposition has at most two children. Any node x in a nice tree decomposition is of one of the following types (let c be the only child of x, or let c1 and c2 be the two children of x):

• Leaf node, a leaf of T without any children.

• Forget node (forgetting vertex v), where v ∈ Bc and Bx = Bc ∖ {v},

• Introduce vertex node (introducing vertex v), where v ∉ Bc and Bx = Bc ∪ {v},

• Join node, where x has two children with the same bag as x, i.e., Bx = Bc1 = Bc2.

We should mention that in some papers, for the sake of simplicity, an introduce edge node is defined, which is not part of the standard definition of a nice tree decomposition. Here, introduce edge nodes are not needed, and we can handle the problem more easily without such nodes. In fact, we add edges to the bags as soon as both endpoints have been introduced.
It has been shown that any given tree decomposition can be converted into a nice tree decomposition of the same width in polynomial time.

Definition 1

The depth of a tree decomposition is the maximum number of distinct vertices in the union of all bags on a path from the root to a leaf. We use d to denote the depth of a given tree decomposition.

We have to mention that this is different from the depth of the tree itself; it is the depth of a tree decomposition as defined above.

Definition 2

The tree-depth of a graph G is the minimum over the depths of all tree decompositions of G. We use td(G) to denote the tree-depth of a graph.

After defining the treewidth and the tree-depth, we now discuss the relationship between these parameters for a given graph G:

Lemma 1

(see [17, Corollary 2.5] and )

For any connected graph G, tw(G) ≤ td(G) − 1, and td(G) ≤ (tw(G) + 1) · log2 n.

Example: The path with n vertices has treewidth 1 and tree-depth ⌈log2(n+1)⌉.
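The logarithmic gap between the two parameters is visible already on the path: removing the middle vertex splits the path into two halves, and a decomposition of depth 1 + td(larger half) handles both halves below the removed vertex. A small illustrative sketch of this recursion:

```python
def path_treedepth(n):
    """Tree-depth of the path on n vertices: delete the middle vertex
    and recurse on the larger of the two remaining halves, which has
    ceil((n - 1) / 2) = n // 2 vertices."""
    if n == 0:
        return 0
    return 1 + path_treedepth(n // 2)
```

The recursion evaluates to exactly ⌈log2(n+1)⌉ (in Python, `n.bit_length()`), matching the example, while the treewidth of a path stays 1.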
One should note that finding the treewidth and the tree-depth of a given graph are both NP-hard problems. A question which arises here is whether there exists a tree decomposition whose width equals the treewidth and whose depth equals the tree-depth of the original graph. In other words, is it possible to construct a tree decomposition of a given graph which minimizes both the depth and the width, and if so, how can such a tree decomposition be obtained? Although we are not focusing on this question here, it seems an interesting problem to think about.

2.2 Algebraic tools to save space

When we use dynamic programming to solve a graph problem on a tree decomposition, it usually uses exponential space. Lokshtanov and Nederlof converted some algorithms using subset convolution or union product into a transformed version in order to reduce the space complexity. Later, Fürer and Yu also used this approach in a dynamic setting, based on tree decompositions, for the Perfect Matching problem. In this work, we introduce algorithms to solve the Hamiltonian Cycle and the Traveling Salesman problems. First, let us recall some definitions.
Let R[2^U] be the set of all functions from the power set of the universe U to the ring R. The operator ⊕ is pointwise addition and the operator ⊙ is pointwise multiplication.

Definition 3

A relaxation of a function f ∈ R[2^U] is a sequence of functions (f0, f1, …, f|U|), where each fi ∈ R[2^U], 0 ≤ i ≤ |U|, is defined as:

 fi[X] = 0 if i < |X|;  f[X] if i = |X|;  an arbitrary value if i > |X|. (1)
Definition 4

The zeta transform of a function f ∈ R[2^U] is defined as:

 ζf[X]=∑Y⊆Xf[Y]. (2)
Definition 5

The Möbius transform of a function f ∈ R[2^U] is defined as:

 μf[X]=∑Y⊆X(−1)|X∖Y|f[Y]. (3)
Lemma 2

The Möbius transform is the inverse of the zeta transform and vice versa, i.e.

 μ(ζf[X])=ζ(μf[X])=f[X]. (4)

See [19, 20] for the proof.
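On a small universe, the two transforms and the inversion of Lemma 2 can be checked by brute force. The sketch below is illustrative (the dict-of-frozensets representation of a function on 2^U is our own choice):

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def zeta(f, universe):
    # (zeta f)[X] = sum over Y ⊆ X of f[Y]   (Equation 2)
    return {X: sum(f[Y] for Y in subsets(X)) for X in subsets(universe)}

def moebius(f, universe):
    # (mu f)[X] = sum over Y ⊆ X of (-1)^{|X \ Y|} f[Y]   (Equation 3)
    return {X: sum((-1) ** (len(X) - len(Y)) * f[Y] for Y in subsets(X))
            for X in subsets(universe)}
```

Applying `moebius` after `zeta` (or vice versa) returns the original function, which is exactly Equation 4.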
Instead of storing exponentially many intermediate results, we store the zeta transformed values. We can assume that instead of working on the original nice tree decomposition, we are working on a mirrored nice tree decomposition to which the zeta transform has been applied. We work on the zeta transformed values, and finally, to recover the original values, we use Equation 4. The zeta transform converts the hard union product operation (∗u) into the easier pointwise multiplication operation (⊙), which results in saving space.

Definition 6

Given f, g ∈ R[2^U], the subset convolution of f and g, denoted f ∗R g, is defined as:

 (f∗Rg)[X]=∑X1⊆Xf(X1)g(X∖X1). (5)
Definition 7

Given f, g ∈ R[2^U], the union product of f and g, denoted f ∗u g, is defined as:

 (f∗ug)[X]=∑X1∪X2=Xf(X1)g(X2). (6)
Theorem 2.1

Applying the zeta transform to a union product results in the pointwise multiplication of the zeta transforms of the operands, i.e., given f, g ∈ R[2^U],

 ζ(f∗ug)[X]=(ζf)⊙(ζg)[X]. (7)

All of the previous works which used either the DFT or the zeta transform on a given tree decomposition have one central property in common: the recursion in the join nodes can be presented by a formula using a union product operation. The union product and the subset convolution are complicated operations in comparison with pointwise multiplication or pointwise addition. That is why taking the zeta transform of a formula having union products makes the computation easier. As noted earlier in Theorem 2.1, taking the zeta transform of such a term results in a term having pointwise multiplications, which are much easier to handle than union products. After doing a computation over the zeta transformed values, we can apply the Möbius transform to the outcome to get the final result (based on Lemma 2). In other words, the direct computation has a mirror image using the zeta transforms of the intermediate values instead of the original ones. While the direct computation keeps track of exponentially many intermediate values, the computation over the zeta transformed values partitions into exponentially many branches, which can be executed one after another. Later, we show that this approach improves the space complexity, using only polynomial space instead of exponential space.
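Theorem 2.1 can likewise be verified by brute force on a small universe. The following self-contained sketch (the dict-of-frozensets representation is again our own illustrative choice) checks that ζ(f ∗u g) equals (ζf) ⊙ (ζg):

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def zeta(f, universe):
    # (zeta f)[X] = sum over Y ⊆ X of f[Y]
    return {X: sum(f[Y] for Y in subsets(X)) for X in subsets(universe)}

def union_product(f, g, universe):
    # (f *_u g)[X] = sum over X1 ∪ X2 = X of f(X1) g(X2)   (Equation 6)
    out = {X: 0 for X in subsets(universe)}
    for X1 in subsets(universe):
        for X2 in subsets(universe):
            out[X1 | X2] += f[X1] * g[X2]
    return out
```

The identity holds because summing f(X1) g(X2) over all X1 ∪ X2 ⊆ X is the same as summing f and g independently over all subsets of X.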

3 Counting the number of Hamiltonian cycles

We are given a connected graph G and a nice tree decomposition T of G of width w. If H is a Hamiltonian cycle of G, then the induced subgraph of H on Vx (called H[Vx], where Vx is the union of all bags in the subtree rooted at x) is a set of disjoint paths with endpoints in Bx (see Figure 1).
Figure 1: H[Vx] is a set of paths with endpoints in Bx
Definition 8

A pseudo-edge is a pair of endpoints of a path of length at least two in H[Vx]. We use the notation ⟨u, w⟩ for the pseudo-edges. E.g., in Figure 1, ⟨u, w⟩ is a pseudo-edge (it does not imply that there is an edge between u and w; it just says that there is a path of length at least two in H[Vx] where u and w are its endpoints). The notation is symmetrical since our paths are undirected, i.e., ⟨u, w⟩ = ⟨w, u⟩. Each path is associated with a pseudo-edge.

Lemma 3

The degree of every vertex in H[Vx] is at most 2.

Proof

H[Vx] is a subgraph of the cycle H.

Let Lx be the set of vertices contained in the bags associated with nodes in the subtree rooted at x which are not in Bx. Remember that the vertices in pseudo-edges are vertices in Bx. Let Cx be the set of two-element subsets of Bx, and let X ⊆ Cx be a set of pseudo-edges. Let SX be the union of the vertices involved in X. Then, SX is a subset of Bx. Note that Lx consists of the vertices of Vx which are introduced in the subtree rooted at x and are not present in the bag of the parent of x. Define fx[X] to be the number of sets of disjoint paths (disjoint except in their endpoints, where they can share a vertex) whose pseudo-edge set is X (remember SX is the union of the vertices involved in X), visiting the vertices of Lx exactly once (such a set of paths can also visit vertices which are not in Lx, but we require it to visit at least the vertices in Lx, since those are not present in the proper ancestors of x). Computing fr[∅] at the root r gives us the number of possible Hamiltonian cycles in G. Now, we show how to compute the values of fx[X] for all types of nodes.

3.1 Computing fx[X]

We compute fx[X] efficiently in two rounds. In the first round, we introduce the recursive formulas for each kind of node (where the space usage is still exponential), and in the second round, we explain how to compute the zeta transformed values (in the next subsection, where the space usage drops to polynomial):

• Leaf node: Assume x is a leaf node.

 fx[X]={1if X=∅,0otherwise. (8)

Since x is a leaf node, there is no path through Bx, so fx[X] is zero for all non-empty sets X of pseudo-edges, and for X = ∅, there is only one set of paths, which is empty.

• Forget node: Assume x is a forget node (forgetting vertex v) with a child c, where Bx = Bc ∖ {v}. Any pseudo-edge ⟨u, w⟩ ∈ X can define a path starting from u, going to v possibly through forgotten vertices, and then going to w, again possibly through forgotten vertices. Here, either or both pieces of the path (from u to v, and/or from v to w) can consist of single edges.
Figure 2: Forget node x forgetting vertex v with the child c.
 fx[X]=∑⟨u,w⟩∈X∑Q⊆{⟨u,v⟩,⟨v,w⟩}dQfc[X∖{⟨u,w⟩}∪Q], (9)

where dQ = 1 if every pair in {{u,v}, {v,w}} ∖ Q is an edge of G, and dQ = 0 otherwise (the pairs in Q are accounted for as pseudo-edges of the child).

• Introduce vertex node: Assume x is an introduce vertex node (introducing vertex v) with a child c, where Bx = Bc ∪ {v}. The vertex v cannot be an endpoint of a pseudo-edge, because paths have length at least two.

 fx[X]={fc[X]if v∉SX,0otherwise. (10)
• Join node: Assume x is a join node with two children c1 and c2, where Bx = Bc1 = Bc2. For any given X, X can be partitioned into two sets X′ and X ∖ X′, and each of them can be the set of pseudo-edges for one of the children.
Figure 3: Join node x with two children c1 and c2.

The number of sets of paths associated with X through x is equal to the sum over all such partitions of the products of the numbers of corresponding sets of paths associated with X′ and X ∖ X′ through c1 and c2 respectively.

 fx[X]=∑X′⊆Xfc1[X′]fc2[X∖X′]=(fc1∗Rfc2)[X]. (11)

Here we get a subset convolution, and we have to convert it into a union product to be able to use the zeta transform. We do this conversion in the next subsection.
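The conversion rests on a standard property of relaxations: since fj vanishes on sets larger than j, only the partitions with |X1| = j and |X2| = |X| − j survive, so (f ∗R g)[X] = ∑j (fj ∗u g|X|−j)[X]. A brute-force sketch of this identity (choosing the "arbitrary" relaxation values of Definition 3 to be f[X] itself; the representation is our own illustrative choice):

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def subset_convolution(f, g, X):
    # (f *_R g)[X] = sum over X1 ⊆ X of f(X1) g(X \ X1)   (Equation 5)
    return sum(f[X1] * g[X - X1] for X1 in subsets(X))

def relax(f, i):
    # One level of a relaxation: fi[X] = 0 if |X| > i, else f[X]
    # (taking the 'arbitrary' values for |X| < i to be f[X] itself).
    return {X: (0 if len(X) > i else f[X]) for X in f}

def via_union_products(f, g, X):
    # Assemble the subset convolution at level i = |X| from
    # union products of complementary relaxation levels.
    i = len(X)
    total = 0
    for j in range(i + 1):
        fj, gij = relax(f, j), relax(g, i - j)
        total += sum(fj[X1] * gij[X2]
                     for X1 in subsets(X) for X2 in subsets(X)
                     if X1 | X2 == X)
    return total
```

Only disjoint pairs X1, X2 with X1 ∪ X2 = X and |X1| + |X2| = |X| contribute, which is exactly the subset convolution.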

3.2 Computing ζfx[X]

In this subsection (the second round of the computation), we first compute the relaxations of fx for all kinds of nodes and then apply the zeta transform to the relaxations. In the following, let the sequence (fix) be a relaxation of fx.

• Leaf node: Assume x is a leaf node. Since fx[∅] = 1 and fx[X] = 0 for any X ≠ ∅, we can choose fix = fx for all i. Then

 (ζfix)[X]=1, for all i and X. (12)
• Forget node: Assume x is a forget node (forgetting vertex v) with a child c, where Bx = Bc ∖ {v}. Thus,

 fix[X]=∑⟨u,w⟩∈X∑Q⊆{⟨u,v⟩,⟨v,w⟩}dQfi′c[X∖{⟨u,w⟩}∪Q], (13)

where i′(Q) = i − 1 + |Q|, i.e., i′(Q) is the number of pseudo-edges in X ∖ {⟨u,w⟩} ∪ Q when i = |X|. Now we apply the zeta transform:

 (ζfix)[X]=∑Y⊆Xfix[Y] (14) = ∑Y⊆X∑⟨u,w⟩∈Y∑Q⊆{⟨u,v⟩,⟨v,w⟩}dQfi′(Q)c[Y∖{⟨u,w⟩}∪Q] = ∑⟨u,w⟩∈X∑Q⊆{⟨u,v⟩,⟨v,w⟩}∑{⟨u,w⟩}⊆Y⊆XdQfi′(Q)c[Y∖{⟨u,w⟩}∪Q] = ∑⟨u,w⟩∈X∑Q⊆{⟨u,v⟩,⟨v,w⟩}dQ∑Y⊆(X∖{⟨u,w⟩})fi′(Q)c[Y∪Q]

We now express the inner sums by ζ-transforms, depending on the size of Q. We use the abbreviations X′ = X ∖ {⟨u,w⟩} and EQ = ∑Y⊆X′ fi′(Q)c[Y∪Q].

If Q = ∅, then

 EQ=∑Y⊆X′fi−1c[Y]=ζfi−1c[X′]. (15)

If Q = {⟨u,v⟩} or Q = {⟨v,w⟩}, then

 EQ=∑Y⊆X′fic[Y∪Q]=ζfic[X′∪Q]−ζfic[X′]. (16)

If Q = {⟨u,v⟩, ⟨v,w⟩}, then

 EQ = ∑Y⊆X′fi+1c[Y∪Q] (17) = ζfi+1c[X′∪Q]−ζfi+1c[X′∪{⟨u,v⟩}] −ζfi+1c[X′∪{⟨v,w⟩}]+ζfi+1c[X′].

With these sums computed, we can now express (ζfix)[X] more concisely.

 (ζfix)[X] = ∑⟨u,w⟩∈X((d∅−d{⟨u,v⟩}−d{⟨v,w⟩}+d{⟨u,v⟩,⟨v,w⟩})ζfi−1c[X′] (18) +d{⟨u,v⟩}ζfic[X′∪{⟨u,v⟩}]+d{⟨v,w⟩}ζfic[X′∪{⟨v,w⟩}] +d{⟨u,v⟩,⟨v,w⟩}ζfi+1c[X′∪{⟨u,v⟩,⟨v,w⟩}]).

Note that d∅ = 1 if both {u,v} and {v,w} are in E(G), d{⟨u,v⟩} = 1 if {v,w} ∈ E(G), d{⟨v,w⟩} = 1 if {u,v} ∈ E(G), and d{⟨u,v⟩,⟨v,w⟩} is always 1, while otherwise dQ = 0. This implies that for every ⟨u,w⟩ ∈ X in the previous equation, at least one of the 4 coefficients is 0. Therefore, in each forget node, we have at most a 4w-fold branching.

• Introduce vertex node: Assume x is an introduce vertex node (introducing vertex v) with a child c, where Bx = Bc ∪ {v}. Let Xv be the set of pseudo-edges having v as one of their endpoints. Therefore,

 fix[X]={fic[X]if v∉SX,0otherwise. (19)
 (ζfix)[X]=∑Y⊆Xfix[Y]=∑Y⊆(X∖Xv)fic[Y]=(ζfic)[X∖Xv]. (20)
• Join node: Assume x is a join node with two children c1 and c2, where Bx = Bc1 = Bc2. To compute fx on a join node, we can use Equation 11. In order to convert the subset convolution operation into a pointwise multiplication, we first need to convert the subset convolution into a union product; then we are able to use Theorem 2.1. To convert the subset convolution into a union product, we introduce relaxations: let (fic1) be a relaxation of fc1 and (fic2) a relaxation of fc2.

 (ζfix)[X]=i∑j=0(ζfjc1)[X]⋅(ζfi−jc2)[X], for 0≤i≤w, (21)

where w is the width of T.

To summarize, we present the following algorithm for the Hamiltonian Cycle problem, where a tree decomposition is given.

Theorem 3.1

Given a graph G and a tree decomposition T of G, we can compute the total number of Hamiltonian cycles of G in time O((4w)^d nM(n log n)) and in space O(dn log n) by using Algorithm 1, where w and d are the width and the depth of T respectively, and M(r) is the time complexity of multiplying two numbers which can be encoded by at most r bits.

The proof can be found in the appendix.

4 The Traveling Salesman problem

In the previous section, we showed how to count the total number of Hamiltonian cycles of a given graph. In this section, we discuss a harder problem. The Hamiltonian Cycle problem is reducible to the Traveling Salesman problem by setting the cost of every edge to 1. We could have explained only how to solve the Traveling Salesman problem, but we chose to first explain the easier problem to help in understanding the process; we now also have most of the notation in place. First, we recap the formal definition of the Traveling Salesman problem.

Definition 9

Traveling Salesman. Given an undirected graph G = (V, E) with weighted edges, where for all e ∈ E, c(e) is the weight (= cost) of e (a nonnegative integer). In the Traveling Salesman problem we are asked to find a cycle (if there is any) that visits all of the vertices exactly once (i.e., a Hamiltonian cycle) and has minimum cost.

As mentioned, the output should be a minimum cost Hamiltonian cycle. We use the same notation as before. Thus, we are ready to explain our algorithmic framework:
The difference between counting the number of Hamiltonian cycles and finding the minimum cost of a Hamiltonian cycle (the answer to the Traveling Salesman problem) is that we should work with costs instead of only the numbers of solutions. In order to solve this problem, we work with the ring of polynomials Z[y], where y is a variable. Our algorithm computes the polynomial Px(y)[X] = ∑k ak y^k, 0 ≤ k ≤ W, where ak is the number of solutions associated with X of cost k, and W is the sum of the weights of all edges. The edge weights have to be nonnegative integers, as mentioned above.
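Concretely, the arithmetic in Z[y] amounts to manipulating coefficient vectors of length W + 1, where the coefficient of y^k counts partial solutions of cost k; the multiplications in the recursions become polynomial multiplications truncated past y^W. A minimal sketch of this bookkeeping (the names and the toy value of W are illustrative; the problem-specific recursion is omitted):

```python
W = 10  # sum of all edge weights; polynomials live in Z[y] truncated past y^W

def poly_add(p, q):
    # Combine solution counts from disjoint cases.
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    # Combine two independent partial solutions: costs add, counts multiply.
    out = [0] * (W + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if a and b and i + j <= W:
                out[i + j] += a * b
    return out

def edge_factor(cost):
    # Using one edge of the given cost contributes the monomial y^cost.
    out = [0] * (W + 1)
    out[cost] = 1
    return out

def min_cost(p):
    # The answer to TSP: the smallest exponent with a nonzero coefficient.
    return next((k for k, a in enumerate(p) if a), None)
```

For example, combining two edges of costs 2 and 3 yields one partial solution of cost 5, i.e., the polynomial y^5.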

4.1 Computing the Hamiltonian Cycles of All Costs

To find the answer to the Traveling Salesman problem, we have to find the first nonzero coefficient of Pr(y)[∅], where r is the root of the given nice tree decomposition. As we did for the Hamiltonian Cycle problem, we show how to compute this polynomial recursively for all kinds of nodes.

• Leaf node: Assume x is a leaf node. Since we require leaf nodes to have empty bags, we get

 Px(y)[X]={1if X=∅,0otherwise. (22)
• Forget node: Assume x is a forget node (forgetting vertex v) with a child c, where Bx = Bc ∖ {v}.

 Px(y)[X]=∑⟨u,w⟩∈X∑Q⊆{⟨u,v⟩,⟨v,w⟩}dQ(Pc(y))[X∖{⟨u,w⟩}∪Q], (23)

where dQ is defined as for Equation 9, except that each pair in {{u,v}, {v,w}} ∖ Q that is an edge e of G now contributes a factor y^c(e) accounting for its cost.

• Introduce vertex node: Assume x is an introduce vertex node (introducing vertex v) with a child c, where Bx = Bc ∪ {v}.

 Px(y)[X]={Pc(y)[X]if v∉SX,0otherwise. (24)
• Join node: Assume x is a join node with two children c1 and c2, where Bx = Bc1 = Bc2.

 Px(y)[X]=∑X′⊆XPc1(y)[X′]Pc2(y)[X∖X′]. (25)

We skip the zeta transform part because it is similar to the Hamiltonian Cycle case.

Theorem 4.1

Given a graph G and a tree decomposition T of G, we can solve the Traveling Salesman problem for G in time O((4w)^d poly(n)) and in space O(Wdn log n), where W is the sum of the weights, and w and d are the width and the depth of the tree decomposition respectively.

The proof can be found in the appendix.

4.2 Conclusion

In this work, we solved the Hamiltonian Cycle and Traveling Salesman problems with polynomial space complexity, where the running time is polynomial in the size of the given graph and single exponential in the tree-depth. Our algorithms for both problems rely on modifying a DP approach such that instead of storing all possible intermediate values, we keep track of the zeta transformed values, an approach first introduced by Lokshtanov and Nederlof for static underlying sets, and later extended by Fürer and Yu to dynamic underlying sets.

References

•  Stefan Arnborg, Jens Lagergren, and Detlef Seese, Problems easy for tree-decomposable graphs extended abstract, International Colloquium on Automata, Languages, and Programming, Springer, 1988, pp. 38–51.
•  Richard Bellman, Dynamic programming treatment of the travelling salesman problem, Journal of the ACM (JACM) 9 (1962), no. 1, 61–63.
•  Andreas Björklund, Thore Husfeldt, Petteri Kaski, and Mikko Koivisto, Fourier meets Möbius: fast subset convolution, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, ACM, 2007, pp. 67–74.

•  Andreas Björklund, Petteri Kaski, and Ioannis Koutis, Directed Hamiltonicity and out-branchings via generalized Laplacians, 44th International Colloquium on Automata, Languages, and Programming, ICALP 2017, July 10-14, 2017, Warsaw, Poland, 2017, pp. 91:1–91:14.
•  Hans L Bodlaender, A linear-time algorithm for finding tree-decompositions of small treewidth, SIAM Journal on computing 25 (1996), no. 6, 1305–1317.
•  Hans L Bodlaender, Marek Cygan, Stefan Kratsch, and Jesper Nederlof, Deterministic single exponential time algorithms for connectivity problems parameterized by treewidth, Information and Computation 243 (2015), 86–111.
•  Hans L Bodlaender, John R Gilbert, Hjálmtỳr Hafsteinsson, and Ton Kloks, Approximating treewidth, pathwidth, and minimum elimination tree height, International Workshop on Graph-Theoretic Concepts in Computer Science, Springer, 1991, pp. 1–12.
•  Hans L Bodlaender, John R Gilbert, Hjálmtyr Hafsteinsson, and Ton Kloks, Approximating treewidth, pathwidth, frontsize, and shortest elimination tree, Journal of Algorithms 18 (1995), no. 2, 238–255.
•  Vincent Bouchitté, Dieter Kratsch, Haiko Müller, and Ioan Todinca, On treewidth approximations, Discrete Applied Mathematics 136 (2004), no. 2-3, 183–196.
•  Radu Curticapean, Nathan Lindzey, and Jesper Nederlof, A tight lower bound for counting hamiltonian cycles via matrix rank, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, 2018, pp. 1080–1099.
•  Marek Cygan, Jesper Nederlof, Marcin Pilipczuk, Michał Pilipczuk, Johan M. M. van Rooij, and Jakub Onufry Wojtaszczyk, Solving connectivity problems parameterized by treewidth in single exponential time, Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on, IEEE, 2011, pp. 150–159.
•  Martin Fürer and Huiwen Yu, Space saving by dynamic algebraization based on tree-depth, Theory of Computing Systems 61 (2017), no. 2, 283–304.
•  Richard M Karp, Dynamic programming meets the principle of inclusion and exclusion, Operations Research Letters 1 (1982), no. 2, 49–51.
•  Joachim Kneis, Daniel Mölle, Stefan Richter, and Peter Rossmanith, A bound on the pathwidth of sparse graphs with applications to exact algorithms, SIAM Journal on Discrete Mathematics 23 (2009), no. 1, 407–427.
•  Samuel Kohn, Allan Gottlieb, and Meryle Kohn, A generating function approach to the traveling salesman problem, Proceedings of the 1977 annual conference, ACM, 1977, pp. 294–300.
•  Daniel Lokshtanov and Jesper Nederlof, Saving space by algebraization, Proceedings of the forty-second ACM symposium on Theory of computing, ACM, 2010, pp. 321–330.
•  Jaroslav Nešetřil and Patrice Ossona De Mendez, Tree-depth, subgraph coloring and homomorphism bounds, European Journal of Combinatorics 27 (2006), no. 6, 1022–1041.
•  Michał Pilipczuk and Marcin Wrochna, On space efficiency of algorithms working on structural decompositions of graphs, ACM Transactions on Computation Theory (TOCT) 9 (2018), no. 4, 18:1–18:36.
•  Gian-Carlo Rota, On the foundations of combinatorial theory, I. Theory of Möbius functions, Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 2 (1964), no. 4, 340–368.
•  Richard P Stanley, Enumerative combinatorics. vol. 1, with a foreword by Gian-Carlo Rota. corrected reprint of the 1986 original, Cambridge Studies in Advanced Mathematics 49 (1997).
•  Virginia Vassilevska Williams, Multiplying matrices faster than Coppersmith-Winograd, Proceedings of the Forty-fourth Annual ACM Symposium on Theory of Computing (New York, NY, USA), STOC ’12, ACM, 2012, pp. 887–898.

Appendix

Here is the proof of Theorem 3.1.

Proof

The correctness of the algorithm is shown in Section 3.1 and Section 3.2. Now, we show the running time and the space complexity of our algorithm.
Running time: The only case where branching happens is when handling a forget node. We have at most 4w branches in each forget node, since the number of pseudo-edges in each bag is bounded by w because of Lemma 3, and, based on the formula for the forget node, each of the two unordered pairs {u,v} and {v,w} can contribute as an edge or as a pseudo-edge (four possible cases). On the other hand, the number of forget nodes on a path from the root to a leaf is bounded by d (the depth of the tree decomposition T). Also, the number of vertices is n, and we work with numbers of size at most n! (the number of sets of paths), which can be represented by at most O(n log n) bits. We handle multiplications of these numbers, each of which takes time M(n log n). All things considered, the total running time is O((4w)^d nM(n log n)).
Space complexity: We keep the results for one strand at a time. The number of nodes on a path from the root to a leaf is bounded by the depth of the tree decomposition (d). Along the path, we keep track of bags (of size at most w + 1) and of the numbers of sets of disjoint paths (at most n! such sets exist, which can be represented by at most O(n log n) bits). Therefore, the total space complexity is O(dn log n).

And here is the proof of Theorem 4.1.

Proof

The correctness of the algorithm is shown in Section 3.1 and Section 3.2. Now, we show the running time and the space complexity of our algorithm.
Running time: The running time analysis is very similar to the analysis done for Hamiltonian Cycle.
Space complexity: The space complexity analysis is also similar to the analysis done for Hamiltonian Cycle, except that here we have to keep track of the sum of the weights, which is at most W. So there is an extra factor of W in the space complexity here.