
The Asymmetric Travelling Salesman Problem in Sparse Digraphs

The Asymmetric Travelling Salesman Problem (ATSP) and its special case, Directed Hamiltonicity, are among the most fundamental problems in computer science. The dynamic programming algorithm running in time O^*(2^n), developed almost 60 years ago by Bellman, Held and Karp, is still the state of the art for both of these problems. In this work we focus on sparse digraphs. First, we recall known approaches for Undirected Hamiltonicity and TSP in sparse graphs and we analyse their consequences for Directed Hamiltonicity and ATSP in sparse digraphs, either by adapting the algorithm, or by using reductions. In this way, we get a number of running time upper bounds for a few classes of sparse digraphs, including O^*(2^{n/3}) for digraphs with both out- and indegree bounded by 2, and O^*(3^{n/2}) for digraphs with outdegree bounded by 3. Our main results are focused on digraphs of bounded average outdegree d. The baseline for ATSP here is a simple enumeration of cycle covers, which can be done in time bounded by O^*(μ(d)^n) for a function μ(d) ≤ (⌈d⌉!)^{1/⌈d⌉}. One can also observe that Directed Hamiltonicity can be solved in randomized time O^*((2−2^{−d})^n) and polynomial space, by adapting a recent result of Björklund [ISAAC 2018] stated originally for Undirected Hamiltonicity in sparse bipartite graphs. We present two new deterministic algorithms: the first running in time O(2^{0.441(d−1)n}) and polynomial space, and the second in exponential space with running time of O^*(τ(d)^{n/2}) for a function τ(d) ≤ d.


1 Introduction

In the Directed Hamiltonicity problem, given a directed graph (digraph) G, one has to decide whether G has a Hamiltonian cycle, i.e., a simple cycle that visits all vertices. In its weighted version, called ATSP, we are additionally given integer weights on the edges of G, and the goal is to find a minimum weight Hamiltonian cycle in G.

The ATSP problem has a dynamic programming algorithm running in time and space O^*(2^n) due to Bellman [BellmanTSP] and Held and Karp [HeldKarpTSP]. Gurevich and Shelah [GurevichShelahTSP] obtained the best known polynomial space algorithm, running in time O(4^n n^{log n}). It is a major open problem whether there is an algorithm running in time O^*((2−ε)^n) for some ε > 0, even for the unweighted case of Directed Hamiltonicity. However, there has been significant progress in answering this question for variants of Directed Hamiltonicity. Namely, Björklund and Husfeldt [BH:parity] showed that the parity of the number of Hamiltonian cycles in a digraph can be determined in time O^*(1.619^n), and Cygan, Kratsch and Nederlof solved the bipartite case of Directed Hamiltonicity in time O^*(1.888^n), which was later improved to O^*(3^{n/2}) by Björklund, Kaski and Koutis [Bjorklund:directed-bipartite].

Table 1: Running times (with polynomial factors omitted) of algorithms for Undirected Hamiltonicity and TSP in undirected graphs, by graph class: general, bipartite, bounded maximum degree Δ, bounded average degree d, bounded pathwidth, and bounded treewidth [bjorklund-hamilton, BellmanTSP, HeldKarpTSP, GurevichShelahTSP, OthmanTSP, Cygan:pathwidth, Bodlaender:TSP, XiaoNagamochi:deg3, XiaoNagamochi:deg4, Fomin:pathwidth, spotting-trees, Yunos:deg5, Bjorklund:bounded-degree, Kneis:pathwidth, Cygan:average-degree, Bjorklund:sparse-bipartite, Cygan:connectivity].

Undirected graphs.

Even more is known in the undirected setting, where the problems are called Undirected Hamiltonicity and TSP. Björklund [bjorklund-hamilton] shows that Undirected Hamiltonicity can be solved in time O^*(1.657^n) in general and in time O^*(2^{n/2}) in the bipartite case. Very recently, Nederlof [nederlof:1.9999] showed that TSP in bipartite graphs admits an algorithm running in time O(1.9999^n), assuming that square matrices can be multiplied in time O(n^{2+o(1)}). Finally, there is a number of results for Undirected Hamiltonicity and TSP restricted to graphs that are somewhat sparse. An early example is an algorithm of Eppstein [Eppstein:cubic] for TSP in graphs of maximum degree 3, running in time O^*(2^{n/3}). This result has been later improved and generalized to larger values of the maximum degree; we refer the reader to Table 1 for details (Δ denotes the maximum degree). Perhaps the most general measure of graph sparsity is the average degree d. Cygan and Pilipczuk [Cygan:average-degree] showed that whenever d is bounded, the O^*(2^n) barrier for TSP can be broken, although only slightly. For small values of d, more significant improvements are possible. Namely, by combining the algorithms for Undirected Hamiltonicity and TSP parameterized by pathwidth [Cygan:pathwidth, Bodlaender:TSP] with a bound on the pathwidth of sparse graphs [Kneis:pathwidth], one gets the upper bounds listed in Table 1 for Undirected Hamiltonicity and TSP, respectively. For Undirected Hamiltonicity, if the input graph is additionally bipartite, Björklund [Bjorklund:sparse-bipartite] shows an even better upper bound.

Table 2: Running times (with polynomial factors omitted) of the algorithms for directed graphs, by graph class: general digraphs [Bax:inclusion-exclusion, Karp:inclusion-exclusion, Kohn:inclusion-exclusion, BellmanTSP, HeldKarpTSP, GurevichShelahTSP], bipartite digraphs [Bjorklund:directed-bipartite, BellmanTSP, HeldKarpTSP, OthmanTSP], digraphs with bounded out- and indegrees or bounded total degree (Corollaries 2.9 and 2.11, Theorem 2.12, [Bjorklund:bounded-degree], Theorem 2.14), digraphs of bounded average outdegree d (Corollary 2.7, Theorems 1.1, 1.2 and 2.14), and digraphs whose underlying undirected graph has bounded treewidth [Cygan:connectivity].

Directed sparse graphs: hidden results

The goal of this paper is to investigate Directed Hamiltonicity and ATSP in sparse directed graphs. Quite surprisingly, not many results on this topic are stated explicitly. In fact, we were able to find just three references of this kind. First, Björklund, Husfeldt, Kaski and Koivisto [Bjorklund:bounded-degree] describe an algorithm for digraphs with total degree (the sum of indegree and outdegree) bounded by Δ that works in time O^*((2−ε_Δ)^n), for some ε_Δ > 0. Second, Cygan et al. [Cygan:connectivity] describe an algorithm for Directed Hamiltonicity running in time O^*(6^{tw}), where tw is the treewidth of the underlying undirected graph. Finally, Björklund and Koutis [DBLP:journals/corr/BjorklundK16a] show an algorithm which counts Hamiltonian cycles in directed graphs of average degree d in expected time O^*(c_d^n), where c_d < 2 is a constant depending only on d.

However, one cannot say that nothing more is known, because many results for undirected graphs imply running time bounds in the directed setting. We devote the first part of this work to investigating such implications. In some cases, the implications are immediate. For example, Gebauer [Gebauer:TCS:4-regular, GebauerTSP] shows an algorithm running in time O^*(3^{n/2}) that solves TSP in graphs of maximum degree 4. It uses the meet-in-the-middle approach and can be sketched as follows: guess two opposite vertices of the solution cycle, generate a family of paths of length n/2 from each of them (each family of size at most 3^{n/2}), and store one of the families in a dictionary to enumerate all complementary pairs of paths in time O^*(3^{n/2}). This algorithm, without a change, can be used for ATSP in digraphs of maximum outdegree 3, with the same running time bound (see Theorem 2.12).

The other implications that we found rely on a simple reduction from ATSP to a variant of TSP in bipartite undirected graphs (see Lemma 2.1): replace each vertex v of the input digraph by two vertices v_out and v_in, joined by an edge of weight 0, and for each edge (u, v) create an edge u_out v_in of the same weight. Then find a lightest Hamiltonian cycle that contains the matching {v_out v_in : v ∈ V}. By applying this reduction to a digraph with both outdegrees and indegrees bounded by 2, which we call a [2,2]-graph, and using Eppstein's algorithm [Eppstein:cubic], we get the running time of O^*(2^{n/3}), see Corollary 2.9. Another consequence is an algorithm running in time O^*(2^{n/6}) for digraphs of maximum total degree 3, see Corollary 2.11. These two simple classes of digraphs were studied by Plesník [plesnik], who showed that Directed Hamiltonicity remains NP-complete when restricted to them.

We can also apply the reduction to an arbitrary digraph of average outdegree d. A naive approach would then be to enumerate all perfect matchings in the bipartite graph induced by the non-matching edges. Indeed, each such matching corresponds to a cycle cover in the input graph, so we basically enumerate cycle covers and filter out the disconnected ones. Thanks to the Bregman-Minc inequality [Bregman:permanent], which bounds the permanent of sparse matrices, the resulting algorithm has running time O^*(μ(d)^n), where μ(d) ≤ (⌈d⌉!)^{1/⌈d⌉}.

See Corollary 2.7 for details.

Yet another upper bound for digraphs of average outdegree d is obtained by using the reduction described above and then applying Björklund's algorithm for sparse bipartite graphs [Bjorklund:sparse-bipartite], with a slight modification that forces the matching to be contained in the Hamiltonian cycle (see Theorem 2.14). The resulting algorithm has running time O^*((2−2^{−d})^n).

Directed sparse graphs: main results

The simple consequences that we describe above are complemented by two more technical results.

The first algorithm runs in polynomial space and realizes the following idea. Assume that d is small. Then many of the vertices of the input graph have outdegree at most 2, and we can just branch on the vertices of outdegree at least 3, and solve the resulting instances using the fast algorithms described above. This idea can be boosted a bit in the case when the initial branching is too costly, i.e., when there are many vertices of high outdegree: we observe that in such an unbalanced graph one can apply the simple cycle cover enumeration, which then runs faster than in graphs of the same density but with balanced outdegrees. After a technical analysis of the running time we get the following theorem.

Theorem 1.1.

ATSP restricted to digraphs of average outdegree at most d can be solved in time O(2^{0.441(d−1)n}) and polynomial space.

The second algorithm generalizes Gebauer's meet-in-the-middle approach to digraphs of average outdegree d. (We note that it uses exponential space.)

Theorem 1.2.

ATSP restricted to digraphs of average outdegree at most d can be solved in time O^*(τ(d)^{n/2}) and the same space, where τ(d) ≤ d.

Figure 1: Comparison of the running times of the algorithms for solving ATSP (enumcc, branch+, mim) and Directed Hamiltonicity (Björklund) in sparse digraphs. Horizontal axis: average outdegree d; vertical axis: the base c of the running time bound of the form O^*(c^n).

Which algorithm is the best?

Figure 1 compares the four algorithms for solving ATSP and Directed Hamiltonicity in digraphs of average outdegree d described above:

  • enumcc: enumerating cycle covers (Corollary 2.7),

  • Björklund: adaptation of Björklund’s bipartite graphs algorithm (Theorem 2.14),

  • branch+: branching boosted by enumerating cycle covers (Theorem 1.1),

  • mim: meet in the middle (Theorem 1.2).

The choice of the best algorithm depends on d, on whether we can afford exponential space, and on whether we solve ATSP or just Directed Hamiltonicity. We can conclude the following.

  • ATSP in polynomial space: for small values of d use branch+, for moderate d use enumcc, and for large d use the general algorithm of Gurevich and Shelah [GurevichShelahTSP].

  • ATSP in exponential space: for small values of d use branch+, for moderate d use mim, and for large d use the general dynamic programming of Bellman [BellmanTSP] and Held and Karp [HeldKarpTSP].

  • Directed Hamiltonicity in polynomial space: for small values of d use branch+, for moderate d use enumcc, and for large d use Björklund.

  • Directed Hamiltonicity in exponential space: for small values of d use branch+, for moderate d use mim, and for large d use Björklund.

2 Reductions from undirected graphs

The objective of this section is to recall two reductions from the ATSP to the (forced) TSP. Then, we will discuss existing methods of solving Undirected Hamiltonicity and TSP, and present their implications for corresponding problems in directed graphs. The summary of this section is presented in Tables 1 and 2.

2.1 General reductions

We recall that in the Forced Travelling Salesman Problem [Eppstein:cubic, Rubin:forcedTSP, XiaoNagamochi:deg3, XiaoNagamochi:deg4], we are given an undirected graph G, a weight function w on its edges, and a subset F ⊆ E(G). We say that a Hamiltonian cycle C is admissible if F ⊆ E(C). The goal is to find an admissible Hamiltonian cycle of the minimum total weight of its edges (or to report that there is no such cycle). Moreover, we define the Bipartite Forced Matching TSP (BFM-TSP) as the special case of the Forced TSP in which the graph G is bipartite and the edges of F form a perfect matching in G.

The following lemma provides the relationship between the BFM-TSP and the ATSP.

Lemma 2.1.

For every instance (G, w) of ATSP, where G is a digraph on n vertices, there is an equivalent instance (H, w', F) of BFM-TSP such that H is a graph on 2n vertices.

Moreover, if both the outdegrees and the indegrees of G are bounded by Δ, then H has maximum degree Δ + 1. Similarly, if G has average outdegree d, then H has average degree d + 1.

Proof.

Let (G, w) be an instance of ATSP. Let V = V(G) and n = |V|. We define H as the bipartite graph on the vertex set {v_out, v_in : v ∈ V} with the edge set {u_out v_in : (u, v) ∈ E(G)} ∪ F, where F is the perfect matching {v_out v_in : v ∈ V}. The edges of H inherit the weights from G, i.e. for (u, v) ∈ E(G) we set w'(u_out v_in) = w(u, v). The edges of F have weight 0.

We claim that (H, w', F) is the desired instance of BFM-TSP. Indeed, H has 2n vertices. Given a Hamiltonian cycle C = v_1 v_2 ⋯ v_n v_1 in G, we can construct the perfect matching M_C consisting of the edges (v_i)_out (v_{i+1})_in for i = 1, …, n (indices modulo n). Then M_C ∪ F forms a Hamiltonian cycle in H of the same weight as C. Conversely, consider a Hamiltonian cycle D in H such that F ⊆ E(D), i.e. D = F ∪ M for a matching M ⊆ {u_out v_in : (u, v) ∈ E(G)}. Hence, after orienting the edges of M from the out-copies to the in-copies and contracting each edge v_out v_in of F to a single vertex v, we get a Hamiltonian cycle in G of the same weight as D. ∎
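To make the construction concrete, here is a minimal Python sketch of the reduction; the encoding of v_out as v and v_in as n + v, as well as the function name, are our own choices and not part of the paper.

def atsp_to_bfm_tsp(n, weight):
    """Sketch of the reduction from Lemma 2.1 (our own naming and encoding).

    The digraph has vertices 0..n-1 and `weight[u]` maps every out-neighbour
    v of u to w(u, v).  Vertex v is split into v_out = v and v_in = n + v.
    Returns the weighted edge set of the bipartite graph H and the forced
    perfect matching F (self-loops, irrelevant for Hamiltonicity, are assumed absent).
    """
    edges = {}                                    # undirected edges of H: frozenset -> weight
    for u in range(n):
        for v, w_uv in weight[u].items():
            edges[frozenset((u, n + v))] = w_uv   # edge u_out -- v_in inherits w(u, v)
    matching = []
    for v in range(n):
        edges[frozenset((v, n + v))] = 0          # forced edge v_out -- v_in of weight 0
        matching.append(frozenset((v, n + v)))
    return edges, matching

# Tiny usage example: a directed triangle 0 -> 1 -> 2 -> 0 with unit weights.
H_edges, F = atsp_to_bfm_tsp(3, [{1: 1}, {2: 1}, {0: 1}])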

Lemma 2.1 implies, in particular, that if there is an algorithm for BFM-TSP running in time T(n), then there is an algorithm for ATSP running in time T(2n), up to a polynomial factor.

When we solve an ATSP instance (G, w), in some cases it is easier to work with an equivalent instance of TSP (without forced edges).

Lemma 2.2.

For every instance (G, w) of ATSP, where G is a digraph on n vertices, there is an equivalent instance (H', w'') of TSP such that H' is an undirected graph on 3n vertices.

Proof.

This is a classic result. Given an instance (G, w) of ATSP, we start with constructing an equivalent instance (H, w', F) of BFM-TSP by applying Lemma 2.1. Then, we substitute in H every edge v_out v_in of F with a simple path of length 2: v_out, v_mid, v_in, where the two new edges v_out v_mid and v_mid v_in have weight 0. We see that the resulting graph H' has 3n vertices, and every Hamiltonian cycle C' in H' corresponds to a Hamiltonian cycle C in H such that F ⊆ E(C) and w''(C') = w'(C). ∎

2.2 Enumerating cycle covers

Let (H, w, F) be an instance of BFM-TSP, and let ℳ be the family of all perfect matchings in H − F, i.e. in the graph H with the edges of F removed. We observe that every cycle cover in H which contains all edges of F is of the form F ∪ M, where M ∈ ℳ. Hence, our goal is to find a matching M ∈ ℳ such that F ∪ M is a Hamiltonian cycle in H, and the weight of M is minimum possible. One way to do it is to list all the perfect matchings in ℳ, and choose the best one among those which form with F a Hamiltonian cycle in H. We will investigate the complexity of such an approach in sparse graphs.
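As an illustration of this approach, here is a minimal Python sketch that performs the enumeration directly on the digraph side: choosing one outgoing edge per vertex corresponds to choosing a perfect matching in H − F, i.e. a cycle cover, and the disconnected covers are filtered out. It is a plain backtracking enumeration (not the polynomial-delay algorithm cited below), all names are ours, and nonnegative weights are assumed for the pruning step.

def atsp_by_cycle_cover_enumeration(n, weight):
    """Minimum weight of a Hamiltonian cycle in a digraph on vertices 0..n-1.

    `weight[u]` maps each out-neighbour v of u to w(u, v).  We enumerate all
    cycle covers (one outgoing edge per vertex, all heads distinct) and keep
    only those forming a single n-cycle.  Polynomial space; our own sketch.
    """
    INF = float("inf")
    succ = [None] * n          # chosen successor of each vertex
    used = [False] * n         # vertices already chosen as some successor
    best = INF

    def single_cycle():
        # succ is a permutation here; check that it consists of one n-cycle.
        length, v = 0, 0
        while True:
            v = succ[v]
            length += 1
            if v == 0:
                return length == n

    def extend(u, cost):
        nonlocal best
        if cost >= best:       # pruning step; assumes nonnegative weights
            return
        if u == n:
            if single_cycle():
                best = cost
            return
        for v, w_uv in weight[u].items():
            if not used[v]:
                succ[u], used[v] = v, True
                extend(u + 1, cost + w_uv)
                used[v] = False

    extend(0, 0)
    return best

# Usage: the directed triangle 0 -> 1 -> 2 -> 0 plus a heavier chord 0 -> 2.
print(atsp_by_cycle_cover_enumeration(3, [{1: 1, 2: 5}, {2: 1}, {0: 1}]))  # prints 3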

It is known that all perfect matchings in a bipartite graph can be listed in polynomial time per matching and in polynomial space [Fukuda:matchings]. Hence, it is enough to provide a bound on the size of ℳ in sparse graphs. We start with recalling a classic result of Bregman together with its standard application.

Theorem 2.3 (Bregman-Minc inequality [Bregman:permanent, Schrijver:permanent]).

Let A be an n × n binary matrix, and let r_i denote the number of ones in the i-th row. Then perm(A) ≤ ∏_{i=1}^{n} (r_i!)^{1/r_i}.

Corollary 2.4 ([ProofsBook]).

Let H be a bipartite graph with bipartition classes X and Y, where |X| = |Y| = n, and let d_1, …, d_n denote the degrees of the vertices of X. Then, the number of perfect matchings in H can be bounded by ∏_{i=1}^{n} (d_i!)^{1/d_i}.
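As a quick sanity check, here is a tiny Python helper (ours, not from the paper) that evaluates this bound for a given degree sequence; it estimates how many cycle covers the enumeration sketched above has to inspect.

from math import factorial, prod

def bregman_minc_bound(degrees):
    """Upper bound on the number of perfect matchings of a bipartite graph,
    given the degrees of the vertices on one side (Bregman-Minc inequality)."""
    if any(d == 0 for d in degrees):
        return 0.0                         # an isolated vertex kills all matchings
    return prod(factorial(d) ** (1.0 / d) for d in degrees)

# With all outdegrees equal to 3 the bound is (3!)^(n/3), i.e. roughly 1.817^n:
print(bregman_minc_bound([3] * 30))        # approximately 6.0 ** 10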

Corollary 2.5.

ATSP restricted to digraphs of outdegree bounded by Δ can be solved in time O^*((Δ!)^{n/Δ}) and polynomial space.

Proof.

Given an instance (G, w) of ATSP, we use Lemma 2.1 to obtain an equivalent instance (H, w', F) of BFM-TSP. Then, H − F is a bipartite graph with bipartition classes {v_out : v ∈ V(G)} and {v_in : v ∈ V(G)}, and all vertices v_out have degree at most Δ in H − F. By Corollary 2.4, there are at most (Δ!)^{n/Δ} perfect matchings in H − F. Hence, according to our initial observation, the instance can be solved in time O^*((Δ!)^{n/Δ}). ∎

To the best of our knowledge, Corollary 2.5 provides the fastest known polynomial space algorithm for ATSP in digraphs of outdegree bounded by Δ ≥ 3. The Bregman-Minc inequality is also useful for digraphs with bounded average outdegree. First, we need to quote an analytic lemma.

Lemma 2.6 ([AlonSpencer]).

For a function , and numbers , where , the following inequality holds:

Corollary 2.7.

ATSP restricted to digraphs of average outdegree d can be solved in time O^*(μ(d)^n) and polynomial space, for a function μ(d) ≤ (⌈d⌉!)^{1/⌈d⌉} specified in the proof.

In particular, for integral values of d, the running time is bounded by O^*((d!)^{n/d}).

Proof.

As before, we start by constructing an equivalent instance (H, w', F) of BFM-TSP. Let d_1, …, d_n denote the degrees of the vertices v_out in H − F. Note that their average is equal to d. By Corollary 2.4, H − F has at most ∏_{i=1}^{n} (d_i!)^{1/d_i} perfect matchings. Lemma 2.6 implies that this value is maximized if for all i we have d_i = ⌊d⌋ or d_i = ⌈d⌉. If d is an integer, all the d_i are then equal to d and the bound becomes (d!)^{n/d}. Otherwise, if we denote the number of vertices of degree ⌈d⌉ by k, then ⌊d⌋(n − k) + ⌈d⌉k = dn, hence k = (d − ⌊d⌋)n, and the number of perfect matchings in H − F is at most (⌊d⌋!)^{(n−k)/⌊d⌋} · (⌈d⌉!)^{k/⌈d⌉}, which yields the claimed bound. ∎
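For illustration, here is a short worked computation (our own numbers, following the counting above) for average outdegree d = 2.5; half of the n out-copies then have degree 2 and half have degree 3, so the number of matchings to enumerate is at most

\[
  (2!)^{\frac{n/2}{2}} \cdot (3!)^{\frac{n/2}{3}}
  \;=\; 2^{n/4} \cdot 6^{n/6}
  \;\approx\; 1.603^{n}
  \;\le\; \left(\lceil 2.5 \rceil !\right)^{n/\lceil 2.5 \rceil} = 6^{n/3} \approx 1.817^{n}.
\]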

2.3 Branching algorithms

One of the most common techniques used for solving NP-hard problems in sparse graphs is branching (bounded search trees). It is based on optimizing exhaustive search algorithms by bounding the size of the recursion tree. In the case of TSP, the first result of this kind is due to Eppstein [Eppstein:cubic]. He showed a branching algorithm for subcubic graphs running in time O^*(2^{n/3}). Actually, he proved a stronger result in his work.

Theorem 2.8 ([Eppstein:cubic]).

Forced TSP restricted to subcubic graphs on n vertices with forced edge set F can be solved in time O^*(2^{(n−|F|)/3}) and polynomial space.

Corollary 2.9.

ATSP restricted to digraphs with all out- and indegrees at most 2 can be solved in time O^*(2^{n/3}) and polynomial space.

Proof.

Let (G, w) be an instance of ATSP, where G is a digraph with all out- and indegrees at most 2. We apply Lemma 2.1 to obtain an equivalent instance (H, w', F) of BFM-TSP. We know that H has 2n vertices and is subcubic. Moreover, (H, w', F) is an instance of Forced TSP with n forced edges. Hence, we can use Theorem 2.8 to solve it in time O^*(2^{(2n−n)/3}) = O^*(2^{n/3}). ∎

We should note here that since the work of Eppstein, faster algorithms for TSP in subcubic graphs have been developed [IwamaNakashima:cubic, Liskiewicz:cubic, XiaoNagamochi:deg3]. However, when applied to the 2n-vertex subcubic graphs resulting from digraphs with all out- and indegrees at most 2 (as described in the proof of Corollary 2.9), none of them improves on the O^*(2^{n/3}) bound.

Since Lemma 2.1 allows us to transfer some of the results for subcubic instances of TSP to its version in digraphs with all out- and indegrees at most 2, one may also ask whether there is a relationship between subcubic instances of undirected TSP and instances of ATSP with maximum total degree at most 3. (Recall that the total degree of a vertex is the sum of its indegree and outdegree.) The following lemma (implicit in Plesník [plesnik]) answers this question indirectly.

Lemma 2.10 ([plesnik]).

There is an algorithm for ATSP restricted to digraphs of maximum total degree 3 working in time T(n) if and only if there is an algorithm for ATSP restricted to digraphs with out- and indegrees at most 2 working in time T(2n) (in both cases up to polynomial factors).

Proof.

(⇒) Let G be a digraph on n vertices with all out- and indegrees at most 2. Let (H, w', F) be the instance of BFM-TSP defined in the proof of Lemma 2.1. We construct a weighted digraph G' by orienting the edges of E(H) ∖ F from the out-copies to the in-copies, and the edges of F from the in-copies to the out-copies (the weights stay the same). We see that G' has 2n vertices, all total degrees in G' are at most 3, and Hamiltonian cycles in G' correspond to Hamiltonian cycles in G of the same weight.

(⇐) Let G be a digraph on n vertices with maximum total degree at most 3. We may assume that all indegrees and outdegrees in G are equal to 1 or 2, because otherwise G has no Hamiltonian cycle. Since the total degree of each vertex is at most 3, each vertex has exactly one incoming edge or exactly one outgoing edge. We see that every Hamiltonian cycle in G must contain all such edges, hence they can be contracted. When we contract an edge (u, v) we also remove the edges of the form (u, x) for x ≠ v and (y, v) for y ≠ u. Let us denote the remaining graph by G'. We claim that G' has all out- and indegrees at most 2. Indeed, consider a contraction of an edge (u, v) to a new vertex z. Since we remove all other edges of the form (u, x) and (y, v), there is a one-to-one correspondence between the edges entering (resp. leaving) z and the edges entering u (resp. leaving v). Hence, a single contraction does not increase the maximum out- or indegree, which implies that after all contractions all out- and indegrees are still at most 2. Moreover, every vertex of G takes part in at least one edge contraction, and thus |V(G')| ≤ n/2. ∎

By combining Lemma 2.10 with Corollary 2.9 we obtain the following.

Corollary 2.11.

ATSP restricted to digraphs of maximum total degree 3 can be solved in time O^*(2^{n/6}) and polynomial space.

2.4 Meet in the middle technique

Another approach to solving TSP in sparse graphs was suggested by Gebauer [GebauerTSP]. Although it was originally presented for undirected graphs of maximum degree 4, we recall it here for digraphs with outdegrees bounded by 3, since the same method can be applied to them.

Theorem 2.12 ([GebauerTSP]).

ATSP restricted to digraphs with outdegrees bounded by 3 can be solved in time O^*(3^{n/2}) and exponential space.

The idea of this algorithm can be sketched as follows. We guess a pair of vertices u, v which divide a hypothetical Hamiltonian cycle into two (almost) equal parts. Next, we run a branching procedure to generate all the paths from u to v of length ⌈n/2⌉, and all the paths from v to u of length ⌊n/2⌋. Finally, we try to combine such paths into one Hamiltonian cycle by memorizing the first family in a dictionary and iterating over the paths of the second family.

For a detailed description, we refer to the original work of Gebauer [GebauerTSP], and to Section 4, where we generalize this result to digraphs of bounded average outdegree.
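To illustrate the meet-in-the-middle scheme on the directed side, here is a minimal Python sketch (our own; it fixes s = 0 instead of guessing two opposite vertices, and stores the first halves in a dictionary keyed by the endpoint and the set of visited vertices). It is meant as a toy illustration of the combination step, not as the algorithm of [GebauerTSP] or of Section 4.

def atsp_meet_in_the_middle(n, weight):
    """Meet-in-the-middle sketch for ATSP on vertices 0..n-1.

    `weight[u]` maps each out-neighbour v of u to w(u, v).  A Hamiltonian
    cycle through s = 0 is split at s and at the vertex t reached after
    k = n // 2 edges: the first halves are tabulated in a dictionary, the
    second halves are generated in the reversed digraph and matched.
    """
    INF = float("inf")
    if n < 2:
        return INF
    s, k = 0, n // 2

    def simple_paths(adj, start, length):
        """All simple paths with `length` edges from `start` in `adj`,
        reported as (endpoint, frozenset of visited vertices, total cost)."""
        def rec(v, visited, cost, edges_left):
            if edges_left == 0:
                yield v, frozenset(visited), cost
                return
            for u, w_vu in adj[v].items():
                if u not in visited:
                    visited.add(u)
                    yield from rec(u, visited, cost + w_vu, edges_left - 1)
                    visited.remove(u)
        yield from rec(start, {start}, 0, length)

    # Reversed digraph, used to enumerate the second halves (paths ending at s).
    rev = [dict() for _ in range(n)]
    for u in range(n):
        for v, w_uv in weight[u].items():
            rev[v][u] = w_uv

    # First halves: s -> ... -> t with k edges, cheapest per (t, visited set).
    first = {}
    for t, visited, cost in simple_paths(weight, s, k):
        key = (t, visited)
        if cost < first.get(key, INF):
            first[key] = cost

    # Second halves: t -> ... -> s with n - k edges; combine with a matching
    # first half whose visited set is the complement plus the cut vertices.
    everything = frozenset(range(n))
    best = INF
    for t, visited, cost in simple_paths(rev, s, n - k):
        complement = (everything - visited) | {s, t}
        best = min(best, cost + first.get((t, complement), INF))
    return best

# The directed triangle again: the only Hamiltonian cycle has weight 3.
print(atsp_meet_in_the_middle(3, [{1: 1}, {2: 1}, {0: 1}]))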

2.5 Algebraic methods

Björklund [Bjorklund:sparse-bipartite] shows the following result.

Theorem 2.13 ([Bjorklund:sparse-bipartite]).

There is a Monte Carlo algorithm which solves Undirected Hamiltonicity restricted to bipartite graphs on n vertices of average degree at most δ in time O^*((2 − 2^{1−δ})^{n/2}) and polynomial space.

It turns out that the proof of Theorem 2.13 can be modified to get the following Theorem. The idea is to use the reduction of Lemma 2.1 to get a sparse bipartite graph and modify the construction of Theorem 2.13 so that a relevant forced matching is a part of the resulting Hamiltonian cycle.

Theorem 2.14.

There is a Monte Carlo algorithm which solves Directed Hamiltonicity restricted to digraphs of average outdegree at most d in time O^*((2 − 2^{−d})^n) and polynomial space.

Proof.

We assume that the reader is familiar with the proof of Theorem 2.13. We apply Lemma 2.1 and get a bipartite undirected graph H together with a perfect matching F ⊆ E(H). Recall that H has 2n vertices and average degree at most d + 1. The goal is to decide whether H has a Hamiltonian cycle that contains F.

Similarly as in [Bjorklund:sparse-bipartite], we define a polynomial matrix with rows indexed by the vertices of one bipartition class of H and columns indexed by the vertices of the other class. The entries of this matrix are polynomials in three types of variables: one variable for every vertex, one for every edge, and a third, somewhat special type. Namely, pick a fixed edge of F; for every other edge there is a single variable with two names, while for the fixed edge there are two different variables. Then we define a polynomial over a large enough field of characteristic two by combining the entries of the matrix as in [Bjorklund:sparse-bipartite].

Now we should prove that, thanks to cancellations in a field of characteristic two, the polynomial enumerates exactly the Hamiltonian cycles of H which contain F. Björklund (Lemma 3 in [Bjorklund:sparse-bipartite]) shows this equality for the original polynomial using three observations: 1) after cancellation, the surviving terms do not contain the vertex variables, 2) each surviving term corresponds to a unique cycle cover in the graph, and 3) terms corresponding to non-Hamiltonian cycle covers pair up and cancel out, because if we reverse the lexicographically first cycle that does not contain the fixed edge, then we get exactly the same term (and if we reverse a Hamiltonian cycle we get a different term, because of the asymmetry in defining the variables of the third type). The arguments used in [Bjorklund:sparse-bipartite] for proving 1)–3) still hold for the new polynomial, essentially for the same reasons.

The second ingredient of Björklund's construction is an upper bound on the probability that none of the columns of the matrix is identically zero once one group of variables is fixed and the remaining variables receive a random assignment. The calculation relies on the observation that if, for the vertex indexing a column, all the relevant variables are assigned zero, then this column is identically zero. Note that this observation still holds for our new design. It follows that the probability bounds derived in [Bjorklund:sparse-bipartite] apply also in our case.

The third ingredient is an efficient identification of the assignments for which the matrix is non-zero (for fixed, random values of the remaining variables). This is done by creating a Boolean variable corresponding to every variable and building a CNF formula such that its satisfying assignments correspond to a superset of all the assignments that result in a non-zero matrix. Again, the fact that the resulting formula is in CNF follows from the fact that a column is non-zero only if at least one of the corresponding variables is assigned a non-zero value, which is also true in our design. Finally, Björklund [Bjorklund:sparse-bipartite] shows how to enumerate all satisfying assignments of the CNF formula efficiently, which is not altered in any way by our changes in the design of the polynomial. ∎

2.6 Dynamic programming on pathwidth decompositions

There are many works [Fomin:pathwidth, FominHoie:pathwidth-cubic, Kneis:pathwidth] which show that the pathwidth of sparse undirected graphs is relatively small, and which provide a polynomial time algorithm for computing the corresponding decomposition. (For a definition of pathwidth, see [platypus], section 7.2.) These results, combined with algorithms working on a path decomposition of the input graph [Cygan:pathwidth, Bodlaender:TSP], often lead to the fastest algorithms for sparse undirected graphs (see Table 1).

A natural question that arises here is whether these methods can be transferred to the corresponding problems in sparse directed graphs. There are two natural strategies for that: either use a path decomposition of the underlying undirected graph, or a path decomposition of the graph resulting from the reduction of Lemma 2.1 or Lemma 2.2. Although in this way one can get algorithms faster than the general O^*(2^n) bound for some classes of sparse digraphs, it does not help to improve any of the bounds in Table 2, at least when combining currently known results. For completeness, in the remainder of this section we provide calculations that support this claim.

Let us try the direct approach first. We can use the following result of Cygan et al.

Theorem 2.15 ([Cygan:connectivity]).

There is a Monte Carlo algorithm which, given a digraph G together with a tree decomposition of width t of its underlying undirected graph, solves Directed Hamiltonicity for G in time O^*(6^t) and exponential space.

Consider a [2,2]-graph, i.e., a digraph with both out- and indegrees bounded by 2. The undirected graph underlying a [2,2]-graph has maximum degree 4, and hence it has pathwidth at most (1/3 + ε)n, according to Theorem 2.16 below.

Theorem 2.16 ([Fomin:pathwidth]).

For every ε > 0, there exists an integer n_ε such that for every undirected graph G on n ≥ n_ε vertices the inequality

pw(G) ≤ (1/6)·n_3 + (1/3)·n_4 + (13/30)·n_5 + (23/45)·n_6 + n_{≥7} + εn

holds, where n_i is the number of vertices of degree i in G, and n_{≥7} is the number of vertices of degree at least 7. Moreover, a path decomposition which witnesses the above inequality can be computed in polynomial time.

This, combined with Theorem 2.15, gives an algorithm for Directed Hamiltonicity running in time O^*(6^{(1/3+ε)n}) ⊆ O(1.818^n), much slower than the algorithm of Corollary 2.9.

Now consider a digraph of average outdegree d. Then the underlying undirected graph has average degree at most 2d, and we can bound its pathwidth using the following result.

Theorem 2.17 ([Kneis:pathwidth]).

Let G be an n-vertex undirected graph of average degree d. Then pw(G) ≤ dn/11.538 + O(log n).

Moreover, a path decomposition which witnesses the above inequality can be computed in polynomial time.

It follows that the algorithm from Theorem 2.15 applied to a digraph of average outdegree d has running time of O^*(6^{dn/5.769}), which can be seen to be slower than, say, enumerating cycle covers (Corollary 2.7) for all values of d.

Now let us focus on the reduction approach. We can use the following two results.

Theorem 2.18 ([Cygan:pathwidth]).

There is a Monte Carlo algorithm which, given a graph G together with a path decomposition of G of width p, solves Undirected Hamiltonicity for G in time O^*((2 + √2)^p) and exponential space.

Moreover, if G is subcubic, the running time bound can be improved further.

Theorem 2.19 ([Bodlaender:TSP]).

There is an algorithm which, given a graph G together with a path decomposition of G of width p, solves TSP for G in time O^*((2 + 2^{ω/2})^p) and exponential space, where ω is the matrix multiplication exponent.

Moreover, if G is subcubic, the running time bound can be improved further.

Theorems 2.18 and 2.19 combined with Theorem 2.16 give, in particular, fast algorithms for Undirected Hamiltonicity and TSP in subcubic graphs (see Table 1). For undirected graphs of average degree at most d we can combine the above theorems with Theorem 2.17 to obtain the bounds for Undirected Hamiltonicity and TSP listed in Table 1.

Now we turn to digraphs again. First consider [2,2]-graphs, i.e., digraphs with out- and indegrees bounded by 2. Let (G, w) be an instance of ATSP, where G is such a digraph. We use Lemma 2.2 to obtain an equivalent instance (H', w'') of (undirected) TSP. From the construction of H' we see that H' has 3n vertices, of which at most 2n have degree 3 and the remaining ones have degree 2. Hence, by Theorem 2.16 we have pw(H') ≤ (1/3 + ε)n for every fixed ε > 0. Therefore, Theorems 2.18 and 2.19 give algorithms for Directed Hamiltonicity and ATSP in [2,2]-graphs; in both cases the resulting running time is worse than the O^*(2^{n/3}) bound of Corollary 2.9.

Again, consider digraphs of bounded average outdegree d. Let (G, w) be an instance of ATSP, where G is such a digraph. We use Lemma 2.2 to obtain an equivalent instance (H', w'') of TSP. Then, H' has 2n vertices of average degree d + 1, and n additional vertices of degree 2. The latter can increase the pathwidth by at most 1 in total, hence pw(H') ≤ (d+1)n/5.769 + O(log n), and consequently Theorems 2.18 and 2.19 yield algorithms for Directed Hamiltonicity and ATSP in G with this pathwidth bound plugged in. Both results are worse than the algorithm enumerating cycle covers described in Subsection 2.2.

3 Polynomial space algorithm

This section is devoted to the proof of Theorem 1.1. We begin by introducing some additional notions, then we provide a branching algorithm which will later be used as a subroutine, and finally we describe and analyse an algorithm for digraphs of average outdegree at most d.

3.1 Preliminaries

Interfaces and switching walks.  Let G be a directed graph (digraph). For a vertex v, the set of all edges incoming to v, as well as the set of all edges outgoing from v, will be called an interface of v. We define the type of an interface to be in or out, accordingly.

Consider a sequence e_1, …, e_k of distinct edges of G such that if we forget about the orientation of the edges, then we get a walk v_0, v_1, …, v_k in the underlying undirected graph, where for i = 1, …, k the edge e_i is an orientation of v_{i−1}v_i. Assume additionally that for every i = 1, …, k−1 either both edges e_i and e_{i+1} enter v_i or both leave v_i; in other words, the orientation of the edges on the walk alternates. Now, let I_0, I_1, …, I_k be the consecutive interfaces visited by the walk, i.e., for every i the set I_i is an interface of v_i, and for every i = 1, …, k we have e_i ∈ I_{i−1} ∩ I_i. If |I_0| ≠ 2 and |I_k| ≠ 2, while |I_i| = 2 for every 0 < i < k, the sequence e_1, …, e_k will be called a switching walk. Similarly, if I_0 = I_k and |I_i| = 2 for every i, i.e., the walk is closed, then e_1, …, e_k will be called a switching circuit. In both cases, the length of e_1, …, e_k is defined as k. The sequence v_0, …, v_k is called the vertex sequence of the walk. Abusing the notation slightly, we will refer to e_1, …, e_k as a set when it is convenient. The motivation for introducing the notions of switching walks and circuits is given by the following lemma.

Lemma 3.1.

Let W = e_1, …, e_k be a switching walk or a switching circuit in a digraph G, and let H be a Hamiltonian cycle in G. Then E(H) ∩ W consists either of all the odd-indexed edges of W, or of all the even-indexed edges of W.

Proof.

Let us assume that W is a switching walk. (For a switching circuit the proof is analogous.) Consider two consecutive edges e_i and e_{i+1}. By the definition of a switching walk, there is a vertex v_i with an interface of size 2 equal to {e_i, e_{i+1}}. Since the cycle H passes through v_i, we obtain that H must contain exactly one of the edges e_i and e_{i+1}, and the lemma easily follows. ∎

In some cases it is convenient to study switching walks and circuits in the language of an auxiliary bipartite graph. Let G be a digraph. The interface graph I(G) of G is the bipartite graph whose vertices are the nonempty interfaces of G and in which, for every edge (u, v) ∈ E(G), the out-interface of u is joined with the in-interface of v. Clearly, there is a one-to-one correspondence between the interfaces of G and the vertices of I(G), and the degree of a vertex in I(G) is the size of the corresponding interface. Moreover, if W is a switching walk in G with a vertex sequence v_0, …, v_k and interface sequence I_0, …, I_k, then W corresponds to a simple path in I(G) with endpoints of degree different from 2 and all inner vertices of degree 2. Similarly, a switching circuit corresponds to a simple cycle in I(G) with all vertices of degree 2, i.e., it forms a connected component of I(G). Observe that, both in the case of the path and of the cycle above, their edges are exactly the edges of I(G) corresponding to the edges of W. Using the equivalence described in this paragraph, the following lemma is immediate.

Lemma 3.2.

Edges of every digraph can be uniquely partitioned into switching walks and circuits. Moreover, the partition can be computed in linear time.

Proof.

Let G be a digraph. Recall that, by the definition of I(G), there is a one-to-one correspondence between the edges of G and the edges of I(G). It is clear that the edges of I(G) can be uniquely partitioned into (1) cycles with all vertices of degree 2 and (2) maximal paths with both endpoints of degree different from 2 and all inner vertices of degree 2. The corresponding switching circuits and switching walks form the desired partition of E(G). An algorithm which constructs the partition in linear time is straightforward. ∎
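The following Python sketch (our own naming and representation, a sketch under the assumptions of the proof above) builds the interface graph and extracts the decomposition: it walks through interfaces of degree 2 starting from interfaces of any other degree, and the leftover edges form the circuits.

from collections import defaultdict

def switching_decomposition(n, out_neighbours):
    """Partition the edges of a digraph into switching walks and circuits
    (in the spirit of Lemma 3.2), via the interface graph.

    `out_neighbours[u]` is an iterable of out-neighbours of vertex u.
    Returns (walks, circuits), each a list of lists of digraph edges (u, v).
    Every interface of degree different from 2 is treated as a walk endpoint.
    """
    # Interface graph: node ("out", u) -- node ("in", v) for every edge (u, v).
    incident = defaultdict(list)          # interface -> list of incident digraph edges
    for u in range(n):
        for v in out_neighbours[u]:
            incident[("out", u)].append((u, v))
            incident[("in", v)].append((u, v))

    def other_end(edge, node):
        u, v = edge
        return ("in", v) if node == ("out", u) else ("out", u)

    used = set()
    walks, circuits = [], []

    def follow(start_node, first_edge):
        """Walk from `start_node` along `first_edge` through interfaces of
        degree 2 until an interface of another degree (or the start) is hit."""
        segment, node, edge = [], start_node, first_edge
        while True:
            used.add(edge)
            segment.append(edge)
            node = other_end(edge, node)
            if len(incident[node]) != 2:
                return segment            # reached a walk endpoint
            nxt = [e for e in incident[node] if e != edge][0]
            if nxt in used:
                return segment            # closed up: a circuit
            edge = nxt

    # Switching walks start (and end) at interfaces of degree different from 2.
    for node, edges in incident.items():
        if len(edges) != 2:
            for e in edges:
                if e not in used:
                    walks.append(follow(node, e))
    # Everything left lies on interfaces of degree 2 only: switching circuits.
    for node, edges in incident.items():
        for e in edges:
            if e not in used:
                circuits.append(follow(node, e))
    return walks, circuits

# Usage: a 4-cycle 0 -> 1 -> 2 -> 3 -> 0 plus the chord 0 -> 2.
walks, circuits = switching_decomposition(4, [[1, 2], [2], [3], [0]])
print(len(walks), len(circuits))   # 3 0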

Another view on Corollary 2.9.  The run of the algorithm from Corollary 2.9 can be interpreted using the introduced notions as follows. We apply Lemma 3.2 to partition E(G) into switching walks and circuits. If there is a sufficiently long switching circuit C, we guess the intersection of C with a hypothetical Hamiltonian cycle in G. By Lemma 3.1 there are only two possibilities for this intersection (in both cases it consists of roughly half of the edges of C). We consider both cases by marking the chosen edges as forced and recursively calling the algorithm on the remaining graph. If every switching circuit is short, it turns out that the remaining instance can be solved by finding a minimum spanning tree in an auxiliary graph (see [Eppstein:cubic] for the details). Hence, the size of the recursion tree of this algorithm can be bounded by 2^{n/3}.

3.2 Branching subroutine

Let us consider a digraph G. By n_i we will denote the number of vertices of G with outdegree equal to i. Let k be the number of vertices of G with outdegree at least 3, and let d_1, …, d_k be the sequence of their outdegrees. Then, let us denote the sum of these outdegrees by Σ_out. An analogous sum for indegrees will be denoted by Σ_in. Note that if G has no vertex of out- or indegree 0, then the handshaking lemma gives a corresponding relation between these quantities.

Theorem 3.3.

ATSP can be solved in time and polynomial space, where .

Proof.
Input: G – a digraph on n vertices,
w – a weight function on the edges of G
Output: the minimum weight of a Hamiltonian cycle in G,
or ∞ if there is no such cycle
Function Contract(G, e), where e = (u, v):
        Let G' be G with the edges of the form (u, x) for x ≠ v and (y, v) for y ≠ u removed, with the vertices u and v contracted into a single vertex, and with the weights inherited from G appropriately; return G'
Function Solve(G):
        if G has exactly two vertices u and v then
                return w(u, v) + w(v, u) if both edges exist, or ∞ otherwise
        if there is an empty interface in G, i.e. a vertex of out- or indegree 0, then
                return ∞
        if there is an interface of size 1, consisting of a single edge e, then
                return Solve(Contract(G, e))
        Use Lemma 3.2 to partition E(G) into switching walks and circuits
        if there is a switching walk which begins and ends at the same interface then
                return Solve(G with the edges … of this walk removed)
        if there is a switching walk of even length then
                Let … return …
        if there is no interface of size at least 3 then