 # Deterministic "Snakes and Ladders" Heuristic for the Hamiltonian Cycle Problem

We present a polynomial-complexity, deterministic heuristic for solving the Hamiltonian Cycle Problem (HCP) in an undirected graph of order n. Although finding a Hamiltonian cycle is not theoretically guaranteed, we have observed that the heuristic is successful even in cases where such cycles are extremely rare, and it also performs very well on all HCP instances of large graphs listed on the TSPLIB web page. The heuristic owes its name to a visualisation of its iterations. All vertices of the graph are placed on a given circle in some order. The graph's edges are classified as either snakes or ladders, with snakes forming arcs of the circle and ladders forming its chords. The heuristic strives to place exactly n snakes on the circle, thereby forming a Hamiltonian cycle. The Snakes and Ladders Heuristic (SLH) uses transformations inspired by k-opt algorithms, such as the now-classical Lin–Kernighan heuristic, to reorder the vertices on the circle so as to transform some ladders into snakes and vice versa. A suitable stopping criterion ensures that the heuristic terminates in polynomial time: it stops if no improvement is made within n^3 major iterations.

## 1 Introduction

The essence of the Hamiltonian Cycle Problem (HCP, for short) is contained in the following, deceptively simple statement: given a graph G, find a Hamiltonian cycle (HC), that is, a simple cycle that contains all vertices of the graph, or prove that such a cycle does not exist. A graph is said to be Hamiltonian if it contains at least one Hamiltonian cycle, and non-Hamiltonian otherwise. (The name stems from Sir William Hamilton's investigations of such cycles on the dodecahedron graph around 1856; however, Leonhard Euler studied the famous "knight's tour" on a chessboard as early as 1759.)

Henceforth, a graph of order n will mean an n-vertex graph which is both simple (without self-loops or multiple edges) and undirected (every edge admits two-way traffic).

The HCP is known to be NP-complete and has become a challenge that attracts mathematical minds both in its own right and because of its close relationship to the famous Travelling Salesman Problem (TSP). An efficient solution of the latter would have an enormous impact in operations research, optimisation and computer science. However, the TSP is merely the problem of identifying a Hamiltonian cycle (also called a “tour”) of minimal length, where each edge has an associated length (or weight), and the length of the cycle is the sum of the lengths of edges comprising that cycle. The latter constitutes a simple, linear, objective function and it can be argued that much of the difficulty of the TSP is embedded in the HCP, namely, in finding an optimal tour in the space of Hamiltonian cycles.

There is now an extensive body of literature devoted to the TSP and its many variants. Besides theoretical developments there are also many exact algorithms and heuristics; the reader is referred to the comprehensive books by Lawler et al [Lawl85Traveling] and by Gutin and Punnen [Gutin]. For many researchers, the solution of the HCP becomes a simple corollary of the TSP although, in principle, the former need not require the minimisation of any objective function.

Some of the most successful heuristics for solving the TSP are local search methods that exploit the so-called "k-opt" transformations, which facilitate movement from one tour to a shorter tour via an exchange of exactly k edges. The notion of a k-opt transformation is widely attributed to Flood [Flood]. Subsequently, Lin [Lin] and Lin and Kernighan (L–K) [LK] developed powerful heuristics that exploited k-opt transformations more extensively. Indeed, L–K is still embedded in some of the best modern heuristics for the TSP, notably Helsgaun's LKH [Helsgaun]. For a comprehensive discussion of modern developments we refer the reader to Applegate et al [ConcordeBook].

The Snakes and Ladders Heuristic (SLH) for the Hamiltonian Cycle Problem presented here is a polynomial-complexity algorithm inspired by, but distinctly different from, the k-opt heuristics. The name comes from our visual representation of the iterations of the algorithm, described in detail in the next section.

In order to attempt to find a Hamiltonian cycle of a given graph G, we place its vertices in some order on a circle. Any edges between adjacent vertices on the circle are rendered as arcs of the circle (which we call snakes), while all other edges are rendered as chords of the circle (which we call ladders). If two adjacent vertices on the circle have no snake between them, we say there is a gap between them. SLH attempts to place all edges of a Hamiltonian cycle on the circle by a number of transformations that are isomorphisms of the underlying graph. While some, but not all, of these isomorphisms coincide with k-opt transformations, the main difference compared to k-opt search methods is that our approach does not require an improvement of a TSP-type objective function; rather, SLH seeks changes in the arrangement of vertices of the graph on the circle with the goal of facilitating the eventual closure of gaps. We might say that in SLH, k-opt transformations have been generalised, in an appropriate way, to k-change transformations.

Whereas a k-opt method attempts to "improve" after each iteration by decreasing the number of gaps until a Hamiltonian cycle is obtained, SLH uses a more general notion of "improvement" by trying to achieve a suitable balance between increasing the number of Hamiltonian cycle edges on the circle and decreasing the number of gaps. As a consequence, SLH differs from standard k-opt heuristics in three important ways.

1. SLH performs a sequence of compositions of two simple generator operations: gamma (γ) and kappa (ϰ). The sequence so constructed often results in transformations that, under k-opt heuristics, would not be allowed or would be difficult to identify. We later prove that all Hamiltonian cycles in any given graph of order n are reachable through the use of these two operations, with the number of operations required bounded above by a linear function of n.

2. The k-opt heuristics update the tour only when an "improvement" is found, while SLH allows floating and opening transformations that may result in either no improvement or a "sacrifice", respectively.

3. The k-opt heuristics rely on randomisation techniques to obtain a Hamiltonian cycle, while SLH does not take advantage of these techniques and is designed to run on any initial input arrangement in a deterministic fashion.

As indicated in item 3, we implement SLH as a deterministic heuristic. That is, for a given graph and starting ordering, the heuristic will produce the same output every time it is run. A stopping condition is chosen to ensure that SLH terminates in polynomial time, either by identifying a Hamiltonian cycle or by failing to improve after n^3 iterations. SLH has been implemented in C++. Although SLH is not guaranteed to find a Hamiltonian cycle in a Hamiltonian graph, preliminary experiments on many graphs (not exceeding 5000 vertices) have succeeded in all cases. That is, a Hamiltonian cycle has been found in all graphs that were Hamiltonian, while termination declaring the graph to be "likely non-Hamiltonian" was reached in all instances of graphs known, a priori, not to possess any Hamiltonian cycles.

This paper is organised as follows: Section 2 introduces the basic idea behind our approach, the transformations of SLH and our algorithm implementation. Section 3 gives a plausible explanation of why SLH is effective in finding Hamiltonian cycles. Section 4 reports on some of the experiments performed with the algorithm, as well as a comparison with well-known TSP solvers. Finally, Section 5 presents our conclusions, including a link to our website, http://fhcp.edu.au/slhweb/ [SLH], where readers are invited to test SLH on either built-in or user-supplied problems.

## 2 Description of the algorithm

We start by introducing the terminology that we use in this paper. Then we will describe the transformations that are used in our implementation of SLH. Finally, we discuss our implementation of the SLH algorithm.

### 2.1 Basic idea

In our approach, we place the vertices of a simple undirected graph G on a circle in some order. Then, all edges of G between adjacent vertices on the circle are represented as arcs of the circle, which we call snakes, while the other edges are represented as chords of the circle, which we call ladders.

The arrangements of vertices on the circle form natural equivalence classes. Namely, two arrangements are said to be equivalent if either one can be transformed into the other via a rotation (clockwise or anti-clockwise), a reversal of the ordering, or a composition of both. This implies that two arrangements are equivalent if and only if all vertices have the same neighbours in both arrangements. For example, the arrangement (1,3,4,6,2,5) is equivalent to (3,4,6,2,5,1) and (5,2,6,4,3,1), but not to (1,3,4,6,5,2).
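Since equivalence is just invariance under rotation and reversal, it can be checked mechanically. The following sketch is our own illustration (the function names `canonical` and `equivalent` are not from the paper): it canonicalises a circular arrangement by taking the lexicographically smallest rotation of either the arrangement or its mirror image.

```python
def canonical(arrangement):
    """Canonical representative of a circular arrangement under
    rotation and reversal: the lexicographically smallest rotation
    of either the arrangement or its mirror image."""
    seq = list(arrangement)
    n = len(seq)
    candidates = [tuple(s[i:] + s[:i])          # all rotations ...
                  for s in (seq, seq[::-1])     # ... of both directions
                  for i in range(n)]
    return min(candidates)

def equivalent(a, b):
    """Two arrangements are equivalent iff every vertex has the same
    two circle-neighbours in both."""
    return canonical(a) == canonical(b)
```

For instance, `equivalent((1,3,4,6,2,5), (3,4,6,2,5,1))` holds because the second arrangement is a rotation of the first.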

It is clear that any member of the equivalence class, for a fixed graph G, contains the same snakes and ladders as any other member of this class. We use the term ordering, or circle ordering, to denote such an equivalence class, and give it the symbol C_G. However, since the algorithm and all definitions in this paper are given for a fixed graph, we henceforth drop the subscript G. For a given ordering C, if two vertices a and b that are adjacent on the circle are not connected by a snake, we say that (a, b) is a gap on C. When no confusion is possible, we will also use the term ordering to denote a particular member of the equivalence class, and use special notation to describe the ordering. For any adjacent pair of vertices, there are three distinct possibilities: there may be a snake between them, a gap between them, or there may be either a gap or a snake. Each of these situations is denoted by its own symbol in our ordering notation. This notation allows us to define transformations that require the presence of particular snakes or gaps in certain parts of the ordering, but remain defined regardless of what is present elsewhere in the ordering. In such notation, one vertex is viewed as the initial vertex and another as the final vertex; a vertex may immediately follow another, or some number (possibly zero) of additional vertices may lie between them before the circle is closed. When denoting an ordering, the initial vertex is repeated at the end; equivalent notation without the initial vertex repeated refers to a segment of the ordering. For the sake of clarity, we can choose to view a given ordering as an ordered set of segments. For example, given an ordering (a, x_1, …, x_p, b, y_1, …, y_q, a), we can define segments S_1 = (x_1, …, x_p) and S_2 = (y_1, …, y_q) and rewrite the ordering as (a, S_1, b, S_2, a). We additionally define −S to be the reverse of segment S, so that in the previous example −S_1 = (x_p, …, x_1).
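The terminology above can be made concrete with a small sketch (our own illustration; the representation of edges as two-element frozensets is an assumption, not the paper's):

```python
def classify(ordering, edges):
    """Split a graph's edges, relative to a circle ordering, into
    snakes (edges between circle-adjacent vertices), ladders (all
    other edges) and gaps (circle-adjacent pairs with no edge)."""
    n = len(ordering)
    adjacent = {frozenset((ordering[i], ordering[(i + 1) % n]))
                for i in range(n)}
    snakes = adjacent & edges
    ladders = edges - snakes
    gaps = adjacent - edges
    return snakes, ladders, gaps
```

An ordering corresponds to a Hamiltonian cycle exactly when `gaps` is empty, i.e. when all n circle-adjacent pairs are snakes.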

We note that a Hamiltonian cycle corresponds to an ordering containing exactly n snakes or, equivalently, an ordering with no gaps. Hence, the ultimate goal for SLH is to close all the gaps on the ordering through a series of transformations. These transformations can be described as compositions of two generator transformations, specifically the following two isomorphisms:

1. γ isomorphism. This transformation, denoted by γ, maps an ordering (a, b, S_1, x, S_2, a) to the ordering (a, x, −S_1, b, S_2, a); that is, it reverses the segment of the circle between b and x. The definition implies that γ is only defined for orderings in which a and b are adjacent on the ordering. Vertex y is implicitly defined as being the vertex adjacent to x on the segment S_2.

2. ϰ isomorphism. This transformation, denoted by ϰ, maps an ordering (a, S_1, b, S_2, c, S_3, a) to the ordering obtained by excising one of these segments and reinserting it elsewhere on the circle. The definition implies that ϰ is only defined for orderings in which the excised segment does not contain the initial vertex a. The definition allows each of the segments S_1, S_2 and S_3 to contain a single vertex, and also allows a segment to be empty (i.e. so that its two bounding vertices are directly adjacent on the ordering).
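The vertex labels in the two definitions above did not survive typesetting, so the following is only a rough sketch under our own reading: γ reverses one segment of the circle, while ϰ excises a segment and reinserts it elsewhere. Both act purely on positions (the position arguments stand in for the paper's named vertices) and merely permute the vertices, so they are isomorphisms of the underlying graph:

```python
def gamma(ordering, i, j):
    """Hypothetical sketch of the gamma generator: reverse the segment
    of the ordering between positions i and j (inclusive)."""
    return ordering[:i] + ordering[i:j + 1][::-1] + ordering[j + 1:]

def kappa(ordering, i, j, k):
    """Hypothetical sketch of the kappa generator: excise the segment
    at positions [i..j] and reinsert it before position k (with k
    lying outside the excised segment)."""
    seg = ordering[i:j + 1]
    rest = ordering[:i] + ordering[j + 1:]
    k_adj = k - len(seg) if k > j else k   # re-index after excision
    return rest[:k_adj] + seg + rest[k_adj:]
```

Note that a single segment reversal is the familiar 2-opt edge exchange.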

Note that, although the figures for the generator transformations include ladders and snakes, these need not be present for the isomorphisms to be performed.

Later, in Section 3, we will show that for any initial ordering C and any Hamiltonian ordering C_H, that is, an ordering corresponding to a Hamiltonian cycle, there exists a transformation T from C to C_H that is a composition of a number of γ and ϰ transformations in some order. Namely,

 T = ∏_{i=1}^{ℓ} γ^{ε_i} ∘ ϰ^{δ_i},

where the exponents ε_i and δ_i can take values 0 or 1, and ℓ bounds the number of such transformations. In fact, our algorithm attempts to iteratively build such a transformation to some unknown Hamiltonian ordering as a composition of SLH transforms that are themselves specially designed combinations of γ and ϰ transformations. To ensure that our algorithm is polynomially bounded, we choose a stopping criterion that halts the heuristic if no improvement is made within n^3 major iterations.

Note that T transforms the initial ordering to an ordering with zero gaps. Therefore, SLH transformations that reduce the number of gaps are viewed as desirable. Such transformations are widely used in TSP algorithms and are called k-opt exchanges. Adapted to the HCP, they improve the tour by exchanging ladders with gaps and snakes (k edges in total) in such a way that the new tour contains fewer gaps. Even though SLH incorporates k-opt transformations, it also incorporates transformations that may preserve the weight of the tour (i.e. keep the same number of gaps) or even increase the weight of the tour (i.e. increase the number of gaps). We call these flo transformations, to emphasise their indeterminate (floating) nature. Unlike k-opt transformations, flo transformations are defined for situations where it is not known, in advance, whether certain gaps or ladders are present. Specifically, a k-flo transformation is one in which as many as k ladders may be turned into snakes, if they exist. Then, if it so happens that some of these ladders and/or gaps exist, the transformation might produce a new ordering with a reduced number of gaps. In such a case, the k-flo transformation specialises to a j-opt transformation for some j ≤ k.

### 2.2 Operations

In our implementation, we found it useful to consider special combinations of γ and ϰ that proved the most effective among the many alternatives we tried. These can be partitioned into three types: closing, floating and opening. In each case, certain ladders, snakes and gaps are required to be either present or absent for the combination of generator transformations to produce the transformation.

SLH–closing transformations: These correspond to particular 2–opt and 3–opt transformations:

1. The SLH closing 2-opt type 1 transformation maps an ordering, in which one specified edge is a ladder, to an ordering with strictly fewer gaps.

2. The SLH closing 2-opt type 2 transformation maps an ordering, in which two specified edges are ladders, to an ordering with strictly fewer gaps.

3. The SLH closing 3-opt transformation maps an ordering, in which three specified edges are ladders, to an ordering with strictly fewer gaps.

SLH–floating transformations: These are special ladder–snake interchange transformations that generate new orderings, where the number of gaps is either unchanged, or possibly reduced (if certain ladders, gaps, or both are present in the first ordering). Ladders which (if they exist) will be turned into snakes after the transformation are represented by dashed lines in the first ordering, with the corresponding snake also represented by a dashed line in the second ordering. Of course, if such a ladder does not exist then the corresponding dashed line in the second ordering represents a gap. In particular:

1. The SLH floating –flo transformation maps an ordering, where one specified edge is a ladder, to a new ordering. If a certain further edge is also a ladder, this transformation closes at least one gap.

2. The SLH floating –flo transformation maps an ordering, where one specified edge and at least one of two further edges are ladders, to a new ordering. If both of those further edges are ladders, this transformation closes at least one gap.

3. The SLH floating –flo type 1 transformation maps an ordering, where three specified edges are ladders, to a new ordering. If a certain further edge is also a ladder, this transformation closes at least one gap.

4. The SLH floating –flo type 2 transformation maps an ordering, where two specified edges are ladders and at least one of two further edges is a ladder, to a new ordering. If both of those further edges are ladders, this transformation closes at least one gap.

5. The SLH floating 5-flo transformation maps an ordering, where four specified edges are ladders, to a new ordering. If a certain fifth edge is also a ladder, this transformation closes at least one gap.

SLH opening transformations: Importantly, we introduce a single snake-ladder interchange transformation that generates a new ordering that generally contains one more gap.

1. SLH opening flo transformation is . It maps ordering , where and are ladders, to an ordering . In general, this transformation increases the number of gaps by 1, though this is not always the case.

Note that, in all cases, these transformations are designed to be performed for an ordering containing a particular gap. To represent this, we say that a transformation is performed "around" that gap.

### 2.3 Algorithm

SLH works in four main stages. In each stage it attempts to find a Hamiltonian cycle by a particular approach. If no Hamiltonian cycle is found within the (polynomial-complexity) constraints of that stage, SLH moves on to the next stage in order to attempt a different approach. Finally, if stage 3 has failed to reduce the number of gaps after n^3 iterations, the graph is declared to be "likely non-Hamiltonian". The following is a description of our algorithm implementation.

Stage 0.

1. Let the initial ordering be the original assignment of the graph.

2. Perform SLH-closing transformations to obtain new orderings. Continue until the number of gaps cannot be reduced by any SLH-closing transformation. If the number of gaps is zero, stop: a Hamiltonian cycle has been found.

3. Create a gap list, initially empty, and an ordering list, initially containing only the most recently obtained ordering. These two lists will act as tabu lists. Go to stage 1.

We note that stage 0 can be substituted with any effective polynomially bounded k-opt algorithm (an effective implementation of L–K, for example Helsgaun's LKH [Helsgaun], could be such an algorithm), disallowing the use of any randomisation techniques to retain the deterministic nature of SLH. The final ordering provided by such an algorithm will become the initial ordering for stage 1. Using operations from a sufficiently efficient k-opt algorithm instead of our closing operations may improve the performance of SLH. However, for this proof of concept, and to emphasise the effectiveness of SLH at stages 1, 2 and 3, we chose the simplest possible closing transformations.
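As a proof of concept, the stage-0 loop can be sketched as a greedy pass that tries plain segment reversals (2-opt-style moves; a simplification of the three closing transformations) and keeps any reversal that strictly reduces the gap count:

```python
def count_gaps(ordering, edges):
    """Number of circle-adjacent pairs that are not edges of the graph."""
    n = len(ordering)
    return sum(1 for i in range(n)
               if frozenset((ordering[i], ordering[(i + 1) % n])) not in edges)

def closing_pass(ordering, edges):
    """Greedy stage-0 sketch: repeatedly try segment reversals and keep
    any that strictly reduces the number of gaps, until none does."""
    ordering = list(ordering)
    improved = True
    while improved and count_gaps(ordering, edges) > 0:
        improved = False
        n = len(ordering)
        for i in range(n):
            for j in range(i + 1, n):
                cand = ordering[:i] + ordering[i:j + 1][::-1] + ordering[j + 1:]
                if count_gaps(cand, edges) < count_gaps(ordering, edges):
                    ordering, improved = cand, True
                    break
            if improved:
                break
    return ordering
```

On the 5-cycle 0–1–2–3–4–0 started from the ordering [0, 2, 1, 3, 4], a single accepted reversal already closes both gaps.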

Stage 1.

1. Perform any SLH-floating transformation around a gap on the latest ordering. Save that gap in the gap list and continue as follows:

1. If the new ordering contains a gap that is not in the gap list, add the ordering to the ordering list and proceed to stage 1.2.

2. If the new ordering contains only gaps that are in the gap list, repeat stage 1.1 with a different SLH-floating transformation on the previous ordering. If every SLH-floating transformation produces an ordering that contains only gaps already in the gap list, then revert to an earlier ordering, and continue (perhaps needing to revert to an even earlier ordering) until a new SLH-floating transformation can be performed that does produce an ordering containing a gap that is not in the gap list. Then, add the ordering to the ordering list and proceed to stage 1.2.

3. If no SLH-floating transformation on any ordering in the ordering list produces an ordering with no gaps in the gap list, proceed to stage 2.

2. If the number of gaps is less than in the previous iteration, then remove all orderings from the ordering list, and all gaps from the gap list. If a Hamiltonian cycle is found, stop.

Note that, according to the stopping rule, if an SLH-floating transformation creates only gaps that have already been obtained by previous transformations, the transformation will not be performed and the ordering will not be listed. Thus, the stopping rule bounds the number of orderings in the ordering list at this stage. Once each ordering in the ordering list has been considered, we proceed to stage 2.
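The tabu bookkeeping of stages 1.1–1.2 can be sketched as follows. This is a simplified reading: we record every gap of an accepted ordering rather than only the gap the transformation was performed around, and represent an ordering as a tuple so it can be stored in the ordering list:

```python
def accept_floating(new_ordering, edges, gap_list, ordering_list):
    """Stage-1 acceptance sketch: a floating move is kept only if the
    resulting ordering has not been seen before and exhibits at least
    one gap not yet in the (tabu) gap list.  On acceptance, both lists
    are updated in place."""
    n = len(new_ordering)
    new_gaps = {frozenset((new_ordering[i], new_ordering[(i + 1) % n]))
                for i in range(n)} - edges
    if new_ordering in ordering_list:
        return False                     # already-listed ordering
    if not (new_gaps - gap_list):
        return False                     # no gap outside the tabu list
    gap_list |= new_gaps
    ordering_list.append(new_ordering)
    return True
```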

Stage 2.

1. Record the most recently obtained ordering and its number of gaps.

2. Perform an SLH-opening transformation from a gap in and save the ordering so obtained in the ordering list.

3. Repeat stage 1 until no gap can be closed. If the number of gaps is zero, stop: a Hamiltonian cycle has been found.

4. If the number of gaps in the most recent ordering is less than the number recorded in stage 2.1, empty the gap list, remove all orderings except the most recent ordering from the ordering list, and return to stage 1.

5. Recover the ordering recorded in stage 2.1 and return to stage 2.2, but perform a different SLH-opening transformation. If all possible SLH-opening transformations from that ordering have been previously considered, go to stage 3.

Stage 3.

1. Record the most recently obtained ordering.

2. Perform an opening transformation from any gap in the recorded ordering and add the resulting ordering to the ordering list. Then, attempt to perform k-opt transformations (floating transformations that reduce the number of gaps), without obtaining already listed orderings, saving each new ordering in the ordering list. Continue until no more gaps can be closed. If the number of gaps is zero, stop: a Hamiltonian cycle has been found. If at any stage the number of orderings in the ordering list exceeds n^3, go to stage 3.5.

3. If the number of gaps in the most recently obtained ordering is less than the number recorded in stage 2.1, empty the gap list, remove all orderings except the most recent ordering from the ordering list, and return to stage 1. Otherwise, revert to the most recently obtained ordering, descending from the ordering recorded in stage 3.1, in which k-opt transformations that have not yet been tried are possible. Continue attempting to perform k-opt transformations without obtaining already listed orderings, saving each new ordering in the ordering list.

4. If all possible k-opt transformations in each previous ordering descending from the ordering recorded in stage 3.1 have already been tried, return to stage 3.1.

5. If the number of orderings in the ordering list is equal to n^3, and the number of gaps is greater than zero in all of them, declare the graph to be "likely non-Hamiltonian" and stop.

Note that in the stages above where a transformation is selected, there may be multiple eligible transformations. The algorithm searches for applicable transformations and performs the first eligible transformation it discovers. If the transformations can be performed from any gap, the gaps are examined in the order in which they appear on the ordering. This ensures that the above process is deterministic, with the order of transformations determined entirely by the initial assignment of the graph.

### 2.4 Worst-case algorithmic complexity

In the algorithm implementation above, stage 3 has the largest search space and the stopping condition with the largest bound, so in the worst case it dominates the execution time. In each iteration of stage 3 we allow as many opening transformations as are necessary to enable a floating transformation to close a gap; there is then a sequence of such floating transformations that close gaps. If Δ is the maximum degree of the graph, then there are O(nΔ^4) potential floating transformations at each step of the sequence. This count arises because the SLH floating 5-flo transformation requires four edges to be chosen, the first emanating from a vertex next to a gap, and each subsequent edge emanating from a vertex determined by the previous edge; there could be at most n gaps, and at most Δ edges from each vertex. Then, to determine whether such a transformation produces an ordering not already in the ordering list, we need to search the latter. There could be up to n^3 orderings stored in an index set, so a binary search over the set is an O(log n) process, and comparing two orderings is an O(n) process. There are at most n^3 such iterations in stage 3. Theoretically, stages 1–3 could be repeated O(n) times, as the requirement for returning from stage 3.3 to stage 1 is only that the number of gaps (at most n) has decreased by at least 1. Combining these counts, the worst-case complexity of the algorithm detailed above is O(n^6 Δ^4 log n); for a sparse graph, where Δ is O(1), this reduces to O(n^6 log n). Other operations (such as adding the newly constructed ordering to the ordering list) are dominated by the time taken to search for the transformations in stage 3.

Our experience indicates, however, that this worst-case complexity is very unlikely to be encountered in practice. Experimentally, we have seen that when SLH reaches stage 3 the number of gaps is very close to the minimal possible number, so stages 1–3 only need to be repeated a handful of times. Also, the likelihood of needing to open many gaps before one can be closed in stage 3 is extremely small, so even when a non-Hamiltonian graph is submitted to SLH the observed running time falls far below the worst-case bound. Furthermore, in our experiments we have seen that almost all Hamiltonian graphs are solved without ever needing to reach stage 2. In such cases the cost is dominated by stage 1: the number of iterations there is polynomially bounded, at each iteration we search from a prescribed gap among O(Δ^4) choices of floating transformations, and after each iteration we record the new ordering, which is an O(n) process.

## 3 Motivation

It is worth mentioning that the performance of SLH has exceeded our expectations: SLH correctly solved all graph instances that we tried, which contained up to 5000 vertices, in an entirely deterministic way. These instances included several particularly hard graphs, such as generalized Petersen (GP) cubic graphs that contain only 3 Hamiltonian cycles, clique graphs, leap graphs, TSPLIB graphs and many others. That is, we have not yet encountered a Hamiltonian graph where SLH fails to find a Hamiltonian cycle before the invocation of the stopping rules. Moreover, SLH works with the given arrangement of the initial ordering, without benefiting from randomisation techniques or preprocessing.

The effectiveness of SLH can arguably be justified by our new perspective on what it means to "improve" after each iteration. Let us suppose that an undirected graph is represented by the current ordering C, which contains g(C) gaps, and that a Hamiltonian ordering C_H corresponding to a Hamiltonian cycle is known. Also, assume that there are k snakes on C that are edges of the Hamiltonian cycle (or, equivalently, k snakes in C_H). We define the distance between C and C_H as

 dist(C, C_H) = n − k.

The difference between C and C_H is then defined as

 Δ(C, C_H) = g(C)/3 + dist(C, C_H).

In our perspective, a transformation that maps ordering C to ordering C′ improves the tour if Δ(C′, C_H) < Δ(C, C_H) and hence constitutes a desirable transformation. Specifically, by applying such transformations iteratively, we ensure that C_H will eventually be obtained. We now show that from any ordering there exists a γ or ϰ transformation, or a composition of the two, that, in our sense, improves the tour. This approach is in contrast with k-opt algorithms (adapted for the HCP), where the focus is solely on reducing the number of gaps. We argue that reducing the number of gaps is not a sufficient measure of improvement, because closing gaps alone might not bring us any closer to C_H; indeed, the distance dist(C′, C_H) might increase compared to dist(C, C_H). Thus, in situations where the distance is large, k-opt algorithms might struggle to find a Hamiltonian cycle, even if the number of gaps is small. In Section 4 we present examples where the k-opt approach quickly reduces the number of gaps to 1 but fails to find any Hamiltonian cycle, arguably because the distance from any Hamiltonian ordering to the current ordering with 1 gap is quite large. By contrast, reducing the distance, without controlling the number of gaps, is enough to converge to a Hamiltonian ordering. However, since we do not know the Hamiltonian cycle, we cannot measure the distance. Hence, reducing the number of gaps in an ordering is merely a pragmatic, surrogate objective. Nonetheless, SLH is willing to sacrifice the latter to move, "laterally", to a possibly better ordering.
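When a Hamiltonian cycle C_H is known (which SLH itself never assumes; the point of the surrogate objective is precisely that dist is unobservable), the two quantities above are straightforward to compute, as in this illustrative sketch:

```python
def delta(ordering, edges, hc_edges):
    """Delta(C, C_H) = g(C)/3 + dist(C, C_H), with dist(C, C_H) = n - k,
    where k counts snakes of the ordering C that are edges of the known
    Hamiltonian cycle C_H, and g(C) counts the gaps of C."""
    n = len(ordering)
    pairs = {frozenset((ordering[i], ordering[(i + 1) % n]))
             for i in range(n)}
    k = len(pairs & edges & hc_edges)   # snakes that are HC edges
    g = len(pairs - edges)              # gaps
    return g / 3 + (n - k)
```

A transformation mapping C to C′ improves the tour precisely when it decreases this value; Δ(C, C_H) = 0 holds exactly when C is the Hamiltonian ordering C_H.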

We now proceed to the argument that γ and ϰ transformations (or a composition of the two) suffice to transform an ordering C into another ordering C′ such that Δ(C′, C_H) < Δ(C, C_H).

More precisely, we show that dist(C′, C_H) ≤ dist(C, C_H) − 1 and that g(C′) ≤ g(C) + 2, so that Δ(C′, C_H) ≤ Δ(C, C_H) − 1/3.

Figure 12: Options for a Hamiltonian cycle containing edge (x,a) constituting a ladder in a given ordering.

Suppose that g(C) > 0 and that C contains a gap (a, b). Then at least one edge of C_H incident to a must be a ladder in C, say (x, a), emanating from a. There are three options for the other edge, say (x, w), with endpoint x on the Hamiltonian cycle C_H:

• (x, w) is a snake in C, and the segment of the circle from x to a that avoids b does not contain w;

• (x, w) is a snake in C, and the segment of the circle from x to a that avoids b contains w;

• (x, w) is a ladder in C.

In situations (i) and (iii), choosing γ results in an ordering C′ such that Δ(C′, C_H) < Δ(C, C_H), because dist(C′, C_H) ≤ dist(C, C_H) − 1. This is because the ladder (x, a), which is in C_H, becomes a snake, whereas the snake broken by the reversal, if it exists, becomes a ladder that is not in C_H. Also, the number of gaps in C′ does not increase compared to C, though it could decrease if certain further edges happen to be ladders, or certain adjacent pairs happen to be gaps.

In situation (ii), C_H continues from vertex x to some vertex on one segment of the circle and then moves to a vertex on the other segment. There are now three possibilities.

Figure 13: From left to right, Hamiltonian cycle containing edge (b,r), (b,d) and (b,s) respectively.

1. Edge (b, r) is in C_H. In this case we apply a transform that maps C into some C′. The number of gaps will not increase by more than 2. However, C′ contains two new snakes that are in C_H, while losing one former snake of C that may be an edge in C_H. Every other snake that becomes a ladder on C′ does not belong to C_H, and therefore its absence from the circle does not increase dist. So, overall, the number of snakes of C′ that are in C_H must grow by at least 1 compared to C.

2. Edge (b, d) is in C_H. In this case we apply the transform ϰ, which maps C into C′. Again, the number of gaps cannot increase by more than 2. On the other hand, C′ contains, as snakes, two former ladders that are edges of C_H, while one former snake, which may be an edge of C_H, becomes a ladder in C′. Every other former snake that becomes a ladder on C′ does not belong to C_H, and therefore does not contribute to dist. So, overall, the number of snakes of C′ that are in C_H must grow by at least 1 compared to C.

3. Some edge (b, s) is in C_H despite being a ladder in C. In this case the transformation turns the ladders (x, a) and (b, s), both of which are in C_H, into snakes of C′. One former snake becomes a ladder in C′ and may be in C_H; all other former snakes that become ladders in C′ are not in C_H. So, overall, the number of snakes of C′ that are in C_H must grow by at least 1 compared to C.

In all three cases we see that dist(C′, C_H) ≤ dist(C, C_H) − 1 and g(C′) ≤ g(C) + 2, and therefore Δ(C′, C_H) < Δ(C, C_H), even though the transformation that maps C to C′ might increase the number of gaps.

Suppose now that g(C) = 0, that is, C corresponds to a Hamiltonian cycle different from C_H. Then C_H must contain an edge, say (x, a), that is a ladder in C. Then there is a vertex b, adjacent to a in C, such that the snake (a, b) is not in C_H. If we treat (a, b) as if it were a gap (since we are not concerned if it is transformed into a ladder), and apply exactly those transformations described above in the corresponding situation, we improve the distance by at least 1 in every case and may create at most 2 gaps if ϰ, or a composition of γ and ϰ, is applied. Therefore, the overall improvement in the function Δ will be at least 1/3, just as in the case where (a, b) is a genuine gap.

The above argument implies that, as mentioned in the introduction, for every ordering C and any Hamiltonian ordering C_H, there exists a transformation T mapping C to C_H, such that

 T = ∏_{i=1}^{ℓ} γ^{ε_i} ∘ ϰ^{δ_i},

where and can take values or , and bounds the number of transformations. At any stage of the process, there exists at least one transformation that can decrease by at least . Performing the correct sequence of these transformations in the process of constructing ensures convergence to in no more than transformations.
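The quantity being driven to zero by this process can be made concrete with a short sketch. Following the description in the abstract, snakes are edges between consecutive vertices on the circle and gaps are consecutive pairs that are not edges; a Hamiltonian ordering is one with zero gaps. The function name and adjacency representation below are illustrative, not the paper's:

```python
# Sketch of the objective SLH drives to zero: the number of "gaps" in a
# circular ordering, i.e. consecutive vertex pairs on the circle that are
# not edges of the graph.  A Hamiltonian ordering has zero gaps.
# (count_gaps and the dict-of-sets adjacency are illustrative names,
# not taken from the paper.)

def count_gaps(ordering, adj):
    """Count consecutive circle pairs (including wrap-around) that are non-edges."""
    n = len(ordering)
    return sum(1 for i in range(n)
               if ordering[(i + 1) % n] not in adj[ordering[i]])

# Example: the 4-cycle 0-1-2-3-0.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
count_gaps([0, 1, 2, 3], adj)  # 0 gaps: a Hamiltonian ordering
count_gaps([0, 2, 1, 3], adj)  # 2 gaps: pairs (0,2) and (1,3) are non-edges
```
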

Of course, in our implementation of the algorithm, we do not know a Hamiltonian cycle in advance, so we use the transformations outlined in Section 3. Each of those transformations can be represented as where , and we stop after such transformations. So, given a graph and an initial ordering, our implementation of SLH produces, in a deterministic fashion, a more specialised mapping

 T = ∏_{i=1}^{L} γ^{e_i} ∘ ϰ^{d_i},

where depend upon which transformation is performed in step , and . Then either transforms the initial ordering to a Hamiltonian ordering, or we report that the graph is “likely non-Hamiltonian”.

Figure 14: A 15-vertex graph with only one Hamiltonian cycle. The dashed lines are the edges that are not part of the Hamiltonian cycle.

Next, we use the 15-vertex example given in Figure 14 to demonstrate the contrast between SLH and the most famous k-opt heuristic: the classical Lin-Kernighan algorithm LK . The main tool for finding the optimal tour (a Hamiltonian cycle in this case) in the Lin-Kernighan algorithm is the use of sequential exchanges where an improvement is possible. In other words, consider a set of snakes and gaps and a set of ladders where the exchange of every with constitutes an improved tour (i.e. fewer gaps). A sequential exchange is one in which and share an endpoint, and so do and , for all . For the graph of Figure 14, with only one Hamiltonian cycle, an improvement is possible only if consists of the gap and all snakes that are not part of the Hamiltonian cycle, while consists of the ladders that are on the Hamiltonian cycle. This can be seen because there is only one gap, and therefore an improvement implies that the number of gaps must be zero after a –opt transformation. Therefore, a unique –opt transformation exists that can improve the tour by exchanging with on the ordering. Although it is possible to construct this –opt transformation through a sequence of sequential edge-exchange moves (which individually do not improve the tour), the classical Lin-Kernighan algorithm would be unable to do so even if unlimited backtracking were permitted (it should be noted that modern implementations of Lin-Kernighan view sequential exchanges in a more refined sense, which permits them to construct the required –opt transformation). In the words of the authors of LK , “the procedure is augmented by a limited defense in [such] situation[s]”. So Lin–Kernighan’s main technique for dealing with situations that require complicated non-sequential exchanges is randomisation, in the hope that an easier situation arises.
However, if the graph is large and the number of Hamiltonian cycles is small, randomisation might be of little help, as demonstrated in the next section with generalised Petersen graphs. On the other hand, one can check that the following SLH transformation can unfold the graph and find the Hamiltonian cycle in the graph of Figure 14.

 T = γ(v₁₃, v₉, v₁₅) ∘ γ(v₉, v₆, v₁₄) ∘ ϰ(v₅, v₈, v₇, v₁₃) ∘ γ(v₁₅, v₁₀, v₁₁) ∘ γ(v₅, v₂, v₁₂) ∘ ϰ(v₁, v₄, v₃, v₁₁).

The current implementation of SLH solves this graph in stage 1. However, more difficult instances are sometimes solved in stages 2 and 3, where opening transformations are featured. In those stages, the differences between SLH and the HCP-adapted Lin–Kernighan algorithm are best revealed.
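The sequential edge-exchange moves discussed above can be illustrated with the most basic example, a single 2-opt step: reversing a segment of the circular ordering removes two consecutive pairs and introduces two new ones. This is a generic 2-opt sketch, not the paper's SLH transforms (whose exact definitions appear in Sections 2 and 3); the function names are illustrative:

```python
# A minimal sketch of a single 2-opt exchange of the kind the Lin-Kernighan
# discussion above refers to: reverse the circle segment between positions
# i+1 and j, which replaces two consecutive pairs with two new ones.
# (Generic 2-opt, not the paper's gamma/kappa transforms.)

def two_opt_move(ordering, i, j):
    """Return a new ordering with the segment ordering[i+1..j] reversed."""
    return ordering[:i + 1] + ordering[i + 1:j + 1][::-1] + ordering[j + 1:]

def gaps(ordering, adj):
    """Number of consecutive circle pairs that are not edges of the graph."""
    n = len(ordering)
    return sum(ordering[(k + 1) % n] not in adj[ordering[k]] for k in range(n))

# Example: the 5-cycle 0-1-2-3-4-0 given in a scrambled order.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}
bad = [0, 2, 1, 3, 4]           # pairs (0,2) and (1,3) are gaps
good = two_opt_move(bad, 0, 2)  # reverse positions 1..2 -> [0, 1, 2, 3, 4]
```

Here a single move takes the ordering from two gaps to zero; the point of the surrounding discussion is that on graphs like the one in Figure 14, no such short sequence of individually improving exchanges exists.
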

## 4 Experimental results

The times reported in this section were obtained by running SLH on a Dell R210 PC (Intel(R) Xeon(R) E3-1270, 3.4GHz, 16GB RAM) running Linux CentOS 6.

We first tested SLH on several million subcubic (degree three or less) graphs containing up to 50 vertices. These tests helped us design effective stages and transformations for the algorithm. With the stages in place and the transformations selected as described in Section 2, the performance of SLH proved to be competitive with benchmark solvers, based upon experiments conducted on a variety of graphs. Specifically, we compared the performance of SLH to that of the Concorde TSP Solver Concorde , Helsgaun’s Lin-Kernighan implementation, the LKH HCP solver Helsgaun , and Eppstein’s deterministic HCP solver for cubic graphs eppstein . Though Concorde is a TSP solver, we represent instances of HCP as TSP by assigning weight to existing edges (edges of the given graph) and weight to non-existing edges. It should be noted that LKH has a built-in pre-solve phase that performs subgradient optimisation. In order to obtain a fair comparison with SLH, we have chosen to evaluate LKH both with and without this pre-solve stage. As is demonstrated below, the pre-solve is very effective on certain classes of graphs, which indicates that SLH would benefit from a similar pre-solve phase. Some experimentation with alternative parameters was performed; however, with only one exception, we did not discover any instances where altering the parameters significantly improved the solving time or reliability for any of the comparison solvers. The sole exception was in switching off the restricted search option in LKH; in this case, LKH was more reliable, but took longer to solve. For the remainder of this manuscript, default parameters are assumed. We ran the Windows/Cygwin implementation of Concorde, and Eppstein’s solver, on a Dell Optiplex 780 (Intel(R) Core(TM)2 Quad CPU Q9650, 3.0GHz, 4GB RAM) running Windows 7. We ran LKH v2.0.7 on the same Dell R210 PC used to run SLH.
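The HCP-to-TSP reduction mentioned above can be sketched briefly. The specific weights used by the authors did not survive extraction, so the common choice below (weight 0 for graph edges, weight 1 for non-edges) is an assumption; under it, a tour of total weight 0 corresponds exactly to a Hamiltonian cycle:

```python
# Sketch of encoding an HCP instance as a TSP instance, as described above.
# ASSUMPTION: the exact weights in the paper were lost in extraction; the
# common choice here gives graph edges weight 0 and non-edges weight 1,
# so an optimal tour of total weight 0 is a Hamiltonian cycle.

def hcp_to_tsp_weights(n, edges):
    """Build a symmetric n x n weight matrix from an undirected edge list."""
    w = [[1] * n for _ in range(n)]  # non-edges get weight 1
    for u, v in edges:
        w[u][v] = w[v][u] = 0        # existing edges get weight 0
    for i in range(n):
        w[i][i] = 0                  # diagonal is irrelevant for tours
    return w

# The 4-cycle 0-1-2-3-0: the tour along its edges has total weight 0.
w = hcp_to_tsp_weights(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
sum(w[a][b] for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)])  # 0
```
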

For the test graphs we selected all TSPLIB HCP data TSPLIB ; Flower snarks flower for ; Sheehan graphs sheehan for ; and generalised Petersen cubic graphs GP for . The TSPLIB graphs are benchmark instances notable for their large size (up to 5000 vertices). They are all sparse, irregular graphs, and all are Hamiltonian. The Flower snarks are a family of non-Hamiltonian cubic graphs of size that all contain Hamiltonian paths; so the optimal solution, from a TSP viewpoint, is a path of length 1. These were tested to give an indication of how quickly SLH can detect non-Hamiltonicity compared to the other solvers tested, which are not designed to deal with such instances. This is an important test, as the essential difficulty of HCP is that a Hamiltonian cycle may not exist for a given graph, and hence it is interesting to see how the algorithms perform in these cases. Sheehan graphs are maximally dense uniquely Hamiltonian graphs; that is, Hamiltonian graphs with only a single Hamiltonian cycle that contain as large a ratio of edges to vertices as possible. Finally, the generalised Petersen graphs, for the values of we chose (that is, ), are cubic graphs that contain only three Hamiltonian cycles, which is the minimum number a Hamiltonian cubic graph may contain. For the Sheehan graphs and the generalised Petersen graphs we chose a random permutation of the vertices to help disguise the inherent structure.
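For reference, the generalised Petersen graphs used in these tests follow the standard construction GP : an outer n-cycle on vertices u_0, …, u_{n-1}, spokes u_i–v_i, and inner edges v_i–v_{i+k mod n}. A minimal sketch (vertex i denotes u_i and vertex n+i denotes v_i; the function name is illustrative):

```python
# Standard construction of the generalised Petersen graph GP(n, k):
# an outer n-cycle u_0..u_{n-1}, spokes u_i -- v_i, and inner edges
# v_i -- v_{(i+k) mod n}.  Vertex i encodes u_i; vertex n+i encodes v_i.

def generalised_petersen(n, k):
    edges = set()
    for i in range(n):
        edges.add(frozenset((i, (i + 1) % n)))          # outer cycle edge
        edges.add(frozenset((i, n + i)))                # spoke
        edges.add(frozenset((n + i, n + (i + k) % n)))  # inner edge
    return edges

g = generalised_petersen(5, 2)  # GP(5, 2) is the classical Petersen graph
len(g)                          # 15 edges on 10 vertices: a cubic graph
```
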

In our tests, the success rate of the LKH algorithm – which invokes randomisation – is determined over many trials (50 trials for the Sheehan graphs, 1000 trials for all other graphs), and the execution time is taken as the average over those trials. Eppstein’s algorithm is designed only for cubic graphs and so was tested only on the cubic instances, that is, the Flower snarks and the generalised Petersen graphs. Eppstein’s algorithm finds all Hamiltonian cycles, so in the case of Hamiltonian graphs the total time was divided by the number of Hamiltonian cycles. Also note that Eppstein’s algorithm, like SLH (and unlike Concorde and LKH), is deterministic.

The results of the above comparisons are reported in Table 1. Assessment Fail means that the solver terminated without managing to find a Hamiltonian cycle despite the graph being Hamiltonian, while Time Fail means that the solver had not concluded within 48 hours. Note that, for the non-Hamiltonian instances (the Flower snarks), assessment Fail does not apply as there is no Hamiltonian cycle to find.

Table 1 reveals that the performance of SLH compares increasingly favourably as the difficulty of the graph grows. For general instances, LKH dominates both Concorde and SLH, but it struggles to solve reliably and efficiently when there is only a small number of orderings with the minimal number of gaps. Although LKH is able to solve the Sheehan graphs very quickly, it does so in its pre-solve phase. As Table 1 demonstrates, its performance on these graphs deteriorates when this phase is disabled. Of course, since SLH does not currently have any pre-solve phase, it is arguably more meaningful to compare it to LKH without pre-solve. The latter, like SLH, is essentially a k-exchange type algorithm. For the Flower snarks, which are hypohamiltonian, Concorde quickly finds a Hamiltonian path (the optimal solution) but takes much longer to determine that no Hamiltonian cycle exists.

For the two largest tested instances of the generalised Petersen graphs, SLH had to proceed to later stages, where opening transformations were performed, in order to find a Hamiltonian cycle. For these two instances, Eppstein’s algorithm was unable to provide a solution within 48 hours, and in both cases Concorde encountered a numerical error that prevented it from returning any result at all. LKH was able to find a solution to GP(123,2), but only in nine out of 1000 attempts (or eleven attempts without the pre-solve phase), and failed to produce any solutions to GP(243,2) in 1000 attempts, with or without the pre-solve phase.

For the cubic graphs, we ran Eppstein’s algorithm, which was dominated by the performance of the other three algorithms; however, it should be noted that Eppstein’s algorithm enumerates all Hamiltonian cycles, rather than terminating once one is found. The adaptation of SLH to find additional Hamiltonian cycles, after a first has been identified, is a topic for future research.

For all four solvers tested, the hardest graphs in our experiments to solve were the generalised Petersen cubic graphs for . It is interesting to note that, although LKH failed to find a Hamiltonian cycle in , it very quickly descended to an ordering that contained exactly one gap. At the same time, the value of remained large for all three possible choices of due to the small number of common snakes in and any . In our experiments, Helsgaun’s LKH generated a final ordering for , for which was , and for the three Hamiltonian cycles.

Table 2 reports the success rate of SLH over large test sets. SLH was first tested on the Foster census (see Bouwer et al. bouwer and Royle Royle ). SLH was next tested on 1,000,000 cubic graphs of size 100 (generated uniformly at random, see wormald ), separated into four test sets. Finally, SLH was tested on 10,000 cubic graphs of size 1,000 (also generated uniformly at random). The success rate of SLH was checked by confirming that the reported Hamiltonian cycles exist in the graph (of course, this is guaranteed by the algorithm, but it was checked nonetheless to confirm that our implementation was accurate), and by independently confirming any result of non-Hamiltonicity. There were 95 non-Hamiltonian graphs among the 1,000,000 randomly generated cubic graphs of size 100. All of the randomly generated cubic graphs of size 1,000 were Hamiltonian. The reported times are the total times to solve all instances in each set of graphs, with the input/output times removed.
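The solution check described above amounts to verifying that a reported ordering really is a Hamiltonian cycle: each vertex appears exactly once, and every consecutive pair on the circle, including the wrap-around, is an edge. A minimal sketch (function name and adjacency representation are illustrative):

```python
# Sketch of the verification step: confirm that a reported ordering is a
# genuine Hamiltonian cycle of the graph -- every vertex exactly once, and
# every consecutive pair (including the wrap-around) an edge.

def is_hamiltonian_cycle(ordering, n, adj):
    if sorted(ordering) != list(range(n)):      # each vertex exactly once
        return False
    return all(ordering[(i + 1) % n] in adj[ordering[i]]
               for i in range(n))               # all circle pairs are edges

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
is_hamiltonian_cycle([0, 1, 2, 3], 4, adj)  # True
is_hamiltonian_cycle([0, 2, 1, 3], 4, adj)  # False: (0,2) is not an edge
```
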

As can be seen in Table 2, SLH succeeded in finding a Hamiltonian cycle in all Hamiltonian graphs tested. To date the authors have not found an instance of a Hamiltonian graph for which SLH has terminated without finding a Hamiltonian cycle, having tested graphs of sizes up to 5000.

## 5 Conclusions

In this paper we presented SLH, a new heuristic of polynomial complexity for solving the Hamiltonian Cycle Problem. Table 1 shows that SLH, while still in its infancy, is already a competitive HCP solver, while Table 2 indicates that SLH is extremely reliable, succeeding in finding a Hamiltonian cycle in every single tested Hamiltonian graph, notably including graphs possessing very few Hamiltonian cycles. This includes graphs where other, benchmark, solvers failed to find any such cycles. Furthermore, these results were achieved without the need for the randomisation techniques that are a common feature of many contemporary HCP heuristics. We believe that balancing the use of our γ and ϰ generator transformations overcomes some of the difficulties that other heuristics try to overcome by randomisation.

In addition, we expect that in future versions, the speed of SLH will be significantly improved by optimising the implementation in a variety of ways. In particular, replacing SLH-closing transformations with Lin-Kernighan type sequential transformations, such as those implemented in LKH Helsgaun , seems promising and deserves further exploration. Similarly, investigations of both alternative initial orderings and alternative stopping rules could lead to improvements in performance. Taking advantage of a pre-solver, such as the subgradient optimisation routine used in LKH, is also likely to improve the solving time of the algorithm in general. A demonstration version of SLH is available on our website SLH , where users are invited to submit and solve their HCP problems through our online interface.

## Acknowledgments

The authors gratefully acknowledge useful comments from the anonymous referees which improved the exposition, and useful discussions with Brendan McKay and Gordon Royle that helped us to find suitable test instances. The editor, William Cook, also contributed significantly by suggesting further testing and changes of inaccurate statements. The research presented in this manuscript was supported by the ARC Discovery Grant DP120100532.

## References

•  Applegate, D.L., Bixby, R.B., Chvátal, V., and Cook, W.J. Concorde TSP Solver: http://www.tsp.gatech.edu/concorde/index.html.
•  Applegate, D.L., Bixby, R.B., Chvátal, V., and Cook, W.J. The Traveling Salesman Problem: A Computational Study. Princeton University Press (2006).
•  Baniasadi, P., Clancy, K., Ejov, V., Filar, J.A., Haythorpe, M., and Rossomakhine, S. Snakes and Ladders Heuristic – Web Interface: http://fhcp.edu.au/slhweb/ (2012).
•  Bouwer, I.Z., Chernoff, W.W., Monson, B., Star, Z. The Foster Census. Charles Babbage Research Center, Winnipeg (1988).
•  Eppstein, D. The Traveling Salesman Problem for Cubic Graphs. In Frank Dehne, Jörg-Rüdiger Sack, and Michiel Smid, editors, Algorithms and Data Struct., volume 2748 of Lecture Notes in Computer Science, pages 307–318. Springer Berlin (2003).
•  Flood, M.M. The Traveling Salesman Problem. Oper. Res. 4, 61–75 (1956).
•  Gutin, G. and Punnen, A.P. Traveling Salesman Problem and Its Variations. Kluwer Academic Publishers (2002).
•  Helsgaun, K. An Effective Implementation of Lin-Kernighan Traveling Salesman Heuristic. Eur. J. Oper. Res. 126, 106–130 (2000).
•  Isaacs, R. Infinite Families of Nontrivial Trivalent Graphs Which Are Not Tait Colorable. Amer. Math. Monthly 82:221–239 (1975).
•  Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., and Shmoys, D.B. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization. John Wiley and Sons (1985).
•  Lin, S. Computer Solutions of The Traveling Salesman Problem. The Bell Systems Tech. J. 44, 2245–2269 (1965).
•  Lin, S. and Kernighan, B.W. An Effective Heuristic Algorithm for The Traveling Salesman Problem. Oper. Res. 21, 496–516 (1973).
•  Royle, G., Conder, M., McKay, B., and Dobscanyi, P. Cubic symmetric graphs (The Foster Census): http://school.maths.uwa.edu.au/gordon/remote/foster (2001).
•  Sheehan, J. Graphs with exactly one hamiltonian circuit. J. Graph Th. 1:37–43 (1977).
•  TSPLIB. Hamiltonian cycle problem (HCP): http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95 (2008).
•  Weisstein, E.W. Generalized Petersen Graph (From MathWorld – A Wolfram Web Resource): http://mathworld.wolfram.com/generalizedpetersengraph.html.
•  Wormald, N. Models of Random Regular Graphs. In Surveys in Combinatorics, Cambridge University Press, pages 239–298 (1999).