1 Introduction
The essence of the Hamiltonian Cycle Problem (HCP, for short) is contained in the following, deceptively simple statement: given a graph G, find a Hamiltonian cycle (HC), that is, a simple cycle that contains all vertices of the graph, or prove that no such cycle exists. A graph is said to be Hamiltonian if it contains at least one Hamiltonian cycle, and non-Hamiltonian otherwise. (The name stems from Sir William Hamilton's investigations of such cycles on the dodecahedron graph around 1856; however, Leonhard Euler studied the famous "knight's tour" on a chessboard as early as 1759.)
Henceforth, a graph of order n will mean an n-vertex graph which is both simple (without self-loops or multiple edges) and undirected (every edge admits two-way traffic).
The HCP is known to be NP-complete and has become a challenge that attracts mathematical minds both in its own right and because of its close relationship to the famous Travelling Salesman Problem (TSP). An efficient solution of the latter would have an enormous impact in operations research, optimisation and computer science. However, the TSP is merely the problem of identifying a Hamiltonian cycle (also called a "tour") of minimal length, where each edge has an associated length (or weight), and the length of a cycle is the sum of the lengths of the edges comprising it. The latter constitutes a simple, linear objective function, and it can be argued that much of the difficulty of the TSP is embedded in the HCP, since the remaining task is to find an optimal tour in the space of Hamiltonian cycles.
There is now an extensive body of literature devoted to the TSP and its many variants. Besides theoretical developments there are also many exact algorithms and heuristics. The reader is referred to the comprehensive books by Lawler et al Lawl85Traveling and Gutin and Punnen Gutin . For many researchers, a solution of the HCP becomes a simple corollary of the TSP although, in principle, the former need not require the minimisation of any objective function.
Some of the most successful heuristics for solving the TSP are local search methods that exploit the so-called "k-opt" transformations, which facilitate movement from one tour to a shorter tour via an exchange of exactly k edges. The notion of a 2-opt transformation is widely attributed to Flood Flood . Subsequently, Lin Lin and Lin and Kernighan (L–K) LK developed powerful heuristics that exploited k-opt transformations more extensively. Indeed, L–K is still embedded in some of the best modern heuristics for the TSP, notably Helsgaun's LKH Helsgaun . For a comprehensive discussion of modern developments we refer the reader to Applegate et al ConcordeBook .
The Snakes and Ladders Heuristic (SLH) for the Hamiltonian Cycle Problem presented here is a polynomial-complexity algorithm inspired by, but distinctly different from, the k-opt heuristics. The name Snakes and Ladders Heuristic comes from our visual representation of iterations of the algorithm, described in detail in the next section.
In order to attempt to find a Hamiltonian cycle of a given graph G, we place its vertices in some order on a circle. Any edges between adjacent vertices on the circle are rendered as arcs of the circle (which we call snakes), while all other edges are rendered as chords of the circle (which we call ladders). If two adjacent vertices on the circle have no snake between them, we say there is a gap between them. SLH attempts to place all edges of a Hamiltonian cycle on the circle by a number of transformations that are isomorphisms of the underlying graph. While some, but not all, of these isomorphisms coincide with k-opt transformations, the main difference compared to k-opt search methods is that our approach does not require an improvement of a TSP-type objective function; rather, SLH seeks changes in the arrangement of vertices of the graph on the circle with the goal of facilitating the eventual closure of gaps. We might say that in SLH, k-opt transformations have been generalised to k-change transformations in an appropriate way.
Whereas a k-opt method attempts to "improve" after each iteration by decreasing the number of gaps until a Hamiltonian cycle is obtained, SLH uses a more general notion of "improvement" by trying to achieve a suitable balance between increasing the number of Hamiltonian cycle edges on the circle and decreasing the number of gaps. As a consequence, SLH differs from standard k-opt heuristics in three important ways.

SLH performs a sequence of compositions of two simple generator operations: gamma (γ) and kappa (κ). The sequence so constructed often results in transformations that, under k-opt heuristics, would not be allowed or would be difficult to identify. We later prove that all Hamiltonian cycles in any given graph of order n are reachable through the use of these two operations, with the number of operations required bounded above by a linear function of n.

The k-opt heuristics update the tour only when an "improvement" is found, while SLH allows floating and opening transformations that may result in either no improvement or a "sacrifice", respectively.

The k-opt heuristics rely on randomisation techniques to obtain a Hamiltonian cycle, while SLH does not take advantage of these techniques and is designed to run on any initial input arrangement in a deterministic fashion.
As indicated in item 3, we implement SLH as a deterministic heuristic. That is, for a given graph and a given starting orientation, the heuristic will produce the same output every time it is run. A stopping condition is chosen to ensure that SLH will terminate in polynomial time, either by identifying a Hamiltonian cycle, or by failing to improve after a prescribed number of iterations. SLH has been implemented in C++. Although SLH is not guaranteed to find a Hamiltonian cycle in a Hamiltonian graph, preliminary experiments on many graphs (not exceeding 5000 vertices) have succeeded in all cases. That is, a Hamiltonian cycle has been found in all graphs that were Hamiltonian, while termination declaring the graph to be "likely non-Hamiltonian" was reached in all instances of graphs known, a priori, not to possess any Hamiltonian cycles.
This paper is organised as follows: Section 2 introduces the basic idea behind our approach, the transformations of SLH and our algorithm implementation. Section 3 gives a plausible explanation of why SLH is effective in finding Hamiltonian cycles. Section 4 reports on some of the experiments performed with the algorithm, as well as a comparison with well-known TSP solvers. Finally, our conclusions are presented in Section 5, including a link to our website, http://fhcp.edu.au/slhweb/ SLH , where readers are invited to test SLH on either built-in or user-supplied problems.
2 Description of the algorithm
We start by introducing the terminology that we use in this paper. Then we will describe the transformations that are used in our implementation of SLH. Finally, we discuss our implementation of the SLH algorithm.
2.1 Basic idea
In our approach, we place the vertices of a simple undirected graph G on a circle in some order. Then, all edges of G between adjacent vertices on the circle are represented as arcs of the circle, which we call snakes, while the other edges are represented as chords of the circle, which we call ladders.
The arrangements of vertices on the circle form natural equivalence classes. Namely, two arrangements are said to be equivalent if either one can be transformed to the other via a rotation (clockwise or anticlockwise), a reversal of the ordering, or a composition of both. This implies that two arrangements are equivalent if and only if all vertices have the same neighbours in both arrangements. For example, the arrangement (1,3,4,6,2,5) is equivalent to (3,4,6,2,5,1) and (5,2,6,4,3,1), but not to (1,3,4,6,5,2).
It is clear that any member of the equivalence class, for a fixed graph G, contains the same snakes and ladders as any other member of this class. We use the term ordering, or circle ordering, to denote such an equivalence class, and give it the symbol Σ_G. However, since the algorithm and all definitions in this paper are given for a fixed graph, we henceforth drop the subscript G. For a given ordering Σ, if two vertices u and v that are adjacent on the circle are not connected by a snake, we say that (u, v) is a gap on Σ. When no confusion is possible, we will also use the term ordering to denote a particular member of the equivalence class, and use special notation to describe the ordering. For any adjacent pair of vertices, there are three distinct possibilities: there may be a snake between them, there may be a gap between them, or there may be either a gap or a snake; each of these situations is denoted by its own symbol. This notation allows us to define transformations that require the presence of particular snakes or gaps in certain parts of the ordering, but remain defined regardless of what is present elsewhere in the ordering. When denoting an ordering, the initial vertex is repeated at the end; equivalent notation without the initial vertex repeated refers to a segment of the ordering. For the sake of clarity, we can choose to view a given ordering as an ordered set of segments. For example, if we have an ordering (v_1, v_2, …, v_n, v_1), we can define segments S_1 = (v_2, …, v_i) and S_2 = (v_{i+1}, …, v_n) and rewrite the ordering as (v_1, S_1, S_2, v_1). We additionally define the segment S̄ to be the reverse of S, so in the previous example S̄_1 = (v_i, …, v_2) and the ordering (v_1, S̄_1, S_2, v_1) = (v_1, v_i, …, v_2, v_{i+1}, …, v_n, v_1).
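To make the snake, ladder and gap terminology concrete, the following illustrative Python sketch (not part of the authors' implementation, which is in C++; all function names are ours) classifies the adjacencies of a circular ordering against a graph, and computes a canonical representative of the equivalence class under rotation and reversal:

```python
def classify(order, edges):
    """Classify adjacencies of a circular ordering against a graph.

    Returns (snakes, ladders, gaps): snakes are graph edges joining
    vertices adjacent on the circle, ladders are the remaining graph
    edges (chords), and gaps are adjacent circle pairs with no edge.
    """
    n = len(order)
    edge_set = {frozenset(e) for e in edges}
    circle = {frozenset((order[i], order[(i + 1) % n])) for i in range(n)}
    return circle & edge_set, edge_set - circle, circle - edge_set

def canonical(order):
    """Canonical representative of an ordering's equivalence class under
    rotation and reversal (two orderings are equivalent iff every vertex
    keeps the same circle neighbours)."""
    n = len(order)
    candidates = []
    for seq in (list(order), list(reversed(order))):
        for r in range(n):
            candidates.append(tuple(seq[r:] + seq[:r]))
    return min(candidates)

# On a 6-cycle, the ordering (1,2,3,4,6,5) has 4 snakes, 2 ladders, 2 gaps.
snakes, ladders, gaps = classify(
    (1, 2, 3, 4, 6, 5), [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]
)
```

With `canonical`, one can check the equivalence example given above: the rotation (3,4,6,2,5,1) and the reversal (5,2,6,4,3,1) produce the same canonical form as (1,3,4,6,2,5), whereas (1,3,4,6,5,2) does not.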
We note that a Hamiltonian cycle corresponds to an ordering containing exactly n snakes or, equivalently, an ordering with no gaps. Hence, the ultimate goal for SLH is to close all the gaps on Σ through a series of transformations. These transformations can be described as compositions of two generator transformations, specifically the following two isomorphisms:

γ isomorphism. This transformation, denoted by γ, maps an ordering (v_1, v_2, …, v_i, v_{i+1}, …, v_n, v_1) to the ordering (v_1, v_i, …, v_2, v_{i+1}, …, v_n, v_1). That is, if we define segments S_1 = (v_2, …, v_i) and S_2 = (v_{i+1}, …, v_n), then γ maps the ordering (v_1, S_1, S_2, v_1) to the ordering (v_1, S̄_1, S_2, v_1); it reverses the segment S_1. Vertex v_{i+1} is implicitly defined as being the vertex adjacent to v_i on the segment S_2.

κ isomorphism. This transformation, denoted by κ, maps an ordering (v_1, S_1, S_2, S_3, v_1) to the ordering (v_1, S_2, S_1, S_3, v_1); it exchanges the segments S_1 and S_2. The definition implies that κ is only defined for orderings in which the exchanged segments do not contain the initial vertex v_1. The definition allows for degenerate cases: segment S_1 may contain a single vertex; segment S_2 may contain a single vertex; and segment S_3 may be empty (so that S_2 directly precedes the closing vertex v_1 on the ordering) or may contain a single vertex.
Note that, although the figures for the generator transformations include ladders and snakes, these need not be present for the isomorphisms to be performed.
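Although the figures defining the generator transformations are not reproduced here, one concrete reading consistent with the surrounding text is that γ reverses a segment of the ordering while κ exchanges two segments. A minimal Python sketch under that assumption (the function names and index conventions are ours, not the paper's):

```python
def gamma(order, i, j):
    """Gamma-style move (our concrete reading): reverse the segment
    order[i:j+1]. Only the arrangement on the circle changes; the
    underlying graph is untouched, so this is an isomorphism."""
    lst = list(order)
    lst[i:j + 1] = reversed(lst[i:j + 1])
    return tuple(lst)

def kappa(order, i, j, k):
    """Kappa-style move (our concrete reading): exchange the segments
    order[i:j] and order[j:k], leaving the rest of the ordering fixed."""
    lst = list(order)
    return tuple(lst[:i] + lst[j:k] + lst[i:j] + lst[k:])
```

Both moves merely permute positions on the circle, so every vertex and edge of the graph is preserved; only the classification of edges into snakes and ladders can change.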
Later, in Section 3, we will show that for any initial ordering Σ and any Hamiltonian ordering H_Σ, that is, an ordering corresponding to a Hamiltonian cycle, there exists a transformation T from Σ to H_Σ that is a composition of a number of γ and κ transformations in some order. Namely,

T = τ_m ∘ τ_{m−1} ∘ ⋯ ∘ τ_1,

where each τ_i can take the value γ or κ, and m bounds the number of such transformations. In fact, our algorithm attempts to iteratively build such a transformation to some unknown Hamiltonian ordering as a composition of SLH transforms that are themselves specially designed combinations of γ and κ transformations. To ensure that our algorithm is polynomially bounded, we choose the bound on the number of transformations to be polynomial in n.
Note that T transforms the initial ordering to an ordering with zero gaps. Therefore, SLH transformations that reduce the number of gaps are viewed as desirable. Such transformations are widely used in TSP algorithms and are called k-opt exchanges. Adapted to the HCP, they improve the tour by exchanging ladders with gaps and snakes (k in total) in such a way that the new tour contains fewer gaps. Even though SLH incorporates k-opt transformations, it also incorporates transformations that may preserve the weight of the tour (i.e. keep the same number of gaps) or even increase the weight of the tour (i.e. increase the number of gaps). We call these flo transformations, to emphasise their indeterminate (floating) nature. Unlike k-opt transformations, flo transformations are defined for situations where it is not known, in advance, whether certain gaps or ladders are present. Specifically, a k-flo transformation is one in which as many as k ladders may be turned into snakes, if they exist. Then, if it so happens that some of these ladders and/or gaps exist, the transformation might produce a new ordering with a reduced number of gaps. In such a case, the k-flo transformation specialises to an l-opt transformation for some l ≤ k.
2.2 Operations
In our implementation, we found it useful to consider special combinations of γ and κ that proved to be the most effective among the many alternatives we tried. These can be partitioned into three types: closing, floating and opening. In each case, certain ladders, snakes and gaps are required to be either present or absent for the combination of generator transformations to produce the transformation.
SLH–closing transformations: These correspond to particular 2–opt and 3–opt transformations:

SLH closing 2-opt type 1 transformation: maps an ordering in which one required edge is a ladder to an ordering with fewer gaps.

SLH closing 2-opt type 2 transformation: maps an ordering in which two required edges are ladders to an ordering with fewer gaps.

SLH closing 3-opt transformation: maps an ordering in which three required edges are ladders to an ordering with fewer gaps.
SLH–floating transformations: These are special ladder–snake interchange transformations that generate new orderings, where the number of gaps is either unchanged, or possibly reduced (if certain ladders, gaps, or both are present in the first ordering). Ladders which (if they exist) will be turned into snakes after the transformation are represented by dashed lines in the first ordering, with the corresponding snake also represented by a dashed line in the second ordering. Of course, if such a ladder does not exist then the corresponding dashed line in the second ordering represents a gap. In particular:

SLH floating 2-flo transformation: maps an ordering in which one required edge is a ladder to a new ordering; if a second specified edge is also a ladder, the transformation closes at least one gap.

SLH floating 3-flo transformation: maps an ordering in which one required edge is a ladder and at least one of two further specified edges is a ladder to a new ordering; if both of the latter are ladders, the transformation closes at least one gap.

SLH floating 4-flo type 1 transformation: maps an ordering in which three required edges are ladders to a new ordering; if a fourth specified edge is also a ladder, the transformation closes at least one gap.

SLH floating 4-flo type 2 transformation: maps an ordering in which two required edges are ladders and at least one of two further specified edges is a ladder to a new ordering; if both of the latter are ladders, the transformation closes at least one gap.

SLH floating 5-flo transformation: maps an ordering in which four required edges are ladders to a new ordering; if a fifth specified edge is also a ladder, the transformation closes at least one gap.
SLH opening transformations: Importantly, we introduce a single snake–ladder interchange transformation that generates a new ordering that generally contains one more gap.

SLH opening flo transformation: maps an ordering in which two required edges are ladders to a new ordering. In general, this transformation increases the number of gaps by 1, though this is not always the case.
Note that, in all cases, these transformations are designed to be performed for an ordering containing a gap g. To represent this, we say that a transformation is performed "around gap g".
2.3 Algorithm
SLH works in four main stages. In each stage it attempts to find a Hamiltonian cycle by a particular approach. If no Hamiltonian cycle is found within the (polynomial-complexity) constraints of that stage, SLH moves on to the next stage in order to attempt a different approach. Finally, if stage 3 has failed to reduce the number of gaps after a prescribed number of iterations, the graph is declared to be "likely non-Hamiltonian". The following is a description of our algorithm implementation.
Stage 0.

Let the initial ordering be the original assignment of the graph.

Perform SLH-closing transformations to obtain new orderings. Continue until the number of gaps cannot be reduced by any SLH-closing transformation. If the number of gaps is zero, stop: a Hamiltonian cycle has been found.

Create a gap list, initially empty, and an ordering list, initially containing only the most recently obtained ordering. These two lists will act as tabu lists. Go to stage 1.
We note that we can substitute stage 0 with any effective polynomially bounded k-opt algorithm (an effective implementation of L–K, for example Helsgaun's LKH Helsgaun , could be such an algorithm), disallowing the use of any randomisation techniques so as to retain the deterministic nature of SLH. The final ordering provided by such an algorithm will become the initial ordering for stage 1. Using operations from a sufficiently efficient k-opt algorithm instead of our closing operations may improve the performance of SLH. However, for this proof of concept, and to emphasise the effectiveness of SLH in stages 1, 2 and 3, we chose the simplest possible closing transformations.
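As an illustration of the stage-0 idea, the following Python sketch greedily applies segment reversals that reduce the number of gaps until no further reduction is possible. It is a simplified stand-in for the SLH-closing transformations, not the authors' implementation:

```python
def count_gaps(order, edge_set):
    """Number of adjacent circle pairs that are not edges of the graph."""
    n = len(order)
    return sum(
        frozenset((order[i], order[(i + 1) % n])) not in edge_set
        for i in range(n)
    )

def close_gaps(order, edges):
    """Greedily apply segment reversals that reduce the gap count until
    none helps; returns the final ordering and its number of gaps."""
    edge_set = {frozenset(e) for e in edges}
    order = list(order)
    improved = True
    while improved and count_gaps(order, edge_set) > 0:
        improved = False
        n = len(order)
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                # Candidate closing move: reverse the segment order[i..j].
                trial = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if count_gaps(trial, edge_set) < count_gaps(order, edge_set):
                    order, improved = trial, True
                    break
            if improved:
                break
    return tuple(order), count_gaps(order, edge_set)
```

On a 6-cycle with the starting arrangement (1,2,3,4,6,5), which has two gaps, a single reversal suffices to reach an ordering with zero gaps, i.e. a Hamiltonian cycle.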
Stage 1.

Perform any SLH-floating transformation around a gap g on the latest ordering. Save g in the gap list and continue as follows:

If the new ordering contains a gap that is not in the gap list, add the ordering to the ordering list and proceed to stage 1.2.

If the new ordering contains only gaps that are already in the gap list, repeat stage 1.1 with a different SLH-floating transformation on the previous ordering. If no SLH-floating transformation on the previous ordering produces an ordering containing a gap that is not already in the gap list, then revert to an earlier ordering, and continue (perhaps needing to revert to an even earlier ordering) until a new SLH-floating transformation can be performed that does produce an ordering containing a gap that is not in the gap list. Then, add the ordering to the ordering list and proceed to stage 1.2.

If no SLH-floating transformation on any ordering in the ordering list produces an ordering containing a gap that is not in the gap list, proceed to stage 2.


If the number of gaps is less than it was in the previous iteration, then remove all orderings from the ordering list, and all gaps from the gap list. If a Hamiltonian cycle is found, stop.

Return to stage 1.1.
Note that, according to the stopping rule, if an SLH-floating transformation creates only gaps that have already been obtained by some previous transformation, the transformation will not be retained and the ordering will not be listed. Thus, the stopping rule limits the number of orderings in the ordering list at this stage. Once each ordering in the ordering list has been considered, we proceed to stage 2.
Stage 2.

Record the most recently obtained ordering and its number of gaps.

Perform an SLH-opening transformation from a gap in the recorded ordering and save the ordering so obtained in the ordering list.

Repeat stage 1 until no gap can be closed. If the number of gaps is zero, stop: a Hamiltonian cycle has been found.

If the number of gaps in the most recent ordering is less than the number recorded in stage 2.1, empty the gap list, remove all orderings except the most recent ordering from the ordering list, and return to stage 1.

Recover the ordering recorded in stage 2.1 and return to stage 2.2, but perform a different SLH-opening transformation. If all possible SLH-opening transformations from the recorded ordering have been previously considered, go to stage 3.
Stage 3.

Record the most recently obtained ordering.

Perform an opening transformation from any gap in the recorded ordering and add the resulting ordering to the ordering list. Then, attempt to perform k-opt transformations (floating transformations that reduce the number of gaps), without obtaining already listed orderings, saving each new ordering in the ordering list. Continue until no more gaps can be closed. If the number of gaps is zero, stop: a Hamiltonian cycle has been found. If at any stage the number of orderings in the ordering list exceeds the prescribed bound, go to stage 3.5.

If the number of gaps in the most recently obtained ordering is less than the number recorded in stage 2.1, empty the gap list, remove all orderings except the most recent ordering from the ordering list, and return to stage 1. Otherwise, revert to the most recently obtained ordering, descending from the ordering recorded in stage 3.1, in which k-opt transformations that have not yet been tried are possible. Continue attempting to perform k-opt transformations without obtaining already listed orderings, saving each new ordering in the ordering list.

If all possible k-opt transformations in each previous ordering descending from the ordering recorded in stage 3.1 have already been tried, return to stage 3.1.

If the number of orderings in the ordering list is equal to the prescribed bound, and the number of gaps is greater than zero in all of them, declare the graph to be "likely non-Hamiltonian" and stop.
Note that in the stages above where a transformation is selected, there may be multiple eligible transformations. The algorithm searches for any applicable transformations, and as soon as an eligible transformation is discovered, it is performed. If the transformations can be performed from any gap, the gaps are perused in the order in which they appear on the ordering. This ensures that the above process is deterministic, with the order of transformations determined entirely by the initial assignment of the graph.
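The deterministic gap-selection order just described can be sketched as follows; the helper below (an illustrative assumption of ours, not the authors' code) enumerates the gaps of an ordering in the order in which they appear around the circle:

```python
def gaps_in_order(order, edges):
    """List the gaps of an ordering in the order they appear around the
    circle. Scanning gaps in this fixed order, and performing the first
    applicable transformation found, keeps the search deterministic:
    the same input always yields the same sequence of moves."""
    n = len(order)
    edge_set = {frozenset(e) for e in edges}
    return [
        (order[i], order[(i + 1) % n])
        for i in range(n)
        if frozenset((order[i], order[(i + 1) % n])) not in edge_set
    ]
```

For example, on a 6-cycle the arrangement (1,2,3,4,6,5) yields the gaps (4,6) and (5,1), always in that order.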
2.4 Worstcase algorithmic complexity
In the algorithm implementation above, stage 3 has the largest search space and the stopping condition with the largest bound, so in the worst case it dominates the execution time. In each iteration of stage 3 we allow as many opening transformations as are necessary to enable a floating transformation to close a gap. Then, there will be a sequence of such floating transformations that close gaps. If Δ is the maximum degree of the graph, then there are O(nΔ^4) potential floating transformations at each step of the sequence. This count arises because the SLH floating 5-flo transformation requires four edges to be chosen: the first emanating from a vertex next to one of the at most n gaps, and each subsequent edge emanating from a vertex determined by the previous edge, with at most Δ edges from each vertex. Then, to determine whether such a transformation produces an ordering not already in the ordering list, we need to search the latter. The orderings are stored in an index set, so a binary search over the set is a logarithmic process, and comparing two orderings is an O(n) process. The number of iterations in stage 3 is bounded by the stopping condition. Theoretically, stages 1–3 could be repeated up to n times, as the requirement to return from stage 3.3 to stage 1 is only that the number of gaps (at most n) has decreased by at least 1. Multiplying these factors shows that the worst-case complexity of the algorithm detailed above is polynomial in n and Δ; for a sparse graph, where Δ is bounded by a constant, the terms in Δ disappear from the bound. Other operations (such as adding the newly constructed ordering to the ordering list) are dominated by the time taken to search for the transformations in stage 3.
Our experience indicates, however, that this worst-case complexity is very unlikely to be encountered in practice. Experimentally, we have seen that when SLH reaches stage 3 the number of gaps is very close to the minimal possible number, so stages 1–3 only need to be repeated a handful of times. Also, the likelihood of needing to open many gaps before one can be closed in stage 3 is extremely small. So, even in cases where a non-Hamiltonian graph is submitted to SLH, the performance is likely to be much closer to the cost of a single pass through stages 1–3, with the terms in Δ again disappearing for sparse non-Hamiltonian graphs. Furthermore, in our experiments we have seen that almost all Hamiltonian graphs are solved without ever needing to reach stage 2. In such cases the cost of the heuristic is dominated by stage 1: there are at most n iterations in stage 1, at each iteration we search from a prescribed gap, so there are O(Δ^4) choices of floating transformations, and after each iteration we need to record the new ordering, which requires a binary search of the ordering list.
3 Motivation
It is worth mentioning that the performance of SLH has exceeded our expectations. By that we mean that SLH correctly solved, in an entirely deterministic way, all graph instances that we tried, which contained up to 5000 vertices. These instances included several particularly hard graphs, such as generalized Petersen (GP) cubic graphs that contain only 3 Hamiltonian cycles, clique graphs, leap graphs, TSPLIB graphs and many others. That is, we have not yet encountered a Hamiltonian graph where SLH fails to find a Hamiltonian cycle before the invocation of the stopping rules. Moreover, SLH works with the given arrangement of the initial ordering, without benefiting from randomisation techniques or preprocessing.
The effectiveness of SLH can arguably be justified by our new perspective on what it means to "improve" after each iteration. Let us suppose that an undirected graph is represented by the current ordering Σ that contains some number of gaps, and that a Hamiltonian ordering H corresponding to a Hamiltonian cycle is known. Also, assume that there are s snakes on Σ that are edges of the Hamiltonian cycle (or, equivalently, s snakes in H). We define the distance between Σ and H as

d(Σ, H) = n − s.

The difference between orderings Σ and Σ′ is then defined as

δ(Σ, Σ′) = d(Σ, H) − d(Σ′, H).
In our perspective, a transformation that maps ordering Σ to ordering Σ′ improves the tour if it decreases the distance to H and hence constitutes a desirable transformation. Specifically, by applying such transformations iteratively we ensure H will eventually be obtained. We now show that from any ordering there exists a γ or κ transformation, or a composition of the two, that, in our sense, improves the tour. This approach is in contrast with k-opt algorithms (adapted for the HCP), where the focus is solely on reducing the number of gaps. We argue that reducing the number of gaps is not a sufficient measure of improvement, because closing gaps alone might not bring us any closer to H; indeed, the distance might increase. Thus, in situations where the distance is large, k-opt algorithms might struggle to find a Hamiltonian cycle, even if the number of gaps is small. In Section 4 we present examples where a k-opt approach quickly reduces the number of gaps to 1 but fails to find any Hamiltonian cycle, arguably because the distance from any Hamiltonian ordering to the current ordering with 1 gap is quite large. By contrast, reducing the distance, without controlling the number of gaps, is enough to converge to a Hamiltonian ordering. However, since we do not know a Hamiltonian cycle in advance, we cannot measure the distance. Hence, reducing the number of gaps in an ordering is merely a pragmatic, surrogate objective. Nonetheless, SLH is willing to sacrifice the latter to move, "laterally", to a possibly better ordering.
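The distance-based notion of improvement can be made concrete with a short sketch. Taking the distance to be n minus the number of snakes of the ordering that are also edges of the known Hamiltonian cycle H, as described above, one illustrative Python rendering (ours, not the paper's code) is:

```python
def distance(order, ham_edges):
    """Distance from an ordering to a Hamiltonian cycle H: n minus the
    number of snakes of the ordering that are also edges of H."""
    n = len(order)
    h = {frozenset(e) for e in ham_edges}
    s = sum(frozenset((order[i], order[(i + 1) % n])) in h for i in range(n))
    return n - s
```

A transformation mapping Σ to Σ′ then improves the tour precisely when `distance(Σ′, H) < distance(Σ, H)`, regardless of how the gap count changes.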
We now proceed to the argument that γ and κ transformations (or a composition of the two) suffice to transform an ordering Σ to another ordering Σ′ that is strictly closer to the Hamiltonian ordering.
More precisely, we show that the distance decreases by at least 1, while the number of gaps increases by at most 2.
Suppose that Σ is distinct from the Hamiltonian ordering and that Σ contains a gap (a, b). Then at least one edge of the Hamiltonian cycle must be a ladder in Σ, say (a, u), emanating from a (since (a, b) is not an edge of the graph, at most one of the two cycle edges at a can be a snake). There are three options for the other edge of the Hamiltonian cycle with endpoint u:

the other edge (u, w) is a snake in Σ, and the segment of the ordering from b to u contains w;

the other edge (u, w) is a snake in Σ, and the segment of the ordering from b to u does not contain w;

the other edge (u, w) is a ladder in Σ.
In situations (i) and (iii), choosing the γ transformation that reverses the segment from b to u results in an ordering Σ′ whose distance to the Hamiltonian ordering is one less than that of Σ. This is because the ladder (a, u), which is in the Hamiltonian cycle, becomes a snake, whereas the snake destroyed at the far end of the reversed segment, if it exists, becomes a ladder that is not in the Hamiltonian cycle. Also, the number of gaps in Σ′ does not increase compared to Σ, though it could decrease if the other adjacency created by the reversal corresponds to a ladder, or if the adjacency destroyed at the far end of the segment was a gap.
In situation (ii), the Hamiltonian cycle continues from vertex u to its circle neighbour w outside the segment, and then eventually moves back to a vertex on the segment between the gap and u. Without loss of generality, let (p, q) be the edge of the cycle that first returns to that segment. There are now three possibilities.

In the first possibility, we apply a κ transform that maps Σ into some Σ′. The number of gaps will not increase by more than 2. However, Σ′ contains two new snakes that are edges of the Hamiltonian cycle, while of the snakes of Σ that become ladders, only one may be an edge of the cycle; the other does not belong to the cycle, and therefore its absence from the circle does not increase the distance. So, overall, the number of snakes in Σ′ that are in the Hamiltonian cycle must grow by at least 1 compared to Σ.

In the second possibility, we apply a κ transform that maps Σ into Σ′. Again, the number of gaps cannot increase by more than 2. On the other hand, Σ′ contains two former ladders (which are edges of the Hamiltonian cycle) as snakes, while of the former snakes that become ladders in Σ′, only one may be an edge of the cycle; the remaining one does not belong to the cycle and therefore does not contribute to the distance. So, overall, the number of snakes in Σ′ that are in the Hamiltonian cycle must grow by at least 1 compared to Σ.

In the third possibility, some edge of the Hamiltonian cycle joining the two segments is a ladder in Σ. In this case the applied composition turns two ladders, both of which are in the Hamiltonian cycle, into snakes in Σ′. One former snake becomes a ladder in Σ′ and may be in the cycle; all other former snakes that become ladders in Σ′ are not in the cycle. So, overall, the number of snakes in Σ′ that are in the Hamiltonian cycle must grow by at least 1 compared to Σ.
In all three cases we see that the number of snakes that are edges of the Hamiltonian cycle grows by at least 1, and hence the distance decreases by at least 1. Therefore, Σ′ is closer to the Hamiltonian ordering, even though the transformation that maps Σ to Σ′ might increase the number of gaps.
Suppose now that Σ contains no gaps, that is, Σ corresponds to a Hamiltonian cycle different from the target cycle. Then Σ must contain an edge of the target cycle, say (x, y), that is a ladder in Σ. Then, there is a vertex z, adjacent to x on the circle, such that the snake (x, z) is not in the target cycle. If we treat (x, z) as if it were a gap (since we are not concerned if it is transformed into a ladder) and apply exactly those transformations described above in the corresponding situation, we improve the distance by at least 1 in every case, and may create at most 2 gaps if κ, or a composition of γ and κ, is applied. Therefore, the overall improvement achieved is at least as large as in the case of a genuine gap.
The above argument implies that, as mentioned in the introduction, for every ordering Σ and any Hamiltonian ordering H_Σ, there exists a transformation T mapping Σ to H_Σ, such that

T = τ_m ∘ τ_{m−1} ∘ ⋯ ∘ τ_1,

where each τ_i can take the value γ or κ, and m bounds the number of transformations. At any stage of the process, there exists at least one transformation that can decrease the distance by at least 1. Performing the correct sequence of these transformations in the process of constructing T ensures convergence to H_Σ in a number of transformations bounded by a linear function of n.
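For small n, the claim that compositions of the two generator operations reach every ordering can be checked by brute force. The sketch below assumes the concrete readings of γ (segment reversal) and κ (segment exchange) used earlier, and performs an exhaustive search over equivalence classes of orderings:

```python
def canonical(order):
    """Representative of the equivalence class under rotation/reversal."""
    n = len(order)
    cands = []
    for seq in (list(order), list(reversed(order))):
        for r in range(n):
            cands.append(tuple(seq[r:] + seq[:r]))
    return min(cands)

def reachable(start):
    """Search over equivalence classes of orderings, using segment
    reversals (gamma-style) and segment exchanges (kappa-style)."""
    n = len(start)
    seen = {canonical(start)}
    frontier = [tuple(start)]
    while frontier:
        order = list(frontier.pop())
        moves = []
        for i in range(n):
            for j in range(i + 1, n):
                # Gamma-style move: reverse the segment order[i..j].
                moves.append(tuple(order[:i] + order[i:j + 1][::-1] + order[j + 1:]))
                for k in range(j + 1, n + 1):
                    # Kappa-style move: exchange order[i:j] and order[j:k].
                    moves.append(tuple(order[:i] + order[j:k] + order[i:j] + order[k:]))
        for m in moves:
            c = canonical(m)
            if c not in seen:
                seen.add(c)
                frontier.append(m)
    return seen
```

For n = 5 there are (5 − 1)!/2 = 12 distinct circle orderings up to rotation and reversal, and the search reaches all of them from any starting arrangement; in particular, every Hamiltonian ordering is reachable.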
Of course, in our implementation of the algorithm, we do not know a Hamiltonian cycle in advance, so we use the transformations outlined in Section 2.2, each of which can be represented as a composition of γ and κ transformations, and we stop after a prescribed number of such transformations. So, given a graph and an initial ordering, our implementation of SLH produces, in a deterministic fashion, a more specialised mapping in which the generators applied depend upon which transformation is performed at each step. Then, either this mapping transforms the initial ordering to a Hamiltonian ordering, or we report that the graph is "likely non-Hamiltonian".
Next, we use the 15-vertex example given in Figure 14 to demonstrate the contrast between SLH and the most famous k-opt heuristic: the classical Lin–Kernighan algorithm LK . The main tool for finding the optimal tour (a Hamiltonian cycle in this case) in the Lin–Kernighan algorithm is the use of sequential exchanges where an improvement is possible. In other words, consider a set X = {x_1, …, x_k} of snakes and gaps and a set Y = {y_1, …, y_k} of ladders, where the exchange of X with Y constitutes an improved tour (i.e. fewer gaps). A sequential exchange is one in which x_i and y_i share an endpoint, and so do y_i and x_{i+1}, for all i. For the graph of Figure 14, which has only one Hamiltonian cycle, an improvement is possible only if X consists of the gap and all snakes that are not part of the Hamiltonian cycle, while Y consists of the ladders that are on the Hamiltonian cycle. This can be seen because there is only one gap, and therefore improvement implies that the number of gaps must be zero after the exchange. Therefore, a unique k-opt transformation exists that can improve the tour, namely the one exchanging X with Y on the ordering. Although it is possible to construct this k-opt transformation through a sequence of sequential edge-exchange moves (which individually do not improve the tour), the classical Lin–Kernighan algorithm would be unable to do so even if unlimited backtracking were permitted. (It should be noted that modern implementations of Lin–Kernighan view sequential exchanges in a more refined sense, which permits them to construct the required k-opt transformation.) In the words of the authors of LK , "the procedure is augmented by a limited defense in [such] situation[s]". So Lin–Kernighan's main technique for dealing with situations that require complicated nonsequential exchanges is to use randomisation and hope that an easier situation arises. However, if the graph is large and the number of Hamiltonian cycles is small, randomisation might be of little help, as demonstrated in the next section with generalised Petersen graphs.
On the other hand, one can check that an SLH transformation unfolds the graph and finds the Hamiltonian cycle in the graph of Figure 14.
The current implementation of SLH solves this graph in stage 1. However, more difficult instances are sometimes solved in stages 2 and 3, where opening transformations are featured. In those stages, the differences between SLH and the HCP-adapted Lin–Kernighan algorithm are best revealed.
4 Experimental results
The times reported in this section were obtained by running SLH on a Dell R210 PC (Intel(R) Xeon(R) E3-1270, 3.4GHz, 16GB RAM) running Linux CentOS 6.
We first tested SLH on several million subcubic (degree three or less) graphs containing up to 50 vertices. These tests helped us design effective stages and transformations for the algorithm. With the stages in place and the transformations selected as described in Section 2, the performance of SLH proved to be competitive with benchmark solvers, based upon experiments conducted on a variety of graphs. Specifically, we compared the performance of SLH to that of the Concorde TSP Solver Concorde , Helsgaun’s Lin–Kernighan implementation, the LKH HCP solver Helsgaun , and Eppstein’s deterministic HCP solver for cubic graphs eppstein . Though Concorde is a TSP solver, we represent instances of HCP as TSP by assigning a smaller weight to existing edges (edges of the given graph) than to nonexisting edges. It should be noted that LKH has a built-in presolve phase that performs subgradient optimisation. In order to obtain a fair comparison with SLH, we have chosen to evaluate LKH both with and without this presolve stage. As is demonstrated below, the presolve is very effective on certain classes of graphs, which indicates that SLH would benefit from a similar presolve phase. Some experimentation with alternative parameters was performed; however, with only one exception, we did not discover any instances where altering the parameters significantly improved the solving time or reliability in any of the comparison solvers. The sole exception was in switching off the restricted search option in LKH; in this case, LKH was more reliable, but took longer to solve. For the remainder of this manuscript, default parameters are assumed. We ran the Windows/Cygwin implementation of Concorde, and Eppstein’s solver, on a Dell Optiplex 780 (Intel(R) Core(TM)2 Quad CPU Q9650, 3.0GHz, 4GB RAM) running Windows 7. We ran LKH v2.0.7 on the same Dell R210 PC used to run SLH.
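The HCP-to-TSP representation described above can be sketched as follows. The specific weights in this sketch (1 for existing edges, 2 for nonexisting edges) are our illustrative choice rather than the values used in the experiments; any two distinct values work, since an n-vertex graph is Hamiltonian exactly when the optimal tour consists of n smaller-weight edges.

```python
def hcp_to_tsp_weights(n, edges):
    """Build a complete symmetric weight matrix for an n-vertex graph:
    existing edges get the smaller weight, non-edges the larger one.
    A TSP tour of total weight n then uses only existing edges,
    i.e. it is a Hamiltonian cycle of the original graph."""
    EDGE, NON_EDGE = 1, 2          # illustrative choice of weights
    w = [[NON_EDGE] * n for _ in range(n)]
    for u, v in edges:
        w[u][v] = w[v][u] = EDGE
    for i in range(n):
        w[i][i] = 0                # diagonal unused (no self-loops)
    return w
```

With this encoding, any TSP solver's optimal tour immediately certifies Hamiltonicity or the lack of it: a tour of weight n is a Hamiltonian cycle, while a strictly larger optimum proves no Hamiltonian cycle exists.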
For the test graphs we selected all TSPLIB HCP instances TSPLIB ; Flower snarks flower for k ∈ {5, 15, 25, 35}; Sheehan graphs sheehan for n ∈ {50, 60, 70, 80}; and also generalised Petersen cubic graphs GP(n,2) GP for n ∈ {39, 45, 51, 63, 123, 243}. The TSPLIB graphs are benchmark instances notable for their large size (up to 5000 vertices). They are all sparse, irregular graphs, and all are Hamiltonian. The Flower snarks are a family of non-Hamiltonian cubic graphs on 4k vertices that all contain Hamiltonian paths; so the optimal solution, from a TSP viewpoint, corresponds to a Hamiltonian path closed by a single nonexisting edge. These were tested to give an indication of how quickly SLH can detect non-Hamiltonicity compared to the other solvers tested, which are not designed to deal with such instances. This is an important test, as the essential difficulty of HCP is that a Hamiltonian cycle may not exist for a given graph, and hence it is interesting to see how the algorithms perform in these cases. Sheehan graphs are maximally dense uniquely Hamiltonian graphs; that is, Hamiltonian graphs with only a single Hamiltonian cycle that contain as large a ratio of edges to vertices as possible. Finally, the generalised Petersen graphs GP(n,2), for the values of n we chose (all satisfying n ≡ 3 (mod 6)), are cubic graphs that contain only three Hamiltonian cycles, which is the minimum number a Hamiltonian cubic graph may contain. For the Sheehan graphs and the generalised Petersen graphs we applied a random permutation to the vertices to help disguise the inherent structure.
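For reference, the generalised Petersen graph GP(n,k) is easy to construct: an outer n-cycle on vertices u_0, …, u_{n-1}, spokes u_i – v_i, and inner edges v_i – v_{i+k (mod n)}. A minimal sketch (the function name is ours; the construction assumes 2k ≢ 0 (mod n), which holds for all our test instances):

```python
def generalised_petersen(n, k):
    """Edge list of GP(n, k): outer vertices 0..n-1, inner vertices n..2n-1.
    The result is a cubic graph on 2n vertices with 3n edges: an outer
    cycle, n spokes, and an inner circulant with skip k."""
    edges = []
    for i in range(n):
        edges.append((i, (i + 1) % n))          # outer cycle u_i -- u_{i+1}
        edges.append((i, n + i))                # spoke u_i -- v_i
        edges.append((n + i, n + (i + k) % n))  # inner skip v_i -- v_{i+k}
    return edges
```

In particular, GP(5,2) is the classical Petersen graph: 10 vertices, 15 edges, every vertex of degree three.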
In our tests, the success rate of the LKH algorithm – which invokes randomisation – is determined over many trials (50 trials for Sheehan graphs, 1000 trials for the other graphs), and the execution time is taken as the average over those trials. Eppstein’s algorithm is designed only for cubic graphs and so was tested only on the cubic instances, that is, the Flower snarks and the generalised Petersen graphs. Eppstein’s algorithm finds all Hamiltonian cycles, so in the case of Hamiltonian graphs the total time was divided by the number of Hamiltonian cycles. Also note that Eppstein’s algorithm, like SLH (and unlike Concorde and LKH), is a deterministic algorithm.
The results of the above comparisons are reported in Table 1. Assessment Fail means that the solver terminated without managing to find a Hamiltonian cycle despite the graph being Hamiltonian, while Time Fail means that the solver had not concluded within 48 hours. Note that, for the nonHamiltonian instances (the Flower snarks), assessment Fail does not apply as there is no Hamiltonian cycle to find.
Graphs  Concorde Time(sec)  LKH Time(sec)  LKH Success(%)  LKH w/o presolve Time(sec)  LKH w/o presolve Success(%)  SLH Time(sec)  SLH Stage  Eppstein Time(sec)  
ALB 1000  4.95  0.0  100  0.0  100  0.16  1   
ALB 2000  7.30  0.0  100  0.0  100  0.79  1   
ALB 3000a  9.56  0.1  100  0.1  100  3.19  1   
ALB 3000b  9.94  0.1  100  0.1  100  3.31  1   
ALB 3000c  9.95  0.1  100  0.1  100  2.91  1   
ALB 3000d  10.14  0.1  100  0.1  100  3.26  1   
ALB 3000e  10.44  0.1  100  0.1  100  2.77  1   
ALB 4000  13.45  0.1  100  0.1  100  5.75  1   
ALB 5000  17.24  0.1  100  0.1  100  12.48  1   
Flower5  0.15  0.0  100  0.0  100  0.04  3  0.57 
Flower15  2014.01  0.1  100  0.1  100  1.80  3  14.31 
Flower25  Time Fail  0.2  100  0.2  100  12.26  3  11675.73 
Flower35  Time Fail  0.3  100  0.3  100  41.91  3  Time Fail 
Sheehan50  0.23  0.0  100  2.8  100  0.02  1   
Sheehan60  0.33  0.0  100  16.5  100  0.27  1   
Sheehan70  0.41  0.0  100  69.2  96  0.29  1   
Sheehan80  0.59  0.0  100  –  24  1.79  1   
GP(39,2)  34.38  0.0  100  0.0  100  0.01  1  0.6 
GP(45,2)  50.91  0.0  100  0.0  100  0.06  1  3.4 
GP(51,2)  50.09  0.0  99.8  0.0  99.8  0.07  1  9.5 
GP(63,2)  737.88  0.0  84.9  0.0  83.8  0.15  1  129.1 
GP(123,2)  Fail  0.3  00.1  0.3  00.1  6.27  2  Time Fail 
GP(243,2)  Fail  0.9  00.0  0.9  00.0  620.48  3  Time Fail 
Table 1 reveals that the performance of SLH compares increasingly favourably as the difficulty of the graph grows. For general instances, LKH dominates both Concorde and SLH, but it struggles to solve reliably and efficiently when there is only a small number of orderings with the minimal number of gaps. Although LKH is able to solve the Sheehan graphs very quickly, it does so in its presolve phase. As Table 1 demonstrates, its performance on these graphs deteriorates when this phase is disabled. Of course, since SLH does not currently have any presolve phase, it is arguably more meaningful to compare it to LKH without presolve. The latter, like SLH, is essentially an edge-exchange type algorithm. For the Flower snarks, which are hypohamiltonian, Concorde quickly finds a Hamiltonian path (the optimal solution) but takes much longer to determine that no Hamiltonian cycle exists.
For the two largest tested instances of the generalised Petersen graphs, SLH had to proceed to later stages, where opening transformations were performed, in order to find a Hamiltonian cycle. For these two instances, Eppstein’s algorithm was unable to provide a solution within 48 hours, and in both cases Concorde encountered a numerical error that prevented it from returning any result at all. LKH was able to find a solution to GP(123,2), but only in nine out of 1000 attempts (or eleven out of 1000 without the presolve phase), and failed to produce any solutions to GP(243,2) in 1000 attempts, with or without the presolve phase.
For the cubic graphs, we also ran Eppstein’s algorithm, whose performance was dominated by that of the other three algorithms; however, it should be noted that Eppstein’s algorithm enumerates all Hamiltonian cycles, rather than terminating once one is found. The adaptation of SLH to find additional Hamiltonian cycles, after a first has been identified, is a topic for future research.
For all four solvers tested, the hardest graphs in our experiments were the generalised Petersen graphs GP(123,2) and GP(243,2). It is interesting to note that, although LKH failed to find a Hamiltonian cycle in GP(243,2), it very quickly descended to an ordering that contained exactly one gap. At the same time, that ordering remained far from all three Hamiltonian cycles, owing to the small number of snakes it had in common with any of them; closing the final gap would therefore require a complicated non-sequential exchange. In our experiments, the final ordering generated by Helsgaun’s LKH for GP(243,2) shared only a small number of snakes with each of the three Hamiltonian cycles.
Table 2 reports the success rate of SLH over large test sets. SLH was first tested on the Foster census (see Bouwer et al. bouwer and Royle Royle ). SLH was next tested on 1,000,000 cubic graphs of size 100 (generated uniformly at random, see wormald ), separated into four test sets. Finally, SLH was tested on 10,000 cubic graphs of size 1,000 (also generated uniformly at random). The success rate of SLH was checked by confirming that the reported Hamiltonian cycles exist in the graph^{4}^{4}4Of course, this is guaranteed by the algorithm, but was checked nonetheless to confirm our implementation was accurate., and by independently confirming any result of non-Hamiltonicity. There were 95 non-Hamiltonian graphs among the 1,000,000 randomly generated cubic graphs of size 100. All of the randomly generated cubic graphs of size 1,000 were Hamiltonian. The reported times are the total times to solve all instances in each set of graphs, with the input/output times removed.
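The verification step mentioned above — confirming that a reported Hamiltonian cycle really exists in the graph — amounts to checking that the reported vertex sequence is a permutation of all vertices whose consecutive pairs, including the wrap-around pair, are edges. A minimal sketch (the function name is ours):

```python
def is_hamiltonian_cycle(n, edges, cycle):
    """Check that `cycle` (a list of vertices 0..n-1) visits every vertex
    of an n-vertex graph exactly once and that consecutive vertices
    (including the wrap-around pair) are joined by edges of the graph."""
    if sorted(cycle) != list(range(n)):
        return False                      # must be a permutation of 0..n-1
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((cycle[i], cycle[(i + 1) % n])) in edge_set
               for i in range(n))
```

This check runs in time linear in the number of edges, so it adds negligible overhead even over millions of test instances.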
Graphs  Number  Size range  Total time (s)  Success rate 

Foster Census  332  4 – 998  17.74  100 
Cubic100 #1  250,000  100  448.39  100 
Cubic100 #2  250,000  100  439.78  100 
Cubic100 #3  250,000  100  476.10  100 
Cubic100 #4  250,000  100  440.40  100 
Cubic1000  10,000  1000  31.72  100 
As can be seen in Table 2, SLH succeeded in finding a Hamiltonian cycle in all Hamiltonian graphs tested. To date the authors have not found an instance of a Hamiltonian graph for which SLH has terminated without finding a Hamiltonian cycle, having tested graphs of sizes up to 5000.
5 Conclusions
In this paper we presented SLH, a new heuristic of polynomial complexity for solving the Hamiltonian Cycle Problem. Table 1 shows that SLH, while still in its infancy, is already a competitive HCP solver, while Table 2 indicates that SLH is extremely reliable, succeeding in finding a Hamiltonian cycle in every single tested Hamiltonian graph, notably including graphs possessing very few Hamiltonian cycles. This includes graphs where other benchmark solvers failed to find any such cycles. Furthermore, these results were achieved without the need for the randomisation techniques that are a common feature of many contemporary HCP heuristics. We believe that balancing the use of our opening and closing generator transformations overcomes some of the difficulties that other heuristics try to overcome by randomisation.
In addition, we expect that in future versions the speed of SLH will be significantly improved by optimising the implementation in a variety of ways. In particular, replacing SLH closing transformations with Lin–Kernighan-type sequential transformations, such as those implemented in LKH Helsgaun , seems promising and deserves further exploration. Similarly, investigations of both alternative initial orderings and alternative stopping rules could lead to improvements in performance. Taking advantage of a presolver, such as the subgradient optimisation routine used in LKH, is also likely to improve the solving time of the algorithm in general. A demonstration version of SLH is available on our website SLH , where users are invited to submit and solve their HCP instances through our online interface.
Acknowledgments
The authors gratefully acknowledge useful comments from the anonymous referees which improved the exposition, and useful discussions with Brendan McKay and Gordon Royle that helped us to find suitable test instances. The editor, William Cook, also contributed significantly by suggesting further testing and corrections of inaccurate statements. The research presented in this manuscript was supported by the ARC Discovery Grant DP120100532.
References
 [1] Applegate, D.L., Bixby, R.B., Chvátal, V., and Cook, W.J. Concorde TSP Solver: http://www.tsp.gatech.edu/concorde/index.html.
 [2] Applegate, D.L., Bixby, R.B., Chvátal, V., and Cook, W.J. The Traveling Salesman Problem: A Computational Study. Princeton University Press (2006).
 [3] Baniasadi, P., Clancy, K., Ejov, V., Filar, J.A., Haythorpe, M., and Rossomakhine, S. Snakes and Ladders Heuristic – Web Interface: http://fhcp.edu.au/slhweb/ (2012).
 [4] Bouwer, I.Z., Chernoff, W.W., Monson, B., Star, Z. The Foster Census. Charles Babbage Research Center, Winnipeg (1988).
 [5] Eppstein, D. The Traveling Salesman Problem for Cubic Graphs. In Frank Dehne, JörgRüdiger Sack, and Michiel Smid, editors, Algorithms and Data Struct., volume 2748 of Lecture Notes in Computer Science, pages 307–318. Springer Berlin (2003).
 [6] Flood, M.M. The Traveling Salesman Problem. Oper. Res. 4, 61–75 (1956).
 [7] Gutin, G. and Punnen, A.P. Traveling Salesman Problem and Its Variations. Kluwer Academic Publishers (2002).
 [8] Helsgaun, K. An Effective Implementation of the Lin–Kernighan Traveling Salesman Heuristic. Eur. J. Oper. Res. 126, 106–130 (2000).
 [9] Isaacs, R. Infinite Families of Nontrivial Trivalent Graphs Which Are Not Tait Colorable. Amer. Math. Monthly 82:221–239 (1975).

 [10] Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., and Shmoys, D.B. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization. John Wiley and Sons (1985).
 [11] Lin, S. Computer Solutions of The Traveling Salesman Problem. The Bell Systems Tech. J. 44, 2245–2269 (1965).
 [12] Lin, S. and Kernighan, B.W. An Effective Heuristic Algorithm for The Traveling Salesman Problem. Oper. Res. 21, 496–516 (1973).
 [13] Royle, G., Conder, M., McKay, B., and Dobcsányi, P. Cubic symmetric graphs (The Foster Census): http://school.maths.uwa.edu.au/~gordon/remote/foster (2001).
 [14] Sheehan, J. Graphs with exactly one hamiltonian circuit. J. Graph Th. 1:37–43 (1977).
 [15] TSPLIB. Hamiltonian cycle problem (HCP): http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95 (2008).
 [16] Weisstein, E.W. Generalized Petersen Graph (From MathWorld – A Wolfram Web Resource): http://mathworld.wolfram.com/GeneralizedPetersenGraph.html.
 [17] Wormald, N. Models of Random Regular Graphs. In Surveys in Combinatorics, Cambridge University Press, pages 239–298 (1999).