1 Introduction
In recent years, a large number of metaheuristic optimization algorithms (MOAs) have been proposed, many of them inspired by natural phenomena glover2006. Examples include the Ant Colony System (ACS) algorithm, inspired by the foraging behavior of certain species of ants, and Simulated Annealing (SA), which borrows ideas from metallurgy a6; czech2009. Metaheuristics are often applied to find solutions of an acceptable quality to difficult combinatorial optimization problems, particularly NP-complete ones. A good example is the Sequential Ordering Problem (SOP), which consists in finding a minimum-weight Hamiltonian path in a weighted directed graph subject to precedence constraints among the nodes. Although less time-consuming than exact approaches, MOAs differ in their efficiency, which can sometimes be improved by combining ideas taken from other MOAs.
1.1 Contributions
The main aim of the paper is to show that Simulated Annealing can be used to improve the convergence speed of the ACS and Enhanced ACS (EACS) algorithms. The proposed solution is easy to implement and does not increase the algorithms’ asymptotic complexity. Moreover, we developed a modified version of the SOP-3-exchange local search (LS) heuristic proposed by Gambardella et al. a11 for the SOP. The modification, again, includes ideas taken from the SA to allow the algorithm to escape local optima. A thorough experimental evaluation on a number of SOP instances from well-known datasets confirms the efficiency of the proposed algorithms. In fact, in several cases we obtained results of a better quality than those from state-of-the-art methods in the literature a12; a13. The paper is organized as follows: Section 2 focuses on recent ideas for improving the efficiency of the ACS (and related algorithms), some of which include the SA. We also briefly discuss recent work on solving the SOP. In Sec. 3 we describe our approach to improving the convergence speed of the ACS by using the SA. Section 4 presents a similar approach, but aimed at improving the local-minima escape ability of the SOP-3-exchange local search algorithm, which is paired with the ACS and the EACS when solving the SOP. Section 5 presents the results of computational experiments conducted on two sets of SOP instances. The last section contains the conclusions and some ideas for further work.
2 Related work
Multiple modifications to the ACO family of algorithms have been proposed in the literature. Most of them concern pheromone update rules and parameter tuning. Hassani et al. a34 proposed a modified global pheromone update rule for the ACS in which not only the global best ant but also ants with inferior solutions may update the pheromone, with a probability calculated according to the acceptance rule of the SA. The initial temperature was set arbitrarily to 100 and an exponential cooling schedule was applied. Limited computational experiments showed that in most cases the algorithm achieved better results than the Ant System. Bouhafs et al.
a33 proposed a two-phase approach based on the SA and ACS to solve the Capacitated Location-Routing Problem. The SA was used to find facility locations, while the ACS was used to solve the corresponding location-routing problem. In most cases the algorithm was able to improve the best-known solutions to the problem. In most of the ACO and SA combinations the latter plays the role of a local search used to improve the quality of the solutions generated by the ants. Behnamian et al. a42 proposed a hybrid of the ACO, SA and Variable Neighborhood Search algorithms for solving parallel-machine scheduling problems. The SA was used to guide the dedicated LS. A successful combination of the ACS and the SA was proposed by Ayob and Jaradat a35 for solving course timetabling problems. The SA was used along with Tabu Search to improve solutions generated by the ACS. The results of the proposed algorithm were of better quality than those of the ACS alone or of the MAX-MIN Ant System. The SA was again used as the LS for the ACS by Wassila and Boukra a36. The approach was slightly faster than, and comparable in terms of solution quality to, other nature-inspired metaheuristics for the intrusion detection problem. Similarly, the SA played the role of an LS improving the results generated by the ants in the ACS solving the Vehicle Routing Problem with Time Windows a37. Chen and Chien a38 proposed a complex hybrid of four metaheuristics, namely the Genetic Algorithm, SA, ACS, and Particle Swarm Optimization, for solving the TSP. The SA played the role of a mutation operator in the GA part of the hybrid. In a paper by Xi et al. a39 the solutions generated by the ant system were later improved by the SA when solving the 3D/2D fixed-outline floorplanning problem. McKendall and Shang a40 used the SA as the LS method in one of their Hybrid Ant System algorithms for solving the dynamic facility layout problem. The resulting algorithm was able to improve some of the best-known results for the problem. Similarly, the solutions generated by the ACO were a starting point for the SA solving the problem of managing energy resources considering intensive use of electric vehicles a41. The combined approach produced solutions of better quality than the SA or ACO alone.
2.1 Sequential Ordering Problem
The Sequential Ordering Problem is a generalization of the Asymmetric TSP (ATSP). The goal is to find the shortest Hamiltonian path from a starting city (source node) to a destination city (final node) by going through each of the remaining cities (nodes) exactly once. Moreover, some cities have to be visited before others. Due to the precedence constraints, the problem is sometimes referred to as the Precedence Constrained Traveling Salesman problem (PCATS). The SOP can be viewed as a scheduling problem in which many jobs have to be scheduled on a single machine. The processing times for the jobs are given along with the setup times between pairs of jobs. Also, some jobs have to be completed before others. The goal is to minimize the total makespan a15. Other real-world problems that can be modeled as an instance of the SOP include the Single Vehicle Routing Problem with pickup and delivery constraints or the routing of a stacker crane in an automatic storage system a14.
An instance of the SOP can be described using a graph $G = (V, E)$, where $V$ is a set of nodes containing the starting node $v_s$ and the final node $v_f$, and $E$ is a set of weighted directed edges. Additionally, a precedence graph $P = (V, R)$ is given, where $R$ defines the precedence constraints, i.e. an edge $(u, v) \in R$ if node $u$ has to precede node $v$ in every feasible solution. By definition, the starting node precedes every other node, i.e. $(v_s, v) \in R$ for every $v \in V \setminus \{v_s\}$, and the final node has to be preceded by all other nodes, i.e. $(v, v_f) \in R$ for every $v \in V \setminus \{v_f\}$. The precedence graph, $P$, has to be acyclic for feasible solutions to exist.
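As an illustration of the precedence constraints, a feasibility check for a candidate path can be sketched as follows (a minimal Python sketch; the names `is_feasible` and `predecessors` are hypothetical, with `predecessors[v]` standing for the set of nodes that must precede `v`):

```python
def is_feasible(path, predecessors):
    """Check that every node in `path` appears after all of its
    required predecessors (the edges of the precedence graph)."""
    position = {node: i for i, node in enumerate(path)}
    return all(position[u] < position[v]
               for v in path
               for u in predecessors.get(v, ()))
```

For example, with the single constraint that node 1 must precede node 3, the path 0, 1, 3, 2, 4 is feasible while 0, 3, 1, 2, 4 is not.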
On the one hand, the precedence constraints make solving the SOP more difficult than the ATSP because the solution construction algorithms have to check the precedence constraints. On the other hand, the precedence constraints may limit the number of feasible solutions, which can be beneficial for exact methods, such as branch-and-cut a16. All of the SOP instances from the SOPLIB2006 repository with the largest relative number of precedence constraints (60%) were solved to optimality, whereas some of those in which the constraints concerned 15% of all edges remained unsolved a13.
The SOP was introduced, along with a mathematical model and exact algorithms, by Escudero et al. a19. Several exact approaches for solving the SOP have since been proposed. Escudero et al. applied Lagrangian relaxation to solve the SOP in what the authors called the Relax-and-Cut algorithm a20. Hernádvölgyi et al. proposed a branch-and-bound algorithm with the lower bounds obtained from homomorphic abstractions of the original state space a16. The authors solved to optimality several instances (with 40–50 vertices) from the TSPLIB repository.
Gouveia et al. proposed a cutting plane algorithm with SOP formulations involving additional exponential-sized sets of inequalities a17. The authors were able to improve the best-known lower bounds for many SOP instances from the TSPLIB repository and to solve to optimality instance p43.4 (the calculations took 15282 sec.). Later, Gouveia and Ruthmair solved to optimality several SOP instances from the SOPLIB2006 repository by using branch-and-cut algorithms combined with several preprocessing methods, heuristics, and separation routines a13. They used a single core of an Intel Xeon E5540 or E5649, both with a 2.53 GHz clock. For most of the instances the optima were found in under an hour, but for 12 instances no optima were found with the time limit set to 24 hours.
The exact methods are time-consuming, particularly if the size of the problem reaches a few hundred nodes, hence much of the research has focused on heuristic algorithms for the SOP. Guerriero and Mancini proposed a parallel rollout heuristic in which several threads simultaneously visit different portions of the solution space and periodically exchange information about the solutions found a21. The algorithm was able to match the best-known solutions for most of the SOP instances from the TSPLIB repository, although its main drawback was a high computational cost.
Gambardella et al. proposed a combination of the Ant Colony System and a novel LS procedure called the SOP-3-exchange a11. The resulting algorithm, denoted as HAS-SOP, improved the best-known results for many SOP instances from the TSPLIB repository. Montemanni et al. extended HAS-SOP with a Heuristic Manipulation Technique which creates and adds artificial precedence constraints to the original problem a23. The method led to better results, particularly for large SOP instances.
A discrete Particle Swarm Optimization algorithm hybridized with the SOP-3-exchange heuristic was proposed by Anghinolfi et al. a24. The algorithm was able to improve many of the best results presented in a11; a23. Later, Gambardella et al., basing their findings on an analysis of the drawbacks of the HAS-SOP algorithm, proposed an improved ACS version called the Enhanced Ant Colony System (EACS) a22. Two main changes were proposed. First, the construction phase of the EACS uses information about the best solution found so far. Second, the LS is run only if the current solution is within 20% of the best solution found. The EACS was able to further improve some of the best results obtained by Anghinolfi et al. a24 and to date remains one of the most efficient methods for solving the SOP.
3 Improving ACS Convergence with Simulated Annealing
Ant colony optimization (ACO) is probably the best-known algorithm inspired by the foraging behavior of ants in nature. It is a population-based metaheuristic that is often used to solve difficult combinatorial and continuous optimization problems. In general, it does not guarantee that the optimal solution will be found, but the solutions that are found are often of good enough quality for practical use a9.
In the ACO, a number of artificial agents (ants) iteratively construct complete solutions to an optimization problem. An ant starts with an empty solution and, in subsequent steps, extends it with components selected from the set of all available components. Each component has an associated pheromone trail and a heuristic value. The higher the product of the pheromone concentration (value) and the heuristic value, the higher the probability that the component will be selected by the ant.
In nature, ants communicate indirectly with one another by depositing small amounts of chemical substances called pheromones, e.g., an ant that has found a food source marks the path to the nest with small amounts of pheromone. The pheromone trail attracts other ants and leads them to the food source. The more ants that repeat the process, the higher the concentration of the pheromone trail becomes, hence the process becomes autocatalytic. The pheromone evaporates with time, so the pheromone concentration does not increase indefinitely. The ACO algorithms use artificial pheromone trails, with the pheromone concentration represented as real numbers. The set of all pheromone trails is usually called a pheromone memory and plays a crucial role in the performance of the ACO family of algorithms a5; a9.
For the TSP (and related problems) the problem is usually modeled by using a graph $G = (V, E)$. An artificial ant constructs its solution starting from a randomly selected node. In subsequent steps it moves from the current node to one of the unvisited neighbor nodes by using the corresponding edge. The pheromone trails are deposited on the edges, $e \in E$, of graph $G$ and, together with a priori knowledge about the problem reflected in the heuristic values associated with each edge, they guide the construction process.
The Ant Colony System is an improved version of the Ant System by Dorigo et al. a8. In the ACS the ant located at node $i$ selects the next node $j$ according to a pseudorandom proportional rule a9:

$$ j = \begin{cases} \arg\max_{u \in J_k(i)} \left\{ \tau_{iu} \cdot [\eta_{iu}]^{\beta} \right\} & \text{if } q \le q_0, \\ J & \text{otherwise,} \end{cases} \qquad (1) $$

where $d_{iu}$ is a cost associated with an edge $(i, u)$, $\eta_{iu} = 1/d_{iu}$ is the corresponding heuristic value, $\tau_{iu}$ is the value of the pheromone trail on edge $(i, u)$, $J_k(i)$ is a set of available (candidate) nodes of ant $k$, $\beta$ is a parameter, $q$ is a random number drawn uniformly from $[0, 1]$, and $q_0$ is a parameter, $0 \le q_0 \le 1$.
$J$ is a node (city) selected according to the probability distribution defined by:

$$ p_{iJ} = \frac{\tau_{iJ} \cdot [\eta_{iJ}]^{\beta}}{\sum_{u \in J_k(i)} \tau_{iu} \cdot [\eta_{iu}]^{\beta}}. \qquad (2) $$
The choice defined by Eq. 1 depends on the value of parameter $q_0$. If the randomly drawn number $q$ is lower than $q_0$, then the choice is greedy and the ant selects the node to which an edge leads with the maximum product of the pheromone trail and heuristic values. Otherwise ($q \ge q_0$), the choice is random, with the probability distribution given by Eq. 2. The first case is often referred to as exploitation of the knowledge gathered by the ants (in the pheromone memory). Usually, a value of $q_0$ close to 1 (often 0.9 and above) leads to good quality results in a shorter period of time compared to the base ACO algorithm a9. Some authors even use a higher value calculated as $q_0 = 1 - s/n$, where parameter $s$ is the number of nodes that should be selected randomly, with the probability defined by Eq. 2, during construction of a solution of size $n$ a11.
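The pseudorandom proportional rule can be sketched as follows (a hedged Python sketch; the dictionaries `tau` and `eta`, keyed by edges, and the function name `select_next` are hypothetical placeholders for the pheromone and heuristic values):

```python
import random

def select_next(current, candidates, tau, eta, beta=2.0, q0=0.9, rng=random):
    """ACS pseudorandom proportional rule (sketch of Eqs. 1-2).
    With probability q0 pick greedily the candidate maximising
    tau * eta**beta; otherwise sample proportionally to that product."""
    weights = {j: tau[(current, j)] * eta[(current, j)] ** beta
               for j in candidates}
    if rng.random() < q0:                     # exploitation: greedy choice
        return max(weights, key=weights.get)
    total = sum(weights.values())             # biased exploration (Eq. 2)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for j, w in weights.items():
        acc += w
        if acc >= r:
            return j
    return j
```

With `q0 = 1.0` the rule degenerates to a purely greedy choice, which is convenient for testing.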
During construction of the solutions the ants in the ACS update the values of the pheromone trails on the traversed edges. Each ant, after making a move from node $i$ to node $j$, applies a local pheromone update rule that decreases the amount of pheromone on edge $(i, j)$ according to:

$$ \tau_{ij} \leftarrow (1 - \rho) \cdot \tau_{ij} + \rho \cdot \tau_0, \qquad (3) $$

where $\rho \in (0, 1]$ is a parameter regulating evaporation of the pheromone over time and $\tau_0$ is the initial pheromone level. The rationale behind formula (3) is that it lowers the probability of selecting the same nodes by subsequent ants, hence it increases variety in the constructed solutions.
The global pheromone update performed after the ants have completed construction of their solutions is more important. The update rule increases the pheromone levels on the trails corresponding to the best solution found so far, $s^{gb}$, with its value (cost) denoted by $L^{gb}$. For each edge $(i, j) \in s^{gb}$, the pheromone changes according to the formula:

$$ \tau_{ij} \leftarrow (1 - \rho) \cdot \tau_{ij} + \rho \cdot \Delta\tau_{ij}, \qquad (4) $$

where $\Delta\tau_{ij} = 1 / L^{gb}$ and $\rho$ is a parameter regulating the strength of the pheromone increase. The global pheromone update ensures that edges belonging to the current best solution have higher probabilities of being selected in the algorithm’s subsequent iterations. The global best solution is used during the global pheromone update because it leads to slightly better solutions than the iteration best solution a9.
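The two pheromone update rules can be sketched as follows (a minimal Python sketch with a dictionary-based pheromone memory; the function names are hypothetical):

```python
def local_update(tau, i, j, rho=0.1, tau0=0.01):
    """Local rule (sketch of Eq. 3): decay the traversed edge
    towards the initial pheromone level tau0."""
    tau[(i, j)] = (1.0 - rho) * tau[(i, j)] + rho * tau0

def global_update(tau, best_path, best_cost, rho=0.1):
    """Global rule (sketch of Eq. 4): reinforce only the edges of the
    best-so-far solution, proportionally to its quality 1/best_cost."""
    deposit = 1.0 / best_cost
    for i, j in zip(best_path, best_path[1:]):
        tau[(i, j)] = (1.0 - rho) * tau[(i, j)] + rho * deposit
```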
In order to further shorten the computation of Eq. 1, the so-called candidate set is used, which consists of the nearest neighbors of the current node. The size of the candidate set is usually in the range of 10 to 25 a5; a9. For comparison, the size of the problem is often two or three orders of magnitude larger, hence the use of candidate sets further limits exploration of the solution search space. The candidate sets are a greedy heuristic based on the observation that good quality solutions are comprised mainly of short edges. If all of the candidate set elements are already a part of the constructed solution, the ant selects one of the remaining (unvisited) nodes. The candidate set is usually computed at the beginning and does not change. Randall and Montgomery a26 investigated the idea of dynamic candidate set updates for the TSP and the Quadratic Assignment Problem (QAP). The dynamic versions resulted in solutions of better quality but also significantly increased the computation time of the whole algorithm.
3.1 Enhanced ACS
The Enhanced ACS algorithm proposed by Gambardella et al. is an efficient metaheuristic for the SOP a22. It differs from the ACS in two ways. The first is a modified solution construction phase which is much more focused on the best solution found so far. Instead of directly applying Eq. 1, an ant selects the node which follows the current node in the best solution so far (if the random number $q$ is lower than the parameter $q_0$). Only if that node is already a part of the constructed solution does the ant consider other nodes, i.e. it selects the edge with the maximum product of the pheromone and heuristic values. If $q \ge q_0$, the selection process from the ACS is used. Parameter $q_0$ usually has a value of 0.9 or higher, hence this modification significantly speeds up the construction process, although it limits the exploration capability of the EACS, and without a strong LS it achieves results of lower quality than the ACS a12.
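The EACS construction step described above can be sketched as follows (a hedged Python sketch; `best_next` maps a node to its successor in the best solution so far, and `acs_select` stands for the plain ACS rule of Eqs. 1 and 2; both names are hypothetical):

```python
def eacs_select(current, visited, best_next, candidates, tau, eta,
                beta, q, q0, acs_select):
    """Sketch of the EACS construction step: when q < q0, copy the
    successor of the current node in the best-so-far solution; if that
    node is already used, fall back to the greedy choice over unvisited
    candidates; otherwise use the plain ACS selection rule."""
    if q < q0:
        nxt = best_next.get(current)
        if nxt is not None and nxt not in visited:
            return nxt
        # greedy fallback: maximum product of pheromone and heuristic
        return max((c for c in candidates if c not in visited),
                   key=lambda c: tau[(current, c)] * eta[(current, c)] ** beta)
    return acs_select(current, candidates, tau, eta)
```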
The second modification is strong integration of the solution construction phase with the LS. The LS is run only if the cost of the current solution is within 20% of the best solution found so far. Also, the LS is initialized so that only elements of the current solution which are out of order with respect to the best solution are placed on the socalled don’t push stack. The elements of the stack are the starting points for the LS. This increases the emphasis on areas of the solution search space that are potentially unexplored.
A slightly modified version of the EACS was proposed by Ezzat a27. The main difference concerns the choice of the next node in the solution construction process. At first the algorithm tries to select the node which follows the current node in the best solution so far. If that node is already a part of the constructed solution, it selects the next node with the probability defined by Eq. 2. This change favors exploration and makes the algorithm less exploitative than the EACS, but still more exploitative than the ACS. Later, Ezzat et al. adapted the EigenAnt algorithm to solve the SOP ezzat2014. The computational experiments showed that the proposed algorithm had performance comparable to the EACS.
3.2 Simulated Annealing
Simulated Annealing is one of the most well-known general metaheuristic optimization methods. It was inspired by the Monte Carlo method of sampling the states of a (physical) thermodynamic system. In the SA, a solution to the optimized problem is equivalent to a state of the thermodynamic system, and its quality corresponds to the system’s current energy a32. The SA works as follows: starting from an initial solution $s_0 \in S$, a sequence of solutions $s_1, s_2, \ldots$ is generated, where $S$ is the set of all feasible solutions. Given a current solution $s_i$, a candidate solution $s' \in S$ is generated and its cost $f(s')$ is calculated. The next solution $s_{i+1}$ is selected according to:

$$ s_{i+1} = \begin{cases} s' & \text{if } f(s') \le f(s_i), \\ s' & \text{with probability } P, \text{ if } f(s') > f(s_i), \\ s_i & \text{otherwise.} \end{cases} \qquad (5) $$

Probability $P$ is defined as $P = \exp\!\left( -\frac{f(s') - f(s_i)}{T} \right)$, where $T$ is called a temperature. The physical analogy on which the SA is based requires that the system be kept close to a thermal equilibrium as the temperature is lowered.
The most often used cooling schedule is the exponential (geometric) schedule of the form $T_{k+1} = \alpha \cdot T_k$, where $\alpha \in (0, 1)$ is a parameter. In fact, the exponential cooling schedule usually lowers the temperature too fast for the system to reach a near-equilibrium state and does not guarantee convergence to the global optimum. Nevertheless, it is useful in practice because it is easy to implement and often leads to good quality solutions if the computation time is limited a31. More advanced cooling schedules have been proposed; two well-known ones are the adaptive cooling schedule by Huang et al. a32 and the efficient cooling schedule by Lam a30.
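The geometric schedule is straightforward to implement, e.g. as a generator (a minimal sketch; the function name is hypothetical):

```python
def geometric_cooling(t0, alpha=0.95):
    """Yield the temperature sequence T_0, alpha*T_0, alpha^2*T_0, ...
    of the exponential (geometric) cooling schedule."""
    t = t0
    while True:
        yield t
        t *= alpha
```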
3.3 Combining ACS with Simulated Annealing
The ACS generally offers a better convergence speed than the Ant System or the ACO a5. This stems, among other things, from the more exploitative solution construction process and the global pheromone update rule that places emphasis on the best solution found so far. This usually speeds up the process of finding good quality solutions, but also makes escaping local minima very difficult. Simulated Annealing, on the other hand, offers a simple mechanism to escape local minima. We propose a way to combine the ACS and SA that enhances the ACS search process while maintaining its exploitation-oriented nature. The proposed algorithm, the ACS with the SA (ACS-SA in short), can be summarized as follows. The ACS search process is guided (in part) by the pheromone trail values. At the end of each iteration the global pheromone update rule increases the values of the pheromone trails corresponding to the components (edges) of the current best solution (global best). In the proposed ACS-SA algorithm the global update rule uses instead an active solution, which may not necessarily be the best solution found so far. At the end of every iteration each of the solutions generated by the ants is compared with the active solution. If the new solution is of better quality, it replaces the current active solution. Otherwise, the new solution may still replace the active solution, but with a probability defined by the Metropolis criterion known from the SA. While the ACS is always focused on the neighborhood of the best solution found so far and can become trapped in a local optimum for a long time, the proposed ACS-SA has a greater chance of escaping local optima by shifting focus to a solution with a higher cost.
Figure 1 presents a pseudocode of the proposed ACS-SA algorithm. The major part of the algorithm does not differ from the ACS, i.e. the only differences are related to temperature initialization (line 1), the cooling schedule (line 26) and the active solution selection process (line 19). Inclusion of the SA in the ACS results in a more exploratory search process, but it may also lead to a prolonged examination of areas of the solution space that contain solutions of a poor quality. This is prevented by allowing the current global best solution to be selected as the active solution with a probability of 0.1 (line 20). This heuristic might not be necessary if a more advanced cooling schedule were used. The present work is intended as a proof of concept that the SA may be used to improve the convergence speed of the ACS, hence the geometric cooling schedule was adopted for its simplicity. In future work a more advanced schedule, e.g. the cooling schedule by Lam a30, could be applied.
Figure 2 presents the active solution selection procedure. The process iterates over a set of solutions built by the ants. If the cost of an ant’s solution is lower than the cost of the active solution, it replaces the active solution (lines 3–5 in Fig. 2). Otherwise, the ant’s solution (of a worse quality) may replace the active solution with a probability calculated according to the Metropolis criterion from the SA (lines 6–7). As the temperature is lowered, the probability of accepting a worse solution goes down to 0 and the process becomes equivalent to that of the ACS.
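The selection procedure of Fig. 2 can be sketched as follows (a hedged Python sketch, not the authors' implementation; solutions are represented as (solution, cost) pairs and the names are hypothetical):

```python
import math
import random

def select_active(active, active_cost, ant_solutions, temperature, rng=random):
    """Sketch of the ACS-SA active-solution selection: a better ant
    solution always replaces the active one; a worse solution replaces
    it with the Metropolis probability exp(-(cost difference)/T)."""
    for sol, cost in ant_solutions:
        if cost < active_cost:
            active, active_cost = sol, cost
        elif rng.random() < math.exp(-(cost - active_cost) / temperature):
            active, active_cost = sol, cost
    return active, active_cost
```

As the temperature approaches zero, the acceptance probability for worse solutions vanishes and the procedure reduces to the plain ACS behavior.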
The initial temperature plays an important role in the SA. In our work we applied the idea of an adaptive temperature calculation which was proposed in a47. The calculation requires a sample of randomly generated solutions whose values (costs) are used to calculate the initial temperature according to:
$$ T_0 = -\,\frac{\bar{\Delta} + 3\sigma}{\ln P_0}, \qquad (6) $$

where $\bar{\Delta}$ is the mean of the absolute differences between the costs of consecutive pairs of solutions from the sample, $\sigma$ is the sample standard deviation of these differences, and $P_0$ is a parameter denoting the probability of accepting a worse solution, i.e. one with a higher cost. The idea behind Eq. 6 is based on the central limit theorem, which states that the mean of a large sample of independent random variables is approximately normally distributed; hence, almost all (approx. 99.7%) absolute differences between the quality of randomly generated solutions fall in the range of $[0, \bar{\Delta} + 3\sigma]$. Knowing this approximation of the highest difference in quality between a pair of solutions makes it possible to calculate the initial temperature so that the probability of accepting a worse solution is $P_0$. Although the temperature initialization requires additional computations, it does not increase the asymptotic complexity of the ACS algorithm. In our experiments a sample of 1000 random solutions was used due to its negligible additional cost; however, a much smaller number could also be acceptable.
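The adaptive temperature initialization can be sketched as follows (a minimal Python sketch; the function name is hypothetical):

```python
import math
import statistics

def initial_temperature(sample_costs, p0=0.5):
    """Sketch of Eq. 6: from the absolute cost differences between
    consecutive sampled solutions, pick T0 so that a worse solution
    differing by roughly the largest observed amount (mean + 3 sigma)
    is accepted with probability about p0."""
    diffs = [abs(a - b) for a, b in zip(sample_costs, sample_costs[1:])]
    mean = statistics.fmean(diffs)
    sigma = statistics.stdev(diffs) if len(diffs) > 1 else 0.0
    return -(mean + 3.0 * sigma) / math.log(p0)
```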
3.4 Combining Enhanced ACS with the SA
As described in Sec. 3.1, the EACS differs only slightly from the ACS. The differences concern the solution construction process and the LS application, hence it is straightforward to apply exactly the same ideas to incorporate the SA into the EACS as in the proposed ACS-SA algorithm. Due to its more exploitative behavior (relative to the ACS), the EACS is even more susceptible to getting trapped in local minima, hence it should also benefit from the SA component. The resulting algorithm will henceforth be denoted as the EACS-SA.
4 Efficient Local Search for the SOP
Even though the ACS, MMAS and related algorithms perform competitively with other nature-inspired metaheuristics, their convergence can still be improved with a problem-specific local search a9. When combined with the LS, the ACS is responsible for finding a candidate solution, while the aim of the LS is to improve it by performing small changes leading to a neighboring solution of a better quality. In this section we start with a description of the state-of-the-art LS heuristic for the SOP and later propose a modified version which incorporates the SA component.
4.1 SOP3exchange
Gambardella et al. a11 proposed an efficient LS heuristic for the SOP called the SOP-3-exchange. It adapts the 3-opt heuristic known from the TSP to the SOP without an increase in the algorithm's time complexity. The SOP-3-exchange belongs to the family of edge-exchange procedures, in which a new solution is generated by replacing $k$ existing edges with another set of $k$ edges for which the cost of the solution is lower. This operation is usually called a $k$-exchange, and the value of $k$ can be fixed (typically 2 or 3) or can vary, as in the Lin-Kernighan heuristic a28. Starting from the initial solution and applying the $k$-exchange iteratively until no further improving exchange exists leads to a $k$-optimal solution. Searching the whole neighborhood for an improving $k$-exchange requires, in the worst-case scenario, $O(n^k)$ time.
During a $k$-exchange procedure, $k$ existing edges are removed, producing disjoint paths which are then reconnected with $k$ new edges. In some cases, reconnection of the paths requires that some of them be reversed. For example, in the case of a 2-opt move on a closed path, after removal of two edges $(a, b)$ and $(c, d)$ the subpaths can only be reconnected by adding the edges $(a, c)$ and $(b, d)$, which requires a reversal of the subpath between $b$ and $c$. The reversal, however, is problematic for the SOP because the distances between the nodes are asymmetric, hence the length of the path after the reversal has to be recalculated, which requires $O(n)$ time. Because the cost of a $k$-opt move should be calculated in constant time, an efficient implementation of the $k$-opt heuristic for the SOP should be restricted only to path-preserving exchanges a11.
The smallest $k$ that allows a path-preserving exchange is $k = 3$; the resulting move, denoted as the path-preserving-3-exchange, is shown in Fig. 3. By removing the edges $(v_h, v_{h+1})$, $(v_i, v_{i+1})$ and $(v_j, v_{j+1})$ and adding the edges $(v_h, v_{i+1})$, $(v_j, v_{h+1})$ and $(v_i, v_{j+1})$, the two neighboring subpaths are swapped, thus preserving the relative order of the elements within each subpath. After performing the path-preserving-3-exchange one would still need to verify that the precedence constraints between the two subpaths are preserved. This requires $O(n^2)$ time in the general case but can be avoided, as in the method proposed by Gambardella et al. a11. Two procedures are necessary to reduce the computation time. The first requires keeping the lexicographic order while searching for the path-preserving-3-exchange. The second is the use of a labeling method.
Figure 3 shows how the route changes when applying the path-preserving-3-exchange. The three indices $h$, $i$, and $j$ ($h < i < j$) define two subpaths in the route: the left path, $(v_{h+1}, \ldots, v_i)$, and the right path, $(v_{i+1}, \ldots, v_j)$. The subpaths are swapped as a result of performing the exchange, i.e. the right path comes before the left path. This can only happen if there are no precedence constraints between the considered node and the nodes in the left path.
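The subpath swap itself reduces to a simple slice operation (a minimal Python sketch with 0-based indices h < i < j; the function name is hypothetical):

```python
def path_preserving_3_exchange(route, h, i, j):
    """Swap the 'left' subpath route[h+1..i] with the 'right' subpath
    route[i+1..j] without reversing either, so the relative order of
    the nodes inside each subpath (and the path direction) is kept."""
    return (route[:h + 1]          # prefix up to and including v_h
            + route[i + 1:j + 1]   # right path moves forward
            + route[h + 1:i + 1]   # left path moves back
            + route[j + 1:])       # suffix from v_{j+1} on
```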
The search for a path-preserving-3-exchange, as proposed by Gambardella et al. a11, consists of forward and backward searches for feasible exchanges. The forward search involves incrementing $j$ iteratively, thus increasing the length of the right path by one. This requires that the precedence constraints be checked only between the elements of the left path and the node considered for inclusion in the right path. Eventually, a precedence constraint is hit and the procedure is repeated with the left path extended by a single element (by incrementing $i$) and the right path reset to a single element, i.e. $j = i + 1$. After all of the possibilities are exhausted, $h$ is incremented and the process repeats for all possible $i$ and $j$ values ($h < i < j$). This leads to $O(n^3)$ possible pairs of subpaths, each requiring an $O(n)$ constraint verification, hence a total complexity of $O(n^4)$.
The cost of the constraint verification can be reduced to $O(1)$ thanks to the labeling procedure, which works as follows. Each time the left subpath is extended with a new node $v$ (during the SOP-3-exchange), $mark(u)$ is set to $count$ for every node $u$ for which there exists a precedence constraint $(v, u)$, i.e. $u$ has to follow $v$. The $count$ is a variable initially set to 0 and incremented each time a new search is started, i.e. $h$ is incremented. Thanks to the procedure, each time the right path is about to be extended with a node $u$, one needs only to check the value of $mark(u)$. If the value equals the $count$, then node $u$ has to be visited after some node in the left path, hence the two paths cannot be swapped. This reduces the complexity of the whole search for a feasible path-preserving-3-exchange to $O(n^3)$, which is asymptotically equal to the complexity of the 3-opt heuristic used to solve the TSP.
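The labeling procedure can be sketched as follows (a hedged Python sketch; `mark` is a dictionary, `successors[v]` is the set of nodes that must follow `v`, and all names are hypothetical):

```python
def extend_left(v, count, mark, successors):
    """When node v joins the left subpath, stamp every node that must
    come after v with the current search counter."""
    for u in successors.get(v, ()):
        mark[u] = count

def can_extend_right(u, count, mark):
    """O(1) feasibility check: u may join the right subpath only if it
    is not constrained to follow some node of the current left path."""
    return mark.get(u) != count
```

Because the counter changes whenever a new search starts, stale marks from previous searches are invalidated without clearing the dictionary.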
The forward search for the path-preserving-3-exchange considers only exchanges defined by indices $h$, $i$ and $j$ such that $0 \le h < i < j \le n - 1$, where $n$ is the number of nodes. The backward search is analogous to the forward search, but the left and right paths are expanded in the direction of decreasing indices, i.e. the left path "moves" from the end of the sequence to the beginning.
Summarizing, the time complexity of finding a single profitable path-preserving-3-exchange using the described procedure is $O(n^3)$. This is still expensive, as the procedure is applied (to a single solution) in a loop until no further improving move is found, and it has to be repeated for the subsequent solutions. Gambardella et al. a11 proposed two additional changes to reduce the algorithm’s computation time. The first is to limit the search to only a subset of all potential moves. By default, the SOP-3-exchange for each index $h$ considers all valid $i$ and $j$ indices. Assuming most of the changes will involve relatively short paths, the length of the left path can be restricted to at most three nodes in the forward procedure (and the length of the right path, analogously, in the backward procedure). This version was named the OR-exchange a11. The second change involves the use of two additional heuristics, i.e. don’t look bits and the don’t push stack. Don’t look bits is a data structure that was proposed by Bentley a29 and works as follows. A bit is associated with each node of the solution. At the beginning all the bits are turned off; a node’s bit is turned on when the SOP-3-exchange starts looking for a profitable exchange originating from that node. If the don’t look bit is turned on, the corresponding node is ignored by the subsequent SOP-3-exchange searches until the node is involved in a profitable path-preserving-3-exchange; then the bits of all six pivot nodes ($v_h$, $v_{h+1}$, $v_i$, $v_{i+1}$, $v_j$, $v_{j+1}$) are turned off. Use of the don’t look bits aims to focus the search on the changing parts of the solution. The purpose behind the use of the don’t push stack is similar – it contains nodes to be selected as a starting point of a path-preserving-3-exchange. At the beginning the stack is initialized with all of the nodes. During the search a node, $v$, is removed from the stack and, if a feasible move originating from node $v$ is found, the six nodes involved in the exchange are pushed onto the stack (if they do not belong to it already).
An additional benefit of using the don't push stack is that it breaks the linear order in which the nodes would otherwise be considered during the search for a profitable path-preserving-3-exchange.
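As an illustration of the mechanism described above, the following is a minimal C++ sketch of a don't push stack (the class and method names are our assumptions, not the authors' implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal sketch of the "don't push" stack: it holds the candidate starting
// nodes of a path-preserving-3-exchange, avoids duplicate entries, and is
// initialized with all of the nodes.
class DontPushStack {
public:
    explicit DontPushStack(std::size_t n) : on_stack_(n, true) {
        for (std::size_t v = 0; v < n; ++v) stack_.push_back(v);  // all nodes at start
    }
    bool empty() const { return stack_.empty(); }
    std::size_t pop() {                 // take the next starting node h
        std::size_t v = stack_.back();
        stack_.pop_back();
        on_stack_[v] = false;
        return v;
    }
    void push(std::size_t v) {          // re-insert a node involved in a profitable
        if (!on_stack_[v]) {            // exchange, unless it is already on the stack
            on_stack_[v] = true;
            stack_.push_back(v);
        }
    }
private:
    std::vector<std::size_t> stack_;
    std::vector<bool> on_stack_;
};
```

Because pushed nodes land on top of the stack, the search keeps returning to the recently changed parts of the solution instead of scanning the nodes in their linear order.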
Figure 4 presents the pseudocode of the SOP-3-exchange algorithm. It starts with an initialization of the don't push stack and repeatedly searches for a feasible move: first in the forward direction (line 4) and, if that fails, in the backward direction (line 5). The pseudocode of the search in the forward direction is shown in Fig. 5 (the search in the backward direction is analogous). It starts with a given index $h$ that denotes the starting point of a possible path-preserving-3-exchange and searches for the remaining two points, denoted by indices $i$ and $j$. The labeling procedure is applied incrementally (lines 4–6). The function is_move_accepted in line 12 simply checks if the proposed decrease in the solution value is better than the current best, but it can be replaced by a more advanced criterion, as will be shown later.
4.2 Improving SOP-3-exchange Efficiency with SA
The SOP-3-exchange LS is efficient at improving the solutions generated by the ants; however, the improvement process is greedy and only better (downhill) moves are accepted. This makes it possible to reach a local optimum quickly, but it also makes the algorithm unable to find any better solution that would require making at least one uphill move. Similarly to our idea of incorporating the SA into the ACS and EACS algorithms, we propose to include the SA decision process in the SOP-3-exchange in order to make it more explorative. The proposed modification is easy to implement, as it only requires modifying the greedy condition on whether to accept a given subpath exchange in the forward search for a path-preserving-3-exchange (line 12 in Fig. 5) and, analogously, in the backward search. The pseudocode of the proposed modification is shown in Fig. 6. The decision whether to accept the proposed move (subpath exchange) is made based on the change (decrease) in the solution value and the value of the best move found so far. If the proposed move is better than the current best, it is always accepted. If it results in the same decrease of the solution length, it is accepted with a probability of 10% (lines 4–5 in Fig. 6); this allows the algorithm to accept moves which do not change the solution value but which result in a different relative order of the solution nodes. Finally, if the proposed move is worse than the best move found so far, it is accepted with a probability calculated using the Metropolis criterion, as in the SA.
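To make the decision rule above concrete, here is a minimal C++ sketch of such an acceptance test (the function name, its signature, and the use of `std::mt19937` are our assumptions, not the authors' code):

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Sketch of the SA-style acceptance rule: a strictly better move is always
// taken, an equally good one with probability 10%, and a worse one according
// to the Metropolis criterion exp(-(best_gain - gain) / T).  Here "gain" is
// the decrease in the solution length produced by the proposed exchange.
bool is_move_accepted_sa(double gain, double best_gain, double temperature,
                         std::mt19937 &rng) {
    if (gain > best_gain) return true;            // strictly better: always accept
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (gain == best_gain) return u(rng) < 0.10;  // equally good: accept with prob. 10%
    // Worse than the best move so far: Metropolis criterion
    // (best_gain - gain > 0 here, so the probability is below 1).
    return u(rng) < std::exp(-(best_gain - gain) / temperature);
}
```

Note that as the temperature approaches zero the rule degenerates to (almost) the original greedy acceptance, since the probability of taking an uphill move vanishes.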
Similarly to the ACS-SA, there are two parameters related to the SA component of the proposed SOP-3-exchange-SA algorithm, namely the cooling factor and the initial acceptance probability. The former is used in the geometric cooling schedule to lower the temperature, while the latter determines the initial probability of accepting a worse move. There is, however, a slight difference in the temperature initialization relative to the ACS-SA. In the ACS-SA the initial temperature is calculated based on a sample of differences in the quality (length) of randomly generated solutions, just before the main computations. In the SOP-3-exchange-SA the sample comes from the differences in the solution quality (delta values) resulting from the subsequent path-preserving-3-exchanges considered during the initial runs of the SOP-3-exchange-SA (lines 11–16 in Fig. 6). In other words, there is no dedicated temperature initialization phase in the SOP-3-exchange-SA; the sample of delta values is collected on the run so as not to slow down the whole algorithm. After the sample (of a size found experimentally) is collected, the initial value of the temperature is calculated, and in subsequent invocations of the SOP-3-exchange-SA the temperature is reset to this initial value without recalculating it.
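The on-the-run temperature initialization can be sketched as follows (a hypothetical helper class; the sample size, the names, and the use of the mean absolute delta are our assumptions based on the description above):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of collecting delta values "on the run" and deriving the initial
// temperature from a target initial acceptance probability p0.  Inverting the
// Metropolis rule for an average uphill move gives:
//   exp(-mean_delta / T0) = p0  =>  T0 = -mean_delta / ln(p0)
class TemperatureInit {
public:
    TemperatureInit(std::size_t sample_size, double p0)
        : sample_size_(sample_size), p0_(p0) {}

    bool ready() const { return deltas_.size() >= sample_size_; }

    void record(double delta) {  // called for each considered subpath exchange
        if (!ready()) deltas_.push_back(std::fabs(delta));
    }

    double initial_temperature() const {  // call only once ready() is true
        double sum = 0.0;
        for (double d : deltas_) sum += d;
        double mean = sum / deltas_.size();
        return -mean / std::log(p0_);  // p0 in (0, 1), so log(p0) < 0
    }
private:
    std::size_t sample_size_;
    double p0_;
    std::vector<double> deltas_;
};
```

Once the sample is full, the computed value can be cached and reused to reset the temperature in every subsequent invocation of the local search, which matches the "no dedicated initialization phase" design described above.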
5 Computational Experiments
A series of computational experiments was conducted in order to evaluate the performance of the proposed algorithms. In the first part of the experiments we focused on the efficiency of the ACS-SA and EACS-SA used alone, i.e. without the problem-specific LS. In the second part the focus was placed on the efficiency of the algorithms coupled with the SOP-3-exchange and SOP-3-exchange-SA LS heuristics.
The ACS and EACS require that a number of parameters be set. Based on preliminary computations and suggestions from the literature we fixed, among others, the number of ants and the local and global pheromone evaporation ratios; the initial pheromone level was set depending on the size of the problem, $n$. The computations were repeated 30 times for each configuration of the parameter values and problem instance. The computations were conducted on a machine equipped with a Xeon E5-2680 v3 12-core CPU clocked at 2.5 GHz, although a single core was used per run. All the algorithms were implemented in C++ and compiled with the GNU compiler with the -Ofast switch (the source code is available at https://github.com/RSkinderowicz/AntColonySystemSA).
5.1 ACS-SA Parameter Tuning
The first part of the experiments was focused on the behavior of the ACS-SA algorithm depending on the values of the SA-related parameters. The proposed ACS-SA algorithm uses a simple exponential cooling schedule, $T_k = T_0 \cdot a^k$, where $a$ is the cooling factor and $T_0$ is the initial temperature. Although the exponential cooling schedule does not guarantee convergence to a global optimum, it has the advantage of being easy to implement and often performs well in practice a43 . Preliminary computations showed that the most important factor for the performance of the ACS-SA was the cooling factor $a$, which directly influences the speed of the SA convergence. The best performance was observed for values of $a$ close to 1, for which the probability of accepting worse quality solutions and, hence, escaping local minima remained high for a relatively long time. It is not without significance that the algorithm was run for a fixed number of iterations; for shorter/longer runs a smaller/higher value could prove better. In fact, the value of $a$ could be calculated based on the total number of iterations if used in practice a43 . A number of "promising" values was selected for a more thorough investigation, namely $a \in \{0.999, 0.9995, 0.9999\}$. The initial temperature $T_0$ was calculated for each problem instance during the initialization phase so that the probability of accepting a worse solution (an uphill move) at the beginning was approximately equal to a specified probability, $P_0$ (a parameter independent of the problem instance). The mean difference between successive solutions was estimated based on a sample of randomly generated solutions. In our experiments we considered $P_0 \in \{0.1, 0.5, 0.9\}$.
This leads to a total of 9 combinations of $a$ and $P_0$. The algorithm was run on a total of 13 SOP instances from the TSPLIB repository, namely: ft53.1, ft53.2, ft53.3, ft53.4, ft70.1, ft70.2, ft70.3, ft70.4, prob.100, kro124p.1, kro124p.2, kro124p.3 and kro124p.4. We used statistical tests to verify whether the results for the various values of the parameters differed significantly. The proposed experimental design can be viewed as a two-way (two-factor) layout in which the main factor is the combination of the $a$ and $P_0$ values, while the second factor (also called a blocking factor) is the problem instance (13 instances in our case) a44 . More specifically, the design can be described as a randomized block design with an equal number of replications per treatment-block combination. A suitable nonparametric (distribution-free) statistical test was proposed by Mack and Skillings and is an equivalent of a parametric two-way ANOVA a45
. The null hypothesis, $H_0$, of interest here is that of no differences in the medians (of the solution quality) for the ACS-SA with the various $a$ and $P_0$ values considered (a total of 9 combinations). Rejecting the null hypothesis would mean that the different values of the $a$ and $P_0$ parameters lead to varying performance of the ACS-SA. The test requires that the Mack-Skillings (MS) statistic be computed, which is then compared with a critical value at the assumed level of significance ($\alpha = 0.05$ in our case) a44 . The null hypothesis is rejected if the MS statistic exceeds the critical value. In our case it did, hence $H_0$ was rejected, providing rather strong evidence that the values of $a$ and $P_0$ have a significant impact on the quality of the results generated by the ACS-SA. This is an expected result, because the value of $a$ should have a strong effect on the search trajectory of the SA.

Table 1: p-values of the pairwise post-hoc comparisons of the ACS-SA results for the nine $(a, P_0)$ configurations, A-I.

  A  B  C  D  E  F  G  H  I
  (0.999, 0.1)  (0.999, 0.5)  (0.999, 0.9)  (0.9995, 0.1)  (0.9995, 0.5)  (0.9995, 0.9)  (0.9999, 0.1)  (0.9999, 0.5)  (0.9999, 0.9)
A  –  0.9970  1.0  0.3863  0.9740  0.9595  0.0004  0.4397  0.0205+
B  0.9970  –  0.9765  0.8856  1.0  1.0  0.0093  0.9155  0.0010+
C  1.0  0.9765  –  0.2235  0.9019  0.8678  0.0001  0.2639  0.0509
D  0.3863  0.8856  0.2235  –  0.9703  0.9817  0.4191  1.0
E  0.9740  1.0  0.9019  0.9703  –  1.0  0.0265  0.9813  0.0002+
F  0.9595  1.0  0.8678  0.9817  1.0  –  0.0346  0.9891  0.0002+
G  0.0004+  0.0093+  0.0001+  0.4191  0.0265+  0.0346+  –  0.3668
H  0.4397  0.9155  0.2639  1.0  0.9813  0.9891  0.3668  –
I  0.0205  0.0010  0.0509    0.0002  0.0002      –
After the rejection of $H_0$, we can apply a post-hoc test to compare the individual pairs of algorithm results obtained for the respective pairs of $a$ and $P_0$ values. A suitable asymptotically distribution-free, two-sided, multiple comparison procedure using within-block ranks was proposed by Mack and Skillings a44 ; a45 . Table 1 contains the final p-values of the pairwise comparisons. As can be observed, in most cases there were no significant differences between the results of the ACS-SA with the various $a$ and $P_0$ values. The only exception was the configuration $a = 0.9999$ and $P_0 = 0.1$, for which the results were significantly better 6 out of 8 times. On the other hand, the configuration $a = 0.9999$ and $P_0 = 0.9$ was worse 7 out of 8 times. This shows that the SA component of the ACS-SA has the strongest influence if the temperature is decreased slowly; it is also important to properly adjust the initial probability of accepting a worse quality solution and, hence, the initial temperature $T_0$. If the probability is high, the algorithm easily accepts inferior solutions, particularly at the beginning of the computations, and drifts away from good quality solutions. It is worth emphasizing that these observations are valid for the computation budget (time) used in the experiments; greatly increasing the computation time could show even better convergence for higher values of $a$. Figure 7 shows the convergence plots for the ACS-SA with various $a$ and $P_0$ levels: for $a = 0.999$ the temperature drops relatively quickly and the convergence of the ACS-SA resembles that of the ACS. For $a = 0.9999$ the temperature drops more slowly and the algorithm has a greater chance of escaping local minima for a longer period of time. By increasing the initial temperature (as for $P_0 = 0.9$) we can extend the initial "free wandering" phase at the expense of slower convergence.
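The geometric schedule discussed in this subsection can be sketched as follows (a minimal C++ illustration; the struct and its names are assumptions, not the authors' code):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the exponential (geometric) cooling schedule T_k = T_0 * a^k,
// applied as a single multiplication per ACS-SA iteration.
struct GeometricCooling {
    double temperature;  // current temperature T_k
    double a;            // cooling factor, e.g. 0.999, 0.9995 or 0.9999

    void step() { temperature *= a; }

    // Number of iterations after which the temperature falls below a given
    // fraction of T_0 (frac = 0.5 gives the "half-life" of the schedule):
    // solve a^k = frac for k.
    static double iterations_to_fraction(double a, double frac) {
        return std::log(frac) / std::log(a);
    }
};
```

For example, `iterations_to_fraction(0.999, 0.5)` is roughly 693 iterations while `iterations_to_fraction(0.9999, 0.5)` is roughly 6931, which illustrates why the slowest-cooling configuration keeps the acceptance probability high for much longer.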


5.2 ACS-SA and EACS-SA Performance
The first experiment showed that the SA component indeed had a significant impact on the ACS-SA search convergence. In the subsequent experiment we focused on a comparison of the ACS-SA relative to the ACS. We also considered the EACS and the EACS combined with the SA (EACS-SA). Both the ACS-SA and the EACS-SA were run with $a = 0.9999$ and $P_0 = 0.1$, chosen based on the previous experiment. To make the comparison fair, all of the algorithms were run with a time limit of 60 seconds of CPU time. Although the limit was relatively low, it was sufficient to detect differences in the performance of the algorithms. A total of 20 SOP instances of varying size was selected from the TSPLIB repository.
The boxplots of the mean solution error are shown in Fig. 8 and Fig. 9. The differences between the quality of the solutions generated by the algorithms are clearly visible. For the smaller instances, the performance of the ACS and EACS was relatively similar and, in most cases, worse than that of the ACS-SA and EACS-SA, respectively. The differences became more distinct for larger instances (up to 380 nodes), for which the EACS outperformed the ACS in most cases. The ACS-SA generally beat the ACS, but even better performance was achieved by the EACS-SA version, particularly for the largest instances. The results were compared statistically to make the comparison more complete. For each problem instance, the Kruskal-Wallis nonparametric one-way analysis of variance test (an extension of the Mann-Whitney U test) with $\alpha = 0.05$ was applied to check the hypothesis that the results of the four algorithms came from the same distribution. The hypothesis was rejected in 19 out of 20 cases, meaning that the results of the algorithms differed significantly. In such cases a post-hoc test was applied to compare all pairs of results. For this purpose the Bonferroni-Dunn test was employed with a family-wise Type I error correction a46 . The results are summarized in Tab. 2. For each pair of algorithms, only the final verdict is shown, with a letter indicating the algorithm that achieved significantly better results than the other. The ACS-SA outperformed the ACS in 12 cases, while never generating worse results. The SA component is actually more beneficial than the exploitation-oriented heuristics introduced in the EACS, which generated significantly better quality results only in 7 cases as compared to the ACS. The EACS-SA outperformed both the EACS and the ACS in 16 out of 20 cases. The greatest difference could be observed for the larger SOP instances.

Table 2: Mean (standard deviation) solution values for the four algorithms and the verdicts of the pairwise comparisons; a letter denotes the significantly better algorithm of the pair, while an empty cell denotes no significant difference.

Problem  ACS (A)  ACS-SA (B)  EACS (C)  EACS-SA (D)  A vs B  A vs C  A vs D  B vs C  B vs D  C vs D
ft53.1  7857 (161.8)  7673 (18.5)  7818 (162.9)  7702 (46.1)  B    D  B    D 
ft53.2  8713 (159.5)  8522 (107.0)  8647 (256.8)  8348 (156.6)  B    D    D  D 
ft53.3  11506 (578.9)  11417 (506.4)  11271 (605.5)  11418 (544.0)             
ft53.4  14744 (201.8)  14704 (81.7)  14779 (128.2)  14639 (101.1)      D    D  D 
ft70.1  40437 (458.0)  40054 (223.6)  40692 (588.6)  40150 (345.5)  B      B    D 
ft70.2  42263 (454.5)  41629 (409.0)  42396 (664.4)  41710 (355.0)  B    D  B    D 
ft70.3  44674 (667.0)  44388 (333.3)  44589 (570.0)  43946 (436.6)      D    D  D 
ft70.4  56098 (325.5)  56146 (127.0)  55593 (564.1)  55305 (362.9)    C  D  C  D   
kro124p.1  42166 (757.8)  41313 (572.8)  42324 (988.9)  41768 (731.4)  B      B     
kro124p.2  44548 (1113.2)  43049 (764.4)  44270 (1318.3)  43529 (809.9)  B    D  B     
kro124p.3  53915 (1832.6)  51411 (321.2)  53313 (1678.5)  51351 (963.8)  B    D  B    D 
kro124p.4  80373 (1120.1)  79708 (922.3)  81204 (1064.1)  78973 (746.0)      D  B    D 
prob.100  1485 (87.8)  1489 (75.2)  1505 (76.6)  1438 (66.8)          D  D 
rbg109a  1111 (9.9)  1107 (11.4)  1093 (9.7)  1067 (7.9)    C  D  C  D  D 
rbg150a  1872 (11.8)  1855 (9.7)  1817 (11.6)  1788 (10.5)    C  D  C  D  D 
rbg253a  3156 (15.1)  3123 (13.6)  3101 (19.8)  3041 (9.0)  B  C  D    D  D 
rbg341a  3103 (60.8)  2989 (40.8)  2990 (37.2)  2821 (31.5)  B  C  D    D  D 
rbg323a  3590 (29.9)  3517 (21.0)  3540 (27.8)  3393 (31.3)  B  C  D    D  D 
rbg358a  3284 (81.1)  3128 (34.9)  3087 (52.0)  2883 (36.3)  B  C  D    D  D 
rbg378a  3483 (54.0)  3347 (38.9)  3442 (59.8)  3184 (37.5)  B    D  B  D  D 
The Simulated Annealing component in both the ACS-SA and the EACS-SA does not increase the asymptotic time complexity of the algorithms. Only the initial temperature calculation requires a number of random solutions to be constructed, while the main ACS loop is little affected by the Metropolis rule and the cooling schedule computations. Figure 10 shows the mean number of iterations performed by each of the considered algorithms within the time limit of 60 seconds. As can be observed, the number of iterations depends mostly on the size of the problem instance, while the differences between the algorithms are relatively small. The EACS and EACS-SA are faster than the other two algorithms due to the less expensive solution construction process, which builds a new solution by reusing significant parts of a solution from the previous iteration.
5.3 SOP-3-exchange-SA Parameter Tuning
Similarly to the ACS-SA and EACS-SA, the SOP-3-exchange-SA algorithm has two more, SA-related, parameters, namely the cooling factor and the initial acceptance probability. Based on preliminary computations, several values of each were preselected for further investigation, and all 12 resulting combinations of the parameter values were considered. For each combination, the EACS algorithm with the SOP-3-exchange-SA was run on a set of 14 instances of sizes from 400 to 700 selected from the SOPLIB2006 repository, namely: R.400.100.15, R.400.100.30, R.400.1000.15, R.400.1000.30, R.500.100.15, R.500.1000.1, R.500.1000.15, R.500.1000.30, R.600.100.15, R.600.100.30, R.600.1000.15, R.700.100.30, R.700.1000.1, R.700.1000.15.
The nonparametric Mack-Skillings test was used to verify whether there were any significant differences between the results for the different values of the parameters, similarly to Sec. 5.1. The null hypothesis, $H_0$, stating that there were no differences between the medians of the solutions' quality produced for the different parameter values, is rejected if the Mack-Skillings statistic exceeds the critical value at the specified level of significance, $\alpha$. In our case it did, hence $H_0$ was rejected, providing strong evidence of significant differences between the quality of the results obtained for the different values of the parameters.
A post-hoc multiple comparison test by Mack and Skillings a44 was applied to find out for which values of the parameters the results were of a better quality. Table 3 contains the computed p-values, where the value at the intersection of a given row and column denotes the p-value of the comparison between the results obtained for the parameter configurations corresponding to that row and column, respectively. An analysis of the test results revealed one combination of the parameter values for which the results were significantly better than for any other combination, as well as a clearly worst configuration in terms of the solution quality. The initial acceptance probability turned out to be of lower significance than the cooling factor, which directly influences the speed of the temperature decrease in the SA component of the SOP-3-exchange-SA. Generally, the best results were obtained for the cooling factor equal to 0.95 and 0.99.
Table 3: p-values of the pairwise post-hoc comparisons of the results for the 12 combinations (A-L) of the SOP-3-exchange-SA parameter values.
A  –      0.0131                  
B  +  –    0.8878  0.0001                
C  +  +  –  +  1.0000    1.0000            
D  0.0131+  0.8878    –                  
E  +  0.0001+  1.0000  +  –    0.9990            
F  +  +  +  +  +  –  +  0.0309          
G  +  +  1.0000  +  0.9990    –            
H  +  +  +  +  +  0.0309+  +  –  0.0077    0.0014  0.3709  
I  +  +  +  +  +  +  +  0.0077+  –  0.0073  1.0000  0.9712  
J  +  +  +  +  +  +  +  +  0.0073+  –  0.0331+  +  
K  +  +  +  +  +  +  +  0.0014+  1.0000  0.0331  –  0.8270  
L  +  +  +  +  +  +  +  0.3709  0.9712    0.8270  – 
5.4 Comparison of Algorithms
The last part of the experiments concerned the performance of the ACS, ACS-SA, EACS and EACS-SA combined with the LS algorithms, i.e. the SOP-3-exchange and the SOP-3-exchange-SA, giving a total of 8 algorithm combinations. To make the comparison fair, the algorithms were run with the same time limit of 120 seconds and the same values of the parameters (where appropriate). The algorithms were run on the SOP instances (48 in total) from the SOPLIB2006 repository a23 . It is worth noting that the EACS with the SOP-3-exchange is the current state-of-the-art metaheuristic for the SOP a12 .
Table 4: p-values of the pairwise post-hoc comparisons of the eight combinations of the algorithms and the LS heuristics; a "+" marks that the algorithm in the row obtained significantly better results than the one in the column.

Algorithm  A  B  C  D  E  F  G  H

ACS + SOP-3-exchange (A)    <0.0001  <0.0001+  <0.0001+  <0.0001  <0.0001  <0.0001  <0.0001 
ACS + SOP-3-exchange-SA (B)  <0.0001+    <0.0001+  <0.0001+  <0.0001  <0.0001  <0.0001  <0.0001 
ACS-SA + SOP-3-exchange (C)  <0.0001  <0.0001    1.0000  <0.0001  <0.0001  <0.0001  <0.0001 
ACS-SA + SOP-3-exchange-SA (D)  <0.0001  <0.0001  1.0000    <0.0001  <0.0001  <0.0001  <0.0001 
EACS + SOP-3-exchange (E)  <0.0001+  <0.0001+  <0.0001+  <0.0001+    <0.0001  0.9999  <0.0001 
EACS + SOP-3-exchange-SA (F)  <0.0001+  <0.0001+  <0.0001+  <0.0001+  <0.0001+    <0.0001+  <0.0001+ 
EACS-SA + SOP-3-exchange (G)  <0.0001+  <0.0001+  <0.0001+  <0.0001+  0.9999  <0.0001    <0.0001 
EACS-SA + SOP-3-exchange-SA (H)  <0.0001+  <0.0001+  <0.0001+  <0.0001+  <0.0001+  <0.0001  <0.0001+   
A quick analysis of the obtained results showed noticeable differences in the efficiency of the investigated algorithms. The experiment design allows us to check for statistically significant differences between the algorithms by using the nonparametric Mack-Skillings test for a two-factor layout. The first factor is the algorithm applied, while the second (blocking) factor is the SOP instance being solved. The null hypothesis, $H_0$, of interest is that there are no differences in the quality of the solutions generated by the algorithms; its rejection would mean that the algorithms differ in the quality of the generated solutions. The Mack-Skillings statistic exceeded the critical value for the test at the level of significance equal to 0.05, hence $H_0$ was rejected.
The rejection of the null hypothesis allows us to apply a post-hoc test (also proposed by Mack and Skillings a44 ) to perform a pairwise comparison of the algorithms. The resulting p-values are shown in Tab. 4. As can be observed, all the p-values are either close to 0 or close to 1, meaning that the differences are either sharp or nonexistent, respectively. Not surprisingly, all the EACS variants obtained significantly better results than the ACS-based algorithms. The most efficient algorithm was the EACS with the SOP-3-exchange-SA LS, which obtained results significantly better than those of any of the remaining algorithms. The second best was the EACS-SA with the SOP-3-exchange-SA LS. Out of the four ACS variants, the ACS with the SOP-3-exchange-SA performed better than the other three, thus confirming the efficiency of the proposed SOP-3-exchange-SA LS. The worst performing were the ACS-SA with the SOP-3-exchange and the ACS-SA with the SOP-3-exchange-SA. The poor performance of the ACS-SA variants can be explained by the weakened emphasis on exploitation, which admittedly increases the probability of escaping from local optima but also slows the overall convergence of the algorithm; this is clearly visible when the computational budget is modest, as in the experiment conducted here (120 sec.).
Table 5: Sample means and standard deviations of the solution values (over 30 runs per instance) for the EACS and EACS-SA combined with the SOP-3-exchange and the SOP-3-exchange-SA LS.

Instance  EACS + SOP-3-exchange  EACS + SOP-3-exchange-SA  EACS-SA + SOP-3-exchange  EACS-SA + SOP-3-exchange-SA
  Avg.  Std. dev.  Avg.  Std. dev.  Avg.  Std. dev.  Avg.  Std. dev.
R.200.100.1  74.5  3.8  69.4  2.4  67.6  1.9  66.6  2.2  
R.200.100.15  1935.5  31.0  1837.1  25.0  1916.9  27.8  1828.6  56.5  
R.200.100.30  4216.0  0.0  4216.0  0.0  4216.0  0.0  4216.0  0.0  
R.200.100.60  71749.0  0.0  71749.0  0.0  71749.0  0.0  71749.0  0.0  
R.200.1000.1  1457.2  17.9  1459.0  21.5  1433.9  9.8  1467.2  12.2  
R.200.1000.15  21766.2  298.1  20771.4  201.0  21577.6  399.5  21104.0  881.0  
R.200.1000.30  41196.0  0.0  41196.0  0.0  41196.0  0.0  41196.0  0.0  
R.200.1000.60  71556.0  0.0  71556.0  0.0  71556.0  0.0  71556.0  0.0  
R.300.100.1  45.2  3.9  36.0  2.7  39.7  3.2  31.3  2.4  
R.300.100.15  3328.1  54.8  3177.4  18.7  3289.9  31.0  3209.9  97.4  
R.300.100.30  6124.2  10.9  6120.0  0.0  6122.3  5.3  6120.0  0.0  
R.300.100.60  9726.0  0.0  9726.0  0.0  9726.0  0.0  9726.0  0.0  
R.300.1000.1  1424.0  29.4  1432.2  33.3  1436.7  21.6  1464.2  24.3  
R.300.1000.15  31556.9  647.4  29713.0  441.0  31196.1  637.0  30002.0  1319.5  
R.300.1000.30  54223.3  72.7  54147.0  0.0  54176.5  30.7  54147.0  0.0  
R.300.1000.60  109482.9  36.3  109471.0  0.0  109488.3  41.3  109471.0  0.0  
R.400.100.1  35.9  4.8  23.6  4.1  33.1  4.6  19.0  2.2  
R.400.100.15  4270.9  67.9  3969.8  29.3  4184.8  52.5  3930.5  28.2  
R.400.100.30  8167.6  9.9  8165.0  0.0  8166.0  0.2  8165.0  0.0  
R.400.100.60  15228.0  0.0  15228.0  0.0  15228.0  0.0  15228.0  0.0  
R.400.1000.1  1504.7  29.9  1528.6  24.8  1521.9  22.6  1561.3  31.9  
R.400.1000.15  42436.7  632.7  39854.6  317.6  41258.3  516.9  39293.0  204.3  
R.400.1000.30  85320.8  117.5  85157.5  62.9  85188.6  77.1  85128.6  3.3  
R.400.1000.60  140816.0  0.0  140816.0  0.0  140816.0  0.0  140816.0  0.0  
R.500.100.1  25.4  3.3  11.6  2.6  25.3  4.4  9.9  2.6  
R.500.100.15  5801.2  91.3  5417.1  54.5  5717.8  86.4  5449.8  254.7  
R.500.100.30  9709.9  20.4  9668.3  5.4  9696.0  13.2  9740.5  87.2  
R.500.100.60  18255.3  5.2  18240.0  0.0  18253.4  7.7  18240.0  0.0  
R.500.1000.1  1524.8  38.8  1556.9  34.3  1542.5  34.1  1591.3  38.3  
R.500.1000.15  54644.0  760.2  50503.3  360.3  53719.0  632.9  50310.4  420.0  
R.500.1000.30  99465.1  209.0  99038.0  42.4  99166.6  78.9  99022.0  160.9  
R.500.1000.60  178247.6  108.6  178212.0  0.0  178268.9  130.1  178212.0  0.0  
R.600.100.1  21.3  4.0  4.9  2.1  22.6  2.9  14.3  8.8  
R.600.100.15  6280.1  114.5  5621.5  40.9  6224.2  100.5  5797.5  511.5  
R.600.100.30  12504.9  15.7  12467.9  5.2  12499.7  11.8  12569.7  34.9  
R.600.100.60  23293.0  0.0  23293.0  0.0  23293.0  0.0  23293.0  0.0  
R.600.1000.1  1582.7  36.0  1679.3  40.6  1657.7  104.0  1749.3  47.2  
R.600.1000.15  61492.4  964.8  56638.3  389.9  62859.3  3426.5  61581.3  7760.0  
R.600.1000.30  127460.6  384.8  126798.6  3.5  128011.6  305.2  129319.9  505.6  
R.600.1000.60  214608.0  0.0  214608.0  0.0  214641.1  59.0  214608.0  0.0  
R.700.100.1  16.6  3.5  1.7  0.7  26.4  6.0  30.3  7.5  
R.700.100.15  7835.3  110.4  7195.4  65.3  8964.0  775.0  7865.6  854.7  
R.700.100.30  14585.7  28.2  14510.3  0.8  14693.2  36.5  14792.3  37.7  
R.700.100.60  24108.7  7.3  24102.0  0.0  24131.3  13.6  24102.0  0.0  
R.700.1000.1  1539.8  45.9  1644.1  36.0  1718.5  129.8  1930.4  83.8  
R.700.1000.15  72271.5  934.4  66738.6  479.7  79815.4  7934.4  83061.5  9842.3  
R.700.1000.30  135922.5  441.5  134495.7  40.0  136468.1  347.6  138703.9  477.9  
R.700.1000.60  245602.3  49.5  245589.0  0.0  245848.2  154.1  245589.0  0.0 
Even though some of the algorithms can be seen as generally more efficient than others, this is not true in every case, as can be observed in Tab. 5, in which the sample means and sample standard deviations are presented for the EACS and EACS-SA algorithms. The two most efficient, in terms of the solution quality, were the EACS with the SOP-3-exchange-SA and the EACS-SA with the SOP-3-exchange-SA LS. While the former achieved lower mean values for more problem instances, the latter performed particularly well for instances of a size up to 500. For the largest instances (R.600.* and R.700.*), the EACS with the SOP-3-exchange-SA obtained the lowest mean values in 14 out of 16 cases. This suggests that the EACS-SA algorithm did not have enough time to converge within the specified time limit.
Similar observations can be made from the analysis of the best solutions found by the algorithms, presented in Tab. 6. The table also contains the values of the best-known solutions, some of which were obtained by Gouveia and Ruthmair using an exact (branch-and-cut) method a13 and by Papapanagiotou et al. papapanagiotou2015comparison , while the rest were found by metaheuristics, including the EACS with the SOP-3-exchange a12 . For 18 of the SOP instances, all four algorithms were able to find the best-known solution at least once per 30 runs. For 10 of the SOP instances new best solutions were found by the proposed algorithms. The EACS with the SOP-3-exchange-SA found new best solutions for 6 instances, i.e. R.300.1000.15, R.500.100.15, R.600.100.15, R.600.1000.15, R.700.100.15, and R.700.1000.15. The EACS-SA with the SOP-3-exchange-SA found new best solutions for 4 instances, namely: R.300.100.15, R.400.100.15, R.400.1000.15 and R.500.1000.15. Overall, the best-known or improved solutions were obtained by at least one of the algorithms in 37 out of 48 cases (77%). All the algorithms struggled most with the instances in which the number of precedence constraints was the smallest, i.e. 1% (instances R.*.*.1), which suggests that there is still some room for improvement of the LS algorithms.
Table 6: Best solution values found by the algorithms (out of 30 runs per instance) compared with the best-known values.

Instance  Best known  EACS + SOP-3-exchange  EACS + SOP-3-exchange-SA  EACS-SA + SOP-3-exchange  EACS-SA + SOP-3-exchange-SA
R.200.100.1  61  67  64  63  64  
R.200.100.15  1792  1890  1796  1868  1792  
R.200.100.30  4216  4216  4216  4216  4216  
R.200.100.60  71749  71749  71749  71749  71749  
R.200.1000.1  1404  1426  1423  1416  1437  
R.200.1000.15  20481  21113  20481  20946  20481  
R.200.1000.30  41196  41196  41196  41196  41196  
R.200.1000.60  71556  71556  71556  71556  71556  
R.300.100.1  26  39  31  32  28  
R.300.100.15  3161  3251  3154  3235  3152  
R.300.100.30  6120  6120  6120  6120  6120  
R.300.100.60  9726  9726  9726  9726  9726  
R.300.1000.1  1294  1363  1382  1389  1413  
R.300.1000.15  29183  30295  29068  30099  29111  
R.300.1000.30  54147  54147  54147  54147  54147  
R.300.1000.60  109471  109471  109471  109471  109471  
R.400.100.1  13  28  17  26  14  
R.400.100.15  3906  4139  3918  4088  3883  
R.400.100.30  8165  8165  8165  8165  8165  
R.400.100.60  15228  15228  15228  15228  15228  
R.400.1000.1  1343  1459  1488  1491  1496  
R.400.1000.15  43268  41067  39148  40457  38963  
R.400.1000.30  85128  85128  85128  85128  85128  
R.400.1000.60  140816  140816  140816  140816  140816  
R.500.100.1  4  17  8  17  6  
R.500.100.15  5361  5615  5305  5575  5315  
R.500.100.30  9665  9687  9665  9675  9665  
R.500.100.60  18240  18240  18240  18240  18240  
R.500.1000.1  1316  1464  1487  1476  1519  
R.500.1000.15  50725  53082  49907  52630  49719  
R.500.1000.30  98987  99072  98987  99002  98987  
R.500.1000.60  178212  178212  178212  178212  178212  
R.600.100.1  1  15  2  16  4  
R.600.100.15  5684  6087  5548  6008  5568  
R.600.100.30  12465  12481  12465  12484  12496  
R.600.100.60  23293  23293  23293  23293  23293  
R.600.1000.1  1337  1500  1611  1518  1644  
R.600.1000.15  57237  59660  55725  60194  56283  
R.600.1000.30  126798  126798  126798  127531  128214  
R.600.1000.60  214608  214608  214608  214608  214608  
R.700.100.1  1  10  1  18  16  
R.700.100.15  7311  7629  7109  8011  7191  
R.700.100.30  14510  14534  14510  14620  14714  
R.700.100.60  24102  24102  24102  24114  24102  
R.700.1000.1  1231  1462  1583  1588  1713  
R.700.1000.15  66837  70219  66008  71484  68009  
R.700.1000.30  134474  134712  134474  135841  137784  
R.700.1000.60  245589  245589  245589  245589  245589 
The results presented above confirm that the proposed incorporation of the SA into the main algorithm (EACS) and into the local search (SOP-3-exchange) is able to improve the quality of the generated solutions to the SOP. In order to further clarify the differences between the existing approach, i.e. the EACS with the SOP-3-exchange LS, and the proposed EACS-SA with the SOP-3-exchange-SA, both were run on the SOP instances from the SOPLIB2006 repository with the computation time increased to 600 seconds, a fivefold increase over the time limit used in the experiments presented above. By giving the algorithms more time, we lower the risk of one algorithm dominating another merely because of the limited time. The results are presented in Tab. 7. In most cases the results of the EACS-SA with the SOP-3-exchange-SA were of a better quality than those obtained by the EACS with the SOP-3-exchange, although the relative differences between the algorithms depended on the SOP instance being solved. The results were checked for statistically significant differences using the nonparametric Wilcoxon rank sum test at a significance level of 0.05 (the respective p-values are reported in the table). In 33 out of 48 (69%) cases (instances) the solutions generated by the EACS-SA with the SOP-3-exchange-SA were of a significantly better quality than those generated by the EACS with the SOP-3-exchange. In 4 cases (8%) the results of the former algorithm were significantly worse, and in 11 cases (23%) no significant differences were observed.
Taking into account the best solutions generated during the 30 executions of the algorithms for each of the SOP instances considered, the proposed algorithm reached the best-known results in 31 cases, and in 10 cases new best solutions were found. Owing to the increased computation time limit, in 7 out of those 10 cases the results were improvements over those presented in Tab. 6. To summarize, the best-known or improved results were generated for 41 out of the 48 (85%) SOP instances considered here. The EACS with the SOP-3-exchange found the best-known results in 18 cases; however, it found no new best solutions.
All of the SOP instances for which the EACS-SA with the SOP-3-exchange-SA generated significantly worse results than the EACS with the SOP-3-exchange are of the form R.*.1000.1, which suggests either an overall inferior convergence of the former algorithm for this kind of instance, or an insufficient time limit to match the convergence of the latter algorithm. In fact, the second possibility seems to be true, because for the smallest of the R.*.1000.1 instances, i.e. R.200.1000.1, the EACS-SA with the SOP-3-exchange-SA generated significantly better results, and for the second smallest instance, i.e. R.300.1000.1, there were no significant differences between the results of the two algorithms. To confirm our assumption, both algorithms were run on the instances R.300.1000.1, R.400.1000.1, R.500.1000.1, R.600.1000.1, and R.700.1000.1 with the time limit doubled to 1200 seconds per run. The results are presented in Tab. 8. As can be seen, the advantage of the EACS with the SOP-3-exchange over the EACS-SA with the SOP-3-exchange-SA disappeared, and both algorithms generated results of a similar quality. A statistical comparison based on the non-parametric Wilcoxon rank-sum test showed no significant differences for the R.400.1000.1, R.500.1000.1 and R.600.1000.1 instances. Surprisingly, the increased time limit allowed the EACS-SA to obtain significantly better results for the two remaining instances, i.e. R.300.1000.1 and R.700.1000.1, although the advantage was small relative to the optimum values.
Considering all the results, the efficiency of the proposed algorithm (in terms of solution quality) was statistically significantly better than that of the original approach for approx. 73% of the SOP instances, while never being worse. However, sufficient computation time is necessary to reach this level of performance: in most cases 600 seconds was enough, whereas for a few instances a limit of 1200 seconds was necessary. In practical applications, the algorithm could be sped up by using parallel computations.
Table 7: Results of the EACS with the SOP-3-exchange (I) and the EACS-SA with the SOP-3-exchange-SA (II) on the SOPLIB2006 instances with a 600-second time limit per run.

Instance | Best known | I: Avg. | I: Std. dev. | I: Best | II: Avg. | II: Std. dev. | II: Best | p-value | Sig. better
R.200.100.1 | 61 | 71.8 | 2.8 | 67 | 63 | 0.9 | 62 | <0.0001 | II
R.200.100.15 | 1792 | 1915.7 | 33.6 | 1849 | 1821.6 | 47.3 | 1792 | <0.0001 | II
R.200.100.30 | 4216 | 4216 | 0 | 4216 | 4216 | 0 | 4216 | 1 | –
R.200.100.60 | 71749 | 71749 | 0 | 71749 | 71749 | 0 | 71749 | 1 | –
R.200.1000.1 | 1404 | 1445.4 | 15.3 | 1408 | 1423.2 | 6.8 | 1407 | <0.0001 | II
R.200.1000.15 | 20481 | 21586.1 | 373.6 | 20837 | 20639.2 | 200.2 | 20481 | <0.0001 | II
R.200.1000.30 | 41196 | 41196 | 0 | 41196 | 41196 | 0 | 41196 | 1 | –
R.200.1000.60 | 71556 | 71556 | 0 | 71556 | 71556 | 0 | 71556 | 1 | –
R.300.100.1 | 26 | 43.8 | 5 | 37 | 27.8 | 1.7 | 26 | <0.0001 | II
R.300.100.15 | 3161 | 3297.7 | 29.3 | 3238 | 3174.2 | 47.7 | 3152 | <0.0001 | II
R.300.100.30 | 6120 | 6120.9 | 1 | 6120 | 6120 | 0 | 6120 | <0.0001 | II
R.300.100.60 | 9726 | 9726 | 0 | 9726 | 9726 | 0 | 9726 | 1 | –
R.300.1000.1 | 1294 | 1398.3 | 26.9 | 1356 | 1394.1 | 19.9 | 1355 | 0.5691 | –
R.300.1000.15 | 29183 | 31132.5 | 542.8 | 30013 | 29309.7 | 193.5 | 29026 | <0.0001 | II
R.300.1000.30 | 54147 | 54179.3 | 50.8 | 54147 | 54147 | 0 | 54147 | <0.0001 | II
R.300.1000.60 | 109471 | 109490.8 | 45.1 | 109471 | 109471 | 0 | 109471 | 0.0214 | II
R.400.100.1 | 13 | 33.2 | 4.2 | 25 | 14.6 | 1.9 | 13 | <0.0001 | II
R.400.100.15 | 3906 | 4184.1 | 50.4 | 4084 | 3902.6 | 12.6 | 3883 | <0.0001 | II
R.400.100.30 | 8165 | 8165.9 | 0.3 | 8165 | 8165 | 0 | 8165 | <0.0001 | II
R.400.100.60 | 15228 | 15228 | 0 | 15228 | 15228 | 0 | 15228 | 1 | –
R.400.1000.1 | 1343 | 1471.3 | 24.7 | 1426 | 1497 | 25 | 1454 | 0.0002 | I
R.400.1000.15 | 43268 | 41821.2 | 504.7 | 40869 | 39207.1 | 148 | 38963 | <0.0001 | II
R.400.1000.30 | 85128 | 85253.2 | 105.6 | 85146 | 85128 | 0 | 85128 | <0.0001 | II
R.400.1000.60 | 140816 | 140816 | 0 | 140816 | 140816 | 0 | 140816 | 1 | –
R.500.100.1 | 4 | 24 | 4 | 13 | 4.9 | 0.9 | 4 | <0.0001 | II
R.500.100.15 | 5361 | 5739 | 76.8 | 5578 | 5348.7 | 27 | 5284 | <0.0001 | II
R.500.100.30 | 9665 | 9696.1 | 9.1 | 9687 | 9667.6 | 1 | 9665 | <0.0001 | II
R.500.100.60 | 18240 | 18255.9 | 4.3 | 18240 | 18240 | 0 | 18240 | <0.0001 | II
R.500.1000.1 | 1316 | 1465.6 | 30.9 | 1416 | 1490.7 | 31.5 | 1438 | 0.0043 | I
R.500.1000.15 | 50725 | 53818.4 | 726.2 | 52549 | 49815.3 | 197.8 | 49504 | <0.0001 | II
R.500.1000.30 | 98987 | 99240.5 | 117.9 | 99018 | 98987 | 0 | 98987 | <0.0001 | II
R.500.1000.60 | 178212 | 178259.5 | 123.1 | 178212 | 178212 | 0 | 178212 | 0.0418 | II
R.600.100.1 | 1 | 17.4 | 3 | 12 | 1.7 | 1 | 1 | <0.0001 | II
R.600.100.15 | 5684 | 6143.7 | 114.4 | 5919 | 5544 | 30.1 | 5493 | <0.0001 | II
R.600.100.30 | 12465 | 12484.6 | 13.7 | 12468 | 12465 | 0 | 12465 | <0.0001 | II
R.600.100.60 | 23293 | 23293 | 0 | 23293 | 23293 | 0 | 23293 | 1 | –
R.600.1000.1 | 1337 | 1515.7 | 27.7 | 1446 | 1540.2 | 25.6 | 1492 | 0.001 | I
R.600.1000.15 | 57237 | 60100.8 | 785.7 | 58426 | 55891.4 | 446.3 | 55213 | <0.0001 | II
R.600.1000.30 | 126798 | 127106.1 | 339.8 | 126798 | 126798 | 0 | 126798 | <0.0001 | II
R.600.1000.60 | 214608 | 214608 | 0 | 214608 | 214608 | 0 | 214608 | 1 | –
R.700.100.1 | 1 | 10.9 | 2.2 | 7 | 1 | 0 | 1 | <0.0001 | II
R.700.100.15 | 7311 | 7707 | 90.4 | 7560 | 7125.3 | 52.3 | 7021 | <0.0001 | II
R.700.100.30 | 14510 | 14563.9 | 24.7 | 14516 | 14510 | 0 | 14510 | <0.0001 | II
R.700.100.60 | 24102 | 24103.7 | 4.3 | 24102 | 24102 | 0 | 24102 | 0.0215 | II
R.700.1000.1 | 1231 | 1463 | 34.6 | 1382 | 1484.9 | 29.7 | 1423 | 0.028 | I
R.700.1000.15 | 66837 | 70778.6 | 784.6 | 69410 | 65935.9 | 436.9 | 65305 | <0.0001 | II
R.700.1000.30 | 134474 | 135480 | 339.9 | 134942 | 134474 | 0 | 134474 | <0.0001 | II
R.700.1000.60 | 245589 | 245589 | 0 | 245589 | 245589 | 0 | 245589 | 1 | –
Table 8: Results of the EACS with the SOP-3-exchange (I) and the EACS-SA with the SOP-3-exchange-SA (II) on the R.*.1000.1 instances with a 1200-second time limit per run.

Instance | Best known | I: Avg. | I: Std. dev. | I: Best | II: Avg. | II: Std. dev. | II: Best | p-value | Sig. better
R.300.1000.1 | 1294 | 1397.7 | 27.9 | 1360 | 1361.3 | 13.7 | 1339 | <0.0001 | II
R.400.1000.1 | 1343 | 1456.0 | 26.1 | 1414 | 1462.1 | 24.9 | 1421 | 0.3254 | –
R.500.1000.1 | 1316 | 1463.1 | 28.8 | 1416 | 1459.1 | 20.7 | 1411 | 0.7006 | –
R.600.1000.1 | 1337 | 1504.2 | 30.2 | 1455 | 1508.8 | 26.0 | 1447 | 0.3477 | –
R.700.1000.1 | 1231 | 1445.6 | 26.3 | 1398 | 1430.2 | 24.1 | 1381 | 0.0281 | II
5.5 Speed comparison
The algorithms differ not only in the quality of the generated solutions but also in their relative speed. The SA component does not affect the asymptotic time complexity of the ACS and EACS, but it may influence the solution search "trajectory", thus possibly impacting the runtime, particularly if a local search is used. The SOP-3-exchange tries to improve a solution by searching only for improving changes (moves), and its running time depends on the relative order of nodes in the solution. If the solution changes only slightly from iteration to iteration, the runtime shortens because the search focuses only on the changed parts of the solution. In contrast, the SOP-3-exchange-SA, due to the SA component, may also accept a number of worse (uphill) moves, hence the overall runtime should increase. Figure 11 shows a bar plot of the average number of iterations performed by the EACS and EACS-SA with both LS variants vs the size of the SOP instance. As expected, the algorithms with the SOP-3-exchange-SA were slower than the algorithms with the SOP-3-exchange. The fastest algorithm was the EACS with the SOP-3-exchange, beating the EACS-SA with the same LS. Interestingly, the slowest algorithm was the EACS with the SOP-3-exchange-SA; it was even slower than the EACS-SA with the same LS. This is probably because the EACS can relatively easily get trapped in a "deep" local minimum from which escape is difficult even if the SOP-3-exchange-SA accepts a number of uphill moves. On the other hand, the EACS-SA visits a larger number of different solutions during the search, some of which are less time-consuming to improve by the LS. Finally, the larger the instance, the lower the number of iterations performed by the algorithms.
6 Summary
The Ant Colony System and particularly its enhanced version (EACS) are competitive metaheuristics whose efficiency has been shown in a number of cases a9 ; a11 ; a12 . Nevertheless, we have shown that the search process of the ACS and EACS can still be improved with ideas taken from Simulated Annealing. Specifically, instead of increasing the pheromone values based on the current best solution, the proposed ACS-SA and EACS-SA algorithms increase the pheromone values based on the current active solution, which is chosen from among all the solutions constructed by the ants. The active solution is not necessarily the current best solution, as it is selected probabilistically using the Metropolis criterion from the SA. This change weakens the exploitative focus of the ACS and EACS, thus increasing the chance of escaping local optima. The computational experiments on a set of SOP instances from the TSPLIB repository and subsequent statistical analyses have shown that in most cases the resulting ACS-SA and EACS-SA algorithms perform significantly better than the original algorithms.
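The active-solution mechanism described above can be sketched as follows. This is our illustrative reconstruction, not the authors' code: the names `metropolis_accept` and `select_active_solution` are ours, solutions are represented only by their costs, and the temperature handling and the pheromone update itself are omitted.

```python
import math
import random

def metropolis_accept(active_cost, candidate_cost, temperature, rng=random):
    """Metropolis criterion from SA: always accept an improving candidate,
    otherwise accept with probability exp(-delta / temperature)."""
    delta = candidate_cost - active_cost
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

def select_active_solution(active_cost, ant_costs, temperature, rng=random):
    """Pass each ant's solution (here, just its cost) through the Metropolis
    test; an accepted one replaces the active solution. The pheromone update
    would then reinforce the final active solution instead of the best one."""
    for cost in ant_costs:
        if metropolis_accept(active_cost, cost, temperature, rng):
            active_cost = cost
    return active_cost
```

At high temperatures almost any ant solution can become active (exploration), while as the temperature drops the behaviour approaches the original best-solution update (exploitation).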
An efficient local search heuristic is necessary for state-of-the-art performance in solving the SOP. Based on the same SA inspirations, we proposed an enhanced version of the state-of-the-art SOP-3-exchange heuristic by Gambardella a11 . The resulting SOP-3-exchange-SA algorithm is more resilient to getting trapped in local minima, at the expense of increased computation time. The computational experiments conducted on a set of 48 SOP instances with sizes from 200 to 700 showed that the proposed EACS and EACS-SA with the SOP-3-exchange and SOP-3-exchange-SA local searches are in many cases able to find solutions of better quality than the original EACS with the SOP-3-exchange (a current state-of-the-art metaheuristic for the SOP) within the same computation time limit. In fact, new best solutions were obtained for 10 instances. In total, the best-known or improved solutions were obtained at least once for 85% of the SOP instances considered here.
Although the proposed modifications are easy to implement and improve the performance of the original algorithms, they have some minor drawbacks. First, they increase the computation time relative to the original algorithms. Second, they require setting the values of the new parameters related to the SA cooling schedule. Also, the relatively poor performance for SOP instances with a small number (1%) of precedence constraints shows that there is still room for improvement, both in the ACS-SA and EACS-SA and in the local search methods.
In the future, a more advanced cooling schedule could be used to improve the convergence of the SA component of the proposed algorithms. A good candidate seems to be the adaptive cooling schedule proposed by Lam a30 , although it requires a complex parameter setting and a method of controlling how much the subsequent solutions differ from one another. Another interesting idea would be to activate the SA component only if search-process stagnation is detected. Because the proposed fusion of the ACS and SA is problem-agnostic, one could try to apply it to other difficult combinatorial optimization problems. The performance of the proposed algorithms in terms of computation time could also be improved with the help of parallel computations, as the ACS is amenable to parallelization, including on modern GPUs skinderowicz2016 .
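For context, the simplest alternative to such adaptive schedules is a geometric cooling schedule, in which the temperature is multiplied by a constant factor after each step. The sketch below is purely illustrative; `t0` and `alpha` stand in for the SA cooling-schedule parameters, whose actual names and values this excerpt does not specify.

```python
def geometric_cooling(t0, alpha, steps):
    """Yield temperatures T_k = t0 * alpha**k for k = 0 .. steps-1.
    alpha in (0, 1): the closer to 1, the more slowly the acceptance
    of uphill moves is throttled."""
    t = t0
    for _ in range(steps):
        yield t
        t *= alpha
```

An adaptive schedule such as Lam's instead tunes the temperature online to keep the acceptance rate near a target trajectory, which is precisely what makes its parameter setting more complex.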
Acknowledgments: This research was supported in part by PLGrid Infrastructure.
References
 [1] Davide Anghinolfi, Roberto Montemanni, Massimo Paolucci, and Luca Maria Gambardella. A hybrid particle swarm optimization approach for the sequential ordering problem. Computers & OR, 38(7):1076–1085, 2011.
 [2] Norbert Ascheuer. Hamiltonian path problems in the online optimization of flexible manufacturing systems. PhD thesis, Technische Universität Berlin, 1995.
 [3] Mir M. Atiqullah. An efficient simple cooling schedule for simulated annealing. In Antonio Laganà, Marina L. Gavrilova, Vipin Kumar, Youngsong Mun, Chih Jeng Kenneth Tan, and Osvaldo Gervasi, editors, Computational Science and Its Applications – ICCSA 2004, International Conference, Assisi, Italy, May 14–17, 2004, Proceedings, Part III, volume 3045 of Lecture Notes in Computer Science, pages 396–404. Springer, 2004.
 [4] Masri Binti Ayob and Ghaith M. Jaradat. Hybrid ant colony systems for course timetabling problems. In Proceedings of the 2nd Conference on Data Mining and Optimization, DMO 2009, Universiti Kebangsaan Malaysia, 27–28 October 2009, pages 120–126. IEEE, 2009.
 [5] J. Behnamian, M. Zandieh, and S.M.T. Fatemi Ghomi. Parallel-machine scheduling problems with sequence-dependent setup times using an ACO, SA and VNS hybrid algorithm. Expert Systems with Applications, 36(6):9637–9644, 2009.
 [6] Jon Louis Bentley. Fast algorithms for geometric traveling salesman problems. INFORMS Journal on Computing, 4(4):387–411, 1992.
 [7] Lyamine Bouhafs, Amir Hajjam, and Abder Koukam. A combination of simulated annealing and ant colony system for the capacitated location-routing problem. In Bogdan Gabrys, Robert J. Howlett, and Lakhmi C. Jain, editors, Knowledge-Based Intelligent Information and Engineering Systems, 10th International Conference, KES 2006, Bournemouth, UK, October 9–11, 2006, Proceedings, Part I, volume 4251 of Lecture Notes in Computer Science, pages 409–416. Springer, 2006.
 [8] Olivier Catoni. Rough large deviation estimates for simulated annealing: Application to exponential schedules. The Annals of Probability, 20(3):1109–1146, 1992.
 [9] Chia-Ho Chen and Ching-Jung Ting. A hybrid ant colony system for vehicle routing problem with time windows. Journal of the Eastern Asia Society for Transportation Studies, 6:2822–2836, 2005.
 [10] Shyi-Ming Chen and Chih-Yao Chien. Solving the traveling salesman problem based on the genetic simulated annealing ant colony system with particle swarm optimization techniques. Expert Syst. Appl., 38(12):14439–14450, 2011.
 [11] Zbigniew J. Czech, Wojciech Mikanik, and Rafal Skinderowicz. Implementing a parallel simulated annealing algorithm. In Roman Wyrzykowski, Jack Dongarra, Konrad Karczewski, and Jerzy Wasniewski, editors, Parallel Processing and Applied Mathematics, 8th International Conference, PPAM 2009, Wroclaw, Poland, September 13–16, 2009. Revised Selected Papers, Part I, volume 6067 of Lecture Notes in Computer Science, pages 146–155. Springer, 2009.

 [12] Marco Dorigo and Luca Maria Gambardella. Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans. Evolutionary Computation, 1(1):53–66, 1997.
 [13] Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 26(1):29–41, 1996.
 [14] Marco Dorigo and Thomas Stützle. Ant colony optimization. MIT Press, 2004.
 [15] Amir Hajjam El Hassani, Abder Koukam, and Lyamine Bouhafs. A hybrid ant colony system approach for the capacitated vehicle routing problem and the capacitated vehicle routing problem with time windows. INTECH Open Access Publisher, 2008.
 [16] L.F. Escudero. An inexact algorithm for the sequential ordering problem. European Journal of Operational Research, 37(2):236–249, 1988.
 [17] L.F. Escudero, Monique Guignard, and Kavindra Malik. A Lagrangian relax-and-cut approach for the sequential ordering problem with precedence relationships. Annals of Operations Research, 50(1):219–237, 1994.
 [19] Ahmed Ezzat and Ashraf M. Abdelbar. A less exploitative variation of the enhanced ant colony system applied to SOP. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2013, Cancun, Mexico, June 20–23, 2013, pages 1917–1924. IEEE, 2013.
 [20] Ahmed Ezzat, Ashraf M. Abdelbar, and Donald C. Wunsch II. A barebones ant colony optimization algorithm that performs competitively on the sequential ordering problem. Memetic Computing, 6(1):19–29, 2014.
 [21] Luca Maria Gambardella and Marco Dorigo. An ant colony system hybridized with a new local search for the sequential ordering problem. INFORMS Journal on Computing, 12(3):237–255, 2000.
 [22] Luca Maria Gambardella, Roberto Montemanni, and Dennis Weyland. An enhanced ant colony system for the sequential ordering problem. In Diethard Klatte, Hans-Jakob Lüthi, and Karl Schmedders, editors, Operations Research Proceedings 2011, Selected Papers of the International Conference on Operations Research (OR 2011), August 30 – September 2, 2011, Zurich, Switzerland, pages 355–360. Springer, 2011.
 [23] Luca Maria Gambardella, Roberto Montemanni, and Dennis Weyland. Coupling ant colony systems with strong local searches. European Journal of Operational Research, 220(3):831–843, 2012.
 [24] Fred W Glover and Gary A Kochenberger. Handbook of metaheuristics, volume 57. Springer Science & Business Media, 2006.
 [25] Luis Gouveia and Pierre Pesneau. On extended formulations for the precedence constrained asymmetric traveling salesman problem. Networks, 48(2):77–89, 2006.
 [26] Luis Gouveia and Mario Ruthmair. Loaddependent and precedencebased models for pickup and delivery problems. Computers & OR, 63:56–71, 2015.
 [27] Francesca Guerriero and M. Mancini. A cooperative parallel rollout algorithm for the sequential ordering problem. Parallel Computing, 29(5):663–677, 2003.
 [28] Keld Helsgaun. An effective implementation of k-opt moves for the Lin-Kernighan TSP heuristic. PhD thesis, Roskilde University, Department of Computer Science, 2006.
 [29] István T Hernádvölgyi. Solving the sequential ordering problem with automatically generated lower bounds. In Operations Research Proceedings 2003, pages 355–362. Springer, 2004.
 [30] Myles Hollander, Douglas A Wolfe, and Eric Chicken. Nonparametric statistical methods. John Wiley & Sons, 2013.
 [31] Lester Ingber. Simulated annealing: Practice versus theory. Mathematical and computer modelling, 18(11):29–57, 1993.
 [32] Alan R. McKendall Jr. and Jin Shang. Hybrid ant systems for the dynamic facility layout problem. Computers & OR, 33:790–803, 2006.
 [33] Jimmy Lam. An efficient simulated annealing schedule. Yale University, New Haven, CT, 1988.
 [34] Gregory A. Mack and John H. Skillings. A Friedman-type rank test for main effects in a two-factor ANOVA. Journal of the American Statistical Association, 75(372):947–951, 1980.
 [35] Roberto Montemanni, D. H. Smith, and Luca Maria Gambardella. A heuristic manipulation technique for the sequential ordering problem. Computers & OR, 35(12):3931–3944, 2008.
 [36] Vassilis Papapanagiotou, J. Jamal, Roberto Montemanni, Ghassan Shobaki, and Luca Maria Gambardella. A comparison of two exact algorithms for the sequential ordering problem. In 2015 IEEE Conference on Systems, Process and Control (ICSPC), pages 73–78. IEEE, 2015.
 [37] Martín Pedemonte, Sergio Nesmachnow, and Héctor Cancela. A survey on parallel ant colony optimization. Appl. Soft Comput., 11(8):5181–5197, 2011.
 [38] Marcus Randall and James Montgomery. Candidate set strategies for ant colony optimisation. In Ant Algorithms, Third International Workshop, ANTS 2002, Brussels, Belgium, September 12–14, 2002, Proceedings, pages 243–249, 2002.
 [39] F. Romeo, A. Sangiovanni-Vincentelli, and M.D. Huang. An efficient general cooling schedule for simulated annealing. In Proceedings of the IEEE International Conference on Computer-Aided Design, 1986.
 [40] David J. Sheskin. Handbook of parametric and nonparametric statistical procedures. CRC Press, 2003.
 [41] Rafal Skinderowicz. The GPU-based parallel ant colony system. J. Parallel Distrib. Comput., 98:48–60, 2016.
 [42] Tiago Sousa, Zita Vale, Joao Paulo Carvalho, Tiago Pinto, and Hugo Morais. A hybrid simulated annealing approach to handle energy resource management considering an intensive use of electric vehicles. Energy, 67:81–96, 2014.
 [43] Guendouzi Wassila and Boukra Abdelmadjid. A hybrid intrusion detection approach using ant colony system and simulated annealing (ACS-SA). In Proceedings of the International Conference on Intelligent Information Processing, Security and Advanced Communication, IPAC '15, pages 55:1–55:5, New York, NY, USA, 2015. ACM.
 [44] Qi Xu, Song Chen, and Bin Li. Combining the ant system algorithm and simulated annealing for 3D/2D fixed-outline floorplanning. Appl. Soft Comput., 40:150–160, 2016.