The Travelling Salesman Problem (TSP) is one of the most investigated problems in the Combinatorial Optimization (CO) field. This is partly due to the fact that it is NP-hard, which makes it particularly challenging. Moreover, the many practical problems that can be reduced to it – such as in Kumar and Luo, where TSP models are applied to the manufacture of microchips – make it even more attractive. At the same time, the full potential of Machine Learning (ML) and Deep Learning (DL) techniques is becoming increasingly recognized in the CO field.
Mele et al. recently introduced ML-Constructive, a promising constructive approach that computes fast solutions in two separate phases. The first phase uses ML to create a partial solution containing the most reliable edges, while the second phase employs a classic heuristic to complete the tour. Here we introduce an extension of the original idea to enhance the performance of the ML-Constructive algorithm.
In section 1 we formally state the Travelling Salesman Problem and present a brief literature review. A high-level description of the original method is given in section 2. In section 3 we present the changes we made to improve the algorithm. Finally, in section 4 the results of the new approach are shown and discussed.
1.1 The Travelling Salesman Problem
Let us consider the complete graph G = (V, E), where V = {1, 2, ..., n} is the set of nodes and E is the set of edges connecting the nodes to each other. Also, let c(i, j) be the cost of the edge connecting node i to node j. The objective of the Travelling Salesman Problem is to find the shortest possible tour that visits each node exactly once and then returns to the first node. The largest TSP instance solved to optimality required more than 136 years of CPU time; the computation was carried out with the fast Concorde solver. The NP-hard nature of the problem makes the development of algorithms that compute good approximate solutions with high confidence, even on large instances, fundamental.
An effective way to heuristically reduce the complexity of a TSP instance is to consider only a subset of the edges when building a feasible tour. A candidate list for node i is defined as the set of edges incident to i that are most likely to be part of the optimal tour. Several methods exist to create candidate lists, the simplest of which considers only the edges connecting the closest nodes to each node.
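As an illustration, a minimal sketch of this nearest-neighbour candidate-list constructor (the function name and the list size k are our own illustrative choices, not values prescribed by the paper):

```python
import math

def nearest_neighbor_candidate_lists(points, k=5):
    """Build a candidate list for each node containing its k closest nodes.

    points: list of (x, y) coordinates; k is an illustrative choice.
    """
    n = len(points)
    candidate_lists = []
    for i in range(n):
        # Sort the other nodes by Euclidean distance from node i.
        others = sorted(
            (j for j in range(n) if j != i),
            key=lambda j: math.dist(points[i], points[j]),
        )
        candidate_lists.append(others[:k])
    return candidate_lists
```

Each list keeps only the k cheapest outgoing edges, which is enough to drastically shrink the search space on Euclidean instances.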
1.2 A Brief Literature Review
It is well known that an efficient way to solve large Combinatorial Optimization problems is to employ the Divide-and-Conquer paradigm. Such a paradigm is promising for addressing CO problems with Machine Learning as well, since ML models suffer from an intrinsic generalization problem when trying to scale up to large instances, due to well-known ML limits (e.g., imbalanced training).
Valuable surveys describing recent approaches that use ML to generate solutions for CO problems can be found in [16, 2]. Several approaches employing DL networks to solve the TSP with an end-to-end methodology have been presented, among which the studies carried out by Miki et al., Kool et al. and Mele et al.
The best proposal at the moment is the ML-Constructive heuristic, which focuses on the development of an efficient interaction between Machine Learning and Combinatorial Optimization techniques. It uses candidate lists (CLs) as input to the ML model, and is able to scale up with satisfactory results. Other approaches attempting to solve the scalability issue were introduced by Fu et al. and by Fitzpatrick et al., where Machine Learning is used to construct CLs and classical heuristics are then applied for the tour construction.
2 The Original ML-Constructive Heuristic
The ML-Constructive heuristic is a constructive hybrid algorithm composed of two phases. The first phase exploits Machine Learning's ability to detect specific patterns in order to create an initial partial solution. This solution comprises the edges most likely to be part of an optimal tour according to the patterns learnt by the ML. The second phase instead uses a well-known heuristic to complete the solution; in fact, some difficulties may arise with ML where data is not adequate. Further details can be found in Mele et al.
In order to initialize the problem, reduce the search space and create valid inputs for the Machine Learning model, ML-Constructive initially computes a candidate list for each node. Then, a list of promising edges is created, containing for each CL the edges connecting the two closest vertices. The Machine Learning model is in charge of deciding whether an edge has to be used in the solution or not. To do so, it learns the probability that these edges have of being optimal, considering just the CL of the considered edge as input. First, the insertion feasibility of the edge is checked against the current partial solution; then the ML predicts the probability of the edge being optimal. If such probability is greater than a certain threshold, the edge is inserted into the current partial solution.
The order in which these insertion requests are tackled is a fundamental choice. In ML-Constructive, the list is sorted according to the positions in the CL and then by non-decreasing cost. The edges connecting each node to its nearest neighbour in the CL are placed first, followed by those connecting to the second closest.
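The first-phase loop can be sketched as follows. The names `predict_prob` and `threshold` are placeholders for the trained ML model and its acceptance threshold; the feasibility check (node degree below two and no premature subtour) follows the standard constructive-heuristic constraints:

```python
class DisjointSet:
    """Union-find structure used to detect premature subtour closure."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def first_phase(promising_edges, n, predict_prob, threshold=0.5):
    """Insert an edge only if it is feasible and the ML model is confident.

    promising_edges: list of (cl_position, cost, i, j) tuples; sorting them
    processes first-neighbour edges before second-neighbour ones, each group
    by non-decreasing cost. predict_prob and threshold are placeholders.
    """
    degree = [0] * n
    ds = DisjointSet(n)
    partial = []
    for _, _, i, j in sorted(promising_edges):
        # Feasibility: both endpoints free and no subtour would be closed.
        if degree[i] < 2 and degree[j] < 2 and ds.find(i) != ds.find(j):
            if predict_prob(i, j) > threshold:
                partial.append((i, j))
                degree[i] += 1
                degree[j] += 1
                ds.union(i, j)
    return partial
```

Edges rejected here are simply left to the second phase, which completes the tour.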
To complete the tour obtained at the end of the Machine Learning phase, the Clarke-Wright (CW) savings heuristic is used. Note that no change is made to the edges inserted during the first phase.
3 Improvements to ML-Constructive
The original algorithm uses a ResNet architecture to confirm the addition of an edge to the solution. Such an architecture carries a high computational cost that we would like to avoid. Our first contribution attempts to reduce it by replacing the ResNet with a different ML model; several alternative ML models were examined.
In addition, since ML-Constructive has some shortcomings in the heuristic part too, a different CL constructor and a third phase are introduced as well. The new CL constructor exploits the Delaunay triangulation to speed up the creation of the lists. The third phase instead increases the quality of the complete solution by applying a local search to the most uncertain edges, since the CW solution can be largely improved. We point out that the second phase is kept unchanged from the original algorithm. As shown by Mele et al., even when ML-Constructive is able to predict all the optimal edges in the list of promising edges, it sometimes fails to reach the complete optimal solution.
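A minimal sketch of the Delaunay-based idea, assuming SciPy is available: each node's candidate list is taken as the nodes it shares a triangle edge with. This illustrates the principle only; the constructor used in the paper may post-process these lists.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_candidate_lists(points):
    """Candidate lists from the Delaunay triangulation.

    Each node's list contains the nodes adjacent to it in the triangulation,
    which can be computed in O(n log n) time overall.
    """
    tri = Delaunay(np.asarray(points, dtype=float))
    neighbors = [set() for _ in range(len(points))]
    for simplex in tri.simplices:  # each simplex is a triangle (i, j, k)
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[int(a)].add(int(b))
    return [sorted(s) for s in neighbors]
```

Unlike the plain k-nearest-neighbour lists, the Delaunay adjacency adapts its size to the local geometry of the instance.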
3.1 First Phase: Machine Learning Models
To find an ML model that works accurately and quickly, several ML models were tried out and tested. Their performance in terms of prediction quality and tour construction is shown in Tables 1 and 2, respectively. Five thousand instances were randomly generated to train these models. The points were sampled in the unit square, and the optimal solutions were computed with the Concorde solver. The costs between the nodes in the CL of the considered node were employed as input to the ML, where the cost is the Euclidean distance between vertices. In addition, a vector indicating whether each such edge is in the current partial solution or not is also provided. Given this input vector, the ML model is asked to predict whether the first or second neighbour in the CL of the node is optimal.
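A hedged sketch of this input encoding: pairwise costs within the CL, concatenated with a binary flag per CL edge marking membership in the current partial solution. The exact feature layout is our illustrative assumption; the paper defines the precise one.

```python
import math

def build_features(points, cl, partial_edges):
    """Encode the CL of a node as an ML input vector.

    points: list of (x, y) coordinates; cl: node indices in the CL;
    partial_edges: set of (i, j) tuples already in the partial solution.
    Layout (illustrative): pairwise Euclidean costs, then binary flags.
    """
    pairs = [(a, b) for a in cl for b in cl if a < b]
    costs = [math.dist(points[a], points[b]) for a, b in pairs]
    in_partial = [
        1 if (a, b) in partial_edges or (b, a) in partial_edges else 0
        for a, b in pairs
    ]
    return costs + in_partial
```

Keeping the input restricted to the CL is what makes the predictor's cost independent of the instance size.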
With the aim of preserving a consistent balance between training and testing, the CLs in the training set were filtered according to the partial optimal solution found using the ML-Constructive constraints and iterations.
| 1st edge | Baseline | 0.782 | 0.499 | 0.867 | 0.885 | 0.886 |
| 1st edge | Linear US [21, 12] | 0.436 | 0.650 | 0.975 | 0.359 | 0.059 |
| 1st edge | Ensemble [20, 3] | 0.525 | 0.679 | 0.962 | 0.456 | 0.099 |
| 2nd edge | Baseline | 0.501 | 0.500 | 0.511 | 0.512 | 0.512 |
| 2nd edge | Ensemble [20, 3] | 0.411 | 0.514 | 0.722 | 0.075 | 0.047 |
To accomplish the task, several approaches were employed: the baseline predictor, which predicts at random using the empirical probabilities of the CL positions; the ResNet architecture introduced by Mele et al.; a linear classifier; a linear SVM; and finally an ensemble voting classifier that also includes an XGBoost model. The latter shows the best performance on the test set. Since the first-edge class is quite over-represented, we also applied an under-sampling technique. More details on the training settings can be found in the online compendium: all the code for the experiments, the data creation, the training and testing, along with the online compendium, can be found in the GitHub repository at https://github.com/tommivitali/ML-Constructive_LS.
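The class-balancing step can be sketched as plain random undersampling. This is a deliberately minimal version; the technique of Liu et al. cited in the paper (exploratory undersampling) is more elaborate.

```python
import random

def undersample(samples, labels, seed=0):
    """Randomly drop majority-class samples until the classes are balanced.

    samples/labels: parallel lists with binary labels (0/1).
    A minimal random-undersampling sketch, not the exact method of the paper.
    """
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    majority, minority = (pos, neg) if len(pos) > len(neg) else (neg, pos)
    kept = rng.sample(majority, len(minority)) + minority
    rng.shuffle(kept)
    return [samples[i] for i in kept], [labels[i] for i in kept]
```

Balancing the classes this way prevents the model from trivially predicting the over-represented first-edge class.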
3.2 Third Phase: Local Search
ML-Constructive provides good approximate tours, which can nevertheless still be improved, since crossing edges may appear in them. To obtain better solutions we extend the heuristic with a further step, which employs a 2-opt local search. In general, such a local search compares every possible pair of edges. However, since we are confident about the choices made in the first phase, here we only try to improve the edges obtained during the second phase: the edges inserted by the ML model (first phase) are not modified.
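The restricted 2-opt step can be sketched as follows; the fixed-edge set represents the phase-one edges, and moves touching them are skipped (function and parameter names are our own):

```python
def two_opt_restricted(tour, dist, fixed_edges):
    """2-opt local search that never breaks an edge fixed in phase one.

    tour: list of node indices; dist: cost matrix dist[i][j];
    fixed_edges: set of frozensets {i, j} inserted by the ML phase.
    A move is applied only if it shortens the tour.
    """
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if b == c or a == d:
                    continue
                # Skip moves that would remove a phase-one edge.
                if frozenset((a, b)) in fixed_edges or frozenset((c, d)) in fixed_edges:
                    continue
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    # Reverse the segment between the two removed edges.
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Restricting the neighbourhood in this way keeps the phase-one edges intact while still removing the crossings introduced by the completion heuristic.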
4 Computational Results

To compare the results obtained by the original version of ML-Constructive and what is proposed in this work, experiments were carried out on the same 54 instances selected by Mele et al. from the TSPLIB library. The size of the instances varies between 100 and 1748 nodes. A brief recap of the results – with the heuristic executed using several ML models in the first phase – is shown in Table 2. A more detailed version of this table can be found in the online compendium, where the results for each instance are shown and discussed.
The first column, B, is the baseline, while NN confirms an edge if it connects the nearest node in the CL. The other columns show the performance obtained using the various ML models; the column ML-C shows the results of the original ML-Constructive algorithm. In the two columns labelled “LS”, the 2-opt local search is also performed as a third phase of the algorithm. Clearly, this leads to better overall performance: 2-opt moves are applied only if they yield a shorter tour.
The introduction of the new ML models has reduced the computational burden of the first phase. In terms of quality, the use of the SVM also brought a (not statistically significant) improvement compared to ML-C. The result is attractive, as it also leads to a speed-up of about 4x. More work must be carried out to improve the accuracy of the Machine Learning decision-maker; we noticed that keeping a low FPR is preferable to having a high TPR.
The local search we introduced shows an improvement in terms of solution quality as well, although more effort is required to bring the gap of the tour obtained after the local search to zero. Overall, the changes we made led to better performance with respect to the original ML-Constructive, apart from a few particular instances. The promising results obtained by the “optimal” ML policy (OPT) suggest that there is room for improvement along this direction. Recall that the OPT policy is derived under the assumption that the ML decision-maker correctly predicts all the optimal edges in the list of promising edges, without making any mistakes.
Umberto Junior Mele was supported by the Swiss National Science Foundation through grant 200020-182360: “Machine learning and sampling-based metaheuristics for stochastic vehicle routing problems”.
-  Applegate DL, Bixby RE, Chvátal V, Cook WJ (2006) The Traveling Salesman Problem: A Computational Study. Princeton University Press.
-  Bengio Y, Lodi A, Prouvost A (2020) Machine learning for combinatorial optimization: a methodological tour d’horizon. European Journal of Operational Research.
-  Chen T, Guestrin C (2016) XGBoost: a scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785-794.
-  Clarke G, Wright JW (1964) Scheduling of vehicles from a central depot to a number of delivery points. Operations Research, Vol. 12(4), pp. 568-581.
-  Fitzpatrick J, Ajwani D, Carroll P (2021) Learning to sparsify travelling salesman problem instances. International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research, Springer, pp. 410-426.
-  Fu ZH, Qiu KB, Zha H (2020) Generalize a small pre-trained model to arbitrarily large TSP instances. arXiv preprint: 2012.10658.
-  He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778.
-  Hearst MA, Dumais ST, Osuna E, Platt J, Scholkopf B (1998) Support vector machines. IEEE Intelligent Systems and their applications, Vol. 13(4), pp. 18-28.
-  Kool W, Van Hoof H, Welling M (2018) Attention, learn to solve routing problems! arXiv preprint: 1803.08475.
-  Kumar R, Luo Z (2003) Optimizing the operation sequence of a chip placement machine using TSP model. IEEE Transactions on Electronics Packaging Manufacturing, Vol. 26(1), pp. 14-21.
-  Lee DT, Schachter BJ (1980) Two algorithms for constructing a delaunay triangulation. International Journal of Computer & Information Sciences, pp. 219-242.
-  Liu XY, Wu J, Zhou ZH (2008) Exploratory undersampling for class-imbalance learning. IEEE Transactions on Systems, Man, and Cybernetics, pp. 539-550.
-  Marcus G (2018) Deep learning: a critical appraisal. arXiv preprint: 1801.00631.
-  Martin O, Otto SW, Felten EW (1992) Large-step Markov chains for the TSP incorporating local search heuristics. Operations Research Letters, pp. 219-224.
-  Mele UJ, Chou X, Gambardella LM, Montemanni R (2021) Reinforcement learning and additional rewards for the traveling salesman problem. Proceedings of the 8th International Conference on Industrial Engineering and Applications, ACM, (in press).
-  Mele UJ, Gambardella LM, Montemanni R (2021) Machine Learning Approaches for the Traveling Salesman Problem: A Survey. Proceedings of the 8th International Conference on Industrial Engineering and Applications, ACM, (in press).
-  Mele UJ, Gambardella LM, Montemanni R (2021) A new constructive heuristic driven by machine learning for the travelling salesman problem. Submitted for publication.
-  Miki S, Yamamoto D, Ebara H (2018) Applying deep learning and reinforcement learning to traveling salesman problem. International Conference on Computing, Electronics & Communications Engineering, IEEE, pp. 65–70.
-  Reinelt, G. (1991) TSPLIB—A traveling salesman problem library. ORSA journal on computing, Vol. 3(4), pp. 376-384.
-  Wolpert DH (1992) Stacked generalization. Neural Networks, Vol. 5, pp. 241-259.
-  Yu HF, Huang FL, Lin CJ (2011) Dual coordinate descent methods for logistic regression and maximum entropy models. Machine Learning, Vol. 85, pp. 41-75.