Attention Solves Your TSP

03/22/2018
by W. W. M. Kool, et al.

We propose a framework for solving combinatorial optimization problems whose output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%).
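The training scheme described above uses the cost of a deterministic greedy rollout of the best policy found so far as the REINFORCE baseline. A minimal sketch of that gradient estimator is given below; it is an illustration under assumptions, not the paper's implementation, and the `model.sample` and `baseline_model.greedy` methods are hypothetical APIs that would return sampled/greedy tours (and log-likelihoods) from the attention-based policy.

```python
import torch

def tour_length(coords, tours):
    # coords: (batch, n, 2) node coordinates; tours: (batch, n) node permutations.
    ordered = coords.gather(1, tours.unsqueeze(-1).expand(-1, -1, 2))
    rolled = torch.roll(ordered, shifts=-1, dims=1)
    return (ordered - rolled).norm(dim=-1).sum(dim=1)

def reinforce_loss(model, baseline_model, coords):
    # Sample tours from the current policy; `model.sample` (hypothetical) returns
    # the tours and the log-likelihood of those tours under the policy.
    tours, log_likelihood = model.sample(coords)
    cost = tour_length(coords, tours)

    # Baseline: cost of a deterministic (greedy) rollout of the frozen
    # best-policy-so-far; `baseline_model.greedy` is likewise hypothetical.
    with torch.no_grad():
        greedy_tours, _ = baseline_model.greedy(coords)
        baseline_cost = tour_length(coords, greedy_tours)

    # REINFORCE with rollout baseline: E[(L(pi) - b(s)) * grad log p(pi | s)].
    advantage = (cost - baseline_cost).detach()
    return (advantage * log_likelihood).mean()
```

In this sketch the baseline network's parameters are only updated (copied from the trained policy) when the new policy is significantly better, so the advantage rewards tours that beat a greedy rollout of the best model found during training.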


Related research

01/09/2023 - BQ-NCO: Bisimulation Quotienting for Generalizable Neural Combinatorial Optimization
Despite the success of Neural Combinatorial Optimization methods for end...

05/06/2021 - Solve routing problems with a residual edge-graph attention neural network
For NP-hard combinatorial optimization problems, it is usually difficult...

01/05/2020 - Learning fine-grained search space pruning and heuristics for combinatorial optimization
Combinatorial optimization problems arise in a wide range of application...

06/09/2015 - Pointer Networks
We introduce a new neural architecture to learn the conditional probabil...

07/04/2022 - The Neural-Prediction based Acceleration Algorithm of Column Generation for Graph-Based Set Covering Problems
Set covering problem is an important class of combinatorial optimization...

07/03/2019 - Co-training for Policy Learning
We study the problem of learning sequential decision-making policies in ...

09/14/2021 - Rationales for Sequential Predictions
Sequence models are a critical component of modern NLP systems, but thei...
