
Reinforcement Learning with Combinatorial Actions: An Application to Vehicle Routing

10/22/2020
by   Arthur Delarue, et al.
Google
MIT

Value-function-based methods have long played an important role in reinforcement learning. However, finding the best next action given a value function of arbitrary complexity is nontrivial when the action space is too large for enumeration. We develop a framework for value-function-based deep reinforcement learning with a combinatorial action space, in which the action selection problem is explicitly formulated as a mixed-integer optimization problem. As a motivating example, we present an application of this framework to the capacitated vehicle routing problem (CVRP), a combinatorial optimization problem in which a set of locations must be covered by a single vehicle with limited capacity. On each instance, we model an action as the construction of a single route, and consider a deterministic policy which is improved through a simple policy iteration algorithm. Our approach is competitive with other reinforcement learning methods and achieves an average gap of 1.7% with state-of-the-art OR methods on standard library instances of medium size.



1 Introduction

Reinforcement learning (RL) is a powerful tool that has made significant progress on hard problems. For instance, in games like Atari mnih2013playing or Go silver2017mastering , RL algorithms have devised strategies that significantly surpass the performance of experts, even with little to no knowledge of problem structure. The success of reinforcement learning has carried over to other applications such as robotics OpenAI2020 and recommender systems gauci2018horizon . As they encompass ever more application areas, RL algorithms need to be adjusted and expanded to account for new problem-specific features.

One area in which RL has yet to make a convincing breakthrough is combinatorial optimization. There has been significant effort in recent years to apply RL frameworks to NP-hard combinatorial optimization problems vinyals2015pointer ; bello2016neural ; khalil2017learning , including the traveling salesman problem (TSP) or more general vehicle routing problem (VRP) (see bengio2018machine for a recent survey). Solving these problems produces a practical impact for many industries, including transportation, finance, and energy. Yet in contrast to mnih2013playing ; silver2017mastering , RL methods have yet to match the performance of expert implementations in this domain, such as the state-of-the-art Concorde TSP solver applegate2006traveling ; applegate2006concorde .

A major difficulty of designing an RL framework for combinatorial optimization is formulating the action space. Value-based RL algorithms typically require an action space small enough to enumerate, while policy-based algorithms are designed with continuous action spaces in mind Silver2014. These twin requirements severely limit the expressiveness of the action space, thus placing the entire weight of the problem on the machine learning model. For instance, in nazari2018reinforcement, the selected action space models advancing the current vehicle one city at a time. RL algorithms for combinatorial optimization must therefore rely on complex architectures such as pointer networks vinyals2015pointer ; bello2016neural ; nazari2018reinforcement or graph embeddings khalil2017learning.

This paper presents a different approach, in which the combinatorial complexity is captured not only by the learning model, but also by the formulation of the underlying decision problem. We focus on the Capacitated Vehicle Routing Problem (CVRP), a classical combinatorial problem from operations research toth2014vehicle, where a single capacity-limited vehicle must be assigned one or more routes to satisfy customer demands while minimizing total travel distance. Despite its close relation to the TSP, the CVRP is a much more challenging problem, and optimal approaches do not scale past hundreds of cities. Our approach is to formulate it as a sequential decision problem where the state space is the set of unvisited cities, and the action space consists of feasible routes. We estimate the value function of a state (i.e., its cost-to-go) using a small neural network. The action selection problem is then itself combinatorial, with the structure of a Prize-Collecting Traveling Salesman Problem (PC-TSP) with a knapsack constraint and a nonlinear cost on unvisited cities. Crucially, the PC-TSP is a much easier combinatorial problem than the CVRP, allowing us to tractably exploit some of the combinatorial structure of the problem.

The contributions in this paper are threefold.

  1. We present a policy iteration algorithm for value-based reinforcement learning with combinatorial actions. At the policy improvement step, we train a small neural network with ReLU activations to estimate the value function from each state. At the policy evaluation step, we formulate the action selection problem from each state as a mixed-integer program, in which we combine the combinatorial structure of the action space with the neural architecture of the value function by adapting the branch-and-cut approach described in anderson2020strong.

  2. We apply this technique to develop a reinforcement learning framework for combinatorial optimization problems in general and the Capacitated Vehicle Routing Problem in particular. Our approach differs significantly from existing reinforcement learning algorithms for vehicle routing problems, and allows us to obtain comparable results with much simpler neural architectures.

  3. We evaluate our approach against several baselines on random and standard library instances, achieving an average gap against the OR-Tools routing solver of 1.7% on moderately sized problems. While we do not yet match state-of-the-art operations research methods used to solve the CVRP, we compare favorably with heuristics using oracles of equal strength to our action selection oracle, and with other RL approaches for solving the CVRP nazari2018reinforcement.

2 Background and related work

Combinatorial action spaces.

Discrete, high-dimensional action spaces are common in applications such as natural language processing He2016 and text-based games Zahavy2018, but they pose a challenge for standard RL algorithms Dulac-Arnold2019, chiefly because enumerating the action space when choosing the next action from a state becomes impossible. Recent remedies for this problem include selecting the best action from a random sample He2016, approximating the discrete action space with a continuous one Dulac-Arnold2015 ; He2016a, or training an additional machine learning model to weed out suboptimal actions Zahavy2018. None of these approaches guarantees optimality of the selected action, and projection-based approaches in particular may not be applicable when the structure of the action space is more complex. In fact, even continuous action spaces can prove difficult to handle: for example, safety constraints can lead gradient-based methods to propose infeasible actions. Existing ways to deal with this issue include enhancing the neural network with a safety layer Dalal2018, or modifying the policy improvement algorithm itself Chow2018. More recently, ryu2019caql propose a framework to explicitly formulate the action selection problem using optimization in continuous settings. We note that though combinatorial action spaces pose a challenge in deep reinforcement learning, they have been considered before in approximate dynamic programming settings with linear or convex learners powell2007approximate. In this paper, we formulate and solve the problem of selecting an optimal action from a combinatorial action space using mixed-integer optimization.

Combinatorial optimization and reinforcement learning. In recent years, significant work has been invested in solving NP-hard combinatorial optimization problems using machine learning, notably by developing new architectures such as pointer networks vinyals2015pointer and graph convolutional networks Joshi2019 . Leveraging these architectures, reinforcement learning approaches have been developed for the TSP bello2016neural ; khalil2017learning and some of its vehicle routing relatives Kool2019 , including the CVRP nazari2018reinforcement . Crucially, the CVRP is significantly more challenging than the closely related TSP. While TSPs on tens of thousands of cities can be solved to optimality applegate2009certification , CVRPs with more than a few hundred cities are very hard to solve exactly uchoa2017new , often requiring cumbersome methods such as branch-and-cut-and-price, and motivating the search for alternative solution approaches. This work adopts a hybrid approach, casting a hard combinatorial problem (CVRP) as a sequence of easier combinatorial problems (PC-TSP) in an approximate dynamic programming setting.

Optimizing over trained neural networks. A major obstacle for solving the action selection problem with an exponential discrete action space is the difficulty of finding a global extremum for a nonlinear function (as a neural network typically is) over a nonconvex set. One possible solution is to restrict the class of neural networks used to guarantee convexity Amos2016 , but this approach also reduces the expressiveness of the value function. A more general approach is to formulate the problem as a mixed-integer program (MIP), which can be solved using general-purpose algorithms anderson2020strong . Using an explicit optimization framework allows greater modeling flexibility and has been successfully applied in planning settings say2017nonlinear ; Wu2019 as well as in RL problems with a continuous action space ryu2019caql . Mixed-integer optimization has also proven useful in neural network verification cheng2017maximum ; tjeng2018evaluating . In this paper, we use neural networks with ReLU activations, leveraging techniques developed in anderson2020strong to obtain a strong formulation of the action selection problem.

3 Reinforcement learning model for CVRP

3.1 Problem formulation

A CVRP instance is defined as a set of $n+1$ cities, indexed from $0$ to $n$. Each city $i$ is associated with a demand $d_i$; the distance from city $i$ to city $j$ is denoted $\Delta_{ij}$. City $0$ is called the depot and houses a single vehicle with capacity $Q$ (or equivalently, a fleet of identical vehicles of capacity $Q$). Our goal is to produce routes that start and end at the depot such that each non-depot city is visited exactly once, the total demand of the cities along any one route does not exceed the vehicle capacity $Q$, and the total distance of all routes is minimized. We assume the number of routes we can serve is unbounded, and note that the distance-minimizing solution does not necessarily minimize the number of vehicles used.

We formulate CVRP as a sequential decision problem, where a state corresponds to a set of as-yet-unvisited cities, and an action is a feasible route starting and ending at the depot and covering at least one city. We represent states using a binary encoding $z \in \{0,1\}^n$, where $z_i = 0$ indicates city $i$ has already been visited and $z_i = 1$ indicates it has not (by convention, we never mark the depot as visited). The action space corresponds to the set of all partial permutations of the $n$ cities. Note that the sizes of both the state space and the action space are exponential in $n$, even if, for each subset of cities, we only consider the action corresponding to the shortest route through that subset.

The dynamics of the decision problem are modeled by a deterministic transition function $T(z, a)$, giving the set of remaining unvisited cities in $z$ after serving route $a$, and a cost function $C(z, a)$, indicating the cost (route length) incurred by taking action $a$ from state $z$. Since cities cannot be visited twice, $T(z, a)$ is undefined if $a$ visits a city already marked visited in $z$. For clarity, we define $\mathcal{A}(z)$ as the set of feasible actions from state $z$. The unique terminal state $z^{\mathrm{end}} = \mathbf{0}$ corresponds to no remaining cities except the depot. Finding the best CVRP solution is equivalent to finding a least-cost path from the initial state $z^{\mathrm{start}} = \mathbf{1}$ (where all cities are unvisited) to $z^{\mathrm{end}}$.
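To make the state encoding and deterministic dynamics concrete, the following Python sketch (our own illustration, not the paper's C++ implementation; all helper and variable names are ours) builds a small random instance and implements the transition, cost, and route-feasibility functions described above.

import numpy as np

def make_instance(n, seed=0):
    # Random CVRP instance: depot is index 0, cities 1..n, locations in the unit square.
    rng = np.random.default_rng(seed)
    coords = rng.random((n + 1, 2))
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    demand = np.concatenate(([0], rng.integers(1, 10, size=n)))
    return dist, demand

def route_cost(dist, route):
    # Length of a route [0, c_1, ..., c_k, 0] starting and ending at the depot.
    return sum(dist[i, j] for i, j in zip(route[:-1], route[1:]))

def is_feasible(demand, route, capacity):
    # Capacity feasibility of a single route.
    return sum(demand[c] for c in route[1:-1]) <= capacity

def transition(state, route):
    # Mark the cities served by `route` as visited; undefined if any was already visited.
    new_state = state.copy()
    for c in route[1:-1]:
        assert new_state[c - 1] == 1, "route visits an already-visited city"
        new_state[c - 1] = 0
    return new_state

# A state is a binary vector over cities 1..n; all ones is the initial state.
dist, demand = make_instance(n=5)
state = np.ones(5, dtype=int)
route = [0, 2, 4, 0]                      # serve cities 2 and 4, then return to the depot
print(route_cost(dist, route), is_feasible(demand, route, capacity=30), transition(state, route))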

We consider a deterministic policy $\pi$ specifying the action $\pi(z)$ to be taken from state $z$, and we let $\Pi$ designate the set of all such policies. The value $V^\pi(z)$ of a state $z$ is the cost incurred by applying $\pi$ repeatedly starting from state $z$, i.e., $V^\pi(z) = \sum_t C(z_t, \pi(z_t))$ with $z_0 = z$ and $z_{t+1} = T(z_t, \pi(z_t))$ until the terminal state is reached. An optimal policy $\pi^*$ satisfies $V^{\pi^*}(z) = \min_{\pi \in \Pi} V^\pi(z)$ for every state $z$.

Given a starter policy $\pi_0$, we repeatedly improve it using a simple policy iteration scheme. In the $k$-th policy evaluation step, we repeatedly apply the current policy $\pi_k$ from randomly selected start states. For each random start state, we obtain a sample path, i.e., a finite sequence of action-state pairs in which each state follows from its predecessor under $\pi_k$ and the final state is terminal, as well as the cumulative cost incurred from each state in the sample path. In the policy improvement step, we use this data to train a small neural network $\hat V_k$ to approximate the value function $V^{\pi_k}$. This yields a new policy $\pi_{k+1}$ defined using the Bellman rule:

    \pi_{k+1}(z) \in \arg\min_{a \in \mathcal{A}(z)} \; C(z, a) + \hat V_k(T(z, a)).    (1)
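To illustrate how Eq. (1) drives data collection, here is a hedged Python sketch of a rollout routine that applies the greedy policy induced by a generic action selection oracle and records the cost-to-go of every visited state (function names are ours; see also Algorithm 1 in the appendix).

def rollout(state, select_action, transition, cost):
    # Apply the greedy policy induced by `select_action` until the terminal state,
    # returning each visited state together with its cost-to-go (cumulative cost).
    path, costs = [], []
    while state.any():                      # terminal state: all cities visited
        action = select_action(state)       # solves Eq. (1), e.g. via the MIP of Section 3.3
        path.append(state.copy())
        costs.append(cost(state, action))
        state = transition(state, action)
    # Cost-to-go of each visited state is the suffix sum of immediate costs.
    cost_to_go, acc = [], 0.0
    for c in reversed(costs):
        acc += c
        cost_to_go.append(acc)
    return path, list(reversed(cost_to_go))

The resulting pairs of visited states and cost-to-go values are exactly the training data used in the policy improvement step.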

3.2 Value function learning

The approximate policy iteration approach described above is both on-policy and model-based (in fact, the state transition model is deterministic and we have perfect knowledge of it): in each iteration, our goal is to find a good approximation of the value function of the current policy $\pi_k$. The value function of a state $z$ is the cost of visiting all remaining cities in $z$, i.e., the cost of solving a smaller CVRP instance over the unvisited cities in $z$ using policy $\pi_k$.

In our approximate dynamic programming approach, the value function captures much of the combinatorial difficulty of the vehicle routing problem, so we model $\hat V$ as a small neural network with a fully-connected hidden layer and rectified linear unit (ReLU) activations. We train this neural network to minimize the mean-squared error (MSE) on the cumulative cost data gathered at the policy evaluation step. To limit overfitting, we randomly remove some data (between 10% and 20%) from the training set and use it to evaluate the out-of-sample MSE.

We select initial states by taking a single random action from $z^{\mathrm{start}}$, obtained by selecting the next city uniformly at random (without replacement) until we select a city that we cannot add without exceeding the vehicle capacity. If this procedure does not allow for enough exploration, we may gather data on too few states and overfit the value function. We therefore adapt a technique from approximate policy iteration Lagoudakis2003 ; Scherrer2014, in which we retain data from one iteration to the next, and exponentially decay the importance of data gathered in previous iterations bertsekas1996neuro. Calling $(z^{k,i}, v^{k,i})$ the $i$-th data point (out of $N_k$) from iteration $k$, the training objective in iteration $K$ becomes $\sum_{k=0}^{K} \sum_{i=1}^{N_k} \gamma^{K-k} (\hat V(z^{k,i}) - v^{k,i})^2$, where $\gamma \le 1$ denotes the retention factor.
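The training step itself is standard supervised learning. As a concrete illustration (our own numpy sketch, not the paper's implementation; the learning rate shown is an assumed value, and the optional LASSO penalty follows the appendix), the following code trains a one-hidden-layer ReLU value network by batch SGD on decay-weighted squared errors.

import numpy as np

def train_value_net(states, costs, weights, hidden=16, lr=1e-3, epochs=500, batch=10, l1=0.1, seed=0):
    # states: (N, n) binary arrays; costs: (N,) cumulative costs;
    # weights: (N,) decay factors gamma**(K - k) for data gathered in iteration k.
    rng = np.random.default_rng(seed)
    n = states.shape[1]
    W1 = rng.normal(scale=0.1, size=(hidden, n)); b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.1, size=hidden);      b2 = 0.0
    N = len(costs)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(N), max(1, N // batch)):
            Z, v, w = states[idx], costs[idx], weights[idx]
            pre = Z @ W1.T + b1                 # pre-activations, shape (B, hidden)
            H = np.maximum(pre, 0.0)            # ReLU
            pred = H @ w2 + b2
            err = w * (pred - v)                # weighted residuals
            B = len(idx)
            # Gradients of (1/B) * sum_i w_i (pred_i - v_i)^2, plus an L1 subgradient.
            g_w2 = 2 * H.T @ err / B + l1 * np.sign(w2)
            g_b2 = 2 * err.sum() / B
            g_pre = 2 * np.outer(err, w2) * (pre > 0) / B
            g_W1 = g_pre.T @ Z + l1 * np.sign(W1)
            g_b1 = g_pre.sum(axis=0)
            W1 -= lr * g_W1; b1 -= lr * g_b1; w2 -= lr * g_w2; b2 -= lr * g_b2
    return W1, b1, w2, b2

def value(params, z):
    W1, b1, w2, b2 = params
    return float(np.maximum(W1 @ z + b1, 0.0) @ w2 + b2)

Keeping this training loop separate from the MIP-based evaluation loop is what allows the two to run on different hardware, as discussed in Section 3.3.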

3.3 Policy evaluation with mixed-integer optimization

The key innovation in our approach comes at the policy evaluation step, where we repeatedly apply policy $\pi_k$ to random start states until we reach the terminal state. Applying policy $\pi_k$ to state $z$ involves solving the optimization problem in Eq. (1), corresponding to finding a capacity-feasible route minimizing the sum of the immediate cost (length of the route) and the cost-to-go (value function of the new state). This is a combinatorial optimization problem (a PC-TSP) that cannot be solved through enumeration but that we can model as a mixed-integer program. Without loss of generality, we consider problem (1) for a state $z$ such that $z_i = 1$ for $i \le m$ and $z_i = 0$ for $i > m$, i.e., such that only the first $m$ cities remain unvisited. We then obtain the following equivalent formulation for problem (1):

    \min_{x, y, z'}  \sum_{i=0}^{m} \sum_{j=0}^{m} \Delta_{ij} y_{ij} + \hat V(z')                                      (2a)
    \text{s.t.}      \sum_{j \ne i} y_{ij} = x_i              \quad \forall i \in \{0, \ldots, m\}                      (2b)
                     \sum_{j \ne i} y_{ji} = x_i              \quad \forall i \in \{0, \ldots, m\}                      (2c)
                     \sum_{i=1}^{m} d_i x_i \le Q                                                                       (2d)
                     \sum_{i \in S} \sum_{j \notin S} y_{ij} \ge x_k   \quad \forall S \subseteq \{1, \ldots, m\},\; k \in S   (2e)
                     \sum_{i \notin S} \sum_{j \in S} y_{ij} \ge x_k   \quad \forall S \subseteq \{1, \ldots, m\},\; k \in S   (2f)
                     x_0 = 1                                                                                            (2g)
                     z'_i = 1 - x_i                           \quad \forall i \in \{1, \ldots, m\}                      (2h)
                     z'_i = 0                                 \quad \forall i \in \{m+1, \ldots, n\}                    (2i)
                     x_i \in \{0, 1\},\; y_{ij} \in \{0, 1\}  \quad \forall i, j \in \{0, \ldots, m\}                   (2j)
                     z'_i \in \{0, 1\}                        \quad \forall i \in \{1, \ldots, n\}                      (2k)

The binary decision variable $x_i$ is 1 if city $i$ is included in the route, and 0 otherwise, and the binary decision variable $y_{ij}$ is 1 if city $i$ immediately precedes city $j$ in the route (we constrain $x_0$ to be 1 because the depot must be included in any route). The binary decision variable $z'_i$ is 1 if city $i$ remains unvisited in the new state and 0 otherwise. The objective (2a) contains the same two terms as (1), namely the total distance of the next route, plus the future cost associated with leaving some cities unvisited.

Constraint (2b) (resp. (2c)) ensures that if city $i$ is included in the route, it must be immediately followed (resp. preceded) by exactly one other city (flow conservation constraints). Constraint (2d) imposes that the total demand among selected cities does not exceed the vehicle's capacity $Q$. Constraints (2e) and (2f) ensure that the route consists of a single tour starting and ending at the depot; they are called cycle-breaking or cutset constraints and are a standard feature of MIP formulations of routing problems. Finally, constraints (2h) relate the action selection variable $x_i$ for each city to the corresponding new-state variable $z'_i$.
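To make the formulation concrete, here is a hedged Python sketch of the action selection MIP using the gurobipy API (the paper's implementation is in C++ with SCIP; all function names are ours, the learned cost-to-go is replaced by a user-supplied linear coefficient per city as a stand-in, and the cutset constraints (2e)-(2f) are added lazily in a callback, one standard way to handle them).

import gurobipy as gp
from gurobipy import GRB

def select_route(dist, demand, capacity, remaining, cost_to_go_coeff):
    # One action selection MIP: a capacity-feasible route minimizing route length
    # plus a linear stand-in for the cost-to-go of the new state.
    # `remaining` lists the indices of unvisited cities (the depot 0 is excluded).
    nodes = [0] + list(remaining)
    m = gp.Model("pc-tsp")
    x = m.addVars(nodes, vtype=GRB.BINARY, name="x")          # city included in the route
    y = m.addVars(nodes, nodes, vtype=GRB.BINARY, name="y")   # city i immediately precedes j
    z = m.addVars(remaining, vtype=GRB.BINARY, name="z")      # city still unvisited afterwards

    m.addConstr(x[0] == 1)                                    # depot in every route
    for i in nodes:
        m.addConstr(gp.quicksum(y[i, j] for j in nodes if j != i) == x[i])   # (2b)
        m.addConstr(gp.quicksum(y[j, i] for j in nodes if j != i) == x[i])   # (2c)
        m.addConstr(y[i, i] == 0)
    m.addConstr(gp.quicksum(demand[i] * x[i] for i in remaining) <= capacity)  # (2d)
    for i in remaining:
        m.addConstr(z[i] == 1 - x[i])                         # (2h)

    m.setObjective(
        gp.quicksum(dist[i][j] * y[i, j] for i in nodes for j in nodes)
        + gp.quicksum(cost_to_go_coeff[i] * z[i] for i in remaining),
        GRB.MINIMIZE)

    def subtour_cut(model, where):                            # lazy cutset constraints (2e)
        if where != GRB.Callback.MIPSOL:
            return
        yv = model.cbGetSolution(y)
        reach, frontier = {0}, [0]                            # cities reachable from the depot
        while frontier:
            i = frontier.pop()
            for j in nodes:
                if j not in reach and yv.get((i, j), 0) > 0.5:
                    reach.add(j); frontier.append(j)
        for k in remaining:
            if k not in reach and model.cbGetSolution(x[k]) > 0.5:
                model.cbLazy(gp.quicksum(y[i, j] for i in nodes if i not in reach
                                         for j in nodes if j in reach) >= x[k])
                break

    m.Params.LazyConstraints = 1
    m.optimize(subtour_cut)
    # Return the set of cities in the chosen route (order recoverable from y) and its objective.
    return [i for i in remaining if x[i].X > 0.5], m.ObjVal

A toy call could pass cost_to_go_coeff[i] = 2 * dist[0][i] as a crude surrogate for the learned network; the ReLU network itself can be embedded in place of this linear stand-in using the construction of Appendix A (see the sketch after formulation (3)).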

We make two further comments regarding problem (2). First, we note that the formulation includes an exponential number of constraints of type (2e) and (2f). This issue is typically addressed via "lazy constraints," a standard feature of mixed-integer programming solvers such as Gurobi or SCIP, in which constraints are generated only as needed. In practice, the problem can be solved to optimality without adding many such constraints. Second, as currently formulated, problem (2) is not directly a mixed-integer linear program, because $\hat V$ is a nonlinear function. However, because we choose ReLU-activated hidden layers, $\hat V$ is a piecewise linear function which can be represented using additional integer and continuous variables and linear constraints say2017nonlinear ; anderson2020strong ; ryu2019caql.

In the worst case, solving problem (2) requires time exponential in the number of cities $n$. But modern MIP solvers such as Gurobi and SCIP use techniques including branch-and-bound, primal and dual heuristics, and cutting planes to produce high-quality solutions in reasonable time scip ; gurobi. Given enough time, MIP solvers will not just return a solution but also certify optimality of the selected action.

A MIP approach for action selection was presented in ryu2019caql , but ours differs in several key ways. First, we consider a discrete action space instead of a continuous one. Second, we estimate the cost-to-go directly from cumulative costs rather than using a Q-learning approach because (i) state transitions in our problem are linear in our action and (ii) computing cumulative costs allows us to avoid solving the combinatorial action selection problem at training time, reducing overall computation. Separating training and evaluation may yield additional computational benefits, since the training loop could benefit from a gradient-descent-friendly architecture and hardware such as a GPU/TPU, while the evaluation loop relies on CPU-bound optimization solvers. Separating the tasks allows for better parallelization and hardware specification.

To further take advantage of the combinatorial structure of the problem, we augment the objective (2a) with known lower bounds. The cost-to-go of a state cannot be lower than the distance required to serve the farthest remaining city from the depot and return, nor lower than the summed lengths of each remaining city's shortest incoming edge. Given such lower bounds $\mathcal{L}(z')$, we replace $\hat V(z')$ in the objective with $\max(\hat V(z'), \mathcal{L}(z'))$. As long as the lower bounds are linear in $z'$ (as are the ones we mention), the addition does not significantly increase the solve time for problem (2).
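One standard way to incorporate such a maximum into a minimization objective (our reading of the construction; the paper does not spell it out) is through an epigraph variable $t$:

    \min_{x, y, z', t} \; \sum_{i,j} \Delta_{ij} y_{ij} + t
    \quad \text{s.t.} \quad t \ge \hat V(z'), \qquad t \ge \mathcal{L}(z'),

so that at an optimal solution $t = \max(\hat V(z'), \mathcal{L}(z'))$ for the chosen new state $z'$, while both right-hand sides remain representable with linear constraints.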

4 Computational results

We now present results on benchmark instances from the operations research literature and on random instances from nazari2018reinforcement . We compare our results to several other approaches, and analyze their sensitivity to input parameters. Our implementation is in C++ and we use SCIP 6.0.2 as a MIP solver.

4.1 Comparison with existing methods and runtime analysis

In order to compare the results of our approach to existing RL algorithms, we consider the random instances described in nazari2018reinforcement, in which cities are sampled uniformly at random from the unit square, distances are Euclidean, and demands are sampled uniformly at random from $\{1, \ldots, 9\}$. We present results on instances with 11, 21 and 51 cities in Table 1.

We note that our solutions compare favorably with both simple heuristics and existing RL approaches, and nearly match the performance of state-of-the-art operations research methods. We must, however, qualify the comparison with a caveat: existing RL approaches for CVRP nazari2018reinforcement ; Kool2019 consider a distributional setting where learning is performed over many instances and evaluated out of sample, whereas we consider a single-instance setting. Though these approaches are different, they nevertheless provide a useful comparison point for our method.

One advantage of our method is that it can be used on instances without distributional information, which may be closer to a real-world setting. In Figure 1, we evaluate our performance on standard library instances from CVRPLIB uchoa2017new ; augerat1995computational , comparing our results with those of OR-Tools. Over 50 instances, the average final gap against OR-Tools is just 1.7%.

Method                                   n = 11         n = 21         n = 51
Greedy                                   4.90 ± 0.03    7.16 ± 0.03    13.55 ± 0.04
Nazari et al. nazari2018reinforcement    4.68 ± 0.03    6.40 ± 0.03    11.15 ± 0.04
Kool et al. Kool2019                     -              6.25           10.62
RLCA-16                                  4.55 ± 0.03    6.16 ± 0.03    10.65 ± 0.04
OR-Tools or-tools                        4.55 ± 0.03    6.13 ± 0.03    10.47 ± 0.04
Optimal                                  4.55 ± 0.03    6.13 ± 0.03    -
Table 1: Comparison of results with existing approaches: a "greedy" approach in which each action is selected to minimize the immediate distance traveled plus a trivial upper bound on the cost-to-go; results from Nazari et al. nazari2018reinforcement and Kool et al. Kool2019; our own approach (RLCA) with 16 neurons; the OR-Tools routing solver with 60s of guided local search (300s for the 51-city instances); and an optimal approach using the OR-Tools CP-SAT constraint programming solver (which does not solve to optimality within 6 hours for 51 cities). We report the mean total distance and the standard error on the mean, over 1000 random instances for each instance size.
Figure 1: Results of our approach on 50 instances from the CVRP library (A and B) uchoa2017new , ranging in size from 32 to 78 cities. We perform 100 policy iterations on each instance.
(a) n = 21
Hidden nodes    SCIP runtime (s)    Gurobi runtime (s)
0 (LR)          0.059               0.0085
4               0.84                0.16
16              3.1                 0.38

(b) n = 51
Hidden nodes    SCIP runtime (s)    Gurobi runtime (s)
0 (LR)          3.19                0.097
4               132.5               11.9
16              234.7               39.3

Table 2: Average time in seconds to solve a single action selection problem with the MIP solvers Gurobi 8.1 and SCIP 6.0.2, averaged over 50 random instances with (a) 21 cities and (b) 51 cities. Instances are from the first policy iteration after taking a single random action, trained using 5000 path evaluations.

It is of interest to study our method's runtime. At training time, our two main operations are evaluating many sample paths with the current policy and re-training the value function estimator (at evaluation time, we just evaluate one sample path). With up to 16 hidden ReLUs, neural net training is fast, so the bottleneck is solving (2). We cannot provide precise runtimes because our experiments were executed in parallel on a large cluster of machines (which may suffer from high variability), but we present average MIP runtimes for different architectures and solvers in Table 2 and, based on these values, describe how to estimate overall runtimes. The runtime of a policy iteration is roughly (sample paths per iteration) x (MIPs per path) x (average MIP solve time). For 21-city instances (16 hidden nodes), SCIP solves the average MIP in 3.1s (Gurobi in 0.38s), and we almost never exceed 10 MIPs per path, so computing a policy iteration with 250 sample paths takes about 2h using SCIP (15min using Gurobi). We can reduce runtime with parallelism: with as many machines as sample paths, the SCIP running time becomes about 30s per iteration (plus parallel pipeline overhead). For 51-city instances, SCIP is slower (234.7s per MIP), and a policy iteration may take up to an hour even in parallel. In contrast, Nazari et al.'s nazari2018reinforcement runtime bottleneck is neural net training (hours), but evaluation is much faster (seconds).

4.2 Ablation studies and sensitivity analysis

As a success metric, we evaluate the quality of a solution with cost $C$ using the gap $(C - C^*)/C^*$ relative to the provably optimal solution of cost $C^*$ obtained by CP-SAT on the same instance.

(a) Gap for varying amounts of data and retention factors after 30 policy iterations (n = 11).
(b) Gap for varying amounts of data and retention factors after 30 policy iterations (n = 21).
(c) Gap as a function of total evaluated sample paths, for a fixed retention factor (n = 11).
(d) Gap as a function of total evaluated sample paths, for a fixed retention factor (n = 21).
Figure 2: Effect of the amount of data on performance. Results averaged over 50 random Euclidean instances with 11 or 21 cities, error bars indicate standard errors (SEM).

Data requirements.

At each policy evaluation step, we evaluate the current policy from randomly selected start states to obtain sample paths. By evaluating more sample paths, we can use more data to update the policy, but we also require more computing resources. In Figures 2(a) and 2(b), we show the solution quality obtained after 30 policy iterations as we vary the number of sample paths evaluated per iteration and the retention factor. For a fixed number of iterations, results improve with more data, with diminishing returns. Keeping data between policy iterations significantly improves the solution quality, especially with few evaluations per iteration and little to no decay.

Even with just 25 evaluations per iteration, keeping data from one iteration to the next leads to solutions with a very small gap. In Figures 2(c) and 2(d), we show the solution quality when keeping data between iterations as a function of the cumulative number of evaluations and the number of evaluations per iteration. We obtain the best results with 250 iterations and 10 evaluations per iteration, suggesting that rapid iterations may be preferable on small instances.

(a) Gap after 50 policy iterations (n = 11).
(b) Gap after 50 policy iterations (n = 21).
Figure 3: Effect of training parameters (lasso weight and learning rate) on performance. Results averaged over 50 random Euclidean instances (11 or 21 cities), error bars indicate standard errors (SEM).

Architecture, training and regularization.

In Figure 3, we consider a neural network with 16 hidden ReLU nodes and show the effect of the batch SGD learning rate and the LASSO regularization parameter $\lambda$ (given all neural network weights as a vector $w$, we penalize the training objective with the LASSO term $\lambda \|w\|_1$). We notice that mild regularization does not adversely impact performance: this is particularly important in our setting, because a sparser neural network makes the MIP formulation of the action selection problem easier to solve. In Figure 4, we analyze the effect of increasing the neural network size on solver performance. We note that even 4 neurons lead to a significant improvement over a linear model; on small instances, a 32-neuron network leads to optimal performance.

(a) Gap after 50 policy iterations (n = 11).
(b) Gap after 250 policy iterations (n = 21).
Figure 4: Effect of neural network size on performance (0 hidden nodes corresponds to linear regression). Results averaged over 50 random Euclidean instances, error bars show standard errors (SEM).

Combinatorial lower bounds.

As mentioned above, we augment the neural network modeling the value function with known combinatorial lower bounds. We can include them in the action selection problem, but also during training, by replacing $\hat V(z)$ with $\max(\hat V(z), \mathcal{L}(z))$ in the training objective. Figure 5 shows that including these lower bounds provides a small but nonetheless significant improvement to the convergence of our policy iteration scheme.

Figure 5: Effect of including lower bounds in our RL framework. Results averaged over 50 instances.

5 Conclusion and Discussion

We have presented a novel RL framework for the Capacitated Vehicle Routing Problem (CVRP). Our approach is competitive with existing RL approaches, and even with problem-specific approaches on small instances. A combinatorial action space allows us to leverage the structure of the problem to develop a method that combines the best of reinforcement learning and operations research.

A big difference between this method and other RL approaches nazari2018reinforcement ; Kool2019 is that we consider a single instance at a time, instead of learning a single model for multiple instances drawn from a similar distribution. Single-instance learning is useful because the distribution of real-world problems is often unknown and can be hard to estimate because of small sample sizes. However, a natural extension of this work is to consider the multi-instance problem, in which learning is performed offline, and a new instance can be solved via a single sequence of PC-TSPs.

Another possible extension of this work is the consideration of other combinatorial optimization problems. A simple next step would be to impose an upper bound on the number of routes allowed, by augmenting the state space to keep track of the number of past routes and penalizing states where this number is too high. Beyond this simple example lies a rich landscape of increasingly complex vehicle routing problems toth2014vehicle. The success of local search methods in tackling these problems suggests that an orthogonal reinforcement learning approach, in which the action space is a set of cost-improving local moves, could also be successful.

Beyond vehicle routing, we believe our approach can be applied to approximate dynamic programming problems with a combinatorial action space and a neural network approximating a cost-to-go or Q function. Even the simpler case of continuous medium-dimensional action spaces is a challenge. In ryu2019caql, a similar approach to ours was competitive with state-of-the-art methods on the DeepMind Control Suite tassa2018deepmind.

Broader Impact

This paper presents methodological work and does not have foreseeable direct societal implications.

We would like to thank Vincent Furnon for his help in selecting OR-Tools parameters to significantly improve baseline performance.

References

Appendix A Algorithms, hyperparameters, and code

In this section, we recap key hyperparameters in our approach, as well as their default values.

  1. Overall parameters:

    • Number of policy iterations: we vary this in the paper, but our default value is 250.

    • Data retention factor: our default value is 1 (data from previous iterations is not decayed in the mean-squared error training objective).

  2. Learning parameters:

    • Neural network architecture: we use a fully-connected neural network, with an input layer of size $n$ (one binary input per city), a single hidden layer with 16 ReLU-activated nodes, and a single linear-activated output node.

    • Learning rate and batch size: we optimize the neural network parameters using standard batch stochastic gradient descent, with a fixed learning rate and a fixed batch size (default value 10).

    • Epochs: we typically pass through the entire dataset 500 times.

    • LASSO regularization: given all neural network weights as a vector $w$, we penalize the training objective with the LASSO term $\lambda \|w\|_1$, where the default value of $\lambda$ is 0.1. We note that when turning off LASSO regularization (not recommended), we can increase the learning rate and decrease the number of epochs to 300, affording almost a 2x training speedup. For models with fewer than 16 neurons, we do not use any regularization (i.e., $\lambda = 0$).

  3. Evaluation parameters:

    • Number of sample paths to evaluate: we select a number of random start states to initialize sample path computation. The more start states, the more data we obtain to train the cost-to-go estimator. We typically keep this number very small, since our ablation studies suggest there is more to gain from many policy iterations than from many data points.

    • Zeroing threshold: when solving the action selection problem, we integrate the neural network modeling the cost-to-go into a MIP formulation. MIP solvers can run into numerical issues in the presence of very small coefficients, so we round every coefficient with absolute value below a small threshold to zero (see the sketch after this list). We have observed that this has no measurable effect on solution quality and greatly reduces the chance of the solver failing.

    • Time limit: we impose a time limit of 60 seconds on the MIP solver when solving the action selection problem. The solver will then return the best solution found so far (but not necessarily the optimal one). The time limit is rarely reached. On the large instances (the random instances with 51 cities and CVRPLIB instances), we use a time limit of 600 seconds.

    • Lower bounds: at the evaluation step, we can include simple combinatorial lower bounds. The default setting is to include them. We describe these lower bounds in more detail in a later section of this appendix.

    • Cuts: a key way to improve MIP runtime and solution quality is the addition of “cuts”, inequalities that sharpen the solver’s ability to prune the search tree using linear programming (LP) bounds. In the default configuration, we add cuts to sharpen the formulation of the neural network argmin problem following the approach detailed in [14]. We also follow a lazy constraint scheme for constraints (2f) and (2e) in which we add violated constraints at every node in the branch-and-bound tree rather than only at integer solutions. Finally, we use a linear programming preprocessing routine to update bounds.
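For reference, the defaults listed above can be collected in a small configuration object. The sketch below is our own summary (names are ours, and any value not stated in the paper is marked as an assumption), together with the coefficient-zeroing step applied to the trained network weights before building the MIP.

import numpy as np

# Hypothetical summary of the default settings listed above (not the paper's code).
DEFAULTS = {
    "policy_iterations": 250,
    "retention_factor": 1.0,
    "hidden_nodes": 16,
    "batch_size": 10,
    "epochs": 500,
    "lasso_weight": 0.1,
    "mip_time_limit_s": 60,       # 600 s on the larger instances
    "zeroing_threshold": 1e-5,    # assumed value; the paper's exact threshold is not reproduced here
}

def zero_small_coefficients(weights, threshold):
    # Round near-zero network coefficients to exactly zero before building the MIP,
    # which avoids numerical issues in the solver.
    w = np.asarray(weights, dtype=float).copy()
    w[np.abs(w) < threshold] = 0.0
    return w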

The parameters above describe a setting in which each policy iteration is quick and we continuously improve the policy with a small amount of data. If parallel computing infrastructure is available, we consider a “high parallelism” mode, in which we increase the number of sample paths to 200 per iteration over 100 policy iterations.

The experiments on CVRPLIB and the 51-city random instances were run in high-parallelism mode, while the ablations on 11 and 21 cities were run with the default settings (plus the changes listed in each ablation).

Algorithm overview.

Algorithm 1 presents an overview of our policy iteration scheme in algorithm block form. We note that this is a simplified description of our method, which does not include all refinements, e.g., the mechanism to conserve data from iteration to iteration.

1:  function SolveCVRP(d, Δ, Q, K)
2:      π_0 ← initial policy
3:      for k = 0 to K - 1 do
4:          D ← ∅                                                  ▷ Initialize empty dataset
5:          for s = 1 to S do
6:              Select a random state z^s
7:              D ← D ∪ EvaluatePolicyFromState(π_k, z^s, false)   ▷ Add new data
8:          Use dataset D to train a new value function approximation and update the policy to π_{k+1}
9:      Define z^start as the state where all cities are unvisited
10:     return EvaluatePolicyFromState(π_K, z^start, true)
11: function EvaluatePolicyFromState(π, z, final)
12:     P ← (), c ← 0
13:     while z is not terminal do
14:         a ← π(z)                                               ▷ Solve the action selection problem (1)
15:         Append (z, a, C(z, a)) to P and set c ← c + C(z, a)
16:         z ← T(z, a)
17:     if final then
18:         return the list of selected actions (routes) in P and the total incurred cost c
19:     else
20:         return all visited states in P and the total cost incurred from each one
Algorithm 1: High-level overview of our policy iteration scheme. The inputs are simply the problem parameters, namely demands d, distances Δ, capacity Q, and the number of policy iterations K.

Our code is available as part of the tf.opt repository: https://github.com/google-research/tf-opt.

Optimizing over a trained neural network.

The second term of the objective (2a) is a nonlinear function $\hat V$ (a fully-connected one-layer neural network with ReLU activations), and is nontrivial to model using mixed-integer linear programming. We use a technique developed by Anderson et al. [14] to model $\hat V$. For clarity, assume that $\hat V$ has a single hidden layer, with $H$ hidden nodes, each with a ReLU activation.

Let $w^k$ designate the vector of weights, and $b^k$ the bias term, for the $k$-th hidden node. Define $c$ and $c_0$ analogously for the output layer. For any vector of weights $w$, let $\mathrm{supp}(w)$ indicate the set of indices $i$ such that $w_i \ne 0$. Finally, define a modified version of the sign function, where $\mathrm{sgn}^+(\alpha)$ returns 1 if $\alpha \ge 0$, and 0 otherwise. Then we can write the problem as

    \min_{x, y, z', u, a}  \sum_{i=0}^{m} \sum_{j=0}^{m} \Delta_{ij} y_{ij} + \sum_{k=1}^{H} c_k u_k + c_0                         (3a)
    \text{s.t.}            u_k \ge w^k \cdot z' + b^k                                   \quad \forall k                             (3b)
                           u_k \le w^k \cdot z' + b^k - M_k^-(1 - a_k)                  \quad \forall k                             (3c)
                           u_k \le M_k^+ a_k                                            \quad \forall k                             (3d)
                           u_k \ge 0                                                    \quad \forall k                             (3e)
                           u_k \le \sum_{i \in I} w_i^k \bigl( z'_i - (1 - \mathrm{sgn}^+(w_i^k))(1 - a_k) \bigr)
                                   + a_k \Bigl( b^k + \sum_{i \in \mathrm{supp}(w^k) \setminus I} w_i^k \, \mathrm{sgn}^+(w_i^k) \Bigr)
                                                                                        \quad \forall k,\; \forall I \subseteq \mathrm{supp}(w^k)   (3f)
                           a_k \in \{0, 1\}                                             \quad \forall k                             (3g)
                           (x, y, z') \in Z                                                                                          (3h)
                           z' \in \{0, 1\}^n                                                                                         (3i)

Notice that in addition to the initial binary decision variables $x$, $y$, and $z'$, we define two additional decision variables for each hidden node in the neural network. One is continuous ($u_k$) and models the output of the hidden node; the other is binary ($a_k$) and indicates whether the pre-activation function is positive or negative (i.e., whether the ReLU is active or not). This relationship is enforced by the "big-$M$" constraints (3c) and (3d). In principle, any large enough values of $M_k^+$ and $M_k^-$ can enforce this relationship, but values that are too large weaken the formulation and can decrease tractability. We therefore compute these big-$M$ values for each hidden node by maximizing (or minimizing, depending on whether we are computing $M_k^+$ or $M_k^-$) the pre-activation function over the LP relaxation of the neuron's input domain (which includes all the problem specifications, including the PC-TSP and knapsack constraints). This operation can be performed iteratively, neuron by neuron, to compute suitable values for $M_k^+$ and $M_k^-$.

The set $Z$ encapsulates any other constraints on the binary variables (the input domain of the neural network $\hat V$), in particular the ones associated with the prize-collecting traveling salesman formulation (2).

We note that this formulation is not polynomial in size, as there are exponentially many constraints of type (3f). However, these constraints are not needed for correctness; they simply strengthen the formulation. When solving the action selection problem, we generate these constraints on the fly, using a linear-time separation oracle as described in [14].
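As a minimal illustration of the embedding, the following gurobipy sketch adds only the big-M portion (constraints (3b)-(3e)) for a trained one-hidden-layer ReLU network; it is our own simplification, with big-M bounds computed by interval arithmetic over the binary box rather than the LP-based tightening described above, and without the strengthening constraints (3f). All function and parameter names are ours.

import numpy as np
import gurobipy as gp
from gurobipy import GRB

def embed_relu_value_net(model, z, W1, b1, w2, b2):
    # Add variables and constraints so that the returned expression equals
    # w2 . relu(W1 @ z + b1) + b2 for binary input variables z (dict keyed 0..n-1).
    n = len(z)
    hidden = len(b1)
    hs = []
    for k in range(hidden):
        pre = gp.quicksum(W1[k, i] * z[i] for i in range(n)) + b1[k]
        m_plus = b1[k] + np.clip(W1[k], 0, None).sum()    # max pre-activation over [0,1]^n
        m_minus = b1[k] + np.clip(W1[k], None, 0).sum()   # min pre-activation over [0,1]^n
        u = model.addVar(lb=0.0, ub=max(m_plus, 0.0), name=f"u_{k}")
        a = model.addVar(vtype=GRB.BINARY, name=f"a_{k}")
        model.addConstr(u >= pre)                          # (3b)
        model.addConstr(u <= pre - m_minus * (1 - a))      # (3c)
        model.addConstr(u <= m_plus * a)                   # (3d)
        hs.append(u)
    return gp.quicksum(w2[k] * hs[k] for k in range(hidden)) + b2

The returned linear expression can then be placed in the objective of the action selection model sketched in Section 3.3, in place of the linear stand-in used there (after aligning the indexing of the new-state variables).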

Local search warm start.

In order to speed up the MIP solve in the action selection problem (2), we provide the solver with an initial primal-feasible solution, obtained via a simple local search heuristic. This warm start ensures we always obtain a feasible solution and allows the solver to spend more time in the branch-and-bound tree. As a result, we can set a lower solver time limit, and increase the number of policy iterations performed in a fixed amount of time.

Our local search heuristic (1-OPT with random start) can be described as follows. We first create a random feasible route by randomly sampling unvisited cities until we hit the capacity constraint. We then consider every unvisited city and compute the cost of removing this city from the route (if it is already included) or inserting it into the route (if it is not). For each of the unvisited cities, evaluating the cost requires one call to a TSP solver (to evaluate the route distance) and one neural network evaluation (to evaluate the future cost of unvisited cities). Both are computationally tractable, and allow us to quickly perform many iterations, possibly with several random restarts.
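The following Python sketch illustrates this warm-start heuristic under simplifying assumptions: a nearest-neighbor tour length stands in for the TSP solver used in the paper, and the cost-to-go is supplied as a generic callable (all function names are ours).

import numpy as np

def nearest_neighbor_tour_length(dist, cities):
    # Cheap stand-in for a TSP solver: nearest-neighbor tour through `cities`,
    # starting and ending at the depot (node 0).
    if not cities:
        return 0.0
    todo, cur, length = set(cities), 0, 0.0
    while todo:
        nxt = min(todo, key=lambda j: dist[cur][j])
        length += dist[cur][nxt]
        todo.remove(nxt)
        cur = nxt
    return length + dist[cur][0]

def one_opt_warm_start(dist, demand, capacity, remaining, value_fn, rng, n_iters=100):
    # 1-OPT with random start: toggle membership of one remaining city at a time,
    # keeping the move whenever it lowers route length + estimated cost-to-go.
    remaining = list(remaining)
    route = set()
    for c in rng.permutation(remaining):           # random feasible initial route
        if sum(demand[i] for i in route) + demand[c] <= capacity:
            route.add(int(c))
    def total_cost(r):
        left_out = [c for c in remaining if c not in r]
        return nearest_neighbor_tour_length(dist, sorted(r)) + value_fn(left_out)
    best = total_cost(route)
    for _ in range(n_iters):
        c = int(rng.choice(remaining))
        cand = set(route)
        cand.symmetric_difference_update({c})      # remove if present, insert otherwise
        if sum(demand[i] for i in cand) > capacity:
            continue
        cost = total_cost(cand)
        if cost < best:
            route, best = cand, cost
    return sorted(route), best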

Combinatorial lower bounds.

In the main text, we mention refining the cost-to-go with combinatorial lower bounds linear in the decision variables of problem (2). We now describe each bound in more detail; note that they each assume the distances respect the triangle inequality (metric vehicle routing). A numpy sketch of these bounds follows the list.

  1. Maximum out-and-back bound: given a state $z$, the cost-to-go cannot be exceeded by the distance to and from the furthest remaining city from the depot, i.e., $V(z) \ge \max_{i : z_i = 1} (\Delta_{0i} + \Delta_{i0})$. This bound is rather weak when many cities remain, but it is tight when a single city remains.

  2. Shortest-edges bound: given a state $z$, we must take at least one edge into and out of each remaining city, thus paying at least half the cost of the shortest edges into and out of each city, i.e., $V(z) \ge \frac{1}{2} \sum_{i : z_i = 1} \bigl( \min_{j \ne i} \Delta_{ji} + \min_{j \ne i} \Delta_{ij} \bigr)$.

  3. Refined shortest-edges bound: we can refine the bound above using demand information to incorporate a lower bound $k(z) = \lceil \sum_{i : z_i = 1} d_i / Q \rceil$ on the number of vehicles required (and thus on the minimum number of times the depot must be entered and left), i.e., the depot's cheapest incoming and outgoing edges must be paid for at least $k(z)$ times in the half-edge count above.
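As referenced above, here is a small numpy sketch of these bounds. It is our own rendering of the verbal descriptions (in particular, the exact form of the refined bound in the paper may differ); function names are ours.

import numpy as np

def combinatorial_lower_bounds(dist, demand, capacity, remaining):
    # Lower bounds on the cost-to-go of a state whose unvisited cities are `remaining`.
    dist = np.asarray(dist, dtype=float)
    if len(remaining) == 0:
        return 0.0
    rem = np.array(sorted(remaining))
    # 1. Maximum out-and-back bound.
    out_and_back = np.max(dist[0, rem] + dist[rem, 0])
    # 2. Shortest-edges bound: half the cheapest way to enter and leave each remaining city.
    others = lambda i: [j for j in [0, *rem] if j != i]
    shortest_edges = 0.5 * sum(dist[others(i), i].min() + dist[i, others(i)].min() for i in rem)
    # 3. Refined bound: the depot must be entered/left at least ceil(total demand / Q) times.
    k = int(np.ceil(sum(demand[i] for i in rem) / capacity))
    refined = shortest_edges + 0.5 * k * (dist[rem, 0].min() + dist[0, rem].min())
    return max(out_and_back, shortest_edges, refined)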

Appendix B CVRP instances

In this section, we describe the CVRP instances we use to evaluate our reinforcement learning framework.

Random Euclidean instances.

We follow the generation procedure of Nazari et al. (2018) to construct random instances. We sample locations uniformly at random in the unit square, then define the distance $\Delta_{ij}$ to be the Euclidean distance from city $i$ to city $j$ (symmetric). One of the cities is randomly selected to be the depot, and the demand for each remaining city is sampled uniformly from $\{1, \ldots, 9\}$. The vehicle capacity scales with the number of cities, with $Q = 20$ for the 11-city instances, $Q = 30$ for the 21-city instances, and $Q = 40$ for the 51-city instances.
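A hedged numpy sketch of this generation procedure follows (helper names are ours; the capacity values are those of Nazari et al. as stated above).

import numpy as np

def generate_instance(n_cities, seed=None):
    # Random CVRP instance in the style of Nazari et al.: locations in the unit
    # square, Euclidean distances, integer demands in {1, ..., 9}.
    capacity = {11: 20, 21: 30, 51: 40}[n_cities]
    rng = np.random.default_rng(seed)
    coords = rng.random((n_cities, 2))
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    depot = rng.integers(n_cities)                       # pick one city as the depot
    order = np.r_[depot, np.delete(np.arange(n_cities), depot)]
    coords, dist = coords[order], dist[np.ix_(order, order)]
    demand = np.concatenate(([0], rng.integers(1, 10, size=n_cities - 1)))
    return coords, dist, demand, capacity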

Standard library instances.

An issue with evaluating algorithms on uniform random Euclidean vehicle routing instances is that such instances are known to satisfy certain strong regularity properties that may not be manifested in real-world problems [41, 42]. In an effort to benchmark methods against more realistic instances, the CVRP library (CVRPLIB) is an online resource cataloging instances from the literature, either real problems or synthetic examples inspired by certain properties of real-world instances. We include 50 instances from CVRPLIB (the "A" and "B" instances, ranging in size from 32 to 78 cities) [38]. We note that in many cases, the optimal values for these problems are known and listed on the CVRPLIB website [27]. However, we do not use these optimal values because they consider a slightly different setting in which the number of vehicles is fixed, and thus they overestimate the true optimum in our unbounded-fleet setting.

Appendix C Baselines

In this section, we briefly describe the baselines against which we compare our results.

OR-Tools.

Google’s open-source OR-Tools library is an often-used benchmark for combinatorial optimization problems, and incorporates one of the best existing vehicle routing solvers

[39], which combines exact approaches with local search and other heuristics to provide high-quality solutions. On instances of moderate size such as the ones presented in this paper, OR-Tools configured to perform a few minutes of local search typically produces optimal or near-optimal solutions, motivating our use of OR-Tools as a reference point when an optimal approach does not scale. The results we report with OR-Tools are considerably better than those reported in previous reinforcement learning for vehicle routing papers [12, 25]. This reflects the fact that we configure OR-Tools for solution quality and not for speed, in contrast to other methods.

Optimal.

Vehicle routing problems can be solved to optimality up to a certain problem size using mixed-integer programming (MIP) or constraint programming (CP) approaches. In addition to a routing solver, the OR-Tools library also provides a constraint programming solver called CP-SAT. To avoid confusion and remain consistent with the literature on reinforcement learning for vehicle routing, we refer to the routing solver as “OR-Tools” and the CP-SAT solver as “CP-SAT”, even though both are technically part of OR-Tools.

We use CP-SAT to compute both feasible solutions and lower bounds. On small instances, the lower bounds coincide with the best feasible solution, yielding a certificate of optimality. When tractable, these provably optimal solutions are a reference point for our results, and we use the gap to the optimal solution as a key metric of success. Unfortunately, CP-SAT cannot prove optimality in a reasonable time on the 51-city instances. For larger instances, we therefore fall back on the near-optimal OR-Tools (routing) solver as a reference point. We note that as CP-SAT and the OR-Tools routing solver only work with integer distances, we scale all problem distances by a fixed power of ten and round to the nearest integer, so results are accurate to within the corresponding rounding precision.

Greedy.

One of the claims of this paper is that our method performs well on CVRP instances both because of the combinatorial structure of the action space and because of the machine learning model's ability to learn the complexity of the value function. A useful benchmark for our approach is therefore a method that features only the combinatorial action space, without the learning component. This approach involves solving problem (2), replacing the cost-to-go with the trivial upper bound $\bar V(z') = \sum_{i=1}^{n} z'_i (\Delta_{0i} + \Delta_{i0})$, representing the total distance of covering all remaining cities with one dedicated route per city. We refer to this baseline as a greedy method, because the bound overestimates the cost-to-go and thus compels the selected action to pack in as many cities as possible (particularly cities far from the depot).

Existing approaches.

Finally, we compare our approach with existing RL approaches for vehicle routing [12, 25]. This comparison is less direct than the two above, because both approaches consider a multi-instance learning setting, where the model is trained on a large sample of problem instances and evaluated on unseen instances from the same distribution. As a result, both papers [12, 25] report out-of-sample solution quality metrics, whereas we train and evaluate on one instance at a time. Both frameworks are valuable: on one hand, learning insights from multiple instances that can be generalized to a new problem can significantly improve solve times; on the other hand, real-world vehicle routing problems do not typically come with distributional information, rendering prior learning irrelevant. Though they are not directly comparable, we nevertheless display solution quality metrics for both of these methods along with ours, as evidence that our framework produces solutions in the same ballpark as other RL frameworks.

Appendix D Additional simulation results

In Section A of the appendix, we included a description of our hyperparameters, and in particular identified a “high parallelism” setting. We now provide some experimental justification for this setting in Figure 6. We see that increasing the number of sample paths per iteration can provide significant improvements on a per-iteration basis. Computing sample paths is a highly parallelizable task, so depending on the computing platform available, it may be more efficient to consider a small number of policy iterations, and compensate by computing many sample paths in parallel at each iteration.

Figure 6: Effect of the number of sample paths per iteration on the gap at each iteration. Results averaged over 50 random Euclidean instances, error bars show standard errors (SEM).