1 Introduction
A multiobjective optimization problem (MOP) can be defined as follows:

(1) $\min F(x) = (f_1(x), f_2(x), \dots, f_m(x)), \quad \text{subject to } x \in \Omega,$

where $\Omega$ is the decision space, $F: \Omega \to \mathbb{R}^m$ is composed of $m$ real-valued objective functions, $\mathbb{R}^m$ is called the objective space, and $f_i$ for $i = 1, \dots, m$ is the $i$th objective of the MOP. Since different objectives of a MOP usually conflict with each other, it is impossible to find one best solution that optimizes all objectives at the same time. Thus a trade-off is required among the different objectives.
Let $u, v \in \mathbb{R}^m$; $u$ is said to dominate $v$ if and only if $u_i \le v_i$ for every $i \in \{1, \dots, m\}$ and $u_j < v_j$ for at least one index $j \in \{1, \dots, m\}$. A solution $x^* \in \Omega$ is called a Pareto optimal solution if there is no solution $x \in \Omega$ such that $F(x)$ dominates $F(x^*)$ [22]. The set of all Pareto optimal solutions is named the Pareto set (PS), and the set $\{F(x) \mid x \in \text{PS}\}$ is called the Pareto front (PF) [22].
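In code, the dominance relation and the nondominated filtering that yields an approximate PF can be sketched as follows (a minimal reference implementation for minimization):

```python
import numpy as np

def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b (minimization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated(points):
    """Filter a set of objective vectors down to its nondominated subset."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # keep p only if no other point dominates it
        if not any(dominates(q, p) for j, q in enumerate(pts) if j != i):
            keep.append(i)
    return pts[keep]
```

This quadratic-time filter is sufficient for the modest front sizes considered in this paper.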
Many MOPs are NP-hard, such as the multiobjective travelling salesman problem (MOTSP) and the multiobjective vehicle routing problem. It is often difficult to find the PF of a MOP using exact algorithms. There are mainly two categories of optimization algorithms for solving MOPs. The first category is heuristics, such as NSGA-II [5] and MOEA/D [22]. The second category is learning-based heuristic methods [11]. Heuristics are often used to solve MOPs [18, 19, 3, 21], but they have several drawbacks. Firstly, it is time-consuming for heuristics to approximate the PF of a MOP. Secondly, once there is a slight change to the problem, the heuristic may need to be run again from scratch to compute new solutions [11]. As problem-specific methods, heuristics often need to be revised for different problems, even similar ones.

Recently, some researchers have begun to focus on deep reinforcement learning (DRL) for single-objective optimization problems [2, 6, 9, 10, 13]. Instead of designing specific heuristics, DRL learns heuristics directly from data with an end-to-end neural network. Taking the travelling salesman problem (TSP) as an example: given $n$ cities as input, the aim is to find a sequence of these cities with minimum tour length. DRL views the problem as a Markov decision process. TSP can then be formulated as follows: the state is defined by the features of the partial solution and the unvisited cities, the action is the selection of the next city, the reward is the negative tour length of a solution, and the policy is the heuristic that learns how to make decisions, parameterized by a neural network. The aim of DRL is to train a policy that maximizes the reward. Once a policy is trained, solutions can be generated directly by one feed-forward pass of the trained neural network. Since it does not repeatedly solve instances from the same distribution from scratch, DRL is more efficient and requires much less problem-specific expert knowledge than heuristics.

Inspired by MOEA/D and the recently proposed DRL methods, a deep reinforcement learning multiobjective optimization algorithm (DRLMOA) [11] was proposed to learn heuristics for solving MOPs. In DRLMOA, MOTSP is first decomposed into single-objective optimization subproblems. Then modified pointer networks, each similar to the pointer network in [17], are used to model these subproblems. Finally, these models are trained sequentially with the REINFORCE algorithm [20]. The experimental results on MOTSP in [11] show that DRLMOA achieves better performance than NSGA-II and MOEA/D.
MOTSP is defined on a graph, where every node contains not only its own features but also graph structure features such as its distances from the other nodes. In DRLMOA, the modified pointer network that models the subproblems of MOTSP does not consider these graph structure features. Therefore, this paper proposes a multiobjective deep reinforcement learning algorithm using decomposition and attention model (MODRL/DAM) to solve MOPs. The attention model can extract the node features as well as the graph structure features of MOP instances, which is helpful in making decisions. To show the effectiveness of our method, MODRL/DAM is compared with DRLMOA for solving MOTSP, and a significant improvement is observed in the overall performance of convergence and diversity.
The remainder of this paper is organized as follows. In Section 2, DRLMOA is described. MODRL/DAM is introduced in Section 3. Experimental results and analysis are presented in Section 4. Finally, conclusions are given in Section 5.
2 Brief Review of DRLMOA for MOTSP
2.1 Problem Formulation and Framework
We focus on MOTSP in this paper. Given $n$ cities and $m$ objective functions, the $j$th objective function of MOTSP is formulated as follows [12]:

(2) $f_j(\pi) = \sum_{i=1}^{n-1} c^j_{\pi(i), \pi(i+1)} + c^j_{\pi(n), \pi(1)}, \quad j = 1, \dots, m,$

where the route $\pi$ is a permutation of the $n$ cities and $c^j_{p,q}$ is the $j$th cost from city $p$ to city $q$. The goal of MOTSP is to find a set of routes that minimize the $m$ objective functions simultaneously.
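A minimal sketch of evaluating Eq. (2), where each objective is given by its own cost matrix:

```python
import numpy as np

def motsp_costs(route, cost_matrices):
    """Evaluate all m objectives of a cyclic route.
    cost_matrices: list of m (n x n) matrices, one per objective."""
    route = np.asarray(route)
    nxt = np.roll(route, -1)  # close the tour: last city back to first
    return np.array([C[route, nxt].sum() for C in cost_matrices])
```

The fancy-indexing expression `C[route, nxt]` gathers the per-edge costs of all consecutive city pairs, including the closing edge.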
Just like MOEA/D, DRLMOA decomposes MOTSP into $N$ scalar optimization subproblems by the well-known weighted sum approach, which considers a linear combination of the different objectives. Let $\lambda^i = (\lambda^i_1, \dots, \lambda^i_m)$, where $\lambda^i_j \ge 0$ and $\sum_{j=1}^{m} \lambda^i_j = 1$, be the weight vector corresponding to the $i$th scalar optimization subproblem of MOTSP, which is defined as follows:

(3) $\min g(\pi \mid \lambda^i) = \sum_{j=1}^{m} \lambda^i_j f_j(\pi).$

The optimal solution of the scalar optimization problem above is a Pareto optimal solution. Then, let $\{\lambda^1, \dots, \lambda^N\}$ be a set of $N$ weight vectors, where each weight vector corresponds to a scalar optimization subproblem. When $m = 2$, the weight vectors and corresponding subproblems are spread uniformly as in Fig. 1(a). The PS is made up of the nondominated solutions of all subproblems.
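For $m = 2$, generating the uniformly spread weight vectors and scalarizing an objective vector as in Eq. (3) can be sketched as:

```python
import numpy as np

def uniform_weights(n_sub):
    """n_sub uniformly spread weight vectors for a 2-objective problem."""
    w1 = np.linspace(0.0, 1.0, n_sub)
    return np.stack([w1, 1.0 - w1], axis=1)  # each row sums to 1

def scalarize(objs, w):
    """Weighted-sum value g(x | w) = sum_j w_j * f_j(x), Eq. (3)."""
    return float(np.dot(w, objs))
```

Each row of the returned matrix defines one scalar subproblem of the decomposition.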
After decomposing MOTSP into a set of scalar optimization subproblems, each subproblem can be modelled by a neural network and solved by DRL methods. However, training $N$ models from scratch requires a huge amount of time. Thus, to decrease the training time of the models, DRLMOA adopts a neighborhood-based transfer strategy, which is shown in Fig. 1(b). Each model corresponds to a subproblem. When one subproblem is solved, the parameters of the corresponding model are transferred to the model of the neighboring subproblem, so that the neighboring subproblem can be solved quickly. By making use of the neighborhood information among subproblems, all subproblems are tackled sequentially in a quick manner. The basic idea of DRLMOA is shown in Algorithm 1. The subproblems are solved sequentially and the models are trained with the REINFORCE algorithm, combining DRL and the neighborhood-based transfer strategy. Finally, the PF can be approximated by a simple feed-forward pass of the trained models.
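The transfer loop of Algorithm 1 can be sketched as follows; `make_model` and `train` are hypothetical placeholders for the actual network constructor and the REINFORCE training routine, and the epoch counts follow the experimental settings of Section 4:

```python
import copy

def drlmoa_transfer(weights, make_model, train, epochs_first=5, epochs_rest=1):
    """Sketch of the neighborhood-based parameter-transfer strategy:
    the model trained for subproblem i warm-starts subproblem i+1."""
    models = []
    model = make_model()  # only the first subproblem starts from scratch
    for i, w in enumerate(weights):
        epochs = epochs_first if i == 0 else epochs_rest
        train(model, w, epochs)              # solve subproblem i in place
        models.append(copy.deepcopy(model))  # snapshot; `model` carries over
    return models
```

Because later subproblems inherit trained parameters from their neighbor, a single epoch suffices for each of them.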
2.2 Model of Subproblem: Pointer Network
A subproblem instance of MOTSP can be defined on a graph with $n$ nodes, denoted by a set $s = \{x_1, \dots, x_n\}$. Each node $i$ has a feature vector $x_i$, whose components correspond to the different objectives of MOTSP. For example, a widely used feature is the 2-dimensional coordinate in Euclidean space. A solution, denoted by $\pi = (\pi_1, \dots, \pi_n)$, is a permutation of the graph nodes of MOTSP. The objective is to minimize the weighted sum of the different objectives, as in Eq. (3). The process of generating a solution can be viewed as a sequential decision process, so each subproblem can be solved by an encoder-decoder model [4] parameterized by $\theta$. Firstly, the encoder maps the node features to node embeddings in a high-dimensional vector space. Then the decoder generates the solution step by step. At each decoding step $t$, one node $\pi_t$ that has not been visited is selected. Hence, the probability of a solution can be modelled by the chain rule:

(4) $p_\theta(\pi \mid s) = \prod_{t=1}^{n} p_\theta(\pi_t \mid s, \pi_{1:t-1}).$
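Given the per-step probability distributions produced by the decoder, Eq. (4) is conveniently evaluated in log space; a minimal sketch:

```python
import numpy as np

def solution_log_prob(step_probs, choices):
    """Log-probability of a full tour as the sum of the per-step
    log-probabilities of the chosen nodes (log of the product in Eq. (4)).
    step_probs: one probability vector per decoding step.
    choices: the node index selected at each step."""
    return float(sum(np.log(p[c]) for p, c in zip(step_probs, choices)))
```

Summing log-probabilities avoids the numerical underflow that multiplying many small probabilities would cause for large $n$.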
In DRLMOA, a modified pointer network is used to compute the probability in Eq. (4). The encoder of the modified pointer network transforms each node feature into an embedding in a high-dimensional vector space through a 1-dimensional (1D) convolution layer. At each decoding step $t$, a gated recurrent unit (GRU) [4] and a variant of the attention mechanism [1] are used to produce a probability distribution over the unvisited nodes, which is used to select the next node to visit. More details of the modified pointer network can be found in [11].

3 The Proposed Algorithm: MODRL/DAM
3.1 Motivation
In DRLMOA, a modified pointer network is used to model the subproblems of MOTSP. In the modified pointer network, the encoder extracts the node features using a simple 1D convolutional layer. However, each subproblem of MOTSP is defined over a graph that is fully connected (with self-connections). Such a simple encoder cannot exploit the graph structure of a problem instance. At decoding step $t$, the decoder uses a GRU to map the partial tour to a hidden state, which is used as the decoding context to calculate the probability distribution for selecting the next node. However, the partial tour cannot be changed, and the goal is to construct a path from the last visited node back to the first node through all unvisited nodes. In other words, the selection of the next node is relevant only to the first and last nodes of the partial tour. Using a GRU to map the whole partial path to a hidden state may therefore not be so helpful in selecting the next node, since the hidden state contains much irrelevant information. Thus, this paper uses the attention model [10], instead of the pointer network, to model the subproblems.
3.2 Model of Subproblem: Attention Model
The attention model is also an encoder-decoder model. However, different from the modified pointer network, the encoder of the attention model can be viewed as a graph attention network [16], which is used to compute the embedding of each node. As shown in Fig. 2(a), by attending over the other nodes, the embedding of each node captures the node features as well as the graph structure features. The decoder of the attention model does not use a GRU to summarize the whole partial path into a decoding context vector. Instead, the decoding context vector is calculated from the graph embedding and the embeddings of the first and last nodes of the partial tour, which is more useful in selecting the next node. The details of the attention model are described below.
3.2.1 Encoder of Attention Model
The encoder of the attention model transforms each node feature vector in the $d_x$-dimensional input space into a node embedding in the $d_h$-dimensional vector space. The encoder consists of a linear transformation layer and $L$ attention layers, and is similar to the encoder used in the Transformer architecture [15]. However, the encoder of the attention model does not use positional encoding, since the input order is not meaningful. For each node $i$, where $i \in \{1, \dots, n\}$, the linear transformation layer with parameters $W^0$ and $b^0$ transforms the node feature vector $x_i$ into the initial node embedding $h_i^{(0)}$:

(5) $h_i^{(0)} = W^0 x_i + b^0.$
Then the node embeddings are fed into the $L$ attention layers. Each attention layer contains a multi-head attention sublayer and a feed-forward sublayer. For each sublayer, a batch normalization [8] layer and a skip connection [7] are used to accelerate the training process.

Multi-Head Attention Sublayer
For each node $i$, this sublayer aggregates different types of messages from the other nodes in the graph. Let the embedding of node $i$ in layer $\ell$ be $h_i^{(\ell)}$, where $\ell \in \{1, \dots, L\}$ and $h_i^{(\ell)} \in \mathbb{R}^{d_h}$. The output of the multi-head attention sublayer is computed as follows:

(6) $\hat{h}_i = \mathrm{BN}\big(h_i^{(\ell-1)} + \mathrm{MHA}_i\big(h_1^{(\ell-1)}, \dots, h_n^{(\ell-1)}\big)\big),$

where BN is the batch normalization layer and $\mathrm{MHA}_i$ is the multi-head attention vector that contains the different types of messages from the other nodes. The number of heads is set to $M$. For each head $m \in \{1, \dots, M\}$, the query vector $q_{im}$, the key vector $k_{im}$ and the value vector $v_{im}$ are calculated by a linear transformation of the node embedding $h_i^{(\ell-1)}$ for each node $i$. Then the process of computing the multi-head attention vector is described as follows:

(7) $q_{im} = W_m^Q h_i^{(\ell-1)}, \quad k_{im} = W_m^K h_i^{(\ell-1)}, \quad v_{im} = W_m^V h_i^{(\ell-1)},$

(8) $u_{ijm} = \frac{q_{im}^{\top} k_{jm}}{\sqrt{d_k}}, \quad a_{ijm} = \frac{e^{u_{ijm}}}{\sum_{j'=1}^{n} e^{u_{ij'm}}},$

(9) $h'_{im} = \sum_{j=1}^{n} a_{ijm} v_{jm}, \quad \mathrm{MHA}_i = \sum_{m=1}^{M} W_m^O h'_{im},$

where $W_m^Q, W_m^K \in \mathbb{R}^{d_k \times d_h}$, $W_m^V \in \mathbb{R}^{d_v \times d_h}$ and $W_m^O \in \mathbb{R}^{d_h \times d_v}$ are the trainable attention weights of the multi-head attention sublayer. $u_{ijm}$ is the compatibility of the query vector of node $i$ with the key vector of node $j$, and the attention weight $a_{ijm}$ is calculated by a softmax function. $h'_{im}$ is the combination of the messages from the other nodes received by node $i$. The multi-head attention vector $\mathrm{MHA}_i$ is computed from the $h'_{im}$ and $W_m^O$.
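A minimal NumPy sketch of the multi-head attention computation in Eqs. (7)-(9); the weight shapes follow the conventions above, and batch normalization and the skip connection of Eq. (6) are omitted for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(H, Wq, Wk, Wv, Wo):
    """H: (n, d_h) node embeddings; Wq/Wk/Wv: (M, d_h, d_k) per-head
    projections; Wo: (M, d_k, d_h) output projections. Returns MHA_i
    for every node, Eqs. (7)-(9)."""
    M, _, d_k = Wq.shape
    out = np.zeros_like(H)
    for m in range(M):
        Q, K, V = H @ Wq[m], H @ Wk[m], H @ Wv[m]  # queries, keys, values
        u = Q @ K.T / np.sqrt(d_k)                 # compatibilities, Eq. (8)
        a = softmax(u, axis=-1)                    # attention weights
        out += (a @ V) @ Wo[m]                     # aggregate, project, sum heads
    return out
```

The per-head results are projected back to $d_h$ dimensions and summed, so the output has the same shape as the input embeddings.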
Feed Forward Sublayer
In this sublayer, the node embedding of each node is updated by making use of the output of the multi-head attention sublayer. The feed-forward sublayer (FF) consists of a fully-connected layer with ReLU activation followed by another fully-connected layer. For each node $i$, the input of the feed-forward sublayer is the output $\hat{h}_i$ of the multi-head attention sublayer, and the output is calculated as follows:

(10) $\mathrm{FF}(\hat{h}_i) = W^{F,1} \, \mathrm{ReLU}\big(W^{F,0} \hat{h}_i + b^{F,0}\big) + b^{F,1},$

(11) $h_i^{(\ell)} = \mathrm{BN}\big(\hat{h}_i + \mathrm{FF}(\hat{h}_i)\big),$

where $W^{F,0}$, $W^{F,1}$, $b^{F,0}$ and $b^{F,1}$ are trainable parameters.
For each node $i$, the final node embedding $h_i^{(L)}$ is produced by the $L$ attention layers. Besides that, the graph embedding $\bar{h}$ is defined as follows:

(12) $\bar{h} = \frac{1}{n} \sum_{i=1}^{n} h_i^{(L)}.$

Both the node embeddings and the graph embedding are passed to the decoder.
3.2.2 Decoder of Attention Model
At each decoding step $t$, the decoder makes the decision $\pi_t$ based on the partial tour $\pi_{1:t-1}$, the embeddings of each node and the whole graph. Firstly, the initial context embedding $h_c$ is calculated as the concatenation of the graph embedding $\bar{h}$, the node embedding $h_{\pi_1}^{(L)}$ of the first node and the node embedding $h_{\pi_{t-1}}^{(L)}$ of the last node. When $t = 1$, $h_{\pi_1}^{(L)}$ and $h_{\pi_{t-1}}^{(L)}$ are replaced by two trainable parameter vectors $v^1$ and $v^f$:

(13) $h_c = \begin{cases} [\bar{h};\, h_{\pi_1}^{(L)};\, h_{\pi_{t-1}}^{(L)}] & t > 1, \\ [\bar{h};\, v^1;\, v^f] & t = 1. \end{cases}$
Then a new context embedding $h_c'$ is computed with an $M$-head attention layer. The query vector comes from the previous context embedding $h_c$. For each node $i$, the key vector $k_{im}$ and the value vector $v_{im}$ are transformed from the node embedding $h_i^{(L)}$:

(14) $q_{cm} = W_m^Q h_c, \quad k_{im} = W_m^K h_i^{(L)}, \quad v_{im} = W_m^V h_i^{(L)},$

where $W_m^Q \in \mathbb{R}^{d_k \times 3d_h}$ and $W_m^K, W_m^V \in \mathbb{R}^{d_k \times d_h}$. Then the compatibilities of the query vector with all nodes are computed. Different from the encoder of the attention model, the nodes that have already been visited are masked when calculating the compatibilities:

(15) $u_{cim} = \begin{cases} \dfrac{q_{cm}^{\top} k_{im}}{\sqrt{d_k}} & \text{if node } i \text{ is unvisited}, \\ -\infty & \text{otherwise.} \end{cases}$

Then the attention weights can be obtained by a softmax function and the new context embedding can be calculated as follows:

(16) $a_{cim} = \frac{e^{u_{cim}}}{\sum_{j=1}^{n} e^{u_{cjm}}},$

(17) $h_c' = \sum_{m=1}^{M} W_m^O \sum_{i=1}^{n} a_{cim} v_{im},$
where $W_m^O \in \mathbb{R}^{3d_h \times d_k}$. Finally, based on the new context embedding $h_c'$, the probability of selecting node $i$ as the next node to visit is calculated by a single-head attention layer:

(18) $q = W^Q h_c', \quad k_i = W^K h_i^{(L)},$

(19) $u_i = \begin{cases} C \cdot \tanh\left(\dfrac{q^{\top} k_i}{\sqrt{d_k}}\right) & \text{if node } i \text{ is unvisited}, \\ -\infty & \text{otherwise,} \end{cases}$

(20) $p_\theta(\pi_t = i \mid s, \pi_{1:t-1}) = \frac{e^{u_i}}{\sum_{j=1}^{n} e^{u_j}},$

where $W^Q$ and $W^K$ are trainable parameters. When computing the compatibilities in Eq. (19), the results are limited to $(-C, C)$ by the tanh function.
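Eqs. (18)-(20) amount to a masked, tanh-clipped single-head attention step; a NumPy sketch (the value of the clipping constant `C` and the weight shapes are illustrative assumptions):

```python
import numpy as np

def next_node_probs(h_ctx, H, Wq, Wk, visited, C=10.0):
    """Probability over nodes for the next decoding step, Eqs. (18)-(20).
    h_ctx: context embedding; H: (n, d_h) node embeddings;
    visited: boolean mask of nodes already in the partial tour."""
    q = h_ctx @ Wq                   # query from the context embedding
    K = H @ Wk                       # keys from the node embeddings
    u = C * np.tanh(q @ K.T / np.sqrt(K.shape[-1]))  # clipped compatibilities
    u = np.where(visited, -np.inf, u)                # mask visited nodes
    e = np.exp(u - u[~visited].max())                # stable softmax, Eq. (20)
    return e / e.sum()
```

Masked entries receive $-\infty$ before the softmax, so visited nodes end up with exactly zero probability.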
The decoding process at decoding step $t$ is shown in Fig. 2(b). Firstly, the context embedding is computed with a multi-head attention layer, making use of the partial solution and the unvisited nodes. Then, based on the context embedding and the unvisited nodes, the probability distribution over the unvisited nodes is calculated by a single-head attention mechanism.
3.3 Framework and Training Method
The proposed algorithm uses the same MOEA/D framework as in DRLMOA (Algorithm 1). The training method is briefly described as follows.
The REINFORCE algorithm with a learned baseline, a well-known actor-critic style training method, is used to train the model of each subproblem. For each subproblem, the trainable parameters are composed of an actor network and a critic network. The actor network is the attention model, parameterized by $\theta$. The critic network, parameterized by $\phi$, has four 1D convolutional layers that map the embeddings of a problem instance into a single value. The output of the critic network is an estimate of the objective value of the subproblem.
For the actor network, the training objective is the weighted sum of the different objectives of the solution $\pi$ of a problem instance $s$. The gradients of the parameters $\theta$ can be defined as follows:

(21) $\nabla_\theta J(\theta \mid s) = \mathbb{E}_{\pi \sim p_\theta(\cdot \mid s)}\big[\big(g(\pi \mid \lambda, s) - b_\phi(s)\big)\, \nabla_\theta \log p_\theta(\pi \mid s)\big],$

where $g(\pi \mid \lambda, s)$ is the objective function of the subproblem, i.e., the weighted sum of the different objectives, and $\lambda$ is the corresponding weight vector. $b_\phi(s)$ is the baseline function calculated by the critic network, which estimates the expected objective value in order to reduce the variance of the gradients.
In the training process, the MOTSP instances are generated from a distribution $\mathcal{S}$. For each node of an instance $s \sim \mathcal{S}$, different features may come from different distributions. For example, one feature can be a two-dimensional coordinate in Euclidean space drawn from a uniform distribution. Then the gradients of the parameters can be approximated by Monte Carlo sampling as follows:

(22) $\nabla_\theta J(\theta) \approx \frac{1}{B} \sum_{i=1}^{B} \big(g(\pi^i \mid \lambda, s_i) - b_\phi(s_i)\big)\, \nabla_\theta \log p_\theta(\pi^i \mid s_i),$

where $B$ is the batch size, $s_i$ is a problem instance sampled from $\mathcal{S}$, and $\pi^i$, generated by the actor network, is the solution of $s_i$.
Different from the actor network, the critic network aims to learn to estimate the expected objective value given an instance $s$. Hence, the objective function of the critic network is a mean squared error between the objective value estimated by the critic network and the actual objective value of the solution generated by the actor network. The objective function of the critic network is formulated as follows:

(23) $\mathcal{L}(\phi) = \frac{1}{B} \sum_{i=1}^{B} \big(b_\phi(s_i) - g(\pi^i \mid \lambda, s_i)\big)^2.$
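The batch estimates in Eqs. (22)-(23) reduce to simple advantage-weighted quantities; a framework-agnostic sketch (in a real implementation the log-probabilities would carry autograd gradients, so minimizing the actor surrogate reproduces Eq. (22)):

```python
import numpy as np

def reinforce_losses(log_probs, objectives, baselines):
    """Batch actor surrogate and critic MSE for a minimization problem.
    log_probs: log p_theta(pi_i | s_i) per instance;
    objectives: g(pi_i | lambda, s_i); baselines: b_phi(s_i)."""
    log_probs = np.asarray(log_probs)
    adv = np.asarray(objectives) - np.asarray(baselines)  # advantage term
    actor_loss = np.mean(adv * log_probs)   # gradient w.r.t. theta matches Eq. (22)
    critic_loss = np.mean(adv ** 2)         # Eq. (23)
    return actor_loss, critic_loss
```

Treating the advantage as a constant (no gradient through the baseline) is the standard variance-reduction setup used by REINFORCE with a critic.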
The training algorithm can be described in Algorithm 2.
4 Experiment
4.1 Problem Instances and Experimental Settings
MODRL/DAM is tested on the Euclidean instances in [11]. In the Euclidean instances, the two node features are both two-dimensional coordinates sampled uniformly at random, and the two cost functions between node $i$ and node $j$ are the Euclidean distances between the corresponding coordinates.
To train the models of MODRL/DAM, problem instances with 20 and 40 nodes are used. After training, two models of MODRL/DAM are obtained, so the influence of the number of nodes used in training can be discussed. To show the robustness of our method, the models are tested on problem instances with 20, 40, 100, 150 and 200 nodes. Besides, kroAB100, kroAB150 and kroAB200, generated from TSPLIB [14], are used to test the performance of our method.
DRLMOA is implemented and used as the baseline. Both our method and DRLMOA are trained on datasets with 20 and 40 nodes, so there are four models in total: MODRL/DAM (20), DRLMOA (20), MODRL/DAM (40) and DRLMOA (40). To make the comparison fair, some parameters of our method and the baseline are set to the same values. The number of subproblems is set to 100, the input dimension is set to 4, and the dimension of the node embeddings is set to 128. In the training process, the batch size is set to 200, the number of problem instances is set to 500000, the model of the first subproblem is trained for 5 epochs, and each model of the remaining subproblems is trained for 1 epoch. Besides these parameters, the critic network consists of four 1D convolutional layers. The input and output channels of the four convolutional layers are (4, 128), (128, 20), (20, 20) and (20, 1), where the first element of each tuple is the number of input channels and the second is the number of output channels. For all convolutional layers, the kernel size and stride are set to 1.

In MODRL/DAM, the number of attention layers is set to 1, the number of heads is set to 8, the dimensions of the query vector and the value vector are both set to $d_k = d_v = 16$, and the hidden dimension of the feed-forward sublayer is set to 512.
4.2 Results and Discussions
The hypervolume (HV) indicator is calculated to compare the performance of our method and DRLMOA on the test instances. When computing the HV value, the objective values are normalized and the reference point is set to . The PFs obtained by MODRL/DAM and DRLMOA are also compared. Besides, the influence of the number of nodes used in the training process is also discussed. All test experiments are conducted on a GeForce RTX 2080Ti GPU.
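For a 2-objective minimization front, the HV indicator can be computed by a simple sweep; a sketch assuming normalized objectives and an illustrative reference point of $(1, 1)$ (the reference point used in the experiments is not restated here):

```python
import numpy as np

def hypervolume_2d(front, ref=(1.0, 1.0)):
    """Hypervolume of a 2-D minimization front w.r.t. a reference point.
    Sorts by the first objective and accumulates the staircase area."""
    pts = np.asarray(front, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]   # sweep in order of the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:               # only nondominated steps add area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

Dominated points never pass the `f2 < prev_f2` check, so they contribute no area, as required by the HV definition.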
Table 1: Average HV values and computation times T(s) on random instances.

| #nodes |      | MODRL/DAM (20) | DRLMOA (20) | MODRL/DAM (40) | DRLMOA (40) |
|--------|------|----------------|-------------|----------------|-------------|
| 20     | HV   | 0.802          | 0.796       | 0.796          | 0.785       |
|        | T(s) | 4.6            | 2           | 4.6            | 1.9         |
| 40     | HV   | 0.813          | 0.773       | 0.821          | 0.815       |
|        | T(s) | 8.8            | 4.2         | 8.7            | 4.2         |
| 70     | HV   | 0.834          | 0.803       | 0.856          | 0.842       |
|        | T(s) | 15             | 6.7         | 14.7           | 6.5         |
| 100    | HV   | 0.846          | 0.818       | 0.872          | 0.853       |
|        | T(s) | 21.4           | 9.6         | 20.5           | 10.3        |
| 150    | HV   | 0.857          | 0.838       | 0.884          | 0.864       |
|        | T(s) | 29.5           | 15.9        | 30.7           | 15          |
| 200    | HV   | 0.866          | 0.853       | 0.894          | 0.878       |
|        | T(s) | 39.8           | 18.6        | 38.6           | 20.3        |
Table 2: HV values and computation times T(s) on the kroAB instances.

| instance |      | MODRL/DAM (20) | DRLMOA (20) | MODRL/DAM (40) | DRLMOA (40) |
|----------|------|----------------|-------------|----------------|-------------|
| kroAB100 | HV   | 0.852          | 0.832       | 0.876          | 0.876       |
|          | T(s) | 19.8           | 9.2         | 19.2           | 9           |
| kroAB150 | HV   | 0.874          | 0.855       | 0.891          | 0.884       |
|          | T(s) | 27.9           | 13.7        | 27.4           | 13.5        |
| kroAB200 | HV   | 0.87           | 0.85        | 0.885          | 0.882       |
|          | T(s) | 37.3           | 18          | 38             | 17.7        |
The HV values of the random instances are shown in Table 1. For the random instances with 20, 40, 70, 100, 150 and 200 nodes, 10 instances are tested for each size and the average HV value over them is reported. In terms of the average HV values, MODRL/DAM (40) performs better than DRLMOA (40) on all sizes of random instances. For kroAB100, kroAB150 and kroAB200, the HV values are reported in Table 2, and MODRL/DAM (40) again achieves better performance than DRLMOA (40). The computation time of our method is longer than that of DRLMOA. This is reasonable, because the graph attention encoder of the attention model requires more computation than a single convolutional layer.
The results on the test instances with different numbers of nodes are shown in Fig. 3. As the number of nodes increases, MODRL/DAM (40) obtains better performance in terms of convergence and diversity than DRLMOA (40). Fig. 4 shows the performance of MODRL/DAM (40) and DRLMOA (40) on the kroAB100, kroAB150 and kroAB200 instances. A significant improvement in convergence is observed for our method, and the diversity achieved by our method is also slightly better.
Then, the performances of MODRL/DAM (40) and MODRL/DAM (20) are compared to investigate the influence of the number of nodes used in the training process. The HV values in Table 1 show that MODRL/DAM (40) performs better than MODRL/DAM (20) on random instances with 40, 70, 100, 150 and 200 nodes. On the random instances with 20 nodes, MODRL/DAM (40) performs similarly to MODRL/DAM (20), being only slightly worse. From the PFs obtained by MODRL/DAM (40) and MODRL/DAM (20) in Fig. 5, MODRL/DAM (40) shows better convergence and diversity. When training on instances with a larger number of nodes, the model of MODRL/DAM can learn to deal with more complex information about node features and structure features. Thus, a better model of MODRL/DAM can be trained with more nodes.
From the experiment results above, it is observed that MODRL/DAM has a good generalization performance in solving MOTSP. For MODRL/DAM, the model trained with 40 nodes can be used to approximate the PF of problem instances with 200 nodes. In terms of convergence and diversity, MODRL/DAM performs better than DRLMOA.
The good performance of MODRL/DAM indicates that the graph structure features are helpful in constructing solutions for MOTSP, and that the attention model can extract the structure information of a problem instance effectively. Thus, MODRL/DAM could also be applied to other similar combinatorial optimization problems with graph structures, such as the multiobjective vehicle routing problem [18, 19]. Finally, one remaining issue is that the solutions of the MOTSP instances are not distributed evenly in our experiments, which needs further research.

5 Conclusions
This paper proposes a multiobjective deep reinforcement learning algorithm using decomposition and attention model. MODRL/DAM adopts an attention model to model the subproblems of MOPs. The attention model can extract structure features as well as node features of problem instances, so more useful structure information is used to generate better solutions. MODRL/DAM is tested on MOTSP instances and compared with DRLMOA, which uses a pointer network to model the subproblems of MOTSP. The results show that MODRL/DAM achieves better performance. A good generalization performance across different problem sizes is also observed for MODRL/DAM.
Acknowledgement
This work is supported by the National Key R&D Program of China
(2018AAA0101203), and the National Natural Science Foundation of China (61673403, U1611262).
References
[1] (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
[2] (2016) Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940.
[3] (2019) The collaborative local search based on dynamic-constrained decomposition with grids for combinatorial multiobjective optimization. IEEE Transactions on Cybernetics, pp. 1–12.
[4] (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
[5] (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6 (2), pp. 182–197.
[6] (2018) Learning heuristics for the TSP by policy gradient. In International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research, pp. 170–181.
[7] (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
[8] (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
[9] (2017) Learning combinatorial optimization algorithms over graphs. In Advances in Neural Information Processing Systems, pp. 6348–6358.
[10] (2018) Attention, learn to solve routing problems! arXiv preprint arXiv:1803.08475.
[11] (2019) Deep reinforcement learning for multiobjective optimization. arXiv preprint arXiv:1906.02386.
[12] (2010) The multiobjective traveling salesman problem: a survey and a new approach. In Advances in Multi-Objective Nature Inspired Computing, pp. 119–141.
[13] (2018) Reinforcement learning for solving the vehicle routing problem. In Advances in Neural Information Processing Systems, pp. 9839–9849.
[14] (1991) TSPLIB–a traveling salesman problem library. ORSA Journal on Computing 3 (4), pp. 376–384.
[15] (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
[16] (2017) Graph attention networks. arXiv preprint arXiv:1710.10903.
[17] (2015) Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692–2700.
[18] (2019) A two-stage multiobjective evolutionary algorithm for multiobjective multidepot vehicle routing problem with time windows. IEEE Transactions on Cybernetics 49 (7), pp. 2467–2478.
[19] (2019) Multiobjective multiple neighborhood search algorithms for multiobjective fleet size and mix location-routing problem with time windows. IEEE Transactions on Systems, Man, and Cybernetics: Systems, pp. 1–15.
[20] (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8 (3-4), pp. 229–256.
[21] (2017) Set-based discrete particle swarm optimization based on decomposition for permutation-based multiobjective combinatorial optimization problems. IEEE Transactions on Cybernetics 48 (7), pp. 2139–2153.
[22] (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation 11 (6), pp. 712–731.