1. Introduction
Evolutionary diversity optimization (EDO) aims to compute a set of solutions that all have high quality while maximally differing from each other. This area of research, started by Ulrich and Thiele (Ulrich et al., 2010; Ulrich and Thiele, 2011), has recently gained significant attention within the evolutionary computation community, as evolution itself is increasingly regarded as a diversification device rather than a pure objective optimizer (Pugh et al., 2016). After all, in nature, deviating from one's predecessors leads to finding new niches, which reduces competitive pressure and increases evolvability (Lehman and Stanley, 2013). This perspective challenges the notion that evolutionary processes are mainly adaptive with respect to some quality metric, and that population diversity only serves the adaptation of individuals and has no intrinsic worth. In optimization, diversity optimization is a useful extension of traditional optimization tasks, as a set of multiple interesting solutions often has more practical value than a single very good solution.
Along this line of research, studies have explored different relationships between quality and diversity. A trend emerging from evolutionary robotics is Quality Diversity (QD), which focuses on exploring diverse niches in the feature space and maximizing quality within each niche (Pugh et al., 2016; Cully and Demiris, 2018; Gravina et al., 2019; Alvarez et al., 2019). This approach maximizes diversity via niche discovery, meaning that what constitutes a niche in the solution space needs to be well-defined beforehand. Other studies place more importance on diversity measured directly from solutions, applying evolutionary techniques to generate images with varying features (Alexander et al., 2017), or to compute diverse Traveling Salesperson Problem (TSP) instances (Gao et al., 2020; Bossek et al., 2019) useful for automated algorithm selection and configuration (Kerschke et al., 2019). Different indicators for measuring the diversity of sets of solutions in EDO algorithms, such as the star discrepancy (Neumann et al., 2018) or popular indicators from the area of evolutionary multiobjective optimization (Neumann et al., 2019), have been investigated to create high-quality sets of solutions. The study (Do et al., 2020) explores EDO for the TSP; it is among the first studies (Baste et al., 2020; Fomin et al., 2021, 2020; Hanaka et al., 2020) on solution diversification for a combinatorial optimization problem.
In this study, we contribute to the understanding of evolutionary diversity optimization for combinatorial problems. Specifically, we focus on the TSP and the Quadratic Assignment Problem (QAP), two fundamental NP-hard problems whose solutions are represented as permutations; the latter has also been tackled with genetic algorithms (Tate and Smith, 1995; Tosun, 2014; Misevicius, 2004; Ahuja et al., 2000). The structures of the solution spaces associated with these problems are similar, yet different enough to merit distinct diversity measures. We use two approaches to measuring diversity: one based on the representation frequencies of “objects” (edges or assignments) in the population, and one based on the minimum distance between each solution and the rest. We consider a simple evolutionary algorithm that uses only mutation, and examine its worst-case performance in diversity maximization under various mutation operators. Our results reveal how properties of a population influence the effectiveness of mutations in equalizing the objects' representation frequencies. Additionally, we carry out experimental benchmarks on various QAPLIB instances in unconstrained (no quality threshold) and constrained settings, using a simple mutation-only algorithm with 2-opt mutation. The results indicate optimistic runtimes for maximizing diversity over QAP solutions, and show distinct maximization behaviors when different diversity measures are used in the algorithm. With this, we extend the investigation in (Do et al., 2020) theoretically, and experimentally with regard to the QAP.

The paper is structured as follows. In Section 2, we introduce the TSP and QAP in the context of evolutionary diversity optimization and describe the algorithm that is the subject of our analysis. In Section 3, we introduce the diversity measures for both problems. Section 4 consists of the runtime analysis of the introduced algorithm. We report on our experimental investigations in Section 5 and finish with some conclusions.
2. Maximizing diversity in TSP and QAP
Throughout the paper, we use the shorthand $[n]=\{1,\dots,n\}$. The symmetric TSP is formulated as follows. Given a complete undirected graph $G=(V,E)$ with $n$ nodes, $m$ edges and a distance function $d\colon E\to\mathbb{R}_{>0}$, the goal is to compute a tour of minimal cost that visits each node exactly once and finally returns to the starting node. Representing a tour by a permutation $\pi$ of the nodes, the goal is to find a permutation that minimizes the tour cost
$$c(\pi)=d(\pi(n),\pi(1))+\sum_{i=1}^{n-1}d(\pi(i),\pi(i+1)).$$
The QAP is formulated as follows. Given $n$ facilities, $n$ locations, weights $w_{kl}$ between each pair of locations and flows $f_{ij}$ between each pair of facilities, find a one-to-one mapping $\sigma$ from facilities to locations that minimizes the cost function
$$C(\sigma)=\sum_{i=1}^{n}\sum_{j=1}^{n}f_{ij}\,w_{\sigma(i)\sigma(j)}.$$
A problem instance is encoded with two matrices: one for the flows and one for the weights. Similar to the TSP, we can abstract facilities and locations as indices in $[n]$. Therefore, each mapping is uniquely defined by a permutation. Given that there is a one-to-one correspondence between all permutations and all mappings, the solution space is the permutation space. This is an important distinction between the TSP and the QAP, from which low-level differences between the diversity measures in each case emerge. On the other hand, the high-level structure of a tour is identical to that of a mapping, so notions like distance and diversity are the same for both above a certain layer of abstraction.
In this paper, we consider diversity optimization for the TSP and the QAP. For each problem instance, we are to find a set of solutions that is diverse with respect to some diversity measure, while each solution meets a given quality threshold. Let $OPT$ be the objective value of an optimal solution; a solution $x$ satisfies the quality threshold iff $c(x)\le(1+\alpha)\,OPT$, where $\alpha>0$ is a parameter that determines the required quality of a desired solution. The quality criterion means that the final population should only contain $(1+\alpha)$-approximations for the problem instance. We assume that the optimal solution is known for a given TSP or QAP instance.
We consider the $(\mu+1)$-EA that was used to diversify TSP tours in (Do et al., 2020). The algorithm is described in Algorithm 1. It uses only mutation to introduce new genes, and tries to minimize duplication in the gene pool with elitist survival selection. The algorithm slightly modifies the population in each step by mutating a random solution, essentially performing random local search in the population space. As with many evolutionary algorithms, it can be customized for different problems, in this case by modifying the mutation operator and the diversity measure. In this work, we are interested in the worst-case performance of the algorithm under the assumption that any offspring is acceptable.
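Since the pseudocode of Algorithm 1 is not reproduced here, the following is a minimal sketch of such a mutation-only scheme with elitist, diversity-based survival selection. The names `mutate`, `diversity`, and `quality_ok` are illustrative placeholders rather than the paper's notation.

```python
import random

def diversity_ea(init_pop, mutate, diversity, quality_ok, steps):
    """Mutation-only EA sketch: pick a random parent, mutate it, then keep
    the most diverse size-mu subset of the mu+1 candidates (elitism)."""
    pop = list(init_pop)
    mu = len(pop)
    for _ in range(steps):
        parent = random.choice(pop)
        child = mutate(parent)
        if not quality_ok(child):
            continue  # offspring violating the quality threshold is rejected
        candidates = pop + [child]
        # survival selection: drop the candidate whose removal leaves
        # the most diverse population of size mu
        pop = max((candidates[:i] + candidates[i + 1:] for i in range(mu + 1)),
                  key=diversity)
    return pop
```

In the unconstrained setting analyzed below, `quality_ok` simply accepts every offspring.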
3. Diversity measures
The structure of a TSP tour is similar to that of a QAP mapping in the sense that each is defined by a set of objects: edges for tours and assignments for mappings. In fact, the size of such a set always equals the instance size. For this reason, diversity measures for populations of tours and those for populations of mappings share many commonalities. In particular, we describe two measures introduced in (Do et al., 2020), customized for the TSP and the QAP. For consistency, we use the same notations for the same concepts in the two problems unless stated otherwise. We also refer to (Do et al., 2020) for a more in-depth discussion of the measures, and for fast implementations of the survival selection in Algorithm 1 based on these measures, which can be adapted to QAP solutions.
3.1. Edge/Assignment diversity
In this approach, we consider diversity in terms of equal representation of edges/assignments in the population. For each object, it takes into account the number of solutions in the population that contain it.
For the TSP, given a population $P=\{T_1,\dots,T_\mu\}$ of tours and an edge $e$, we denote by $c(e)$ its edge count, which is defined as
$$c(e)=|\{T\in P : e\in E(T)\}|,$$
where $E(T)$ is the set of edges used by tour $T$. Then, in order to maximize the edge diversity, we aim to minimize the vector
$$\mathrm{sort}\big(c(e_1),\dots,c(e_m)\big)$$
in the lexicographic order, where sorting is performed in descending order. As shown in (Do et al., 2020), this maximizes the pairwise distance sum
$$\sum_{i\ne j}|E(T_i)\setminus E(T_j)|.$$
Similarly for the QAP, given a population $P=\{\sigma_1,\dots,\sigma_\mu\}$ of mappings and an assignment $a$, we denote by $c(a)$ its assignment count, defined as
$$c(a)=|\{\sigma\in P : a\in A(\sigma)\}|,$$
where $A(\sigma)=\{(i,\sigma(i)) : i\in[n]\}$ is the set of assignments used by solution $\sigma$. The corresponding vector to be minimized in order to maximize assignment diversity is then
$$\mathrm{sort}\big(c(a_1),\dots,c(a_{n^2})\big)$$
in the lexicographic order, where sorting is performed in descending order. Similarly, this maximizes the quantity
$$\sum_{i\ne j}|A(\sigma_i)\setminus A(\sigma_j)|.$$
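As a concrete illustration (not the paper's implementation), the edge-count vector and the pairwise distance sum for tours can be computed as follows; Python's built-in list comparison then realizes the lexicographic order used for survival selection.

```python
from collections import Counter
from itertools import combinations

def tour_edges(tour):
    """Undirected edge set of a tour given as a permutation of its nodes."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def count_vector(pop):
    """Edge counts c(e) sorted in descending order -- the vector to be
    minimized lexicographically."""
    counts = Counter()
    for tour in pop:
        counts.update(tour_edges(tour))
    return sorted(counts.values(), reverse=True)

def pairwise_distance_sum(pop):
    """Sum over ordered pairs of |E(T_i) \\ E(T_j)|; for same-size tours the
    two directions of each unordered pair contribute equally."""
    return sum(2 * len(tour_edges(a) - tour_edges(b))
               for a, b in combinations(pop, 2))
```

For QAP mappings, the same code applies with `tour_edges` replaced by the assignment set `{(i, sigma(i))}`.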
While this diversity measure is directly related to the notion of diversity, using it to optimize populations has its drawbacks. As mentioned in (Do et al., 2020), populations containing clustered subsets of solutions can still score highly, which is undesirable. For this reason, we also consider another measure that circumvents this issue.
3.2. Equalizing pairwise distances
Instead of maximizing all pairwise distances at once, this approach focuses on maximizing the smallest distances, potentially reducing larger distances as a result. Optimizing for this measure reduces clustering phenomena and also tends to increase the distance sum. In this approach, we minimize the following vector lexicographically
$$\mathrm{sort}\big((n-d(x_i,x_j))_{1\le i<j\le\mu}\big),$$
where sorting is performed in descending order, $d(x,y)=|E(x)\setminus E(y)|$ if $x$ and $y$ are TSP tours, and $d(x,y)=|A(x)\setminus A(y)|$ if they are QAP mappings. Doing this also maximizes the minimum pairwise distance
$$\min_{i\ne j}d(x_i,x_j).$$
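To make this measure concrete as well, the following sketch uses the assignment-based distance $d(x,y)=|A(x)\setminus A(y)|$ (for tours, substitute edge sets); the function names are illustrative.

```python
from itertools import combinations

def assignments(perm):
    """Assignment set A(sigma) = {(i, sigma(i))} of a permutation."""
    return {(i, v) for i, v in enumerate(perm)}

def distance(a, b):
    """d(x, y) = |A(x) \\ A(y)|."""
    return len(assignments(a) - assignments(b))

def similarity_vector(pop):
    """Pairwise overlaps n - d(x_i, x_j), sorted in descending order;
    minimizing this vector lexicographically raises the smallest
    pairwise distances first."""
    n = len(pop[0])
    return sorted((n - distance(a, b) for a, b in combinations(pop, 2)),
                  reverse=True)

def min_pairwise_distance(pop):
    return min(distance(a, b) for a, b in combinations(pop, 2))
```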
We can see that for any TSP tour population of sufficiently small size, the two measures agree: maximizing one also maximizes the other. One of the results in this study implies that the same is true for any QAP mapping population of sufficiently small size. On the other hand, for larger population sizes, maximizing one measure does not necessarily maximize the other, as shown by the following example.
Example 1.
For a QAP instance where and , let , , , , , , , , , we have which is the maximum. However,
Because of this, it is tricky to determine the maximum achievable diversity in such cases. For now, we rely on the upper bound of , which is relevant to our experimentation in Section 5.
4. Properties and worst-case results
We investigate the theoretical performance of Algorithm 1 in optimizing for diversity without the quality criterion. For the TSP, we consider three mutation operators: 2-opt, 3-opt (insertion) and 4-opt (exchange). For the QAP, we consider the 2-opt mutation in which two assignments are swapped. In particular, we are interested in the number of iterations until a population with optimal diversity is reached. Our derivation of results is predicated on the absence of local optima: it is always possible to strictly improve diversity in a single step of the algorithm.
4.1. TSP
Let and . For each node , let be the set of edges incident to , and . For each tour , let be the tour resulting from applying 2-opt to at positions and in the permutation, and be the tour resulting from exchanging the -th and -th elements of . We assume as the other cases are trivial. Note that the two positions must be non-adjacent for 4-opt to be applicable (so that four distinct edges are exchanged), which is implicitly assumed where appropriate.
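To make the operators concrete, here is one standard way to realize them on permutations (a sketch; index conventions may differ from the paper's): 2-opt reverses a segment, trading two edges of the cyclic tour, while the 4-opt exchange swaps two non-adjacent elements, trading four edges.

```python
def two_opt(tour, i, j):
    """Reverse the segment tour[i..j] (0-indexed, inclusive); on the cyclic
    tour this removes edges (tour[i-1], tour[i]) and (tour[j], tour[j+1])
    and introduces (tour[i-1], tour[j]) and (tour[i], tour[j+1])."""
    assert 0 <= i < j < len(tour)
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def exchange(tour, i, j):
    """4-opt exchange: swap the elements at positions i and j, which trades
    four edges when the two positions are non-adjacent on the cycle."""
    t = list(tour)
    t[i], t[j] = t[j], t[i]
    return tuple(t)
```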
First, we show that any population with suboptimal diversity and of sufficiently small size presents no local optima to Algorithm 1 with 2-opt mutation, while deriving a lower bound on the probability that a strict improvement is made in a single step. Here, we regard a single-step improvement as a reduction of either the maximum edge count or the number of edges attaining it, in line with the algorithm's convergence path.

Lemma 1.
Given a population of tours such that and , there exists a tour and a pair , such that satisfies,
(1) 
Moreover, in each iteration, Algorithm 1 with 2-opt mutation and makes such an improvement with probability at least .
Proof.
There must be tours in containing edge such that , let be one such tour. W.l.o.g., let be represented by a permutation of nodes where . The operation trades edges and in for and . If for every such new edge we have , then satisfies (1) since and cannot reach . We show that there is always such a position . Since can only go from to , there are choices of . It is the case that for any since each tour contributes to , and that and since uses them, thus
(2)  and  
According to the pigeonhole principle, (2) implies there are at least elements from to such that , where
Likewise, there are at least elements from to such that . This implies that there are at least elements from to where and . We have
proving the first part of the lemma. In each iteration, Algorithm 1 selects a tour like with probability at least . There are at least different 2-opt operations on such a tour to produce . Since there are 2-opt operations in total, the probability that Algorithm 1 obtains from is at least
∎
In Lemma 1, only one favorable scenario is accounted for, where both edges to be traded in have counts less than . However, there are other situations where strict improvements would be made as well, such as when both swapped-out edges have count . Furthermore, a tour to be mutated might contain more than two edges with such counts, increasing the number of beneficial choices dramatically. Consequently, the derived probability bound is pessimistic, and the average success rate might be much higher. It also means that the derived bound on the range of is pessimistic, and the absence of local optima is very probable at larger population sizes, albeit with a reduced diversity improvement probability.
Intuitively, larger population sizes present more complex search spaces where local search approaches are more prone to reaching suboptimal results. It is reasonable to infer that small population sizes make diversity maximization easier for Algorithm 1. However, for 3-opt mutation, local optima can exist even with a population size as small as 3. Next, we show a simple construction of supposedly easy cases where 3-opt fails to produce any strict improvement.
Example 2.
For any TSP instance of size where is a multiple of 4, we can always construct a population of 3 tours having suboptimal diversity, such that no single 3-opt operation on any tour can improve diversity. Let the first tour be , we derive the second tour sharing only 2 edges with and containing edges that form a “criss-crossing” pattern on ,
The third tour shares no edge with or and contains many edges that “skip one node” on .
In order to improve diversity, the operation must exchange, on either tour, at least one edge with count 2. However, any 3-opt operation under this restriction ends up trading in at least one other edge used by the other tours, nullifying any improvement it makes. This population presents a local optimum for algorithms that use 3-opt as the only solution-generating mechanism. Figure 1 illustrates two examples of the construction with and .
We speculate that in many cases, the insertion 3-opt suffers from its asymmetrical nature. Both 2-opt and 3-opt operations are each defined by two decisions. For 2-opt, the two decisions are which two edges to exchange, and only after both are made are the two new edges fixed. For 3-opt, one decision determines which pair of adjacent edges to exchange, and the other defines the third edge. Unlike 2-opt, after only one decision, one of the three new edges is already fixed. Such limited flexibility makes it difficult to guarantee diversity improvements via 3-opt without additional assumptions about the population. In contrast, 4-opt is not subject to this drawback, as the two decisions associated with it are symmetric. For this reason, we can derive a result for 4-opt similar to Lemma 1.
Lemma 2.
Proof.
There must be tours in containing edge such that , let be one such tour. W.l.o.g., let be represented by a permutation of nodes where . The operation trades edges , , , in for , , , . If for every such new edge we have , then satisfies (1) by reasoning similar to that in the proof of Lemma 1. We show that there is always such a position . Since can only go from to , there are choices of . We use the fact that for any , and that since uses them, thus
(3)  and  
According to the pigeonhole principle, (3) implies there are at least elements from to such that , where
Likewise, there are at least elements from to such that . This implies that there are at least elements from to where and , which we will call condition 1. We denote this number by
Using , we similarly derive that there are at least elements from to where . However, we only have , meaning there are at least elements from to such that where
From this, we have that there are at least elements from to where and , which we will call condition 2. We denote this number by
Finally, we can infer that there are at least choices of such that both condition 1 and condition 2 are met. We have
proving the first part of the lemma. In each iteration, Algorithm 1 selects a tour like with probability at least . There are at least different 4-opt operations on such a tour to produce . Since there are 4-opt operations in total, the probability that Algorithm 1 obtains from is at least
∎
As in Lemma 1, only one out of many favorable scenarios is considered in Lemma 2, so the lower bound is not tight. The range of the population size is smaller to account for the fact that the condition for such a scenario is stronger than the one in Lemma 1. With these results, we derive runtime bounds for 2-opt and 4-opt, relying on the longest possible path from zero diversity to the optimum.
Theorem 1.
Given any TSP instance with nodes and , Algorithm 1 with obtains a population with maximum diversity within expected time if

(1) it uses 2-opt mutation and , or

(2) it uses 4-opt mutation and .
Proof.
In the worst case, the algorithm begins with and . At any time, we have . Moreover, in the worst case, each improvement either reduces the number of maximum-count edges by one, or reduces the maximum count by one and resets the number of edges attaining it to its largest value. With , the maximum diversity is achieved iff , as proven in (Do et al., 2020). According to Lemma 1, the expected runtime Algorithm 1 requires to reach maximum diversity when using 2-opt mutation is at most
On the other hand, Lemma 2 implies that when , Algorithm 1 with 4-opt mutation needs at most the following expected runtime
∎
As expected, the simple algorithm requires only quadratic expected runtime in the population size to achieve optimal diversity from any starting population of sufficiently small size. The quadratic scaling with comes from two factors. One is that Algorithm 1 needs to select the “correct” tour to mutate out of tours. The other is that up to tours need to be modified to achieve the optimum, and only one is modified in each step. The cubic scaling with comes from the quadratic number of possible mutation operations, and the linear number of edges to modify in each tour. Additionally, most of the runtime is spent on the “last stretch” of reducing the maximum edge count from 2 to 1, as the rest only takes an expected number of steps.
4.2. QAP
Let and . We denote the 2-opt operation by , where and are the two positions in the permutation to be exchanged. For each , let and . Let be a shift operation such that for all permutations ,
Also, for convenience, we use the notation . We first show the achievable maximum diversity for any positive , which will be the foundation for our runtime analysis.
Theorem 2.
Given , there exists a population of size consisting of permutations of such that
(4) 
Proof.
We prove this by constructing such a population . Let be some arbitrary permutation and , where is applied times. Note that . It is the case that no two solutions in share assignments, so for all , we have , and . Let where and , and where ; we include in copies of each solution in and copies of each solution in . Then satisfies (4) since
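The pairwise disjointness underlying this construction can be checked with a small sketch (assuming the shift maps each image to its successor modulo n; the paper's exact definition may differ): repeatedly shifting any permutation yields n mappings that pairwise share no assignment.

```python
from itertools import combinations

def shift(perm):
    """Shift operation: increment every image by one modulo n."""
    n = len(perm)
    return tuple((v + 1) % n for v in perm)

def assignment_set(perm):
    """Assignment set {(i, perm[i])} of a permutation."""
    return {(i, v) for i, v in enumerate(perm)}

def shifted_population(perm):
    """perm, shift(perm), shift^2(perm), ...: n pairwise disjoint mappings."""
    pop = [perm]
    for _ in range(len(perm) - 1):
        pop.append(shift(pop[-1]))
    return pop
```

Duplicating these n pairwise disjoint solutions as evenly as possible is how the proof above assembles populations of arbitrary size.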
∎
With the maximum diversity well-defined, we can determine whether it is reached by a population using only information from . Therefore, we can show that a strict diversity improvement with a single 2-opt is guaranteed on any suboptimal population, and bound the probability that Algorithm 1 makes such an improvement, similar to Lemma 1. For brevity's sake, we reuse the expression (1) with its notations defined in the QAP context.
Lemma 3.
Proof.
There must be permutations in such that , let be one such permutation, and such that . The operation trades assignments and in for and . Regardless of , if and , then satisfies (1) since and cannot reach . We show that there is always such a position . There are choices of since . It is the case that , thus
(5)  and  
According to the pigeonhole principle, (5) implies there are at least elements such that , where
Likewise, there are at least elements such that . This implies that there are at least elements where and . We have
proving the first part of the lemma. In each iteration, Algorithm 1 selects a permutation like with probability at least . There are at least different 2-opt operations on such a permutation to produce . Since there are 2-opt operations in total, the probability that Algorithm 1 obtains from is at least
∎
Compared to Lemma 1, the range of in Lemma 3 is about twice as large, which coincides with the fact that the maximum number of pairwise disjoint solutions (sharing no assignment/edge) for any given instance size is also twice as large in the QAP as it is in the TSP. The result lends itself to the following runtime bound for Algorithm 1, similar to Theorem 1.
Theorem 3.
Given any QAP instance with and , Algorithm 1 with 2-opt mutation and obtains a population with maximum diversity within expected time .
Proof.
In the worst case, the algorithm begins with and . At any time, we have . Moreover, in the worst case, each improvement either reduces the number of maximum-count assignments by one, or reduces the maximum count by one and resets the number of assignments attaining it to its largest value. With , the maximum diversity is achieved iff according to Theorem 2. According to Lemma 3, the expected runtime Algorithm 1 requires to reach maximum diversity is at most
∎
The results in Theorems 1 and 3 are identical due to the similarities between the structures of TSP tours and QAP mappings, and the same intuition applies. Of note is that, according to the proofs, the probability of making improvements drops as the population gets closer to maximum diversity. This is a common phenomenon for randomized heuristics in general, and we expect to see it replicated in our experiments.
5. Experimental investigations
We perform two sets of experiments to establish baseline results for evolving diverse QAP mappings. These involve running Algorithm 1 separately with the two described measures: and . We denote these two variants by and . The mutation operator used is 2-opt. First, we consider the unconstrained case where no quality constraint is applied. Then, we impose constraints with varying quality thresholds on the solutions.
For our experiments, we use three QAPLIB instances: Nug30 (Nugent et al., 1968), Lipa90b (Li and Pardalos, 1992) and Esc128 (Eschermann and Wunderlich, 1990). The optimal solutions for these instances are known. We vary the population size among , , , . We run each variant of the algorithm 30 times on each instance, and each run is allotted a maximum of iterations. It is important to note that any reported diversity score is normalized by the upper bound appropriate to the instance. For , the bound is derived from Theorem 2, while it is for , as mentioned. We specify the differences in settings between the unconstrained case and the constrained case in the following sections.
5.1. Unconstrained diversity optimization
In the unconstrained case, we are interested in how optimizing for one measure affects the other, and how many iterations are needed to reach maximum diversity from zero diversity. To this end, we set the initial population to contain only duplicates of some random solution. Furthermore, we apply a stopping criterion that triggers when the measure being optimized reaches its upper bound. However, for , the bound is unreachable, so we expect that the algorithm does not terminate prematurely while minimizing .
Figure 2 shows the mean diversity scores and their standard deviations throughout the runs, and the average numbers of iterations until termination. Overall, when , Algorithm 1 maximizes both and well within the runtime limit. The ratios between the needed runtimes and the corresponding total runtimes seem to correlate with the ratio . Additionally, the algorithm seems to require similar runtimes to optimize for the two measures, as no consistent differences are visible.
The figure also shows a notable difference in the evolutionary trajectories resulting from using and for survival selection. When is used, Algorithm 1 improves about as efficiently as when is used. On the other hand, when is used, it increases poorly during the early stages, and even noticeably decreases it in short periods. In fact, in many cases, only starts to increase quickly when reaches a certain threshold. That said, this particular difference is not observable for . Nevertheless, it indicates that even in easy cases (), highly even distributions of assignments in the population are unlikely to preclude clustering. In contrast, separating each solution from the rest of the population tends to improve overall diversity effectively.
5.2. Constrained diversity optimization
Table 1. Final diversity scores and unique-assignment percentages (mean and standard deviation over 30 runs) for each instance, population size and quality threshold. The left half reports runs optimizing the count-based measure; the right half reports runs optimizing the distance-based measure. Each half lists the two diversity scores and the percentage of unique assignments.
Nug30  3  0.05  90.74%  4.23%  87.93%  4.85%  83.70%  6.00%  91.15%  3.47%  88.96%  4.08%  84.19%  5.36% 
0.2  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
0.5  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
10  0.05  84.10%  2.05%  48.06%  10.09%  23.00%  4.13%  81.71%  1.87%  74.24%  2.47%  34.39%  2.44%  
0.2  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
0.5  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
20  0.05  84.00%  0.95%  32.07%  5.53%  8.44%  1.72%  79.45%  1.22%  68.61%  1.59%  16.72%  1.28%  
0.2  99.95%  0.03%  99.09%  0.41%  98.98%  0.52%  99.93%  0.03%  98.79%  0.49%  98.58%  0.63%  
0.5  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
50  0.05  86.06%  0.71%  17.44%  2.50%  2.58%  0.49%  79.98%  0.81%  64.08%  1.01%  6.65%  0.52%  
0.2  99.97%  0.01%  90.62%  0.48%  19.79%  0.26%  99.72%  0.02%  95.74%  0.24%  22.81%  0.41%  
0.5  100.00%  0.00%  91.28%  0.39%  20.00%  0.00%  100.00%  0.00%  96.67%  0.00%  20.04%  0.04%  
1  100.00%  0.00%  91.41%  0.48%  20.00%  0.00%  100.00%  0.00%  96.67%  0.00%  20.04%  0.06%  
Lipa90b  3  0.05  17.72%  0.73%  17.01%  0.78%  10.14%  0.68%  17.75%  0.77%  17.09%  1.12%  10.36%  0.67% 
0.2  83.88%  1.32%  82.32%  1.46%  74.46%  1.78%  84.07%  1.35%  82.48%  1.54%  74.52%  1.45%  
0.5  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
10  0.05  17.44%  0.40%  13.85%  1.30%  9.11%  0.38%  17.48%  0.43%  15.48%  0.59%  9.30%  0.22%  
0.2  78.23%  1.33%  44.26%  11.52%  38.70%  8.79%  80.00%  0.62%  76.08%  0.87%  55.88%  0.55%  
0.5  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
20  0.05  17.54%  0.27%  12.00%  0.83%  8.87%  0.25%  17.52%  0.25%  14.92%  0.36%  9.26%  0.13%  
0.2  78.95%  0.78%  30.84%  7.01%  26.07%  5.65%  79.62%  0.40%  74.70%  0.40%  54.94%  0.37%  
0.5  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
50  0.05  17.74%  0.21%  9.38%  0.52%  7.98%  0.46%  17.60%  0.17%  14.69%  0.24%  9.24%  0.10%  
0.2  80.59%  0.45%  15.16%  3.04%  10.44%  2.21%  78.71%  0.27%  72.84%  0.27%  52.86%  0.29%  
0.5  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
Esc128  3  0.05  96.64%  0.98%  95.91%  1.07%  95.47%  1.11%  96.46%  0.94%  95.99%  1.04%  95.23%  1.07% 
0.2  99.19%  0.43%  98.87%  0.56%  98.78%  0.64%  99.41%  0.26%  99.11%  0.30%  99.00%  0.34%  
0.5  99.97%  0.09%  99.93%  0.18%  99.93%  0.18%  99.96%  0.10%  99.91%  0.20%  99.91%  0.20%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
10  0.05  95.69%  1.17%  85.97%  4.53%  83.07%  4.06%  95.51%  0.49%  94.14%  0.48%  88.63%  0.69%  
0.2  98.80%  0.70%  94.35%  4.32%  91.72%  4.18%  98.93%  0.22%  98.07%  0.36%  95.09%  0.55%  
0.5  99.92%  0.06%  99.48%  0.27%  99.24%  0.52%  99.95%  0.04%  99.71%  0.18%  99.64%  0.27%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
20  0.05  96.39%  0.89%  81.10%  6.59%  76.90%  7.14%  94.83%  0.34%  93.08%  0.29%  84.70%  0.41%  
0.2  99.06%  0.17%  93.57%  2.22%  89.11%  1.74%  98.68%  0.16%  97.44%  0.19%  91.23%  0.45%  
0.5  99.90%  0.05%  99.06%  0.46%  98.17%  1.02%  99.86%  0.05%  99.29%  0.08%  97.84%  0.82%  
1  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  100.00%  0.00%  
50  0.05  96.81%  1.04%  65.25%  17.04%  57.76%  17.24%  94.52%  0.23%  92.00%  0.21%  82.12%  0.18%  
0.2  98.99%  0.13%  88.62%  3.71%  83.69%  4.21%  98.38%  0.10%  96.42%  0.16%  87.93%  0.34%  
0.5  99.92%  0.02%  98.01%  0.59%  96.30%  1.05%  99.76%  0.05%  98.79%  0.09%  95.31%  0.37%  
1  100.00%  0.00%  100.00%  0.01%  100.00%  0.01%  100.00%  0.00%  99.99%  0.02%  99.99%  0.02% 
In the constrained case, we look at the final diversity scores across varying , and the extent to which optimizing for mitigates clustering, especially at small . Therefore, we consider values 0.05, 0.2, 0.5, 1, and run the algorithm for steps with no additional stopping criterion. Furthermore, we initialize the population with duplicates of the optimal solution to allow room for meaningful behaviors. Aside from diversity scores, we also record the percentage of assignments belonging to exactly one solution (unique assignments) out of all assignments in each final population.
Table 1 shows a comparison in terms of the and scores as well as the unique-assignment percentages. Overall, maximum diversity is achieved reliably in most cases when . For Lipa90b, there are tremendous gaps in the final diversity scores when changes from to . The differences are much smaller for the other QAPLIB instances. Also, at , maximum diversity is not reached as frequently for Esc128 as for the other instances. These observations suggest significantly different cost distributions in the solution spaces associated with these QAPLIB instances.
Comparing the diversity scores of the two approaches, we can see trends consistent with those in the unconstrained case. Each approach predictably excels at maximizing its own measure over the other. That said, the approach does not fall far behind in scores, even in cases where statistical significance is observed (at most difference). Meanwhile, the approach's scores are much lower than the other's, especially in hard cases (small and large ). The same differences can be seen in the percentages of unique assignments, which seem to correlate strongly with . This indicates that, using the measure , Algorithm 1 significantly reduces clustering, and equalizes the assignments' representations almost as effectively as when using the measure .
6. Conclusion
We studied evolutionary diversity optimization for the Traveling Salesperson Problem and the Quadratic Assignment Problem. In this type of optimization problem, the goal is to maximize diversity as quantified by some metric, and the constraint involves the solutions' qualities. We described the similarities and differences between the structure of a TSP tour and that of a QAP mapping, and customized two diversity measures to each problem. We considered a baseline evolutionary algorithm that incrementally modifies the population using traditional mutation operators on one solution at a time. We showed that for any sufficiently small population size, the algorithm guarantees maximum diversity in the TSP within a polynomial expected number of iterations using 2-opt and 4-opt, while 3-opt suffers from local optima even with very small populations. We derived the same result for the QAP with 2-opt, where the upper bound on the population size is more generous. Additional experiments on QAPLIB instances shed light on the differences in evolutionary trajectories when optimizing for the two diversity measures. Our results show heterogeneity in the correlation between the quality constraint threshold and the achieved diversity across instances, and that the average practical performance is much more optimistic than the worst case suggests.
Acknowledgements. This work was supported by the Phoenix HPC service at the University of Adelaide, and by the Australian Research Council through grant DP190103894.

References
Ahuja et al. (2000). A greedy genetic algorithm for the quadratic assignment problem. Computers & Operations Research 27(10), pp. 917–934.
Alexander et al. (2017). Evolution of artistic image variants through feature based diversity optimisation. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 171–178.
Alvarez et al. (2019). Empowering quality diversity in dungeon design with interactive constrained MAP-Elites. In 2019 IEEE Conference on Games (CoG), pp. 1–8.
Baste et al. (2020). Diversity of solutions: an exploration through the lens of fixed-parameter tractability theory. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence.
Bossek et al. (2019). Evolving diverse TSP instances by means of novel and creative mutation operators. In Proceedings of the 15th ACM/SIGEVO Conference on Foundations of Genetic Algorithms (FOGA '19), pp. 58–71.
Cully and Demiris (2018). Quality and diversity optimization: a unifying modular framework. IEEE Transactions on Evolutionary Computation.