Optimal Experimental Design of Field Trials using Differential Evolution

02/01/2017 ∙ by Vitaliy Feoktistov, et al. ∙ Limagrain

When setting up field experiments to test and compare a range of genotypes (e.g. maize hybrids), it is important to account for any possible field effect that may otherwise bias the performance estimates of the genotypes. To do so, we propose a model-based method aimed at optimizing the allocation of the tested genotypes and checks between fields, and their placement within fields, according to their kinship. This task can be formulated as a combinatorial permutation-based problem. We use the Differential Evolution concept to solve this problem. We then present results of optimal strategies for between-field and within-field placements of genotypes and compare them to existing optimization strategies, both in terms of convergence time and result quality. The new algorithm gives promising results in terms of convergence and search space exploration.


I Introduction

In plant breeding trials, field experiments are commonly used to test genotypes (e.g. maize hybrids or wheat varieties) and estimate their potential for a range of variables, such as yield, at all stages of the breeding process. Accurately estimating those variables is crucial to ensure that only the best genotypes are kept for further selection.

In a field experiment, each genotype is allocated to one or several small plots on a field. Measured performance on that plot is due to an effect of the genotype, measurement error and an effect of the field (e.g. nutrient availability). Fisher highlighted the importance of experimental design in the early 1930s [1], to reduce bias and better estimate effects of interest.

In its most basic form, experimental design relies on randomization and replication, but better design structures, for instance through the use of blocking, can further improve the reliability of the estimates. However, blocking structures rely on the strong assumption of local homogeneity and also require the design to be replicated. As a result, they can sometimes be difficult to use, especially in early generation variety trials, where the aim is to test a very large number of genotypes with constraints on space and seed availability. In recent years, model-based designs have been used more widely [2, 3, 4]. At the local scale, one example of such a design is the repchecks design, which relies on the spatially correlated error structure first assumed by Gilmour et al. [2]. This design consists of a number of unreplicated experimental genotypes and some (usually three or four) replicated and well spread-out check genotypes to capture and model any potential field effect.

In plant breeding trials, it is important to test the genotypes over a range of locations to estimate their performance over a network (e.g. market). Similarly to the constraints present locally, it is often practically impossible to have all genotypes replicated over all locations. As a result, many genotypes are only present on a subset of the locations in the network; this can be achieved through multi-location designs such as the sparse repchecks or sparse p-reps designs [5, 6].

Section II describes the optimal design of field experiments problem, its mathematical formulation, and its modelling as a combinatorial permutation-based optimization task. The problem is split into two phases so that it can be solved efficiently. Section III describes the general framework of the Differential Evolution (DE) algorithm and then introduces the newly created algorithm for exploring the permutation space. This is followed by the proposition of several search strategies and the method used to handle constraints. Section IV demonstrates the results obtained for both optimizations; we show the efficiency of the approach, compare it with existing software packages, and present real-life examples. Section V concludes the paper, outlining promising trends and directions.

II Problem Formulation

II-A State of the Art and Motivation

Two PhD theses have tackled this problem from a practical viewpoint [7, 8], resulting in two software packages: DiGGer [9], a design search tool in R [10], and OD [11], also an R package. Other recent papers contribute to improving the mathematical model of efficient designs [4, 5, 6] and present real case studies.

The core of both software packages is coded in Fortran and highly optimized. The problem can be considered an HPC one, and despite all code tuning, the computations remain very expensive. Many metaheuristics have been tested on this problem, among them Tabu Search, Simulated Annealing, and Genetic Algorithms. Nevertheless, for many real cases these algorithms are not suitable because of their computational time, in other words their poor convergence. This motivated us to develop a new approach to solve this problem more efficiently.

II-B Modelling Optimal Design as a Permutation Problem

Let us consider the following linear mixed model [12]

    y = Xb + Zu + e,   (1)

where y is the vector of phenotypes, X is the design matrix for the fixed effects b, and Z is the design matrix for the random genetic effects u, which are distributed multivariate normal with covariance G; the error term e is multivariate normal with covariance R. Here G = σ²_a·K, where σ²_a is the additive genetic variance and K is the kinship matrix, which can be based on pedigree or on markers. Kinship information can be ignored in the optimization process by setting K to an identity matrix.

From this model, the mixed model equations can be written [13]

    [ X'R⁻¹X    X'R⁻¹Z       ] [ b̂ ]   [ X'R⁻¹y ]
    [ Z'R⁻¹X    Z'R⁻¹Z + G⁻¹ ] [ û ] = [ Z'R⁻¹y ]   (2)

In a general case, the prediction error variance PEV is

    PEV = var(û − u) = C^{uu},   (3)

where C^{uu} is the block, corresponding to the random effects u, of the inverse of the coefficient matrix

    C = [ X'R⁻¹X    X'R⁻¹Z       ]
        [ Z'R⁻¹X    Z'R⁻¹Z + G⁻¹ ]   (4)

For more details, see [14] and [15].
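As a concrete illustration of Eqs. 2–4, the PEV can be computed numerically by forming Henderson's coefficient matrix and extracting the block corresponding to the random effects. The sketch below uses NumPy on a toy design; all dimensions and variance settings are illustrative, not those of the paper.

```python
import numpy as np

def pev(X, Z, G, R):
    """Prediction error variance of the random effects u for a given
    design (X, Z): build the mixed-model coefficient matrix (Eq. 4),
    invert it, and return the block corresponding to u (Eq. 3)."""
    Ri = np.linalg.inv(R)
    Gi = np.linalg.inv(G)
    C = np.block([
        [X.T @ Ri @ X, X.T @ Ri @ Z],
        [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi],
    ])
    Cinv = np.linalg.pinv(C)  # pseudo-inverse: X may be rank-deficient
    p = X.shape[1]            # number of fixed effects
    return Cinv[p:, p:]       # PEV(u) = C^{uu} block

# Toy example: 6 plots, intercept-only fixed effect, 3 genotypes
X = np.ones((6, 1))
Z = np.kron(np.eye(3), np.ones((2, 1)))  # each genotype on 2 plots
G = np.eye(3)                            # kinship ignored: K = I
R = np.eye(6)
P = pev(X, Z, G, R)
print(P.shape)  # (3, 3)
```

Minimizing a summary of this matrix (e.g. its trace) over permutations of the rows of the design is exactly the optimization task described below.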

The optimization task consists in finding an optimal design matrix X which minimizes the PEV. Suppose there exists some starting design matrix X₀ with the given properties; the optimization task can then be rewritten as finding the optimal permutation π, such that X = π(X₀) is a permutation of the rows of X₀, with respect to constraints representing the desirable design properties.

Taking into account relatedness between genotypes ensures that an optimal design is generated which avoids planting closely related genotypes next to each other and splits families evenly between locations. This is an NP-hard problem and is difficult to solve for cases of realistic size. Thus, the task is split into two phases. The first (phase I), called the Between Location phase, optimizes the dispatching of the genotypes among several locations (sparse design). The second (phase II) is the Within Location phase: at this stage, the optimal design inside a location is resolved, taking into consideration the auto-regressive error and the genotype subsets from the previous phase. As shown in Eqs. 3 and 4, for both phases the optimization can also take into account the estimation of additional fixed effects, such as block effects within location.

The incorporation of a spatial auto-regressive process with a so-called measurement error variance (σ²_m) implies a given structure in the variance-covariance matrix R of the residuals of the mixed model. Assuming a rectangular field of r rows and c columns, allowing the last row to be incomplete (with c' ≤ c columns), the matrix R is of size n × n (n being the number of plots) and can be defined as

    R = σ²_s · Σ_c(ρ_c) ⊗ Σ_r(ρ_r) + σ²_m · I_n,   (5)

where ρ_r and ρ_c are the autocorrelation coefficients for the rows and columns respectively, σ²_s is the spatial variance, and the AR(1) correlation matrices have elements

    [Σ(ρ)]_{ij} = ρ^{|i−j|}.   (6)

When the last row is incomplete, the rows and columns of R corresponding to the missing plots are dropped.
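For a complete rectangular field, the separable AR(1) × AR(1) structure of Eq. 5 can be sketched as below; the variance values σ²_s and σ²_m are illustrative, not those of the paper, and an incomplete last row would be handled by deleting the rows and columns of R for the missing plots.

```python
import numpy as np

def ar1(n, rho):
    """AR(1) correlation matrix: element (i, j) = rho ** |i - j| (Eq. 6)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def residual_cov(n_rows, n_cols, rho_r, rho_c, sigma2_s=1.0, sigma2_m=0.1):
    """Separable AR(1) x AR(1) spatial covariance plus a measurement-error
    term (Eq. 5), for a complete rectangular field in column-major plot
    order. sigma2_s and sigma2_m are illustrative values."""
    Sigma = np.kron(ar1(n_cols, rho_c), ar1(n_rows, rho_r))
    return sigma2_s * Sigma + sigma2_m * np.eye(n_rows * n_cols)

R = residual_cov(4, 3, rho_r=0.5, rho_c=0.3)
print(R.shape)  # (12, 12)
```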

III Differential Evolution

III-A General Description

Since its first introduction in 1995 [16, 17], Differential Evolution has been one of the most successful metaheuristic algorithms. It has won many CEC global optimization competitions. The algorithm has been generalized to different branches of optimization, such as constrained, multi-modal, multi-objective and large-scale optimization, and has been made suitable for noisy and mixed-variable objective functions. Thus it successfully covers many applications in a number of areas of science and industry. A detailed history of the algorithm can be found in Chapter 1 of [18].

Differential Evolution is a population-based approach. Its concept shares the common principles of evolutionary algorithms. Starting from an initial population P of NP solutions, DE progresses intensively towards the global optimum using self-learning principles. The initialization can be done in different ways; most often, uniformly random solutions are sampled respecting boundary constraints. Then, each solution is evaluated through the objective function.

To be formal, let the population P consist of NP solutions, or individuals in evolutionary algorithms terms. Each individual X_i contains D variables x_{i,j}, so-called genes. Thus P = {X_1, …, X_NP} and X_i = (x_{i,1}, …, x_{i,D}).

At each iteration g, also called a generation, all individuals are affected by the reproduction cycle. To this end, for each current individual X_i, a set of other individuals is randomly sampled from the population P. All the DE search strategies are designed on the basis of this set.

The strategy [19] defines how the information about the sampled individuals is used to create the base and difference vectors of the main DE operator of the reproduction cycle, often called differential mutation by analogy with genetic algorithms, or else differentiation by functional analogy with gradient optimization methods. The differentiation operator can thus be viewed in its standardized form as the current (base) point plus a stochastic gradient step:

    V = β + F · (β₁ − β₂),   (7)

where β is the base vector, β₁ − β₂ the difference vector, and F the constant of differentiation, one of the control parameters. The recommended values typically lie in the range (0, 2].

The next reproduction operator is crossover, a typical representative of genetic algorithms’ crossovers. Its main function is to be conservative when passing to the new solution, preserving some part of the genes from the old one. In the most used case, the trial individual U inherits the genes of the target individual X_i with some probability:

    u_j = v_j if rand_j < Cr, otherwise u_j = x_{i,j},   (8)

for j = 1, …, D, uniform random numbers rand_j ∈ [0, 1), and crossover constant Cr ∈ [0, 1], the second control parameter.

So, the trial individual is formed, and the next step is to evaluate it through an objective function, also called fitness. Sometimes, constraints are handled at this stage too. Reparation methods are used before the fitness evaluation in order to respect the search domain and/or other constraints (hard approach). Other constraints can be evaluated during the objective function computation (soft approach). It should be noted that in many industrial applications the evaluation of the objective function and constraints demands significant computational effort in comparison to the other parts of the DE algorithm; thus the algorithm’s convergence speed during the first iterations is very important.

The following step is selection. Often, elitist selection is preferred:

    X_i^{g+1} = U if f(U) ≤ f(X_i^g), otherwise X_i^{g+1} = X_i^g.   (9)

At this moment, decisions on constraint violations and multi-objective values also influence the final choice of the candidate.

The Differential Evolution pseudo-code is summarized in Alg. 1.

Require :
control parameters NP, F, Cr
objective function f
constraints
procedure DifferentialEvolution()
     Initialize population P
     Evaluate f over P
     while not stopping condition do
         for all X_i ∈ P do
              Sample β, β₁, β₂ from P
              Reproduce V = β + F·(β₁ − β₂)
              U ← crossover of V and X_i
              Evaluate f(U)   ▷ often a costly operation
              Select U vs X_i
         end for
         g ← g + 1   ▷ go to the next generation
     end while
end procedure
Algorithm 1 Differential Evolution - a general pattern
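For the continuous case, Algorithm 1 can be sketched as a minimal DE/rand/1/bin implementation; the control-parameter values below are conventional defaults, not those used in the paper.

```python
import numpy as np

def differential_evolution(f, bounds, np_=20, F=0.8, Cr=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin following Alg. 1: differentiation (Eq. 7),
    binomial crossover (Eq. 8) and elitist selection (Eq. 9)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    D = len(lo)
    pop = rng.uniform(lo, hi, size=(np_, D))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = rng.choice([j for j in range(np_) if j != i],
                                    3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])   # Eq. 7
            v = np.clip(v, lo, hi)                  # repair (hard approach)
            mask = rng.random(D) < Cr
            mask[rng.integers(D)] = True            # at least one gene from v
            u = np.where(mask, v, pop[i])           # Eq. 8
            fu = f(u)
            if fu <= fit[i]:                        # Eq. 9
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]

# Usage: minimize the 3-D sphere function
x, fx = differential_evolution(lambda x: float(np.sum(x**2)), [(-5, 5)] * 3)
print(fx)
```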

III-B Adaptation to the Permutation Space

Differential Evolution is very successful in solving continuous optimization problems. Nevertheless, there have been many attempts to extend the DE concept to combinatorial optimization. A good summary of DE adaptations can be found in [20]. In this paper, we concentrate on combinatorial problems that can be formulated as permutation-based. We propose a new technique to explore the combinatorial space and apply it to find the optimal design of field experiments, a real application from agriculture and plant breeding.

Further, we discuss how to transform the differentiation operator to handle the permutation space. There are two points we need to define:

  1. what is the distance in the permutation space? and

  2. how do we apply this distance to compute the DE step from the base point (see Eq. 7)?

As was shown earlier [18], the crossover operator is optional and can be omitted in many cases without degradation of the convergence rate. So we decided not to use it for permutation-based optimization.

III-B1 Distance

Many types of distances for the permutation space have been invented, explored and used. A good overview of combinatorial distances, indicating their complexity and providing performance tests, is presented in [21].

The Hamming distance is an obvious candidate because of its simplicity to compute and its suitability for the DE context. For two permutations π₁ and π₂ of length n, the Hamming distance between them is

    δ(π₁, π₂) = #{ i ∈ {1, …, n} : π₁(i) ≠ π₂(i) }.   (10)
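In code, the Hamming distance of Eq. 10 is a one-liner:

```python
def hamming(p1, p2):
    """Hamming distance between two permutations (Eq. 10):
    the number of positions at which they differ."""
    return sum(a != b for a, b in zip(p1, p2))

print(hamming([0, 1, 2, 3], [0, 2, 1, 3]))  # 2
```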

III-B2 DE step

Several combinatorial operators are possible for generating a new permutation. The most common are swap, interchange and shift. Swap is considered a local operator, as it exchanges the neighbours at some permutation position i and i+1. Interchange is a global operator: it swaps two distant positions i and j. Shift interchanges two adjacent substrings of given lengths.

Many combinatorial heuristic algorithms alternate proportions of local and global actions to balance the search process, in order to avoid stagnation or unnecessary exploration. We do not use different operators to trade off local and global techniques, but choose only one global operator, namely interchange, and control it with a locality factor λ, which can be considered an equivalent of the differentiation constant F, but for the permutation space.

Thereby the size of the DE step is defined as k = ⌈λ·δ⌉, where δ is the Hamming distance provided by the chosen strategy. Thus the trial individual is computed as

    π_trial = Interchange(π_base, k),   (11)

where Interchange(·, k) applies k successive interchanges to its argument vector.

It should be noted that this is a general concept, and any distance metric and any permutation operator may be used instead of these ones. Their right combination depends mainly on the problem to solve. Also, the locality factor λ may be self-learned or adjusted in real time.
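A minimal sketch of the permutation-space step (Eq. 11), assuming the step count k = ⌈λ·δ⌉ (our reading of the text) and that the positions of the k interchanges are drawn at random:

```python
import math
import random

def interchange_step(base, delta, lam, rng=random):
    """Apply k = ceil(lam * delta) random interchanges (global swaps) to a
    copy of `base` (Eq. 11). `lam` is the locality factor; the step-size
    rule is an illustrative reading, not a quoted formula."""
    trial = list(base)
    k = math.ceil(lam * delta)
    for _ in range(k):
        i, j = rng.sample(range(len(trial)), 2)  # two distinct positions
        trial[i], trial[j] = trial[j], trial[i]
    return trial

random.seed(1)
t = interchange_step([0, 1, 2, 3, 4, 5], delta=3, lam=0.5)
print(sorted(t) == [0, 1, 2, 3, 4, 5])  # True: interchanges preserve the permutation
```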

III-C Some Strategies and their Properties

The strategies are key to the search space exploration. They propose different techniques for choosing the base individual π_base and constructing the step size from the set of sampled individuals. Here we give three examples having different search properties.

III-C1 rand3

For each individual π_i, a set of three other random individuals {β₁, β₂, β₃} is sampled from the population P. The base vector is β₁, and the step is defined by the metric on β₂ and β₃; that is, the distance is calculated as δ = δ(β₂, β₃) (see Eq. 10). So the trial is

    π_trial = Interchange(β₁, ⌈λ·δ(β₂, β₃)⌉).   (12)

This strategy is good for most cases, especially for large-scale problems where an intensive exploration is needed. For large values of the locality factor, the strategy turns into a random search.

III-C2 rand2best

Let β_best be the current best individual, that is, f(β_best) ≤ f(β_i) for all i. Two additional random individuals β₁ and β₂ are extracted to compute the step size δ = δ(β₁, β₂). Thus

    π_trial = Interchange(β_best, ⌈λ·δ(β₁, β₂)⌉).   (13)

This strategy is inspired by social behaviour, where an alternated leader organizes and directs the others, and the others have a tendency to be attracted by the leader. It also has some analogies with Particle Swarm Optimization [22]. This strategy is efficient for mid-sized problems. It provides a good balance between exploration and exploitation.

III-C3 dir2best

Here, the best individual β_best and two others randomly extracted, β₁ and β₂, are selected, but this time the individuals are ordered such that f(β_best) ≤ f(β₁) ≤ f(β₂). The step size is δ = δ(β_best, β₂), so

    π_trial = Interchange(β_best, ⌈λ·δ(β_best, β₂)⌉).   (14)

This strategy uses stochastic gradient information, that is, the distance from the worst of the sampled individuals to the best one. It has excellent convergence properties on small- and mid-size problems. In large-scale tasks, because of an intensive exploitation, it has a tendency to premature convergence (see [18]).
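The strategies differ only in how the base individual and the distance δ are chosen. A sketch of the first two, under the same assumptions as above (random interchange positions, k = ⌈λ·δ⌉), could look like:

```python
import math
import random

def hamming(p, q):
    """Eq. 10: number of differing positions."""
    return sum(a != b for a, b in zip(p, q))

def interchange(base, k, rng):
    """Apply k random interchanges to a copy of `base` (Eq. 11)."""
    trial = list(base)
    for _ in range(k):
        i, j = rng.sample(range(len(trial)), 2)
        trial[i], trial[j] = trial[j], trial[i]
    return trial

def trial_rand3(pop, i, lam, rng):
    """rand3 (Eq. 12): base and distance both come from random individuals."""
    b1, b2, b3 = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
    return interchange(b1, math.ceil(lam * hamming(b2, b3)), rng)

def trial_rand2best(pop, fitness, lam, rng):
    """rand2best (Eq. 13): the current best individual is the base point."""
    best = min(pop, key=fitness)
    b1, b2 = rng.sample(pop, 2)
    return interchange(best, math.ceil(lam * hamming(b1, b2)), rng)

rng = random.Random(0)
pop = [rng.sample(range(6), 6) for _ in range(5)]
t = trial_rand3(pop, 0, lam=0.3, rng=rng)
print(sorted(t))  # [0, 1, 2, 3, 4, 5] — still a valid permutation
```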

III-D Constraints

The initial matrix X₀ (see Eq. 3) possesses some design properties to keep. These represent the set of constraints to respect. Thereby a hard approach to constraint handling is the most suitable; we apply it at the stage of the differentiation operator. As the objective function is expensive to compute, it is easier and more efficient to evaluate only feasible individuals. This way, we significantly reduce the space of exploration without doing unnecessary evaluations.
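One simple way to realize this hard approach at the differentiation stage is to restrict interchanges to positions that may legally be exchanged, so every trial individual is feasible by construction. The `movable` set below is a hypothetical encoding of such a constraint (fixed check plots), not the paper's actual constraint set:

```python
import random

def feasible_interchange(design, movable, k, rng=random):
    """Apply k interchanges only to positions listed in `movable`
    (e.g. plots holding experimental genotypes), preserving fixed design
    properties such as check-plot positions. Illustrative sketch."""
    trial = list(design)
    for _ in range(k):
        i, j = rng.sample(movable, 2)
        trial[i], trial[j] = trial[j], trial[i]
    return trial

random.seed(0)
design = ["chk", "g1", "g2", "chk", "g3", "g4"]
t = feasible_interchange(design, movable=[1, 2, 4, 5], k=2)
print(t[0] == "chk" and t[3] == "chk")  # True: checks stay in place
```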

IV Results

IV-A Phase I – Between Locations

In this section, we present the results of the allocation of genotypes across locations. To illustrate the impact of accounting for the kinship matrix K, we use an extreme case of a population of genotypes spread over five locations. The genotypes are split into check genotypes (present and replicated on each location) and experimental genotypes, which we further impose to each be present once on three out of the five locations. We then define the experimental genotypes as belonging to three independent families of full siblings, by using a block-diagonal kinship matrix calculated from pedigree,

    K = diag(K₁, K₂, K₃),   (15)

where K₁, K₂ and K₃ are square matrices whose off-diagonal elements are 0.5 and diagonal elements are 1.
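The block-diagonal kinship matrix of Eq. 15 is easy to assemble; the family sizes below are illustrative, not the paper's:

```python
import numpy as np

def fullsib_block(n):
    """Additive relationship matrix for one family of n non-inbred full
    siblings: 1 on the diagonal, 0.5 off it."""
    return 0.5 * np.ones((n, n)) + 0.5 * np.eye(n)

def block_diag(blocks):
    """Assemble square blocks into a block-diagonal matrix (Eq. 15)."""
    n = sum(b.shape[0] for b in blocks)
    K = np.zeros((n, n))
    ofs = 0
    for b in blocks:
        m = b.shape[0]
        K[ofs:ofs + m, ofs:ofs + m] = b
        ofs += m
    return K

K = block_diag([fullsib_block(3), fullsib_block(4), fullsib_block(5)])
print(K.shape)  # (12, 12)
```

Off-block entries are zero, encoding that the families are unrelated.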

Following the constraints imposed on them, the check genotypes can clearly be excluded from this first optimization phase. Because of the small size of family 1, a random allocation of genotypes across locations can lead to a very unbalanced spread, as shown in Table I below, where this family is under-represented in location 3, therefore causing a risk of biasing the corresponding genotype estimates (confounding between location and family effects).

Family 1 Family 2 Family 3
Location 1 9 111 120
Location 2 9 116 115
Location 3 1 116 123
Location 4 13 110 117
Location 5 10 108 122
TABLE I: Phase I - Initial family spread across locations

Here we therefore have, using Eq. 3 and Eq. 4,

(16)

where X is the corresponding design matrix.

Fig. 1 shows the convergence of the objective function for this first phase. This was obtained using restarts, each with a set number of steps (function evaluations), and convergence was reached in seconds. The objective function was approximated by singular value decomposition, using the first three eigenvalues; the population size was set at and the search strategy used was . For each simulation, we allocated only six cores (threads) of an Intel Xeon E5-4627 v3 processor under the Windows Server 2012 R2 operating system.

The convergence value of the process shows a good improvement over the objective function value of the randomly generated design. This can be seen in Table II, where the spread of each family across locations is better balanced.

Fig. 1: Phase I - Convergence rate: nfe - number of function evaluations.

However, as mentioned in Section I, experimental designs can have many constraints imposed on them, depending on their complexity. Further work is ongoing to add additional constraints (e.g. repetition of experimental genotypes within location).

Family 1 Family 2 Family 3
Location 1 9 121 110
Location 2 13 105 122
Location 3 7 117 116
Location 4 7 107 126
Location 5 6 111 123
TABLE II: Phase I - Optimized strategy

IV-B Phase II – Within Location

After running the optimization across locations, it is necessary to run this phase independently over each of the locations, after having re-incorporated the check genotypes, to obtain an optimal field layout.

The first part of this section looks at the gain made through this approach compared to a widely used approach, DiGGer. In the second part, we investigate the effect of using the kinship information in the local (i.e. within-location) optimization. To do so, and to compare this algorithm to the existing algorithm (DiGGer), we use an extreme case of a field of plots in twelve rows and columns, where we aim to allocate a repchecks design with experimental genotypes and three check genotypes, where the three checks are repeated nine, eight and nine times respectively. Further, we define the genotypes as belonging to three independent families, each with one of the check genotypes. In both examples presented below, we used restarts, each with a set number of steps (function evaluations); the population size was set at and the search strategy used was .

IV-B1 Ignoring kinship

As well as defining the X and Z matrices in the same way as during the between-locations optimization, it is important to account for the local autocorrelation defined by R. The R matrix used in this case is derived using autocorrelation coefficients ρ_r and ρ_c based on data collected on existing trials. We further assume a trait with a large heritability (analogous to the signal/noise ratio) to ensure that a good spatial spread of the genotypes is critical during the trial set-up. To allow a direct comparison between the method proposed here and DiGGer, we define the kinship matrix K as the identity matrix, therefore ignoring relatedness between genotypes.

Fig. 2: Phase II - Allocation: (A) Checks at start; (B) Checks optimized - DiGGer; (C) Checks optimized - New Algorithm
Fig. 3: Phase II - Convergence rate: nfe - number of function evaluations.
The red lines correspond to the value of the objective function for the design as optimized by DiGGer and the number of function evaluations needed for our algorithm to better it

In Fig. 2, we present an example of a field within a location, where each small rectangle is a plot and corresponds to a genotype. The first panel shows the starting design, where we purposely grouped all three check genotypes at one end of the field, to see the impact of the optimization. The middle panel shows the spatial spread of the checks after the optimization procedure derived using DiGGer. Unsurprisingly, because of the equal weight given to both dimensions in the autoregressive process in DiGGer, the optimized strategy shows clear diagonals of check genotypes. Further, a common feature of DiGGer can also be observed: those diagonals are often made of adjacent plots (two diagonals of length four in the example presented). The third panel presents the optimization obtained through our algorithm and shows a visually better spread of the check genotypes; although the pattern of diagonals is still present, it is better spread out throughout the field, with no more than two adjacent plots. Fig. 3 presents the convergence speed of the algorithm, together with the value of the objective function reached by the DiGGer algorithm. In this example, convergence was reached in seconds using our algorithm and in seconds using DiGGer. Further, note that, unlike our algorithm, DiGGer is mono-threaded and cannot benefit from multi-core servers.

The visually better design is confirmed by the values of the objective function produced by DiGGer and by the new algorithm. However, it is important to note that the algorithm used in DiGGer is based on the Reactive Tabu Search [7]. Note that, in this case, using fewer steps, whilst keeping everything else identical, led to a shorter convergence time without dramatically affecting the value of the objective function (i.e. still improving on the convergence value produced by DiGGer).

IV-B2 Accounting for kinship

In this second case, we include in the optimization phase a non-identity kinship matrix K. The other matrices are defined as in Section IV-A. Further, the 122 genotypes are split into three families of full siblings:

    K = diag(K₁, K₂, K₃),   (17)

where K₁, K₂ and K₃ are square matrices whose off-diagonal elements are 0.5 and diagonal elements are 1.

Fig. 4: Phase II - Allocation using kinship information: (A) - Families at start; (B) - Families optimized using DiGGer; (C) - Families optimized using the New Algorithm.
Fig. 5: Phase II (incl. kinship information) - Convergence rate: nfe - number of function evaluations.
The red lines correspond to the value of the objective function for the design as optimized by DiGGer, when the experimental genotypes are randomized at the start, and the number of function evaluations needed for our algorithm to better it

Fig. 4 shows the initial design, the design as optimized by DiGGer, and the optimized design produced by our new algorithm. Because DiGGer does not account for kinship information, its optimization is not affected by this additional information. As a result, because the strong structure present in the initial design is not altered, the value of the optimization function, accounting for kinship, is fairly poor in this case (1.16285241). On the other hand, we can see that the new optimization procedure incorporates the kinship information well, producing a check pattern where all three families are interwoven. This is confirmed by a well-improved optimization function value (1.15030587), as shown by the algorithm's convergence speed (convergence reached in seconds) in Fig. 5. This highlights the importance, when using DiGGer, of randomizing the families of genotypes within the field before running the optimization. Doing so on this example leads to a much improved value (1.15371275) of the optimization function of the design produced by DiGGer compared to when the experimental genotypes are not randomized, although our algorithm still betters it (Fig. 5).

V Conclusion

Previous strategies for allocating genotypes between and within locations in experimental designs, such as the sparse and repchecks designs, ignore kinship information. When allocating the genotypes between locations, this was done at random (i.e. without necessarily ensuring a good spread of families across locations) and, within location, it was done by only ensuring a good spread of the repeated check genotypes (without accounting for the relatedness of the experimental genotypes).

Through the case studies presented in this paper, we have shown that our new algorithm ensures that the kinship is well accounted for, therefore limiting the risks of confounding between locations and genotype families. Within location, the gain over the current strategy is two-fold. First, we have shown that the convergence time is reduced relative to the existing algorithm (by approx. 50%) when kinship is not accounted for, and that the existing patterns of close check genotypes are no longer present. Second, accounting for kinship ensures that no confounding remains within a location, even if a strong pattern of families is present in the starting design matrix.

This paper has concentrated on the repchecks design; however, it could easily be generalized to other designs by defining new design matrices and including additional, potentially complex, constraints. Such examples include uniqueness of some genotypes within location or, in the case of p-reps designs [6], fixed proportions of duplicated genotypes within location. Further work is needed to incorporate these generalizations.

Although the examples shown in this paper were restricted to cases where the families of genotypes were simple (independent families of full siblings), the results presented can be extended, without loss of generality, to much more complex kinship matrices, e.g. based on molecular data, to ensure a more accurate definition of the relationships between genotypes.

This paper has concentrated on comparing the new approach to one existing approach, DiGGer. Work is ongoing to further compare it to the other widespread approach, OD, for which the objective function used is closer to that used in our approach than that used in DiGGer. However, early comparisons show a very clear benefit in computation time for our approach compared to OD, whose convergence times often make it impractical to use in real-life cases, with hundreds of genotypes and plots over several locations.

In summary, new ideas for adapting Differential Evolution to combinatorial permutation-based optimization are presented in this paper. This is the first time Differential Evolution has been applied to generating efficient experimental designs. Although this problem is hard to solve, the good convergence properties of the new algorithm allow finding promising sparse designs for real large-size problems. The software is highly optimized and tuned for the latest generations of Intel processors (multi-threaded, vectorized, …), making the computations faster than existing software. This opens horizons for larger and more efficient designs in the future.

References

  • [1] R. A. Fisher, The design of experiments, 9th ed.   Hafner Publishing Company, 1971.
  • [2] A. R. Gilmour, B. R. Cullis, and A. P. Verbyla, “Accounting for Natural and Extraneous Variation in the Analysis of Field Experiments,” Journal of Agricultural, Biological, and Environmental Statistics, vol. 2, no. 3, p. 269, 1997.
  • [3] B. R. Cullis, A. B. Smith, and N. E. Coombes, “On the design of early generation variety trials with correlated data,” Journal of Agricultural, Biological, and Environmental Statistics, vol. 11, no. 4, pp. 381–393, 2006.
  • [4] D. G. Butler, A. B. Smith, and B. R. Cullis, “On the Design of Field Experiments with Correlated Treatment Effects,” Journal of Agricultural, Biological, and Environmental Statistics, vol. 19, no. 4, pp. 539–555, 2014.
  • [5] E. R. Williams, J. John, and D. Whitaker, “Construction of more Flexible and Efficient P-rep Designs,” Australian {&} New Zealand Journal of Statistics, vol. 56, no. March 2013, 2014.
  • [6] E. Williams, H. P. Piepho, and D. Whitaker, “Augmented p-rep designs,” Biometrical Journal, vol. 53, no. 1, pp. 19–27, 2011.
  • [7] N. E. Coombes, “The Reactive TABU Search for Efficient Correlated Experimental Designs,” Ph.D. dissertation, Liverpool John Moores University, 2002.
  • [8] D. G. Butler, “On The Optimal Design of Experiments Under the Linear Mixed Model,” Ph.D. dissertation, The University of Queensland, 2013.
  • [9] N. E. Coombes, “DiGGer design search tool in R,” pp. 1–34, 2009. [Online]. Available: http://www.austatgen.org/files/software/downloads
  • [10] R Development Core Team, “R: A Language and Environment for Statistical Computing,” R Foundation for Statistical Computing Vienna Austria, 2016. [Online]. Available: http://www.r-project.org/
  • [11] D. Butler, B. Cullis, and J. Taylor, “Extensions in Linear Mixed Models and Design of Experiments,” in Statistics for the Australian Grains Industry, 2014, pp. 32–61.
  • [12] S. R. Searle, G. Casella, and C. E. McCulloch., Variance components.   Hoboken: John Wiley, 1992.
  • [13] C. R. Henderson, Applications of linear models in animal breeding.   Guelph, ontario: University of Guelph, 1984, no. 1.
  • [14] ——, “Chapter 11 MIVQUE of Variances and Covariances,” in Applications of linear models in animal breeding.   Guelph, Ontario: University of Guelph, 1984, vol. 1, no. 3, pp. 1–32.
  • [15] D. Laloe, “Precision and information in linear models of genetic evaluation.” Genetics Selection Evolution, vol. 25, no. 6, pp. 557–576, 1993.
  • [16] R. Storn and K. Price, “Differential Evolution - A simple and efficient adaptive scheme for global optimization over continuous spaces,” International Computer Science Institute, Berkeley, CA, Tech. Rep. TR-95-012, 1995.
  • [17] ——, “Differential Evolution – A Simple and Efficient Heuristic for global Optimization over Continuous Spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  • [18] V. Feoktistov, Differential Evolution: In Search of Solutions.   Springer USA, 2006, vol. 5. [Online]. Available: http://www.springer.com/mathematics/book/978-0-387-36895-5
  • [19] V. Feoktistov and S. Janaqi, “Generalization of the strategies in differential evolution,” in 18th International Parallel and Distributed Processing Symposium, 2004. Proceedings., vol. 00, no. 4, 2004, pp. 165–170. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1303160
  • [20] G. Onwubolu and D. Davendra, Eds., Differential Evolution: A Handbook for Global Permutation-Based Combinatorial Optimization.   Studies in Computational Intelligence Vol 175, Springer Berlin Heidelberg, 2009.
  • [21] M. Zaefferer, J. Stork, and T. Bartz-Beielstein, “Distance Measures for Permutations in Combinatorial Efficient Global Optimization,” Parallel Problem Solving from Nature–PPSN XIII, vol. 8672, pp. 373–383, 2014.
  • [22] J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4, Perth, Australia. Piscataway, NJ: IEEE Service Center, 1995, pp. 1942–1948.