Meta-heuristic algorithms are normally accompanied by parameters that influence their search behaviour on various optimisation problems. Parameter optimisation (PO) aims to find the best possible parameter configuration from the parameter space, which consists of all possible configurations of the target algorithm, so that the algorithm achieves its peak performance on a black-box optimisation problem. Formally, given an algorithm, PO can be defined as the following black-box meta-optimisation problem:
$$\theta^\ast=\operatorname*{arg\,min}_{\theta\in\Theta}\mathcal{P}(\theta,f),$$
where $f$ is the optimisation problem under consideration, and $\theta\in\Theta$ is a decision variable, i.e. a parameter configuration. $\mathcal{P}(\theta,f)$ is the performance measure associated with a configuration $\theta$ of the target algorithm. In particular, it can either be the runtime cost (e.g. the CPU wall time and/or the number of function evaluations) or the error of the solution found by the target algorithm.
PO is a challenging black-box meta-optimisation problem. First, its landscape is complex and changes with both the target algorithm and the problem being solved. Second, the parameters associated with the target algorithm can have various types (e.g. numerical, integer and categorical), and the number of parameters can be potentially large depending on the algorithm specification. In addition, PO is intrinsically expensive, as it requires exploring the parameter space by running the target algorithm with different configurations, where evaluating the effectiveness of a single configuration in turn costs a large number of function evaluations and/or a large amount of CPU wall time. In the evolutionary computation (EC) community, constructing a cheap-to-evaluate surrogate in lieu of calling the physically expensive objective function has been widely accepted as an effective approach to expensive optimisation. The design and analysis of computer experiments in statistics also uses surrogate models, either to fit a global model of the overall landscape or to sequentially identify the global optimum of the underlying function. In the automatic parameter configuration field, sequential model-based Bayesian optimisation methods [3, 4, 5] have shown strong performance in PO compared to traditional methods like grid search and random search, and can compete with or even surpass the results tuned by experienced human experts. Moreover, regression models have been extensively used in meta-learning to predict algorithm performance across various datasets. It is worth noting that all these lines of research need to construct surrogate models of a computationally expensive and complex function in order to inform an active learning criterion that identifies new inputs to evaluate.
The problem of PO has a long history dating back to the 1990s. Recently, it has become increasingly popular in both the meta-heuristics (e.g. [3, 4, 9, 10, 11]) and machine learning (e.g. [12, 5, 13, 14, 15, 16, 17]) communities, especially with the development of emerging automated machine learning. In this paper, instead of developing new algorithms for PO, we focus on studying surrogate models, which sit at the core of the model-based PO framework. We take differential evolution (DE) [19, 20], one of the most popular black-box optimisers in the EC community, as the baseline algorithm. To obtain the empirical performance data on a given optimisation problem, we evaluate the performance of DE with respect to 5,940 parameter configurations in an expensive offline phase. The collected performance data are used to train a regression model and to validate its generalisation ability for predicting the empirical performance of unseen parameter configurations. Here we consider four off-the-shelf regression algorithms for empirical performance modelling. In particular, we evaluate and compare their abilities in terms of how well they predict the empirical performance of a particular parameter configuration, and how well they approximate the parameter configuration versus empirical performance landscapes. We envisage that this study will shed light on the characteristics of surrogate models in future work.
The rest of this paper is organised as follows. Section 2 describes the methodologies used to set up the experiments. Section 3 presents and analyses the experimental results. Finally, Section 4 concludes this paper and provides some future directions.
This section mainly describes the benchmark problems chosen in our empirical studies, the baseline algorithm DE and its corresponding parameters, the performance measure used to evaluate the quality of a particular parameter configuration, the method used to collect the algorithm performance data, and the regression algorithms used to build surrogates for modelling the empirical performance.
2.1 Benchmark Problems
In this paper, we choose six widely used elementary test problems (i.e. Sphere, Ellipsoid, Rosenbrock, Ackley, Griewank and Rastrigin) and the first fourteen test problems (i.e. excluding the hybrid composite functions) from the CEC 2005 competition to constitute the benchmark problems. To facilitate the notation in Section 3, the six elementary functions are denoted as F1 to F6 and those from the CEC 2005 competition are denoted as F7 to F20. Note that these test problems have various characteristics. In particular, F1, F2 and F7 to F11 are unimodal functions while the others are multi-modal functions. All test problems have analytically defined continuous objective functions with a known global optimum. The number of variables of each test problem varies from 2 to 30, and the range of variables is set according to the original papers.
2.2 DE and its Parameters
DE is one of the most popular black-box optimisation algorithms in the EC community, including in evolutionary multi-objective optimisation [22, 23, 24, 25, 26, 27, 28]. One of the major reasons contributing to its success is its simple structure. For a vanilla DE, an offspring solution $\mathbf{u}^i$ is generated by a two-step procedure. First, a trial vector $\mathbf{v}^i$ is generated as:
$$\mathbf{v}^i=\mathbf{x}^{r_1}+F\times(\mathbf{x}^{r_2}-\mathbf{x}^{r_3}),$$
where $F$, known as the evolution step size, is a parameter of DE. $\mathbf{x}^{r_1}$, $\mathbf{x}^{r_2}$ and $\mathbf{x}^{r_3}$ are mutually distinct solutions randomly chosen from the parent population. Afterwards, $\mathbf{u}^i$ is generated as:
$$u^i_j=\begin{cases}v^i_j, & \text{if } rand\leq CR \text{ or } j=j_{rand}\\ x^i_j, & \text{otherwise}\end{cases}$$
where $j\in\{1,\cdots,n\}$ and $j_{rand}$ is an integer randomly chosen from 1 to $n$. $\mathbf{x}^i$ is the parent solution under consideration. $rand$ is a random number chosen from 0 to 1, and $CR$, known as the crossover rate, is another parameter of DE. In addition, the population size $N$ is also a parameter.
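The two-step procedure above, together with DE's usual greedy selection, can be sketched as follows. This is a minimal illustration only: the function name `de_step`, the greedy replacement rule and the default values of `F` and `CR` are our own assumptions, not a restatement of the paper's implementation.

```python
import numpy as np

def de_step(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One generation of vanilla DE (rand/1/bin).

    pop:     (N, n) array of parent solutions
    fitness: (N,) array of their objective values
    f_obj:   objective function to minimise (assumed callable on a vector)
    """
    rng = np.random.default_rng() if rng is None else rng
    N, n = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(N):
        # Step 1: mutation -- three mutually distinct parents other than i
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Step 2: binomial crossover with a guaranteed inherited coordinate j_rand
        j_rand = rng.integers(n)
        mask = rng.random(n) < CR
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        # Greedy selection (standard in vanilla DE)
        fu = f_obj(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

Because selection is greedy, the best fitness in the population never deteriorates from one generation to the next.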
Many studies have demonstrated that the performance of DE is highly sensitive to its parameter settings. During the past decade, many efforts have been devoted to the development of advanced DE variants that adaptively set the parameters on the fly [30, 31, 32] and/or find a good configuration in an offline manner. Since the major purpose of this paper is to investigate the ability to build a surrogate for modelling the empirical performance of an algorithm with respect to its parameter configurations, we focus on the vanilla DE, which is simple yet does not compromise the generality of our observations. Note that the population size $N$ is an integer parameter, while $F$ and $CR$ are numerical parameters.
2.3 Performance Measure
As the global optimum of each test problem is known a priori, this paper uses the approximation error to evaluate the empirical performance of a particular parameter configuration. Specifically, it is computed as:
$$\epsilon(\theta)=f(\mathbf{x}^{best})-f(\mathbf{x}^\ast),$$
where $\theta$ is a parameter configuration of DE, $\mathbf{x}^{best}$ is the best-so-far solution found by the DE with the parameter configuration $\theta$, and $\mathbf{x}^\ast$ is the global optimum. Since DE is a stochastic algorithm, each parameter configuration needs to be run more than once in practice. Thus, the performance of a parameter configuration is measured as an averaged approximation error:
$$\bar{\epsilon}(\theta)=\frac{1}{R}\sum_{r=1}^{R}\epsilon_r(\theta),$$
where $\epsilon_r(\theta)$ is the approximation error of a configuration $\theta$ at the $r$-th run and $R$ is the number of repetitions of experiments with $\theta$, which is fixed in our experiments.
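The averaged approximation error can be computed as below. This is a sketch: `run_algorithm` is a hypothetical callable standing in for a full DE run (returning the best objective value found), and the default `R=25` is an illustrative choice, not the repetition count used in the paper.

```python
import numpy as np

def averaged_approx_error(run_algorithm, theta, f_star, R=25, seed=0):
    """Average approximation error of configuration `theta` over R runs.

    run_algorithm(theta, seed=...) -> best objective value found (hypothetical API)
    f_star: objective value of the known global optimum
    """
    rng = np.random.default_rng(seed)
    errors = [run_algorithm(theta, seed=int(rng.integers(1 << 31))) - f_star
              for _ in range(R)]
    return float(np.mean(errors))
```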
2.4 Data Collection
In principle, the algorithm performance data used to construct the surrogate model of an algorithm's empirical performance can be obtained by any means. Since this paper aims to investigate the overall ability to model an algorithm's performance with respect to its parameter space, we are interested in every corner of the space. To this end, the parameter space is sampled in a grid manner, where we chose 9 different settings of the population size $N$, 60 different values of $F$ with a step size of 0.05, and 11 different values of $CR$ with a step size of 0.1. Therefore, there are $9\times 60\times 11=5{,}940$ different parameter configurations in total.
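Such a grid can be enumerated with a Cartesian product, as sketched below. The grid sizes match the counts stated above, but the concrete population sizes and the exact ranges of $F$ and $CR$ are illustrative assumptions.

```python
from itertools import product

# Assumed grids matching the stated counts: 9 population sizes,
# 60 values of F (step 0.05) and 11 values of CR (step 0.1).
pop_sizes = [10, 20, 30, 40, 50, 60, 70, 80, 90]        # 9 settings (placeholder values)
F_values  = [round(0.05 * k, 2) for k in range(1, 61)]  # 0.05 .. 3.00 (assumed range)
CR_values = [round(0.1 * k, 1) for k in range(0, 11)]   # 0.0 .. 1.0

configurations = list(product(pop_sizes, F_values, CR_values))
assert len(configurations) == 5940  # 9 x 60 x 11
```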
2.5 Regression Algorithms for Surrogate Modelling
In this paper, four regression algorithms, i.e. Gaussian process (GP), random forest (RF), support vector machine for regression (SVR) and radial basis function networks (RBFN), are considered as candidates for surrogate modelling of DE's empirical performance. Note that these regression algorithms have been widely used in model-based PO in the algorithm configuration literature [34, 35, 36].
To construct a surrogate model for a particular problem instance, each of these four models is trained on the performance data (70% of the data are used for training while the remaining 30% are used for testing) collected by running the DE algorithm with various parameter configurations on that instance, as introduced in Section 2.4. Note that learning a surrogate model is no free lunch, as each regression algorithm also requires some hyper-parameters to be tuned. To identify the best possible configuration of each regression algorithm, we apply random search to explore the hyper-parameter space. Specifically, for GP, we choose an appropriate kernel among RBF, rational quadratic and Matérn. For RF, the number of trees in a forest is chosen from 2 to 100, the minimum number of samples required to split an internal node is chosen from 2 to 11, the number of features to consider when looking for the best split is tuned over a given range, the criterion used to measure the quality of a split is either mean squared error or mean absolute error, and the minimum number of samples required at a leaf node is chosen from 1 to 11. For SVR, the kernel is chosen between RBF and Sigmoid, the margin parameter is tuned over a discrete set, the regularisation parameter is set between 1 and 10, and the kernel coefficient $\gamma$ is tuned when RBF is used as the kernel. A 5-fold cross-validation (using 80% of the training data for training and the remaining 20% for validation) is used to evaluate the training performance of a particular hyper-parameter configuration of a regression algorithm. For a fair comparison, all surrogate modelling procedures are implemented with scikit-learn, a machine learning toolbox in Python (https://scikit-learn.org/stable/).
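The training pipeline described above can be sketched for the RF case as follows. The synthetic data merely stand in for the real (configuration, error) pairs collected from the 5,940 DE runs; the search ranges are those stated in the text, while `n_iter=10` and the toy target function are our own assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Toy performance data standing in for the (N, F, CR) -> error pairs.
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 3))            # configurations (N, F, CR), rescaled
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]   # synthetic surrogate target

# 70/30 train/test split, as in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Random search over the RF hyper-parameter ranges stated in the text,
# evaluated with 5-fold cross-validation on the training portion.
param_dist = {
    "n_estimators": list(range(2, 101)),
    "min_samples_split": list(range(2, 12)),
    "min_samples_leaf": list(range(1, 12)),
}
search = RandomizedSearchCV(RandomForestRegressor(random_state=0),
                            param_dist, n_iter=10, cv=5, random_state=0)
search.fit(X_tr, y_tr)
model = search.best_estimator_   # refit on the full training set
```

The held-out 30% (`X_te`, `y_te`) is then used only to measure generalisation, as in Section 3.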
3 Experiments and Results
In this section, we present and compare experimental evaluations of the quality of the surrogates constructed by the different regression algorithms introduced in Section 2.5. The experimental results are analysed according to the following three research questions (RQs).
RQ1: Which surrogate model works best for empirical performance modelling on various kinds of benchmark problems?
RQ2: Does the empirical performance predicted by a surrogate model follow the same order as the ground truth?
RQ3: How does the empirical performance landscape fitted by a surrogate model compare with the ground truth?
3.1 Comparisons of Different Surrogate Models
Bearing RQ1 in mind, this section empirically compares the generalisation performance of the four regression algorithms on unseen parameter configurations. In particular, the root mean square error (RMSE) is used to measure the generalisation performance, calculated as:
$$\mathrm{RMSE}=\sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\hat{\epsilon}(\theta^i)-\bar{\epsilon}(\theta^i)\right)^2},$$
where $\hat{\epsilon}(\theta^i)$ is the approximation error of a parameter configuration $\theta^i$ estimated by a surrogate model, $\bar{\epsilon}(\theta^i)$ is the observed approximation error of $\theta^i$, and $m$ is the number of data points in the testing set.
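The RMSE over the testing set amounts to a one-liner; a minimal sketch (the function name is ours):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted approximation errors."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```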
From the results shown in Tables 1 to 3, we clearly see that GP and RF are the best regression algorithms for building the surrogate that models the empirical performance. RBFN is slightly worse than GP and RF, while SVR is the worst choice except in one case on F14. Note that our observations of the promising performance of GP and RF are in line with results reported in the contemporary algorithm configuration literature. Furthermore, we find that the relative performance of the different regression algorithms is consistent across different dimensionalities. This makes sense, as a surrogate model is built upon the parameter configurations themselves, which are independent of the problem instances. In addition, we find that the RMSE increases dramatically with the dimensionality of the underlying problem. This can be explained by the significant degradation of the performance of DE with increasing dimensionality, which in turn largely increases the approximation errors.
To better understand the generalisation performance of the different surrogate models (especially the relationship between the predicted performance and the ground truth for a particular parameter configuration), we calculate the Pearson correlation coefficient (PCC) of the results:
$$\mathrm{PCC}=\frac{\mathrm{cov}(E,\hat{E})}{\sigma_E\sigma_{\hat{E}}},$$
where $E$ represents the set of observed approximation errors of all parameter configurations in the testing set while $\hat{E}$ is the set of approximation errors estimated by a surrogate model. $\mathrm{cov}(E,\hat{E})$ is the covariance of $E$ and $\hat{E}$, and $\sigma_E$ and $\sigma_{\hat{E}}$ are the standard deviations of $E$ and $\hat{E}$, respectively. In particular, a higher PCC indicates a better correlation between the predicted performance and the ground truth.
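The PCC as defined above is exactly the off-diagonal entry of the correlation matrix, so a sketch reduces to `np.corrcoef` (the function name `pcc` is ours):

```python
import numpy as np

def pcc(y_true, y_pred):
    """Pearson correlation coefficient: cov(E, E_hat) / (sigma_E * sigma_E_hat)."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])
```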
From the results shown in Tables 1 to 3, we can see that the observations are in line with those for the RMSE. GP and RF are the most competitive regression algorithms in almost all cases, with relatively high correlation between the predicted performance and the ground truth. The performance of RBFN is very close to that of GP and RF, while the PCC obtained by SVR is the worst. To provide a visual understanding of this point, we also show scatter plots of the ground truth versus the predicted performance in Figures 1 to 3 (more comprehensive figures are provided in the supplementary document, which can be downloaded from http://coda-group.github.io/cec19-supp.pdf). Based on these figures and Tables 1 to 3, we summarise our findings as follows.
As shown in Tables 1 to 3, the RMSEs of all four regression algorithms are very large on F9 and F12. This is because the performance of DE is poor on these two test problems for almost all of the 5,940 sampled parameter configurations. Accordingly, the deviations of the predicted empirical performance are on a relatively large scale. This also explains the increase of the RMSEs with the problem dimensionality. However, according to the PCCs, we find that the correlations between the predicted empirical performance and the ground truth for GP, RBFN and RF are acceptable.
The RMSEs on the first six elementary test problems (i.e. F1 to F6), which are relatively simple, are better than those on the problems from the CEC 2005 competition. Accordingly, the deviations between the predicted performance and the ground truth are small. This indicates that most parameter configurations lead to an acceptable performance of DE; in other words, DE is not sensitive to its configuration on these problems.
As shown in the scatter plots for F8, SVR largely underestimates the approximation error. Similar observations can be made on F7, F9, F10, F12 and F18, as shown in the supplementary document.
As shown in the scatter plots for F14, the points are crowded in the middle region of the diagonal line. This implies that all parameter configurations fail to produce a decent result. Similar observations can be made on F13 and F20 when the number of variables becomes large, as shown in the supplementary document.
Based on the above discussions, we come up with the following response to RQ1:
Response to RQ1: GP and RF are the best regression algorithms for building the surrogate model of empirical performance. In addition, the quality of the surrogate model depends on the quality of the performance data.
3.2 Comparisons of Performance Ranks Obtained by Different Surrogate Models
When using a surrogate in sequential model-based PO, the prediction accuracy of the model is not of utmost importance. Instead, reliably differentiating promising configurations from their unpromising counterparts can also provide useful information to guide the optimisation process. In other words, for a set of parameter configurations, we expect the ranks (or the order) of the empirical performance predicted by a surrogate model to follow those of the ground truth. To this end, we use Spearman's rank correlation coefficient (SRCC) to measure the statistical dependence between the ranks of the predicted performance and the ground truth. Note that the calculation of the SRCC is almost the same as that of the PCC, except that the raw data are replaced by the corresponding ranks:
$$\mathrm{SRCC}=\frac{\mathrm{cov}(rg_E,rg_{\hat{E}})}{\sigma_{rg_E}\sigma_{rg_{\hat{E}}}},$$
where $rg_E$ indicates the ranks of the observed approximation errors of all parameter configurations in the testing set while $rg_{\hat{E}}$ indicates the ranks of the estimated approximation errors. A higher SRCC indicates a better rank dependency between the predicted performance and the ground truth.
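Because the SRCC is invariant to any monotone transformation of the predictions, it rewards models that preserve order even when their numeric predictions are off. A sketch using SciPy (the function name `srcc` is ours):

```python
from scipy.stats import spearmanr

def srcc(y_true, y_pred):
    """Spearman's rank correlation: the PCC computed on the ranks of the data."""
    rho, _ = spearmanr(y_true, y_pred)
    return float(rho)
```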
From the results shown in Tables 1 to 3, we can again conclude that GP and RF are the most reliable regression algorithms for building the surrogate model of the empirical performance: they almost always occupy the top two positions in terms of SRCC. It is interesting to note that the SRCCs obtained by SVR are not as poor as its RMSE and PCC results suggest. It is even comparable with GP and RF in some cases, e.g. on F20. This suggests that the predictions made by SVR have a decent chance of differentiating the order between two parameter configurations. In this case, SVR might be useful in a model-based PO process as a comparison-based surrogate. Furthermore, we notice that RBFN does not perform well in terms of SRCC; it is even sometimes worse than SVR. This indicates that although the predictions made by RBFN are numerically close to the ground truth, they may still mislead a model-based PO, as they mix up the order of similar parameter configurations.
Based on the above discussion, we come up with the following response to RQ2:
Response to RQ2: GP and RF are able to preserve the order of the empirical performance of different parameter configurations. Notably, SVR, which performs poorly at predicting the empirical performance, shows comparable performance for order preservation.
3.3 Comparisons of Landscape Approximation
In the previous subsections, we mainly focused on investigating the quality of the surrogate models from the approximation accuracy perspective. For the last RQ, we study the quality of the surrogate models from a landscape analysis perspective. Considering the testing data set, we compare the landscapes of the empirical performance predicted by the different regression algorithms with the landscape of the ground truth. To this end, we use the kernel density estimation (KDE) method (https://uk.mathworks.com/help/stats/ksdensity.html) to estimate a probability density function (PDF) of the empirical performance. For a visual comparison, Figures 4 to 6 show the estimated PDFs of the four regression algorithms and the ground truth. From these figures, we can see that the predictions made by GP, RF and RBFN closely fit the distribution of the ground truth. In contrast, the estimated PDF of SVR deviates from the ground truth in many cases, and this becomes more evident as the dimensionality of the underlying problem increases.
Since the surrogate model considered in this paper is a mapping between a parameter configuration and its corresponding empirical performance, it is interesting to consider a more complex landscape, namely the joint probability distribution of parameter configuration and empirical performance. As it is non-trivial to visualise a multi-dimensional distribution, we instead assess the proximity of the landscape approximated by the surrogate model to that of the ground truth from a statistical distance perspective. To this end, we apply the earth mover's distance (EMD), also known as the Wasserstein metric, to evaluate the dissimilarity between two multi-dimensional distributions. Generally speaking, given two distributions, the EMD measures the minimum cost of turning one distribution into the other. In our context, similar landscapes are expected to have a relatively small EMD, whereas large EMD values imply that the landscapes are significantly different from each other. Due to the page limit, we do not elaborate the calculation procedure of the EMD here; interested readers can refer to the literature for more details. From the comparison of EMD values in the corresponding table, we find that GP, RF and RBFN achieve the same level of approximation to the ground truth, whereas the divergence values obtained by SVR are relatively large in almost all cases. These observations are also in line with the RMSEs discussed in Section 3.1.
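The intuition that similar landscapes yield a small EMD can be illustrated with SciPy's one-dimensional Wasserstein distance (the paper compares multi-dimensional distributions; this 1-D toy with synthetic Gaussian samples is an assumption-laden simplification):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
ground_truth = rng.normal(0.0, 1.0, 1000)   # stand-in for observed errors
close_model  = rng.normal(0.05, 1.0, 1000)  # good surrogate: slightly shifted
far_model    = rng.normal(2.0, 1.0, 1000)   # poor surrogate: strongly shifted

d_close = wasserstein_distance(ground_truth, close_model)
d_far = wasserstein_distance(ground_truth, far_model)
assert d_close < d_far  # similar landscapes => smaller EMD
```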
Based on the above discussion, we come up with the following response to RQ3:
Response to RQ3: The landscapes of the empirical performance predicted by GP, RF and RBFN approximate the ground truth well, while the landscapes obtained by SVR deviate from the ground truth to a certain extent.
4 Conclusions and Future Directions
It is not uncommon for a meta-heuristic algorithm to be accompanied by parameters whose settings largely influence its performance on various problems. Tweaking the parameter configuration of a meta-heuristic algorithm to achieve its peak performance on a certain problem can be treated as an optimisation process, known as PO. Due to the stochastic nature of most meta-heuristic algorithms, evaluating the quality of a particular parameter configuration usually requires running the target algorithm several times. Therefore, PO is inarguably computationally expensive. Building a cheap-to-evaluate surrogate model in lieu of a computationally expensive experiment has been widely accepted as a major approach to expensive optimisation. Instead of developing a new algorithm for PO, this paper studies a fundamental issue: the ability of four prevalent regression algorithms to build a surrogate model of empirical performance. From our extensive experiments, we find that surrogate models built by GP and RF show promising generalisation ability for predicting the empirical performance of unseen parameter configurations. In particular, the prediction accuracy depends on the quality of the original performance data, which implies that one needs to be careful when using a surrogate model in the early stage of a PO process. Furthermore, we find that although SVR does not perform well at predicting the approximation error of a parameter configuration, it is able to differentiate the order of two parameter configurations.
Generally speaking, we hope this work will be useful to a wide variety of researchers who seek to model algorithm performance for algorithm analysis, scheduling, algorithm portfolio construction, automated algorithm configuration, and other applications. As the next step, we plan to explore the following three aspects.
We would like to apply the regression algorithms investigated in this paper in the context of model-based PO. Although using design and analysis of computer experiments in the context of PO has already been studied in previous work (e.g. sequential parameter optimisation), it is still worthwhile to see whether the observations from offline training are directly applicable to online PO.
Here we set up PO as a per-instance scenario. Following the prevalent algorithm configuration literature, it would be interesting to incorporate problem features into the surrogate modelling process so that the PO can be generalised to a range of similar problems.
In addition, assessing the performance of evolutionary multi-objective optimisation algorithms, e.g. [40, 41, 42, 43, 44], is even more difficult. Therefore, it is also interesting to investigate appropriate surrogate modelling methods to analyse and understand the parameter versus algorithm performance relationship in the context of multi-objective optimisation.
This work was supported by the Royal Society (Grant No. IEC/NSFC/170243).
-  Y. Jin, “Surrogate-assisted evolutionary computation: Recent advances and future challenges,” Swarm and Evol. Comput., vol. 1, no. 2, pp. 61–70, 2011.
-  T. J. Santner, B. J. Williams, and W. I. Notz, The Design and Analysis of Computer Experiments. Springer, 2003.
-  T. Bartz-Beielstein, C. Lasarczyk, and M. Preuss, “Sequential parameter optimization,” in CEC’05: Proc. of the 2005 IEEE Congress on Evol. Comput., 2005, pp. 773–780.
-  F. Hutter, H. H. Hoos, and K. Leyton-Brown, “Sequential model-based optimization for general algorithm configuration,” in LION’11: Proc. of 5th International Conference on Learning and Intelligent Optimization, 2011, pp. 507–523.
-  C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown, “Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms,” in KDD’13: Proc. of 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2013, pp. 847–855.
-  J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” J. Machine Learning Research, vol. 13, pp. 281–305, 2012.
-  M. Reif, F. Shafait, M. Goldstein, T. Breuel, and A. Dengel, “Automatic classifier selection for non-experts,” Pattern Anal. Appl., vol. 17, no. 1, pp. 83–96, 2014.
-  R. Kohavi and G. H. John, “Automatic parameter selection by minimizing estimated error,” in ICML’95: Proc. of 12th International Conference on Machine Learning, 1995, pp. 304–312.
-  A. Blot, H. H. Hoos, L. Jourdan, M. Kessaci-Marmion, and H. Trautmann, “Mo-paramils: A multi-objective automatic algorithm configuration framework,” in LION’16: Proc. of 10th International Conference on Learning and Intelligent Optimization, 2016, pp. 32–47.
-  M. López-Ibáñez, J. Dubois-Lacoste, L. Pérez Cáceres, T. Stützle, and M. Birattari, “The irace package: Iterated racing for automatic algorithm configuration,” Oper. Res. Perspectives, vol. 3, pp. 43–58, 2016.
-  K. Li, Á. Fialho, S. Kwong, and Q. Zhang, “Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition,” IEEE Trans. Evolutionary Computation, vol. 18, no. 1, pp. 114–130, 2014.
-  J. Snoek, H. Larochelle, and R. P. Adams, “Practical bayesian optimization of machine learning algorithms,” in NIPS’12: Proc. of 26th Annual Conference on Neural Information Processing Systems, 2012, pp. 2960–2968.
-  S. Sanders and C. G. Giraud-Carrier, “Informing the use of hyperparameter optimization through metalearning,” in ICDM’17: Proc. of 2017 IEEE International Conference on Data Mining, 2017, pp. 1051–1056.
-  J. Cao, S. Kwong, R. Wang, and K. Li, “A weighted voting method using minimum square error based on extreme learning machine,” in ICMLC’12: Proc. of the 2012 International Conference on Machine Learning and Cybernetics, 2012, pp. 411–414.
-  K. Li, R. Wang, S. Kwong, and J. Cao, “Evolving extreme learning machine paradigm with adaptive operator selection and parameter control,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 21, pp. 143–154, 2013.
-  J. Cao, S. Kwong, R. Wang, and K. Li, “An indicator-based selection multi-objective evolutionary algorithm with preference for multi-class ensemble,” in ICMLC’14: Proc. of the 2014 International Conference on Machine Learning and Cybernetics, 2014, pp. 147–152.
-  J. Cao, S. Kwong, R. Wang, X. Li, K. Li, and X. Kong, “Class-specific soft voting based multiple extreme learning machines ensemble,” Neurocomputing, vol. 149, pp. 275–284, 2015.
-  “NeurIPS 2018 challenge: The 3rd AutoML challenge: AutoML for lifelong machine learning,” https://www.4paradigm.com/competition/nips2018.
-  R. Storn and K. V. Price, “Differential evolution - A simple and efficient heuristic for global optimization over continuous spaces,” J. Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
-  K. Li, S. Kwong, R. Wang, J. Cao, and I. J. Rudas, “Multi-objective differential evolution with self-navigation,” in SMC’12: Proc. of the 2012 IEEE International Conference on Systems, Man, and Cybernetics, 2012, pp. 508–513.
-  P. N. Suganthan, N. Hansen, K. Deb, J. J. Liang, Y.-P. Chen, A. Anger, and S. Tiwari, “Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization,” NTU and IIT Kanpur, Technical Report 2005005, 2005.
-  K. Li, Q. Zhang, S. Kwong, M. Li, and R. Wang, “Stable matching-based selection in evolutionary multiobjective optimization,” IEEE Trans. Evolutionary Computation, vol. 18, no. 6, pp. 909–923, 2014.
-  K. Li, S. Kwong, and K. Deb, “A dual-population paradigm for evolutionary multiobjective optimization,” Inf. Sci., vol. 309, pp. 50–72, 2015.
-  K. Li, S. Kwong, Q. Zhang, and K. Deb, “Interrelationship-based selection for decomposition multiobjective optimization,” IEEE Trans. Cybernetics, vol. 45, no. 10, pp. 2076–2088, 2015.
-  K. Li, K. Deb, Q. Zhang, and S. Kwong, “An evolutionary many-objective optimization algorithm based on dominance and decomposition,” IEEE Trans. Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.
-  K. Li, K. Deb, Q. Zhang, and Q. Zhang, “Efficient nondomination level update method for steady-state evolutionary multiobjective optimization,” IEEE Trans. Cybernetics, vol. 47, no. 9, pp. 2838–2849, 2017.
-  R. Chen, K. Li, and X. Yao, “Dynamic multiobjectives optimization with a changing number of objectives,” IEEE Trans. Evolutionary Computation, vol. 22, no. 1, pp. 157–171, 2018.
-  K. Li, R. Chen, G. Min, and X. Yao, “Integration of preferences in decomposition multiobjective optimization,” IEEE Trans. Cybernetics, vol. 48, no. 12, pp. 3359–3370, 2018.
-  S. Das and P. N. Suganthan, “Differential evolution: A survey of the state-of-the-art,” IEEE Trans. Evol. Comput., vol. 15, no. 1, pp. 4–31, 2011.
-  J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, “Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems,” IEEE Trans. Evol. Comput., vol. 10, no. 6, pp. 646–657, 2006.
-  A. K. Qin, V. L. Huang, and P. N. Suganthan, “Differential evolution algorithm with strategy adaptation for global numerical optimization,” IEEE Trans. Evol. Comput., vol. 13, no. 2, pp. 398–417, 2009.
-  K. Li, Á. Fialho, and S. Kwong, “Multi-objective differential evolution with adaptive control of parameters and operators,” in LION’11: Proc. of 5th International Conference on Learning and Intelligent Optimization, 2011, pp. 473–487.
-  N. Belkhir, J. Dréo, P. Savéant, and M. Schoenauer, “Feature based algorithm configuration: A case study with differential evolution,” in PPSN’16: Proc. of 14th International Conference on Parallel Problem Solving from Nature - PPSN XIV, 2016, pp. 156–166.
-  F. Hutter, L. Xu, H. H. Hoos, and K. Leyton-Brown, “Algorithm runtime prediction: Methods & evaluation,” Artif. Intell., vol. 206, pp. 79–111, 2014.
-  M. Wu, S. Kwong, Y. Jia, K. Li, and Q. Zhang, “Adaptive weights generation for decomposition-based multi-objective optimization using gaussian process regression,” in Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2017, Berlin, Germany, July 15-19, 2017, 2017, pp. 641–648.
-  M. Wu, K. Li, S. Kwong, Q. Zhang, and Q. Zhang, “Learning to decompose: a paradigm for decomposition-based multiobjective optimization,” IEEE Trans. Evolutionary Computation, 2018, accepted for publication.
-  I. Loshchilov, M. Schoenauer, and M. Sebag, “Comparison-based optimizers need comparison-based surrogates,” in PPSN’10: Proc. of 11th International Conference on Parallel Problem Solving from Nature, 2010, pp. 364–373.
-  Y. Rubner, C. Tomasi, and L. J. Guibas, “The earth mover’s distance as a metric for image retrieval,” International Journal of Computer Vision, vol. 40, no. 2, pp. 99–121, 2000.
-  X. Sun, D. Gong, Y. Jin, and S. Chen, “A new surrogate-assisted interactive genetic algorithm with weighted semisupervised learning,” IEEE Trans. Cybernetics, vol. 43, no. 2, pp. 685–698, 2013.
-  M. Wu, S. Kwong, Q. Zhang, K. Li, R. Wang, and B. Liu, “Two-level stable matching-based selection in MOEA/D,” in SMC’15: Proc. of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, 2015, pp. 1720–1725.
-  M. Wu, K. Li, S. Kwong, Y. Zhou, and Q. Zhang, “Matching-based selection with incomplete lists for decomposition multiobjective optimization,” IEEE Trans. Evolutionary Computation, vol. 21, no. 4, pp. 554–568, 2017.
-  M. Wu, K. Li, S. Kwong, and Q. Zhang, “Evolutionary many-objective optimization based on adversarial decomposition,” IEEE Trans. Cybernetics, 2018, accepted for publication.
-  K. Li, R. Chen, G. Fu, and X. Yao, “Two-archive evolutionary algorithm for constrained multi-objective optimization,” IEEE Trans. Evolutionary Computation, 2018, accepted for publication.
-  K. Li, K. Deb, and X. Yao, “R-metric: Evaluating the performance of preference-based evolutionary multiobjective optimization using reference points,” IEEE Trans. Evolutionary Computation, vol. 22, no. 6, pp. 821–835, 2018.