
Adaptive Generation-Based Evolution Control for Gaussian Process Surrogate Models

The interest in accelerating black-box optimizers has resulted in several surrogate model-assisted versions of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a state-of-the-art continuous black-box optimizer. The version called Surrogate CMA-ES (S-CMA-ES) uses Gaussian process or random forest surrogate models with a generation-based evolution control. This paper presents an adaptive improvement of the S-CMA-ES based on a general procedure introduced with the s*ACM-ES algorithm, in which the number of generations using the surrogate model before retraining is adjusted depending on the performance of the last instance of the surrogate. Three algorithms that differ in the measure of the surrogate model's performance are evaluated on the COCO/BBOB framework. The results show a minor improvement over the S-CMA-ES with constant model lifelengths, especially when larger lifelengths are considered.






The original article appeared in Proceedings of the 17th Conference on Information Technologies—Applications and Theory (ITAT 2017) Martinské hole, Slovakia, September 22–26, 2017, published in print by CreateSpace Independent Publishing Platform and online by CEUR Workshop Proceedings.

This updated version incorporates the following changes:

  • Added credits to the ACM-ES algorithm.
  • Section 1: Added references and clarified the motivation.
  • Section 3: Added references.

1 Introduction

The problem of optimizing real-valued functions without a known mathematical expression, arising in many engineering tasks, is referred to as continuous black-box optimization. Evolution strategies, a class of randomized population-based algorithms inspired by natural evolution, are a popular choice for continuous black-box optimization. In particular, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6] has been considered the state-of-the-art continuous black-box optimizer for the past several decades. Since values of a black-box function can in practice be obtained only empirically and at considerable cost, the number of function evaluations needed to reach a desired function value is a key criterion for evaluating black-box optimizers.

The technique of surrogate modelling aims at saving function evaluations by building a surrogate model of the fitness and using it for a portion of the function evaluations conducted in the course of the evolutionary search. Several surrogate model-assisted versions of the CMA-ES have been developed (see [13] for a recent comparison of some of the most notable algorithms). The Surrogate CMA-ES (S-CMA-ES) [2] utilizes random forest- or Gaussian process-based surrogate models, which possess an inherent capability to quantify the uncertainty of the prediction.

In order to control the surrogate model's error, the S-CMA-ES uses the surrogate model for a given number of generations before a new instance of the model is trained on a population evaluated with the true fitness, a strategy called generation-based evolution control [9]. In [2], two values of the number of model-evaluated generations, in particular 1 and 5, have been benchmarked on the COCO/BBOB framework. In many cases, the higher value outperformed the lower one in earlier phases of the optimization, but the reverse order was observed towards later phases of the optimization.

The ACM-ES algorithm [11] introduced an adaptive evolution control adjusting surrogate hyperparameters and the model lifelength, i.e., the number of model-evaluated generations, as a function of the previous model's error.

In this paper, we use the procedure for adjusting the number of model-evaluated generations from ACM-ES in connection with three different surrogate model error measures. The three S-CMA-ES versions are compared on the COCO/BBOB framework. We restrict our attention to the S-CMA-ES with Gaussian processes, since they outperformed random forest-based surrogates [2].

The remainder of this paper is organized as follows. Section 2 outlines basic concepts of the S-CMA-ES. The adaptive version is described in Section 3. The experimental setup is given in Section 4 and CPU timing in Section 5. Experimental results are reported in Section 6. Section 7 concludes the paper.

2 Surrogate CMA-ES

The CMA-ES operates on a population of λ candidate solutions sampled from a multivariate normal distribution:

  x_k ∼ N(m, σ²C),  k = 1, …, λ,    (1)

where N(m, σ²C) is the normal distribution with mean m and covariance matrix σ²C; m and C are the mean and the covariance matrix of the estimated search distribution, respectively; and σ is the overall search step size. The candidate solutions are ranked according to their fitness values:

  f(x_{1:λ}) ≤ f(x_{2:λ}) ≤ … ≤ f(x_{λ:λ}).    (2)

Upon a (weighted) selection of the μ highest-ranked points, the mean and the covariance matrix of the multivariate normal distribution are adapted according to a procedure that takes as input, among other variables, a cumulation of the past search steps [5]. The S-CMA-ES modifies the CMA-ES by replacing its sampling (1) and fitness-evaluation (2) steps with the procedure depicted in Algorithm 1.
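The sampling and ranking steps (1) and (2) can be sketched in a few lines of Python (a minimal illustration with a hypothetical sphere fitness; numpy's multivariate sampler stands in for the full CMA-ES machinery):

```python
import numpy as np

def sample_and_rank(fitness, m, sigma, C, lam, rng):
    """Sample lam candidates from N(m, sigma^2 C) and rank them by fitness."""
    X = rng.multivariate_normal(m, sigma**2 * C, size=lam)  # one candidate per row
    y = np.array([fitness(x) for x in X])
    order = np.argsort(y)                  # ascending: best (lowest) fitness first
    return X[order], y[order]

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x**2))     # hypothetical fitness for illustration
X, y = sample_and_rank(sphere, m=np.zeros(2), sigma=0.5, C=np.eye(2), lam=8, rng=rng)
assert np.all(np.diff(y) >= 0)             # population is ordered best to worst
```

The subsequent mean and covariance adaptation of the CMA-ES proper is omitted here.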

0:  g (generation), g_m (number of model-evaluated generations), m, σ, C (CMA-ES internal variables), r (maximal distance between N(m, σ²C) and a training point), n_min (minimal number of points for training), n_max (maximal number of points for training), A (archive), f_M (model), f (fitness)
1:  x_k ∼ N(m, σ²C), k = 1, …, λ {sampling}
2:  if g is original-fitness-evaluated then
3:     y_k ← f(x_k), k = 1, …, λ {fitness evaluation}
4:     A ← A ∪ {(x_k, y_k)}, k = 1, …, λ {archive update}
5:     choose n training points from A within the Mahalanobis distance r from N(m, σ²C), assuring that n_min ≤ n ≤ n_max
6:     f_M ← model trained on the n chosen training points
7:     mark g + 1, …, g + g_m as model-evaluated
8:  else
9:     y_k ← f_M(x_k), k = 1, …, λ {model evaluation}
10:    if g_m model generations have passed then
11:       mark g + 1 as original-fitness-evaluated
12:    end if
13:  end if
Algorithm 1: Surrogate part of the S-CMA-ES

Depending on the generation number g, the procedure evaluates all candidate solutions either with the real fitness or with the model. In both cases, the sampling of the estimated multivariate normal distribution is unchanged (step 1).

If the population is original-fitness-evaluated (step 3), the new evaluations are saved in an archive of known solutions (step 4). Afterwards, a new model is trained on a set of archived points lying within a given Mahalanobis distance from the current CMA-ES distribution (steps 5 and 6).

In model-evaluated generations, the fitness values of the whole population of candidate solutions are estimated by the model (step 9).
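The control flow of Algorithm 1 can be sketched as follows (a simplified illustration: the hypothetical NearestNeighborSurrogate class stands in for the GP model, and the selection of training points by Mahalanobis distance is omitted):

```python
import numpy as np

class NearestNeighborSurrogate:
    """Stand-in for the GP model: predicts the fitness of the nearest archived point."""
    def fit(self, X, y):
        self.X, self.y = np.atleast_2d(X), np.asarray(y)
        return self
    def predict(self, X):
        d = np.linalg.norm(np.atleast_2d(X)[:, None, :] - self.X[None, :, :], axis=2)
        return self.y[np.argmin(d, axis=1)]

def surrogate_step(is_original, X, f, model, archive):
    """One generation of the surrogate part of the S-CMA-ES (simplified):
    original generations evaluate the true fitness, extend the archive and
    retrain the model; model generations only query the surrogate."""
    if is_original:
        y = np.array([f(x) for x in X])              # step 3: fitness evaluation
        archive.append((X, y))                       # step 4: save to the archive
        Xa = np.vstack([xs for xs, _ in archive])    # (Mahalanobis-based selection
        ya = np.concatenate([v for _, v in archive]) #  of training points omitted)
        model.fit(Xa, ya)                            # step 6: train the model
    else:
        y = model.predict(X)                         # step 9: model evaluation
    return y
```

In model-evaluated generations the archive and the model are left untouched; the actual S-CMA-ES additionally restricts the training set to a neighbourhood of the current search distribution.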

2.1 Gaussian Processes

A Gaussian process (GP) is a collection of random variables (f(x))_{x∈X}, such that any finite subcollection (f(x_1), …, f(x_N)) has an N-dimensional normal distribution. A Gaussian process is defined by a mean function μ(x) (often assumed to be zero) and a covariance function K(x, x'; θ), where θ is a vector of parameters of K, hence hyperparameters of the Gaussian process. Given a set of training data X_N = {x_1, …, x_N} with observed values y, the covariance matrix of the GP prior is K_N + σ_n² I_N, where K_N is an N × N matrix given by (K_N)_{ij} = K(x_i, x_j; θ) for all i, j ∈ {1, …, N}; σ_n² is the variance of an additive, i.i.d. noise; and I_N is an N × N identity matrix. Given a new point x_*, the GP regression prediction is derived by conditioning the joint normal distribution of (f(x_1), …, f(x_N), f(x_*)) on the observed values y, which yields a univariate Gaussian (see [14] for more details). The hyperparameters of a GP regression model are estimated using the maximum likelihood estimation method.
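A minimal sketch of GP regression along these lines, assuming a squared-exponential covariance function for brevity instead of the Matérn covariance used in the S-CMA-ES:

```python
import numpy as np

def sq_exp(A, B, ell=1.0, sf2=1.0):
    """Squared-exponential covariance (a stand-in for the Matern covariance)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, x_star, kernel, sigma_n2):
    """Posterior mean and variance at x_star, given training data (X, y),
    a covariance function `kernel` and a noise variance sigma_n2."""
    K = kernel(X, X) + sigma_n2 * np.eye(len(X))       # covariance of the GP prior
    k_star = kernel(X, x_star[None, :])[:, 0]
    alpha = np.linalg.solve(K, y)                      # K^{-1} y
    mean = k_star @ alpha
    var = kernel(x_star[None, :], x_star[None, :])[0, 0] \
        - k_star @ np.linalg.solve(K, k_star)
    return mean, var

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])
mean, var = gp_predict(X, y, np.array([1.5]), sq_exp, 1e-6)  # prediction between points
```

With a near-zero noise variance the posterior mean interpolates the training data, which is an easy sanity check on the implementation.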

3 Adaptive Evolution Control for Surrogate CMA-ES

The generation-based evolution control optimizes the fitness function and its surrogate model in a certain proportion. On problem areas that can be approximated well, a surrogate-assisted optimization might benefit from frequent utilization of the model, while on areas that are hard for the surrogate to approximate, frequent utilization of the model might degrade the performance due to the model's inaccuracy.

Adaptation of the number of model-evaluated generations (in addition to other surrogate model parameters that we do not investigate here) depending on the previous model's error has been proposed in ACM-ES [11].

Let g be a generation marked as original-fitness-evaluated, in which a new surrogate model is trained. If this is the first surrogate trained so far, put g_m = 1. Otherwise, the error of the previous surrogate model is estimated on the newly evaluated population (Algorithm 2). The error is then mapped into a number of consecutive generations g_m for which the new surrogate will be used (Algorithm 3).

We investigate three approaches to expressing the surrogate model error. As the CMA-ES depends primarily on the ranking of candidate solutions, the first two approaches, the Kendall correlation coefficient and the rank difference error, are based on ranking. The third one, previously proposed in [12], uses the Kullback-Leibler divergence, a.k.a. information gain, to measure the difference between a multivariate normal distribution estimated from the fitness values y and a multivariate normal distribution estimated from the predicted values ŷ.

0:  error_type (one of {"Kendall", "Rank-Difference", "Kullback-Leibler"}), g (CMA-ES generation number), (x_1, …, x_λ) (a newly sampled population), y, ŷ (fitness values and model predictions in generation g), CMA-ES constants and variables at generation g, ε_KL^max (maximal error so far)
1:  if error_type = "Kendall" then
2:     τ ← Kendall rank correlation coefficient between y and ŷ
3:     ε ← (1 − τ)/2
4:  else if error_type = "Rank-Difference" then
5:     ε ← RDE_μ(ŷ, y)
6:  else if error_type = "Kullback-Leibler" then
7:     (m_1, σ_1, C_1) ← cma_update with the fitness values y
8:     (m_2, σ_2, C_2) ← cma_update with the predicted values ŷ
9:     ε_KL ← D_KL(N(m_1, σ_1²C_1) ∥ N(m_2, σ_2²C_2))
10:    if ε_KL > ε_KL^max then
11:       ε_KL^max ← ε_KL
12:    end if
13:    ε ← ε_KL / ε_KL^max {normalize in proportion to the historical maximum}
14:  end if
Algorithm 2: Model error estimation

Kendall rank correlation coefficient

The Kendall rank correlation coefficient τ measures the similarity between two different orderings of the same set. Let y = (y_1, …, y_λ) and ŷ = (ŷ_1, …, ŷ_λ) be the sequences of the fitness values and the predicted values of a population x_1, …, x_λ, respectively. A pair of indices (i, j), such that i ≠ j, is said to be concordant if both y_i > y_j and ŷ_i > ŷ_j or if both y_i < y_j and ŷ_i < ŷ_j. A discordant pair is one fulfilling that both y_i > y_j and ŷ_i < ŷ_j or both y_i < y_j and ŷ_i > ŷ_j. Let n_c and n_d denote the number of concordant and discordant pairs of indices, respectively. The Kendall correlation coefficient between the vectors y and ŷ is defined as:

  τ = (n_c − n_d) / ( λ(λ − 1)/2 ).

In the corresponding branch of Algorithm 2, the value of τ is decreasingly scaled into the interval [0, 1]: ε ← (1 − τ)/2.
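A direct implementation of the coefficient and its scaling into an error measure (a sketch assuming no ties among the compared values):

```python
from itertools import combinations

def kendall_tau(y, y_hat):
    """Kendall rank correlation between two vectors without ties."""
    conc = disc = 0
    for i, j in combinations(range(len(y)), 2):
        s = (y[i] - y[j]) * (y_hat[i] - y_hat[j])
        conc += s > 0   # concordant pair: same ordering in both vectors
        disc += s < 0   # discordant pair: opposite ordering
    n = len(y)
    return (conc - disc) / (n * (n - 1) / 2)

def kendall_error(y, y_hat):
    """Decreasingly scale tau from [-1, 1] into an error in [0, 1]."""
    return (1.0 - kendall_tau(y, y_hat)) / 2.0

assert kendall_error([1, 2, 3], [10, 20, 30]) == 0.0   # perfect ranking
assert kendall_error([1, 2, 3], [30, 20, 10]) == 1.0   # reversed ranking
```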

Ranking difference error

The ranking difference error is a normalized version of a measure used in [10]. Given the rank ρ_1(i) of the i-th element of ŷ and the rank ρ_2(i) of the i-th element of y, the ranking difference error is the sum of the element-wise differences between ρ_1 and ρ_2, taking into account only the μ best-ranked points from ŷ:

  RDE_μ(ŷ, y) = ( Σ_{i: ρ_1(i) ≤ μ} |ρ_2(i) − ρ_1(i)| ) / ( max_{π ∈ S_λ} Σ_{i: π(i) ≤ μ} |i − π(i)| ),

where S_λ is the group of all permutations of the set {1, …, λ}.
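The ranking difference error can be sketched as follows (an illustration of one plausible reading of the definition; the worst-case normalizer is computed by brute force, which is feasible only for small populations):

```python
from itertools import permutations

def rde(y_hat, y, mu):
    """Ranking difference error: rank disagreement on the mu best-ranked
    points of y_hat, normalized by the worst case over all permutations."""
    lam = len(y)
    rank = lambda v: [sorted(range(lam), key=lambda i: v[i]).index(i) + 1
                      for i in range(lam)]
    r_hat, r = rank(y_hat), rank(y)
    num = sum(abs(r[i] - r_hat[i]) for i in range(lam) if r_hat[i] <= mu)
    den = max(sum(abs(pi[i] - (i + 1)) for i in range(lam) if pi[i] <= mu)
              for pi in permutations(range(1, lam + 1)))
    return num / den

assert rde([1, 2, 3, 4], [10, 20, 30, 40], mu=2) == 0.0   # identical rankings
```

A production implementation would replace the brute-force denominator with a closed-form worst-case value.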

Kullback-Leibler divergence

The Kullback-Leibler divergence of a continuous random variable P with probability density function p from a continuous random variable Q with probability density function q is defined as:

  D_KL(P ∥ Q) = ∫ p(x) ln( p(x)/q(x) ) dx.

For two multivariate normal distributions N_0 = N(μ_0, Σ_0) and N_1 = N(μ_1, Σ_1) with the same dimension k, the Kullback-Leibler divergence of N_0 from N_1 is:

  D_KL(N_0 ∥ N_1) = (1/2) ( tr(Σ_1⁻¹Σ_0) + (μ_1 − μ_0)ᵀ Σ_1⁻¹ (μ_1 − μ_0) − k + ln( det Σ_1 / det Σ_0 ) ).

In generation g, the model error estimation (Algorithm 2) computes the Kullback-Leibler divergence from a CMA-estimated multivariate normal distribution w.r.t. the fitness values y to a CMA-estimated multivariate normal distribution w.r.t. the predicted values ŷ. The procedure cma_update in steps 7 and 8 refers to one iteration of the CMA-ES from the point when a new population has been sampled. The result is normalized by the historical maximum (step 13).
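The closed-form divergence between two multivariate normal distributions can be computed directly (a sketch; a numerically robust implementation would use Cholesky factors instead of inv and det):

```python
import numpy as np

def kl_mvn(m0, S0, m1, S1):
    """Closed-form KL divergence D_KL( N(m0, S0) || N(m1, S1) )."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0)          # tr(S1^{-1} S0)
                  + d @ S1_inv @ d               # Mahalanobis term
                  - k                            # dimension
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

I = np.eye(2)
assert abs(kl_mvn(np.zeros(2), I, np.zeros(2), I)) < 1e-12  # identical -> 0
assert kl_mvn(np.zeros(2), I, np.ones(2), I) > 0            # shifted mean -> positive
```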

Fig. 1: Model error transfer functions
0:  ε (estimation of the surrogate model error, ε ∈ [0, 1]), ε_T (a threshold at which the error is truncated to 1), T (transfer function), r_u (error update rate), ε_last (model error from the previous iteration), g_m^max (upper bound for the admissible number of model generations)
1:  ε ← (1 − r_u) ε_last + r_u ε {exponential smoothing}
2:  ε_last ← ε
3:  ε ← min(ε/ε_T, 1) {truncation to [0, 1]}
4:  g_m ← round(T(ε)) {scaling into the admissible interval}
Output: g_m (updated number of model-evaluated generations)
Algorithm 3: Updating the number of model generations

Adjusting the number of model generations

The dependence of the number of consecutive model-evaluated generations g_m on the model error (Algorithm 3) is almost identical to the approach used in [11]. The history of surrogate model errors is exponentially smoothed with a rate r_u (step 1). The error is truncated at a threshold ε_T, so that the resulting ε = 1 for all values ε ≥ ε_T (step 3). In contrast to [11], we consider two different transfer functions T_1 and T_2 (plotted in Figure 1) that scale the error into the admissible interval [0, g_m^max]. Both functions are defined on [0, 1] and are decreasing, so that a low model error translates into a high number of model-evaluated generations and vice versa. Transfer function T_2 is a simple sigmoid function defined to be slightly less sensitive near the edges than in the middle. More control can thus be achieved in the regions of low and high error values. A parameter k determines the steepness of the sigmoid curve.
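Algorithm 3 can be sketched as follows (a simplified illustration: a hypothetical linear transfer function stands in for T_1 and T_2):

```python
def update_generations(err, err_last, r_u, eps_T, g_max):
    """Map a model error estimate to a number of model-evaluated generations:
    exponential smoothing, truncation at eps_T, then a linear transfer."""
    e = (1 - r_u) * err_last + r_u * err   # step 1: exponential smoothing
    e = min(e / eps_T, 1.0)                # step 3: truncate to [0, 1] at eps_T
    g_m = round((1.0 - e) * g_max)         # step 4: low error -> many model generations
    return g_m, e

g_m, _ = update_generations(err=0.0, err_last=0.0, r_u=0.5, eps_T=0.5, g_max=5)
assert g_m == 5   # zero error: use the model for the maximal number of generations
g_m, _ = update_generations(err=1.0, err_last=1.0, r_u=0.5, eps_T=0.5, g_max=5)
assert g_m == 0   # maximal error: fall back to the true fitness immediately
```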

4 Experimental Setup

The proposed adaptive generation-based evolution control for the S-CMA-ES with three different surrogate model error measures is evaluated on the noiseless testbed of the COCO/BBOB (Comparing Continuous Optimizers / Black-Box Optimization Benchmarking) framework [7, 8] and compared with the S-CMA-ES and CMA-ES.

Each function is defined everywhere on R^D and has its optimum in [−5, 5]^D for all dimensionalities D. For every function and every dimensionality, 15 trials of the optimizer are run on independent instances, which differ in linear transformations of the x-space or shifts of the x-space. In our experiments, the instances recommended for the BBOB 2015 workshop were used. Each trial is terminated when the function optimum is reached within a small tolerance or when a given budget of function evaluations is used up. Experiments were run for dimensionalities 2, 3, 5, 10, and 20. The algorithms' settings are summarized in the following subsections.

4.1 CMA-ES

The CMA-ES results in BBOB format were downloaded from the BBOB 2010 workshop archive. The CMA-ES used in those experiments was in version 3.40.beta and utilized a restart strategy (known as IPOP-CMA-ES), in which the population size is increased by the factor IncPopSize after each restart [1]. The default parameter values were employed.

4.2 S-CMA-ES

The S-CMA-ES was tested with two numbers of model-evaluated generations: g_m = 1 (further denoted "GP-1") and g_m = 5 ("GP-5"). All other S-CMA-ES settings were left as described in [2], in particular the Mahalanobis distance threshold, the starting values of the parameters of the Matérn covariance function, and the starting value of the GP noise parameter. If not mentioned otherwise, the corresponding settings of the adaptive versions of the S-CMA-ES are as just stated.

Table 1: Discretization of the A-S-CMA-ES parameters.

In order to find the most promising settings for each considered surrogate error measure, a full factorial experiment was conducted on one half of the noiseless testbed. The discretization of the continuous parameters is reported in Table 1. All possible combinations of the parameters were ranked on the selected functions according to the lowest achieved Δf^med (see Section 6) for different numbers of function evaluations. The best settings were chosen according to the highest sum of first-rank counts, with ties resolved according to the lowest sum of ranks. All of the best settings shared the same maximum number of model-evaluated generations g_m^max. The remaining winning values are summarized in the following paragraphs.

Kendall correlation coefficient (ADA-Kendall)

The transfer function, error threshold ε_T and update rate r_u as selected in the full factorial experiment.

Ranking difference error (ADA-RD)

The same values as for ADA-Kendall, except for the transfer function.

Kullback-Leibler divergence (ADA-KL)

The transfer function, error threshold ε_T and update rate r_u as selected in the full factorial experiment.

5 CPU Timing

Table 2: The time in seconds per function evaluation for the adaptive S-CMA-ES.

In order to assess computational costs other than the number of function evaluations, we calculated the CPU time per function evaluation for each algorithm and each dimensionality. Each experiment was divided into jobs by dimensionalities, functions and instances. All jobs were run in a single thread on the Czech national grid MetaCentrum. The average time per function evaluation for each algorithm and each tested dimensionality is summarized in Table 2.

6 Results

Table 3: Mean ranks of the CMA-ES, the S-CMA-ES and all A-S-CMA-ES versions over the BBOB and the Iman-Davenport variant of the Friedman test for the 10 considered combinations of dimensionalities and evaluation budgets. The lowest value is highlighted in bold. Statistically significant results are marked by an asterisk.

We test the difference in the algorithms' convergence for significance on the whole noiseless testbed with the non-parametric Friedman test [3]. The algorithms are ranked on each BBOB function with respect to medians of the log-scaled minimal distance from the function optimum, denoted Δf^med, at a fixed budget of function evaluations.
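The ranking step underlying the test can be sketched as follows (a toy illustration with hypothetical Δf values; the Friedman statistic itself and the Iman-Davenport correction are omitted):

```python
import numpy as np

def mean_ranks(scores):
    """scores[i, j]: achieved log-distance of algorithm j on benchmark function i
    (lower is better). Returns each algorithm's rank averaged over functions."""
    ranks = np.argsort(np.argsort(scores, axis=1), axis=1) + 1.0  # 1 = best, no ties
    return ranks.mean(axis=0)

# Hypothetical scores: 3 benchmark functions, 3 algorithms.
scores = np.array([[0.1, 0.3, 0.2],
                   [0.2, 0.5, 0.1],
                   [0.3, 0.4, 0.2]])
assert np.allclose(mean_ranks(scores), [5/3, 3.0, 4/3])
```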

To account for different optimization scenarios, the test is conducted separately for all considered dimensionalities of the input space and two function evaluation budgets, a higher and a lower one. Let #FE_T be the smallest number of function evaluations at which at least one algorithm reached the target, or the overall evaluation budget if the target has not been reached. The higher budget for the tests is set to #FE_T and the lower budget to a fixed fraction of #FE_T.

Mean ranks from the Friedman test are given in Table 3, together with the critical value of the test.

The mean ranks differ significantly for all tested scenarios except for both tested numbers of function evaluations in the lowest tested dimensionality and for the higher tested number of function evaluations in one further low-dimensional case. From the middle dimensionalities upwards, the lowest mean rank is achieved either by ADA-Kendall or ADA-RD at both tested budgets.

In order to show pairwise differences, we perform a pairwise comparison of the algorithms' average ranks by the post-hoc Friedman test with the Bergmann-Hommel correction of the family-wise error [4] in cases when the null hypothesis of equal algorithms' performance was rejected. To better illustrate the differences between the algorithms, we also count the number of benchmark functions on which one algorithm achieved a better rank than the other. The pairwise scores and the statistical significance of the pairwise mean rank differences are reported in Table 4. In the post-hoc test, ADA-Kendall significantly outperforms both the CMA-ES and GP-5 in the higher dimensionalities. It also significantly outperforms GP-1 in the highest dimensionality at the higher tested budget.

For illustration, the average control frequency, given by the ratio of the number of original-fitness-evaluated generations to the number of model-evaluated generations within one trial, computed from trials on f8 (Rosenbrock's function), is given in Figure 2.

Fig. 2: Average control frequency (the ratio of the number of original-fitness-evaluated generations to the number of model-evaluated generations) in the A-S-CMA-ES, measured in 15 trials of each algorithm on f8 (Rosenbrock's function).

The algorithm ADA-KL led to generally lower control frequencies than its competitors, which might explain its slightly inferior performance. Similar results were observed for the remaining functions and dimensionalities.

The cases in which ADA-Kendall and ADA-RD are able to switch between more exploitation-oriented and more data-gathering-oriented behaviour can be studied in the results from COCO's postprocessing. GP-5 outperforms both GP-1 and the CMA-ES on the lower and middle parts of the empirical cumulative distribution functions (ECDFs) for basically all dimensionalities (Figure 3). On the other hand, GP-1 outperforms GP-5 especially in later phases of the search (Figure 3).

The ability of ADA-Kendall and ADA-RD to switch to a less exploitation-oriented mode when appropriate is evident in the ECDF plots in 20-D, especially on the moderate and the all-functions groups (top right and bottom right in Figure 3), with the exception of the well-structured multimodal group (middle right), where they fail in the middle part of the search, and the weakly structured multimodal group (bottom left), where they fail towards the end of the search.

2-D
             CMA-ES    GP-1     GP-5    ADA-KL   ADA-Ken  ADA-RD
  CMA-ES      -   -    8   8   11   8   10   7   11  10    7   9
  GP-1       16  16    -   -   12   9   13   8   13   9    9   6
  GP-5       13  16   12  15    -   -   11  10   14  13    9  14
  ADA-KL     14  17   11  15   13  13    -   -   16  14   12  15
  ADA-Ken    13  14   11  15   10  10    8   9    -   -    7  11
  ADA-RD     17  15   15  17   15   9   12   9   17  12    -   -

3-D
             CMA-ES    GP-1     GP-5    ADA-KL   ADA-Ken  ADA-RD
  CMA-ES      -   -   11   9    7  13    8  10    9   9    7   8
  GP-1       13  15    -   -    7  14    7  11    6   8   10   9
  GP-5       17  11   17  10    -   -   15   9   13   6   14   9
  ADA-KL     16  14   17  13    9  15    -   -   13  11   10   9
  ADA-Ken    15  15   18  16   11  17   11  13    -   -   11  12
  ADA-RD     17  16   14  15   10  14   14  15   13  10    -   -

5-D
             CMA-ES    GP-1     GP-5    ADA-KL   ADA-Ken  ADA-RD
  CMA-ES      -   -    8  12   11  14   11  14    7  10    2   8
  GP-1       16  12    -   -   11  12   11  12    9   7    3   4
  GP-5       13  10   13  12    -   -   10   6    9   5    8   7
  ADA-KL     13  10   13  11   14  18    -   -    7  10    8   5
  ADA-Ken    17  14   15  17   15  19   17  14    -   -   10   9
  ADA-RD     22  16   21  20   16  17   16  19   14  14    -   -

10-D
             CMA-ES    GP-1     GP-5    ADA-KL   ADA-Ken  ADA-RD
  CMA-ES      -   -    7  12   13  14   10  14    1   8    1   4
  GP-1       17  12    -   -   15  15   14  14    4   5    5   4
  GP-5       11  10    9   9    -   -    8  11    6   4    8   5
  ADA-KL     14  10   10  10   16  13    -   -    8   5    9   8
  ADA-Ken    23  16   20  19   18  20   16  19    -   -   13  12
  ADA-RD     23  20   19  20   16  19   15  15   11  12    -   -

20-D
             CMA-ES    GP-1     GP-5    ADA-KL   ADA-Ken  ADA-RD
  CMA-ES      -   -    7   5   11  12    9  13    4   3    3   5
  GP-1       17  19    -   -   14  17   12  16    7   6    9   8
  GP-5       13  12   10   7    -   -    4   5    6   5   10   6
  ADA-KL     15  11   12   8   20  19    -   -    9   8   13  10
  ADA-Ken    20  21   17  18   18  19   15  16    -   -   17  17
  ADA-RD     21  19   15  16   14  18   11  14    7   7    -   -

Table 4: A pairwise comparison of the algorithms in 2-D, 3-D, 5-D, 10-D and 20-D over the BBOB for 2 different evaluation budgets. The comparison is based on medians over runs on 15 instances for each of the 24 functions. The number of wins of the i-th algorithm against the j-th algorithm over all benchmark functions is given in the i-th row and j-th column; the two values in each cell correspond to the two tested evaluation budgets. The asterisk marks the row algorithm achieving a significantly lower value of the objective function than the column algorithm according to the Friedman post-hoc test with the Bergmann-Hommel correction at the chosen family-wise significance level.
Fig. 3: Bootstrapped empirical cumulative distribution of the number of objective function evaluations divided by dimension (FEvals/DIM) for all functions and subgroups in 20-D (panels: separable functions, moderate functions, ill-conditioned functions, multimodal functions, weakly structured multimodal functions, all functions). The targets are chosen so that the best algorithm from BBOB 2009 just failed to reach them within a given budget of FEvals, with different target values chosen equidistant in logscale. The "best 2009" line corresponds to the best results observed during BBOB 2009 for each selected target.

7 Conclusion

In this paper, we implemented several modifications of the Surrogate CMA-ES (S-CMA-ES), an algorithm using generation-based evolution control in connection with GPs. We considered three measures of the surrogate model error according to which an adequate number of upcoming model-evaluated generations can be estimated online. The three resulting algorithms were compared on the COCO/BBOB framework with the S-CMA-ES parametrized by two different numbers of consecutive model-evaluated generations. Since the work on the adaptive extension is still in progress, the presented results summarize the performance of all compared algorithms on the whole BBOB framework or its function groups. We found two error measures, the Kendall rank correlation and the rank difference error, that significantly outperformed the S-CMA-ES used with the higher number of model-evaluated generations, especially in higher dimensionalities of the input space. However, both of these algorithms provided only a minor improvement over the S-CMA-ES used with the lower number of model-evaluated generations, and in some tested scenarios they fell behind both tested settings of the S-CMA-ES. An area for further research is the adjustment of other surrogate model parameters besides the control frequency, such as the number of training points or the radius of the area from which they are selected.

8 Acknowledgments

The research reported in this paper has been supported by the Czech Science Foundation (GAČR) grant 17-01251.

Access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum, provided under the programme "Projects of Large Research, Development, and Innovations Infrastructures" (CESNET LM2015042), is greatly appreciated.


  • [1] A. Auger and N. Hansen. A restart CMA evolution strategy with increasing population size. In 2005 IEEE Congress on Evolutionary Computation. IEEE, 2005.
  • [2] L. Bajer, Z. Pitra, and M. Holeňa. Benchmarking Gaussian processes and random forests surrogate models on the BBOB noiseless testbed. In Proceedings of the Companion Publication of the 2015 Genetic and Evolutionary Computation Conference - GECCO Companion '15. ACM, 2015.
  • [3] J. Demšar. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res., 7:1–30, December 2006.
  • [4] S. García and F. Herrera. An extension on "Statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons. J. Mach. Learn. Res., 9:2677–2694, 2008.
  • [5] N. Hansen. The CMA evolution strategy: A tutorial. CoRR, abs/1604.00772, 2016.
  • [6] N. Hansen, A. Auger, R. Ros, S. Finck, and P. Pošík. Comparing results of 31 algorithms from the Black-Box Optimization Benchmarking BBOB-2009. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation - GECCO '10, pages 1689–1696, New York, NY, USA, 2010. ACM.
  • [7] N. Hansen, S. Finck, R. Ros, and A. Auger. Real-parameter Black-Box Optimization Benchmarking 2009: Noiseless functions definitions. Technical report, INRIA, 2009, updated 2010.
  • [8] N. Hansen, S. Finck, R. Ros, and A. Auger. Real-parameter Black-Box Optimization Benchmarking 2012: Experimental setup. Technical report, INRIA, 2012.
  • [9] Y. Jin and B. Sendhoff. Fitness approximation in evolutionary computation - A survey. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '02, pages 1105–1112, San Francisco, CA, USA, 2002. Morgan Kaufmann Publishers Inc.
  • [10] S. Kern, N. Hansen, and P. Koumoutsakos. Local meta-models for optimization using evolution strategies, pages 939–948. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006.
  • [11] I. Loshchilov, M. Schoenauer, and M. Sebag. Self-adaptive surrogate-assisted Covariance Matrix Adaptation Evolution Strategy. In Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation - GECCO '12. ACM, 2012.
  • [12] I. Loshchilov, M. Schoenauer, and M. Sebag. KL-based control of the learning schedule for surrogate black-box optimization. CoRR, abs/1308.2655, 2013.
  • [13] Z. Pitra, L. Bajer, J. Repický, and M. Holeňa. Overview of surrogate-model versions of Covariance Matrix Adaptation Evolution Strategy. In Proceedings of the Genetic and Evolutionary Computation Conference 2017 (GECCO '17), Berlin, Germany, July 15–19, 2017. ACM, 2017.
  • [14] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning series. MIT Press, 2006.