
Reviewing and Benchmarking Parameter Control Methods in Differential Evolution

by Ryoji Tanabe et al.

Many Differential Evolution (DE) algorithms with various parameter control methods (PCMs) have been proposed. However, previous studies usually treated a PCM as an integral component of a complex DE algorithm, so the characteristics and performance of each method are poorly understood. We present an in-depth review of 24 PCMs for the scale factor and crossover rate in DE, together with a large-scale benchmarking study. We carefully extract the 24 PCMs from their original, complex algorithms and describe them in a systematic manner. Our review facilitates the understanding of similarities and differences between existing, representative PCMs. The performance of DE with the 24 PCMs and 16 variation operators is investigated on 24 black-box benchmark functions. Our benchmarking results reveal which methods exhibit high performance when embedded in a standardized framework under 16 different conditions, independently of their original, complex algorithms. We also investigate how much room there is for further improvement of PCMs by comparing the 24 methods with an oracle-based model, which can be considered a conservative lower bound on the performance of an optimal method.
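To make concrete where the scale factor F and crossover rate CR enter DE, the following is a minimal, illustrative sketch of the classic DE/rand/1/bin variation operator with *fixed* parameters. It is not the paper's benchmarking framework or any specific PCM; a PCM would adapt F and CR online at exactly the two points marked below. All function and parameter names here are our own choices for illustration.

```python
import numpy as np

def de_rand1_bin(f, bounds, pop_size=20, F=0.5, CR=0.9, max_gens=200, seed=0):
    """Minimal DE/rand/1/bin with fixed F (scale factor) and CR (crossover
    rate). A parameter control method would instead choose F and CR per
    generation or per individual; this sketch only shows where they act."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T   # bounds: list of (lo, hi)
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(max_gens):
        for i in range(pop_size):
            # Mutation "rand/1": three distinct individuals, none equal to i.
            # F scales the difference vector (this is where a PCM sets F).
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # Binomial ("bin") crossover: each coordinate is taken from the
            # mutant with probability CR (this is where a PCM sets CR); one
            # randomly chosen coordinate is always taken from the mutant.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy one-to-one selection.
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(fit.argmin())
    return pop[best], fit[best]

# Usage: minimize the 3-dimensional sphere function.
best_x, best_f = de_rand1_bin(lambda x: float(np.sum(x * x)), [(-5, 5)] * 3)
```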
