On the Assessment of Benchmark Suites for Algorithm Comparison

04/15/2021
by David Issa Mattos, et al.

Benchmark suites, i.e., collections of benchmark functions, are widely used to compare black-box optimization algorithms. Over the years, research has identified many desirable qualities for benchmark suites, such as diverse topologies, varying difficulty levels, scalability, and representativeness of real-world problems, among others. However, while topological characteristics have been studied before, no study has statistically evaluated the difficulty level of benchmark functions, how well they discriminate between optimization algorithms, or how suitable a benchmark suite is for algorithm comparison. In this paper, we propose the use of an item response theory (IRT) model, the Bayesian two-parameter logistic model for multiple attempts, to statistically evaluate these aspects with respect to the empirical success rate of algorithms. With this model, we can assess the difficulty level of each benchmark function, how well it discriminates between algorithms, the ability score of each algorithm, and how much information the benchmark suite contributes to the estimation of the ability scores. We demonstrate the use of this model on two well-known benchmark suites, the Black-Box Optimization Benchmark (BBOB) for continuous optimization and the Pseudo Boolean Optimization (PBO) suite for discrete optimization. We found that most benchmark functions of the BBOB suite have high difficulty levels (relative to the optimization algorithms) and low discrimination. For the PBO suite, most functions have good discrimination parameters but are often too easy. We discuss potential uses of IRT in benchmarking, including improving the design of benchmark suites, measuring multiple aspects of algorithms, and designing adaptive suites.
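To make the model concrete, below is a minimal Python sketch of the two-parameter logistic (2PL) success curve that underlies this approach. The parameter values (theta, a, b, n) are hypothetical and chosen purely for illustration; the paper's exact Bayesian parameterization, priors, and inference procedure may differ.

import numpy as np

def success_probability(theta, a, b):
    """Two-parameter logistic (2PL) IRT curve.

    theta : ability score of the algorithm
    a     : discrimination of the benchmark function
    b     : difficulty of the benchmark function
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical values (not taken from the paper): an algorithm with
# ability 0.5 on a benchmark with discrimination 1.2 and difficulty 1.0,
# run for n = 15 independent repetitions.
theta, a, b, n = 0.5, 1.2, 1.0, 15
p = success_probability(theta, a, b)

# With multiple attempts per benchmark, the number of successful runs is
# naturally modeled as Binomial(n, p), so the expected success count is n * p.
print(f"P(success per run) = {p:.3f}, expected successes out of {n} = {n * p:.1f}")

Under this reading, a highly discriminating benchmark (large a) separates algorithms of similar ability sharply, while a benchmark whose difficulty b sits far above every algorithm's ability contributes little information about their relative abilities, which matches the failure mode the paper reports for much of the BBOB suite.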



Related research

07/01/2020  Benchmarking for Metaheuristic Black-Box Optimization: Perspectives and Open Challenges
10/08/2020  Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking
07/29/2014  A CUDA-Based Real Parameter Optimization Benchmark
09/12/2021  A Scalable Continuous Unbounded Optimisation Benchmark Suite from Neural Network Regression
10/02/2020  Reviewing and Benchmarking Parameter Control Methods in Differential Evolution
10/08/2020  Olympus: a benchmarking framework for noisy optimization and experiment planning