A Complementarity Analysis of the COCO Benchmark Problems and Artificially Generated Problems

04/27/2021
by Urban Škvorc et al.

When designing a benchmark problem set, it is important that the selected problems generalize well to the set of all possible problems. One way to ease this difficult task is to use artificially generated problems. In this paper, one such approach for generating single-objective continuous problems is analyzed and compared with the COCO benchmark problem set, a well-known problem set for benchmarking numerical optimization algorithms. Using Exploratory Landscape Analysis, each problem is represented as a vector of numerical features, and these representations are further reduced with Singular Value Decomposition. We show that such representations allow us to explore the relations between the problems by applying visualization and correlation analysis techniques, with the goal of decreasing the bias in benchmark problem assessment.
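The sketch below illustrates the kind of pipeline the abstract describes, under stated assumptions: it is not the authors' exact setup. The feature matrix is filled with placeholder random data; in practice each row would hold the Exploratory Landscape Analysis feature vector of one benchmark problem, computed with a landscape-analysis package such as flacco (an assumption for illustration). The features are standardized, reduced with SVD, and the resulting low-dimensional representations are used for visualization and correlation analysis.

```python
# Minimal sketch of an ELA-feature + SVD pipeline (illustrative only).
# X is placeholder data; in practice, row i would be the ELA feature
# vector of benchmark problem i (e.g., computed with flacco/pflacco --
# an assumption, not the paper's exact procedure).
import numpy as np

rng = np.random.default_rng(0)
n_problems, n_features = 48, 30          # hypothetical sizes
X = rng.standard_normal((n_problems, n_features))

# Standardize each feature so no single ELA feature dominates the SVD.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Thin SVD: scaling the left singular vectors by the singular values
# yields low-dimensional coordinates for each problem.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
coords_2d = U[:, :2] * s[:2]             # 2-D embedding for plotting

# Pairwise correlation between problems, based on their standardized
# feature vectors, for a correlation-analysis view of problem similarity.
corr = np.corrcoef(X)
print(coords_2d.shape, corr.shape)       # (48, 2) (48, 48)
```

Plotting `coords_2d` places similar problems near one another, so overlaps or gaps between the artificially generated problems and the COCO problems become visible at a glance.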
