BBOB Instance Analysis: Landscape Properties and Algorithm Performance across Problem Instances

11/29/2022
by Fu Xing Long, et al.

Benchmarking is a key aspect of research into optimization algorithms, and as such the way in which the most popular benchmark suites are designed implicitly guides some parts of algorithm design. One of these suites is the black-box optimization benchmarking (BBOB) suite of 24 single-objective noiseless functions, which has been a standard for over a decade. Within this problem suite, different instances of a single problem can be created, which is beneficial for testing the stability and invariance of algorithms under transformations. In this paper, we investigate the BBOB instance creation protocol by considering a set of 500 instances for each BBOB problem. Using exploratory landscape analysis, we show that the distribution of landscape features across BBOB instances is highly diverse for a large set of problems. In addition, we run a set of eight algorithms across these 500 instances and investigate the cases in which statistically significant performance differences occur. We argue that, while the transformations applied in BBOB instances do indeed seem to preserve the high-level properties of the functions, their differences in practice should not be overlooked, particularly when treating the problems as box-constrained instead of unconstrained.
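The instance mechanism discussed above can be illustrated with a simplified sketch. This is not the actual BBOB implementation: the real protocol derives translations, rotations, and target-value shifts from instance-specific seeds in a prescribed way, and many functions apply additional transformations. The sketch below only mimics the core idea, that each instance id deterministically relocates the optimum and shifts the optimal value, on a toy sphere function; all names and parameter choices here are hypothetical.

```python
import random

def make_sphere_instance(dim, instance_id, lower=-5.0, upper=5.0):
    """Hypothetical sketch of BBOB-style instance creation: each instance
    id deterministically translates the optimum and shifts the target
    value. The real BBOB protocol is more involved (seeded rotations,
    function-specific transformations)."""
    rng = random.Random(instance_id)  # instance id seeds the transformation
    # Place the translated optimum inside the box, away from the boundary
    x_opt = [rng.uniform(lower * 0.8, upper * 0.8) for _ in range(dim)]
    f_opt = round(rng.uniform(-100.0, 100.0), 2)  # shifted optimal value

    def f(x):
        # Sphere function evaluated relative to the translated optimum
        return sum((xi - oi) ** 2 for xi, oi in zip(x, x_opt)) + f_opt

    return f, x_opt, f_opt

# Two instances of the "same" problem share structure but differ in
# optimum location and optimal value
f1, x1, v1 = make_sphere_instance(dim=3, instance_id=1)
f2, x2, v2 = make_sphere_instance(dim=3, instance_id=2)
```

An optimizer that is invariant under translation should behave identically on both instances; treating the problem as box-constrained, however, means the optimum's distance to the boundary varies between instances, which is one source of the practical differences the paper studies.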


