IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics

We present IOHexperimenter, the experimentation module of the IOHprofiler project, which aims at providing an easy-to-use and highly customizable toolbox for benchmarking iterative optimization heuristics such as evolutionary and genetic algorithms, local search algorithms, Bayesian optimization techniques, etc. IOHexperimenter can be used as a stand-alone tool or as part of a benchmarking pipeline that uses other components of IOHprofiler such as IOHanalyzer, the module for interactive performance analysis and visualization. IOHexperimenter provides an efficient interface between optimization problems and their solvers while allowing for granular logging of the optimization process. These logs are fully compatible with existing tools for interactive data analysis, which significantly speeds up the deployment of a benchmarking pipeline. The main components of IOHexperimenter are the environment to build customized problem suites and the various logging options that allow users to steer the granularity of the data records.


1 Introduction

In order to improve upon state-of-the-art optimization algorithms and to compare against them, it is important to gain insights into the behaviour of these algorithms on a wide range of problems. To do so systematically, a robust benchmarking setup has to be created that allows for rigorous testing of algorithms. While many sets of benchmarking problems exist, they are often limited in scope. These limitations can lie either in application scope, with tools created for very specific sets of problems that are hard to extend, or in reusable design, where interaction between different parts of the benchmarking pipeline is hard to achieve. Therefore, an overarching benchmarking pipeline would be highly beneficial, as it allows for an easy transition from the implementation of algorithms to the analysis and comparison of performance data.

In this paper, we introduce IOHexperimenter, an easy-to-use, highly customizable and expandable tool for benchmarking iterative optimization heuristics (IOHs). IOHexperimenter is part of the overarching IOHprofiler project, which connects algorithm frameworks, problem suites, interactive data analysis, and performance repositories into an extendable benchmarking pipeline. Within this pipeline, IOHexperimenter can be considered the interface between algorithms and problems, allowing the consistent collection of both performance data and algorithmic data, such as the evolution of control parameters during the optimization process.

We consider here a benchmarking process consisting of three components: problems, loggers, and algorithms. While these components interact to perform the benchmarking, they should be usable in a stand-alone manner, allowing any of them to be modified without impacting the behaviour of the others. Within IOHexperimenter, an interface is provided to ensure that any changes to the setup remain compatible with the other components of the benchmarking pipeline.

Figure 1: Workflow of IOHexperimenter

2 Functionality

At its core, IOHexperimenter provides a standard interface towards expandable benchmark problems and several loggers to track the performance and the behavior (internal parameters and states) of algorithms during the optimization process. This core is integrated with a wide range of existing benchmarking tools, including problem suites such as PBO (Doerr et al., 2020) and the W-model (Weise et al., 2020) for discrete optimization and COCO’s BBOB (Hansen et al., 2021) for the continuous case. On the algorithm side, IOHexperimenter has been connected to several modular algorithm frameworks, such as the modular GA (Ye et al., 2021) and the modular CMA-ES (de Nobel et al., 2021). Additionally, output generated by the included loggers is compatible with the IOHanalyzer module (Wang et al., 2020) for interactive performance analysis.

Figure 1 shows how IOHexperimenter can be placed in a typical benchmarking workflow. The key factor here is the flexibility of its design: IOHexperimenter can be used with any user-provided solvers and problems with minimal overhead, and ensures that experimental results are output in a standardized format. Because of this, the data produced by IOHexperimenter is compatible with post-processing frameworks like IOHanalyzer, enabling an efficient path from algorithm design to performance analysis. In addition to the built-in interfaces to existing software, IOHexperimenter aims to provide a user-accessible way to customize the benchmarking setup. IOHexperimenter is built in C++, with an interface to Python. In this paper, we describe the functionality of the package at a high level, without going into implementation details; technical documentation for both the C++ and Python versions can be found on the IOHprofiler wiki at https://iohprofiler.github.io/, which provides a getting-started guide and several use cases. In the following, we introduce the typical usage of IOHexperimenter, as well as the ways in which it can be customized to fit different benchmarking scenarios.
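
As an illustration of this workflow, the following minimal sketch runs a simple user-provided solver on a built-in problem through the Python interface (the ioh package). It is a sketch only: the exact signature of ioh.get_problem (in particular, how the problem class is selected) may differ between versions of the bindings.

    import numpy as np
    import ioh

    def random_search(problem, budget=1000):
        """A deliberately simple user-provided solver: uniform sampling in [-5, 5]^n."""
        best_y = float("inf")
        for _ in range(budget):
            x = np.random.uniform(-5, 5, problem.meta_data.n_variables)  # BBOB search domain
            y = problem(x)  # evaluating the problem also triggers any attached loggers
            best_y = min(best_y, y)
        return best_y

    # A 5-dimensional instance of the BBOB sphere function
    problem = ioh.get_problem("Sphere", instance=1, dimension=5)
    best = random_search(problem)
    print(problem.state.evaluations, best)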

2.1 Problems

In IOHexperimenter, a problem instance is defined as a composition $T_y \circ f \circ T_x$, in which $f\colon \mathcal{X} \to \mathbb{R}$ is a benchmark problem (e.g., $f(x) = \sum_{i=1}^{n} x_i$ for OneMax and $f(x) = \sum_{i=1}^{n} x_i^2$ for the sphere function), and $T_x\colon \mathcal{X} \to \mathcal{X}$ and $T_y\colon \mathbb{R} \to \mathbb{R}$ are automorphisms supported on $\mathcal{X}$ and $\mathbb{R}$, respectively, representing transformations in the problem's domain and range (e.g., translations and rotations for $\mathcal{X} \subseteq \mathbb{R}^n$). To generate a problem instance, one needs to specify a tuple consisting of a problem $f$, an instance identifier $i$, and the dimension $n$ of the problem. Note that both transformations are applied to generalize the benchmark problem, where the instance id serves as the random seed for instantiating $T_x$ and $T_y$. Any problem instance that conforms to this definition of $T_y \circ f \circ T_x$ can easily be integrated into IOHexperimenter, using the C++ core or the Python interface. (Note that multi-objective problems do not follow this structure and are not yet supported within IOHexperimenter; integration of both noisy and mixed-variable-type objective functions is in development.)
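
In the Python interface, such a problem instance is obtained directly from the (problem, instance id, dimension) tuple, and user-defined functions that conform to the same structure can be wrapped as IOHexperimenter problems. The sketch below assumes the ioh Python bindings; the problem-class keyword and the wrapping helper used here (ioh.problem.wrap_real_problem) are taken from a recent version of the bindings and may differ in others.

    import ioh

    # Instance 2 of OneMax in dimension 16: the instance id seeds the transformations T_x and T_y.
    om = ioh.get_problem("OneMax", instance=2, dimension=16,
                         problem_class=ioh.ProblemClass.PBO)  # older versions select the class via a string argument
    print(om.meta_data.name, om.meta_data.instance, om.meta_data.n_variables)

    # Wrapping a user-defined continuous function so that it behaves like a built-in problem
    def my_function(x):
        return sum(xi ** 2 for xi in x)

    ioh.problem.wrap_real_problem(my_function, name="MySphere")  # helper name is an assumption
    wrapped = ioh.get_problem("MySphere", instance=1, dimension=5)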

The transformation methods are particularly important for robust benchmarking, as they allow for the creation of multiple problem instances from the same base function, which enables checking invariance properties of algorithms, such as scaling invariance. Built-in transformations for pseudo-Boolean functions are available (Doerr et al., 2018), as well as the transformation methods for continuous optimization used by Hansen et al. (2021).
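
For example, assuming the same ioh Python bindings, two instances of the same base function differ only in these seeded transformations, so the same candidate solution generally receives different function values on different instances:

    import ioh

    x = [0.0] * 5
    # Instances 1 and 2 of the sphere function share the base function f(x) = sum(x_i^2),
    # but apply different (seeded) domain and range transformations.
    f1 = ioh.get_problem("Sphere", instance=1, dimension=5)
    f2 = ioh.get_problem("Sphere", instance=2, dimension=5)
    print(f1(x), f2(x))  # typically different values due to the instance transformations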

By combining several problems, a problem suite can be created. Such a suite makes benchmarking more convenient: built-in iterators allow a solver to easily run on all selected problem instances within the suite, as illustrated below. Additionally, an interface to two classes of W-model problem generators (based on OneMax and LeadingOnes, respectively) (Weise et al., 2020) is available.
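
The sketch below assumes the ioh Python bindings' suite constructors, which take lists of problem ids, instances, and dimensions (argument order and names may differ between versions), and a hypothetical run_solver routine standing in for any user-provided solver.

    import ioh

    # PBO problems 1 (OneMax) and 2 (LeadingOnes), instance 1, in dimensions 16 and 64
    suite = ioh.suite.PBO([1, 2], [1], [16, 64])

    for problem in suite:        # built-in iterator over all selected problem instances
        # run_solver(problem)    # placeholder for any user-provided solver
        problem.reset()          # reset the problem state between runs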

2.2 Data Logging

IOHexperimenter provides loggers to track the performance of algorithms during the optimization process. These loggers can be tightly coupled with the problems: when evaluating a problem, the attached loggers will be triggered with the relevant information to store. This information will be performance-oriented by default, with customizable levels of granularity, but can also include any algorithm parameters. This can be especially useful for tracking the evolution of self-adaptive parameters in iterative optimization algorithms.
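
For instance, with the ioh Python bindings, the default Analyzer logger can be attached to a problem, configured with a logging trigger, and asked to track an internal parameter of the algorithm. The trigger constant and the watch method used below are assumptions based on a recent version of the bindings and may differ in others.

    import ioh

    class MySolver:
        """A placeholder solver with an internal parameter worth logging."""
        def __init__(self):
            self.step_size = 1.0
        def __call__(self, problem):
            ...  # optimization loop evaluating problem(x) and adapting self.step_size

    solver = MySolver()
    logger = ioh.logger.Analyzer(
        root="ioh_data",                               # output directory
        folder_name="my-experiment",
        algorithm_name="MySolver",
        triggers=[ioh.logger.trigger.ON_IMPROVEMENT],  # log only improving evaluations
    )
    logger.watch(solver, "step_size")                  # record the parameter alongside the performance data

    problem = ioh.get_problem("Sphere", instance=1, dimension=5)
    problem.attach_logger(logger)
    solver(problem)  # every problem(x) call now triggers the attached logger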

The default logger makes use of a two-part data format: meta-information, such as function id, instance, and dimension, is written to .info files, while the performance data itself is written to space-separated .dat files. A full specification of this format can be found in Wang et al. (2020). Data in this format can be used directly with IOHanalyzer for interactive analysis of the recorded performance metrics.

In addition to the built-in loggers, custom logging functionality can be implemented within IOHexperimenter as well. This can be used to reduce the data footprint in large-scale experiments such as algorithm configuration, where only the final performance measure is relevant (Aziz-Alaoui et al., 2021), as sketched below.
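
As a sketch of what such a custom logger could look like in the Python interface: the code below assumes that the bindings expose an AbstractLogger base class in ioh.logger whose call hook receives per-evaluation log information; the class name, the hook name, and the fields of the log record are assumptions and may deviate from the actual API.

    import ioh

    class FinalValueLogger(ioh.logger.AbstractLogger):  # base-class name is an assumption
        """Keeps only the best-so-far value instead of writing per-evaluation records."""
        def __init__(self):
            super().__init__()
            self.best = float("inf")

        def call(self, log_info):                       # hook and field names are assumptions
            self.best = min(self.best, log_info.raw_y_best)

    logger = FinalValueLogger()
    problem = ioh.get_problem("Sphere", instance=1, dimension=5)
    problem.attach_logger(logger)
    # ... run a solver on `problem`, then read logger.best as the single performance measure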

3 Conclusions and Future Work

IOHexperimenter is a tool for benchmarking iterative optimization heuristics in an approachable manner. Its ease of use across multiple programming languages helps to reduce the barrier to reproducible benchmarking, while its customizability ensures broad applicability to many scenarios. While IOHexperimenter currently supports only single-objective, noiseless optimization, an extension to other types of problems would be desirable, allowing for more general usage of IOHexperimenter. Additionally, support for arbitrary combinations of variable types would enable the creation of benchmark suites in the mixed-integer domain. Since IOHexperimenter is part of the ongoing IOHprofiler initiative, it can be slotted into a benchmarking pipeline together with the IOHanalyzer module, which provides highly interactive analysis of the resulting benchmarking data. The custom logging functionality also allows IOHexperimenter to be used in scenarios such as algorithm configuration, and it will be maintained and extended in the future to allow for easy connection with tools from these domains.

The IOHprofiler project welcomes contributions of problems from various domains and loggers with different perspectives. Its aim is to provide a platform where researchers from various domains can share their benchmarking data to allow the research community to freely combine them with their own data, and form new insights into algorithm behaviour in the process.

References

  • A. Aziz-Alaoui, C. Doerr, and J. Dréo (2021) Towards large scale automated algorithm design by integrating modular benchmarking frameworks. In Proc. of Genetic and Evolutionary Computation Conference (GECCO’21, Companion Material), pp. 1365–1374.
  • J. de Nobel, D. Vermetten, H. Wang, C. Doerr, and T. Bäck (2021) Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmic modules. In Proc. of Genetic and Evolutionary Computation Conference (GECCO’21, Companion Material), pp. 1375–1384.
  • C. Doerr, H. Wang, F. Ye, S. van Rijn, and T. Bäck (2018) IOHprofiler: A benchmarking and profiling tool for iterative optimization heuristics. CoRR abs/1810.05281.
  • C. Doerr, F. Ye, N. Horesh, H. Wang, O. M. Shir, and T. Bäck (2020) Benchmarking discrete optimization heuristics with IOHprofiler. Applied Soft Computing 88, pp. 106027.
  • N. Hansen, A. Auger, R. Ros, O. Mersmann, T. Tušar, and D. Brockhoff (2021) COCO: a platform for comparing continuous optimizers in a black-box setting. Optimization Methods and Software 36 (1), pp. 114–144.
  • H. Wang, D. Vermetten, F. Ye, C. Doerr, and T. Bäck (2020) IOHanalyzer: performance analysis for iterative optimization heuristic. CoRR abs/2007.03953.
  • T. Weise, Y. Chen, X. Li, and Z. Wu (2020) Selecting a diverse set of benchmark instances from a tunable model problem for black-box discrete optimization algorithms. Applied Soft Computing 92, pp. 106269.
  • F. Ye, C. Doerr, H. Wang, and T. Bäck (2021) Automated configuration of genetic algorithms by tuning for anytime performance. CoRR abs/2106.06304.