Towards Large Scale Automated Algorithm Design by Integrating Modular Benchmarking Frameworks

02/12/2021
by Amine Aziz-Alaoui, et al.

We present a first proof-of-concept use case that demonstrates the efficiency of interfacing the algorithm framework ParadisEO with the automated algorithm configuration tool irace and the experimental platform IOHprofiler. By combining these three tools, we obtain a powerful benchmarking environment that allows us to systematically analyze large algorithm design spaces on complex benchmark problems. Key advantages of our pipeline are fast evaluation times, the possibility to generate rich data sets to support the analysis of the algorithms, and a standardized interface that can be used to benchmark very broad classes of sampling-based optimization heuristics. In addition to enabling systematic algorithm configuration studies, our approach paves the way for assessing the contribution of new ideas in interplay with already existing operators – a promising avenue for our research domain, which at present may focus too strongly on comparing entire algorithm instances.
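To make the pipeline concrete, the following is a minimal, self-contained Python sketch of the pattern the abstract describes: a parameterized heuristic (standing in for a ParadisEO algorithm instance) exposed through a command-line interface that a configurator such as irace can invoke, evaluated on a toy problem (standing in for an IOHprofiler benchmark). The (1+λ) EA, the OneMax problem, and all flag names below are illustrative assumptions, not the paper's actual interface.

import argparse
import random

def onemax(x):
    # Toy test function; a stand-in for an IOHprofiler/PBO problem.
    return sum(x)

def one_plus_lambda_ea(dim, lam, mutation_rate, budget, rng):
    # A (1+lambda) EA whose design choices (lambda, per-bit mutation
    # rate) are exposed as tunable parameters.
    parent = [rng.randint(0, 1) for _ in range(dim)]
    best, evals = onemax(parent), 1
    while evals < budget and best < dim:
        offspring = []
        for _ in range(lam):
            child = [1 - b if rng.random() < mutation_rate else b
                     for b in parent]
            offspring.append((onemax(child), child))
            evals += 1
        f, child = max(offspring, key=lambda t: t[0])
        if f >= best:            # elitist replacement
            parent, best = child, f
    return evals                 # evaluations to reach the optimum (or budget)

if __name__ == "__main__":
    # irace passes one candidate configuration per call via command-line
    # switches; the flag names here are hypothetical.
    p = argparse.ArgumentParser()
    p.add_argument("--seed", type=int, default=0)
    p.add_argument("--lambda", dest="lam", type=int, default=1)
    p.add_argument("--mutation-rate", type=float, default=0.01)
    args = p.parse_args()
    rng = random.Random(args.seed)
    cost = one_plus_lambda_ea(dim=100, lam=args.lam,
                              mutation_rate=args.mutation_rate,
                              budget=10_000, rng=rng)
    print(cost)  # a target runner reports a single cost value to irace

On the irace side, such a target would be described by a parameter-space file listing each switch with its type and range (for example, an integer parameter for --lambda and a real-valued one for --mutation-rate); irace then searches this space by repeatedly calling the target runner and comparing the reported costs.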


