COCO: Performance Assessment

05/11/2016
by Nikolaus Hansen, et al.

We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, applied within the COCO benchmarking platform. The performance assessment is based on runtimes measured in number of objective function evaluations to reach one or several quality indicator target values. We argue that runtime is the only available measure with a generic, meaningful, and quantitative interpretation. We discuss the choice of the target values, runlength-based targets, and the aggregation of results by using simulated restarts, averages, and empirical distribution functions.
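
To make the construction concrete, here is a minimal Python sketch of the simulated-restarts idea described above: recorded runs are drawn uniformly with replacement until a successful one comes up, and the evaluation counts of all drawn runs are summed into one simulated runtime, whose empirical distribution can then be computed. The function names and the example data are illustrative assumptions, not part of the COCO code itself.

```python
import random

def simulated_restart_runtime(successes, failures, rng):
    """One simulated restarted run: draw recorded runs uniformly with
    replacement until a successful one is picked; the simulated runtime
    is the sum of the evaluation counts of all drawn runs.

    successes -- evaluation counts of runs that reached the target
    failures  -- evaluations spent by runs that missed the target
    """
    if not successes:
        return float("inf")  # the target was never reached in any run
    runs = [(n, True) for n in successes] + [(n, False) for n in failures]
    total = 0
    while True:
        evals, reached = rng.choice(runs)
        total += evals
        if reached:
            return total

def runtime_ecdf(runtimes):
    """Empirical distribution of runtimes: for each observed budget t,
    the fraction of simulated runs that reached the target within t
    objective function evaluations."""
    finite = sorted(t for t in runtimes if t != float("inf"))
    n = len(runtimes)
    return [(t, (i + 1) / n) for i, t in enumerate(finite)]

# Hypothetical data: three runs reached the target after 95, 120, and
# 340 evaluations; two runs exhausted a budget of 1000 evaluations.
rng = random.Random(1)
samples = [simulated_restart_runtime([95, 120, 340], [1000, 1000], rng)
           for _ in range(1000)]
for budget, fraction in runtime_ecdf(samples)[::250]:
    print(f"{fraction:.0%} of simulated restarts succeed within {budget} evaluations")
```

The point of the construction is that a mix of successful and unsuccessful runs collapses into a single runtime distribution, which can then be summarized by averages or aggregated into the empirical distribution functions mentioned above.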


Related research

COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting (03/29/2016)
COCO is a platform for Comparing Continuous Optimizers in a black-box se...

Biobjective Performance Assessment with the COCO Platform (05/05/2016)
This document details the rationales behind assessing the performance of...

COCO: The Experimental Procedure (03/29/2016)
We present a budget-free experimental setup and procedure for benchmarki...

COCOpf: An Algorithm Portfolio Framework (05/14/2014)
Algorithm portfolios represent a strategy of composing multiple heuristi...

IOHanalyzer: Performance Analysis for Iterative Optimization Heuristics (07/08/2020)
We propose IOHanalyzer, a new software for analyzing the empirical perfo...

A Complementarity Analysis of the COCO Benchmark Problems and Artificially Generated Problems (04/27/2021)
When designing a benchmark problem set, it is important to create a set ...
