COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting

03/29/2016
by Nikolaus Hansen et al.

COCO is a platform for Comparing Continuous Optimizers in a black-box setting. It aims at automating, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. We present the rationale behind the development of the platform as a general proposition for a guideline towards better benchmarking. We detail the fundamental concepts underlying COCO, such as its definition of a problem, the idea of instances, the relevance of target values, and runtime as the central performance measure. Finally, we give a quick overview of the basic code structure and the available test suites.
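To illustrate how these concepts surface in practice, the sketch below runs a simple benchmarking loop with cocoex, the Python experiment module distributed with COCO: a suite provides the problem instances, an observer records at which evaluation count (runtime) the predefined target values are reached. This is a minimal sketch rather than the platform's canonical example; the empty option strings, the result folder name, and the use of scipy's Nelder-Mead simplex as the benchmarked optimizer are illustrative assumptions.

```python
# Minimal benchmarking sketch using COCO's Python experiment module (cocoex).
# Assumptions: cocoex and scipy are installed; the option strings and the
# result folder name are illustrative choices, not prescribed defaults.
import cocoex            # COCO experiment-side interface
import scipy.optimize    # stand-in optimizer to be benchmarked

# A suite is a collection of problems; each problem comes in several
# instances (e.g. translated/rotated versions of the same function).
suite = cocoex.Suite("bbob", "", "")

# The observer logs the runtime (number of function evaluations) at which
# the predefined target values are reached.
observer = cocoex.Observer("bbob", "result_folder: my-first-experiment")

for problem in suite:               # loop over all problem instances
    problem.observe_with(observer)  # attach logging to this problem
    # Run the optimizer; COCO only sees the resulting f-evaluations.
    scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
    problem.free()                  # release the underlying C data
```

The logged data can afterwards be turned into runtime distributions over targets with COCO's post-processing module (cocopp).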

Related research

COCO: Performance Assessment (05/11/2016)
We present an any-time performance assessment for benchmarking numerical...

COCO: The Experimental Procedure (03/29/2016)
We present a budget-free experimental setup and procedure for benchmarki...

COCOpf: An Algorithm Portfolio Framework (05/14/2014)
Algorithm portfolios represent a strategy of composing multiple heuristi...

ProvMark: A Provenance Expressiveness Benchmarking System (09/24/2019)
System level provenance is of widespread interest for applications such ...

Biobjective Performance Assessment with the COCO Platform (05/05/2016)
This document details the rationales behind assessing the performance of...
