Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates

03/30/2017
by Katharina Eggensperger, et al.

The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, from deep neural networks to solvers for hard combinatorial problems. The resulting algorithm configuration (AC) problem has attracted much attention from the machine learning community. However, the proper evaluation of new AC procedures is hindered by two key hurdles. First, AC benchmarks are hard to set up. Second, and even more significantly, they are computationally expensive: a single run of an AC procedure involves many costly runs of the target algorithm whose performance is to be optimized in a given AC benchmark scenario. One common workaround is to optimize cheap-to-evaluate artificial benchmark functions (e.g., Branin) instead of actual algorithms; however, these have different properties than realistic AC problems. Here, we propose an alternative benchmarking approach that is similarly cheap to evaluate but much closer to the original AC problem: replacing expensive benchmarks with surrogate benchmarks constructed from existing AC benchmarks. These surrogate benchmarks approximate the response surface of true target-algorithm performance with a regression model, and the original and surrogate benchmarks share the same (hyper-)parameter space. In our experiments, we construct and evaluate surrogate benchmarks for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems, drawing training data from the runs of existing AC procedures. We show that our surrogate benchmarks capture important characteristics of the AC scenarios from which they were derived, such as high- and low-performing regions, while being much easier to use and orders of magnitude cheaper to evaluate.
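The core construction described in the abstract can be illustrated with a minimal sketch (not the authors' code): a regression model is fit on logged (configuration, performance) pairs collected from earlier AC runs, and its predictions then stand in for expensive target-algorithm runs. The use of scikit-learn's RandomForestRegressor, the synthetic placeholder data, and the name `surrogate_benchmark` are assumptions made for illustration only.

```python
# Sketch of a surrogate benchmark built from logged AC data (assumed setup, not the paper's code).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical logged data from previous AC runs:
# each row is a configuration (here: 3 numeric parameters), y is the measured performance.
rng = np.random.default_rng(0)
X_logged = rng.uniform(0.0, 1.0, size=(500, 3))          # configurations tried by the AC procedure
y_logged = np.log10(1.0 + 100.0 * X_logged.sum(axis=1))  # placeholder for measured log-runtimes

# Fit the regression model that defines the surrogate benchmark.
surrogate_model = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate_model.fit(X_logged, y_logged)

def surrogate_benchmark(config):
    """Cheap evaluation: predict performance instead of running the target algorithm.

    `config` must live in the same (hyper-)parameter space as the original benchmark.
    """
    return float(surrogate_model.predict(np.asarray(config).reshape(1, -1))[0])

# An AC procedure can now be benchmarked against `surrogate_benchmark`
# at a tiny fraction of the cost of the original scenario.
print(surrogate_benchmark([0.2, 0.5, 0.9]))
```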

