Multi-fidelity Bayesian Optimisation with Continuous Approximations

03/18/2017
by Kirthevasan Kandasamy, et al.

Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, multi-fidelity methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most existing multi-fidelity methods assume only a finite number of approximations. In many practical applications, though, a continuous spectrum of approximations might be available. For instance, when tuning an expensive neural network, one might choose to approximate the cross-validation performance using less data N and/or fewer training iterations T. Here, the approximations are best viewed as arising out of a continuous two-dimensional space (N,T). In this work, we develop a Bayesian optimisation method, BOCA, for this setting. We characterise its theoretical properties and show that it achieves better regret than strategies which ignore the approximations. BOCA outperforms several other baselines in synthetic and real experiments.
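To make the continuous-fidelity setting concrete, below is a minimal, self-contained Python sketch (not the paper's BOCA algorithm) of a two-dimensional fidelity space z = (N, T): a cheap approximation g(z, x) whose bias shrinks as z approaches the highest fidelity, an evaluation cost that grows with the fidelity, and a naive search that explores on cheap fidelities before a single high-fidelity confirmation. The names g, cost, Z_MAX, and run_toy_search are illustrative assumptions, not from the paper.

```python
# A minimal sketch, assuming a toy objective; this is NOT the authors' BOCA
# implementation. It only illustrates a continuous fidelity space z = (N, T):
# data fraction and training-iteration fraction, both in (0, 1].
import numpy as np

rng = np.random.default_rng(0)

Z_MAX = np.array([1.0, 1.0])   # highest fidelity: full data, full training


def g(z, x):
    """Cheap approximation of the target at fidelity z = (n_frac, t_frac).

    The bias term shrinks as z approaches Z_MAX, mimicking how validation
    performance measured with less data / fewer iterations deviates from
    the true cross-validation performance.
    """
    f_true = -np.sum((x - 0.3) ** 2)        # unknown target f(x) = g(Z_MAX, x)
    bias = 0.5 * np.sum(Z_MAX - z)          # approximation error at low fidelity
    noise = 0.01 * rng.standard_normal()
    return f_true - bias + noise


def cost(z):
    """Evaluation cost grows with the data and iteration fractions."""
    return 0.1 + z[0] * z[1]


def run_toy_search(budget=20.0, dim=2):
    """Random multi-fidelity search: explore with cheap fidelities, then
    re-evaluate the most promising point once at the highest fidelity."""
    spent, best_x, best_cheap = 0.0, None, -np.inf
    while spent < budget - cost(Z_MAX):     # reserve budget for one full query
        x = rng.uniform(0.0, 1.0, size=dim)
        z = rng.uniform(0.3, 1.0, size=2)   # pick a fidelity in (0.3, 1]^2
        val = g(z, x)
        spent += cost(z)
        if val > best_cheap:
            best_cheap, best_x = val, x
    final_val = g(Z_MAX, best_x)            # one expensive, full-fidelity check
    spent += cost(Z_MAX)
    return best_x, final_val, spent


if __name__ == "__main__":
    x_star, val, spent = run_toy_search()
    print(f"best x ~ {x_star}, value ~ {val:.3f}, cost spent ~ {spent:.2f}")
```

The sketch makes the trade-off visible: many cheap queries fit inside the same budget that would cover only a handful of full-fidelity evaluations, which is the gap a principled method such as BOCA is designed to exploit.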


