Bayesian Unification of Gradient and Bandit-based Learning for Accelerated Global Optimisation

05/28/2017
by Ole-Christoffer Granmo, et al.

Bandit-based optimisation has a remarkable advantage over gradient-based approaches: its global perspective eliminates the danger of getting stuck at local optima. However, for continuous optimisation problems or problems with a large number of actions, bandit-based approaches can be hindered by slow learning. Gradient-based approaches, on the other hand, navigate quickly through high-dimensional continuous spaces by local optimisation, following the gradient in fine-grained steps. Yet, apart from being susceptible to local optima, these schemes are less suited for online learning due to their reliance on extensive trial-and-error before the optimum can be identified. In this paper, we propose a Bayesian approach that unifies the above two paradigms in a single framework, with the aim of combining their advantages. At the heart of our approach lies a stochastic linear approximation of the function to be optimised, in which both the gradient and the values of the function are explicitly captured. This allows us to learn from both noisy function and gradient observations, and to predict these properties across the action space to support optimisation. We further propose an accompanying bandit-driven exploration scheme that uses Bayesian credible bounds to trade off exploration against exploitation. Our empirical results demonstrate that by unifying bandit- and gradient-based learning, one obtains consistently improved performance across a wide spectrum of problem environments. Furthermore, even when gradient feedback is unavailable, the flexibility of our model, including gradient prediction, still allows us to outperform competing approaches, although by a smaller margin. Given the pervasiveness of bandit-based optimisation, our scheme opens the door to improved performance both in meta-optimisation and in applications where gradient-related information is readily available.
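The core idea of the abstract, a Bayesian linear surrogate updated from both noisy value and gradient observations, with credible bounds steering bandit-style exploration, can be illustrated in a few lines. The sketch below is an illustrative simplification and not the authors' implementation: it assumes a single global linear model f(x) ≈ w0 + wᵀx with conjugate Gaussian updates, and the class name, hyperparameters (prior_var, noise_var, beta), and candidate-set search are all assumptions made for the example.

```python
import numpy as np

# Minimal sketch (NOT the paper's implementation): a Bayesian linear surrogate
# f(x) ~ w0 + w.x learned from noisy function AND gradient observations,
# with a Bayesian credible upper bound driving bandit-style exploration.
# A global linear model is a simplification of the paper's stochastic
# linear approximation; all names and hyperparameters are illustrative.

class BayesianLinearSurrogate:
    def __init__(self, dim, prior_var=10.0, noise_var=0.5):
        d = dim + 1                      # weight vector: [w0, w_1, ..., w_dim]
        self.A = np.eye(d) / prior_var   # posterior precision (prior N(0, prior_var * I))
        self.b = np.zeros(d)             # precision-weighted posterior mean
        self.noise_var = noise_var
        self.dim = dim

    def _update(self, phi, y):
        # Conjugate Gaussian update for one linear observation y = phi.w + noise
        self.A += np.outer(phi, phi) / self.noise_var
        self.b += phi * y / self.noise_var

    def observe_value(self, x, y):
        # Noisy function value: design row [1, x]
        self._update(np.concatenate(([1.0], x)), y)

    def observe_gradient(self, x, g):
        # Noisy partial derivatives: under a linear model, df/dx_i is a
        # direct observation of the weight w_i
        for i, gi in enumerate(np.atleast_1d(g)):
            phi = np.zeros(self.dim + 1)
            phi[1 + i] = 1.0
            self._update(phi, gi)

    def credible_upper_bound(self, x, beta=2.0):
        # Posterior mean + beta * posterior std of f(x):
        # exploitation plus an exploration bonus where the model is uncertain
        cov = np.linalg.inv(self.A)
        mean_w = cov @ self.b
        phi = np.concatenate(([1.0], x))
        mu = phi @ mean_w
        sigma = np.sqrt(phi @ cov @ phi)
        return mu + beta * sigma


# Usage: repeatedly pick the candidate action with the highest credible
# upper bound. The toy objective is itself linear, so the surrogate is
# well specified and the selection concentrates on the best candidate.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: 1.5 * x[0] - 0.5 * x[1]       # toy objective (unknown to learner)
    grad = lambda x: np.array([1.5, -0.5])      # its (noisily observed) gradient
    model = BayesianLinearSurrogate(dim=2)
    candidates = rng.uniform(-1, 1, size=(50, 2))
    x = candidates[0]
    for _ in range(30):
        model.observe_value(x, f(x) + 0.1 * rng.standard_normal())
        model.observe_gradient(x, grad(x) + 0.1 * rng.standard_normal(2))
        x = max(candidates, key=model.credible_upper_bound)
    print("selected action:", x)
```

Because both observation types are linear in the weights, a single conjugate update rule handles values and gradients alike, and the posterior mean of w doubles as a gradient prediction where no gradient feedback is available, mirroring the flexibility the abstract describes.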

Related research

- Representation-Driven Reinforcement Learning (05/31/2023): We present a representation-driven framework for reinforcement learning....
- Delayed Bandit Online Learning with Unknown Delays (07/09/2018): This paper studies bandit learning problems with delayed feedback, which...
- The N-Tuple Bandit Evolutionary Algorithm for Game Agent Optimisation (02/16/2018): This paper describes the N-Tuple Bandit Evolutionary Algorithm (NTBEA), ...
- A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments (11/01/2019): We introduce a fully stochastic gradient based approach to Bayesian opti...
- Bayesian Optimisation for Mixed-Variable Inputs using Value Proposals (02/10/2022): Many real-world optimisation problems are defined over both categorical ...
- Multi-fidelity Bayesian Optimisation with Continuous Approximations (03/18/2017): Bandit methods for black-box optimisation, such as Bayesian optimisation...
- Revisiting the Primal-Dual Method of Multipliers for Optimisation over Centralised Networks (07/19/2021): The primal-dual method of multipliers (PDMM) was originally designed for...
