An empirical evaluation of active inference in multi-armed bandits

01/21/2021
by Dimitrije Marković, et al.

A key feature of sequential decision making under uncertainty is the need to balance exploiting (choosing the best action according to current knowledge) and exploring (obtaining information about the values of other actions). The multi-armed bandit problem, a classical task that captures this trade-off, has served as a vehicle in machine learning for developing bandit algorithms that have proved useful in numerous industrial applications. The active inference framework, an approach to sequential decision making recently developed in neuroscience for understanding human and animal behaviour, is distinguished by its sophisticated strategy for resolving the exploration-exploitation trade-off. This makes active inference an exciting alternative to already established bandit algorithms. Here we derive an efficient and scalable approximate active inference algorithm and compare it to two state-of-the-art bandit algorithms: Bayesian upper confidence bound and optimistic Thompson sampling. The comparison is done on two types of bandit problems: a stationary bandit and a dynamic switching bandit. Our empirical evaluation shows that the active inference algorithm does not produce efficient long-term behaviour in stationary bandits. However, in the more challenging switching bandit problem, active inference performs substantially better than the two state-of-the-art bandit algorithms. These results open exciting avenues for further research in theoretical and applied machine learning, and lend additional credibility to active inference as a general framework for studying human and animal behaviour.
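To make the baseline concrete, the sketch below runs classic Thompson sampling on a stationary Bernoulli bandit (the paper's baseline is the optimistic variant, and its exact experimental setup is not reproduced here; the arm probabilities and horizon are illustrative assumptions). Each round, one sample is drawn from every arm's Beta posterior and the arm with the highest sample is pulled, which naturally trades off exploration and exploitation.

```python
import numpy as np

def run_thompson(probs, n_steps=2000, seed=0):
    """Thompson sampling on a stationary Bernoulli bandit.

    probs: true (hidden) reward probabilities per arm -- an illustrative
    setup, not the paper's experiment. Returns the average reward.
    """
    rng = np.random.default_rng(seed)
    k = len(probs)
    alpha = np.ones(k)  # Beta posterior: pseudo-count of successes + 1
    beta = np.ones(k)   # Beta posterior: pseudo-count of failures + 1
    total = 0.0
    for _ in range(n_steps):
        # Sample one value per arm from its posterior, act greedily on the samples.
        arm = int(np.argmax(rng.beta(alpha, beta)))
        r = float(rng.random() < probs[arm])
        alpha[arm] += r
        beta[arm] += 1.0 - r
        total += r
    return total / n_steps
```

With arms [0.3, 0.5, 0.7], the average reward should approach the best arm's rate of 0.7 as the posterior concentrates, which is the efficient long-term stationary behaviour against which the abstract contrasts active inference.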


Related research

Thompson Sampling with Virtual Helping Agents (09/16/2022)
We address the problem of online sequential decision making, i.e., balan...

Active Inference for Autonomous Decision-Making with Contextual Multi-Armed Bandits (09/19/2022)
In autonomous robotic decision-making under uncertainty, the tradeoff be...

Dynamic Ensemble Active Learning: A Non-Stationary Bandit with Expert Advice (09/29/2018)
Active learning aims to reduce annotation cost by predicting which sampl...

Approximate information for efficient exploration-exploitation strategies (07/04/2023)
This paper addresses the exploration-exploitation dilemma inherent in de...

No DBA? No regret! Multi-armed bandits for index tuning of analytical and HTAP workloads with provable guarantees (08/23/2021)
Automating physical database design has remained a long-term interest in...

(Sequential) Importance Sampling Bandits (08/08/2018)
The multi-armed bandit (MAB) problem is a sequential allocation task whe...

Generalized Bayesian Upper Confidence Bound with Approximate Inference for Bandit Problems (01/31/2022)
Bayesian bandit algorithms with approximate inference have been widely u...
