Budgeted and Non-budgeted Causal Bandits

12/13/2020
by Vineet Nair, et al.

Learning good interventions in a causal graph can be modelled as a stochastic multi-armed bandit problem with side-information. First, we study this problem when interventions are more expensive than observations and a budget is specified. If there are no backdoor paths from an intervenable node to the reward node, we propose an algorithm that minimizes simple regret by optimally trading off observations and interventions based on the cost of intervention. We also propose an algorithm that accounts for the cost of interventions, exploits causal side-information, and minimizes the expected cumulative regret without exceeding the budget. Our cumulative-regret minimization algorithm outperforms standard algorithms that ignore this side-information. Finally, we study the problem of learning the best interventions, without a budget constraint, in general graphs, and give an algorithm whose expected cumulative regret is constant in the instance parameters when the distribution of the reward variable's parents under each intervention is known. Our results are validated experimentally and compared to the best-known bounds in the current literature.
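The central trade-off in the budgeted setting is that observational samples are cheap but only inform us about E[Y | X_i = 1] for parents that happen to be active, while interventions are expensive but can target any arm directly. Below is a minimal sketch of that observation/intervention split on a toy model with independent binary parents (so there are no backdoor paths and E[Y | X_i = 1] = E[Y | do(X_i = 1)]). It is not the paper's algorithm: the SCM, the costs, and the half-and-half budget split are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

K = 5                                  # number of intervenable parents (toy)
OBS_COST, INT_COST, BUDGET = 1.0, 5.0, 400.0
p = rng.uniform(0.05, 0.9, size=K)     # P(X_i = 1), unknown to the learner
mu = rng.uniform(0.2, 0.8, size=K)     # per-parent effect on the reward (toy)

def reward(x):
    # Bernoulli reward whose mean is the average effect of the active parents.
    active = np.flatnonzero(x)
    y_prob = mu[active].mean() if active.size else 0.1
    return rng.random() < y_prob

def observe():
    # One observational sample (x, y): parents drawn independently from p.
    x = rng.random(K) < p
    return x, reward(x)

def intervene(i):
    # One sample of y under do(X_i = 1): force X_i, sample the rest as usual.
    x = rng.random(K) < p
    x[i] = True
    return reward(x)

# Phase 1: spend half the budget on cheap observations; for each arm i,
# track how often X_i = 1 and the empirical mean of Y on those samples.
cnt, wins = np.zeros(K), np.zeros(K)
for _ in range(int((BUDGET / 2) / OBS_COST)):
    x, y = observe()
    cnt[x] += 1
    wins[x] += y

# Phase 2: spend the remaining budget on interventions, directed at the
# arms seen least often in Phase 1 (their estimates are the noisiest).
m = max(1, K // 2)
targets = np.argsort(cnt)[:m]
for t in range(int((BUDGET / 2) / INT_COST)):
    i = targets[t % m]
    cnt[i] += 1
    wins[i] += intervene(i)

mu_hat = np.where(cnt > 0, wins / np.maximum(cnt, 1), 0.0)
print("estimated do-means:", np.round(mu_hat, 3))
print("recommended intervention: do(X_%d = 1)" % int(np.argmax(mu_hat)))

An implementation along the paper's lines would instead set the observation/intervention split and the intervention targets from concentration bounds driven by the intervention cost, rather than a fixed 50/50 rule.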


Related research

07/06/2021  Causal Bandits on General Graphs
We study the problem of determining the best intervention in a Causal Ba...

10/11/2019  Regret Analysis of Causal Bandit Problems
We study how to learn optimal interventions sequentially given causal in...

01/10/2017  Identifying Best Interventions through Online Importance Sampling
Motivated by applications in computational advertising and systems biolo...

04/08/2020  A Dynamic Observation Strategy for Multi-agent Multi-armed Bandit Problem
We define and analyze a multi-agent multi-armed bandit problem in which ...

08/18/2011  Doing Better Than UCT: Rational Monte Carlo Sampling in Trees
UCT, a state-of-the-art algorithm for Monte Carlo tree sampling (MCTS), ...

11/01/2021  Intervention Efficient Algorithm for Two-Stage Causal MDPs
We study Markov Decision Processes (MDP) wherein states correspond to ca...

12/03/2021  Chronological Causal Bandits
This paper studies an instance of the multi-armed bandit (MAB) problem, ...