Budgeted and Non-budgeted Causal Bandits

12/13/2020
by Vineet Nair, et al.

Learning good interventions in a causal graph can be modelled as a stochastic multi-armed bandit problem with side-information. First, we study this problem when interventions are more expensive than observations and a budget is specified. If there are no backdoor paths from an intervenable node to the reward node, we propose an algorithm that minimizes simple regret by optimally trading off observations and interventions based on the cost of an intervention. We also propose an algorithm that accounts for the cost of interventions, exploits causal side-information, and minimizes the expected cumulative regret without exceeding the budget. This cumulative-regret minimization algorithm outperforms standard algorithms that ignore side-information. Finally, we study the problem of learning the best interventions, without a budget constraint, in general graphs, and give an algorithm whose expected cumulative regret is constant in the instance parameters when the parent distribution of the reward variable is known for each intervention. Our results are validated experimentally and compared against the best-known bounds in the literature.
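To make the budgeted observation/intervention trade-off concrete, here is a minimal sketch, not the paper's algorithm: a hypothetical two-node instance X → Y with no backdoor paths, where observational samples of P(Y | X = a) are unbiased for E[Y | do(X = a)], so a learner can spend most of its budget on cheap observations and reserve expensive interventions for under-sampled arms. The fixed observation fraction, the costs, and the two-arm setup are illustrative assumptions; the paper instead derives the optimal split from the intervention cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-node instance X -> Y with no backdoor paths into X.
# True interventional means mu[a] = E[Y | do(X = a)], unknown to the learner.
mu = {0: 0.4, 1: 0.7}

def observe():
    """Cheap observational sample: nature draws X, then Y ~ Bernoulli(mu[X])."""
    x = int(rng.integers(0, 2))
    y = int(rng.binomial(1, mu[x]))
    return x, y

def intervene(a):
    """Expensive interventional sample under do(X = a)."""
    return int(rng.binomial(1, mu[a]))

def budgeted_simple_regret(budget, c_obs=1.0, c_int=5.0, obs_frac=0.5):
    """Illustrative split: spend obs_frac of the budget on observations,
    the rest on interventions, then recommend the best empirical arm.
    (obs_frac is a placeholder assumption, not the paper's optimal choice.)"""
    counts = {0: [0, 0.0], 1: [0, 0.0]}  # arm -> [num samples, sum of rewards]

    # Phase 1: observations. With no backdoor paths, P(Y | X = a) equals
    # E[Y | do(X = a)], so observational samples are unbiased per arm.
    n_obs = int(obs_frac * budget / c_obs)
    for _ in range(n_obs):
        x, y = observe()
        counts[x][0] += 1
        counts[x][1] += y

    # Phase 2: interventions, targeted at the arm observations reached least.
    remaining = budget - n_obs * c_obs
    while remaining >= c_int:
        a = min(counts, key=lambda k: counts[k][0])
        counts[a][0] += 1
        counts[a][1] += intervene(a)
        remaining -= c_int

    # Recommend the arm with the highest empirical mean reward.
    return max(counts, key=lambda k: counts[k][1] / max(counts[k][0], 1))

print("recommended arm:", budgeted_simple_regret(budget=200.0))
```

In this sketch the interventions act as a correction on top of the observational estimates; when interventions are much costlier than observations, shifting more of the budget into Phase 1 is the natural lever, which is exactly the trade-off the abstract says the proposed algorithm optimizes.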


