Causal Bandits with Propagating Inference
The bandit problem is a framework for designing sequential experiments. In each experiment, a learner selects an arm A ∈ A and obtains an observation corresponding to A. Theoretically, the tight regret lower bound for the general bandit problem is polynomial with respect to the number of arms |A|. This makes the bandit framework incapable of handling an exponentially large number of arms, so the bandit problem with side-information is often considered in order to overcome this lower bound. Recently, a bandit framework over a causal graph was introduced, where the structure of the causal graph is available as side-information. A causal graph is a fundamental model that is frequently used in a variety of real-world problems. In this setting, the arms are identified with interventions on a given causal graph, and the effect of an intervention propagates throughout the causal graph. The task is to find the best intervention, i.e., the one that maximizes the expected value at a target node. Existing algorithms for causal bandits overcome the Ω(√(|A|/T)) simple-regret lower bound; however, they work only when the interventions A are localized around a single node (i.e., an intervention propagates only to its neighbors). We propose a novel causal bandit algorithm for an arbitrary set of interventions, whose effects can propagate throughout the causal graph. We also show that it achieves an O(√(γ^* log(|A|T) / T)) simple-regret bound, where γ^* is determined by the structure of the causal graph. In particular, if the in-degree of the causal graph is bounded, then γ^* = O(N^2), where N is the number of nodes.
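To make the problem setting concrete, the following is a minimal toy sketch in Python of a causal bandit instance: a small binary causal graph, arms given by single-node interventions whose effects propagate downstream to a target node Y, and a naive uniform-exploration baseline that recommends the arm with the highest empirical mean at Y. The graph, the probabilities, and the baseline strategy are illustrative assumptions only; this is not the algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy causal graph: X1 -> X2 -> Y, all variables binary (illustrative only).
# An intervention do(node = value) fixes one variable; its effect
# propagates to the downstream target node Y, where the reward is observed.
def sample(intervention=None):
    x1 = rng.binomial(1, 0.3)
    if intervention == ("X1", 0): x1 = 0
    if intervention == ("X1", 1): x1 = 1
    x2 = rng.binomial(1, 0.2 + 0.6 * x1)
    if intervention == ("X2", 0): x2 = 0
    if intervention == ("X2", 1): x2 = 1
    y = rng.binomial(1, 0.1 + 0.8 * x2)  # reward at the target node Y
    return y

# Arms: the observational arm plus all single-node interventions.
arms = [None, ("X1", 0), ("X1", 1), ("X2", 0), ("X2", 1)]

# Naive baseline: spread the budget T uniformly over the arms, then
# recommend the arm with the highest empirical mean at Y (simple regret
# compares this recommendation against the truly best intervention).
T = 5000
means = []
for arm in arms:
    pulls = [sample(arm) for _ in range(T // len(arms))]
    means.append(np.mean(pulls))

best = arms[int(np.argmax(means))]
print("estimated best intervention:", best, "with estimated E[Y] =", max(means))
```

Uniform exploration of this kind scales poorly when |A| is exponentially large, which is exactly the regime the causal side-information is meant to address.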