Abstract
Bandit is a framework for designing sequential experiments, where a learner selects an arm $A \in \mathcal{A}$ and obtains an observation corresponding to $A$ in each experiment. Theoretically, the tight regret lower-bound for the general bandit is polynomial with respect to the number of arms $|\mathcal{A}|$, and thus, to overcome this bound, the bandit problem with side-information is often considered. Recently, a bandit framework over a causal graph was introduced, where the structure of the causal graph is available as side-information and the arms are identified with interventions on the causal graph. Existing algorithms for the causal bandit overcame the simple-regret lower-bound $\Omega(\sqrt{|\mathcal{A}|/T})$; however, these algorithms work only when the interventions $\mathcal{A}$ are localized around a single node (i.e., an intervention propagates only to its neighbors). We then propose a novel causal bandit algorithm for an arbitrary set of interventions, which can propagate throughout the causal graph. We also show that it achieves an $O(\sqrt{\gamma^* \log(|\mathcal{A}|T)/T})$ regret bound, where $\gamma^*$ is determined by the structure of the causal graph. In particular, if the maximum in-degree of the causal graph is a constant, then $\gamma^* = O(N^2)$, where $N$ is the number of nodes.
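As a rough illustration of the scale of this bound (a worked step under the constant in-degree assumption, not stated explicitly in the abstract), substituting $\gamma^* = O(N^2)$ into the regret bound gives
\[
O\left(\sqrt{\frac{\gamma^* \log(|\mathcal{A}|T)}{T}}\right)
= O\left(\sqrt{\frac{N^2 \log(|\mathcal{A}|T)}{T}}\right)
= O\left(N \sqrt{\frac{\log(|\mathcal{A}|T)}{T}}\right),
\]
which depends on $|\mathcal{A}|$ only logarithmically, in contrast to the $\Omega(\sqrt{|\mathcal{A}|/T})$ lower-bound for general bandits. Since the number of possible interventions on a causal graph can be exponential in $N$, this can amount to an exponential improvement in the dependence on the number of arms.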