Contextual Bandits with Cross-learning

09/25/2018
by Santiago Balseiro, et al.

In the classical contextual bandits problem, in each round t, a learner observes some context c, chooses some action a to perform, and receives some reward r_{a,t}(c). We consider the variant of this problem where in addition to receiving the reward r_{a,t}(c), the learner also learns the values of r_{a,t}(c') for all other contexts c'; i.e., the rewards that would have been achieved by performing that action under different contexts. This variant arises in several strategic settings, such as learning how to bid in non-truthful repeated auctions (in this setting the context is the decision maker's private valuation for each auction). We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classical contextual bandits problem achieve Õ(√(CKT)) regret against all stationary policies, where C is the number of contexts, K the number of actions, and T the number of rounds. We demonstrate algorithms for the contextual bandits problem with cross-learning that remove the dependence on C and achieve regret O(√(KT)) (when contexts are stochastic with known distribution), Õ(K^{1/3} T^{2/3}) (when contexts are stochastic with unknown distribution), and Õ(√(KT)) (when contexts are adversarial but rewards are stochastic).
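To make the cross-learning feedback structure concrete, here is a minimal Python sketch of an EXP3-style learner for the case where the context distribution is known (taken here to be uniform). The environment, problem sizes C, K, T, and learning rate are illustrative assumptions, not values from the paper; the point is the final update, where the rewards of a single played action update every context's policy, importance-weighted by the marginal probability q_a of playing that action across contexts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper).
C, K, T = 5, 4, 10_000
eta = np.sqrt(np.log(K) / (K * T))  # standard EXP3-style learning rate

# One weight vector per context; cross-learning lets a single round's
# feedback update all C of them.
log_w = np.zeros((C, K))

# Hypothetical stochastic environment: fixed Bernoulli mean per (context, action).
means = rng.uniform(size=(C, K))


def action_dists(log_w):
    """Per-context action distributions (softmax of the weights)."""
    z = log_w - log_w.max(axis=1, keepdims=True)
    w = np.exp(z)
    return w / w.sum(axis=1, keepdims=True)


total_reward = 0.0
for t in range(T):
    P = action_dists(log_w)       # shape (C, K)
    c = rng.integers(C)           # context drawn from the known (uniform) distribution
    a = rng.choice(K, p=P[c])     # play an action for the realized context

    # Cross-learning feedback: the reward of action a under EVERY context c',
    # not just the context c we actually faced.
    r_all = rng.binomial(1, means[:, a]).astype(float)
    total_reward += r_all[c]

    # Importance weight: the marginal probability that action a is played this
    # round under the known context distribution. This makes r_all / q_a an
    # unbiased reward estimate for action a in every context simultaneously.
    q_a = P[:, a].mean()
    log_w[:, a] += eta * r_all / q_a

print(f"average realized reward: {total_reward / T:.3f}")
```

Because action a is observed with probability q_a no matter which context was realized, the estimate is unbiased for all C contexts at once, which is what lets the regret bound shed its dependence on C in the known-distribution setting.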
