Episodic Bandits with Stochastic Experts

07/07/2021
by Nihal Sharma, et al.

We study a version of the contextual bandit problem where an agent is given soft control of a node in a graph-structured environment through a set of stochastic expert policies. The agent interacts with the environment over episodes, with each episode having a different context distribution; as a result, the 'best expert' changes across episodes. Our goal is to develop an agent that tracks the best expert over episodes. We introduce the Empirical Divergence-based UCB (ED-UCB) algorithm for this setting, where the agent has no knowledge of the expert policies or of the changes in context distributions. Under mild assumptions, we show that bootstrapping from Õ(N log(N T^2 √E)) samples yields a regret of Õ(E(N+1) + N√E / T^2). If the expert policies are known to the agent a priori, the regret improves to Õ(EN) without any bootstrapping. Our analysis also tightens pre-existing logarithmic regret bounds to a problem-dependent constant in the non-episodic setting when the expert policies are known. Finally, we validate our findings empirically through simulations.
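The abstract does not spell out the ED-UCB index or its bootstrapping step, so the snippet below is only a minimal sketch of a generic UCB-style loop over a set of experts, in the spirit of the setting described above. The class name `ExpertBandit`, the Bernoulli toy rewards, and the standard mean-plus-confidence-bonus index are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch: treat each stochastic expert as an arm and pick the
# expert with the largest optimistic (UCB) index. The divergence-based index
# and episodic bootstrapping of ED-UCB are NOT reproduced here.
import numpy as np


class ExpertBandit:
    def __init__(self, n_experts: int):
        self.n_experts = n_experts
        self.counts = np.zeros(n_experts)       # pulls per expert
        self.reward_sums = np.zeros(n_experts)  # cumulative reward per expert

    def select_expert(self, t: int) -> int:
        """Return the expert with the highest mean-plus-bonus index."""
        # Play each expert once before relying on the confidence bonus.
        untried = np.where(self.counts == 0)[0]
        if untried.size > 0:
            return int(untried[0])
        means = self.reward_sums / self.counts
        bonus = np.sqrt(2.0 * np.log(t + 1) / self.counts)
        return int(np.argmax(means + bonus))

    def update(self, expert: int, reward: float) -> None:
        self.counts[expert] += 1
        self.reward_sums[expert] += reward


# Toy usage: experts behave like Bernoulli arms with unknown means.
rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])
bandit = ExpertBandit(n_experts=3)
for t in range(1000):
    k = bandit.select_expert(t)
    bandit.update(k, float(rng.random() < true_means[k]))
print("pulls per expert:", bandit.counts)
```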
