On the Performance of Thompson Sampling on Logistic Bandits
We study the logistic bandit, in which rewards are binary with success probability exp(β a^⊤θ) / (1 + exp(β a^⊤θ)) and actions a and coefficients θ lie within the d-dimensional unit ball. While prior regret bounds for algorithms that address the logistic bandit exhibit exponential dependence on the slope parameter β, we establish a regret bound for Thompson sampling that is independent of β. Specifically, we establish that, when the set of feasible actions is identical to the set of possible coefficient vectors, the Bayesian regret of Thompson sampling is Õ(d√T). We also establish an Õ(√(dηT)/λ) bound that applies more broadly, where λ is the worst-case optimal log-odds and η is the "fragility dimension," a new statistic we define to capture the degree to which an optimal action for one model fails to satisfice for others. We demonstrate that the fragility dimension plays an essential role by showing that, for any ϵ > 0, no algorithm can achieve poly(d, 1/λ)·T^{1−ϵ} regret.
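As a concrete illustration of the setting (a sketch under simplifying assumptions, not the paper's algorithm or analysis), the following toy simulation runs Thompson sampling on a logistic bandit with d = 2. Candidate coefficient vectors are discretized onto the unit circle so the posterior can be maintained exactly via Bayes' rule; the constants beta, K, and T are illustrative choices, not values from the paper.

```python
import numpy as np

# Toy logistic bandit in d = 2 with a finite grid of candidate models.
# Illustrative sketch only; all parameter values below are assumptions.
rng = np.random.default_rng(0)
beta = 3.0                       # slope parameter beta
K = 64                           # grid size for candidate models / actions
angles = np.linspace(0.0, 2.0 * np.pi, K, endpoint=False)
models = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit vectors
actions = models                 # feasible actions == possible coefficients
theta_star = models[rng.integers(K)]  # true (unknown) coefficient vector

def success_prob(a, theta):
    """Logistic link: exp(beta a.theta) / (1 + exp(beta a.theta))."""
    return 1.0 / (1.0 + np.exp(-beta * (a @ theta)))

T = 2000
log_post = np.zeros(K)           # log-posterior over the grid (uniform prior)
p_star = success_prob(theta_star, theta_star)  # optimal action is theta_star
regret = 0.0

for t in range(T):
    # 1) Sample a model from the current posterior.
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    theta_tilde = models[rng.choice(K, p=post)]
    # 2) Play the action that is greedy for the sampled model.
    a = actions[np.argmax(actions @ theta_tilde)]
    # 3) Observe a Bernoulli reward and update the posterior.
    r = rng.random() < success_prob(a, theta_star)
    p_all = success_prob(a, models.T)   # likelihood under each candidate model
    log_post += np.log(p_all if r else 1.0 - p_all)
    regret += p_star - success_prob(a, theta_star)  # accumulate pseudo-regret
```

With feasible actions equal to the set of possible coefficient vectors, as in the Õ(d√T) result, the greedy action for the sampled model is the sampled vector itself, and the cumulative pseudo-regret grows sublinearly as the posterior concentrates near theta_star.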