High-Dimensional Sparse Linear Bandits

11/08/2020
by Botao Hao, et al.

Stochastic linear bandits with high-dimensional sparse features are a practical model for a variety of domains, including personalized medicine and online advertising. We derive a novel Ω(n^2/3) dimension-free minimax regret lower bound for sparse linear bandits in the data-poor regime, where the horizon is smaller than the ambient dimension and the feature vectors admit a well-conditioned exploration distribution. This is complemented by a nearly matching upper bound for an explore-then-commit algorithm, showing that Θ(n^2/3) is the optimal rate in the data-poor regime. The results complement existing bounds for the data-rich regime and provide another example where carefully balancing the trade-off between information and regret is necessary. Finally, we prove a dimension-free O(√n) regret upper bound under an additional assumption on the magnitude of the signal for relevant features.
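The explore-then-commit idea referenced above admits a simple illustration. The sketch below is not the paper's algorithm, only a minimal version under assumed simplifications: a finite arm set, Gaussian reward noise, uniform exploration over arms (standing in for a well-conditioned exploration distribution), and a Lasso estimate of the sparse parameter. The names explore_then_commit, n_explore, and lasso_alpha are illustrative, not from the paper. The rough intuition for the n^2/3 rate is that exploration costs regret on the order of the number of exploration rounds, while the commit phase pays the horizon times the estimation error, which shrinks as exploration grows; balancing the two suggests exploring for roughly n^2/3 rounds.

```python
import numpy as np
from sklearn.linear_model import Lasso


def explore_then_commit(actions, theta_star, horizon, n_explore,
                        noise_std=1.0, lasso_alpha=0.1, seed=0):
    """Minimal explore-then-commit sketch for a sparse linear bandit.

    actions: (K, d) array of arm feature vectors.
    theta_star: (d,) true sparse parameter (used here only to simulate rewards).
    """
    rng = np.random.default_rng(seed)
    K, d = actions.shape
    feats, rewards = [], []

    # Exploration phase: sample arms uniformly and record noisy rewards.
    for _ in range(n_explore):
        a = actions[rng.integers(K)]
        feats.append(a)
        rewards.append(a @ theta_star + noise_std * rng.standard_normal())

    # Sparse estimation: fit the Lasso on the exploration data.
    theta_hat = Lasso(alpha=lasso_alpha, fit_intercept=False).fit(
        np.array(feats), np.array(rewards)).coef_

    # Commit phase: play the greedy arm under the estimate for all remaining rounds.
    greedy = actions[np.argmax(actions @ theta_hat)]
    exploit_reward = (horizon - n_explore) * (greedy @ theta_star)
    return theta_hat, sum(rewards) + exploit_reward


# Example in a data-poor setting (horizon n smaller than dimension d),
# with the exploration length set to roughly n^(2/3) as the abstract suggests.
d, K, s, n = 1000, 50, 5, 500
theta = np.zeros(d); theta[:s] = 1.0
arms = np.random.default_rng(1).standard_normal((K, d)) / np.sqrt(d)
theta_hat, total_reward = explore_then_commit(arms, theta, horizon=n,
                                              n_explore=int(n ** (2 / 3)))
```

The exploration length and Lasso regularization here are placeholders; the paper's tuning and assumptions determine the actual constants behind the Θ(n^2/3) guarantee.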
