Structured Linear Contextual Bandits: A Sharp and Geometric Smoothed Analysis

02/26/2020 ∙ by Vidyashankar Sivakumar, et al.

Bandit learning algorithms typically must balance exploration and exploitation. However, in many practical applications the worst-case scenarios that require systematic exploration are seldom encountered. In this work, we consider a smoothed setting for structured linear contextual bandits in which the adversarial contexts are perturbed by Gaussian noise and the unknown parameter θ^* has structure, e.g., sparsity, group sparsity, or low rank. We propose simple greedy algorithms for both the single-parameter and multi-parameter (i.e., a different parameter for each context) settings and provide a unified regret analysis for θ^* with any assumed structure. The regret bounds are expressed in terms of geometric quantities, such as Gaussian widths, associated with the structure of θ^*. As a consequence of our improved analysis, we also obtain sharper regret bounds than earlier work for the unstructured θ^* setting. We show that the smoothed setting provides implicit exploration, which is why a simple greedy algorithm works.
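To make the greedy scheme concrete, the following is a minimal sketch of a single-parameter round loop in the smoothed setting described above, assuming a sparse θ^* and using LASSO as the structured estimator. The simulation setup, regularization choice, and refit schedule are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

d, K, T = 50, 10, 2000            # dimension, arms per round, number of rounds
sigma_ctx, sigma_rwd = 0.5, 0.1   # context-perturbation and reward-noise levels

# Ground-truth parameter with assumed structure: s-sparse.
theta_star = np.zeros(d)
theta_star[rng.choice(d, size=5, replace=False)] = rng.normal(size=5)

X_hist, r_hist = [], []
theta_hat = np.zeros(d)
regret = 0.0

for t in range(T):
    # Adversarial base contexts (here just arbitrary vectors), perturbed by
    # Gaussian noise as in the smoothed setting.
    mu = rng.uniform(-1, 1, size=(K, d))
    contexts = mu + sigma_ctx * rng.normal(size=(K, d))

    # Greedy choice: no explicit exploration, just exploit the current estimate.
    a_t = int(np.argmax(contexts @ theta_hat))

    reward = contexts[a_t] @ theta_star + sigma_rwd * rng.normal()
    regret += np.max(contexts @ theta_star) - contexts[a_t] @ theta_star

    X_hist.append(contexts[a_t])
    r_hist.append(reward)

    # Structured estimator: LASSO exploits the assumed sparsity of theta_star.
    # Refit periodically; the regularization level is an illustrative choice.
    if t >= 10 and t % 10 == 0:
        lam = sigma_rwd * np.sqrt(np.log(d) / (t + 1))
        theta_hat = Lasso(alpha=lam, fit_intercept=False).fit(
            np.array(X_hist), np.array(r_hist)).coef_

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```

The key point the sketch illustrates is that the Gaussian perturbation of the contexts supplies enough diversity in the observed feature directions that the purely greedy arm choice still yields informative data for the structured estimator.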
