Structured Linear Contextual Bandits: A Sharp and Geometric Smoothed Analysis

02/26/2020
by   Vidyashankar Sivakumar, et al.

Bandit learning algorithms typically involve balancing exploration and exploitation. However, in many practical applications, worst-case scenarios requiring systematic exploration are seldom encountered. In this work, we consider a smoothed setting for structured linear contextual bandits in which the adversarial contexts are perturbed by Gaussian noise and the unknown parameter θ^* has structure, e.g., sparsity, group sparsity, low rank, etc. We propose simple greedy algorithms for both the single- and multi-parameter (i.e., a different parameter for each context) settings and provide a unified regret analysis for θ^* with any assumed structure. The regret bounds are expressed in terms of geometric quantities, such as Gaussian widths, associated with the structure of θ^*. As a consequence of our improved analysis, we also obtain sharper regret bounds than earlier work for the unstructured θ^* setting. We show that the smoothed setting provides implicit exploration, so a simple greedy algorithm suffices.
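To make the setting concrete, below is a minimal sketch of a greedy algorithm in a smoothed linear contextual bandit, assuming a single unknown parameter, ridge-regularized least-squares estimation, and synthetic adversarial contexts perturbed by Gaussian noise. The constants, the sparse parameter, and the estimator choice are illustrative assumptions, not the paper's exact algorithm (which handles general structure through structured estimators and a geometric analysis).

```python
# Hedged sketch: greedy (no exploration bonus) linear contextual bandit
# under Gaussian-smoothed contexts. All names and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 20, 10, 2000           # dimension, arms per round, horizon
sigma = 0.5                       # std. dev. of the Gaussian perturbation on contexts
noise_sd = 0.1                    # reward noise level
theta_star = np.zeros(d)
theta_star[:5] = rng.normal(size=5)   # sparse true parameter (example of structure)

A = np.eye(d)                     # regularized Gram matrix (ridge, lambda = 1)
b = np.zeros(d)                   # running sum of chosen_context * reward
cum_regret = 0.0

for t in range(T):
    # Adversarial base contexts, then the smoothing (Gaussian) perturbation.
    base = rng.uniform(-1, 1, size=(K, d))
    contexts = base + sigma * rng.normal(size=(K, d))

    # Greedy choice: exploit the current least-squares estimate only.
    theta_hat = np.linalg.solve(A, b)
    arm = int(np.argmax(contexts @ theta_hat))

    reward = contexts[arm] @ theta_star + noise_sd * rng.normal()
    cum_regret += np.max(contexts @ theta_star) - contexts[arm] @ theta_star

    # Rank-one update of the ridge statistics.
    A += np.outer(contexts[arm], contexts[arm])
    b += reward * contexts[arm]

print(f"cumulative regret after {T} rounds: {cum_regret:.2f}")
```

For structured θ^* (e.g., sparsity), the ridge step above would be replaced by a structured estimator such as the Lasso; the point of the sketch is only that the Gaussian perturbation of the contexts supplies the exploration that the greedy rule itself omits.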

