Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks

by Qin Ding, et al.

Stochastic linear contextual bandit algorithms have substantial practical applications, such as recommender systems, online advertising, and clinical trials. Recent work shows that optimal bandit algorithms are vulnerable to adversarial attacks and can fail completely in their presence. Existing robust bandit algorithms only handle reward attacks in the non-contextual setting and cannot improve robustness in the general and popular contextual bandit environment. In addition, none of the existing methods can defend against attacked context. In this work, we provide the first robust bandit algorithm for the stochastic linear contextual bandit setting under a fully adaptive and omniscient attack. Our algorithm works not only under reward attacks but also under attacked context. Moreover, it requires no information about the attack budget or the particular form of the attack. We provide theoretical guarantees for the proposed algorithm and show through extensive experiments that it significantly improves robustness against various popular attacks.
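To make the threat model concrete, the following is a minimal sketch (not the paper's algorithm) of a stochastic linear contextual bandit round with a budget-limited adversary that corrupts the observed reward. All names, the sign-flip attack, and the budget value are illustrative assumptions for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a stochastic linear contextual bandit with K arms
# in d dimensions, plus an adversary that flips the sign of the reward on
# the learner's chosen arm until its attack budget is exhausted.
d, K, T, budget = 5, 4, 200, 30.0
theta_star = rng.normal(size=d)              # unknown true parameter
theta_star /= np.linalg.norm(theta_star)

spent = 0.0                                  # attack budget consumed so far
rewards = []
for t in range(T):
    contexts = rng.normal(size=(K, d))       # one feature vector per arm
    arm = rng.integers(K)                    # placeholder policy (random)
    true_reward = contexts[arm] @ theta_star + 0.1 * rng.normal()
    attack = 0.0
    if spent + abs(2 * true_reward) <= budget:
        attack = -2 * true_reward            # flip the observed reward's sign
        spent += abs(attack)
    rewards.append(true_reward + attack)     # learner only sees this value
```

A robust algorithm in the sense of the abstract must keep low regret against the uncorrupted rewards even though it only observes the corrupted sequence, without knowing `budget` or the attack's form; the paper additionally allows the `contexts` themselves to be perturbed.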




When Are Linear Stochastic Bandits Attackable?

We study adversarial attacks on linear stochastic bandits, a sequential ...

Adversarial Attacks on Linear Contextual Bandits

Contextual bandit algorithms are applied in a wide range of domains, fro...

Efficient Action Poisoning Attacks on Linear Contextual Bandits

Contextual bandit algorithms have many applications in a variety of scenar...

Stochastic Linear Bandits Robust to Adversarial Attacks

We consider a stochastic linear bandit problem in which the rewards are ...

Homomorphically Encrypted Linear Contextual Bandit

Contextual bandit is a general framework for online learning in sequenti...

Hierarchical Exploration for Accelerating Contextual Bandits

Contextual bandit learning is an increasingly popular approach to optimi...

Robust Actor-Critic Contextual Bandit for Mobile Health (mHealth) Interventions

We consider the actor-critic contextual bandit for the mobile health (mH...