Adversarial Attacks on Linear Contextual Bandits

02/10/2020
by Evrard Garcelon, et al.

Contextual bandit algorithms are applied in a wide range of domains, from advertising to recommender systems, from clinical trials to education. In many of these domains, malicious agents may have incentives to attack the bandit algorithm to induce it to perform a desired behavior. For instance, an unscrupulous ad publisher may try to increase their own revenue at the expense of the advertisers; a seller may want to increase the exposure of their products, or thwart a competitor's advertising campaign. In this paper, we study several attack scenarios and show that a malicious agent can force a linear contextual bandit algorithm to pull any desired arm T - o(T) times over a horizon of T steps, while applying adversarial modifications to either rewards or contexts whose cumulative magnitude grows only as O(log T). We also investigate the case where a malicious agent is interested in affecting the behavior of the bandit algorithm in a single context (e.g., for a specific user). We first provide sufficient conditions for the feasibility of the attack, and we then propose an efficient algorithm to perform it. We validate our theoretical results in experiments on both synthetic and real-world datasets.
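To make the reward-poisoning threat model concrete, here is a minimal illustrative sketch (not the paper's algorithm): an attacker perturbs the observed rewards of every non-target arm seen by a simple epsilon-greedy learner on a hypothetical 2-armed bandit, flipping the learner's greedy choice toward a suboptimal target arm. The arm means, the epsilon value, and the fixed penalty of 1.0 are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 5000
true_means = [0.2, 0.8]   # hypothetical: arm 1 is truly better
target = 0                # attacker wants the suboptimal arm 0 pulled
eps = 0.1                 # exploration rate of the epsilon-greedy learner

counts = np.zeros(2)
sums = np.zeros(2)
pulls_of_target = 0

for t in range(T):
    # Epsilon-greedy: explore uniformly with prob. eps (and until
    # every arm has been tried once), otherwise exploit the best mean.
    if rng.random() < eps or counts.min() == 0:
        arm = int(rng.integers(2))
    else:
        arm = int(np.argmax(sums / counts))

    reward = true_means[arm] + rng.normal(0.0, 0.1)

    if arm != target:
        # Attack: depress the observed reward of non-target arms so
        # their empirical means fall below the target arm's.
        reward -= 1.0

    counts[arm] += 1
    sums[arm] += reward
    pulls_of_target += (arm == target)

print(pulls_of_target / T)  # fraction of rounds spent on the target arm
```

Under this corruption the learner's estimate for arm 1 sits near -0.2 versus roughly 0.2 for arm 0, so after the initial exploration the greedy step picks the target arm on almost every exploitation round, mirroring the T - o(T) behavior the paper proves for linear contextual bandits under much smaller, O(log T)-bounded perturbations.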


Related research:

06/05/2021 - Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks
Stochastic linear contextual bandit algorithms have substantial applicat...

12/10/2021 - Efficient Action Poisoning Attacks on Linear Contextual Bandits
Contextual bandit algorithms have many applications in a variety of scenar...

08/17/2018 - Data Poisoning Attacks in Contextual Bandits
We study offline data poisoning attacks in contextual bandits, a class o...

10/18/2021 - When Are Linear Stochastic Bandits Attackable?
We study adversarial attacks on linear stochastic bandits, a sequential ...

10/29/2018 - Adversarial Attacks on Stochastic Bandits
We study adversarial attacks that manipulate the reward signals to contr...

06/04/2013 - A Gang of Bandits
Multi-armed bandit problems are receiving a great deal of attention beca...
