Minimalistic Attacks: How Little it Takes to Fool a Deep Reinforcement Learning Policy

11/10/2019
by Xinghua Qu, et al.

Recent studies have revealed that neural network-based policies can be easily fooled by adversarial examples. However, most prior works analyze the effect of perturbing every pixel of every frame, assuming white-box access to the policy. In this paper, we take a more minimalistic view of adversary generation, with the goal of unveiling the limits of a model's vulnerability. In particular, we explore highly restrictive attacks under three key settings: (1) black-box policy access, where the attacker can only observe the input (state) and output (action probabilities) of the RL policy; (2) fractional-state adversary, where only a few pixels are perturbed, with the extreme case being a single-pixel adversary; and (3) tactically-chanced attack, where only significant frames are tactically chosen to be attacked.
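To make these settings concrete, here is a minimal sketch, in Python, of a black-box, single-pixel adversary: it queries a policy only through its inputs and outputs, and searches for the one pixel whose perturbation most reduces the probability of the action the clean policy prefers. The `policy` function, the random-search loop, and all parameter names are illustrative assumptions for this sketch, not the authors' actual optimization procedure.

```python
import numpy as np

def single_pixel_attack(policy, state, n_candidates=128, intensity=1.0, rng=None):
    """Search for a single-pixel perturbation that degrades a black-box policy.

    `policy` is assumed to map a state (an H x W[, C] image with values
    in [0, 1]) to a vector of action probabilities; only this input/output
    interface is used, matching the black-box setting. (Hypothetical
    interface for illustration.)
    """
    rng = np.random.default_rng() if rng is None else rng
    clean_probs = policy(state)
    preferred = int(np.argmax(clean_probs))  # action the clean policy would take

    best_state, best_prob = state, clean_probs[preferred]
    h, w = state.shape[:2]
    for _ in range(n_candidates):
        # Pick one pixel at random and push its value up or down.
        y, x = rng.integers(h), rng.integers(w)
        candidate = state.copy()
        candidate[y, x] = np.clip(candidate[y, x] + rng.choice([-intensity, intensity]),
                                  0.0, 1.0)
        prob = policy(candidate)[preferred]
        if prob < best_prob:  # keep whichever pixel hurts the preferred action most
            best_state, best_prob = candidate, prob
    return best_state
```

A tactically-chanced variant would invoke this search only on frames judged significant, for example those where the gap between the top two action probabilities is small, so a tiny perturbation is most likely to flip the policy's decision.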
