
Optimal Attacks on Reinforcement Learning Policies

Control policies trained using Deep Reinforcement Learning have recently been shown to be vulnerable to adversarial attacks that introduce even very small perturbations to the policy input. The attacks proposed so far have been designed using heuristics and build on existing adversarial-example crafting techniques used to dupe classifiers in supervised learning. In contrast, this paper investigates the problem of devising optimal attacks for a well-defined attacker objective, e.g., minimizing the agent's average reward. When the policy, the system dynamics, and the rewards are known to the attacker, a scenario referred to as a white-box attack, designing optimal attacks amounts to solving a Markov Decision Process. For what we call black-box attacks, where neither the policy nor the system is known, optimal attacks can be trained using Reinforcement Learning techniques. Through numerical experiments, we demonstrate the efficiency of our attacks compared to existing attacks (usually based on gradient methods). We further quantify the potential impact of attacks and relate it to the smoothness of the policy under attack: smooth policies are naturally less prone to attacks, which explains why policies that are Lipschitz with respect to the state are more resilient. Finally, we show that from the agent's perspective, the system uncertainties and the attacker can be modeled as a Partially Observable Markov Decision Process (POMDP). We demonstrate that using Reinforcement Learning techniques tailored to POMDPs (e.g., using Recurrent Neural Networks) leads to more resilient policies.





1 Introduction

Advances in Deep Reinforcement Learning (RL) have made it possible to train end-to-end policies achieving superhuman performance on a large variety of tasks, such as playing Atari games [1, 2], playing Go [3, 4], as well as controlling systems with continuous state and action spaces [5, 6, 7]. Recently, some of these policies have been shown to be vulnerable to adversarial attacks at test time, see e.g. [8, 9]. Even though these attacks only introduce small perturbations to the successive inputs of the policies, they can significantly impact the average reward collected by the agent.

So far, attacks on RL policies have been designed using heuristic techniques, based on gradient methods, essentially the Fast Gradient Sign Method (FGSM). As such, they do not explicitly aim at minimizing the reward collected by the agent. In contrast, in this paper, we investigate the problem of casting optimal attacks with well-defined objectives, for example minimizing the average reward collected by the agent. We believe that casting optimal attacks is crucial when assessing the robustness of RL policies, since ideally, the agent should learn and apply policies that resist any possible attack (of course with limited and reasonable amplitude).

For a given policy learnt by the agent, we show that the problem of devising an optimal attack can be formulated as a Markov Decision Process (MDP), defined through the system dynamics, the agent’s reward function and policy. We are mainly interested in black-box attacks: to devise them, the adversary only observes the variables from which she builds her reward. For example, if the objective is to lead the system to a certain set of (bad) states, the adversary may just observe the states as the system evolves. If the goal is to minimize the cumulative reward collected by the agent, the adversary may just observe the system states and the instantaneous rewards collected by the agent. In black-box attacks, the aforementioned MDP is unknown, and optimal attacks can be trained using RL algorithms. The action selected by the adversary in this MDP corresponds to a perturbed state to be fed to the agent’s policy. As a consequence, the action space is typically very large. To deal with this issue, our optimal attack is trained using DDPG [5], a Deep RL algorithm initially tailored to continuous action space.

We train and evaluate optimal attacks for some OpenAI Gym [10] environments with discrete action spaces (discrete MountainCar, Cartpole, and Pong), and continuous state-action spaces (continuous MountainCar and continuous LunarLander). As expected, optimal attacks outperform existing attacks.

We discuss how the methods used to train RL policies impact their resilience to attacks. We show that the damage caused by attacks is upper bounded by a quantity directly related to the smoothness of the policy under attack (a policy is smooth when the action it selects varies smoothly with the state; see Proposition 5.1 for details). Hence, training methods such as DDPG that lead to smooth policies should be more robust. Finally, we remark that when under attack, the agent faces an environment with uncertain state feedback, and her sequential decision problem can be modelled as a Partially Observable MDP (POMDP). This suggests that training methods specifically tailored to POMDPs (e.g. DRQN [11]) result in more resilient policies. These observations are confirmed by our experiments.

2 Related Work

Adversarial attacks on RL policies have received some attention over the last couple of years. So far, attacks mainly concern the system at test time, where the agent has trained a policy using some Deep RL algorithm (except for [12], which considers an attack during training). The design of these attacks [8, 9, 13] is inspired by the Fast Gradient Sign Method (FGSM) [14], which was originally used to fool Deep Learning classifiers. FGSM is a gradient method that changes the input with the aim of optimizing some objective function. This function, and its gradient, are sampled using observations: in the case of an agent trained using Deep RL, this function can hence be related to a distribution of actions given a state (the actor network outputs a distribution over actions), or to the Q-value function (the critic outputs approximate Q-values). In [8], FGSM is used to minimize the probability of the agent selecting the action it deems optimal. On the other hand, in [9], the authors develop a gradient method that maximizes the probability of taking the action with minimal Q-value. The aforementioned attacks are based on the unperturbed policy of the agent, or its Q-value function, and do not consider optimizing the attack over all possible trajectories. Therefore, they can be considered a first-order approximation of an optimal attack. If the adversary wishes to minimize the average cumulative reward of the agent, she needs to work on the perturbed agent policy (to assess the impact of an attack, we need to compute the rewards after the attack).

Proactive and reactive defence methods have been proposed to make RL policies robust to gradient attacks. Proactive methods are essentially similar to those in supervised learning, i.e., they are based on injecting adversarial inputs during training [15, 9]. A few reactive methods have also been proposed [13]. There, the idea is to train a separate network predicting the next state. This network is then used to replace the state input by the predicted state if the input is believed to be too corrupted. But as we show in this paper, adversarial examples introduce partial observability, and the problem is no longer Markovian. It is known [16] that deterministic memory-less policies are inadequate to deal with partial observability, and some kind of memory is needed to find a robust policy. In this work, we make use of recurrent neural networks to show that we can use techniques from the literature on Partially Observable MDPs to robustify RL policies.

It is finally worth mentioning that there has been some work studying the problem of robust RL with a game-theoretical perspective [17, 18]. However, these papers consider a very different setting where the adversary has a direct impact on the system (not indirectly through the agent as in our work). In a sense, they are more related to RL for multi-agent systems.

3 Preliminaries

This section provides a brief technical background on Markov Decision Processes and Deep Reinforcement Learning. The notions and methods introduced here are used extensively in the remainder of the paper.

Markov Decision Process.

A Markov Decision Process (MDP) defines a discrete-time controlled Markov chain. It is defined by a tuple M = (S, A, P, p0, q). S is the finite state space, A(s) is the finite set of actions available in state s, P represents the system dynamics, where P(·|s, a) is the distribution of the next state given that the current state is s and the selected action is a, p0 is the distribution of the initial state, and q characterizes rewards, where q(·|s, a) is the distribution of the reward collected in state s when action a is selected. We denote by r(s, a) the expectation of this reward, and assume that it is bounded: |r(s, a)| ≤ 1 for all (s, a). A policy π for M maps each state s to a distribution over the selected action; π(a|s) denotes the probability of choosing action a in state s. The value function V^π of a policy π gives, for every s, the average cumulative discounted reward under π when the system starts in state s: V^π(s) = E[∑_{t≥0} λ^t r(s_t, a_t) | s_0 = s], where s_t and a_t are the state and the selected action at time t under policy π, and λ ∈ (0, 1) is the discount factor. The Q-value function Q^π of policy π maps a (state, action) pair to the average cumulative discounted reward obtained starting in state s when the first selected action is a and subsequent actions are chosen according to π: Q^π(s, a) = r(s, a) + λ ∑_{s'} P(s'|s, a) V^π(s'). For a given MDP M, the objective is to devise a policy with maximal value in every state.
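The fixed-point characterization of V^π above can be illustrated with a short sketch: iterative policy evaluation on a toy two-state MDP (the MDP below is invented for illustration, not taken from the paper).

```python
# Iterative policy evaluation: V is updated with the Bellman operator
# V(s) <- sum_a pi(a|s) * (r(s,a) + lam * sum_s' P(s'|s,a) * V(s'))
# until it converges to V^pi. Toy MDP, invented numbers.

def evaluate_policy(P, r, pi, lam=0.9, iters=1000):
    n = len(P)
    V = [0.0] * n
    for _ in range(iters):
        V = [
            sum(pi[s][a] * (r[s][a] + lam * sum(P[s][a][sp] * V[sp] for sp in range(n)))
                for a in range(len(P[s])))
            for s in range(n)
        ]
    return V

# Two states, one action; state 0 self-loops with reward 1, state 1 with reward 0.
P = [[[1.0, 0.0]], [[0.0, 1.0]]]
r = [[1.0], [0.0]]
pi = [[1.0], [1.0]]
V = evaluate_policy(P, r, pi)
print(V)  # V(0) ~= 1/(1-0.9) = 10, V(1) = 0
```

With λ = 0.9, the self-loop in state 0 gives V(0) = 1/(1 − λ) = 10, matching the geometric series in the definition.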

Deep Reinforcement Learning. To train policies with maximal rewards for MDPs with large state or action spaces, Deep Reinforcement Learning consists in parametrizing the set of policies, or particular functions of interest such as the value function or the Q-value function of a given policy, by a deep neural network. In this paper, we consider various Deep RL techniques to train either the policy of the agent or the attack. These include:

DQN [1]: the network aims at approximating the Q-value function of the optimal policy.

DDPG [5]: an actor-critic method developed to deal with continuous state-action spaces. To this aim, the actor network returns a single action rather than a distribution over actions. DDPG is particularly well-suited to train attacks (since it corresponds to solving an MDP with large action space).

DRQN [11]: introduces memory in DQN by replacing the first post-convolutional fully-connected layer by a recurrent LSTM layer. This recurrent structure allows us to deal with partial observability, so DRQN works better on POMDPs.

4 Optimal Adversarial Attacks

To attack a given policy trained by the main agent, an adversary can slightly modify the sequence of inputs of this policy. The changes are imposed to be small, according to some distance, so that the attack remains difficult to detect. The adversary aims at optimizing inputs with a precise objective in mind, for example, to cause as much damage as possible to the agent.

We denote by M the MDP solved by the agent, i.e., the agent policy π has been trained for M.

4.1 The attack MDP

To attack the policy π, the adversary proceeds as follows. At time t, the adversary observes the system state s_t. She then selects a perturbed state s̃_t, which becomes the input of the agent policy π. The agent hence chooses an action according to π(·|s̃_t). The adversary then collects a random reward, defined depending on her objective, which is assumed to be a function of the true state and the action selected by the agent. An attack is defined by a mapping α, where α(s) is the perturbed state given that the true state is s.
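The interaction protocol above can be sketched in a few lines: the adversary sees the true state, the agent only sees the perturbed one, and the system evolves according to the agent's (misled) action. The environment, policy, and attack below are toy stand-ins invented for the example, not the paper's setup.

```python
# Rollout under attack: adversary perturbs each observation before the agent
# sees it; the adversary's reward is the negative of the agent's return.

class AttackedRollout:
    def __init__(self, env, agent_policy, attack):
        self.env, self.agent_policy, self.attack = env, agent_policy, attack

    def run_episode(self, max_steps=100):
        s = self.env.reset()
        agent_return = 0.0
        for _ in range(max_steps):
            s_perturbed = self.attack(s)         # adversary picks s~ near s
            a = self.agent_policy(s_perturbed)   # agent acts on perturbed input
            s, reward, done = self.env.step(a)
            agent_return += reward
            if done:
                break
        return agent_return, -agent_return       # (agent, adversary) returns

class ToyEnv:
    def reset(self):
        return 0.0
    def step(self, a):
        # reward 1 only when the agent picks the "correct" action 0
        return 0.0, 1.0 if a == 0 else 0.0, False

policy = lambda s: round(s)                      # acts on what it observes
clean = AttackedRollout(ToyEnv(), policy, lambda s: s)
attacked = AttackedRollout(ToyEnv(), policy, lambda s: s + 1.0)
print(clean.run_episode(10))     # (10.0, -10.0)
print(attacked.run_episode(10))  # (0.0, -0.0)
```

Even a constant one-unit shift of the observation flips every action of this toy policy and drives the agent's return to zero.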

Constrained perturbations. For the attack to be imperceptible, the input should be only slightly modified. Formally, we assume that we can define a notion of distance d(s, s') between two states s and s'. The state space of most RL problems can be seen as a subset of a Euclidean space, in which case d is just the Euclidean distance. We impose that, given a state s of the system, the adversary can only select a perturbed state in B(s, ε) = {s' : d(s, s') ≤ ε}; ε is the maximal amplitude of the state perturbation.

System dynamics and agent’s reward under attack. Given that the state is s at time t and that the adversary selects a modified state s̃, the agent selects an action according to the distribution π(·|s̃). Thus, the system state evolves to a random state with distribution ∑_a π(a|s̃) P(·|s, a). The agent’s instantaneous reward is a random variable with distribution ∑_a π(a|s̃) q(·|s, a).
Adversary’s reward. The attack is shaped depending on the adversary’s objective. The adversary typically defines her reward as a function of the true state and the action selected by the agent, or more concisely as a direct function of the reward collected by the agent. The adversary might be interested in guiding the agent to some specific states. In control systems, the adversary may wish to induce oscillations in the system output or to reduce its controllability (this can be realized by choosing a reward equal to the energy spent by the agent, i.e., proportional to ‖a‖² if the agent selects action a). The most natural objective for the adversary is to minimize the average cumulative reward collected by the agent. In this case, the adversary would set her instantaneous reward equal to the opposite of the agent’s reward.

We denote by q̃(·|s, s̃) the distribution of the reward collected by the adversary in state s when she modifies the input to s̃, and by r̃(s, s̃) its expectation. For example, when the adversary wishes to minimize the agent’s average cumulative reward, we have r̃(s, s̃) = −∑_a π(a|s̃) r(s, a).

We have shown that designing an optimal attack corresponds to identifying a policy that solves the following MDP: M̃ = (S, {B(s, ε)}_s, P̃, p0, q̃). When the parameters of this MDP are known to the adversary, a scenario referred to as a white-box attack, finding the optimal attack amounts to solving the MDP, e.g., using classical methods (value or policy iteration). More realistically, the adversary may not know the parameters of M̃, and only observe the state evolution and her successive instantaneous rewards. This scenario is called a black-box attack, and in this case, the adversary can identify the optimal attack using Reinforcement Learning algorithms.
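In the white-box case, the attack MDP can be solved directly. A minimal sketch, on a tiny discrete example with invented numbers: the adversary's action in state s is a perturbed state drawn from a small neighbourhood (playing the role of B(s, ε)), her reward is the negative of the agent's expected reward, and value iteration yields the optimal attack.

```python
# White-box attack MDP solved by value iteration (toy sketch).
# The agent acts according to pi(.|s~), but rewards and dynamics use the
# true state s. All numbers below are invented for illustration.

n_states = 3
neighbours = {s: [s, (s + 1) % n_states] for s in range(n_states)}  # "B(s, eps)"
pi = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]     # agent policy pi(a|s), 2 actions
P = [[1, 2], [2, 0], [0, 1]]                  # deterministic next state P[s][a]
r = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]      # agent reward r(s, a)
lam = 0.9

# Adversary maximizes -r, i.e. minimizes the agent's reward.
V = [0.0] * n_states
for _ in range(2000):
    V = [
        max(
            sum(pi[sp][a] * (-r[s][a] + lam * V[P[s][a]]) for a in range(2))
            for sp in neighbours[s]
        )
        for s in range(n_states)
    ]

optimal_attack = [
    max(neighbours[s],
        key=lambda sp: sum(pi[sp][a] * (-r[s][a] + lam * V[P[s][a]]) for a in range(2)))
    for s in range(n_states)
]
print(optimal_attack)
```

The resulting mapping s ↦ optimal_attack[s] is exactly an attack α in the sense of Section 4.1: for each true state it names the perturbed state to present to the agent.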

Observe that when the adversarial attack policy is α, the system dynamics and rewards correspond to a scenario where the agent applies the perturbed policy π∘α, defined by the distributions π(·|α(s)).

4.2 Minimizing agent’s average reward

Next, we give interesting properties of the MDP M̃. When the adversary aims at minimizing the average cumulative reward of the agent, the reward collected by the agent can be considered as a cost by the adversary. In this case, r̃(s, s̃) = −∑_a π(a|s̃) r(s, a), and one can easily relate the value function Ṽ^α (for the MDP M̃) of an attack policy α to that of the perturbed agent policy π∘α (for the MDP M):

Ṽ^α(s) = −V^{π∘α}(s).   (1)
The Q-value function Q̃^α of an attack policy α can also be related to the Q-value function of the agent policy under attack, Q^{π∘α}. Indeed, for all (s, s̃): Q̃^α(s, s̃) = −∑_a π(a|s̃) Q^{π∘α}(s, a).

In particular, if the agent policy is deterministic (π(s̃) is the action selected under π in state s̃), we simply have:

Q̃^α(s, s̃) = −Q^{π∘α}(s, π(s̃)).   (2)
Equation (2) has an important consequence when we train an attack policy using Deep RL algorithms. It implies that evaluating the Q-value of an attack policy α can be done by evaluating the Q-value function of the perturbed agent policy π∘α. In the case of off-policy learning, the former evaluation would require storing in the replay buffer experiences of the type (s_t, s̃_t, r_t, s_{t+1}) (r_t is the observed reward), whereas the latter just stores (s_t, a_t, r_t, s_{t+1}) (but requires observing the actions of the agent). This simplifies learning considerably when the state space is much larger than the action space.

4.3 Training optimal attacks

Training an optimal attack can be very complex, since it corresponds to solving an MDP where the action space size is similar to that of the state space. In our experiments, we use two (potentially combined) techniques to deal with this issue: (i) feature extraction when this is at all possible, and (ii) specific Deep RL techniques tailored to very large (or even continuous) state and action spaces, such as DDPG.

Feature extraction. One may extract important features of the system admitting a low-dimensional representation. Formally, this means that, using some expert knowledge about the system, one can identify a bijective mapping φ between the state s in S and its features φ(s) in a space F of lower dimension. In general, this mapping can also be chosen such that the image of the ball B(s, ε) is easy to represent in F (typically this image would also be a ball around φ(s), the feature representation of s). It is then enough to work in F. When F is of reasonable size, one may then rely on existing value-based techniques (e.g. DQN) to train the attack. An example where features can easily be extracted is the Atari game of Pong: the system state is fully represented by the positions of the ball and the two bars. Finally, we believe it very likely that an adversary trying to disrupt the agent's policy would conduct an advanced feature-engineering effort before casting her attack.
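A minimal sketch of such feature extraction, in the spirit of the Pong example: recover object centroids from a thresholded frame. The frame, object locations, and column partitioning below are synthetic, invented for illustration.

```python
# Toy centroid-based feature extraction from a binary 84x84 frame.
# Each column range is assumed to contain one object (e.g. left bar,
# play field with the ball, right bar).

import numpy as np

def object_centroids(frame, column_ranges):
    """Return the centre of mass of bright pixels within each column range."""
    feats = []
    for lo, hi in column_ranges:
        ys, xs = np.nonzero(frame[:, lo:hi] > 0)
        if len(ys) == 0:
            feats.append((-1.0, -1.0))          # object not visible
        else:
            feats.append((ys.mean(), xs.mean() + lo))
    return feats

frame = np.zeros((84, 84))
frame[10:14, 2] = 1          # left bar
frame[40:42, 41:43] = 1      # ball
frame[60:64, 81] = 1         # right bar
feats = object_centroids(frame, [(0, 5), (5, 79), (79, 84)])
print(feats)  # [(11.5, 2.0), (40.5, 41.5), (61.5, 81.0)]
```

Since each bar has one degree of freedom and the ball has two, four of these six numbers suffice as the low-dimensional state, matching the 4-dimensional feature vector used for Pong in Section 6.2.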

Deep RL: DDPG. In many systems, it may not be easy to extract interesting features. In that case, one may rely on Deep RL algorithms to train the attack. The main issue here is the size of the action space. This size prevents us from using DQN (which would require a network with as many outputs as the number of possible actions), but also from using policy-gradient and actor-critic methods that parametrize randomized policies, such as TRPO (we simply cannot maintain distributions over actions). This leads us to use DDPG, an actor-critic method that parametrizes deterministic policies. In DDPG, we consider attack policies parametrized by θ (α_θ is the policy parametrized by θ), and an approximate Q-value function Q_w parametrized by w. We also maintain two corresponding target networks. The objective function to maximize is J(θ) = E_{s∼p0}[Ṽ^{α_θ}(s)]. θ is updated using the following gradient [19]:

∇_θ J(θ) ≈ E_{s∼ρ^β}[ ∇_θ α_θ(s) ∇_{s̃} Q_w(s, s̃) |_{s̃=α_θ(s)} ],   (3)

where β corresponds to α_θ with additional exploration noise, and ρ^β denotes the discounted state visitation distribution under the policy β. In the case where the adversary’s objective is to minimize the average reward of the agent, in view of (2), the above gradient becomes:

∇_θ J(θ) ≈ −E_{s∼ρ^β}[ ∇_θ α_θ(s) ∇_{s̃} Q^{π∘α_θ}(s, π(s̃)) |_{s̃=α_θ(s)} ].   (4)
Finally, to ensure that the perturbation computed by α_θ belongs to B(s, ε), we add a last fixed layer to the network that projects onto the ball centred at s with radius ε. This is done by applying a norm-limiting function to x + w, where x is the output before the projection layer and w is the exploration noise. After this projection, we add the state s to get s̃.
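One standard choice for such a projection layer is to rescale the perturbation whenever its norm exceeds ε; the sketch below uses that choice (the original text does not name the exact function, so this is an assumption for illustration).

```python
# Projection of the actor output (plus exploration noise) onto the
# eps-ball around the true state, as a final fixed "layer".

import numpy as np

def project_perturbation(x, eps):
    """Scale x so that ||x|| <= eps, leaving it unchanged if already inside."""
    norm = np.linalg.norm(x)
    return x if norm <= eps else (eps / norm) * x

def perturbed_state(s, actor_output, noise, eps):
    # s~ = s + projection(actor output + exploration noise)
    return s + project_perturbation(actor_output + noise, eps)
```

Because the projection is a fixed, differentiable-almost-everywhere map, changing ε at test time only changes this last layer, which is why the trained attack can be reused for other perturbation amplitudes without retraining (Section 6).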

The pseudo-code of the algorithm is provided in Algorithm 1 in the Appendix. Algorithm 2 presents a version of the algorithm using the gradient (4). A diagram of the actor and critic networks are also provided in Figure 8 of the Appendix.

Gradient-based exploration. In case the adversary knows the policy π of the main agent, we can leverage this knowledge to tune the exploration noise. Similarly to gradient methods [8, 9], we suggest using the quantity ∇_s L to improve the exploration process, where L is the cross-entropy loss between π(·|s) and a one-hot vector that encodes the action with minimum probability in state s. This allows exploring directions that might minimize the value of the unperturbed policy, which boosts the rate at which the optimal attack is learnt. At time t, the exploration noise could be set to the convex combination (1 − β) w_t + β g(∇_s L) of a white exploration noise w_t and the gradient, where g is a normalizing function. Following this gradient introduces bias, hence we randomly choose when to use the white noise or the gradient direction (more details can be found in the Appendix).

5 Robustness of Policies

In this section, we briefly discuss which training methods should lead to agent policies more resilient to attacks. We first quantify the maximal impact of an attack on the cumulative reward of the agent policy, and show that it is connected to the smoothness of the agent policy. This connection indicates that the smoother the policy is, the more resilient it is to attacks. Next, we show that from the agent perspective, an attack induces a POMDP. This suggests that training the agent policy using Deep RL algorithms tailored to POMDPs yields more resilient policies.

Impact of an attack and policy smoothness. When the agent policy π is attacked using α, one may compute the cumulative reward gathered by the agent. Indeed, the system evolves as if the agent policy were π∘α. The corresponding value function satisfies the Bellman equation: V^{π∘α}(s) = ∑_a π(a|α(s)) [ r(s, a) + λ ∑_{s'} P(s'|s, a) V^{π∘α}(s') ]. Starting from this observation, we derive an upper bound on the impact of an attack (see the Appendix for a proof):

Proposition 5.1.

For any s, let Δ(s) = max_{s̃ ∈ B(s, ε)} ‖π(·|s) − π(·|s̃)‖_TV (here ‖·‖_TV is the Total Variation distance, and for any f, ‖f‖_∞ = max_s |f(s)|). We have:

‖V^π − V^{π∘α}‖_∞ ≤ (2λ / (1 − λ)²) max_s Δ(s).

Assume now that the agent policy is smooth in the sense that, for all s and s̃, ‖π(·|s) − π(·|s̃)‖_TV ≤ L d(s, s̃); then

‖V^π − V^{π∘α}‖_∞ ≤ 2λLε / (1 − λ)².
In the above, Δ(s) quantifies the potential impact of applying a feasible perturbation to state s on the distribution of actions selected by the agent in this state (note that it decreases to 0 as ε goes to 0). The proposition establishes the intuitive fact that when the agent policy is smooth (i.e., varies smoothly with the input), it should be more resilient to attacks. Indeed, in our experiments, policies trained using Deep RL methods known to lead to smooth policies (e.g. DDPG for RL problems with continuous state and action spaces) resist attacks better.
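The quantity Δ(s) is directly computable for tabular policies. A minimal sketch with made-up action distributions illustrates why a smooth policy yields a small bound while a brittle one does not.

```python
# Delta(s): largest total variation distance between the action distribution
# at s and at any feasible perturbed state. Distributions below are invented.

def tv_distance(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def delta(pi_of, s, neighbours):
    return max(tv_distance(pi_of(s), pi_of(sp)) for sp in neighbours)

# A "smooth" policy barely changes across neighbouring states...
smooth = {0: [0.50, 0.50], 1: [0.52, 0.48], 2: [0.48, 0.52]}
# ...while a brittle one flips entirely under a one-step perturbation.
brittle = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 0.0]}

print(delta(smooth.get, 0, [1, 2]))   # 0.02
print(delta(brittle.get, 0, [1, 2]))  # 1.0
```

Plugging these values into the proposition's bound, the smooth policy's value can shift by at most a small constant times 0.02, whereas the brittle policy admits the worst-case degradation.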

Induced POMDP and DRQN. When the agent policy π is under the attack α, the agent perceives the true state s only through its perturbation α(s). More precisely, the agent observes the perturbed state only (in general, the observation could also be drawn from a distribution over states if the attack is randomized). It is known [20, 21, 16] that solving POMDPs requires "memory", i.e., policies whose decisions depend on past observations (non-Markovian policies). Hence, we expect that training the agent policy using Deep RL algorithms tailored to POMDPs produces more resilient policies. The authors of [11] empirically demonstrate that recurrent controllers have a certain degree of robustness against missing information, even when trained with full state information. We use this to proactively extract robust features, not biased by a specific adversary policy α, as in adversarial training. This is confirmed in our experiments, comparing the impact of attacks on policies trained by DRQN to those trained using Deep RL algorithms without recurrent structure (without memory).
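The value of memory under observation attacks can be illustrated without a recurrent network: a policy that conditions on the last k observations, rather than only the current one, can average out a single-step perturbation. The wrapper and toy policy below are invented stand-ins in the spirit of DRQN-style controllers, not the paper's architecture.

```python
# Minimal observation-history wrapper: the underlying policy sees the last k
# (possibly perturbed) observations instead of only the current one.

from collections import deque

class HistoryPolicy:
    def __init__(self, policy_on_history, k=4):
        self.history = deque(maxlen=k)
        self.policy_on_history = policy_on_history

    def reset(self):
        self.history.clear()

    def act(self, observation):
        self.history.append(observation)
        return self.policy_on_history(list(self.history))

# Toy policy: average the recent observations to smooth out a one-step attack.
avg_policy = HistoryPolicy(lambda h: sum(h) / len(h), k=4)
for obs in [1.0, 1.0, 5.0, 1.0]:      # 5.0 is an adversarial outlier
    a = avg_policy.act(obs)
print(a)  # 2.0: the outlier is diluted by memory
```

A memory-less policy acting on the perturbed observation 5.0 alone would have been fully misled at that step; the history-based policy's output stays close to the clean value, mirroring the resilience of DRQN policies observed in the experiments.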

6 Experimental Evaluation

We evaluate our attacks on four OpenAI Gym [10] environments (A: discrete MountainCar, B: Cartpole, C: continuous MountainCar, and D: continuous LunarLander), and on E: Pong, an Atari 2600 game in the Arcade Learning Environment [22]. In the Appendix, we also present the results of our attacks on the toy example of the Grid World problem, to illustrate why gradient-based attacks are sub-optimal.

Agent’s policies. The agent policies are trained using DQN or DRQN in the case of discrete action spaces (environments A, B, and E), and using DDPG when the action space is continuous (environments C and D). In each environment and for each training algorithm, we have obtained 3 policies for DQN and 1 for DRQN (each achieving at least 95% of the performance of the best policy reported in OpenAI Gym).

Adversarial attack. For each environment A, B, C, and D, we train one adversarial attack using DDPG against one of the agent’s policies, for a fixed perturbation constraint ε (we use the same normalization method as in [8]). For the results presented below, we use uniformly distributed noise; results with gradient-based noise are discussed in the Appendix. To get attacks for other values of ε, we do not retrain our attack but only change the projection layer in our network. For environment E, we represent the state as a 4-dimensional feature vector, and train the attack using DQN in the feature space. The full experimental setup is presented in the Appendix.

6.1 Optimal vs. gradient-based attacks

We compare our attacks to the gradient-based attacks proposed in [9] (we found that these are the most efficient among gradient-based attacks). To test a particular attack, we run it against the 3 agent policies, using 10 random seeds (e.g. involved in the state initialization) and 30 episodes. We hence generate up to 1800 episodes, and average the cumulative reward of the agent.

In Figure 1, we plot the performance loss for different perturbation amplitudes ε in the 4 environments. Observe that the optimal attack consistently outperforms the gradient-based attack, and significantly impacts the performance of the agent. Also note that, since we have trained our attack for a single value of ε, this attack appears to generalize well to other values of ε. In the Appendix, we present similar plots not averaged over the 3 agent policies, and we do not see any real difference, suggesting that attacks generalize well to different agent policies.

From Figure 1, for environments with discrete action spaces (the top two sets of curves), the resilience of policies trained by DRQN is confirmed: these policies are harder to attack.

Policies trained using DDPG for environments with continuous action spaces (the bottom two sets of curves in Figure 1) are more difficult to perturb than those trained in environments with discrete action spaces. The gradient-based attack does not seem to be efficient at all, at least in the range of attack amplitudes considered. In the case of LunarLander, some state components were not normalized, which may explain why a bigger value of ε is needed to see a major performance loss. Our optimal attack performs better, but its impact is not as significant as in environments with discrete action spaces: for instance, for discrete and continuous MountainCar, our attack with the same amplitude decreases the performance of the agent by 30% and 13%, respectively.

Figure 1: Average performance loss vs. attack amplitude ε in the various environments: (top-left) discrete MountainCar, (top-right) Cartpole, (bottom-left) continuous MountainCar, (bottom-right) LunarLander. Attacks are against policies trained using DQN, DRQN, or DDPG. The attack policy has been trained for a single value of ε only.
Figure 2: Game of Pong. (Left part) Images and corresponding agent’s action distributions before and after the attack. (Right part) Average agent’s reward at different phases of the attack training.

6.2 Attack on Pong, based on feature extraction

We consider the game of Pong, where the state is an 84×84-pixel image. The agent’s policies were trained using DQN, with the successive images as inputs. As for the attack, we extracted a 4-dimensional feature vector fully representing the state: this vector encodes the position of the centre of mass of the three relevant objects in the image (the ball and the two bars). Extracting the features is done using a high-pass filter to detect the contours. A 4-dimensional vector suffices since the two bars each have a single degree of freedom, and the ball has two.

We limit the amplitude of the attack in the feature space directly. More precisely, we impose that the adversary changes only one of the 4 components of the feature vector, and by one pixel only. For example, the adversary can move one bar up by one pixel. The amplitude of the resulting attack in the true state space (full images) corresponds to a very small perturbation ε.

The attack is trained using DQN in the feature space. In Figure 2 (left part), we present a few true and perturbed images, and below them the corresponding distributions of the actions selected by the agent without and with the attack (the probabilities are calculated from the output of the network using a softmax function with temperature). Moving objects by one pixel can perturb the agent’s policy considerably. In Figure 2 (right part), we show the evolution of the performance of the agent’s policy under our attack during the training of the latter. Initially, the agent’s policy achieves the maximum score, but after 500 episodes of training the attack, the agent’s score becomes close to the lowest possible score in this game (-20). This score can be reached in even fewer episodes when tuning the training parameters. The gradient-based attacks performed in [8] cannot be directly compared to ours, since these attacks change frames at every time step, while ours modifies one frame only. They also modify the values of every pixel in the frames (using 32-bit precision over the reals [0,1]), whereas we change the values of just a few pixels (using only 8-bit precision over the integers [0,255]).

7 Conclusion

In this paper, we have formulated the problem of devising an optimal black-box attack that does not require access to the underlying policy of the main agent. Previous attacks, such as FGSM-based ones [8], make use of white-box assumptions, i.e., knowing the action-value function Q, or the policy, of the main agent. In our formulation, we do not assume this knowledge. Deriving an optimal attack is important in order to understand how to build RL policies robust to adversarial perturbations. The problem has been formulated as a Markov Decision Process, where the goal of the adversary is encapsulated in the adversarial reward function. Since the problem becomes intractable when we step out of toy problems, we propose a variation of DDPG to compute the optimal attack. In the white-box case, instead of using FGSM, we propose to use a gradient-based method to improve exploration. Adapting to such attacks requires solving a Partially Observable MDP, hence we have to resort to non-Markovian policies [16]; in deep learning, this can be done by adopting recurrent layers, as in DRQN [11]. We also show that Lipschitz policies may have better robustness properties. We validated our algorithm on different environments. In all cases, we found that our attack outperforms gradient methods. This is more evident in discrete action spaces, whilst in continuous action spaces policies are more difficult to perturb for small values of ε, which may be explained by the bound we provide for Lipschitz policies. Finally, policies that use memory seem to be more robust in general.


  • [1] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. NIPS Deep Learning Workshop, 2013.
  • [2] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928–1937, 2016.
  • [3] David Silver, Aja Huang, Chris J. Maddison, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • [4] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of go without human knowledge. Nature, 550(7676):354–359, 2017.
  • [5] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. International Conference on Learning Representations, San Juan, Puerto Rico, 2016.
  • [6] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889–1897, 2015.
  • [7] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
  • [8] Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. Workshop Track of the 5th International Conference on Learning Representations, Toulon, France, 2017.
  • [9] Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary. Robust deep reinforcement learning with adversarial attacks. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, pages 2040–2042. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
  • [10] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
  • [11] Matthew Hausknecht and Peter Stone. Deep recurrent Q-learning for partially observable MDPs. In 2015 AAAI Fall Symposium Series, 2015.
  • [12] Vahid Behzadan and Arslan Munir. Vulnerability of deep reinforcement learning to policy induction attacks. In International Conference on Machine Learning and Data Mining in Pattern Recognition, pages 262–275. Springer, 2017.
  • [13] Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, and Min Sun. Tactics of adversarial attack on deep reinforcement learning agents. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017.
  • [14] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. International Conference on Learning Representations, 2015.
  • [15] Jernej Kos and Dawn Song. Delving into adversarial attacks on deep policies. Workshop Track of the 5th International Conference on Learning Representations, Toulon, France, 2017.
  • [16] Satinder P Singh, Tommi Jaakkola, and Michael I Jordan. Learning without state-estimation in partially observable Markovian decision processes. In Machine Learning Proceedings 1994, pages 284–292. Elsevier, 1994.
  • [17] Jun Morimoto and Kenji Doya. Robust reinforcement learning. Neural computation, 17(2):335–359, 2005.
  • [18] Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2817–2826. JMLR. org, 2017.
  • [19] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
  • [20] Matthijs TJ Spaan. Partially observable Markov decision processes. In Reinforcement Learning, pages 387–414. Springer, 2012.
  • [21] Karl J Astrom. Optimal control of Markov processes with incomplete state information. Journal of Mathematical Analysis and Applications, 10(1):174–205, 1965.
  • [22] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
  • [23] Matthias Plappert. keras-rl, 2016.

8 Other experimental results

Gridworld. We use a gridworld as a toy example. The goal of the game is to reach the top-left or bottom-right corner in the fewest number of steps, with a reward received for each step spent in the game. We implemented both the gradient methods inspired by Huang et al. [8] and Pattanaik et al. [9], and compared them with our proposed method for finding the optimal perturbation policy. The implementations of those algorithms are shown in Algorithms 3 and 4. Figure 3 shows the undiscounted value of the various attacks: the worst attack defined in [9] does not achieve the values achieved by the optimal perturbation.

Figure 3: Undiscounted value of the perturbed policy of the main agent. Left: our proposed attack; middle: the attack in [8]; right: the attack in [9].

OpenAI Gym Environments. Figure 4 presents the results for the discrete MountainCar environment (environment A). We plot the distribution of the performance loss (in %) due to the attack, when the agent policies have been trained using DQN (top) or DRQN (bottom).

Figure 4: Performance loss distribution under the optimal and gradient-based attacks for the discrete MountainCar environment. The attacks are against policies trained using DQN (top) and DRQN (bottom).

In Figures 5–8 we show the performance loss distribution, and in Figure 9 the average performance loss for different attack amplitudes, all computed when the adversary attacks the agent's policy it was trained on. Additionally, for environments B, C, and D, we show the loss distribution averaged across all the main agent's policies.

Figure 5: MountainCar environment - distribution of the average discounted reward performance decrease, with respect to the main agent's policy the attacker was trained on.
Figure 6: Cartpole environment - distribution of the average discounted reward performance decrease for . On the left we show the distribution with respect to the main agent’s policy the attacker was trained on, and on the right an average across all the main agent’s policies.
Figure 7: Continuous MountainCar environment - distribution of the average discounted reward performance decrease for . On the left we show the distribution with respect to the main agent’s policy the attacker was trained on, and on the right an average across all the main agent’s policies.
Figure 8: LunarLander environment - distribution of the average discounted reward performance decrease for . On the left we show the distribution with respect to the main agent’s policy the attacker was trained on, and on the right an average across all the main agent’s policies.

Figure 9: Average performance loss vs. attack amplitude in the various environments, when the adversary attacks the main agent's policy it was trained on. (top-left) discrete MountainCar, (top-right) Cartpole, (bottom-left) continuous MountainCar, (bottom-right) LunarLander. Attacks are against policies obtained using DQN, DRQN, or DDPG.

Gradient-based exploration. Gradient-based exploration tries to exploit knowledge of the unperturbed policy of the main agent. Care must be taken to reduce bias and to avoid exploring like an on-policy method: if we followed the gradient information without restriction, we would end up in a sub-optimal solution that merely minimizes the value of the unperturbed policy. The exploration noise is given by the following equation:


where is a normalizing function such as , or , is a Bernoulli random variable with , and . is the cross-entropy loss between and a one-hot vector that encodes the action with minimum probability in state . If is not known, but the action-value function is known, we can compute by applying the softmax function on the action-values of a particular state .

Reward statistics (mean, std, min, max) under no attack, FGM, uniform exploration, and gradient-based exploration (values shown in Figure 10, right).

Figure 10: MountainCar environment. Left: average episode loss when exploring with the gradient-based method vs. uniform noise exploration. Right: episode reward statistics after training.

The gradient of the cross-entropy loss gives the direction that increases the probability of taking the minimum-value action when the policy is unperturbed. The Bernoulli random variable, instead, is necessary in order to reduce the bias induced by following this gradient, while its parameter quantifies how strongly the agent prefers the optimal action in the current state. Intuitively, if the policy is optimal, the lower this preference is, the less impact the optimal action has in that state. This guarantees that we follow the gradient mostly when we are in critical states.

We tested this type of exploration on MountainCar, with a fixed temperature for the softmax function. The exploration noise is drawn from a uniform distribution with the parameters shown in Table 3 (MountainCar). In Figure 10 we compare gradient-based exploration with uniform noise exploration: on the left, the low-pass filtered loss during training of the attack; on the right, episode reward statistics after training. Results in the table were averaged across different seeds, with a fixed number of episodes per seed. We see an improvement in performance, although it depends strongly on the choice of parameters.
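As an illustration, the exploration scheme described above can be sketched for a linear softmax policy, where the cross-entropy gradient has a closed form. The mixing probability `p_mix` and the linear parameterization `W` are assumptions made for this sketch, not the paper's exact choices.

```python
import numpy as np

def softmax(z, temp=1.0):
    """Numerically stable softmax with temperature."""
    z = np.asarray(z, dtype=float) / temp
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def gradient_exploration_noise(W, s, eps, p_mix=0.5, rng=None):
    """Sketch of gradient-based exploration noise.

    Assumes a linear softmax policy pi(a|s) = softmax(W @ s): the gradient of
    the cross-entropy loss between pi(.|s) and the one-hot encoding of the
    least likely action then has the closed form W.T @ (pi - onehot).
    With probability p_mix (a Bernoulli draw; the paper's mixing probability
    is not reproduced here) we follow that gradient direction, otherwise we
    fall back to uniform noise in the eps-ball.
    """
    rng = np.random.default_rng() if rng is None else rng
    pi = softmax(W @ s)
    worst = np.argmin(pi)                       # action with minimum probability
    onehot = np.zeros_like(pi)
    onehot[worst] = 1.0
    grad = W.T @ (pi - onehot)                  # d/ds of CE(pi(.|s), onehot)
    if rng.random() < p_mix and np.linalg.norm(grad) > 1e-8:
        direction = grad / np.linalg.norm(grad)  # normalized gradient direction
        return -eps * direction                  # step that raises pi(worst|s)
    return rng.uniform(-eps, eps, size=s.shape)  # uniform fallback
```

With `p_mix = 1` the noise always follows the (normalized) gradient and has norm exactly `eps`; with `p_mix = 0` it degenerates to plain uniform exploration.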

9 Proof of Proposition 5.1




We now proceed by bounding the following quantity:


where in the last line we made use of the following equality . Now, by making use of we get:


At this point, consider a random variable that can take values only in a bounded set, and let its absolute value be bounded. Then, for any two distributions over this support we have:


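Since the inequality itself did not survive extraction, a standard reconstruction (with $f$ the bounded random variable, $|f| \le b$, and $p, q$ the two distributions over the support $S$) is:

```latex
\left| \mathbb{E}_{p}[f] - \mathbb{E}_{q}[f] \right|
  = \Big| \sum_{x \in S} f(x)\,\big(p(x) - q(x)\big) \Big|
  \le b \sum_{x \in S} \big| p(x) - q(x) \big|
  = b \, \| p - q \|_{1}.
```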
By using the previous inequality we can bound the reward difference:


where . The inequality becomes


Now, let , and observe that is uniformly bounded by , and by , thus


Hence, we can deduce the following inequality:


By moving the term to the left-hand side, dividing, and finally taking the supremum, we obtain the result:


Assume now the main agent’s policy is smooth, such that . We can now bound the term as follows:


where denotes the average weighted distance between elements of and , according to the distribution . Equivalently we also have


from which we obtain




10 Algorithms

10.1 Attack training algorithm

In this section we describe the DDPG algorithms discussed in Section 4.3. We denote the projection layer and the output before the projection layer respectively by the symbols introduced there. Algorithm 1 refers to the black-box attack, whilst Algorithm 2 makes use of the main agent's policy, when it is known. Apart from the projection layer, and the change in the gradient step in the white-box case, both algorithms follow the same logic as the usual DDPG algorithm, explained in [5]. Moreover, the projections include a regularization term in order to prevent division by zero.
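A minimal sketch of such a projection layer follows; the norm choice and the regularization constant `reg` are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def project(raw, eps, norm="l2", reg=1e-6):
    """Map the raw actor output onto the eps-ball of the chosen norm.

    The small `reg` term plays the role of the regularization mentioned
    above, preventing division by a (near-)zero norm. This is an
    illustrative implementation, not the paper's exact layer.
    """
    raw = np.asarray(raw, dtype=float)
    if norm == "l2":
        n = np.linalg.norm(raw)
        if n > eps:
            raw = eps * raw / (n + reg)   # rescale onto the l2 eps-ball
        return raw
    if norm == "linf":
        return np.clip(raw, -eps, eps)    # coordinate-wise clipping
    raise ValueError(f"unknown norm: {norm}")
```

In a deep-learning framework the same operation would be implemented with differentiable tensor ops so that gradients flow through the projection during actor updates.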

1:procedure AttackTraining()
2:     Initialize critic and actor network with weights and .
3:     Initialize target critic and actor networks with weights .
4:     Initialize replay buffer .
5:     for episode 1:M do
6:         Initialize environment and observe state , and set initial time .
7:         repeat
8:              Select a perturbation and add exploration noise .
9:              Feed perturbation to main agent and observe reward and state .
10:              Append transition to .
11:              if  then
12:                  Sample random minibatch of elements.
13:                  Set target for the i-th element of to .
14:                  Update critic by minimizing loss .
15:                  Update actor using sampled policy gradient
16:                  Slowly update the target networks:
17:              end if
18:              .
19:         until episode is terminal.
20:     end for
21:end procedure
Algorithm 1 Training adversarial attack
1:procedure AttackTraining2()
2:     Initialize critic network and actor with weights and .
3:     Initialize target critic and actor networks with weights .
4:     Initialize replay buffer .
5:     for episode 1:M do
6:         Initialize environment and observe state .
7:         Set initial time .
8:         repeat
9:              Select a perturbation and add exploration noise .
10:              Feed perturbation to main agent and observe reward and state .
11:              Append transition to , where .
12:              if  then
13:                  Sample random minibatch of elements.
14:                  Set target for the i-th element of to .
15:                  Update critic by minimizing loss .
16:                  Update actor using sampled policy gradient
17:                  Slowly update the target networks:
18:              end if
19:              .
20:         until episode is terminal.
21:     end for
22:end procedure
Algorithm 2 Training adversarial attack 2

10.2 Gradient-based attacks

Gradient-based attacks can be adapted to MDPs by solving the optimization problem these methods are designed for. There are two main techniques, one developed by Huang et al. [8], and the other by Pattanaik et al. [9]: the former tries to minimize the probability of the optimal action, whilst the latter maximizes the probability of taking the worst action, defined as the action that attains the minimum value under the unperturbed policy. Pseudo-code for the former is given in Algorithm 3, and for the latter in Algorithm 4.

1:procedure AttackProcedure1()
2:       Best action in state
4:      for all  do Loop through the -neighbor states
5:            if  then Evaluate if is not optimal in
7:            end if
8:      end for
9:      return
10:end procedure
Algorithm 3 Adversarial Attack - Adaptation of [8].
1:procedure AttackProcedure()
4:      for all  do Loop through the -neighbor states
6:            if  then Evaluate if optimal action in is worse in
9:            end if
10:      end for
11:      return
12:end procedure
Algorithm 4 Adversarial Attack - Adaptation of [9].
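The neighbourhood search of Algorithms 3 and 4 can be sketched as follows. The criterion shown is the Pattanaik-style one (Algorithm 4): among candidate perturbed states, pick the one under which the policy's chosen action has the lowest true value in the original state. The names `q_values_fn` and `candidates` are hypothetical, introduced only for this sketch.

```python
import numpy as np

def gradient_attack(q_values_fn, s, candidates):
    """Sketch of the neighbourhood-search attack of Algorithm 4.

    `q_values_fn` maps a state to a vector of action values; `candidates`
    is an iterable of perturbed states assumed to lie within the eps-ball
    around s (e.g. generated by gradient steps of varying magnitude).
    """
    q_s = q_values_fn(s)
    a_star = np.argmax(q_s)            # best action in the unperturbed state
    best_state, best_value = s, q_s[a_star]
    for s_pert in candidates:
        a_pert = np.argmax(q_values_fn(s_pert))  # action chosen when perturbed
        value = q_s[a_pert]            # its true value in the original state
        if a_pert != a_star and value < best_value:
            best_state, best_value = s_pert, value
    return best_state
```

Algorithm 3 differs only in the selection criterion: it keeps any perturbed state in which the original optimal action is no longer optimal.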

11 Experimental setup

Hardware and software setup.

All experiments were executed on a desktop computer featuring an Intel Xeon Silver 4110 CPU, 48GB of RAM, and a GeForce GTX 1080 graphics card. The machine ran Ubuntu 18.04, together with Tensorflow 1.13.1 and CUDA 10.0.

Deep-learning framework. We set up our experiments within the Keras-RL framework [23], together with OpenAI Gym [10] as interface to the test environments used in this paper.

All the adversarial methods used in this paper, including the gradient attacks and DRQN, were implemented from scratch within the Keras-RL framework.

Neural network architectures.

Depending on the environment we use a different neural network architecture, as shown in Table 1. The network architecture was not optimised, but was chosen to be large enough to allow a fair comparison with gradient methods, since these depend on the dimensionality of the network and may fail for small networks. For all environments we set the frame skip to 4.

| Layer   | MountainCar (DQN) | MountainCar cont. (Actor) | MountainCar cont. (Q-Critic) | LunarLander (Actor) | LunarLander (Q-Critic) | Cartpole (DQN) |
| ------- | ----------------- | ------------------------- | ---------------------------- | ------------------- | ---------------------- | -------------- |
| Layer 1 | FC(512, ReLU)     | FC(400, ReLU)             | FC(400, ReLU)                | FC(400, ReLU)       | FC(400, ReLU)          | FC(256, ReLU)  |
| Layer 2 | FC(256, ReLU)     | FC(300, ReLU)             | FC(300, ReLU)                | FC(300, ReLU)       | FC(300, ReLU)          | FC(256, ReLU)  |
| Layer 3 | FC(64, ReLU)      | FC(1, Tanh)               | FC(1)                        | FC(2, Tanh)         | FC(1)                  | FC(2)          |
| Layer 4 | FC(3)             | -                         | -                            | -                   | -                      | -              |

Table 1: Neural network structure for the policies of the main agent. FC stands for fully connected layer; inside the parentheses are the number of units and the activation function of that layer.

For the adversary, instead, we used a different actor-critic architecture, although we have not tried to optimise it. The model is shown in Figure 11, where the dimensionality of the state space is also indicated. The neural network structure for the game of Pong is the same as the one used in [8], for both the adversary and the main agent.

Figure 11: Diagram of the adversarial agent perturbing observations of the environment. On the right is shown the environment model for the malicious agent, together with the equivalent MDP.

Data collection and preprocessing. Every data point is an average across different seeds, where for each seed we tested an attack against all of the main agent's policies, and each test considers a fixed number of episodes (fewer in the case of FGSM for continuous MountainCar and LunarLander). Table 2 shows the number of policies for the main agent and the number of episodes collected. Density plots were preprocessed using the function seaborn.kdeplot with a Gaussian kernel and a fixed bandwidth (different for LunarLander). Gradient-based exploration data was preprocessed using a first-order acausal Butterworth filter.
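The acausal (zero-phase) first-order Butterworth smoothing can be sketched without scipy as follows; the critical frequency used below is a placeholder, since the paper's value was not recoverable here.

```python
import numpy as np

def butter1_filtfilt(x, fc):
    """Zero-phase (acausal) first-order Butterworth low-pass filter.

    `fc` is the critical frequency as a fraction of the Nyquist frequency
    (placeholder value; the paper's exact setting is not reproduced).
    Implemented via the bilinear transform and a forward-backward pass,
    mimicking scipy.signal.filtfilt with a first-order scipy.signal.butter.
    """
    k = np.tan(np.pi * fc / 2.0)          # prewarped analog cutoff
    b = np.array([k, k]) / (1.0 + k)      # numerator coefficients
    a1 = (k - 1.0) / (1.0 + k)            # denominator coefficient (a0 = 1)

    def _forward(sig):
        y = np.empty_like(sig, dtype=float)
        prev_x, prev_y = sig[0], sig[0]   # steady-state initial condition
        for i, xi in enumerate(sig):
            y[i] = b[0] * xi + b[1] * prev_x - a1 * prev_y
            prev_x, prev_y = xi, y[i]
        return y

    # Filter the reversed signal, reverse, then filter again: zero phase.
    x = np.asarray(x, dtype=float)
    return _forward(_forward(x[::-1])[::-1])
```

Applied to the raw training-loss curves, this removes high-frequency noise without introducing phase lag, which is why the filtered curves in Figure 10 are not shifted relative to the raw data.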

| Algorithm/Environment | MountainCar | MountainCar-Continuous | Cartpole | LunarLander |
| --------------------- | ----------- | ---------------------- | -------- | ----------- |
| DQN                   | 3 (1800)    | -                      | 3 (1800) | -           |
| DRQN                  | 1 (600)     | -                      | 1 (600)  | -           |
| DDPG                  | -           | 3 (1200)               | -        | 3 (1200)    |

Table 2: Number of policies for the main agent for each algorithm/environment pair. In parentheses is shown the total number of episodes taken.

Settings for gradient attacks. For gradient attacks we have always used the same norm as the one used by the optimal attack, and a fixed temperature for the softmax layer. We compute the gradient using the method from Pattanaik et al. [9], which performed better than the one proposed by Huang et al. [8]. In order to speed up the computation we do not sample the attack magnitude from a random distribution, but fix it to its maximum value.

Training procedure and parameters. All the training parameters for the main agent are shown in Table 3. Parameters were chosen based on previous experience, so that the policy trains within a fixed number of training steps. Policies were trained until they were within a small margin of the optimal value referenced in the OpenAI Gym documentation [10], averaged over the last episodes. For the game of Pong we use the same parameters as [8].

Parameters/Environment MountainCar MountainCar LunarLander Cartpole Pong
Action space Discrete Continuous Continuous Discrete Discrete
Reference value
Number of steps
Number of warmup steps
Replay memory size
Discount factor
Learning rate
Optimizer Adam Adam Adam Adam Adam
Gradient clipping Not set Not set
Exploration style -greedy Gaussian noise Gaussian noise -greedy -greedy
Initial exploration rate/ Std (DDPG)
Final exploration rate/ Std (DDPG)
Exploration steps
Batch size