Designing a Multi-Objective Reward Function for Creating Teams of Robotic Bodyguards Using Deep Reinforcement Learning

01/28/2019, by Hassam Ullah Sheikh et al.

We are considering a scenario where a team of bodyguard robots provides physical protection to a VIP in a crowded public space. We use deep reinforcement learning to learn the policy to be followed by the robots. As the robot bodyguards need to follow several difficult-to-reconcile goals, we study several primitive and composite reward functions and their impact on the overall behavior of the robotic bodyguards.

1 Introduction

Recent progress in the field of autonomous robots makes it feasible for robots to interact with multiple humans in public spaces. In this paper, we are considering a human VIP moving in a crowded market environment who is protected from physical assault by a team of bodyguard robots. The robots must take into account the position and movement of the VIP, the bystanders and other robots. Previous work in similar problems relied on explicitly programmed behaviors.

Recent research in deep reinforcement learning (Levine et al., 2016; Silver et al., 2016) and imitation learning (Rahmatizadeh et al., 2018) applied to robotics has raised the possibility that learning algorithms might produce better behaviors than explicit programming. In this paper we explore deep reinforcement learning approaches for our scenario. We aim to simultaneously learn communication and coordination between the agents together with the task-oriented behavior.

We also aim to develop a general task framework that can generalize to other types of desired behaviors beyond bodyguard protection.

In order to achieve these goals, we need to specify: the environment in which the agents perform, the environment representation in the robot that forms the basis of learning, the reward function that describes the desired behavior, and the reinforcement learning algorithms deployed. For the bodyguard task, the design of the reward function is especially challenging, because it is task specific and needs to reconcile multiple conflicting objectives: maximum protection of the VIP, minimal interference with the crowd, and unobtrusiveness. In Section 4.2 we discuss several reward functions that reflect different aspects of the desired behavior.

We describe several experiments using the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm over several choices of reward function. We found that the communication penalization reward function better captures the collaborative nature of the scenario and, accordingly, performs best in our experiments.

2 Related Work

The use of robots as bodyguards is related to several different areas of research and has received significant attention lately. Several studies (Klima et al., 2016; Yasuyuki et al., 2015) considered using robots and multi-agent reinforcement learning for security-related tasks such as patrolling and team coordination, placing checkpoints to provide protection against imminent adversaries. Khan et al. (2012) proposed a multi-robot patrolling framework that analyzes the behavior patterns of the soldiers and the robot and generates a patrolling schedule. The control of robots for providing maximal protection to a VIP was studied in (Bhatia et al., 2016), where the concepts of threat vector resolution and quadrant load balancing were introduced.

3 Background

We consider the problem of providing maximal physical protection as a standard reinforcement learning setup, with an agent interacting with the environment in discrete timesteps using real-valued continuous actions $a_t \in \mathbb{R}^N$. At each timestep $t$, the agent receives an observation $x_t$, takes the action $a_t$, and receives a scalar reward $r_t$. In general, the environment can be partially observable, i.e., we may need the entire past history of observation-action pairs to represent the current state, $s_t = (x_1, a_1, \ldots, a_{t-1}, x_t)$. For our problem we assume that the environment is fully observable, so we represent $s_t = x_t$.

The behavior of each agent is represented by its own policy $\pi_\theta$, which takes the state $s_t$ as input and outputs a probability distribution over the actions, i.e., $\pi_\theta(a_t \mid s_t)$. Since the environment is stochastic, we model it as a Markov Decision Process with state space $\mathcal{S}$, action space $\mathcal{A}$, reward function $r(s_t, a_t)$, and transition dynamics $p(s_{t+1} \mid s_t, a_t)$.

The return from state $s_t$ at timestep $t$ is defined as the discounted cumulative reward that the agent accumulates starting from $s_t$, represented as $R_t = \sum_{i=t}^{T} \gamma^{\,i-t}\, r(s_i, a_i)$, where $\gamma \in [0, 1]$ is the discount factor. The goal of reinforcement learning is to find an optimal policy $\pi^*$ that maximizes the expected return from the start state, $J = \mathbb{E}[R_1]$. We denote the discounted state visitation distribution for a policy $\pi$ as $\rho^\pi$.
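As a concrete illustration of the return defined above, the following minimal Python sketch (our own illustrative code; the discount factor value is an assumption) computes the discounted cumulative reward for every timestep of a finite-horizon episode:

    import numpy as np

    def discounted_returns(rewards, gamma=0.95):
        """Compute R_t = sum_{i=t}^{T} gamma^(i-t) * r_i for every timestep t."""
        returns = np.zeros(len(rewards))
        running = 0.0
        for t in reversed(range(len(rewards))):
            running = rewards[t] + gamma * running
            returns[t] = running
        return returns

    # Example: a 25-step episode with random (negative) rewards.
    episode_rewards = np.random.uniform(-1.0, 0.0, size=25)
    print(discounted_returns(episode_rewards)[0])  # return from the first state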

3.1 Policy Gradients

Policy gradient methods have been shown to learn the optimal policy in a variety of reinforcement learning tasks. The main idea behind policy gradient methods is that, instead of parameterizing the Q-function and extracting the policy from it, we parameterize the policy directly with parameters $\theta$ and maximize the objective $J(\theta) = \mathbb{E}_{s \sim \rho^\pi,\, a \sim \pi_\theta}[R]$ by taking steps in the direction of $\nabla_\theta J(\theta)$, which is defined as:

$\nabla_\theta J(\theta) = \mathbb{E}_{s \sim \rho^\pi,\, a \sim \pi_\theta}\left[\nabla_\theta \log \pi_\theta(a \mid s)\, Q^\pi(s, a)\right]$

Policy gradient methods are prone to high variance. Several methods (Wu et al., 2018; Schulman et al., 2017) have been shown to reduce this variance by introducing a critic: a Q-function that estimates the quality of the chosen actions and serves as a baseline.
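For concreteness, here is a minimal PyTorch-style sketch of this estimator (ours, not the authors'), where the critic's value acts as the baseline; the tensor shapes and names are assumptions:

    import torch

    def policy_gradient_loss(log_probs, q_values, baselines):
        """Score-function estimator with a baseline:
        grad J ~ E[ grad log pi(a|s) * (Q(s, a) - b(s)) ].
        All arguments are 1-D tensors collected over a batch of (s, a) pairs."""
        advantages = (q_values - baselines).detach()  # treat the critic as a fixed baseline
        return -(log_probs * advantages).mean()       # minimizing this ascends the objective J

    # Example with dummy data (32 sampled state-action pairs):
    loss = policy_gradient_loss(torch.randn(32), torch.randn(32), torch.zeros(32))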

3.2 Deep Deterministic Policy Gradients

Silver et al. (2014) have shown that it is possible to extend the policy gradient framework to deterministic policies $\mu_\theta : \mathcal{S} \rightarrow \mathcal{A}$. In particular, we can write the gradient of the objective as

$\nabla_\theta J(\theta) = \mathbb{E}_{s \sim \rho^\mu}\left[\nabla_\theta\, \mu_\theta(s)\; \nabla_a Q^\mu(s, a)\,\big|_{a = \mu_\theta(s)}\right]$

Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) is an off-policy algorithm that modifies the DPG method of Silver et al. (2014) by approximating both the policy and the critic with deep neural networks. DDPG also uses an experience replay buffer together with target networks to stabilize training.
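A condensed sketch of a single DDPG update follows (our own illustration, not the exact training code used in this paper); the actor/critic call signatures, the discount factor, and the soft-update rate tau are assumptions:

    import torch
    import torch.nn.functional as F

    def ddpg_update(actor, critic, target_actor, target_critic,
                    actor_opt, critic_opt, batch, gamma=0.95, tau=0.01):
        """One DDPG update on a replay-buffer batch of (obs, act, rew, next_obs) tensors."""
        obs, act, rew, next_obs = batch

        # Critic: regress Q(s, a) towards the bootstrapped target from the target networks.
        with torch.no_grad():
            target_q = rew + gamma * target_critic(next_obs, target_actor(next_obs))
        critic_loss = F.mse_loss(critic(obs, act), target_q)
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        # Actor: deterministic policy gradient, i.e. ascend Q(s, mu(s)).
        actor_loss = -critic(obs, actor(obs)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

        # Soft update of the target networks.
        for net, target in ((actor, target_actor), (critic, target_critic)):
            for p, tp in zip(net.parameters(), target.parameters()):
                tp.data.mul_(1.0 - tau).add_(tau * p.data)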

3.3 Multi-Agent Deep Deterministic Policy Gradients

Multi-agent deep deterministic policy gradients (MADDPG) (Lowe et al., 2017) is the extension of DDPG to the multi-agent setting, where each agent has its own policy. The gradient of each policy is written as

$\nabla_{\theta_i} J(\mu_i) = \mathbb{E}_{x, a \sim \mathcal{D}}\left[\nabla_{\theta_i} \mu_i(o_i)\; \nabla_{a_i} Q_i^{\mu}(x, a_1, \ldots, a_N)\,\big|_{a_i = \mu_i(o_i)}\right]$

where $Q_i^{\mu}(x, a_1, \ldots, a_N)$ is a centralized action-value function that takes the actions of all the agents, in addition to the state information $x$, to estimate the Q-value for agent $i$. Since every agent has its own Q-function, the agents may have different action spaces and reward functions. The primary motivation behind MADDPG is that knowing the actions of all the other agents makes the environment stationary from each agent's perspective, even as the other agents' policies change.
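To make the centralized critic concrete, here is a small PyTorch sketch (our own illustration; the network sizes and tensor layout are assumptions, not specifications from the paper):

    import torch
    import torch.nn as nn

    class CentralizedCritic(nn.Module):
        """Q_i(x, a_1, ..., a_N): conditions on every agent's observation and action."""
        def __init__(self, obs_dim, act_dim, n_agents, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_agents * (obs_dim + act_dim), hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1))

        def forward(self, all_obs, all_acts):
            # all_obs: (batch, n_agents, obs_dim), all_acts: (batch, n_agents, act_dim)
            x = torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=-1)
            return self.net(x)

Each agent trains its own instance of such a critic, while only the decentralized policies are needed at execution time.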

4 Problem Formulation

The setting we consider for providing maximal physical protection to the VIP in a crowded environment is a cooperative Markov game, a natural extension of the single-agent MDP to multi-agent systems. A multi-agent MDP is defined by a state space $\mathcal{S}$ that describes all possible configurations of all the agents and a collection of action spaces $\mathcal{A}_1, \ldots, \mathcal{A}_N$, one for every agent $i$. The transitions are defined as $\mathcal{T} : \mathcal{S} \times \mathcal{A}_1 \times \cdots \times \mathcal{A}_N \rightarrow \mathcal{S}$. For each agent $i$, the reward function is defined as $r_i : \mathcal{S} \times \mathcal{A}_i \rightarrow \mathbb{R}$.

In this problem, we assume that all bodyguards have the same state space and follow an identical policy. We consider a finite-horizon problem in which each episode is terminated after T steps. Since this is a cooperative setting, the goal of all the agents is to find individual policies that maximize the jointly collected payoff.

4.1 The Environment Model

For the emergence of interesting behaviors in a multi-agent setting, grounded communication in a physical environment is considered to be a crucial component. For our experiments, we used the Multi-Agent Particle Environment (Mordatch & Abbeel, 2017), a two-dimensional physically simulated environment with discrete time and continuous space. The environment consists of N agents and M landmarks, both possessing physical attributes such as location, velocity and size. Agents can act and move independently with their own policies.

In addition to performing physical actions in the environment, the agents can utter verbal symbols over a communication channel at every timestep. The utterances are symbolic in nature and do not carry any predefined meaning. At each timestep, every agent utters a categorical variable that is observed by every other agent, and it is up to the agents to infer a meaning for these symbols during training. Every utterance carries a small penalty, and an agent can decide not to utter at a given timestep. We denote the utterance, which is a one-hot vector, by $c$.

The complete observation of the environment is given by the physical state of all the entities and the verbal utterances of all the agents. Formally, the state of agent $i$ is defined as $s_i = [o_{i,1}, \ldots, o_{i,K},\; c_1, \ldots, c_N]$, where $o_{i,j}$ is the observation of entity $j$ from the perspective of agent $i$ and $c_j$ is the verbal utterance of agent $j$.
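The sketch below illustrates how such a per-agent state could be assembled in Python; the entity attributes (position, velocity) and the relative encoding are our assumptions based on the description of the particle environment:

    import numpy as np

    def build_state(agent, entities, utterances):
        """Concatenate the physical observations o_{i,j} of every entity, expressed
        relative to agent i, with the one-hot utterances c_j of all agents."""
        physical = [np.concatenate([e["position"] - agent["position"], e["velocity"]])
                    for e in entities]
        return np.concatenate(physical + list(utterances))

    # Example with one other entity and two agents uttering from a 3-symbol vocabulary.
    me = {"position": np.zeros(2), "velocity": np.zeros(2)}
    other = {"position": np.array([1.0, 0.5]), "velocity": np.array([0.0, -0.1])}
    state = build_state(me, [other], [np.array([1, 0, 0]), np.array([0, 0, 1])])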

4.2 Reward Functions

Bhatia et al. (2016) defined a metric that quantifies the threat to the VIP from each crowd member $b$ at each timestep $t$. This metric can be extended into a reward function. Since the threat level metric is a probability, the threat to the VIP is maximal, i.e., 1, when the distance between the VIP and the crowd member is 0; conversely, when that distance exceeds the safe distance $d_{safe}$, the threat is 0. We can model this behavior as an exponential decay. Thus the fundamental reward function can be defined as

$r_t = -\,TL_t(\mathrm{VIP})$   (1)

where the overall threat level $TL_t(\mathrm{VIP})$ aggregates the per-member threats

$TL_t(\mathrm{VIP}, b) = e^{-\beta\, d(\mathrm{VIP},\, b)}$ if $d(\mathrm{VIP}, b) \le d_{safe}$, and $0$ otherwise.   (2)

In the following, we derive several reward functions from Equation 1 and explain the motivation behind each.

The baseline Threat-Only Reward Function penalizes each agent with the threat to the VIP at each timestep, as in (Bhatia et al., 2016):

$r^i_t = -\,TL_t(\mathrm{VIP})$   (3)
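A minimal Python sketch of the threat level and the threat-only reward follows; the decay rate beta and the aggregation over bystanders (here, the maximum per-member threat) are our assumptions rather than values from (Bhatia et al., 2016):

    import numpy as np

    def threat_level(d, d_safe=1.0, beta=3.0):
        """Exponential-decay threat from a single crowd member: 1 when the member
        touches the VIP, 0 once the member is beyond the safe distance d_safe."""
        return float(np.exp(-beta * d)) if d <= d_safe else 0.0

    def threat_only_reward(vip_pos, bystander_positions, d_safe=1.0):
        """Threat-only reward (Equation 3): penalize every bodyguard with the
        current threat to the VIP."""
        threats = [threat_level(np.linalg.norm(vip_pos - b), d_safe)
                   for b in bystander_positions]
        return -max(threats, default=0.0)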

The Binary Threat Reward Function penalizes each agent for the threat with a negative binary reward; in addition, each agent is also penalized for not maintaining a suitable distance from the VIP:

$r^{threat}_t = -\,\mathbb{1}\!\left[TL_t(\mathrm{VIP}) > 0\right]$   (4)

$r^{dist}_t = -\,\mathbb{1}\!\left[d(i, \mathrm{VIP}) < d_{min} \;\text{or}\; d(i, \mathrm{VIP}) > d_{safe}\right]$   (5)

where $d_{min}$ is the minimum distance the bodyguard has to maintain from the VIP and $d_{safe}$ is the safe distance. The final reward function is represented as

$r^i_t = r^{threat}_t + r^{dist}_t$   (6)

The Composite Reward Function is the composition of the threat-only reward function and the penalty for not maintaining a suitable distance from the VIP:

$r^i_t = -\,TL_t(\mathrm{VIP}) + r^{dist}_t$   (7)

The Communication Penalization Reward Function augments the composite reward by adding a small penalty every time the bodyguard performs an utterance, as recommended in  (Mordatch & Abbeel, 2017).
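The sketch below (our own illustrative Python; the distance thresholds, the utterance cost, and the exact functional forms are placeholder assumptions) shows how the binary, composite, and communication-penalization variants build on the threat-only reward defined above:

    def distance_penalty(dist_to_vip, d_min=0.3, d_safe=1.0):
        """-1 when the bodyguard is closer than d_min or farther than d_safe from the VIP."""
        return -1.0 if (dist_to_vip < d_min or dist_to_vip > d_safe) else 0.0

    def binary_threat_reward(threat, dist_to_vip):
        """Binary variant (Equation 6): -1 whenever any threat is present, plus the distance penalty."""
        return (-1.0 if threat > 0.0 else 0.0) + distance_penalty(dist_to_vip)

    def composite_reward(threat, dist_to_vip):
        """Composite variant (Equation 7): continuous threat penalty plus the distance penalty."""
        return -threat + distance_penalty(dist_to_vip)

    def communication_penalized_reward(threat, dist_to_vip, uttered, utter_cost=0.01):
        """Communication-penalization variant: composite reward minus a small cost per utterance."""
        return composite_reward(threat, dist_to_vip) - (utter_cost if uttered else 0.0)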

5 Experiments

We performed our experiments using the Multi-Agent Particle Environment (MPE) (Mordatch & Abbeel, 2017). The performance was measured using the threat metric defined in (Bhatia et al., 2016) over one episode. The experiments were performed with the number of bodyguards ranging from 2 to 4 and a constant number of 10 bystanders. For all the experiments, we trained the agents for 10,000 episodes, limiting the length of each episode to 25 steps.
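For orientation, a schematic training loop for these experiments is sketched below (the episode count, episode length, and team sizes are from the text; the environment and agent interfaces are assumed placeholders, not the actual MPE or MADDPG code):

    # Quantities taken from the text; everything below them is an assumed interface.
    N_BODYGUARDS = 4          # varied between 2 and 4
    N_BYSTANDERS = 10
    N_EPISODES = 10_000
    MAX_EPISODE_LEN = 25

    def train(env, agents):
        """Generic MADDPG-style training loop sketch. `env` is assumed to expose
        reset() / step(actions) returning per-agent observations and rewards."""
        for episode in range(N_EPISODES):
            obs = env.reset()
            for t in range(MAX_EPISODE_LEN):
                actions = [agent.act(o) for agent, o in zip(agents, obs)]
                next_obs, rewards, done = env.step(actions)
                for i, agent in enumerate(agents):
                    agent.store(obs, actions, rewards[i], next_obs)  # shared replay view
                    agent.update()   # centralized-critic update as in Section 3.3
                obs = next_obs
                if done:
                    break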

Figure 1 shows examples of the resulting bodyguard behavior for the composite reward function (left) and the threat-only reward function (right). Notice that for the threat-only behavior, the bodyguards are not in close proximity to the VIP: they have found ways to keep the threat low by “attacking” the crowd.

Figure 1: The emerging collaborative bodyguard behavior, using the composite reward function from Equation 7 (left) and the threat-only function from Equation 3 (right). The VIP is brown, bodyguards blue, bystanders red and landmarks grey.
Figure 2: The overall threat level achieved by agents trained using 4 different reward functions (threat only, composite, binary threat and communication penalty). The scenario involved 4 agents and 10 bystanders.

Figure 2 shows the threat levels obtained with the different reward functions. The communication penalty function appears to be the clear winner, obtaining the lowest threat level over the course of the scenario.

Acknowledgement: This research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-2-0016. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

References

  • Bhatia et al. (2016) Bhatia, T.S., Solmaz, G., Turgut, D., and Bölöni, L. Controlling the movement of robotic bodyguards for maximal physical protection. In Proc of the 29th International FLAIRS Conference, pp. 380–385, May 2016.
  • Khan et al. (2012) Khan, S. A., Bhatia, T.S., Parker, S., and Bölöni, L. Modeling human-robot interaction for a market patrol task. In Proc. of 25th International FLAIRS Conference, pp. 50–55, May 2012.
  • Levine et al. (2016) Levine, Sergey, Finn, Chelsea, Darrell, Trevor, and Abbeel, Pieter. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(1):1334–1373, January 2016.
  • Lillicrap et al. (2015) Lillicrap, Timothy P., Hunt, Jonathan J., Pritzel, Alexander, Heess, Nicolas, Erez, Tom, Tassa, Yuval, Silver, David, and Wierstra, Daan. Continuous control with deep reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.
  • Lowe et al. (2017) Lowe, Ryan, Wu, Yi, Tamar, Aviv, Harb, Jean, Abbeel, Pieter, and Mordatch, Igor. Multi-agent actor-critic for mixed cooperative-competitive environments. Neural Information Processing Systems (NIPS), 2017.
  • Mordatch & Abbeel (2017) Mordatch, Igor and Abbeel, Pieter. Emergence of grounded compositional language in multi-agent populations. arXiv preprint arXiv:1703.04908, 2017.
  • Rahmatizadeh et al. (2018) Rahmatizadeh, Rouhollah, Abolghasemi, Pooya, Bölöni, Ladislau, and Levine, Sergey. Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration. In International Conference on Robotics and Automation (ICRA), 2018.
  • Klima et al. (2016) Klima, Richard, Tuyls, Karl, and Oliehoek, Frans A. Markov security games: Learning in spatial security problems. In NIPS 2016 Workshop on Learning, Inference and Control of Multi-Agent Systems, 2016.
  • Schulman et al. (2017) Schulman, John, Wolski, Filip, Dhariwal, Prafulla, Radford, Alec, and Klimov, Oleg. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017.
  • Silver et al. (2014) Silver, David, Lever, Guy, Heess, Nicolas, Degris, Thomas, Wierstra, Daan, and Riedmiller, Martin. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, 2014.
  • Silver et al. (2016) Silver, David, Huang, Aja, Maddison, Christopher J., Guez, Arthur, Sifre, Laurent, van den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, Dieleman, Sander, Grewe, Dominik, Nham, John, Kalchbrenner, Nal, Sutskever, Ilya, Lillicrap, Timothy, Leach, Madeleine, Kavukcuoglu, Koray, Graepel, Thore, and Hassabis, Demis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–503, 2016.
  • Wu et al. (2018) Wu, Cathy, Rajeswaran, Aravind, Duan, Yan, Kumar, Vikash, Bayen, Alexandre M, Kakade, Sham, Mordatch, Igor, and Abbeel, Pieter. Variance reduction for policy gradient with action-dependent factorized baselines. In International Conference on Learning Representations, 2018.
  • Yasuyuki et al. (2015) Yasuyuki, S., Hirofumi, O., Tadashi, M., and Maya, H. Cooperative capture by multi-agent using reinforcement learning application for security patrol systems. In 2015 10th Asian Control Conference (ASCC), pp. 1–6, May 2015.