Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning

by Changan Chen, et al.

Mobility in an effective and socially-compliant manner is an essential yet challenging task for robots operating in crowded spaces. Recent works have shown the power of deep reinforcement learning techniques to learn socially cooperative policies. However, their cooperation ability deteriorates as the crowd grows, since they typically relax the problem into a one-way Human-Robot interaction problem. In this work, we want to go beyond first-order Human-Robot interaction and more explicitly model Crowd-Robot Interaction (CRI). We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework. Our model captures the Human-Human interactions occurring in dense crowds that indirectly affect the robot's anticipation capability. Our proposed attentive pooling mechanism learns the collective importance of neighboring humans with respect to their future states. Various experiments demonstrate that our model can anticipate human dynamics and navigate in crowds with time efficiency, outperforming state-of-the-art methods.


I Introduction

With the rapid growth of machine intelligence, robots are envisioned to expand their habitats from isolated environments to social spaces shared with humans. Traditional approaches for robot navigation often consider moving agents as static obstacles [1, 2, 3, 4] or react to them through a one-step lookahead [5, 6, 7], resulting in short-sighted, unsafe and unnatural behaviors. In order to navigate through a dense crowd in a socially compliant manner, robots need to understand human behavior and comply with their cooperative rules [8, 9, 10, 11].

Navigation with social etiquette is a challenging task. Since communication among agents (e.g., humans) is not widely available, robots need to perceive and anticipate the evolution of the crowd, which can involve complex interactions. Research in trajectory prediction has proposed several hand-crafted or data-driven methods to model agent-agent interactions [12, 13, 14, 15]. Nevertheless, the integration of these prediction models into the decision-making process remains challenging.

Earlier works separate prediction and planning into two steps, attempting to identify a safe path after forecasting the future trajectories of the others [16, 17]. However, the probabilistic evolution of a crowd over even a few steps can expand to the entire space in a dense environment, causing the freezing robot problem [18]. To address this issue, a large number of works have focused on obstacle avoidance methods that jointly plan plausible paths for all the decision-makers, in the hope that agents make room for one another cooperatively [18]. Nevertheless, these methods suffer from the stochasticity of neighbors' behaviors as well as high computational cost when applied to densely populated environments.

As an alternative, reinforcement learning frameworks have been used to train computationally efficient policies that implicitly encode the interactions and cooperation among agents. Although significant progress has been made in recent works [19, 20, 21, 22], existing models are still limited in two aspects: i) the collective impact of the crowd is usually modeled by a simplified aggregation of the pairwise interactions, such as a maximin operator [19] or LSTM [22], which may fail to fully represent all the interactions; ii) most methods focus on one-way interactions from humans to the robot, but ignore the interactions within the crowd which could indirectly affect the robot. These limitations degrade the performance of cooperative planning in complex and crowded scenes.

Fig. 1: In this work, we present a method that jointly models Human-Robot and Human-Human interactions for navigation in crowds.

In this work, we address the above issues by going beyond first-order Human-Robot interaction and diving into Crowd-Robot Interaction (CRI). We propose to: (i) rethink Human-Robot pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the reinforcement learning framework. Inspired by [13, 15, 14], our model extracts features for pairwise interactions between the robot and each human and captures the interactions among humans via local maps. Subsequently, we aggregate the interaction features with a self-attention mechanism that infers the relative importance of neighboring humans with respect to their future states. Our proposed model can naturally take into account an arbitrary number of agents, providing a good understanding of crowd behavior for planning. An extensive set of simulation experiments shows that our approach can anticipate crowd dynamics and navigate along time-efficient paths, outperforming state-of-the-art methods. We also demonstrate the effectiveness of our model on a robotic platform in real-world environments. The code of our approach is publicly available.

II Background

II-A Related Work

Earlier works have largely leveraged well-engineered interaction models to enhance social awareness in robot navigation. One pioneering work is the Social Force model [23, 24], which has been successfully applied to autonomous robots in simulation and real-world environments [25, 26, 27]. Another method, the Interacting Gaussian Process (IGP), models the trajectory of each agent as an individual Gaussian Process and proposes an interaction potential term to couple the individual GPs [18, 28, 29]. In multi-agent settings where the same policy is applied to all the agents, reactive methods such as RVO [5] and ORCA [6] seek joint obstacle-avoidance velocities under reciprocal assumptions. The key limitation of these models is that they rely heavily on hand-crafted functions and cannot generalize well to the variety of scenarios that crowd-like cooperation requires.

Another line of work uses imitation learning to obtain socially compliant policies from demonstrations of desired behaviors. Navigation policies that map raw depth images and lidar measurements to actions are developed in [30, 31], respectively, by directly mimicking expert demonstrations. Beyond behavioral cloning, inverse reinforcement learning has been used in [10, 11, 32] to learn the underlying cooperation features from human data with the maximum entropy method. The outcomes of these works depend heavily on the scale and quality of the demonstrations, which are not only resource-consuming to collect but also bound the learned policy by the quality of human effort. In our work, we adopt imitation learning only to warm-start our model training.

Reinforcement Learning (RL) methods have been intensively studied over the last few years and applied to various fields since they began to achieve superior performance in video games [33]. In the field of robot navigation, recent works have used RL to learn sensorimotor policies in static and dynamic environments from raw observations [34, 21] and socially cooperative policies from agent-level state information [19, 20, 22]. To handle a varying number of neighbors, the method reported in [19] extends from the two-agent to the multi-agent case through a maximin operation that picks the best action against the worst-case crowd behavior. A later extension uses an LSTM model to process the state of each neighbor sequentially, in reverse order of distance to the robot [22]. In contrast to these simplifications, we propose a novel neural network model that captures the collective impact of the crowd explicitly.

A variety of deep neural network architectures have been proposed in recent years to improve the modeling of Human-Human interactions, some of which are compared in [35]. The Social LSTM method models each individual with an LSTM and shares the states of neighboring LSTMs through a social pooling module [13], which was later extended into a rich set of variants for improved accuracy and efficiency [36, 15, 37]. Other works model social interactions through spatio-temporal graphs, where an attention model was recently introduced to learn the relative importance of each agent [14]. Building upon these models, our work designs a social attentive pooling module to encode crowd cooperative behaviors in a deep reinforcement learning framework.

II-B Problem Formulation

In this work, we consider a navigation task where a robot moves towards a goal through a crowd of humans. This can be formulated as a sequential decision-making problem in a reinforcement learning framework [19, 20, 22]. For each agent (robot or human), the position p = (p_x, p_y), velocity v = (v_x, v_y) and radius r can be observed by the others. The robot is also aware of its unobservable state, including the goal position p_g and preferred speed v_pref. We assume that the velocity of the robot can be achieved instantly after the action command a_t, i.e. v_t = a_t. Let s_t denote the state of the robot and w_t = [w_t^1, ..., w_t^n] denote the states of the n humans at time t. The joint state for robot navigation is defined as s_t^jn = [s_t, w_t].

The optimal policy, π* : s_t^jn → a_t, is to maximize the expected return:

π*(s_t^jn) = argmax_{a_t} R(s_t^jn, a_t) + γ^(Δt · v_pref) ∫ P(s_{t+Δt}^jn | s_t^jn, a_t) V*(s_{t+Δt}^jn) ds_{t+Δt}^jn,    (1)

V*(s_t^jn) = Σ_{t'=t}^{T} γ^(t' · v_pref) R_{t'}(s_{t'}^jn, π*(s_{t'}^jn)),

where R(s_t^jn, a_t) is the reward received at time t, γ ∈ (0, 1) is a discount factor, V* is the optimal value function, and P(s_{t+Δt}^jn | s_t^jn, a_t) is the transition probability from time t to time t+Δt. The preferred velocity v_pref is used as a normalization term in the discount factor for numerical reasons [19].
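As a concrete illustration, the one-step lookahead implied by this formulation can be sketched in a few lines of Python. Here `value_fn`, `step`, and `reward` are hypothetical stand-ins for the learned value network, a transition model, and the reward function, not the paper's actual implementation:

```python
import math

def select_action(state, actions, value_fn, step, reward,
                  dt=0.25, gamma=0.9, v_pref=1.0):
    """One-step lookahead: pick the action maximizing immediate reward plus
    the discounted value of the propagated next state. The discount exponent
    is normalized by v_pref, as in the paper's formulation."""
    best_a, best_q = None, -math.inf
    for a in actions:
        next_state = step(state, a, dt)  # query a transition model
        q = reward(state, a) + gamma ** (dt * v_pref) * value_fn(next_state)
        if q > best_q:
            best_a, best_q = a, q
    return best_a
```

With a toy 1D state and a value function that rewards proximity to the goal, the lookahead correctly prefers the action that moves toward the goal.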

We follow the formulation of the reward function defined in [19, 20], which awards task accomplishment while penalizing collisions or uncomfortable distances:

R_t(s_t^jn, a_t) = -0.25            if d_t < 0,
                   -0.1 + d_t / 2   else if d_t < 0.2,
                   1                else if p_t = p_g,
                   0                otherwise,

where d_t is the minimum separation distance between the robot and the humans during the time period [t - Δt, t].
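A direct transcription of this piecewise reward into Python might look as follows; the numeric constants are the values commonly used in this line of work (CADRL/SARL) and are easy to swap out:

```python
def reward(d_min, reached_goal, collided):
    """Piecewise reward: penalize collisions and uncomfortable proximity,
    award reaching the goal. d_min is the minimum robot-human separation
    distance during the last time step."""
    if collided:
        return -0.25
    if d_min < 0.2:                  # too close for comfort
        return -0.1 + d_min / 2.0    # milder penalty as distance grows
    if reached_goal:
        return 1.0
    return 0.0
```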

II-C Value Network Training

The value network is trained by the temporal-difference method with standard experience replay and fixed target network techniques [33, 19]. As outlined in Algorithm 1, the model is first initialized with imitation learning using a set of demonstrator experience (lines 1-3), and subsequently refined from experience of interactions (lines 4-14). One distinction from previous works [19, 20] is that the next state in line 7 is obtained by querying the environment instead of assuming a linear motion model, mitigating the issue of unknown system dynamics in the training process. During deployment, the transition probability can be approximated by a trajectory prediction model [12, 13, 15].

1:Initialize value network V with demonstration D
2:Initialize target value network V̂ ← V
3:Initialize experience replay memory E ← D
4:for episode = 1, M do
5:     Initialize random sequence s_0^jn
6:     repeat
7:          Select a_t via ε-greedy policy, execute it, and observe r_t and s_{t+Δt}^jn from the environment
8:          Store tuple (s_t^jn, a_t, r_t, s_{t+Δt}^jn) in E
9:          Sample random minibatch tuples from E
10:          Set target y_i = r_i + γ^(Δt · v_pref) V̂(s_{i+Δt}^jn)
11:          Update value network V by gradient descent on (y_i − V(s_i^jn))²
12:     until terminal state s_t or t ≥ t_max
13:     Update target network V̂ ← V
14:end for
Algorithm 1 Deep V-learning
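The target computation in line 10 of Algorithm 1 can be sketched as follows. `target_value_fn` stands in for the fixed target network V̂, and the tuple layout is an assumption made here for illustration:

```python
def td_targets(minibatch, target_value_fn, gamma=0.9, v_pref=1.0):
    """Compute TD targets y = r + gamma^(dt * v_pref) * V_target(s') for a
    minibatch of (state, reward, next_state, dt, done) tuples. Bootstrapping
    from a separate, periodically synced target network stabilizes training."""
    targets = []
    for s, r, s_next, dt, done in minibatch:
        y = r if done else r + gamma ** (dt * v_pref) * target_value_fn(s_next)
        targets.append(y)
    return targets
```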

To solve problem (1) effectively, the value network model needs to accurately approximate the optimal value function V*, which implicitly encodes the social cooperation among agents. Previous works on this track did not fully model the crowd interactions, which degrades the accuracy of value estimation in densely populated scenes. In the following sections, we present a novel Crowd-Robot Interaction model that can effectively learn to navigate in crowded spaces.

III Approach

When humans walk in a densely populated scene, they cooperate with others by anticipating the behaviors of their neighbors in the vicinity, particularly those who are likely to be involved in some future interactions. This motivates us to design a model that can calculate the relative importance and encode the collective impact of neighboring agents for socially compliant navigation. Inspired by the social pooling [13, 15] and attention models [38, 39, 40, 14, 41, 42], we introduce a socially attentive network that consists of three modules:

  • Interaction module: models the Human-Robot interactions explicitly and encodes the Human-Human interactions through coarse-grained local maps.

  • Pooling module: aggregates the interactions into a fixed-length embedding vector by a self-attention mechanism.

  • Planning module: estimates the value of the joint state of the robot and crowd for social navigation.

In the following subsections, we present the architecture and formulations of each module. The time index is omitted below for simplicity.

Fig. 2: Overview of our method for socially attentive navigation made of 3 modules: Interaction, Pooling, and Planning described in Section III. Interactions between the robot and each human are extracted from the interaction module and subsequently aggregated in the pooling module. The planning module estimates the value of the joint state of the robot and humans for navigation in crowds.

III-A Parameterization

We follow the robot-centric parameterization in [19, 22], where the robot is located at the origin and the x-axis points toward the robot's goal. After transformation, the states of the robot and of each walking human i are:

s = [d_g, v_pref, v_x, v_y, r],
w_i = [p_x, p_y, v_x, v_y, r_i, d_i, r_i + r],

where d_g = ||p_g − p|| is the robot's distance to the goal and d_i = ||p − p_i|| is the robot's distance to neighbor i.
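The robot-centric rotation can be sketched as follows, assuming 2D positions and velocities; the rotation aligns the x-axis with the direction from the robot to its goal:

```python
import math

def to_robot_frame(px, py, gx, gy, vx, vy):
    """Rotate the world frame so the robot sits at the origin with the
    x-axis pointing at its goal (robot-centric parameterization).
    Returns the goal distance and the velocity in the rotated frame."""
    rot = math.atan2(gy - py, gx - px)       # angle of the goal direction
    dg = math.hypot(gx - px, gy - py)        # robot's distance to goal
    vx_r = vx * math.cos(rot) + vy * math.sin(rot)
    vy_r = -vx * math.sin(rot) + vy * math.cos(rot)
    return dg, vx_r, vy_r
```

For a robot at the origin moving straight at a goal two meters away, the transformed velocity points entirely along the new x-axis.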

III-B Interaction Module

Each human has an impact on the robot and is meanwhile influenced by his/her neighboring humans. Explicitly modeling all pairs of interactions among humans leads to O(N²) complexity [14], which is not computationally desirable for a policy that must scale up to dense scenes. We tackle this problem by introducing a pairwise interaction module that explicitly models the Human-Robot interaction while using local maps as a coarse-grained representation of the Human-Human interactions.

Given a neighborhood of size L, we construct an L × L × 3 map tensor M_i centered at each human i to encode the presence and velocities of neighbors, referred to as the local map in Fig. 3:

M_i(a, b, :) = Σ_{j ∈ N_i} 1_{ab}[x_j − x_i, y_j − y_i] · w'_j,

where w'_j = (v_xj, v_yj, 1) is a local state vector for human j, 1_{ab}[·] is an indicator function that equals 1 only if the relative position falls in cell (a, b), and N_i is the set of neighboring humans around person i.
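A minimal sketch of the local map construction using plain Python lists; the grid size and cell side length here are illustrative defaults, not the paper's exact values:

```python
def local_map(center, neighbors, grid=4, cell=1.0):
    """Build the coarse occupancy-and-velocity map around one human:
    a grid x grid x 3 tensor whose cell (a, b) accumulates (vx, vy, 1)
    for every neighbor whose relative position falls inside it."""
    M = [[[0.0, 0.0, 0.0] for _ in range(grid)] for _ in range(grid)]
    half = grid * cell / 2.0
    cx, cy = center
    for (nx, ny, nvx, nvy) in neighbors:
        a = int((nx - cx + half) // cell)    # column index of the cell
        b = int((ny - cy + half) // cell)    # row index of the cell
        if 0 <= a < grid and 0 <= b < grid:  # ignore agents outside the map
            M[a][b][0] += nvx
            M[a][b][1] += nvy
            M[a][b][2] += 1.0                # presence indicator
    return M
```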

We embed the state of human i and the map tensor M_i, together with the state of the robot, into a fixed-length vector e_i using a multi-layer perceptron (MLP):

e_i = φ_e(s, w_i, M_i; W_e),

where φ_e(·) is an embedding function with ReLU activations and W_e denotes the embedding weights.

The embedding vector e_i is fed to a subsequent MLP to obtain the pairwise interaction feature between the robot and person i:

h_i = ψ_h(e_i; W_h),

where ψ_h(·) is a fully-connected layer with ReLU non-linearity and W_h denotes the network weights.

Fig. 3: Illustration of our interaction module. We use a multi-layer perceptron to extract the pairwise interaction feature between the robot and each human i. The impact of the other people on human i is represented by a local map.

III-C Pooling Module

Since the number of surrounding humans can vary dramatically across scenes, we need a model that can map an arbitrary number of inputs into a fixed-size output. Everett et al. [22] proposed to feed the states of all humans into an LSTM [43] sequentially in descending order of their distance to the robot. However, the underlying assumption that the closest neighbors have the strongest influence is not always true: other factors, such as speed and heading, are also essential for correctly estimating the importance of a neighbor. Leveraging recent progress in self-attention mechanisms [38, 40, 44], we propose a social attentive pooling module to learn the relative importance of each neighbor and the collective impact of the crowd in a data-driven fashion.

The interaction embedding e_i is transformed into an attention score α_i as follows:

e_m = (1/n) Σ_{k=1}^{n} e_k,
α_i = ψ_α(e_i, e_m; W_α),

where e_m is a fixed-length embedding vector obtained by mean-pooling all the individual embeddings, ψ_α(·) is an MLP with ReLU activations and W_α denotes its weights.

Given the pairwise interaction vector h_i and the corresponding attention score α_i for each neighbor i, the final representation of the crowd is a weighted linear combination of all the pairs:

c = Σ_{i=1}^{n} softmax(α_i) · h_i.

Fig. 4: Architecture of our pooling module. We use a multi-layer perceptron to compute the attention score for each person from the individual embedding vector together with the mean embedding vector. The final joint representation is a weighted sum of the pairwise interactions.
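The attentive pooling step reduces to a numerically stable softmax over the attention scores followed by a weighted sum; a minimal sketch with plain Python lists:

```python
import math

def attentive_pool(features, scores):
    """Softmax-weighted pooling: convert per-neighbor attention scores into
    weights and return the weighted sum of the pairwise interaction features
    as a fixed-length crowd representation."""
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    crowd = [sum(w * f[d] for w, f in zip(weights, features))
             for d in range(dim)]
    return crowd, weights
```

Equal scores reduce to mean pooling; a dominant score makes the output approach that neighbor's feature vector.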

III-D Planning Module

Based on the compact representation of the crowd c, we build a planning module that estimates the state value v for cooperative planning:

v = f_v(s, c; W_v),

where f_v(·) is an MLP with ReLU activations and W_v denotes its weights.

III-E Implementation Details

The local map is a square grid centered at each human with a fixed cell side length. The hidden units of the embedding, pairwise-interaction, attention and value networks are (150, 100), (100, 50), (100, 100) and (150, 100, 100), respectively.

We implemented the value network model in PyTorch [45] and trained it in minibatches using Adam [46], with separate learning rates for imitation learning and reinforcement learning. The discount factor γ is set to 0.9. The exploration rate of the ε-greedy policy decays linearly from 0.5 to 0.1 over the first training episodes. The RL training took approximately 10 hours to converge on an i7-8700 CPU.

This work assumes holonomic kinematics for the robot, i.e., it can move in any direction. The action space consists of 80 discrete actions: 5 speeds exponentially spaced in (0, v_pref] and 16 headings evenly spaced in [0, 2π).
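A sketch of how such a discrete action set could be constructed; the particular exponential-spacing formula used here is one reasonable choice, not necessarily the authors' exact one:

```python
import math

def build_action_space(v_pref=1.0, n_speeds=5, n_headings=16):
    """Discrete holonomic action space: speeds exponentially spaced in
    (0, v_pref] and headings evenly spaced in [0, 2*pi). Each action is a
    Cartesian velocity (vx, vy)."""
    speeds = [(math.e ** ((i + 1) / n_speeds) - 1) / (math.e - 1) * v_pref
              for i in range(n_speeds)]
    headings = [2 * math.pi * i / n_headings for i in range(n_headings)]
    return [(v * math.cos(h), v * math.sin(h))
            for v in speeds for h in headings]
```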

IV Experiments

IV-A Simulation Setup

We built a simulation environment in Python to simulate robot navigation in crowds. The simulated humans are controlled by ORCA [6], whose parameters are sampled from a Gaussian distribution to introduce behavioral diversity. We use circle-crossing scenarios for both training and testing, where all agents are randomly positioned on a circle with some random perturbation added to their x, y coordinates.

Three existing state-of-the-art methods, ORCA [5], CADRL [19] and LSTM-RL [22], are implemented as baselines. The main distinction between our method and the RL baselines lies in the interaction and pooling modules; we keep the planning module identical for a fair comparison. Note that LSTM-RL in our implementation differs from the original [22] in that we use the joint state instead of the human's observable state as the input of the LSTM unit. We refer to our full model as LM-SARL and to the model without the local map as SARL for ablation experiments.

To fully evaluate the effectiveness of the proposed model, we consider two simulation settings: invisible and visible. The former makes the robot invisible to the humans; as a result, the simulated humans react only to each other and not to the robot. We also removed the penalty on uncomfortable distance from the reward function to eliminate extra factors in collision avoidance. This setting serves as a clean testbed for validating the model's ability to reason about Human-Robot and Human-Human interactions without affecting human behaviors. The latter, visible setting resembles more realistic cases where the robot and humans have mutual impact. Models are evaluated on 500 random test cases in each setting.

IV-B Quantitative Evaluation

IV-B1 Invisible Robot

In the invisible setting, the robot needs to forecast the future trajectories of all humans to avoid collisions. Table I reports the success and collision rates, the average navigation time, and the average discounted cumulative reward over the test experiments.

As expected, the ORCA method fails badly in the invisible setting due to the violation of its reciprocal assumption. Among the reinforcement learning methods, CADRL has the lowest success rate, because its maximin approach can take only a single pairwise interaction into account while ignoring the rest. The frequent failures of CADRL show the necessity for a policy to take all humans into account simultaneously.

As a consequence of directly aggregating the surrounding agents' information, both LSTM-RL and SARL achieve higher success rates. However, LSTM-RL still suffers occasional collisions and timeouts, whereas SARL accomplishes all the test cases. We also observe a dramatic reduction in the average navigation time with SARL. These results demonstrate the advantage of the proposed attentive pooling mechanism in capturing the collective impact of the crowd. The full version of our model, LM-SARL, achieves the best results in the invisible experiments, outperforming SARL in both navigation time and cumulative reward. Though not by a large margin, this improvement indicates the benefit of encoding the interactions among humans.

IV-B2 Visible Robot

We further compare the navigation performance of our models with the baselines in the visible setting, where the robot not only needs to understand the behavior of humans but also to interact with them to obtain high rewards. We define the discomfort frequency as t_disc / T, where t_disc is the total duration during which the separation distance d_t < 0.2 m. To fairly compare the ORCA baseline with the learning methods, we add an extra 0.1 m as a virtual radius of the agent to maintain a comfortable distance to humans for human-aware navigation [9]. The results of all methods are summarized in Table II.
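The discomfort frequency can be computed directly from a per-step trace of minimum separation distances; a minimal sketch assuming a fixed simulation time step:

```python
def discomfort_frequency(distances, dt=0.25, d_comf=0.2):
    """Fraction of navigation time the robot spends uncomfortably close to
    any human: t_disc / T, where t_disc accumulates the time steps in which
    the minimum separation distance drops below d_comf (0.2 m here)."""
    t_disc = sum(dt for d in distances if d < d_comf)
    total = dt * len(distances)
    return t_disc / total if total > 0 else 0.0
```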

Different from the invisible case, which violates the reciprocal assumption, the ORCA policy in the visible setting achieves a very high success rate and never invades the comfort zone of the other humans. However, the ORCA baseline fails to obtain high rewards due to its short-sighted and conservative behavior. As pointed out in [31], tuning ORCA toward an objective function can be a tedious and challenging process compared with learning-based methods.

The reinforcement learning results in the visible setting are similar to the invisible ones, as expected. Our SARL model outperforms the baselines significantly, and LM-SARL shows a further improvement in the final reward. Since the Human-Human interactions are not significant at every instant, their effect on the quantitative results is diluted over episodes. However, we observe clear qualitative improvements, which we discuss in the next section.

Methods Success Collision Time Reward
ORCA [5] 0.43 0.57 10.86 0.054
CADRL [19] 0.78 0.22 10.80 0.222
LSTM-RL [22] 0.95 0.03 11.82 0.279
SARL (Ours) 1.00 0.00 10.55 0.338
LM-SARL (Ours) 1.00 0.00 10.46 0.342
TABLE I: Quantitative results in the invisible setting. “Success”: the rate of robot reaching its goal without a collision. “Collision”: the rate of robot colliding with other humans. “Time”: the robot’s navigation time to reach its goal in seconds. “Reward”: discounted cumulative reward in a navigation task.
Methods Success Collision Time Disc. Reward
ORCA [5] 0.99 0.00* 12.29 0.00* 0.284
CADRL [19] 0.94 0.03 10.82 0.10 0.291
LSTM-RL [22] 0.98 0.02 11.29 0.05 0.299
SARL (Ours) 0.99 0.01 10.58 0.02 0.332
LM-SARL (Ours) 1.00 0.00 10.59 0.03 0.334
TABLE II: Quantitative results in the visible setting. “Disc.” refers to the discomfort frequency (% of duration in which the robot is too close to other humans). (*) Note that ORCA has a “Collision” and “Disc.” of 0 by design.

IV-C Qualitative Evaluation

We further investigate the effectiveness of our model for socially compliant navigation through qualitative analysis. As shown in Fig. 5, the navigation paths of different methods are compared on an invisible test case, where the trajectories of the humans are identical across methods for a clear comparison. When encountering humans in the center of the space, CADRL passes them aggressively. By contrast, LSTM-RL slows down dramatically to avoid the crowd from 4.0s to 8.0s, ending up with a long navigation time. Neither the overly aggressive CADRL nor the conservative LSTM-RL is desirable for robot navigation in a crowd. In comparison to the baselines, our SARL hesitates at first but then recognizes a shorter path to the goal through the center. By taking this shortcut, the robot successfully avoids the other humans. LM-SARL identifies the central passage from the very beginning, producing a path that balances safety distance against navigation time.

(a) CADRL [19]
(b) LSTM-RL [22]
(c) Our SARL
(d) Our LM-SARL
Fig. 5: Trajectory comparison in an invisible test case. Circles are the positions of agents at the labeled times. When encountering humans, CADRL and LSTM-RL demonstrate overly aggressive and conservative behaviors respectively. In contrast, our SARL and LM-SARL successfully identify a shortcut through the center, which allows the robot to keep some distance from others while navigating to the goal quickly.

In addition to the overall trajectory, we take a closer look at the learned policies in a typical crowded frame where the robot is surrounded by multiple humans. Fig. 6 shows the attention scores of the humans inferred by our LM-SARL model. The lowest attention score is assigned to human 4, who is farthest from the robot. Human 5, although located not far from the robot, also receives a low score, as he is walking away from it. In contrast, our model pays more attention to humans 1, 2 and 3, all of whom can potentially influence the robot's path planning. Human 2, being the closest to the robot, naturally obtains a high attention score. However, our model gives the highest score to human 3, who is most likely to come closest to the robot in the next few steps. By assigning distinctive scores to different humans, our attentive pooling module demonstrates a good ability to reason about the relative importance of humans in a dense scene.

Fig. 6: Attention scores in a dense scene. Our LM-SARL assigns low importance scores to humans 4 and 5, who are walking away, while attending with the highest weight to human 3, who is most likely to get close soon.

As the ultimate objective of our model is to accurately estimate the state value for planning, we finally compare the values estimated by the different methods in Fig. 7. Given that humans 1 and 3 are likely to cross the straight path from the robot to the goal, the robot is expected either to step aside or to slow down to avoid them.

Limited by its maximin selection, CADRL predicts low values only in the direction toward the closest human 2, while erroneously assigning the highest values to fast motions straight toward the goal. The LSTM-RL model shifts the action preference slightly to the left but still overestimates the values of high speeds in the directions around the goal.

In contrast, our SARL model predicts distinctly low values for full-speed motions in the dangerous directions toward the crossing humans, leading the robot to slow down to avoid collisions instead of moving fast. Even more interesting patterns can be observed in the LM-SARL estimation, where even low speeds in those directions are discouraged. Considering the social repulsive forces among the crossing humans, one of them might turn toward the robot in the future, raising potential dangers or delays in that direction. By encoding the Human-Human interactions through local maps, LM-SARL instead favors turning actions that pave the way for cutting behind humans 1 and 3. This indicates LM-SARL's potential for reasoning about complex interactions among agents.

(a) CADRL [19]
(b) LSTM-RL [22]
(c) Our SARL
(d) Our LM-SARL
Fig. 7: Value estimations by the different methods for the dense scene in Fig. 6. The baseline methods predict high values for high speeds straight toward the goal, which is dangerous because of humans 1 and 3. In contrast, our SARL slows down and waits safely, and our LM-SARL prefers to turn toward the 200° direction, preparing to pass behind them.

IV-D Real-world Experiments

Aside from the simulation experiments above, we also evaluate the trained policy in real-world experiments on a Segway robotic platform. A video demonstrating the effectiveness of our model for socially compliant navigation can be found in the supplementary material.

V Conclusion

In this work, we tackle the crowd navigation problem by decomposing the Crowd-Robot Interaction into two parts. We first jointly model the Human-Robot and Human-Human interactions and then aggregate the interactions into a compact crowd representation via a self-attention model. Our approach outperforms state-of-the-art reinforcement learning methods in terms of time efficiency and task accomplishment. Qualitatively, we demonstrate our model's ability to reason about Human-Human interactions and to selectively attend to humans.