
Probabilistic Crowd GAN: Multimodal Pedestrian Trajectory Prediction using a Graph Vehicle-Pedestrian Attention Network

by   Stuart Eiffert, et al.

Understanding and predicting the intention of pedestrians is essential to enable autonomous vehicles and mobile robots to navigate crowds. This problem becomes increasingly complex when we consider the uncertainty and multimodality of pedestrian motion, as well as the implicit interactions between members of a crowd, including any response to a vehicle. Our approach, Probabilistic Crowd GAN, extends recent work in trajectory prediction, combining Recurrent Neural Networks (RNNs) with Mixture Density Networks (MDNs) to output probabilistic multimodal predictions, from which likely modal paths are found and used for adversarial training. We also propose the use of a Graph Vehicle-Pedestrian Attention Network (GVAT), which models social interactions and allows input of a shared vehicle feature, showing that inclusion of this module leads to improved trajectory prediction both with and without the presence of a vehicle. Through evaluation on various datasets, we demonstrate improvements over existing state-of-the-art methods for trajectory prediction and illustrate how the true multimodal and uncertain nature of crowd interactions can be directly modelled.





1 Introduction

Pedestrian motion prediction is required for the safe and efficient operation of autonomous vehicles and mobile robots in shared pedestrian environments, such as malls and campuses, as shown in Fig. 1. The motion of individual members of a crowd is dependent on the motion of others nearby, including any vehicles, and contains significant uncertainty during interactions.

Figure 1: Motion of detected pedestrians is predicted using our method, Probabilistic Crowd GAN with a Graph Vehicle-Pedestrian Attention Network (PCGAN). Observed trajectories are shown in black. The most likely modal paths of the multimodal probabilistic predictions are shown in green, against ground truth in blue. Predictions use the vehicle's motion as a feature input on the USyd Campus dataset [usydc].

In order to better predict pedestrian motion we need to be able to model this uncertainty, which is often multimodal due to the variety of ways in which individuals can interact and avoid each other. Recent works that aim to capture this multimodal and probabilistic nature of crowd interactions have attempted to do so through repeated sampling of generative models, often using Recurrent Neural Network (RNN) based autoencoders trained as Generative Adversarial Networks (GANs) [gupta2018social, kosaraju2019social]. Due to the nature of adversarial training, where generated trajectories must match the form of the ground truth for comparison by the Discriminator, these methods are limited to generating non-probabilistic outputs. Instead, they require repeated sampling using a random latent variable to identify the true multimodal distribution during inference.

Additionally, in applications involving the use of a single vehicle around pedestrians, such as autonomously navigating a university campus, accurate prediction of nearby pedestrian motion requires inclusion of vehicle-pedestrian interactions in any predictive model. Recent work [kosaraju2019social] has shown that the use of Graph Attention Networks (GATs) [velivckovic2017graph] can improve the modelling of social interactions between pedestrians, as compared to previously used social pooling layers.

Our proposed method, Probabilistic Crowd GAN (PCGAN), allows the direct prediction of probabilistic multimodal outputs during adversarial training. We make use of a Mixture Density Network (MDN) within the GAN's Generator to output a Gaussian mixture model (GMM) for each pedestrian, demonstrating how clustering each component of the GMM finds the likely modal paths, which can then be compared to ground truth trajectories by the GAN's Discriminator. Additionally, we extend the use of GATs for modelling crowd interactions to include heterogeneous interactions between a vehicle and pedestrians, as a Graph Vehicle-Pedestrian Attention Network (GVAT), used for modelling social interactions in our method. We validate our approach on several publicly available real-world datasets of pedestrian crowds, as well as two datasets which include crowd-vehicle interactions.

Our main contributions in this work include:

  • Direct multimodal probabilistic output from a GAN for trajectory prediction.

  • Extension of Graph Attention Networks to include a shared vehicle feature in the pooling mechanism.

  • Improved pedestrian motion prediction both with and without the presence of a single vehicle.

2 Related Work

Pedestrian Trajectory Prediction: Approaches to motion prediction in crowds have tended to focus either on modelling scene-specific motion patterns, through the inclusion of contextual features [lee2017desire] or by learning crowd motion for a specific observed scene [yi2015understanding, zhi2019kernel], or on interactions between individuals. Crowd interactions have been modelled using either hand-crafted methods such as the Social Force Model (SFM) [socialforce], or learnt models of interaction. Recent developments in learning-based trajectory prediction such as RNNs [alahi2016social, Vemula2018] allow for improved prediction in crowded environments, outperforming parametric methods such as SFM [Becker2018]. These methods have been applied to multimodal prediction by learning semantically meaningful latent representations in conditional variational autoencoders [Salzmann2020, Hu2019] and GANs [Huang2019], or by clustering modal paths in output distributions [zyner2019naturalistic]. However, these methods can still fail to outperform even simple baselines such as constant velocity models in many situations [Scholler2019].

GANs for Probabilistic Prediction: GANs [NIPS2014_5423] have recently been used to enable the generation of socially acceptable trajectories in crowd motion prediction. Gupta et al. [gupta2018social] proposed Social GAN, in which the generator of the network consists of an LSTM based encoder-decoder with a social pooling layer modelling the relationship between each pedestrian. The output trajectories of the generator are directly compared to the ground truth by the Discriminator. Social-BiGAT [kosaraju2019social] extends this idea, introducing a flexible graph attention network and further encouraging generalisation towards a multimodal distribution. This method, as well as a similar GAN based approach proposed by [sophie2018], also makes use of overhead contextual scene inputs, which are often difficult to capture in autonomous driving systems. Prior work [gupta2018social, lee2017desire, Huang2019] using GANs for trajectory prediction has followed the assumption, inherited from GAN applications to image synthesis, that we cannot efficiently evaluate the output distribution but can sample from it, requiring multiple iterations to identify the true multimodal distribution. However, our problem's output distribution is much lower-dimensional than that of image synthesis, and has previously been modelled by GMMs as in [ivanovic2019, Salzmann2020, zyner2019naturalistic], allowing a distribution to be generated from a single iteration. Further, our aim differs from synthesis in that we are not trying simply to generate samples in the style of the ground truth conditioned on an observation, but rather samples that mimic the ground truth.

Interaction Modelling: Alahi et al. [alahi2016social] proposed the use of RNNs with a social pooling layer to capture interactions between pedestrians in a crowd, with a similar pooling layer used in Social GAN [gupta2018social]. The pooling mechanism used in [Chandra_2019_CVPR] allows interactions between different agent types by learning respective weightings for each relationship. Recent works [Eiffert2019, ma2019trafficpredict] have extended the work of Vemula et al. [Vemula2018], applying Structural-RNNs [Jain_2016_srnn] to heterogeneous interactions and modelling multiple road agent types using RNNs in a spatio-temporal graph. Veličković et al. proposed graph attention networks (GATs) [velivckovic2017graph] to implicitly assign different importance to nodes in graph-structured data. Kosaraju et al. [kosaraju2019social] applied this concept to multimodal trajectory prediction by formulating pedestrian interactions as a graph, however they apply the graph structure only within the pooling mechanism as a GAT, rather than modelling each relationship of the graph as a separate RNN as in [Vemula2018]. We extend this idea in our work, demonstrating how a vehicle feature can be included in the GAT to form a GVAT.

3 Method

3.1 Problem Definition

In this paper, we address the problem of pedestrian trajectory prediction in crowds both with and without the presence of a vehicle. Given observed trajectories X, and the vehicle path V, for all time steps t in the observed period t ∈ {1, ..., T_obs}, for the N pedestrians within a scene, our aim is to predict the likely future trajectories Ŷ for each pedestrian i ∈ {1, ..., N}, across the future time period t ∈ {T_obs+1, ..., T_pred}. The input position of the i-th pedestrian at time t is defined as x^i_t, and the vehicle's position as v_t. We denote Y as the ground truth future trajectory, with the position of the i-th pedestrian at time t defined as y^i_t and its predicted position as ŷ^{i,m}_t for each predicted modal path m ∈ M^i, where π̂_m is the likelihood of predicted modal path m for agent i. M^i is found from the probabilistic output Ŷ^i, a Gaussian mixture model (GMM) detailed in Eq. 14.

Figure 2: Observed pedestrian trajectories are passed to the Generator’s encoder LSTM, whilst the relative position of all agents, including any vehicle, are passed to the GVAT Pooling module. The Generator outputs a GMM for each agent, from which the MultiPAC module finds the likely modal paths, which are compared to ground truth paths by the Discriminator.

3.2 Overview

Our approach consists of two networks, a Generator and a Discriminator trained adversarially. The Generator is composed of an RNN encoder, our GVAT module, an RNN decoder, and a Mixture Density Network (MDN). The Discriminator is composed of the modal path clustering module MultiPAC, an RNN encoder and a multilayer perceptron (MLP).

Fig. 2 illustrates the overall system architecture.

Generator: The Generator is based on an RNN encoder-decoder framework using LSTM modules, where the GVAT Pooling module is applied to the hidden states between the encoder and decoder LSTMs. The input to the encoder LSTM at each timestep is the observed position of each pedestrian x^i_t, which is first passed through a linear embedding layer as follows:

    e^i_t = φ(x^i_t; W_e)                        (1)
    h^i_t = LSTM(h^i_{t-1}, e^i_t; W_enc)        (2)

where W_e is the embedding weight of φ. All pedestrians within a scene share the LSTM weights W_enc. The decoder's initial hidden state at t = T_obs is composed of the encoder's final hidden state, concatenated with the transformed output of GVAT Pooling for each agent, detailed further in Section 3.3. The first input to the decoder at t = T_obs is again the observed pedestrian positions, passed first through a linear embedding layer of the same form as Eq. 1 with separate weights. However, as the decoder outputs a distribution rather than a single point, we do not simply pass the prediction from the prior timestep as input to the decoder's current timestep. Instead, for all prediction timesteps the decoder inputs are zeros, as opposed to other probabilistic approaches which feed a sample from the prior output as the current input to the decoder. This zero-feed approach is used for both training and inference, and has been shown to improve performance for probabilistic outputs [zyner2019naturalistic].
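The zero-feed rollout can be illustrated with a minimal NumPy sketch, using a toy recurrent cell in place of the LSTM decoder; all names and dimensions here are hypothetical, not the paper's implementation.

```python
import numpy as np

def step(h, x, W_h, W_x):
    """One step of a toy recurrent cell (stand-in for the LSTM decoder)."""
    return np.tanh(h @ W_h + x @ W_x)

def decode_zero_feed(h0, n_steps, W_h, W_x, W_out):
    """Roll the decoder forward feeding zeros at every prediction step,
    rather than sampling from the previous output distribution."""
    h = h0
    outputs = []
    zero_input = np.zeros(W_x.shape[0])
    for _ in range(n_steps):
        h = step(h, zero_input, W_h, W_x)
        outputs.append(h @ W_out)  # raw distribution parameters per timestep
    return np.stack(outputs)

rng = np.random.default_rng(0)
H, D, O = 8, 2, 4
params = decode_zero_feed(rng.normal(size=H), 12,
                          rng.normal(size=(H, H)) * 0.1,
                          rng.normal(size=(D, H)) * 0.1,
                          rng.normal(size=(H, O)) * 0.1)
print(params.shape)  # (12, 4): one parameter vector per predicted timestep
```

The key point is that the output at each step depends only on the evolving hidden state, avoiding the need to sample from the predicted distribution during rollout.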


    h^i_dec = [h^i_{T_obs} ; γ(GVAT(H, i); W_γ)]    (3)

where H is the combined output of Eq. 2 for all agents in the scene, γ is a multi-layer perceptron with ReLU non-linearity, and W_γ is its embedding weight. The outputs of the decoder are passed through a linear embedding layer with weights W_out that maps to a bivariate GMM output Ŷ^i_t for each agent's position at each predicted timestep. Ŷ^i is then passed to the MultiPAC module to determine the set of likely modal paths M^i:

    M^i = MultiPAC(Ŷ^i)    (4)
Discriminator: The Discriminator is comprised of a MultiPAC module and an LSTM encoder of the same form as the Generator's, with separate weights. The output of the Generator is first passed to MultiPAC, from which we compute the set of likely modal paths M^i, as detailed in Section 3.5. This produces trajectories in the same form as the ground truth, allowing comparison by the Discriminator's encoder. The encoder is applied across all prediction timesteps t ∈ {T_obs+1, ..., T_pred}, with inputs first passed through a linear embedding layer. The outputs of the encoder are passed to a multilayer perceptron (MLP) with ReLU activation, classifying each path as either Real or Fake:

    c^i = MLP(h^i_D; W_c)    (5)

where h^i_D is the final hidden state of the Discriminator's encoder.


Training of the network is achieved using two loss functions, L_NLL and L_adv. L_NLL is the negative log-likelihood of the ground truth path Y given the Generator G's output Ŷ, across all prediction timesteps, for all pedestrians:

    L_NLL = − Σ_{i=1}^{N} Σ_{t=T_obs+1}^{T_pred} log P(y^i_t | Ŷ^i_t)    (6)

L_adv is the adversarial loss, determined from the binary cross-entropy of the Discriminator D's classification of the modal paths produced from Ŷ by MultiPAC:

    L_adv = log D(Y) + Σ_{m ∈ M} π̂_m log(1 − D(ŷ^m))    (7)

where the first term refers to D's estimate of the probability that the ground truth trajectory Y is real, and the second term is the sum of weighted estimates for each modal path in the set M being real. We combine the losses to find the optimal Discriminator D* and Generator G*, with weighting λ applied to L_adv:

    G*, D* = arg min_G arg max_D  L_NLL(G) + λ L_adv(G, D)    (8)
3.3 Graph Vehicle-Pedestrian Attention Network

We introduce a novel Graph Vehicle-Pedestrian Attention Network (GVAT), which extends the use of GATs [velivckovic2017graph] for trajectory prediction [kosaraju2019social, zhang2019sr], allowing the modelling of social interactions between all pedestrians in a scene and accommodating the inclusion of a vehicle if present. As opposed to [kosaraju2019social], where only agent hidden states form the GAT input features, we also utilise the distance between agents, so that the vehicle's distance to agent i can be included, allowing the attention module to account for the impact that the vehicle's motion has on each ped-ped relationship. Fig. 3 details the input features of a single node in the graph.

For the i-th pedestrian, the input to the softmax layer is formulated across all other pedestrians j by embedding the distance from pedestrian i to the neighbour pedestrian j and to the vehicle. The softmax scalar a_{ij} is then used to scale the amount agent j's hidden state h^j influences agent i. The summed output across all other agents, P^i, is then concatenated with i's original state h^i to form the output of GVAT Pooling. φ_d, φ_a and φ_h are linear embedding functions; W_d, W_a and W_h denote their parameters respectively:

    d_{ij} = φ_d([x^j − x^i ; z]; W_d),  z = v − x^i    (9)
    a_{ij} = softmax_j(φ_a(d_{ij}; W_a))                (10)
    P^i = Σ_{j≠i} a_{ij} φ_h(h^j; W_h)                  (11)
Figure 3: Node features of agent i (red) in GVAT. The distance from i to the vehicle is appended to each ped-ped distance input before encoding, to account for the impact of the vehicle on i's relationships within the graph. The input to the softmax layer is as per Eq. 10.
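The pooling mechanism can be sketched in NumPy as follows. This is an illustrative simplification, not the paper's exact parameterisation: the embedding functions, dimensions, and the scalar-logit form of the attention score are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gvat_pool(h, pos, veh_pos, W_d, W_h):
    """Toy GVAT-style pooling for agent i = 0. Each neighbour score
    embeds both the ped-ped offset and the shared vehicle offset, so the
    vehicle's position influences every pairwise attention weight."""
    i = 0
    scores = []
    for j in range(1, len(h)):
        feat = np.concatenate([pos[j] - pos[i], veh_pos - pos[i]])
        scores.append(feat @ W_d)           # scalar attention logit
    alpha = softmax(np.array(scores))       # attention over neighbours
    pooled = sum(a * (h[j] @ W_h) for a, j in zip(alpha, range(1, len(h))))
    return np.concatenate([h[i], pooled])   # [own state ; social context]

rng = np.random.default_rng(1)
h = rng.normal(size=(4, 8))                 # hidden states of 4 agents
pos = rng.normal(size=(4, 2))               # agent positions
out = gvat_pool(h, pos, np.array([5.0, 0.0]),
                rng.normal(size=4), rng.normal(size=(8, 8)) * 0.1)
print(out.shape)  # (16,)
```

Removing the vehicle offset from `feat` recovers a plain pedestrian-only attention pooling, mirroring how the method degrades gracefully when no vehicle is present.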
Metric Dataset Lin CVM SGAN SRLSTM PSGAN (ours) PCGAN (ours)
ADE ETH-Univ 0.50 / 0.79 0.48 / 0.70 0.51 / 0.81 0.43 / 0.65 0.45 / 0.68 0.43 / 0.65
ETH-Hotel 0.35 / 0.39 0.28 / 0.33 0.55 / 0.67 0.24 / 0.42 0.52 / 0.64 0.59 / 0.64
UCY-Univ 0.56 / 0.82 0.34 / 0.56 0.56 / 0.78 0.38 / 0.53 0.34 / 0.55 0.49 / 0.57
UCY-Zara01 0.41 / 0.62 0.28 / 0.46 0.46 / 0.63 0.28 / 0.43 0.25 / 0.43 0.25 / 0.40
UCY-Zara02 0.53 / 0.77 0.23 / 0.35 0.35 / 0.56 0.24 / 0.32 0.27 / 0.37 0.22 / 0.34
FDE ETH-Univ 0.88 / 1.57 0.87 / 1.34 0.95 / 1.72 0.80 / 1.26 0.84 / 1.34 0.81 / 1.25
ETH-Hotel 0.60 / 0.72 0.40 / 0.62 0.49 / 1.71 0.45 / 0.90 1.13 / 1.45 1.10 / 1.40
UCY-Univ 1.01 / 1.59 0.71 / 1.20 1.20 / 1.70 0.81 / 1.17 0.71 / 1.23 0.89 / 1.24
UCY-Zara01 0.74 / 1.21 0.57 / 0.99 0.99 / 1.38 0.60 / 0.93 0.53 / 0.87 0.53 / 0.89
UCY-Zara02 0.95 / 1.48 0.47 / 0.75 0.75 / 1.21 0.51 / 0.73 0.56 / 0.76 0.45 / 0.77
MHD ETH-Univ 0.48 / 0.66 0.40 / 0.57 0.44 / 0.66 0.38 / 0.54 0.40 / 0.59 0.38 / 0.55
ETH-Hotel 0.33 / 0.33 0.20 / 0.27 0.22 / 0.69 0.22 / 0.37 0.45 / 0.56 0.51 / 0.55
UCY-Univ 0.52 / 0.76 0.31 / 0.48 0.48 / 0.67 0.34 / 0.45 0.30 / 0.49 0.41 / 0.50
UCY-Zara01 0.39 / 0.55 0.24 / 0.40 0.40 / 0.52 0.26 / 0.36 0.23 / 0.37 0.23 / 0.35
UCY-Zara02 0.47 / 0.71 0.23 / 0.31 0.31 / 0.49 0.22 / 0.31 0.25 / 0.33 0.20 / 0.31
Table 1: Quantitative results of tested methods on all non-vehicle datasets. For each dataset, we compare results across two prediction lengths of 8 and 12 timesteps (3.2 and 4.8 secs), showing Average Displacement Error (ADE), Final Displacement Error (FDE), and Modified Hausdorff Distance (MHD) in meters.
Metric Dataset Lin CVM SGAN SRLSTM PSGAN (ours) PCGAN (ours)
ADE USyd 0.16 0.13 0.16 0.11 0.11 0.11
VCI 0.11 0.09 0.12 0.08 0.12 0.08
FDE USyd 0.30 0.24 0.31 0.22 0.21 0.21
VCI 0.23 0.18 0.22 0.16 0.20 0.15
MHD USyd 0.12 0.09 0.12 0.09 0.08 0.09
VCI 0.09 0.07 0.09 0.07 0.09 0.07
Table 2: Quantitative results of tested methods on both vehicle datasets. We compare results using a prediction length of 12 timesteps (1.0 second (VCI) and 1.2 seconds (USyd)). ADE, FDE and MHD are shown in meters.

3.4 Mixture Density Network

An MDN is used to allow the Generator to propose a multimodal solution for each agent's future trajectory, with assigned relative likelihoods for each Gaussian component of the mixture model. To achieve this, the output of the Generator's decoder is passed through a multilayer perceptron (MLP) to produce output in the form:

    Ŷ_t = Σ_{m=1}^{M} π_{t,m} N(μ_{t,m}, σ_{t,m}, ρ_{t,m})    (14)

where M is the total number of components used in the mixture model, π_{t,m} is the weight of each component in the mixture, μ_{t,m} is the mean and σ_{t,m} the standard deviation per dimension, and ρ_{t,m} is the correlation coefficient, for each timestep t. This is performed separately for each agent i, whose index has been omitted for clarity.
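A plausible NumPy sketch of mapping a raw 6M-dimensional decoder output onto valid mixture parameters is given below. The softmax/exp/tanh squashings and the parameter ordering are standard MDN practice but assumptions here; the paper does not state its exact transforms.

```python
import numpy as np

def split_gmm_params(raw, M):
    """Map a raw output vector of length 6*M onto valid bivariate GMM
    parameters: weights via softmax, std devs via exp, rho via tanh."""
    raw = raw.reshape(M, 6)
    pi = np.exp(raw[:, 0] - raw[:, 0].max())
    pi = pi / pi.sum()                      # mixture weights sum to 1
    mu = raw[:, 1:3]                        # unconstrained 2D means
    sigma = np.exp(raw[:, 3:5])             # strictly positive std devs
    rho = np.tanh(raw[:, 5])                # correlation in (-1, 1)
    return pi, mu, sigma, rho

# One timestep for one agent, M = 6 components as in the experiments.
pi, mu, sigma, rho = split_gmm_params(np.linspace(-1, 1, 36), M=6)
print(pi.shape, mu.shape, sigma.shape, rho.shape)
```

The squashing functions guarantee the output is always a valid density, regardless of the raw MLP values.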

3.5 Modal Path Clustering

In order to allow the training of the Discriminator, the output of the Generator must be converted to the same form as the ground truth trajectories Y. This requires extracting individual tracks from the GMM Ŷ, whilst preserving the multimodality of the distribution. We achieve this by adapting the multiple prediction adaptive clustering algorithm (MultiPAC) proposed by Zyner et al. [zyner2019naturalistic] to allow backpropagation for use during training. MultiPAC finds the set of likely 'modal paths' M^i for each pedestrian from Ŷ^i. It achieves this by clustering the components of the GMM at each timestep using DBSCAN [ester1996], determining each cluster's centroid from the weighted average of all Gaussians in the mixture. Clusters in subsequent timesteps are assigned to parent clusters, forming a tree of possible paths in which the upper limit of children at each timestep is the number of mixture components in the GMM. This tree is computed from a single forward pass of the model, resulting in a forked trajectory when diverging possible paths are predicted for a single agent; each branch of the fork is passed separately to the Discriminator. The paths from each leaf to the root are returned as the set of modal paths for each pedestrian, with assigned likelihoods π̂_m.

3.6 Implementation

The LSTM encoder and decoder of the Generator both have a hidden state size of 32, whilst the Discriminator's LSTM encoder hidden state size is 64. The linear embedding layers applied to all inputs of both encoders, and to the first input of the decoder at t = T_obs, produce a 16-dimensional vector from the input coordinates. The linear embedding layer at the decoder's output produces a vector of length 6M, where M is the number of components in the GMM, set as 6 for all experiments. Both MLPs have a hidden layer of size 64 and use ReLU activation. The network is trained initially for 10 epochs using only the negative log-likelihood loss L_NLL, before training adversarially using both loss functions for a further 90 epochs. This initial training is implemented in order to encourage the Generator to produce sensible results before comparison to the ground truth by the Discriminator, and also allows training to converge in significantly fewer iterations. All training is performed using the Adam optimiser with a batch size of 32 and an initial learning rate of 0.001. The weighting λ in Eq. 8 is chosen as 0.1.
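The two training losses can be illustrated with a NumPy sketch for a single pedestrian and timestep; all values are hypothetical, the bivariate NLL follows the standard closed form, and the placement of λ on the adversarial term is an assumption consistent with the combined objective above.

```python
import numpy as np

def bivariate_nll(y, mu, sigma, rho):
    """Negative log-likelihood of a 2D point under one bivariate Gaussian."""
    dx = (y - mu) / sigma                    # standardised residuals
    z = dx[0]**2 - 2 * rho * dx[0] * dx[1] + dx[1]**2
    one_m_r2 = 1 - rho**2
    log_norm = np.log(2 * np.pi * sigma[0] * sigma[1] * np.sqrt(one_m_r2))
    return log_norm + z / (2 * one_m_r2)

def adversarial_loss(d_real, d_fake, pi):
    """Binary cross-entropy form of the adversarial term: D's score on
    the ground truth plus likelihood-weighted scores on modal paths."""
    return -(np.log(d_real) + np.sum(pi * np.log(1 - d_fake)))

# Hypothetical values: one ground-truth point, two predicted modal paths.
nll = bivariate_nll(np.array([1.0, 0.5]), np.array([0.9, 0.6]),
                    np.array([0.3, 0.3]), 0.1)
adv = adversarial_loss(0.8, np.array([0.4, 0.3]), np.array([0.7, 0.3]))
lam = 0.1
total = nll + lam * adv  # combined objective with lambda = 0.1
print(float(total))
```

In practice these quantities are summed over all pedestrians and prediction timesteps before backpropagation.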

4 Experiments

We conduct two experiments in order to validate our method's effectiveness both with and without a vehicle feature input. Firstly, we evaluate our model without any vehicle feature input on two publicly available datasets of real-world interacting pedestrian crowds, ETH [pellegrini2009you] and UCY [lerner2007crowds]. Next, we verify our model using a vehicle feature input on two datasets of interacting pedestrian crowds and vehicles: the publicly available Vehicle-Crowd Interaction DUT dataset (VCI) [yang2019top], and the USyd Campus Dataset (USyd) [usydc].

4.1 Datasets

ETH and UCY contain 5 crowd scenes: ETH-Univ, ETH-Hotel, UCY-Univ, UCY-Zara01, and UCY-Zara02. Each dataset is converted to world coordinates with an observation frequency of 2.5 Hz, similar to [gupta2018social]. We deal with the ETH-Univ frame rate issue addressed by [zhang2019sr] similarly, treating every 6 frames as 0.4 s rather than 10 frames, and retrain all comparative models for this scene.

USyd has been collected on a weekly basis by [usydc] since March 2018 over the University of Sydney campus and surroundings. The dataset contains over 52 weeks of drives and covers various environmental conditions. Since our work primarily focuses on predicting socially plausible future trajectories of pedestrians under the influence of one vehicle, we select from the dataset 17 scenarios in a large open area with high pedestrian activity. Pedestrians are detected by fusing YOLO [2016you] classification results with LiDAR point clouds from the vehicle's onboard sensors, as illustrated in Fig. 1. The GMPHD [Vo2006] tracker is used to automatically label the trajectories of pedestrians. To increase the diversity of data available for training, we apply data augmentation by randomly flipping the 2D coordinates. Due to limitations on the length of time agents are observed in this dataset, we use an observation frequency of 10 Hz, rather than downsampling to be comparable to Experiment 1.
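A sketch of this flip augmentation is shown below; the helper name and the per-axis random sign choice are assumptions, as the paper does not give its exact implementation.

```python
import numpy as np

def random_flip(traj, rng):
    """Randomly mirror a trajectory's 2D coordinates about each axis,
    a simple augmentation of the kind described for the USyd data."""
    signs = rng.choice([-1.0, 1.0], size=2)  # independent sign per axis
    return traj * signs                      # broadcasts over (T, 2)

rng = np.random.default_rng(0)
traj = np.array([[0.0, 1.0], [1.0, 2.0]])
flipped = random_flip(traj, rng)
print(flipped.shape)  # (2, 2)
```

Because pedestrian dynamics are symmetric under mirroring, such flips increase data diversity without changing the underlying interaction patterns.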

VCI, proposed by [yang2019top], contains two scenes of labelled video from a bird's-eye view of vehicle-crowd interactions, recorded at 24 Hz. We downsample this dataset to 12 Hz in order to make results comparable with the USyd dataset, and remove sequences which contain more than one vehicle.

Figure 4: The predicted modal path trees of MultiPAC are shown in a different colour for each pedestrian, over the probabilistic output of the Generator. Example interactions are from the ETH and UCY datasets, using PCGAN trained without vehicle feature input. Multimodal output is clear in examples in which pedestrians may take one of multiple possible future paths to avoid a collision. Example (a) displays two likely paths that the yellow agent might have taken as the pedestrians approach each other. Example (b) similarly shows multimodal possibilities, including the pedestrians continuing to turn, or starting to travel forwards. Examples (c) through (f) demonstrate similar behaviour in larger pedestrian crowds.
Figure 5: Predicted pedestrian trajectories using PCGAN trained with vehicle feature input on the VCI dataset. Example (a) illustrates a scene of a vehicle approaching pedestrians from behind, displaying expected multimodal reactions of the pedestrians to either continue forwards at increased speed or move aside. Examples (b) and (c) further illustrate this concept, showing how the direction of the vehicle approach can impact the pedestrians’ reaction.

4.2 Evaluation Metrics and Baselines

4.2.1 Metrics

Similar to prior work [alahi2016social], we include two error metrics: Average Displacement Error (ADE) and Final Displacement Error (FDE). However, as discussed by Zyner et al. [zyner2019naturalistic], these commonly used measures do not consider outliers throughout the prediction and penalise misalignment in time and space equally. This can result in a prediction with an incorrect speed profile but correct direction having a similar error to a prediction with the completely wrong direction, which is a significantly worse result. As such, the Modified Hausdorff Distance (MHD), which does not suffer this issue, is also included as an evaluation metric.

The metrics used are as follows:

  • ADE: Average Euclidean distance between ground truth and prediction trajectories over all predicted time steps.

  • FDE: Euclidean distance between ground truth and prediction trajectories for the final predicted time step.

  • MHD: A measure of similarity between trajectories, computed from the distance between each predicted point and its nearest point on the ground truth trajectory, without requiring alignment in time.
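These metrics can be sketched in NumPy as follows; the MHD shown is the directed mean-of-minimum-distances variant, which is an assumption about the exact formulation used.

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean Euclidean error over all steps."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def fde(pred, gt):
    """Final Displacement Error: Euclidean error at the last step."""
    return np.linalg.norm(pred[-1] - gt[-1])

def mhd(pred, gt):
    """Directed Modified Hausdorff Distance: mean over predicted points
    of the distance to the nearest ground-truth point."""
    d = np.linalg.norm(pred[:, None] - gt[None, :], axis=-1)
    return d.min(axis=1).mean()

# A straight-line ground truth and a prediction offset laterally by 0.1 m.
gt = np.stack([np.arange(5.0), np.zeros(5)], axis=1)
pred = gt + np.array([0.0, 0.1])
print(ade(pred, gt), fde(pred, gt), mhd(pred, gt))
```

Note how a prediction with the correct path shape but a shifted speed profile is penalised by ADE/FDE yet scores well on MHD, which is exactly the distinction motivating its inclusion.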

4.2.2 Baseline Comparisons

We compare our model against the following baseline and state of the art methods:

  • Lin: A linear regression of pedestrian motion over each dimension.

  • CVM: Constant Velocity Model proposed by  [Scholler2019].

  • Social GAN (SGAN) [gupta2018social]: LSTM encoder-decoder with a social pooling layer, trained as a GAN.

  • SR-LSTM [zhang2019sr]: LSTM based model using a State Refinement module.

Additionally, we perform an ablation study of our method, comparing our model using a social pooling layer as proposed in [gupta2018social] (PSGAN), and the model trained instead using our GVAT module for social pooling (PCGAN).

As SGAN requires a random noise input for generation, we sample this method 10 times, returning the average error of all samples, as opposed to [gupta2018social], where the sample with the best error compared to the ground truth was used.

4.3 Methodology

For all evaluations on our probabilistic methods, PSGAN and PCGAN, we apply MultiPAC to the output of the Generator to find all modal paths, using the predicted path with the highest probability to compute the error.

Experiment 1: Similar to [alahi2016social], we train on four datasets and evaluate on the remaining one. We observe the last 8 timesteps of each trajectory (3.2 seconds) and predict for the next 8 (3.2 seconds) and 12 (4.8 seconds) timesteps.

Experiment 2: Each dataset is split into non-overlapping train, validation and test sets in ratios of 60%, 20% and 20% respectively. We observe 8 timesteps of each trajectory (0.67 seconds (VCI) and 0.8 seconds (USyd)) and predict the next 12 timesteps (1.0 second (VCI) and 1.2 seconds (USyd)).

Figure 6: Comparison of methods on UCY-Zara01 (top) and VCI (bottom) showing the entire modal path tree for both PSGAN and PCGAN. Whilst CVM and SRLSTM outperform our methods on some datasets, our multimodal output better represents uncertainty in crowd interactions, demonstrated in the top example where the possibility that oncoming pedestrians could avoid each other in two different ways is reflected in the branching modal path trees. The bottom illustrates how PCGAN improves predictions in the presence of a vehicle compared to PSGAN, accounting for the impact of the vehicle’s motion on pedestrians’ motion.

5 Results and Discussion

5.1 Quantitative Evaluation

Experiment 1: Table 1 compares results for all methods on the ETH and UCY datasets. Our adversarial approaches PSGAN and PCGAN clearly outperform the previous sampling-based adversarial approach [gupta2018social], demonstrating that the use of a direct probabilistic generator output can improve performance in trajectory prediction. Additionally, PCGAN and PSGAN achieve comparable or improved performance in 17 out of 30 metrics compared to prior methods, suggesting that our probabilistic GAN approach can improve trajectory prediction performance in certain crowd interactions. Even when used without vehicle feature input, we can see that the inclusion of the GVAT for social pooling in PCGAN improves performance in the majority of tests compared to PSGAN. However, on both ETH-Hotel and UCY-Univ we find that PSGAN outperforms PCGAN. On these two datasets CVM also performs well, suggesting that there may be fewer pedestrian interactions involved, allowing more linear models to achieve improved results. Schöller et al. [Scholler2019] demonstrated the effectiveness of CVM, and we find that this result still holds even when limited to prediction periods of 8 and 12 timesteps. SGAN [gupta2018social] performs poorly for all tested datasets when limited to using the average error over multiple samples, as opposed to using the best sample error compared to the ground truth. This result is similar to that obtained in [zhang2019sr], where SGAN was not found to perform well when limited to a single sample. Whilst this may be a result of SGAN sampling between multiple future modal paths, our method PSGAN, which extends SGAN for direct probabilistic output, demonstrates that by estimating the likelihood of each modal path we can greatly decrease the error of the adversarially trained method for all metrics.
Unlike both SGAN and our methods, State Refinement LSTM (SRLSTM) [zhang2019sr] pools across the pedestrian hidden states found from the most recent observation. Whilst only being comparable for predictions of 12 timesteps, this method performs well for all datasets, confirming the importance of using the most recently available information for predictions.

Experiment 2: Table 2 outlines the performance of all compared methods on the VCI and USyd datasets, both of which contain pedestrian-vehicle interactions. These results again highlight how using a probabilistic output during adversarial training can improve prediction results, with both of our methods, PSGAN and PCGAN, improving upon SGAN. Importantly, we can see that by including the vehicle feature input in GVAT pooling we achieve significant improvements on the VCI dataset, with PCGAN significantly outperforming PSGAN, and outperforming or equalling SRLSTM on all metrics. CVM and PSGAN score well on the MHD metric, suggesting that these methods are likely predicting the direction of pedestrian trajectories correctly but their speed profiles incorrectly.

5.2 Qualitative Evaluation

Experiment 1: Fig. 4 demonstrates realistic behaviours between pedestrians, producing results that reflect the actual probabilistic and multimodal nature of crowd interactions. Examples (a) and (b) both reflect the ambiguity expected during an interaction between two pedestrians. The two possible trajectories that can be taken to avoid an oncoming pedestrian are clearly displayed in the modal paths of example (a), where one branch of the modal path tree matches the actual trajectory taken. This situation is again seen in Fig. 6 (top), where PSGAN (dark green) and PCGAN (light green) are able to accurately predict the turning of oncoming pedestrians with branching modal paths, whilst both SRLSTM (pink) and CVM (yellow) do not account for this ambiguity. Likewise, example (b) reflects the possibility that the two pedestrians might continue turning together, or instead travel forwards beside each other. Additional examples extend these ideas to more crowded scenes, with multiple pedestrians displaying similar multimodal and uncertain interactions. In Fig. 4 (b), whilst there exists a clear dependency between the two predicted forking modal path trees, our model does not currently have the ability to determine this relationship, and so cannot predict which branch an agent will take even with knowledge of the true path of a neighbouring agent.

Experiment 2: The extension of our approach to include a vehicle allows the modelling of interactions in shared pedestrian-vehicle environments, predicting crowd response in the presence of a vehicle as shown in Fig. 5. Experiment 2 uses a shorter timestep, of only 0.1 seconds for USyd and 0.083 seconds for VCI. As expected over this shorter horizon, we do not see as significant interactions, reflected in the near-linear ground truth in both Fig. 5 and Fig. 6 (bottom). However, we can still see clear multimodal predictions in certain interactions, including when the vehicle approaches pedestrians from behind as in Fig. 5 (a), where the closest pedestrian responds by beginning to move to the side. This interaction is reflected in the predicted modal paths, although the sideways motion is predicted in the wrong direction. In Fig. 6 (bottom) we also see how only PCGAN accounts for the vehicle's influence on the pedestrians, correctly predicting the possibility that the pedestrians will return to their original motion once the vehicle has passed.

6 Conclusion

Our work shows how a direct multimodal probabilistic output can be generated in an adversarial network for pedestrian trajectory prediction, outperforming existing methods including sampling-based approaches. We additionally show how the presence of an autonomous vehicle can be considered through the introduction of a novel GVAT pooling mechanism. By comparing our work to [gupta2018social], a non-probabilistic GAN used for trajectory prediction, we have shown that our probabilistic approach clearly benefits adversarial training for this problem. Our work focuses on how a single vehicle can operate away from the lane-based structure of a road, examining crowd interactions to enable safer decisions. However, it could in future be extended for use with multiple vehicles by including all vehicles as nodes, removing the z term from Eq. 9 and replacing the embedding weights with a separate set of weights for each agent type pair to learn their relationship dynamics.