Pedestrian trajectory prediction is an important task in autonomous driving [1, 2, 3] and mobile robot applications [4, 5, 6]. This task allows an intelligent agent, e.g., a self-driving car or a mobile robot, to foresee the future positions of pedestrians. Based on such predictions, the agent can plan a safe and smooth route.
However, pedestrian trajectory prediction is a great challenge due to the intrinsic uncertainty of pedestrians’ future positions. In a crowded scene, each pedestrian dynamically changes his/her walking speed and direction, partly attributed to his/her interactions with surrounding pedestrians.
To make accurate predictions, existing algorithms focus on making full use of the interactions between pedestrians. Early works [7, 8, 9, 10] model the interactions by hand-crafted features. Social Force [7] models several force terms to predict human behaviors. The approach in [8] constructs an energy grid map to describe the interactions in crowded scenes. However, their performance is limited by the quality of the manually designed features. Recently, data-driven methods have demonstrated powerful performance [11, 12, 13, 14]. For instance, Social LSTM [11] considers interactions among pedestrians close to each other. Social GAN [13] models interactions among all pedestrians. Social Attention [14] captures spatio-temporal interactions.
Previous methods have achieved great success in trajectory prediction. However, all these methods assume that the complicated interactions among pedestrians can be decomposed into pairwise interactions. This assumption neglects the collective influence among pedestrians in the real world, so previous methods tend to fail in complicated scenes. Meanwhile, the number of pairwise interactions grows quadratically as the number of pedestrians increases. Hence, existing methods are computationally inefficient.
In this paper, we propose a new deep neural network, StarNet, to model the complicated interactions among all pedestrians together. As shown in Figure 1, StarNet has a star topology, hence the name. The central part of StarNet is the hub network, which produces a representation $r$ of the interactions among pedestrians. To be specific, the hub network takes the observed trajectories of all pedestrians and produces a comprehensive spatio-temporal representation $r$ of all interactions in the crowd. Then, $r$ is sent to each host network. Each host network predicts one pedestrian's trajectory. Specifically, depending on $r$, each host network exploits an efficient method to calculate the pedestrian's interactions with others. Then, the host network predicts the pedestrian's trajectory based on his/her interactions with others, as well as his/her observed trajectory.
StarNet has two advantages over previous methods. First, the representation $r$ describes not only pairwise interactions but also collective ones. Such a comprehensive representation enables StarNet to make accurate predictions. Second, the interactions between one pedestrian and the others are computed efficiently. When predicting all pedestrians' trajectories, the computational time increases linearly, rather than quadratically, with the number of pedestrians. Consequently, StarNet outperforms multiple state-of-the-art methods in terms of both accuracy and computational efficiency.
Our contributions are twofold. First, we propose to describe the collective interactions among pedestrians, which results in more accurate predictions. Second, we devise a star topology for the network to take advantage of the representation $r$, leading to computational efficiency.
II Related Work
Our work mainly focuses on human path prediction. In this section, we give a brief review of recent research in this domain.
Pedestrian path prediction is a great challenge due to the uncertainty of future movements [7, 8, 10, 11, 13, 14, 15]. Conventional methods tackle this problem with manually crafted features. Social Force [7] extracts force terms, including self-properties and attractive effects, to model human behaviors. Another approach [8] constructs an energy map to indicate the traffic capacity of each area in the scene, and uses a fast marching algorithm to generate a walking path. The Mixture model of Dynamic pedestrian-Agents (MDA) [10] learns behavior patterns by modeling dynamic interactions and pedestrian beliefs. However, all these methods can hardly capture complicated interactions in crowded scenes, due to the limitations of hand-crafted features.
Data-driven methods remove the requirement of hand-crafted features, and greatly improve the ability to predict pedestrian trajectories. Some attempts [11, 13, 14, 26, 27] receive pedestrian positions and predict deterministic trajectories. Social LSTM [11] devises social pooling to deal with interpersonal interactions: it divides each pedestrian's surrounding area into grids, and computes pairwise interactions between pedestrians within a grid. Compared with Social LSTM, other approaches [13, 15] eliminate the restriction to a fixed area. Social GAN [13] combines Generative Adversarial Networks (GANs) [16] with an LSTM-based encoder-decoder architecture, and samples plausible trajectories from a distribution. Social Attention [14] estimates multiple Gaussian distributions of future positions, then generates candidate trajectories through a Mixture Density Network (MDN) [17].
However, existing methods compute pairwise features, and thus oversimplify the interactions in real-world environments. Meanwhile, they suffer from a huge computational burden in crowded scenes. In contrast, our proposed StarNet, with its novel architecture, captures joint interactions over all pedestrians, which is both more accurate and more efficient.
In this section, we first describe the formulation of the pedestrian prediction problem. Then we provide the details of our proposed method.
III-A Problem Formulation
We assume the number of pedestrians is $N$, the number of observed time steps is $T_{obs}$, and the number of time steps to be predicted is $T_{pred}$. For the $i$-th pedestrian, his/her observed trajectory is denoted as $O^i = \{o^i_t \mid t = 1, \dots, T_{obs}\}$, where $o^i_t = (x^i_t, y^i_t)$ represents his/her coordinates at time step $t$. Similarly, the ground-truth future trajectory is denoted as $F^i = \{f^i_t \mid t = T_{obs}+1, \dots, T_{obs}+T_{pred}\}$.
Given such notations, our goal is to build a fast and accurate model to predict the future trajectories $\{F^i\}_{i=1}^{N}$ of all pedestrians, based on their observed trajectories $\{O^i\}_{i=1}^{N}$. In other words, we try to find a function mapping from $\{O^i\}_{i=1}^{N}$ to $\{F^i\}_{i=1}^{N}$. We employ a deep neural network, called StarNet, to embody this function. Specifically, StarNet consists of two novel parts, i.e., a hub network and host networks. The hub network computes a representation $r$ of the crowd. Then, each host network predicts the future trajectory of one pedestrian depending on the pedestrian's observed trajectory and $r$. We first describe the hub network and then present the host networks.
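The input/output contract above can be pinned down with a small sketch. This is only an illustration of the tensor shapes, not the StarNet model: the `predict` placeholder and all sizes (`N`, `T_obs`, `T_pred`) are assumptions for demonstration.

```python
import numpy as np

# Hypothetical sizes (not from the paper): N pedestrians, T_obs observed
# steps, T_pred steps to predict; each position is an (x, y) coordinate.
N, T_obs, T_pred = 5, 8, 12

rng = np.random.default_rng(0)
observed = rng.standard_normal((N, T_obs, 2))   # O^i for i = 1..N

def predict(observed, T_pred):
    """Placeholder for the learned mapping: here it naively repeats the
    last observed position, just to fix the input/output shapes."""
    last = observed[:, -1:, :]                  # (N, 1, 2)
    return np.repeat(last, T_pred, axis=1)      # (N, T_pred, 2)

predicted = predict(observed, T_pred)
assert predicted.shape == (N, T_pred, 2)        # one future track per pedestrian
```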
III-B The hub network
The hub network takes all of the observed trajectories simultaneously and produces a comprehensive representation $r$ of the crowd of pedestrians. The representation includes both spatial and temporal information of the crowd, which is the key to describing the interactions among pedestrians.
Note that our algorithm should be invariant against isometric transformations (translation and rotation) of the pedestrians' coordinates. The invariance against rotation is achieved by randomly rotating the training data during the training process, while the invariance against translation is guaranteed by calculating a translation-invariant representation $r$.
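The rotation augmentation mentioned above can be sketched as follows. This is a generic random-rotation routine, not the paper's exact data pipeline; the array layout `(N, T, 2)` is an assumption.

```python
import numpy as np

def random_rotation(coords, rng):
    """Rotate all pedestrians' coordinates by one random angle.

    coords: (N, T, 2) array of pedestrian positions.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return coords @ R.T

rng = np.random.default_rng(42)
coords = rng.standard_normal((3, 8, 2))
rotated = random_rotation(coords, rng)

# A rotation preserves inter-pedestrian distances at every time step.
d0 = np.linalg.norm(coords[0] - coords[1], axis=-1)
d1 = np.linalg.norm(rotated[0] - rotated[1], axis=-1)
assert np.allclose(d0, d1)
```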
As shown in Figure 2, the hub network produces $r$ in two steps. First, the hub network produces a spatial representation of the crowd for each time step. The spatial representation is invariant against the translation of the coordinates. Then, the spatial representations are fed into an LSTM to produce the spatio-temporal representation $r$.
III-B1 Spatial representation
In the first step, in order to make the representation invariant against translation, the hub network preprocesses the coordinates of the pedestrians by subtracting the central coordinates $c_t$ of all pedestrians at time step $t$ from every coordinate.
Thus, the centralized coordinates are invariant against translation. The centralized coordinates of each pedestrian are mapped into a new space using an embedding function $f(\cdot; W_1)$ with parameters $W_1$,

$$e^i_t = f(o^i_t - c_t; W_1),$$

where $o^i_t$ is the position of the $i$-th pedestrian at time step $t$ (the predicted position is used during the prediction steps), and $e^i_t$ is the spatial representation of the $i$-th pedestrian's trajectory at time step $t$. The embedding function $f(\cdot; W_1)$ is implemented as a fully connected layer with a nonlinear activation.
Then, we use a max-pooling operation to combine the spatial representations of all pedestrians, obtaining the spatial representation of the crowd at time step $t$,

$$s_t = \mathrm{maxpool}\left(\{e^i_t\}_{i=1}^{N}\right).$$
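The centralize-embed-pool step can be sketched in a few lines. This is a minimal illustration, assuming the embedding is a fully connected layer with ReLU (the exact layer and the embedding size `D` are assumptions); the key property it demonstrates is translation invariance of the pooled representation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 32                       # pedestrians, embedding size (assumed)
W1 = rng.standard_normal((2, D)) * 0.1

def spatial_representation(positions, W1):
    """positions: (N, 2) coordinates of all pedestrians at one time step."""
    centered = positions - positions.mean(axis=0)   # subtract central coordinates
    embedded = np.maximum(centered @ W1, 0.0)       # f(.; W1): FC + ReLU (assumed)
    return embedded.max(axis=0)                     # max-pool over pedestrians

positions = rng.standard_normal((N, 2))
s_t = spatial_representation(positions, W1)
assert s_t.shape == (D,)
# Shifting every pedestrian by the same offset leaves s_t unchanged.
assert np.allclose(s_t, spatial_representation(positions + 7.5, W1))
```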
The spatial representation $s_t$ contains information about the crowd at a single time step. However, pedestrians interact with each other dynamically. To improve the accuracy of predictions, a spatio-temporal representation is required.
III-B2 Spatio-temporal representation
In the second step, the hub network feeds the set of spatial representations $\{s_t\}$ of sequential time steps into an LSTM. The LSTM combines all the spatial representations in its hidden state. Thus, the hidden state of the LSTM is a spatio-temporal representation $r$ of all pedestrians. Specifically, we can calculate $r$ as follows:

$$(o_t, h_t) = \mathrm{LSTM}\left(f(s_t; W_2), h_{t-1}; W_3\right), \quad r_t = h_t,$$

where $W_1$ and $W_2$ are the embedding weights, $W_3$ is the weight of the LSTM, and $o_t$ and $h_t$ are the output and hidden state of the LSTM, respectively.
Note that $r$ depends on the observed trajectories of all pedestrians. Hence, our algorithm is able to consider complicated interactions among multiple pedestrians. This property allows our algorithm to produce accurate predictions. Meanwhile, $r$ can be obtained in a single forward propagation of the hub network at each time step. In other words, the time complexity of computing the interactions among pedestrians is linear in the number of pedestrians $N$. This property makes our algorithm computationally efficient. By contrast, conventional algorithms compute pairwise interactions, which oversimplifies the interactions among pedestrians; moreover, the number of pairwise interactions increases quadratically as $N$ increases.
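The temporal aggregation above is a single LSTM rolled over the per-step crowd representations. Below is a self-contained numpy sketch of that pass, with a hand-written LSTM cell standing in for the hub's recurrent unit; all sizes and weight initializations are assumptions for illustration.

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One standard LSTM step. x: (Dx,), h, c: (Dh,), W: (Dx+Dh, 4*Dh)."""
    z = np.concatenate([x, h]) @ W + b
    Dh = h.shape[0]
    i, f, g, o = z[:Dh], z[Dh:2*Dh], z[2*Dh:3*Dh], z[3*Dh:]
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell state update
    h = sigmoid(o) * np.tanh(c)                    # hidden state update
    return h, c

rng = np.random.default_rng(1)
Dx, Dh, T_obs = 32, 64, 8                          # assumed sizes
W = rng.standard_normal((Dx + Dh, 4 * Dh)) * 0.1
b = np.zeros(4 * Dh)

h = c = np.zeros(Dh)
for t in range(T_obs):                 # one pass over the crowd's s_t sequence
    s_t = rng.standard_normal(Dx)      # stand-in spatial representation at step t
    h, c = lstm_step(s_t, h, c, W, b)

r = h                                  # spatio-temporal representation of the crowd
assert r.shape == (Dh,)
```

Note that the cost of this pass does not depend on the number of pedestrians at all once the pooled `s_t` is available, which is where the linear overall complexity comes from.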
III-C The host networks
The spatio-temporal representation $r$ is then employed by the host networks. For the $i$-th pedestrian, the host network first embeds the observed trajectory $O^i$, and then combines the embedded trajectory with the spatio-temporal representation $r$ to predict the future trajectory. Specifically, the host network predicts the future trajectory in two steps.
First, the host network takes the observed trajectory $O^i$ and the spatio-temporal representation $r$ as input and generates an integrated representation $q^i_t$,

$$q^i_t = f(o^i_t - c_t; W_4) \odot r_t,$$

where $W_4$ is the embedding weight, and $\odot$ denotes the point-wise multiplication. $q^i_t$ depends on both the trajectory of the $i$-th pedestrian and the interactions between the $i$-th pedestrian and the others in the crowd.
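Because the crowd representation is shared, combining it with each pedestrian costs one point-wise product per pedestrian. A minimal sketch (sizes assumed, embeddings faked with random vectors):

```python
import numpy as np

rng = np.random.default_rng(2)
N, Dh = 5, 64
r = rng.standard_normal(Dh)                 # shared crowd representation
embedded = rng.standard_normal((N, Dh))     # per-pedestrian trajectory embeddings

# One point-wise product per pedestrian: N operations in total, so combining
# every pedestrian with the crowd grows linearly in N (vs. N^2 pairwise terms).
q = embedded * r                            # broadcasts r over all N rows
assert q.shape == (N, Dh)
```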
Second, the host network predicts the future trajectory of the $i$-th pedestrian depending on the observed trajectory $O^i$ and the integrated representation $q^i_t$. To encourage the host network to produce non-deterministic predictions, a random noise $z$, sampled from a Gaussian distribution with mean 0 and variance 1, is concatenated to the input of the host network. Specifically, the host network encodes the observed trajectory with the hidden state $h^i_t$, i.e.,

$$h^i_t = E\left(o^i_t, q^i_t, z, h^i_{t-1}; W_5\right),$$

where $E(\cdot; W_5)$ with weight $W_5$ denotes the encoding procedure. Then, the host network proceeds with

$$\hat{h}^i_t = D\left(\hat{o}^i_{t-1}, q^i_t, z, \hat{h}^i_{t-1}; W_6\right), \quad \hat{o}^i_t = g\left(\hat{h}^i_t; W_7\right),$$

where $D(\cdot; W_6)$ with weight $W_6$ is the decoding function, and $W_7$ is the embedding weight of the output layer. The initial state of the decoder is set according to the final state of the encoder, i.e., $\hat{h}^i_{T_{obs}} = h^i_{T_{obs}}$ and $\hat{o}^i_{T_{obs}} = o^i_{T_{obs}}$.
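The autoregressive decoding loop can be sketched as below. This is a toy stand-in, not the paper's network: the recurrent update is a single `tanh` layer instead of an LSTM, and all weights, sizes, and initial states are assumptions. It only illustrates how noise, the integrated representation, and the previous prediction feed each decoding step.

```python
import numpy as np

rng = np.random.default_rng(3)
Dh, T_pred = 16, 12
q = rng.standard_normal(Dh)           # integrated representation (held fixed here)

# Hypothetical stand-ins for the decoder weight and the output layer.
W6 = rng.standard_normal((2 + 3 * Dh, Dh)) * 0.1
W7 = rng.standard_normal((Dh, 2)) * 0.1

def decode_step(prev_pos, h, q, z):
    """One decoding step: previous position + q + noise + state -> next position."""
    x = np.concatenate([prev_pos, q, z, h])
    h = np.tanh(x @ W6)               # toy recurrent update, not a real LSTM
    return h @ W7, h

z = rng.standard_normal(Dh)           # Gaussian noise, N(0, 1)
h = np.zeros(Dh)                      # would be the encoder's final hidden state
pos = np.zeros(2)                     # would be the last observed position
trajectory = []
for _ in range(T_pred):
    pos, h = decode_step(pos, h, q, z)
    trajectory.append(pos)

trajectory = np.stack(trajectory)
assert trajectory.shape == (T_pred, 2)
```

Resampling `z` and rerunning the loop yields a different plausible trajectory, which is the source of the non-deterministic predictions.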
III-D Implementation Details
The network configuration of StarNet is detailed in TABLE I.
We train the proposed StarNet with the loss function applied in [13]. Specifically, at the training stage, StarNet produces multiple predicted trajectories for each pedestrian. Each predicted trajectory has a distance to the ground-truth trajectory $F^i$, and only the smallest distance is minimized. Mathematically, the loss function is

$$L = \frac{1}{N} \sum_{i=1}^{N} \min_{k=1,\dots,K} \left\| \hat{F}^{i,(k)} - F^i \right\|_2,$$

where $K$ is the number of sampled trajectories and $\hat{F}^{i,(k)}$ is the $k$-th sampled trajectory for the $i$-th pedestrian. This loss function improves the training speed and stability. Moreover, we employ an Adam optimizer and set the learning rate to .
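The min-over-samples ("variety") loss for a single pedestrian can be sketched directly. The per-step Euclidean distances are summed along the trajectory here; whether the paper sums or averages over time steps is an assumption.

```python
import numpy as np

def variety_loss(samples, gt):
    """samples: (K, T_pred, 2) candidate trajectories for one pedestrian;
    gt: (T_pred, 2) ground truth. Only the closest sample is penalized."""
    dists = np.linalg.norm(samples - gt, axis=-1).sum(axis=-1)  # (K,)
    return dists.min()

gt = np.zeros((12, 2))
samples = np.stack([np.full((12, 2), v) for v in (1.0, 0.5, 2.0)])
# Each constant trajectory (v, v) is 12*sqrt(2)*v away from gt; min at v = 0.5.
loss = variety_loss(samples, gt)
assert np.isclose(loss, 12 * np.sqrt(2) * 0.5)
```

Only the gradient of the best sample flows back, so the model is free to spread its other samples over alternative plausible futures.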
In practice, all host networks share the same weights. The observed trajectories of all pedestrians form a batch, which is fed into one single instance of the host network. In this way, the predictions for all pedestrians can be obtained in a single forward propagation.
We evaluate our model on two crowded human-trajectory datasets: ETH [24] and UCY [25]. These datasets contain 5 sets recorded in 4 different scenes. The scenes contain challenging interactions, such as people walking side by side, collision avoidance, and changes of direction. Following the settings in [11, 13, 14], we train our model on 4 sets and test it on the remaining one.
We compare our StarNet with three state-of-the-art methods: Social LSTM [11], Social GAN [13], and Social Attention [14]. In addition, we test the basic LSTM-based encoder-decoder model, which does not consider the interactions among pedestrians, as a baseline.
Following [11, 13, 14], we compare these methods in terms of the Average Displacement Error (ADE) and the Final Displacement Error (FDE). The ADE is defined as the mean Euclidean distance between the predicted coordinates and the ground truth. Specifically, all methods output 8 coordinates uniformly sampled from the predicted trajectory, and the distances between these 8 points and the corresponding ground-truth points are averaged as the ADE. The FDE is the distance between the final point of the predicted trajectory and the final point of the ground truth. All methods are trained with the loss function in Section III-D to deal with the multimodal distribution during evaluation. Besides, we compare the computational time of all methods. All experiments are conducted on the same computational platform with an NVIDIA Tesla V100 GPU.
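The two metrics are straightforward to compute from two equally sampled trajectories; a minimal sketch:

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (T, 2) trajectories sampled at the same T time steps.
    Returns (ADE, FDE): mean distance over all steps, distance at the last step."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return dists.mean(), dists[-1]

t = np.linspace(0.0, 1.0, 8)
gt = np.stack([t, np.zeros(8)], axis=1)          # straight line along x
pred = gt + np.array([0.0, 0.3])                  # constant 0.3 offset in y
ade, fde = ade_fde(pred, gt)
assert np.isclose(ade, 0.3) and np.isclose(fde, 0.3)
```

With a constant offset the two metrics coincide; a prediction that drifts over time instead shows an FDE larger than its ADE.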
IV-A Experimental Results
As shown in TABLE II, StarNet outperforms the other methods in most cases. A possible explanation is that StarNet considers the collective influence among all pedestrians together, and thus makes more accurate predictions. In comparison, the other state-of-the-art methods only model the pairwise interactions between pedestrians.
Interestingly, we notice that the test datasets include multiple scenes. Across these scenes, StarNet has the smallest variances of ADE and FDE, which means that StarNet is robust against scene changes.
To assess StarNet qualitatively, we illustrate the prediction results in 4 scenes, as shown in Figure 3. In each scene, the left sub-figure presents the observed trajectories and the predicted trajectories of all pedestrians, and the right sub-figure shows the ground-truth trajectories.
We observe that StarNet can handle complicated interactions among pedestrians. Most predicted trajectories accurately reflect the pedestrians' movements and do not collide with other trajectories. However, there are some failure cases due to the multimodal distribution of future trajectories. For example, in Figure 3(c), the predictions for the blue and green trajectories fail to match the ground truth. We argue that although these predicted trajectories do not match the ground truth, they are still plausible in crowded scenes.
IV-A2 Computational time cost
When deployed in mobile robots and autonomous vehicles, the prediction algorithm needs to be invoked at a high frequency. Hence, the computational time of the prediction algorithm is a crucial property.
As shown in TABLE III, the basic LSTM model is the fastest, since it takes no interactions among pedestrians into consideration. StarNet is the second fastest. Specifically, StarNet is 51 times faster than Social Attention, 7 times faster than Social LSTM, and 3 times faster than Social GAN. Meanwhile, StarNet employs far fewer parameters than the other state-of-the-art methods. StarNet is computationally efficient because the interpersonal interactions among all pedestrians are computed in a single forward propagation, as discussed in Section III.
In this paper, we propose StarNet, a network with a star topology, for pedestrian trajectory prediction. StarNet learns complicated interpersonal interactions and predicts future trajectories with low time complexity. A centralized hub network models the spatio-temporal interactions among pedestrians, and each host network takes full advantage of the spatio-temporal representation to predict one pedestrian's trajectory. We demonstrate that StarNet outperforms state-of-the-art methods in multiple experiments.
-  D. Ferguson, D. Michael, U. Chris and K. Sascha, “Detection, prediction, and avoidance of dynamic obstacles in urban environments,” in 2008 IEEE International Conference on Intelligent Vehicles Symposium (IVS). IEEE, 2008, pp. 1149-1154.
-  Y. Luo, P. Cai, A. Bera, D. Hsu, W. S. Lee and D. Manocha, “Porca: Modeling and planning for autonomous driving among many pedestrians,” IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 3418-3425, 2018.
-  F. Large, D. Vasquez, T. Fraichard and C. Laugier, “Avoiding cars and pedestrians using velocity obstacles and motion prediction,” in 2004 IEEE International Conference on Intelligent Vehicles Symposium (IVS). IEEE, 2004, pp. 375-379.
-  B. D. Ziebart, N. Ratliff, G. Gallagher, C. Mertz, K. Peterson, J. A. Bagnell, M. Hebert, A. K. Dey and S. Srinivasa, “Planning-based prediction for pedestrians,” in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2009, pp. 3931-3936.
-  P. Trautman and A. Krause, “Unfreezing the robot: Navigation in dense, interacting crowds,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2010, pp. 797-803.
-  N. E. D. Toit and J. W. Burdick, “Robot motion planning in dynamic, uncertain environments,” IEEE Transactions on Robotics, vol. 28, no. 1, pp. 101-115, 2012.
-  D. Helbing and P. Molnar, “Social force model for pedestrian dynamics,” Physical Review E, vol. 51, no. 5, p. 4282, 1995.
-  S. Yi, H. Li and X. Wang, “Understanding pedestrian behaviors from stationary crowd groups,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2015, pp. 3488-3496.
-  S. Yi, H. Li and X. Wang, “Pedestrian behavior modeling from stationary crowds with applications to intelligent surveillance,” IEEE Transactions on Image Processing, vol. 25, no. 9, pp. 4354-4368, 2016.
-  B. Zhou, X. Wang and X. Tang, “Understanding collective crowd behaviors: Learning a mixture model of dynamic pedestrian-agents,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2012, pp. 2871-2878.
-  A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, F. Li and S. Savarese, “Social LSTM: Human trajectory prediction in crowded spaces,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016, pp. 961-971.
-  H. Wu, Z. Chen, W. Sun, B. Zheng and W. Wang, “Modeling trajectories with recurrent neural networks,” in 26th International Joint Conference on Artificial Intelligence (IJCAI), 2017, pp. 3083-3090.
-  A. Gupta, J. Johnson, F. Li, S. Savarese and A. Alahi, “Social GAN: Socially acceptable trajectories with generative adversarial networks,” in 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018, pp. 2255-2264.
-  A. Vemula, K. Muelling and J. Oh, “Social attention: Modeling attention in human crowds,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1-7.
-  Y. Xu, Z. Piao and S. Gao, “Encoding crowd interaction with deep neural network for pedestrian trajectory prediction,” in 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018, pp. 5275-5284.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, “Generative adversarial nets,” in 28th Conference on Neural Information Processing Systems (NIPS). 2014, pp. 2672-2680.
-  C. M. Bishop, “Mixture density networks,” Technical Report NCRG/4288, Aston University, Birmingham, UK, 1994.
-  D. Ha and D. Eck, “A neural representation of sketch drawings,” arXiv preprint arXiv:1704.03477, 2017.
-  E. Schmerling, K. Leung, W. Vollprecht and M. Pavone, “Multimodal probabilistic model-based planning for human-robot interaction,” in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1-9.
-  K. Cho, B. V. Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.
-  K. Cho, B. V. Merrienboer, D. Bahdanau and Y. Bengio, “On the properties of neural machine translation: Encoder-decoder approaches,” arXiv preprint arXiv:1409.1259, 2014.
-  D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel and Y. Bengio, “End-to-end attention-based large vocabulary speech recognition,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 4945-4949.
-  C. R. Qi, H. Su, K. Mo and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017, pp. 652-660.
-  S. Pellegrini, A. Ess, K. Schindler and L. V. Gool, “You’ll never walk alone: Modeling social behavior for multi-target tracking,” in 2009 IEEE International Conference on Computer Vision (ICCV). IEEE, 2009, pp. 261-268.
-  A. Lerner, Y. Chrysanthou and D. Lischinski, “Crowds by example,” Computer Graphics Forum, vol. 26, no. 3, pp. 655-664, 2007.
-  D. Varshneya and G. Srinivasaraghavan, “Human trajectory prediction using spatially aware deep attention models,” arXiv preprint arXiv:1705.09436, 2017.
-  T. Fernando, S. Denman, S. Sridharan and C. Fookes, “Soft+hardwired attention: An LSTM framework for human trajectory prediction and abnormal event detection,” arXiv preprint arXiv:1702.05552, 2017.