BiTraP: Bi-directional Pedestrian Trajectory Prediction with Multi-modal Goal Estimation

by   Yu Yao, et al.
University of Michigan

Pedestrian trajectory prediction is an essential task in robotic applications such as autonomous driving and robot navigation. State-of-the-art trajectory predictors use a conditional variational autoencoder (CVAE) with recurrent neural networks (RNNs) to encode observed trajectories and decode multi-modal future trajectories. This process can suffer from accumulated errors over long prediction horizons (>=2 seconds). This paper presents BiTraP, a goal-conditioned bi-directional multi-modal trajectory prediction method based on the CVAE. BiTraP estimates the goal (end-point) of trajectories and introduces a novel bi-directional decoder to improve longer-term trajectory prediction accuracy. Extensive experiments show that BiTraP generalizes to both first-person view (FPV) and bird's-eye view (BEV) scenarios and outperforms state-of-the-art results by 10-50%. We also show that the choice of non-parametric versus parametric target models in the CVAE directly influences the predicted multi-modal trajectory distributions. These results provide guidance on trajectory predictor design for robotic applications such as collision avoidance and navigation systems.



1 Introduction

Understanding and predicting pedestrian movement behaviors is crucial for autonomous systems to safely navigate interactive environments. By correctly forecasting pedestrian trajectories, a robot can plan safe and socially-aware paths in traffic [1, 22, 32, 21] and produce alarms about anomalous motions (e.g., crashes or near collisions) [24, 40, 38, 36, 37]. Early work in pedestrian trajectory prediction often assumed a deterministic future, where only one trajectory is predicted for each person given past observations [16, 12, 35]. However, pedestrians move with a high degree of stochasticity so multiple plausible and distinct future behaviors can exist [11, 10]. Recent studies [15, 20, 2, 13, 31] have shown predicting a distribution of multiple potential future trajectories (i.e., multi-modal prediction) rather than a single best trajectory can more accurately model future motions of pedestrians.

Recurrent neural networks (RNNs), notably long short-term memory networks (LSTMs) and gated recurrent units (GRUs), have demonstrated success in trajectory prediction [22, 9, 39, 26]. However, existing models recurrently predict future trajectories based on previous outputs, so their performance tends to deteriorate rapidly over time (>=560 ms) [10, 5]. We propose to address this problem with a novel goal-conditioned bi-directional trajectory predictor, named BiTraP. BiTraP first estimates future goals (end-points of the future trajectories) of pedestrians and then predicts trajectories by combining forward passing from the current position and backward passing from estimated goals. We believe that predicting goals can improve long-term trajectory prediction, as pedestrians in the real world often have desired goals and plan paths to reach them [23]. Compared to existing goal-conditioned methods [23, 27, 29], where goals were used as an input to a forward decoder, BiTraP takes goals as the starting position of a backward decoder and predicts future trajectories from two directions, thus mitigating accumulated error over longer prediction horizons.

Recently, generative models such as the generative adversarial network (GAN) [11] and conditional variational autoencoder (CVAE) [33, 20] were developed to predict multi-modal distributions of future trajectories. Our BiTraP model predicts multi-modal trajectories based on a CVAE, which learns the target future trajectory distribution conditioned on the observed past trajectories through a stochastic latent variable. The two most common forms of the latent variable follow either a Gaussian distribution or a categorical distribution, resulting in either a non-parametric target distribution [20, 23] or a parametric target distribution model such as a Gaussian Mixture Model (GMM) [13, 31]. There has been limited research on how latent variable distributions impact predicted multi-modal trajectories. To fill this gap, we conducted extensive comparison studies using two variations of our BiTraP method: a non-parametric model using Gaussian latent variables (BiTraP-NP) and a GMM model using categorical latent variables (BiTraP-GMM). We implemented two types of loss functions, best-of-many (BoM) L2 loss [4] and negative log-likelihood (NLL) loss [31], to evaluate different predicted trajectory behaviors (e.g., spread and diversity). We show that latent variable distribution choices are closely related to the diversity of predicted distributions, which provides guidance for selecting trajectory predictors for robot navigation and collision avoidance systems.

The contributions of this work are summarized as follows. First, we developed a novel bi-directional trajectory predictor (BiTraP) based on multi-modal goal estimation and show that it offers significant improvements in trajectory prediction performance, especially for longer (>=2 second) prediction horizons. Second, we studied parametric versus non-parametric target modeling methods by presenting two variations of our model, BiTraP-NP and BiTraP-GMM, and compared their influence on the diversity of the predicted distribution. Extensive experiments with both first-person view and bird's eye view datasets show the effectiveness of BiTraP models in different domains.

2 Related Work

Our BiTraP model consists of two parts: a multi-modal goal estimator and a goal-conditioned bi-directional trajectory predictor. This section describes related work in multi-modal trajectory prediction and goal-conditioned prediction.

CVAE-based Approaches for Multi-modal Trajectory Prediction. Probabilistic approaches, particularly conditional variational autoencoder (CVAE) based models, have been developed for multi-modal trajectory prediction. Different from GANs [11, 17], CVAEs can explicitly learn the form of a target distribution conditioned on past observations by learning the latent distribution from which it samples. Some CVAE methods assume the target trajectory follows a non-parametric (NP) distribution and produce multi-modal predictions by sampling from a Gaussian latent space. Lee et al. [20] first used a CVAE for multi-modal trajectory prediction by incorporating Gaussian latent space sampling into a long short-term memory encoder-decoder (LSTM-ED) model. The CVAE with LSTM components has since been used in many applications [7, 14, 6]. Other CVAE-based methods assume parametric trajectory distributions. Ivanovic et al. [13] assumed the target trajectory follows a Gaussian Mixture Model (GMM) and designed the Trajectron network to predict GMM parameters using a spatio-temporal graph. Trajectron++ [31] extended the Trajectron to account for dynamics and heterogeneous input data. Our work extends existing CVAE models to include goal estimation and shows improved multi-modal prediction results. It also provides novel insights through comparisons between CVAE target distributions (NP and GMM).

Trajectory Conditioned on Goals. Incorporating goals has been shown to improve trajectory prediction. Rehder et al. [27] proposed a particle-filter based method to estimate the goal distribution as a prior for trajectory prediction. We drew inspiration from [28], which computed forward and backward rewards based on the current position and the goal, and planned paths using inverse reinforcement learning (IRL). Our work is distinct due to its bi-directional temporal propagation and its integration with a CVAE to achieve multi-modal prediction. Rhinehart et al. [29] estimated multi-modal semantic actions as goals and planned conditioned trajectories using imitative models. Deo et al. [8] used IRL to estimate goal states and fused the results with past trajectory encodings to generate predictions. Most recently, Mangalam et al. [23] designed PECNet, which showed state-of-the-art results on BEV trajectory prediction datasets. However, PECNet only concatenated past trajectory encodings and end-point encodings, which we believe does not fully take advantage of goal information. We instead designed a bi-directional trajectory decoder in which current trajectory information is passed forward to the end-points (goals) and goals are recurrently propagated back to the current position. Experimental results show that our goal estimation helps generate more accurate trajectories.

3 BiTraP: Bi-directional Trajectory Prediction with Goal Estimation

Our BiTraP model performs goal-conditioned multi-modal bi-directional trajectory prediction in either first-person view (FPV) or bird's eye view (BEV). Let X_t = [x_{t-tau+1}, ..., x_t] denote the observed past trajectory at time t, where x_t is the bounding box location and size in pixels for FPV [39, 26] and the position in meters for BEV [31]. Given X_t, we first estimate the goal G_t of the person and then predict the future trajectory Y_t = [y_{t+1}, ..., y_{t+delta}], where tau and delta are the observation and prediction horizons, respectively. We define the goal G_t = y_{t+delta} as the future trajectory endpoint, which is given in training and unknown in testing. We adopt a CVAE model to realize multi-modal goal and trajectory prediction. BiTraP contains four sub-modules: a conditional prior network p_theta(Z|X_t) to model the latent variable Z from observations, a recognition network q_phi(Z|X_t, Y_t) to capture dependencies between Z and Y_t, a goal generation network p_omega(G_t|X_t, Z), and a trajectory generation network p_psi(Y_t|X_t, G_t, Z), where theta, phi, omega and psi represent network parameters. Either parametric or non-parametric models can be used to design the generation networks p_omega and p_psi. Non-parametric models do not assume the distribution format of the target Y_t but learn it implicitly by learning the distribution of Z. Parametric models assume a known distribution format for Y_t and predict the distribution parameters. We design non-parametric and parametric models in Sections 3.1 and 3.2, respectively, and explain the loss functions used to train these models in Sections 3.3 and 3.4.

3.1 BiTraP with Non-parametric (NP) Distribution

BiTraP-NP is built on a standard recurrent neural network encoder-decoder (RNN-ED) based CVAE trajectory predictor as in [20, 23, 4, 14], except that it first predicts the goal and then predicts trajectories leveraging the estimated goal. Following previous work, we assume a Gaussian latent variable Z and a non-parametric target distribution format. Fig. 1 shows the network architecture of BiTraP-NP.

Figure 1: Overview of our BiTraP-NP network. Red, blue and black arrows show processes that appear in training only, inference only, and both training and inference, respectively.

Encoder and goal estimation. First, the observed trajectory X_t is processed by a gated-recurrent unit (GRU) encoder network to obtain an encoded feature vector h_t. In training, the ground truth target Y_t is encoded by another GRU, yielding h_{Y_t}. The recognition network takes h_t and h_{Y_t} and predicts the distribution mean mu_q and covariance Sigma_q, which capture dependencies between the observation and the ground truth target. The prior network assumes no knowledge about the target and predicts mu_p and Sigma_p from h_t only. A Kullback-Leibler divergence (KLD) loss between N(mu_q, Sigma_q) and N(mu_p, Sigma_p) is optimized so that the dependency between X_t and Y_t is implicitly learned by the prior network. The latent variable Z is sampled from N(mu_q, Sigma_q) and concatenated with h_t to predict multi-modal goals with the goal generation network. In testing, we directly draw multiple samples of Z from N(mu_p, Sigma_p) and concatenate them with h_t to predict the estimated goals G_hat_t. We use 3-layer multi-layer perceptrons (MLPs) for the prior, recognition and goal generation networks.
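The Gaussian latent sampling and KLD term above can be sketched in a few lines. The following NumPy snippet is an illustrative sketch, not the authors' implementation (all function names are ours): it shows the closed-form KL divergence between the recognition and prior diagonal Gaussians and the reparameterized draws that yield multi-modal goal samples downstream.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL(q || p) between diagonal Gaussians, summed over latent dims."""
    return 0.5 * np.sum(logvar_p - logvar_q
                        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
                        - 1.0, axis=-1)

def sample_latent(mu, logvar, n_samples, rng):
    """Reparameterized draws z = mu + sigma * eps; each z yields one predicted goal."""
    eps = rng.standard_normal((n_samples, mu.shape[-1]))
    return mu + np.exp(0.5 * logvar) * eps

# toy check: identical prior and recognition distributions give zero KL
mu, logvar = np.zeros(32), np.zeros(32)
z = sample_latent(mu, logvar, 20, rng)   # 20 samples -> 20 multi-modal goals downstream
print(z.shape)
```

In the full model each z would be concatenated with the encoder feature h_t and passed through the goal generation MLP; here we only show the sampling step.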

Trajectory Decoder. Predicted goals are used as inputs to a bi-directional trajectory generation network, the trajectory decoder, to predict multi-modal trajectories. BiTraP's decoder contains forward and backward RNNs. The forward RNN is similar to a regular RNN decoder (Eq. (1)) except that its output is not transformed to trajectory space. The backward RNN is initialized from the encoder hidden state h_t. It takes the estimated goal G_hat_t as its initial input (Eq. (2)) and propagates from time t+delta to t+1, so the backward hidden state is updated from the goal to the current location. Forward and backward hidden states for the same time step are concatenated to predict the final trajectory waypoint at that time (Eq. (3)). These steps can be formulated as

    h_f^{t+1} = GRU_f(h_f^t),                                              (1)
    h_b^t = GRU_b(h_b^{t+1}, W_i^b Y_b^{t+1} + b_i^b),  Y_b^{t+delta} = G_hat_t,  (2)
    Y_hat^t = W_o (h_f^t ++ h_b^t) + b_o,                                  (3)

where the subscripts f, b, i and o indicate "forward", "backward", "input" and "output" respectively, ++ denotes concatenation, and h_f^t and h_b^{t+delta} are initialized by passing the encoder hidden state h_t through two different fully-connected networks.
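To make the forward/backward wiring concrete, here is a minimal NumPy sketch of a bi-directional decoder. Weights are random and the input feeding is simplified relative to the trained model (names like `GRUCell` and `bidirectional_decode` are ours), so it illustrates only the structure: a forward pass from the encoder state, a backward pass seeded with the goal, and waypoints decoded from concatenated hidden states.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal NumPy GRU cell with random weights (structure only, untrained)."""
    def __init__(self, in_dim, hid_dim, rng):
        s = 1.0 / np.sqrt(hid_dim)
        shape = (hid_dim, in_dim + hid_dim)
        self.Wz, self.Wr, self.Wh = (rng.uniform(-s, s, shape) for _ in range(3))
    def __call__(self, x, h):
        xh = np.concatenate([x, h])
        z, r = sigmoid(self.Wz @ xh), sigmoid(self.Wr @ xh)
        h_new = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_new

def bidirectional_decode(h_enc, goal, horizon, hid=64, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    fwd, bwd = GRUCell(2, hid, rng), GRUCell(2, hid, rng)
    W_out = rng.uniform(-0.1, 0.1, (2, 2 * hid))
    # forward RNN: rolls its hidden state forward from the encoder state
    h_f, fwd_states = h_enc.copy(), []
    for _ in range(horizon):
        h_f = fwd(np.zeros(2), h_f)
        fwd_states.append(h_f)
    # backward RNN: seeded with the estimated goal, runs from t+delta back to t+1
    h_b, bwd_states, x_b = h_enc.copy(), [], goal
    for _ in range(horizon):
        h_b = bwd(x_b, h_b)
        bwd_states.append(h_b)
        x_b = np.zeros(2)   # simplified: the real model feeds back its own outputs
    bwd_states.reverse()
    # each waypoint is decoded from concatenated forward/backward hidden states
    return np.stack([W_out @ np.concatenate([hf, hb])
                     for hf, hb in zip(fwd_states, bwd_states)])

traj = bidirectional_decode(np.zeros(64), np.array([5.0, 3.0]), horizon=12)
print(traj.shape)
```

The key design choice visible here is that goal information enters through the backward hidden states, so even the earliest waypoints are conditioned on the end-point rather than only on accumulated forward predictions.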

3.2 BiTraP with GMM Distribution

Figure 2: Latent space sampling and decoder modules of BiTraP-GMM. Each ellipse shows one of the GMM components at each timestep. The rest of the network is the same as BiTraP-NP in Fig. 1.

Parametric models predict trajectory distribution parameters instead of trajectory coordinates. BiTraP-GMM is our parametric variation of BiTraP, assuming a GMM for the trajectory goal and for each way-point [13, 31]. Let p(Y_t) = sum_{k=1}^{K} pi_k N(Y_t; mu_k, Sigma_k) denote a K-component GMM at each time step, where each Gaussian component can be considered the distribution of one trajectory modality. The mixture component weights pi_k sum to one and thus form a categorical distribution; each pi_k indicates the probability (confidence) that a person's motion belongs to that modality. We design the latent vector Z as a categorical (K-way) variable parameterized by the GMM component weights pi_k rather than by separately-computed parameters. Similar to BiTraP-NP, we use three 3-layer MLPs for the prior, recognition and goal generation networks, and a bi-directional RNN decoder for the trajectory generation network. Instead of directly predicting trajectory coordinates, the generation networks of BiTraP-GMM estimate the mean mu_k and covariance Sigma_k of the k-th Gaussian component at each time step. In training, we sample one Z from each category to ensure all trajectory modalities are trained. In testing, we sample Z from the categorical distribution over pi, so it is more probable to sample from high-confidence trajectory modalities.
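The categorical latent sampling described above can be illustrated with a toy example. This NumPy sketch is our own illustration with made-up weights: at inference time a modality k is drawn from the component weights pi, so high-confidence modalities are sampled more often; in training an explicit k can be supplied so every modality receives gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

# one K=3 component GMM at a single timestep: weights pi (the categorical latent),
# per-component means mu (K, 2) and standard deviations sigma (K, 2)
pi = np.array([0.6, 0.3, 0.1])
mu = np.array([[0.0, 1.0], [2.0, 2.0], [-1.0, 0.5]])
sigma = np.full((3, 2), 0.1)

def sample_waypoint(pi, mu, sigma, rng, k=None):
    """Training: pass an explicit component k so every modality gets trained.
    Testing (k=None): draw k ~ Cat(pi), favouring high-confidence modalities."""
    if k is None:
        k = rng.choice(len(pi), p=pi)
    return mu[k] + sigma[k] * rng.standard_normal(2), k

counts = np.zeros(3)
for _ in range(1000):
    _, k = sample_waypoint(pi, mu, sigma, rng)
    counts[k] += 1
print(counts.argmax())   # the highest-weight modality dominates the samples
```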

3.3 Residual Prediction and BoM Loss for BiTraP-NP

Instead of directly predicting future locations [26] or integrating from predicted future velocities [31], BiTraP-NP predicts the change with respect to the current location, i.e., the residual Y_t - x_t. There are two advantages of residual prediction. First, it ensures the model predicts a trajectory starting from the current location, providing a smaller initial loss than predicting locations from scratch. Second, the residual target can be less noisy than the velocity target, since trajectory annotations are not always accurate. The standard CVAE loss includes an NLL loss of the predicted distribution, which is not applicable to NP methods due to their unknown distribution format. L2 loss between predictions and targets can be used as a substitute [20]. To further encourage diversity in multi-modal prediction, we use the best-of-many (BoM) L2 loss as in [4]. The final loss function for BiTraP-NP is a combination of the goal L2 loss, the trajectory L2 loss and the KL-divergence loss between the prior and recognition networks, written as

    L_NP = min_{i in [1,N]} ||G_hat_t^i - G_t||^2 + min_{i in [1,N]} sum_{tau=t+1}^{t+delta} ||Y_hat_tau^i - Y_tau||^2 + D_KL(q_phi(Z|X_t, Y_t) || p_theta(Z|X_t)),   (4)

where G_hat_t^i and Y_hat_tau^i are the i-th of N sampled predicted goals and trajectory waypoints, expressed with respect to the current position x_t.
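The trajectory term of the loss can be sketched as below, under our own simplified shapes: residuals are added to the current position to recover absolute coordinates, and only the closest of the N sampled trajectories is penalized (best-of-many L2).

```python
import numpy as np

def bom_l2_loss(pred_residuals, x_t, gt_future):
    """Best-of-many L2 loss over N sampled trajectories.
    pred_residuals: (N, T, 2) offsets w.r.t. the current position x_t (2,)
    gt_future:      (T, 2) ground-truth waypoints."""
    pred = x_t + pred_residuals                      # residual -> absolute coordinates
    per_sample = np.sum((pred - gt_future) ** 2, axis=(1, 2))
    return per_sample.min()                          # penalize only the best sample

x_t = np.array([1.0, 1.0])
gt = np.tile([2.0, 2.0], (5, 1))                     # constant ground truth, T=5
samples = np.zeros((3, 5, 2))
samples[0] = 1.0                                     # sample 0 lands exactly on gt
print(bom_l2_loss(samples, x_t, gt))                 # best sample has zero error
```

Taking the minimum rather than the mean is what lets the remaining samples stay diverse: only the best hypothesis is pulled toward the ground truth.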

3.4 Bi-directional NLL Loss for BiTraP-GMM

Similar to [31], our BiTraP-GMM models the pedestrian velocity distribution as a GMM at each time step. The velocity GMM is then integrated forward to obtain the GMM distribution of trajectory waypoints, as shown by the blue blocks in Fig. 2. We assume linear pedestrian dynamics and use a single integrator as in Eq. (5). The loss function is then the summation of the negative log-likelihood (NLL) of the ground truth future waypoints over the prediction horizon, formulated as

    NLL_f = - sum_{tau=t+1}^{t+delta} log p(Y_tau | mu_tau^f, Sigma_tau^f, pi_tau),   mu_tau^f = x_t + sum_{s=t+1}^{tau} mu_s^v dt,   (5)

where mu_s^v, Sigma_s^v and pi_s are the velocity GMM parameters at time s, the superscript f indicates location GMM parameters obtained from forward integration, and p(.) is the probability density function. Such an NLL_f emphasizes earlier waypoints along the prediction horizon, because a waypoint at time tau is used in the integration results over [tau, t+delta], while later waypoints are not used when computing p(Y_tau). This goes against our proposed idea of leveraging a bi-directional temporal model. Therefore, we also compute a bi-directional NLL loss with reverse integration from the goal, formulated as

    NLL_b = - sum_{tau=t+1}^{t+delta} log p_b(Y_tau | mu_tau^b, Sigma_tau^b, pi_tau),   (6)

where p_b(.) is the backward probability density function and the superscript b indicates backward location GMM parameters obtained by integrating velocities backward from the estimated goal. The final loss function for BiTraP-GMM can be written as

    L_GMM = NLL_G + NLL_f + NLL_b + D_KL(q_phi(Z|X_t, Y_t) || p_theta(Z|X_t)),   (7)

where the first term is the NLL loss of the goal estimation, NLL_f and NLL_b are computed from forward and backward integration, and the last term is the KL-divergence similar to Eq. (4).
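The forward and backward single-integrator roll-ups behind the two NLL terms can be sketched as below. This is a NumPy illustration with a single Gaussian component and diagonal variance (function names are ours, not the authors'); with consistent velocity means and an exact goal, both integration directions recover the same waypoint means.

```python
import numpy as np

def integrate_forward(x_t, vel, dt):
    """Forward single integrator: waypoint means rolled out from the current position."""
    return x_t + dt * np.cumsum(vel, axis=0)

def integrate_backward(goal, vel, dt):
    """Backward single integrator: waypoint means rolled back from the goal."""
    steps = dt * np.cumsum(vel[:0:-1], axis=0)            # vel[T-1], ..., vel[1]
    return np.concatenate([goal[None], goal[None] - steps])[::-1]

def gauss_nll(x, mu, var):
    """Per-step NLL of x under a diagonal Gaussian (one GMM component)."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

dt, T = 0.4, 12
vel = np.tile([1.0, 0.5], (T, 1))                         # predicted mean velocities
x_t = np.zeros(2)
mu_f = integrate_forward(x_t, vel, dt)                    # forward waypoint means
mu_b = integrate_backward(mu_f[-1], vel, dt)              # goal taken as last waypoint
loss = gauss_nll(mu_f, mu_f, 0.1).sum() + gauss_nll(mu_f, mu_b, 0.1).sum()
print(np.allclose(mu_f, mu_b))   # True: consistent velocities + exact goal agree
```

In training the two directions generally disagree, and penalizing both pulls the later waypoints (which dominate the backward roll-up) as strongly as the earlier ones.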

4 Experiments and Results

In this section, we empirically evaluate BiTraP-NP and BiTraP-GMM models on both first-person view (FPV) and bird’s eye view (BEV) trajectory prediction datasets. We also provide a comparative study and discussion on the effects of model and loss selection.

Datasets. Two FPV datasets, Joint Attention for Autonomous Driving (JAAD)  [18] and Pedestrian Intention Estimation (PIE)  [26], and two benchmark BEV datasets, ETH [25] and UCY [19], were used in our experiments. JAAD contains 2,800 pedestrian trajectories captured from dash cameras annotated at 30Hz. PIE contains 1,800 pedestrian trajectories also annotated at 30Hz, with longer trajectories and more comprehensive annotations such as semantic intention, ego-motion and neighbor objects. ETH-UCY datasets contain five sub-datasets captured from down-facing surveillance cameras in four different scenes with 1,536 pedestrian trajectories annotated at 2.5Hz.

Implementation Details. We used the standard training/testing splits of JAAD and PIE as in [26]. A 0.5-second (15 frame) observation length and 1.5-second (45 frame) prediction horizon were used for evaluation. For ETH-UCY, a standard leave-one-out approach based on scene was used per [11, 31]. We observed trajectories for 3.2 seconds (8 frames) and predicted the paths for the next 4.8 seconds (12 frames). We used hidden unit size 256 for all encoders and decoders in BiTraP across all datasets. All models were trained with batch size 128, learning rate (LR) 0.001, and an exponential LR scheduler [31] on a single NVIDIA TITAN XP GPU.

4.1 Experiments on JAAD and PIE Datasets


We compare our results against the following baseline models: 1) a linear Kalman filter, 2) a vanilla LSTM model, 3) a Bayesian LSTM model (B-LSTM) [3], 4) PIE_traj, an attentive RNN encoder-decoder model [26], 5) PIE_full, a multi-stream attentive RNN model that injects ego-motion and semantic intention streams into PIE_traj [26], and 6) FOL-X [39], a multi-stream RNN encoder-decoder model using residual prediction. We also conducted an ablation study with a deterministic variation of our model (BiTraP-D), in which the multi-modal CVAE module is removed.

Evaluation Metrics. Following [39, 26, 3], our BiTraP models were evaluated using: 1) bounding box Average Displacement Error (ADE), 2) box center ADE (C-ADE) and 3) box center Final Displacement Error (C-FDE), all in squared pixels. For our multi-modal BiTraP-NP and BiTraP-GMM, we compute best-of-20 results (the minimum ADE and FDE from 20 randomly-sampled trajectories), following [11, 31, 30]. We also report the Kernel Density Estimation-based Negative Log-Likelihood (KDE-NLL) metric for BiTraP-NP and BiTraP-GMM, which evaluates the NLL of the ground truth under a distribution fitted by a KDE on trajectory samples from each prediction model [31, 34]. For all metrics, lower values are better.
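The best-of-20 displacement metrics can be computed as in this short sketch (our own helper, assuming predictions are stacked as (K, T, 2) arrays):

```python
import numpy as np

def best_of_k_ade_fde(preds, gt):
    """preds: (K, T, 2) sampled trajectories; gt: (T, 2) ground truth.
    Returns the minimum ADE and minimum FDE over the K samples."""
    disp = np.linalg.norm(preds - gt, axis=-1)   # (K, T) per-step displacement
    return disp.mean(axis=1).min(), disp[:, -1].min()

gt = np.stack([np.linspace(0, 11, 12), np.zeros(12)], axis=1)   # straight path, T=12
preds = np.repeat(gt[None], 20, axis=0)
preds[1:] += 0.5                                 # only sample 0 matches exactly
ade, fde = best_of_k_ade_fde(preds, gt)
print(ade, fde)                                  # best-of-20 -> zero error here
```

KDE-NLL additionally fits a kernel density estimate over the K samples and scores the ground truth under it, so unlike best-of-K it rewards concentrated, well-calibrated sample sets rather than a single lucky sample.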

Results. Table 1 presents trajectory prediction results on the JAAD and PIE datasets. Our deterministic BiTraP-D model shows consistently lower displacement errors across prediction horizons than baseline methods such as PIE_traj and FOL-X, indicating that our goal estimation and bi-directional prediction modules are effective. The BiTraP-D model, based only on past trajectory information, also outperforms the state-of-the-art PIE_full, which requires additional ego-motion and semantic intention annotations. Table 1 also shows that the non-parametric multi-modal method BiTraP-NP performs better on displacement metrics, while the parametric method BiTraP-GMM performs better on the NLL metric. This difference reflects the objectives of these methods: BiTraP-NP generates diverse trajectories, one of which is optimized to have minimum displacement error, while BiTraP-GMM generates trajectory distributions that more closely match the ground truth trajectory.

Methods          JAAD: ADE (0.5/1.0/1.5s)  C-ADE (1.5s)  C-FDE (1.5s)  NLL    PIE: ADE (0.5/1.0/1.5s)  C-ADE (1.5s)  C-FDE (1.5s)  NLL
Linear [26]      233/857/2303  1565  6111  -        123/477/1365  950  3983  -
LSTM [26]        289/569/1558  1473  5766  -        172/330/911   837  3352  -
B-LSTM [3]       159/539/1535  1447  5615  -        101/296/855   811  3259  -
FOL-X [39]       147/484/1374  1290  4924  -        47/183/584    546  2303  -
PIE_traj [26]    110/399/1280  1183  4780  -        58/200/636    596  2477  -
PIE_full [26]    -             -     -     -        -/-/556       520  2162  -
BiTraP-D         93/378/1206   1105  4565  -        41/161/511    481  1949  -
BiTraP-NP (20)   38/94/222     177   565   18.9     23/48/102     81   261   16.5
BiTraP-GMM (20)  153/250/585   501   998   16.0     38/90/209     171  368   13.8
Table 1: Results on JAAD and PIE datasets (displacement errors in squared pixels). The upper rows show deterministic methods, including our ablation model BiTraP-D; the bottom two rows show our proposed multi-modal methods. NLL is not reported for deterministic methods since they predict single trajectories. Lower values are better.

Fig. 3 shows trajectory prediction results on sample frames from the PIE dataset. We observed that when a pedestrian intends to cross the street or change direction, the multi-modal BiTraP methods yield higher accuracy and more reasonable predictions than the deterministic variation. For example, as shown in Fig. 3(b), the deterministic BiTraP-D model (top row) can fail to predict the trajectory and the end-goal when a pedestrian intends to cross the street in the future, whereas the multi-modal BiTraP-NP model (bottom row) successfully predicts multiple possible future trajectories, including one where the pedestrian crosses the street, matching the ground truth intention. Similar observations can be made in other frames. This result indicates that multi-modal BiTraP-NP can predict multiple possible futures, which could help a mobile robot or a self-driving car safely yield to pedestrians. Although BiTraP-NP samples diverse trajectories, it still predicts distributions with high likelihood around the ground truth targets and low likelihood in other locations, per Fig. 3(b)-3(d).

Figure 3: Qualitative results of deterministic (top row) vs. multi-modal (bottom row) bi-directional prediction. Past (dark blue), ground truth future (red) and predicted future (green) trajectories and final bounding box locations are plotted. In the bottom row, each BiTraP-NP likelihood heatmap is computed by fitting a KDE over sampled trajectories; orange indicates higher probability.

4.2 Experiments on ETH-UCY Datasets

Baselines. We compare our methods with five multi-modal baseline methods: S-GAN [11], SoPhie [30], S-BiGAT [17], PECNet [23] and Trajectron++ [31]. PECNet and Trajectron++ are the most recent. PECNet is a goal-conditioned method using a non-parametric distribution (thus directly comparable to our BiTraP-NP), while Trajectron++ uses a GMM trajectory distribution directly comparable to our BiTraP-GMM. Note that all baselines incorporate social information, while our methods fully focus on investigating trajectory modeling and do not require social information as input.

Evaluation Metrics. Following [11, 23, 30], we used best-of-20 trajectory ADE and FDE in meters as evaluation metrics. We also report Average and Final KDE-NLL (ANLL and FNLL) metrics as a supplement [34, 31] to evaluate the predicted trajectory and goal distribution.

Results. Table 2 shows the best-of-20 ADE/FDE results across all methods. We observed that BiTraP-NP outperforms the state-of-the-art goal-based method (PECNet) by a large margin (10-50%), demonstrating the effectiveness of our bi-directional decoder module. BiTraP-NP also obtains lower ADE/FDE on most scenes compared with Trajectron++. Our BiTraP-GMM model was trained using NLL loss, so it shows higher ADE/FDE results compared with BiTraP-NP. This is consistent with our FPV dataset observations in Section 4.1. Nevertheless, BiTraP-GMM still achieves similar or better results than PECNet and Trajectron++.

Datasets S-GAN [11] SoPhie [30] S-BiGAT [17] PECNet [23] Trajectron++ [31] BiTraP-NP BiTraP-GMM
ETH 0.81/1.52 0.70/1.43 0.69/1.29 0.54/0.87 0.43/0.86 0.37/0.69 0.40/0.74
Hotel 0.72/1.61 0.76/1.67 0.49/1.01 0.18/0.24 0.12/0.19 0.12/0.21 0.13/0.22
Univ 0.60/1.26 0.54/1.24 0.55/1.32 0.35/0.60 0.22/0.43 0.17/0.37 0.19/0.40
Zara1 0.34/0.69 0.30/0.63 0.30/0.62 0.22/0.39 0.17/0.32 0.13/0.29 0.14/0.28
Zara2 0.42/0.84 0.38/0.78 0.36/0.75 0.17/0.30 0.12/0.25 0.10/0.21 0.11/0.22
Average 0.58/1.18 0.54/1.15 0.48/1.00 0.29/0.48 0.21/0.39 0.18/0.35 0.19/0.37
Table 2: Trajectory prediction results (ADE/FDE) on BEV ETH-UCY datasets. Lower is better.

To further evaluate predicted trajectory distributions, we report KDE-NLL results in Table 3. As shown, BiTraP-GMM outperforms Trajectron++ with lower ANLL and FNLL on ETH, Univ, Zara1 and Zara2 datasets. On Hotel, Trajectron++ achieves lower NLL values which may be due to the possible higher levels of inter-personal interactions than in other scenes. We observed improved ANLL/FNLL on Hotel (-1.88/0.27) when combining the BiTraP-GMM decoder with the interaction encoder in [31], consistent with our hypothesis.

Datasets S-GAN [11] Trajectron++ [31] BiTraP-NP BiTraP-GMM
ETH 15.70/- 1.31/4.28 3.80/3.79 0.96/3.55
Hotel 8.10/- -1.94/0.25 -0.41/1.26 -1.60/0.51
Univ 2.88/- -1.13/2.13 -0.84/2.15 -1.19/2.03
Zara1 1.36/- -1.41/1.83 -0.81/1.85 -1.51/1.56
Zara2 0.96/- -2.53/0.50 -1.89/1.31 -2.54/0.38
Table 3: Average-NLL/Final-NLL (ANLL/FNLL) results on ETH-UCY datasets. Lower is better.

We also computed KDE-NLL results for both Trajectron++ and BiTraP-GMM at each time step to analyze how BiTraP affects short-term and longer-term (up to 4.8 seconds) prediction results. Per Fig. 4, BiTraP-GMM outperforms Trajectron++ at longer prediction horizons (after 1.2 seconds on ETH, Univ, Zara1, and Zara2). This shows that backward passing from the goal helps reduce error over longer prediction horizons.

Fig. 5 shows qualitative examples of our predicted trajectories using the BiTraP-NP and BiTraP-GMM models. As shown, BiTraP-NP (top row) generates future possible trajectories with a wider spread (more diverse), while BiTraP-GMM generates more compact distributions. This is consistent with our quantitative evaluations as reported in Table 3, where the lower NLL results of BiTraP-GMM correspond to more compact trajectory distributions. To intuitively present model performance in collision avoidance and robot navigation, we conducted a robot path simulation experiment on ETH-UCY dataset and report collision related metrics in the supplementary material.

Figure 4: KDE-NLL results on the ETH-UCY dataset per timestep up to 4.8 seconds.
(a) Hotel
(b) Univ
(c) Zara2
(d) ETH
Figure 5: Visualizations of BiTraP-NP (first row) and BiTraP-GMM (second row). Twenty sampled future trajectories are plotted. For BiTraP-GMM, we also plot end-point GMM distributions as colored ellipses. Ellipse size indicates the component covariance and transparency indicates the component weight pi_k.

5 Conclusion

We presented BiTraP, a bi-directional multi-modal trajectory prediction method conditioned on goal estimation. We demonstrated that our proposed model can achieve state-of-the-art results for pedestrian trajectory prediction on both first-person view and bird's eye view datasets. The current BiTraP models, with only observed trajectories as inputs, already surpass previous methods which required additional ego-motion, semantic intention, and/or social information. By conducting a comparative study between non-parametric (BiTraP-NP) and parametric (BiTraP-GMM) models, we observed that the choice of latent variable affects the diversity of the predicted target distributions of future trajectories. We hypothesized that such a difference in predicted distribution directly influences the collision rate in robot path planning, and showed that collision metrics can be used to guide predictor selection in real-world applications. For future work, we plan to incorporate scene semantics and social components to further boost the performance of each module. We are also interested in using estimated goals and predicted trajectories to infer and interpret pedestrian intention and actions.


This work was supported by a grant from Ford Motor Company via the Ford-UM Alliance under award N028603. This material is based upon work supported by the Federal Highway Administration under contract number 693JJ319000009. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the Federal Highway Administration.


  • [1] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese (2016) Social lstm: human trajectory prediction in crowded spaces. In CVPR, Cited by: §1.
  • [2] C. Anderson, X. Du, R. Vasudevan, and M. Johnson-Roberson (2019) Stochastic sampling simulation for pedestrian trajectory prediction. arXiv preprint arXiv:1903.01860. Cited by: §1.
  • [3] A. Bhattacharyya, M. Fritz, and B. Schiele (2018) Long-term on-board prediction of people in traffic scenes under uncertainty. In CVPR, Cited by: §4.1, §4.1, Table 1.
  • [4] A. Bhattacharyya, B. Schiele, and M. Fritz (2018) Accurate and diverse sampling of sequences based on a “best of many” sample objective. In CVPR, Cited by: §1, §3.1, §3.3.
  • [5] J. Bütepage, H. Kjellström, and D. Kragic (2018) Anticipating many futures: online human motion prediction and generation for human-robot interaction. In ICRA, Cited by: §1.
  • [6] C. Choi, A. Patil, and S. Malla (2019) Drogon: a causal reasoning framework for future trajectory forecast. arXiv preprint arXiv:1908.00024. Cited by: §2.
  • [7] N. Deo and M. M. Trivedi (2018) Multi-modal trajectory prediction of surrounding vehicles with maneuver based lstms. In IV, Cited by: §2.
  • [8] N. Deo and M. M. Trivedi (2020) Trajectory forecasts in unknown environments conditioned on grid-based plans. arXiv preprint arXiv:2001.00735. Cited by: §2.
  • [9] X. Du, R. Vasudevan, and M. Johnson-Roberson (2019) Bio-lstm: a biomechanically inspired recurrent neural network for 3-d pedestrian pose and gait prediction. IEEE Robotics and Automation Letters. Cited by: §1.
  • [10] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik (2015) Recurrent network models for human dynamics. In ICCV, Cited by: §1, §1.
  • [11] A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi (2018) Social GAN: socially acceptable trajectories with generative adversarial networks. In CVPR, Cited by: §1, §1, §2, §4.1, §4.2, §4.2, Table 2, Table 3, §4.
  • [12] D. Helbing and P. Molnar (1995) Social force model for pedestrian dynamics. Physical review E 51 (5), pp. 4282. Cited by: §1.
  • [13] B. Ivanovic and M. Pavone (2019) The trajectron: probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs. In ICCV, Cited by: §1, §1, §2, §3.2, Table 3.
  • [14] B. Ivanovic, E. Schmerling, K. Leung, and M. Pavone (2018) Generative modeling of multimodal multi-human behavior. In IROS, Cited by: §2, §3.1.
  • [15] H. O. Jacobs, O. K. Hughes, M. Johnson-Roberson, and R. Vasudevan (2017) Real-time certified probabilistic pedestrian forecasting. IEEE Robotics and Automation Letters 2 (4), pp. 2064–2071. Cited by: §1.
  • [16] R. E. Kalman (1960) A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82 (1), pp. 35–45. Cited by: §1.
  • [17] V. Kosaraju, A. Sadeghian, R. Martín-Martín, I. Reid, H. Rezatofighi, and S. Savarese (2019) Social-BiGAT: multimodal trajectory forecasting using Bicycle-GAN and graph attention networks. In NIPS, Cited by: §2, §4.2, Table 2.
  • [18] I. Kotseruba, A. Rasouli, and J. K. Tsotsos (2016) Joint attention in autonomous driving (JAAD). arXiv preprint arXiv:1609.04741. Cited by: §4.
  • [19] L. Leal-Taixé, M. Fenzi, A. Kuznetsova, B. Rosenhahn, and S. Savarese (2014) Learning an image-based motion context for multiple people tracking. In CVPR, Cited by: §4.
  • [20] N. Lee, W. Choi, P. Vernaza, C. B. Choy, P. H. Torr, and M. Chandraker (2017) DESIRE: distant future prediction in dynamic scenes with interacting agents. In CVPR, Cited by: §1, §1, §2, §3.1, §3.3.
  • [21] N. Li, Y. Yao, I. Kolmanovsky, E. Atkins, and A. Girard (2019) Game-theoretic modeling of multi-vehicle interactions at uncontrolled intersections. arXiv preprint arXiv:1904.05423. Cited by: §1.
  • [22] J. Liang, L. Jiang, J. C. Niebles, A. G. Hauptmann, and L. Fei-Fei (2019) Peeking into the future: predicting future person activities and locations in videos. In CVPR, Cited by: §1, §1.
  • [23] K. Mangalam, H. Girase, S. Agarwal, K. Lee, E. Adeli, J. Malik, and A. Gaidon (2020) It is not the journey but the destination: endpoint conditioned trajectory prediction. arXiv preprint arXiv:2004.02025. Cited by: §1, §1, §2, §3.1, §4.2, §4.2, Table 2.
  • [24] R. Morais, V. Le, T. Tran, B. Saha, M. Mansour, and S. Venkatesh (2019) Learning regularity in skeleton trajectories for anomaly detection in videos. In CVPR, Cited by: §1.
  • [25] S. Pellegrini, A. Ess, K. Schindler, and L. Van Gool (2009) You’ll never walk alone: modeling social behavior for multi-target tracking. In ICCV, Cited by: §4.
  • [26] A. Rasouli, I. Kotseruba, T. Kunic, and J. K. Tsotsos (2019) PIE: a large-scale dataset and models for pedestrian intention estimation and trajectory prediction. In ICCV, Cited by: §1, §3.3, §3, §4.1, Table 1, §4, §4.
  • [27] E. Rehder and H. Kloeden (2015) Goal-directed pedestrian prediction. In ICCVW, Cited by: §1, §2.
  • [28] E. Rehder, F. Wirth, M. Lauer, and C. Stiller (2018) Pedestrian prediction by planning using deep neural networks. In ICRA, Cited by: §2.
  • [29] N. Rhinehart, R. McAllister, K. Kitani, and S. Levine (2019) PRECOG: prediction conditioned on goals in visual multi-agent settings. In ICCV, Cited by: §1, §2.
  • [30] A. Sadeghian, V. Kosaraju, A. Sadeghian, N. Hirose, H. Rezatofighi, and S. Savarese (2019) SoPhie: an attentive GAN for predicting paths compliant to social and physical constraints. In CVPR, Cited by: §4.1, §4.2, §4.2, Table 2.
  • [31] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone (2020) Trajectron++: multi-agent generative trajectory forecasting with heterogeneous data for control. arXiv preprint arXiv:2001.03093. Cited by: §1, §1, §2, §3.2, §3.3, §3.4, §3, §4.1, §4.2, §4.2, §4.2, Table 2, §4.
  • [32] S. Sivaraman and M. M. Trivedi (2014) Dynamic probabilistic drivability maps for lane change and merge driver assistance. IEEE Transactions on Intelligent Transportation Systems 15 (5), pp. 2063–2073. Cited by: §1.
  • [33] K. Sohn, H. Lee, and X. Yan (2015) Learning structured output representation using deep conditional generative models. In NIPS, Cited by: §1.
  • [34] L. A. Thiede and P. P. Brahma (2019) Analyzing the variety loss in the context of probabilistic trajectory prediction. In ICCV, Cited by: §4.1, §4.2.
  • [35] C. K. Williams and C. E. Rasmussen (2006) Gaussian processes for machine learning. Vol. 2, MIT Press, Cambridge, MA. Cited by: §1.
  • [36] Y. Yao and E. Atkins (2018) The smart black box: a value-driven automotive event data recorder. In ITSC, pp. 973–978. Cited by: §1.
  • [37] Y. Yao and E. Atkins (2020) The smart black box: a value-driven high-bandwidth automotive event data recorder. IEEE Transactions on Intelligent Transportation Systems. Cited by: §1.
  • [38] Y. Yao, X. Wang, M. Xu, Z. Pu, E. Atkins, and D. Crandall (2020) When, where, and what? a new dataset for anomaly detection in driving videos. arXiv preprint arXiv:2004.03044. Cited by: §1.
  • [39] Y. Yao, M. Xu, C. Choi, D. J. Crandall, E. M. Atkins, and B. Dariush (2019) Egocentric vision-based future vehicle localization for intelligent driving assistance systems. In ICRA, Cited by: §1, §3, §4.1, §4.1, Table 1.
  • [40] Y. Yao, M. Xu, Y. Wang, D. J. Crandall, and E. M. Atkins (2019) Unsupervised traffic accident detection in first-person videos. In IROS, Cited by: §1.