This technical report describes our method in detail. Our network builds on SceneTransformer [ngiam2021scene], a Transformer-based [xiong2020layer] model. For agent trajectories, time-wise and agent-wise self-attention encode the temporal sequence and the spatial interactions among agents, while cross-attention injects map information into the agent trajectory features. Finally, we predict trajectories and their corresponding scores from learnable tokens, which gather history-trajectory and map information from the mixed agent-map features through a cross-attention layer.
In general, we summarize the contributions of our proposed algorithm as follows:
We propose Temporal Flow Header to enhance the flow of temporal information in the whole network.
We propose a K-means method for the ensemble stage, and achieve state-of-the-art performance.
For the Transformer model, we propose an efficient strategy that reduces the input size during training and increases it during testing, achieving faster training while retaining high accuracy.
An overview of TENET is given in Fig. 1. The proposed motion prediction model consists of three modules: (1) a Transformer-based [xiong2020layer] encoder that extracts spatial and temporal features of agents and the map; (2) an attention decoder that uses learnable tokens [carion2020end] to query effective information from the encoder's mixed agent-map features; (3) three output headers, for regression, scoring, and enhancing temporal information interaction.
Encoder and Decoder We adopt the efficient self-attention and cross-attention of SceneTransformer [ngiam2021scene] to implement intra- and inter-information interaction between agents and the map.
Here, the time-wise self-attention performs attention over the time axis of the tensor, mixing information along that axis while keeping information along the other axes independent; the agent-wise self-attention acts in the same way over the agent axis. In the decoder, we generate learnable tokens as queries to a cross-attention layer whose keys and values come from the mixed agent-map feature produced by the encoder; its output is the prediction trajectory feature consumed by the headers below.
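The encoder-decoder interaction above can be sketched as follows. This is a minimal single-head numpy illustration, not the authors' implementation: `axis_attention` stands in for the axis-factorized self-attention (time-wise or agent-wise), and `cross_attention` for the learnable-token decoder; all shapes and names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axis_attention(x, axis):
    """Self-attention along one axis of an [A, T, D] tensor: information is
    mixed only along the attended axis; the other axes stay independent."""
    x = np.moveaxis(x, axis, -2)                      # bring attended axis to -2
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(x.shape[-1])
    out = softmax(scores, axis=-1) @ x                # mix along that axis only
    return np.moveaxis(out, -2, axis)

def cross_attention(q, kv):
    """Learnable tokens q [K, D] query a flattened context kv [N, D]."""
    scores = q @ kv.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ kv

A, T, D, K = 4, 11, 8, 6                              # toy sizes
agents = np.random.randn(A, T, D)
agents = axis_attention(agents, axis=1)               # time-wise self-attention
agents = axis_attention(agents, axis=0)               # agent-wise self-attention
tokens = np.random.randn(K, D)                        # K learnable mode queries
traj_feat = cross_attention(tokens, agents.reshape(-1, D))
print(traj_feat.shape)                                # (6, 8)
```

Each learnable token ends up with one D-dimensional trajectory feature, one per predicted mode.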
Regression Header We use a 2-layer MLP to decode trajectory information from the prediction trajectory feature at all timestamps (both history and future), including position, orientation, and instantaneous velocity in BEV coordinates. To mitigate the accumulation of long-range prediction error along a predicted trajectory, we use an LSTM [hochreiter1997long] to extract time-sequential information from the target agent feature and add it to the post-MLP trajectory information, yielding the final forecasted regression result.
Score Header To produce more reasonable predictions, our model takes map information into consideration. Hence, a cross-attention layer fuses the map feature into the prediction trajectory feature. We then obtain normalized scores by feeding the corresponding regression results into a 2-layer MLP followed by a softmax.
Temporal Flow Header To enhance the flow of temporal information in TENET, we propose this header as an auxiliary task that closes the loop between history and future. Specifically, TENET regresses backward from the prediction results to recover the historical trajectory, establishing temporal consistency of the trajectory information. To do so, we slice the future-timestamp features out of the prediction trajectory feature and devise an FPN (Feature Pyramid Network) [lin2017feature] module to obtain the history trajectory.
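A minimal sketch of the backward regression, under stated assumptions: the real FPN module is replaced by a toy multi-scale pooling (`simple_pyramid`), and the linear head `W`, the feature sizes, and the variable names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T_hist, T_fut, D = 50, 60, 16
scales = (1, 2, 4)

# Stand-in for the decoder's trajectory feature over all timestamps
F_traj = rng.normal(size=(T_hist + T_fut, D))
F_fut = F_traj[T_hist:]                       # slice out the future timestamps

def simple_pyramid(feat):
    """Toy FPN stand-in: pool the future feature at several temporal strides
    and fuse the levels by concatenation."""
    return np.concatenate([feat[::s].mean(axis=0) for s in scales])

# Linear head regressing backward to an (x, y) history trajectory
W = rng.normal(size=(len(scales) * D, T_hist * 2)) * 0.01
hist_pred = (simple_pyramid(F_fut) @ W).reshape(T_hist, 2)
print(hist_pred.shape)                        # (50, 2)
```

The recovered history can then be compared against the observed history as an auxiliary supervision signal.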
|Method & Rank||minADE (K=6)||minFDE (K=6)||Miss Rate (K=6)||brier-minADE (K=6)||brier-minFDE (K=6)|
2.2 Loss function
The loss function of the model consists of three parts.
For the regression loss, in order to couple trajectories with their scores, we use a GMM (Gaussian Mixture Model) [reynolds2015gaussian] loss, which ensures that each trajectory receives a reasonable score. We use all attributes of the trajectory as the ground-truth label, including position, orientation, and instantaneous velocity. In addition, both future and history timestamps are supervised, aiming to learn the motion pattern of the history trajectory in the manner of an autoencoder [liou2014autoencoder].
where the mixture weight of each component is the score of the corresponding predicted trajectory.
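A hedged sketch of a score-weighted mixture loss of this kind, assuming isotropic Gaussian components over trajectory positions; the exact component density and attributes used in the report may differ.

```python
import numpy as np

def gmm_nll(pred, scores, gt, sigma=1.0):
    """Negative log-likelihood of the ground truth under a mixture whose K
    components are the predicted trajectories weighted by their scores.
    pred: [K, T, 2], scores: [K] (softmax-normalized), gt: [T, 2]."""
    sq = ((pred - gt[None]) ** 2).sum(axis=(1, 2))        # [K] squared errors
    log_comp = np.log(scores + 1e-9) - sq / (2 * sigma**2)
    return -np.logaddexp.reduce(log_comp)                 # -log sum_k exp(.)

K, T = 6, 60
rng = np.random.default_rng(1)
gt = rng.normal(size=(T, 2))
pred = gt[None] + 0.1 * rng.normal(size=(K, T, 2))        # noisy candidates
scores = np.full(K, 1.0 / K)
loss = gmm_nll(pred, scores, gt)
print(float(loss))
```

Because the score enters each component's log-weight, gradients flow into both the trajectories and their scores, which is what ties the two together.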
For the score loss, to make the positive trajectory (defined as the trajectory closest to the ground truth) more confident, we adopt the max-margin loss:
where the margin is set to 0.15 in our loss function, and the positive prediction's score is encouraged to exceed every other score by at least this margin.
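A minimal sketch of this max-margin objective; the function name and score values are illustrative, only the margin of 0.15 comes from the report.

```python
import numpy as np

def max_margin_loss(scores, pos_idx, margin=0.15):
    """Hinge loss pushing the positive trajectory's score above every other
    trajectory's score by at least `margin` (0.15 in the report)."""
    gaps = margin + scores - scores[pos_idx]
    gaps[pos_idx] = 0.0                        # the positive competes only with others
    return float(np.maximum(gaps, 0.0).sum())

scores = np.array([0.50, 0.40, 0.05, 0.03, 0.01, 0.01])
loss = max_margin_loss(scores, pos_idx=0)      # only 0.40 violates the 0.15 margin
print(loss)
```

Once every competing score sits at least 0.15 below the positive score, the loss is zero and no further gradient is applied.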
For the Temporal Flow Header loss, we take the historical trajectory as the ground truth for the header's middle-level supervision.
2.3 Data Augmentation
Augmentation, which is essential for our model, can be divided into agent augmentation and training augmentation.
Firstly, in agent augmentation we treat other agents as the target agent. Owing to our Transformer-based information interactor, the target agent and the other agents are processed identically during training. Therefore, to generate more kinds of conditions within a single scenario, agent augmentation exchanges the identities of the target agent and another agent.
Besides, during training, four augmentation methods (translation, rotation, flipping, and resizing) are used to generate abundant scenarios. Each augmentation is applied simultaneously to the agent tensors, ground-truth tensors, and road-graph tensors. In translation augmentation, we generate a random offset within a fixed range and translate the coordinates in the three tensors above, such as the coordinates of trajectories and centerlines. In rotation augmentation, we randomly rotate the coordinate system within a fixed angular range. In flip augmentation, we flip the coordinate system about an axis with a certain probability. In resize augmentation, the entire scene is scaled by a constant randomly selected from a fixed range.
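The four transforms can be sketched as below. This is an illustrative stand-in: the report does not state all of its ranges, so the defaults (`max_shift`, `max_angle`, `scale_range`, `flip_prob`) are placeholder assumptions, and in practice the same sampled transform must be applied to the agent, ground-truth, and road-graph tensors together.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(points, max_shift=2.0, max_angle=np.pi,
            scale_range=(0.9, 1.1), flip_prob=0.5):
    """Apply translation, rotation, flip, and resize to an [N, 2] coordinate
    array. All parameter ranges here are illustrative placeholders."""
    # translation: one shared random offset for the whole scene
    points = points + rng.uniform(-max_shift, max_shift, size=2)
    # rotation: rotate the whole coordinate system by a random angle
    a = rng.uniform(-max_angle, max_angle)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    points = points @ R.T
    # flip: mirror across one axis with some probability
    if rng.random() < flip_prob:
        points = points * np.array([1.0, -1.0])
    # resize: scale the entire scene by a single random constant
    return points * rng.uniform(*scale_range)

traj = np.stack([np.linspace(0, 10, 11), np.zeros(11)], axis=1)
aug = augment(traj)
print(aug.shape)
```

Note that all four transforms preserve the shape of the trajectory up to a uniform scale, so relative motion patterns survive the augmentation.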
2.4 Hard Mining
We use a hard mining technique to improve the model's predictions in difficult scenarios. Specifically, we train a proxy model on a randomly sampled subset of the original training set and let it run inference on the remaining training data. We then mine the scenarios in which the proxy model performs poorly (those with a large brier-minFDE) and increase their proportion in the training phase.
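The resampling step can be sketched as follows; `proxy_metric` plays the role of the proxy model's per-scenario brier-minFDE, and the fraction and repeat factor are illustrative assumptions, not values from the report.

```python
def build_hard_mined_set(scenarios, proxy_metric, hard_frac=0.2, repeat=2):
    """Oversample the scenarios on which a proxy model scored worst.
    `proxy_metric(s)` stands in for the proxy's brier-minFDE on scenario s."""
    ranked = sorted(scenarios, key=proxy_metric, reverse=True)   # worst first
    hard = ranked[:int(len(ranked) * hard_frac)]                 # hardest slice
    return scenarios + hard * (repeat - 1)   # hard scenarios appear `repeat` times

scenarios = list(range(100))
metric = lambda s: s / 100.0                 # pretend higher id = harder scenario
train_set = build_hard_mined_set(scenarios, metric)
print(len(train_set))                        # 100 originals + 20 extra hard copies
```

Duplicating hard scenarios is equivalent to upweighting them in the training distribution without changing the loss function.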
2.5 Multi-Trajectory Ensemble
Multi-modality is a central characteristic of the trajectory prediction task. Most methods avoid unimodal outputs by training with winner-takes-all (WTA) [lee2016stochastic], which is unstable with respect to network initialization. Inspired by DCMS [ye2022dcms], we enhance the multi-modality of predicted trajectories with Multi-Trajectory Ensemble.
Specifically, we use models with different random-seed initializations, different degrees of hard mining, and different numbers of training epochs to generate multiple sets of trajectories. Then, from the pooled trajectories, we apply the K-means algorithm [macqueen1967some] to form trajectory clusters, using the endpoint distance as the clustering metric. For each cluster, we average all of its trajectories to produce the output trajectory and use the sum of their scores as that trajectory's score. Finally, all trajectory scores are linearly normalized. Summing the scores turns out to be better than averaging them, as summing gives higher scores to clusters containing more trajectories. Fig. 2 visualizes Multi-Trajectory Ensemble. We also find that the ensembled result can serve as a teacher for knowledge distillation, enabling single models to achieve better precision.
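The clustering step can be sketched as below. This is a minimal Lloyd-style K-means over trajectory endpoints, with an illustrative empty-cluster fallback; the function name, sizes, and iteration count are assumptions rather than the authors' implementation.

```python
import numpy as np

def multi_trajectory_ensemble(trajs, scores, k=6, iters=20, seed=0):
    """Cluster M candidate trajectories [M, T, 2] by endpoint distance,
    average each cluster into one trajectory, sum its scores, and
    linearly normalize the summed scores."""
    rng = np.random.default_rng(seed)
    ends = trajs[:, -1, :]                               # endpoint distance metric
    centers = ends[rng.choice(len(ends), size=k, replace=False)]
    for _ in range(iters):                               # plain Lloyd iterations
        d = np.linalg.norm(ends[:, None] - centers[None], axis=-1)   # [M, k]
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = ends[labels == c].mean(axis=0)
    out_trajs, out_scores = [], []
    for c in range(k):
        members = labels == c
        if not members.any():                            # empty-cluster fallback
            members = np.arange(len(trajs)) == d[:, c].argmin()
        out_trajs.append(trajs[members].mean(axis=0))
        out_scores.append(scores[members].sum())         # sum, not average
    out_scores = np.array(out_scores)
    return np.stack(out_trajs), out_scores / out_scores.sum()

rng = np.random.default_rng(3)
M, T = 42, 60                                            # e.g. 7 models x 6 modes
trajs = rng.normal(size=(M, T, 2))
scores = rng.uniform(size=M)
final_trajs, final_scores = multi_trajectory_ensemble(trajs, scores)
print(final_trajs.shape)
```

Because scores are summed before normalization, clusters backed by many models automatically receive more probability mass.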
3.1 Dataset and Metrics
Dataset The Argoverse 2 Motion Forecasting Dataset [wilson2021argoverse] contains 250,000 11-second scenarios sampled at 10 Hz. For the training and validation sets, the first five seconds of each scenario are used as input and the remaining six seconds as the ground truth to predict. For the test set, only the first five seconds are provided. The dataset offers rich map information and contains five different dynamic categories.
Metrics The Argoverse 2 Motion Forecasting Challenge uses brier-minFDE (K=6) as its metric. minFDE (K=6) is the minimum displacement between the K predicted final positions and the ground-truth final position. brier-minFDE additionally adds (1 - p)^2 to the endpoint L2 distance, where p is the probability of the best forecasted trajectory.
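A small sketch of the metric as described above; the toy endpoints and probabilities are made up for illustration.

```python
import numpy as np

def brier_min_fde(pred, prob, gt_end):
    """brier-minFDE: endpoint error of the best-of-K trajectory plus the
    Brier penalty (1 - p)^2 on that trajectory's probability.
    pred: [K, T, 2] trajectories, prob: [K] probabilities, gt_end: [2]."""
    fde = np.linalg.norm(pred[:, -1, :] - gt_end, axis=-1)   # K endpoint errors
    best = fde.argmin()                                      # trajectory with minFDE
    return fde[best] + (1.0 - prob[best]) ** 2

K, T = 6, 60
pred = np.zeros((K, T, 2))
pred[:, -1, 0] = np.arange(K)            # endpoints at x = 0, 1, ..., 5
prob = np.full(K, 1.0 / K)
metric = brier_min_fde(pred, prob, np.array([0.0, 0.0]))
print(metric)                            # 0 endpoint error + (1 - 1/6)^2
```

The Brier term means a method is rewarded not only for placing one mode near the ground truth but also for assigning that mode high probability.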
3.2 Implementation Details
We train our model for 200 epochs (around 96 hours) on eight 2080Ti GPUs. As input, we sample actors and lanes within 100 meters of the target agent. Using rotation and translation, each scene is normalized with the target agent at its center. Specifically, the most recent historical position of the target agent is taken as the origin, and the direction of the target agent's history trajectory is aligned with the positive direction of one coordinate axis. We use the Adam [kingma2014adam] optimizer with an initial learning rate of 2.5e-4, decayed to 2.5e-5 at epoch 170 and to 2.5e-6 at epoch 190. The agent dimension is set to 32 and the map dimension to 128 at training time. Since the model supports dynamic agent and map input dimensions, these are increased to 64 and 256 respectively during testing. We use padding and clipping to align the dimensions. All Transformer modules in TENET contain 128 hidden units.
Argoverse 2 Motion Forecasting Competition. We evaluate TENET on the Argoverse 2 Motion Forecasting Competition. Table 1 reports our method's results and rank on the final leaderboard; the official metric is brier-minFDE (K=6).
To demonstrate the effectiveness of Multi-Trajectory Ensemble, we compare the average performance of trajectories before ensemble and after ensemble on the Argoverse 2 Motion Forecasting test set. As shown in Table 2 and Figure 2, Multi-Trajectory Ensemble integrates all trajectories and enhances the multi-modality and confidence of the prediction.
Besides, Table 2 shows that increasing the input size yields better prediction results. We therefore reduce the input size in the training phase and increase it in the testing phase, accelerating training while maintaining predictive accuracy.
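The padding-and-clipping alignment mentioned above can be sketched as follows; the function name and feature sizes are illustrative, and in the real model padded rows would be masked out of attention.

```python
import numpy as np

def fit_to_budget(feats, budget):
    """Clip or zero-pad an [N, D] feature set to exactly `budget` rows, so the
    same model can train with a small budget (e.g. 32 agents) and test with a
    larger one (e.g. 64)."""
    n, d = feats.shape
    if n >= budget:
        return feats[:budget]                      # clip the extra entries
    pad = np.zeros((budget - n, d))
    return np.concatenate([feats, pad], axis=0)    # zero-pad (masked in practice)

agents = np.random.randn(40, 16)
small = fit_to_budget(agents, 32)                  # training-time budget
large = fit_to_budget(agents, 64)                  # test-time budget
print(small.shape, large.shape)
```

Because the attention modules are agnostic to the number of rows, only this alignment step needs to change between training and testing.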
|Method||big input size||MTE||brier-minFDE (K=6)|
This technical report presents an effective method for the motion prediction task. We develop an efficient Transformer-based network to predict trajectories and propose the Temporal Flow Header to enhance them. Besides, we devise a training strategy that accelerates model training and a strong K-means-based ensemble method. We conduct experiments on the Argoverse 2 Motion Forecasting Dataset [wilson2021argoverse] and achieve state-of-the-art performance. Finally, we hope this work will serve as a strong baseline for the motion prediction task.