1 Introduction
Most autonomous navigation and tracking applications include interactions with other agents in the world. If a robot can accurately forecast the motion of external agents, it can plan intelligent behaviors, instead of having purely reactive interactions. For example, to avoid collisions with other drivers on a road, an autonomous car should be able to model future trajectories of other vehicles, and a robot manipulator that interacts with humans should anticipate a person’s behavior to avoid dangerous situations.
Typically, the motion prediction problem is addressed with filtering-based methods [1], which use specific kinematic and observation models to estimate the subject's states, such as position, orientation, and velocity. However, predicting an agent's motion from kinematics alone is an incomplete model of the true behavior, which also depends on the surrounding environment. Although human experts can manually design functions that model the agent's interactions with objects, this process is usually time-consuming and generalizes poorly to new environments.
Recently, imitation-learning-based approaches have shown that the complex agent-environment interaction can be learned from data [2, 3]. In particular, inverse reinforcement learning (IRL) techniques can recover a complex reward structure from expert demonstrations, and generalize more robustly than manually designed functions or supervised learning [4]. Although the first IRL reward structures were linear [5, 6, 7], recent work [3, 8] increased the expressiveness of the reward model. A challenge in IRL comes from the choice of representation of the agent's state: adding dimensions such as velocities and higher-order derivatives generally improves model fidelity, but extra dimensions come with an exponential increase in computational complexity [9]. Finn et al. [8] used importance sampling to reduce computation time when considering positions and velocities as the robot state. Previous IRL work for ground vehicles [3], however, does not consider kinematics when computing reward maps, using only position as the state.
Our key insight is that the exponential complexity of incorporating kinematics into the state space can be avoided if kinematic data is instead used during feature extraction. We offer three major contributions: 1) we propose an improvement on existing deep IRL frameworks, incorporating both kinematic and environmental context to predict trajectories from raw sensory input; 2) we train and validate the proposed method on a custom off-road driving dataset; 3) we qualitatively and quantitatively compare our prediction results with baseline methods such as extended Kalman filters (EKF) and direct behavior cloning (BC).
2 Related work
The problem of predicting an agent's motion can be approached with different frameworks depending on the system's objective. A Kalman filter estimates linear and angular velocities, then forward-propagates the motion [1]; the filter ignores environmental context and provides a unimodal distribution over the space of possible trajectories. To model the agent's interactions with objects in the world, various methods manually design cost functions [10, 11, 12]. To overcome the need for human fine-tuning and the resulting low generalization, one can turn to imitation learning.
BC methods for modeling driving date as far back as 1989, when Pomerleau [13] used supervised learning to map the current first-person image from a car to the desired steering angle, and such methods have improved significantly over time [14]. A drawback of BC is that, in practice, it relies on supervised learning with non-i.i.d. samples due to the sequential nature of the data; matching the training and test distributions can therefore be a challenge, hindering generalization [4], although techniques like DAgger [15] have been developed to mitigate this problem.
IRL can be used to improve generalization of a policy. IRL’s objective is to recover the reward function that the agent optimizes [4]. While early work on IRL modeled rewards as linearly dependent on features [5, 6, 7, 2], more complex models followed [8, 16, 17, 3].
Closest to our work, Wulfmeier et al. [3] use a neural network to map sensor measurements from expert driving demonstrations to a reward map. A motion planner can then use the reward map to generate a path between desired start and goal locations. However, this approach ignores the role of dynamics in the forecast, considering only positions in the state space. Adding extra dimensions to the state exponentially increases the computational complexity of the task [9]: training slows down significantly and requires larger amounts of demonstrations for a policy to converge. Finn et al. [8] address this complexity by using importance sampling to compute state visitation frequencies from the learned policy. In this work, however, we argue that kinematics can instead be successfully used in the feature extraction process. We therefore propose a nonlinear reward model that depends on both kinematic and environmental features, while keeping the state-space dimension small.
3 Approach
We frame the problem of vehicle trajectory prediction with maximum-entropy IRL [7]. To incorporate both environmental and kinematic context, we use a two-stage convolutional neural network (CNN) architecture to approximate the underlying reward function.
3.1 Problem formulation
Our objective is to learn to predict the target's future path, or a probability distribution over future paths. We use a set of demonstration trajectories $\mathcal{D} = \{\tau_i\}$, collected in various environments. A trajectory is defined as a sequence of states $\tau = (s_1, \dots, s_T)$, where each state $s_t$ represents the target's 2D location in the world frame. Each state has associated features $f(s_t)$, obtained from sensory inputs. To predict the trajectory, one can directly estimate the state sequence, or predict the action sequence $(a_1, \dots, a_{T-1})$, where each action $a_t$ is a discretized motion (up, down, left, right). In BC, we find a mapping from state and features to action, $\pi(a \mid s, f)$ [4]. In IRL, however, the goal is to recover the agent's hidden reward function $r(s)$ so as to maximize the probability of the demonstrated trajectories. Once the hidden reward is inferred, the probabilities of future trajectories can be computed.
3.2 Maximum entropy deep IRL
Here we follow the maximum entropy IRL formulation [7], treating trajectories with higher rewards as exponentially more likely. As in [3, 8], we approximate the reward function with a deep neural network, $r(s) = g(f(s); \theta)$. We define the reward of a trajectory as the cumulative reward over all states in that trajectory, $r(\tau) = \sum_{s \in \tau} r(s)$. Under the maximum entropy assumption, the probability of a trajectory given the reward function is $P(\tau \mid r) = \frac{1}{Z} \exp(r(\tau))$, where $Z$ is the partition function over all possible trajectories. We then maximize the log-likelihood of the demonstrated trajectories [3], $\mathcal{L}(\theta) = \sum_{\tau \in \mathcal{D}} \log P(\tau \mid r)$, whose gradient takes the form $\frac{\partial \mathcal{L}}{\partial \theta} = (\mu_D - \mathbb{E}[\mu]) \frac{\partial r}{\partial \theta}$, where $\mu_D$ and $\mathbb{E}[\mu]$ are the state visitation frequencies (SVF) from the demonstrated trajectories and from the inferred reward function, respectively. After each reward network update, we use standard value iteration to solve the forward RL problem in the loop, as shown in Alg. 1. During value iteration, to speed up convergence we artificially increase the probability of the most likely action being chosen, in what we call annealed softmax (Alg. 2).
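To make the training loop concrete, the following is a minimal tabular sketch of one inner iteration: soft value iteration with a temperature-sharpened ("annealed") softmax, followed by expected-SVF propagation. This is not the paper's network-based implementation; the chain environment, discount factor, temperature, and horizon are illustrative assumptions.

```python
import numpy as np

def annealed_softmax(q, tau=0.5):
    """Softmax over action values with temperature tau < 1, sharpening the
    distribution toward the most likely action (the annealing trick)."""
    z = (q - q.max()) / tau
    e = np.exp(z)
    return e / e.sum()

def soft_value_iteration(reward, transitions, n_iters=50, gamma=0.95, tau=0.5):
    """Solve the forward RL problem on a small deterministic MDP.
    reward: (S,) per-state reward; transitions: (S, A) next-state indices."""
    S, A = transitions.shape
    V = np.zeros(S)
    for _ in range(n_iters):
        Q = reward[transitions] + gamma * V[transitions]          # (S, A)
        V = np.array([annealed_softmax(Q[s], tau) @ Q[s] for s in range(S)])
    return np.stack([annealed_softmax(Q[s], tau) for s in range(S)])

def expected_svf(policy, transitions, start, horizon):
    """Propagate expected state-visitation frequencies under the policy."""
    S, A = transitions.shape
    d = np.zeros(S)
    d[start] = 1.0
    mu = d.copy()
    for _ in range(horizon - 1):
        d_next = np.zeros(S)
        for s in range(S):
            for a in range(A):
                d_next[transitions[s, a]] += d[s] * policy[s, a]
        d = d_next
        mu += d
    return mu
```

The reward update in the real pipeline backpropagates the SVF difference through the network; in this tabular stand-in it would simply be `reward += lr * (mu_demo - mu_policy)`.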
3.3 Incorporating kinematics into the reward: twostage network architecture
Intuitively, when humans predict vehicle motion from a third-person perspective, we primarily reason about two aspects: the surrounding environment and the vehicle's motion so far. We implicitly evaluate the traversability of the terrain and the locations of obstacles, and extrapolate the past vehicle motion to infer where turning will be necessary and/or possible. Given enough observations, we can even infer subtler cues such as driver preferences and a specific motion model for the vehicle.
Inspired by this intuition, we designed a two-stage network architecture that reasons about environmental and kinematic context when computing rewards (Figure 2). In the first stage, we adopt a four-layer fully convolutional network (FCN) [18] that takes colored point cloud statistics as inputs and outputs features extracted purely from the environmental context. Instead of using encoder-decoder approaches [18, 19, 20], which inevitably lose spatial information due to their downsampling stages, we adopt dilated convolutional layers, which systematically aggregate multi-scale context without losing spatial information [21]. We estimated the receptive field of our dilated network to be sufficient to extract the relevant environmental context. The final output of the dilated network is 25 feature maps, a number experimentally observed to contain sufficient information to represent spatial context without excessive redundancy and sparsity.
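The receptive field of a stack of stride-1 dilated convolutions can be computed in closed form: each layer widens the field by `(kernel - 1) * dilation` pixels. A small sketch follows; the four-layer schedule with doubling dilations is an assumed example, since the exact dilation rates are not stated here.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field (in pixels) of a stack of stride-1 conv layers.
    Each layer with kernel k and dilation d adds (k - 1) * d pixels."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Hypothetical four-layer schedule of 3x3 kernels with doubling dilations:
rf = receptive_field([3, 3, 3, 3], [1, 2, 4, 8])   # 1 + 2 + 4 + 8 + 16 = 31
```

Doubling the dilation per layer grows the field exponentially in depth, which is what lets a shallow network aggregate wide context without any downsampling.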
As inputs to the second stage, we concatenate the output of the first stage with two feature maps encoding positional information and three feature maps representing kinematic information from the past trajectory, as follows. The first two feature maps encode, for each grid cell, the x and y position of that cell in a vehicle-centered, world-aligned frame. These maps are independent of the trajectories, but convey absolute position information to the FCN, which is otherwise translation-invariant. (An alternative way of encoding absolute spatial position would be a fully connected network, but this would substantially increase the number of parameters.) Then, for each training sample, past trajectory information is encoded in three feature maps: two represent the vehicle's past velocity, discretized to the four cardinal directions and with a normalized magnitude proportional to its absolute speed, and the third encodes the trajectory curvature, which is related to angular velocity. These quantities are estimated from the past five seconds of vehicle motion, and each is constant across the whole feature map. Finally, they are empirically normalized to a small finite range to aid training stability. More implementation details can be found in the Appendix.
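The five extra input channels described above can be sketched as follows. The normalization scheme (`v_max` divisor, `tanh` squashing) is an assumption for illustration; the paper only states that the values are empirically normalized to a small finite range.

```python
import numpy as np

def kinematic_channels(H, W, cell_m, vx, vy, curvature, v_max=10.0):
    """Build the five extra input maps: per-cell x/y position in a
    vehicle-centered, world-aligned frame, plus three maps that are
    constant over the grid, holding the past velocity components and
    the trajectory curvature. Normalization constants are illustrative."""
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    x_map = (xs - W / 2.0) * cell_m          # meters right of the vehicle
    y_map = (ys - H / 2.0) * cell_m          # meters ahead of the vehicle
    const = lambda v: np.full((H, W), float(v))
    return np.stack([
        x_map / (W * cell_m / 2.0),          # position x, roughly [-1, 1]
        y_map / (H * cell_m / 2.0),          # position y, roughly [-1, 1]
        const(np.clip(vx / v_max, -1, 1)),   # past velocity, x component
        const(np.clip(vy / v_max, -1, 1)),   # past velocity, y component
        const(np.tanh(curvature)),           # curvature squashed to (-1, 1)
    ])
```

The two position maps vary per cell (breaking translation invariance), while the three kinematic maps are spatially uniform, matching the description above.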
4 Experiments
In this section we present our dataset collection approach, baseline design, metrics, prediction results, and a discussion of what is learned.
4.1 Off-road driving dataset collection
The off-road driving dataset was collected at a test site of roughly 400 acres, with more than five different drivers. The vehicle platform, a modified all-terrain vehicle (ATV), is shown in Figure 3. The dataset includes over 1000 trajectories, with an average length of roughly 20–40 m, totaling about 30 km of driving demonstrations.
For each demonstration we create a high-precision, colored local point cloud map. We then convert the point cloud into a 2D grid with per-cell statistics (maximum height, height variance, and mean RGB values), and extract the vehicle's past trajectory and ground-truth future trajectory using a high-precision RTK-GNSS/INS system that provides position data with centimeter-level accuracy.
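The per-cell statistics can be sketched with a simple binning pass over the colored point cloud. Grid extent and resolution here are placeholder arguments, and the nested-loop binning is a readability sketch rather than an efficient implementation.

```python
import numpy as np

def cloud_to_grid(points, colors, extent_m, res_m):
    """Bin a colored point cloud (points: (N, 3) xyz, colors: (N, 3) rgb)
    into a 2D grid and compute per-cell statistics:
    channel 0 = max height, 1 = height variance, 2..4 = mean RGB."""
    n = int(extent_m / res_m)
    ix = np.clip(((points[:, 0] + extent_m / 2) / res_m).astype(int), 0, n - 1)
    iy = np.clip(((points[:, 1] + extent_m / 2) / res_m).astype(int), 0, n - 1)
    grid = np.zeros((5, n, n))
    for c in range(n):
        for r in range(n):
            m = (ix == c) & (iy == r)
            if m.any():
                z = points[m, 2]
                grid[0, r, c] = z.max()            # maximum height
                grid[1, r, c] = z.var()            # height variance
                grid[2:, r, c] = colors[m].mean(axis=0)  # mean RGB
    return grid
```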
Our choice of features is motivated by the fact that in off-road environments, a combination of geometric and color statistics can provide useful information about terrain traversability. For example, one can intuitively categorize regions with high height variance as rough and non-traversable. However, combining geometry with color can be even more informative: an area that is simultaneously rough and light green is probably just a region of tall grass, over which the vehicle can easily drive. An example from our dataset of demonstrations is shown in Figure 4.
4.2 Metrics and baselines
We employ two metrics for quantitative evaluation: the negative log-likelihood (NLL) of the demonstration data and the Hausdorff distance (HD) [2]. The NLL metric computes the log-likelihood of the demonstration trajectory under the learned policy, normalized by the demonstration trajectory length. The HD metric measures spatial similarity between expert demonstrations and trajectories sampled from the learned policy; we compute the average HD between the demonstration trajectory and 1000 trajectories randomly sampled from the learned policy.
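Both metrics are straightforward to compute; a minimal sketch follows (in practice the HD is averaged over the 1000 sampled trajectories, which the usage note shows).

```python
import numpy as np

def normalized_nll(action_probs):
    """Per-step negative log-likelihood of a demonstration: the mean of
    -log pi(a_t | s_t) over the steps of the trajectory."""
    return -np.mean(np.log(np.asarray(action_probs)))

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sequences
    A: (N, 2), B: (M, 2) -- the larger of the two directed distances."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (N, M)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

To score a learned policy one would average, e.g., `np.mean([hausdorff(demo, s) for s in samples])` over the sampled trajectories.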
We selected three baseline methods for comparison: 1) an EKF with a kinematic bicycle model; 2) a BC technique that uses supervised learning to learn a policy mapping both environmental and kinematic inputs to an action; and 3) a deep IRL method that considers only environmental input to compute features, with no kinematics. Table 1 additionally includes a uniformly random policy as a reference.
4.3 Trajectory prediction results
Qualitative evaluation
We show key experimental results in Figure 5, and a graphical comparison with the baselines in Figure 6. We verify that forecasts based on the inferred reward map match the expected behaviors of a human driver, with trajectory probabilities concentrated on trails or traversable paths. In addition, multimodal behaviors appear at intersections and when approaching open areas.
In Figure 1 we see how the trajectory prediction behaves in the same environment under different agent kinematics. Our predictions are surprisingly intuitive: at low speed, the forecast shows a higher likelihood of taking turns at an intersection, while at high speed it predicts that the vehicle will probably keep driving straight along the trail. More visualizations can be found in the demo video (https://github.com/yfzhang/vehiclemotionforecasting).
Quantitative evaluation
We compare our method against four baselines on the off-road driving dataset, as shown in Table 1. Our method achieves the best prediction results on the test set under both metrics. The EKF obtains surprisingly good HD results; we believe this is due to a relatively large proportion of straight or near-straight trajectories in the dataset, which the EKF can predict well. We also observe comparatively poor performance from deep IRL without kinematic features: examination of its predictions suggests that without past kinematic context, in many open scenarios this method can only predict diffuse, highly uncertain future motions, as shown in Fig. 6. Our method, which exploits both environmental and kinematic context, obtains the best results.
Method | EKF  | Behavior cloning | Random | Deep IRL without kinematics | Ours
------ | ---- | ---------------- | ------ | --------------------------- | ----
NLL    | N.A. | 0.87             | 1.35   | 1.33                        | 0.69
HD     | 9.12 | 10.54            | 25.62  | 25.46                       | 6.71

Table 1: Prediction performance comparison on the test set using NLL and HD. For both metrics, lower numbers represent better predictions. Our method obtains the best results on both evaluation metrics.
4.4 What is learned?
We try to build an understanding of what is being learned by plotting representative outputs from different stages of the network. As shown in Figure 7, the first-stage network apparently encodes the traversable trail based on the colored point cloud input. As shown in Figure 8, the second stage learns an effective forward motion model of the vehicle, with an uncertainty cone roughly oriented along the direction of motion.
5 Discussion and future work
In this work we proposed and validated a deep IRL approach that integrates kinematic and environmental context to learn a reward structure for driving prediction in off-road environments. We showed that our approach outperforms baselines such as the EKF, BC, and a deep IRL approach that does not consider kinematics.
Our two-stage network architecture for reward approximation mitigates the computational complexity of adding kinematics as extra dimensions of the state space; we instead incorporate kinematics in the feature extraction process. Based on our findings, we see future work in a few directions:
Dynamic environments: Our current approach cannot handle dynamic environments containing other moving agents, such as pedestrians and other vehicles. As an extension of the current framework, we could take a history of visual and LiDAR data as the environmental input; this temporal information could be used to infer the motion of other moving agents in the environment. A recurrent network could likely fuse this sequential information and approximate the reward structure.
Improved kinematics model: Our kinematic features come from a fixed-time window over the past trajectory. When the driver makes a sudden maneuver, a fixed-time window is not ideal, because under sudden movements only a shorter time segment should be considered to forecast where the vehicle will go. We envision improvements to the second-stage network so that it learns to select adaptive time windows for prediction, possibly using recurrent networks.
This research is funded by Yamaha Motor Co., Ltd. We thank Lentin Joseph, Masashi Uenoyama, and Yuji Hiramatsu for data collection, and the anonymous reviewers for providing feedback.
References
[1] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
[2] K. M. Kitani, B. D. Ziebart, J. A. Bagnell, and M. Hebert. Activity forecasting. In European Conference on Computer Vision, pages 201–214. Springer, 2012.
[3] M. Wulfmeier, D. Rao, D. Z. Wang, P. Ondruska, and I. Posner. Large-scale cost function learning for path planning using deep inverse reinforcement learning. The International Journal of Robotics Research, 36(10):1073–1087, 2017.
[4] T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel, J. Peters, et al. An algorithmic perspective on imitation learning. Foundations and Trends in Robotics, 7(1-2):1–179, 2018.
[5] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, page 1. ACM, 2004.
[6] N. D. Ratliff, J. A. Bagnell, and M. A. Zinkevich. Maximum margin planning. In Proceedings of the 23rd International Conference on Machine Learning, pages 729–736. ACM, 2006.
[7] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pages 1433–1438, 2008.
[8] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pages 49–58, 2016.
[9] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, 1998.
[10] H. M. Choset, S. Hutchinson, K. M. Lynch, G. Kantor, W. Burgard, L. E. Kavraki, and S. Thrun. Principles of Robot Motion: Theory, Algorithms, and Implementation. MIT Press, 2005.
[11] C. Urmson, C. Baker, J. Dolan, P. Rybski, B. Salesky, W. Whittaker, D. Ferguson, and M. Darms. Autonomous driving in traffic: Boss and the Urban Challenge. AI Magazine, 30(2):17, 2009.
[12] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, et al. Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics, 23(9):661–692, 2006.
[13] D. A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems, pages 305–313, 1989.
[14] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
[15] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627–635, 2011.
[16] J. Ho and S. Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565–4573, 2016.
[17] S. Levine, Z. Popovic, and V. Koltun. Nonlinear inverse reinforcement learning with Gaussian processes. In Advances in Neural Information Processing Systems, pages 19–27, 2011.
[18] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[19] V. Badrinarayanan, A. Kendall, and R. Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12):2481–2495, 2017.
[20] A. Paszke, A. Chaurasia, S. Kim, and E. Culurciello. ENet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016.
[21] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
Appendix A Additional experimental details
In this section, we provide additional details for all experiments, including data preprocessing, baseline implementation, network parameters, and training hyperparameters.
A.1 Data preprocessing
Environment input. We fuse camera data with LiDAR data to generate a colored point cloud. For each demonstration, we create a local 3D point cloud map via registration and compress it into a 2D grid (fixed metric extent and resolution) with five channels: maximum height, height variance, and mean red, green, and blue values.
Kinematic input. We have five channels encoding positional and kinematic information. The first two channels encode, for each grid cell, the positional information in a vehicle-centered, world-aligned frame, as shown in Figure 9. They provide positional information to the convolutional filters and break translation invariance. The other three channels encode the kinematic information, estimated from the past trajectory as illustrated in Figure 9, and are uniform over the whole feature map. The two velocity channels are calculated on the grid map and are therefore discretized; they approximate the vehicle's past linear velocity. The curvature channel is estimated by fitting a least-squares circle to the past trajectory and approximates the trajectory curvature, which is related to angular velocity. All kinematic inputs are empirically normalized to aid training stability.
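One standard way to fit a least-squares circle to the past trajectory is the algebraic (Kasa) fit, which reduces to a linear system; the paper does not name its specific fitting method, so this is an illustrative choice.

```python
import numpy as np

def fit_curvature(xy):
    """Algebraic least-squares (Kasa) circle fit to points xy: (N, 2).
    From (x - cx)^2 + (y - cy)^2 = r^2 we get the linear system
    a*x + b*y + c = -(x^2 + y^2) with a = -2*cx, b = -2*cy,
    c = cx^2 + cy^2 - r^2. Returns the curvature 1 / r."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return 1.0 / r
```

For a near-straight trajectory the fitted radius grows very large, so the curvature channel goes to zero, which is the desired behavior.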
A.2 Training and implementation details
We implemented the training and inference pipeline using PyTorch (https://pytorch.org/); our code and dataset will be made publicly available (https://github.com/yfzhang/vehiclemotionforecasting). We use a 90/10 split to divide our data into training and test sets. We used the following techniques for training the network:
Data augmentation. Augmenting the demonstration data with different rotations is critical because the proposed two-stage network is sensitive to directional bias. We wanted to avoid overfitting to a particular direction without properly reasoning about kinematics.
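Rotation augmentation on grid inputs can be sketched as below. The action-index convention (0=up, 1=right, 2=down, 3=left) and the counter-clockwise relabeling rule are assumptions for illustration, since the exact conventions are not stated.

```python
import numpy as np

def rotate_sample(grid, actions, k):
    """Rotate a training sample by k * 90 degrees counter-clockwise:
    spatial channels via np.rot90 on the last two axes, and discrete
    action labels relabeled under the assumed 0=up,1=right,2=down,3=left
    convention (a cardinal direction shifts by -k under a CCW rotation)."""
    grid_r = np.rot90(grid, k=k, axes=(-2, -1)).copy()
    actions_r = [(a - k) % 4 for a in actions]
    return grid_r, actions_r
```

Applying all four rotations to each demonstration quadruples the data while removing any global directional bias.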
Demonstration distribution. The network tends to overfit to straight line predictions if the majority of the demonstrations are straight as well, sometimes ignoring environmental features. Therefore we balanced the distribution of types of trajectories, such as straight lines, small and large curves.
Simulated annealing. To improve convergence during training time, we artificially increased the probability of the most likely action being chosen.
Training in parallel. Since forward RL must be performed at every training iteration to compute gradients, we implemented parallel gradient computation for each batch of demonstrations. We used batches of 16 demonstrations. Batch training proved more stable in practice than gradients based on a single demonstration.
A.3 Baseline details
For all learning-based baselines, we set all training hyperparameters to the same values as in our method, and stop training when overfitting appears on the validation set.
Deep IRL without kinematics. We removed the second stage and kept the first stage the same as the original network design in our approach, so no kinematic data is used in training or testing. The result is as expected: the method predicts a distribution concentrated on the path, but diffused over all directions compared to our approach. Because of this omnidirectional diffusion, the quantitative evaluation score of this baseline is very low. The reason deep IRL without kinematics works in the prior work [3] is that an explicit goal state can be provided in the motion planning problem setting; in both training and testing, the value of the goal state can be held fixed, pulling the optimal policy toward it. In our problem setting, however, the aim is to predict future motion over a finite time horizon, and there is no explicit goal state.
BC. We kept the network architecture the same as in our approach, but instead of outputting reward values, we set the last layer to output four channels followed by a softmax layer. The final output is thus the policy, containing the probabilities of the four actions. The network input is also the same as in our approach, containing both environmental (LiDAR + RGB) and kinematic data.
EKF. We adopted an EKF with a kinematic bicycle model for state estimation. We use the vehicle positions observed in the past trajectory as measurement inputs and implicitly estimate the vehicle's linear and angular velocity. During the prediction phase, we assume the vehicle's linear velocity and front steering angle are unchanged. In the experimental results, we observed that EKF-based prediction performs well when the demonstration trajectory is near-straight or has constant curvature.
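The prediction phase of this baseline can be sketched as below: the kinematic bicycle model is rolled forward with speed and steering frozen at their last estimates. The full EKF additionally maintains a covariance and a measurement-update step, which are omitted here; the wheelbase `L` is an assumed parameter.

```python
import numpy as np

def bicycle_predict(state, dt, L=2.0):
    """One step of the kinematic bicycle model.
    state = [x, y, heading, speed, steering]; speed and steering are
    held constant, matching the prediction-phase assumption above."""
    x, y, th, v, delta = state
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
    th += (v / L) * np.tan(delta) * dt
    return np.array([x, y, th, v, delta])

def forecast(state, horizon_s, dt=0.1):
    """Roll the model forward to produce the point forecast (an (N, 2)
    array of future x, y positions, including the starting position)."""
    traj = [state[:2].copy()]
    for _ in range(int(horizon_s / dt)):
        state = bicycle_predict(state, dt)
        traj.append(state[:2].copy())
    return np.array(traj)
```

With a nonzero constant steering angle the rollout traces an arc of constant curvature, which is exactly the regime where this baseline performs well.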