Longitudinal Trajectory Prediction of Human-driven Vehicles Near Traffic Lights

06/02/2019, by Geunseob Oh, et al., University of Michigan

Predicting future trajectories of human-driven vehicles is a crucial problem in autonomous driving. While the trajectory prediction problem on highways has been well addressed, the problem in city driving, where the motions of vehicles are governed by traffic lights, has barely been discussed. Despite its importance, no comprehensive model that predicts longitudinal trajectories of vehicles near traffic signals is available. Our idea is to utilize information from vehicle-to-infrastructure communications to model how human drivers drive near traffic signals and to use the model for longitudinal trajectory prediction. We propose a "human policy model" which maps a state of a human-driven vehicle and a traffic signal to a longitudinal acceleration of the vehicle. The proposed model is trained on 471,273 data points sampled from 3,398 real-world historical trips conducted by 583 distinct vehicles near a signalized intersection. We used a neural network to learn a deterministic (most-likelihood) human policy and a mixture density network to learn a probabilistic human policy. Our most-likelihood predictions were as accurate as 0.9-2.3 m for the position and 0.3-0.9 m/s for the speed (the median error between the predicted and the actual value at 5 seconds into the future), depending on the scenario. This result is far superior to the results obtained from other available models. Our probabilistic policy model provides probabilistic contexts for the predicted trajectories. It is also capable of learning multi-modal distributions, which allows the model to capture competing policies, for example, 'pass' or 'stop' in the yellow-light dilemma zone. Finally, we conducted an ablation study to identify the influence of the state features on the deterministic policy model.


I Introduction

Autonomous driving has been more successful on highways than in urban cities, mainly due to the simplicity of the highway driving environment: the absence of traffic signals and pedestrians. Realizing fully autonomous vehicles in urban driving environments is more challenging for the opposite reasons: the presence of traffic signals, frequent interactions with human-driven vehicles, and pedestrians.

One of the major differences between urban and highway driving is traffic lights. In urban driving, especially in the vicinity of traffic lights, exemplified by signalized corridors or intersections, the motions of vehicles are mainly governed by traffic signals. People drive in such a manner that they obey the traffic signals and properly respond to the implicit rules which traffic lights impose. Examples of these implicit traffic rules include stopping before a traffic light in a red phase and maintaining a proper speed in a green phase in a free-flow situation. This is why predicting how human drivers respond to traffic signals is key to successful autonomous driving. If we can accurately predict the trajectories of surrounding human-driven vehicles, then we can leverage such predictions in decision-making, trajectory planning, and control synthesis of a self-driving vehicle.

Despite its importance, no comprehensive model which describes the behavior of human drivers near traffic signals is available yet. A few papers have studied specific instances of the problem, but they are limited to a few simple scenarios: [1], [2], and [3] developed models for vehicles approaching a signalized intersection and making complete stops at a red light; [3], [4], [5], [6], and [7] proposed models for vehicles departing from a signalized intersection from zero speed in a green phase. However, these models are either limited to specific instances of the problem or do not serve as prediction models. For example, the polynomial model proposed in [4] requires model parameters to calculate a trajectory of a vehicle, but such parameters, including total deceleration time, final speed, and maximum acceleration, can only be measured after a trip is complete. Another group of papers, [8] and [9], presented prediction algorithms for vehicles on highways, and [10] presented trajectory predictions in car-following scenarios based on the car models proposed in [11] and [12]. While these two groups describe trajectories of vehicles either on highways or in car-following scenarios, no model describes how human drivers react to traffic signals. A comprehensive model which can predict longitudinal trajectories of human-driven vehicles near traffic lights is missing.

Our idea is to utilize information obtained from vehicle-to-infrastructure (V2I) communications to model and predict how human drivers drive near traffic signals. We pay attention to the following two facts: (1) the behaviors of human-driven vehicles near traffic lights are mainly governed by traffic signals, and (2) the phase and timing of traffic signals can be shared through V2I communications ahead of time. We propose a "human policy" model which maps a state of a human-driven vehicle and the corresponding signal phase and timing (SPaT) to an action (a longitudinal acceleration) of the vehicle.

Fig. 1: (a) depicts the longitudinal trajectory prediction problem near a traffic light for vehicles with through movements. Given an initial state (the initial position and speed of the vehicle), our goal is to predict its future states. (b) describes three example scenarios of the problem. The full list of scenarios is given in Table I. We define 'scenario G' as a prediction problem in which the prediction window starts on a green light and ends on the same green light. 'Scenario GYR' represents a prediction problem where the window spans a set of green, yellow, and red lights. All other scenarios are defined likewise but are not depicted here to avoid redundancy.

In this paper, we trained the proposed human policy model near traffic lights using neural networks for a deterministic (most-likelihood) human policy and mixture density networks (MDN) [13] for a probabilistic human policy. We show that our baseline deterministic neural network model performed far better than existing methods on a test set. Our probabilistic model provides context on the stochastic nature of human driving and measures how confident we are in the predictions. It is capable of learning multi-modal distributions and thus accurately captures two competing human policies (pass or stop) in the yellow-light dilemma zone [14]. For training, validation, and testing, we used 471,273 data points sampled from 3,398 historical naturalistic trips conducted by 583 distinct vehicles driven in a small section of a road in Ann Arbor, MI that includes a signalized intersection.

The remainder of the paper is organized as follows: Section II elaborates on the proposed human policy model, which maps a state of a human-driven vehicle and a traffic signal to an action (a longitudinal acceleration). Section III describes the framework through which we obtain predictions of the longitudinal trajectories of human-driven vehicles using the proposed human policy model. Section IV presents prediction results obtained from the deterministic policy model and analyzes the performance of the predictions compared to existing models from the literature. The probabilistic policy model is utilized in the scenarios where competing policies exist or to obtain probabilistic trajectories given prediction intervals. Finally, Section V offers concluding remarks.

II Human Policy Model

II-A Problem Description

Our goal is to predict longitudinal trajectories of human-driven vehicles in the vicinity of traffic light(s), specifically focusing on through vehicles which pass through a signalized corridor or intersection. As described in Fig. 1(a), we aim to obtain future longitudinal positions and velocities of a vehicle as a function of time. This problem has not only barely been addressed, but is also challenging due to the stochastic motions of vehicles near traffic signals. For example, one driver may prefer hard braking when he/she approaches a red light, while another driver prefers soft braking. One driver may prefer to accelerate hard in a departure scenario, while another prefers a soft departure. Additionally, the reactions of drivers at phase transitions (G to Y, Y to R, or R to G) are different from those at steady phases (G, Y, or R). Another motivational example is human decision making in the yellow-light dilemma zone [14], where a driver approaches a traffic light at high speed. In this example, there usually exist two competing decisions: the driver could either make a sudden stop or pass through the traffic light.

In this sense, we broke the problem down into seven distinct scenarios, which are depicted in Fig. 1(b) and Table I. The idea behind this categorization is our belief that humans react differently at different traffic phases and timings, causing the trajectories to differ significantly depending on the scenario. For example, a trajectory of a human driver in a green phase would be notably different from that in a red phase.

II-B Related Works

A few papers have discussed acceleration models or velocity profiles near traffic signals. [3], [6], [5], and [7] proposed polynomial velocity or acceleration models for vehicles departing from a signalized intersection from zero speed in a green phase. [1], [2], and [3] developed deceleration models for vehicles approaching a signalized intersection and making complete stops at a red light. However, these models only studied very specific instances of the problem. We classified the available studies into the scenarios we defined in Table I.

Scenario | Available Studies
G, D0 (departure from zero speed) | ATL New Zealand (1990), Bham (2002), Dey (2013), Modified IDM (2018)
G, general | None
Y | None
R, A0 (arrival to zero speed) | Bennett (1995), Wang (2005), Modified IDM (2018)
R, general | None
GY | None
YR | None
RG | None
GYR or more | None
TABLE I: Seven distinct scenarios of the prediction problem
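For bookkeeping, the scenario label of a prediction window can be derived mechanically from the sequence of signal phases the window spans (Fig. 1(b)). A minimal sketch, where the function name and the string encoding of the phases are ours rather than the paper's:

```python
def label_scenario(phases):
    """Label a prediction window by the ordered, de-duplicated sequence of
    signal phases it spans, e.g. ['G', 'G', 'Y', 'Y', 'R'] -> 'GYR'."""
    label = ""
    for p in phases:  # one phase entry per prediction step
        if not label or label[-1] != p:
            label += p
    return label

# Examples consistent with Fig. 1(b) and Table I
assert label_scenario(list("GGGG")) == "G"       # window stays within one green phase
assert label_scenario(list("GGYYRR")) == "GYR"   # window spans green, yellow, and red
```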

A number of papers, [4], [14], and [15], which studied the average behavior of drivers near traffic lights, proposed acceleration or deceleration profiles by regressing on field data. However, the model parameters of these profiles, such as total acceleration time and final speed, are not known at the time predictions are made. In other words, they cannot be used for the prediction problem; thus, they are not included in Table I. [4] developed a physics-based polynomial acceleration profile for the free-flow scenario, but its model parameters include the final speed and maximum acceleration, which can only be measured after the trajectory is complete. Moreover, the traffic light is not considered in this model. [15] obtained average deceleration levels on yellow lights when vehicles decelerated in controlled field experiments; however, the calculation of the deceleration levels requires the final speed of a vehicle. In addition, other yellow-light scenarios where vehicles accelerated or maintained their speed were not discussed. [14] obtained a deceleration model on yellow lights as a quadratic function of approaching speed, distance to the intersection, response time, and type of vehicle; however, the response time of a vehicle for most yellow scenarios is not known by the time a prediction is made. Also, the model is limited to yellow deceleration events.

We believe that a comprehensive model which describes the behavior of human drivers in all the scenarios described in Fig. 1(b) and Table I is crucial to accurately predict trajectories of human-driven vehicles near traffic lights. To the best of our knowledge, no such model is available yet.

II-C Proposed Model

The main idea of our comprehensive model of the human driver is to simply incorporate traffic signal information, including phasing (G, Y, R) and timing (time elapsed in the current phase), into the model. The current and future traffic signal information can easily be accessed through V2I communications. Based on this idea, we propose a "human policy" model which maps a state $s$ of a human-driven vehicle and the corresponding traffic light information to an action $a$ (a longitudinal acceleration) of the vehicle ($\pi: s \mapsto a$). Specifically, a state of the model consists of 5 features:

Distance to traffic light ($d$) represents the longitudinal distance of a vehicle to the traffic light that the vehicle is approaching or departing from. It is an essential feature which greatly impacts the behavior of human-driven vehicles. For example, a vehicle approaching a traffic light in a red phase travels slowly when it is close to the traffic light, whereas it can travel fast when it is far away from the traffic light. $d = 0$ represents that the vehicle is at the stop line of a lane that is subject to the traffic light, $d > 0$ means that the vehicle is approaching the traffic light (upstream), and $d < 0$ indicates that the vehicle is departing from the traffic light (downstream).

Longitudinal speed ($v$) indicates the longitudinal speed of a vehicle. $v$ is also an important deciding factor of the behavior. For instance, a vehicle traveling at a relatively low speed compared to the speed of the traffic is more likely to accelerate. Another example is that a vehicle approaching a traffic light in a red phase at high speed tends to brake harder than a vehicle approaching the traffic light at low speed. We assume $v \geq 0$.

State of traffic light ($TL$) represents the phase of the traffic light that a vehicle is subject to. The green, yellow, and red phases are represented by the integers 1, 2, and 4, respectively. Needless to say, the driver behavior in a green phase is different from that in a yellow or a red phase.

Elapsed time ($t_{el}$) is the time elapsed since the last phase change ($t_{el} \geq 0$). Every time a phase transition occurs, $t_{el}$ is initialized to zero and then accumulates as time elapses. While it may not be obvious how much impact this feature has on the behavior, $t_{el}$ accounts for transient behaviors of human drivers near phase changes. For example, a vehicle approaching an intersection in a red phase with a small $t_{el}$, meaning that the phase has just shifted to red, may not be traveling slowly, whereas a vehicle with a large $t_{el}$ is likely to travel slowly or be at a stop. Another motivational example is when a vehicle is stationary in a queue and a phase shift from red to green has just occurred. Depending on the position of the vehicle in the queue, the vehicle may or may not stay stationary for a while. Here, $t_{el}$ is a critical deciding factor of the driver behavior. It is worth mentioning that $t_{el}$ indirectly accounts for the queue formed near traffic lights.

Time of day ($TOD$) represents the time of day as the number of hours elapsed since the beginning of the day ($TOD \in [0, 24)$); $TOD = 0$ and $TOD = 12$ represent midnight and noon, respectively. It is well understood that traffic characteristics, including congestion and speed, differ considerably depending on the time of day; traffic speed is much slower in rush hour than in free-flow traffic. As shown in studies including [16], which quantified the influence of TOD on road traffic speed in rush hours (4-6pm) and free-flow hours (9pm-6am) by investigating hundreds of historical trips near traffic lights, TOD has a significant impact on traffic speed and thus affects the behavior of drivers. Unlike the other 4 features, $TOD$ reflects a macroscopic trend of the traffic.

Fig. 2: The first part of the prediction framework: learning the policy. (a) A deterministic policy ($a = \pi(s)$) is obtained by training a neural network on the dataset. (b) A probabilistic policy ($p(a|s)$) is obtained by training a mixture density network. Here, the distribution is a Gaussian mixture.

In this regard, a state and an output are defined as follows:

$s = [d, v, TL, t_{el}, TOD]^{\top}$   (1)
$a = \ddot{x}$   (2)

Here, $s \in \mathbb{R}^5$ and $a \in \mathbb{R}$.
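For concreteness, the state can be assembled as a plain feature vector; in the sketch below the function and argument names are illustrative, while the feature order follows Eq. 1 and the phase encoding follows the integers given above:

```python
import numpy as np

def make_state(d, v, tl, t_el, tod):
    """Assemble the 5-feature state of Eq. 1.

    d    : signed longitudinal distance to the traffic light [m]
           (0 at the stop line; sign convention assumed: positive upstream)
    v    : longitudinal speed [m/s], v >= 0
    tl   : signal phase encoded as 1 (green), 2 (yellow), or 4 (red)
    t_el : time elapsed since the last phase change [s]
    tod  : time of day in hours since midnight, in [0, 24)
    """
    return np.array([d, v, tl, t_el, tod], dtype=np.float32)

s = make_state(d=85.0, v=12.4, tl=1, t_el=7.2, tod=17.5)  # example state
```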

Due to the stochastic nature of human decision making in driving and the interactions with traffic signals, a simple analytical model such as a linear or a physics-based model cannot accurately represent the nominal or probabilistic behaviors of human drivers near traffic signals. Instead, the proposed model should be learned through (non)parametric regression methods or deep learning methods based on historical driving data. The learned model can be either deterministic or probabilistic.

II-D Model Learning and Data

In order to select the best performing method, we trained the proposed model on a dataset using a number of regression methods, including polynomial regression, support vector machine regression, random forest regression, neural networks, and mixture density networks.

The data consists of 471,273 observations from 3,398 trips that 583 distinct vehicles reported over a span of 2 years on a particular section of a road with a signalized intersection. Each observation is a pair of a state (5 features) and an action (a longitudinal acceleration). Each vehicle reported its 10 Hz GPS signals (coordinates, speeds, and heading angles), which were then used to calculate $d$, $v$, $a$, and $TOD$. The traffic light information $TL$ and $t_{el}$ was obtained from a V2I communication device installed at the signalized intersection. In order to reduce the noise in $v$ and $a$, a least-squares polynomial smoothing filter was used [17].
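The smoothing step can be realized, for instance, with a Savitzky-Golay filter as described in [17]; a sketch assuming 10 Hz signals, where the window length and polynomial order are our illustrative choices rather than the paper's:

```python
import numpy as np
from scipy.signal import savgol_filter

FS = 10.0        # GPS sampling rate [Hz]
DT = 1.0 / FS

def smooth_speed_and_accel(raw_speed):
    """Smooth a raw 10 Hz speed trace and derive acceleration from it using a
    least-squares polynomial (Savitzky-Golay) filter."""
    v = savgol_filter(raw_speed, window_length=11, polyorder=3)
    a = savgol_filter(raw_speed, window_length=11, polyorder=3, deriv=1, delta=DT)
    return v, a

raw_v = 12.0 + 0.3 * np.random.randn(200)   # synthetic noisy speed trace [m/s]
v, a = smooth_speed_and_accel(raw_v)
```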

The data was divided into three sets for training, validation, and testing. The training set, comprising 70% of the data, was used to learn the parameters of the model, while the validation set, comprising 20% of the data, was used to validate the model and select among the trained parameter sets. The test set, comprising the remaining 10% of the data, was set aside for evaluating the performance of the trained models. The three sets are independent of each other.

In the training process, the parameter set with the smallest validation loss was chosen. The large number (583) of individual drivers helps reduce the bias in the model and the possibility of overfitting, allowing the trained model to better represent the nominal behavior of a human-driven vehicle near traffic signals. Based on the performance analysis of the methods on the test set (described in Table II), we chose a neural network, which had the smallest mean absolute error and the highest $R^2$, as our baseline deterministic (most-likelihood) human policy. For the probabilistic human policy, we used an MDN as our baseline model to obtain a conditional distribution which reflects the nature of the human decision-making process in driving.

Method | MAE (m/s²) | R²
SVM, Gaussian kernel | 0.44 | 0.31
Boosted tree | 0.37 | 0.45
Random forest | 0.36 | 0.51
Neural network | 0.30 | 0.68
TABLE II: Performance comparisons on deterministic policy learning

Our baseline deterministic policy is learned by minimizing a loss function $L_{MSE}$, which is the summation of the squared errors between the observed and predicted accelerations, as described below:

$L_{MSE} = \sum_{i=1}^{N} \left( a_i - \pi(s_i) \right)^2$   (3)

The solution of the probabilistic policy learning is obtained by minimizing a loss function $L_{NLL}$, which is the summation of the negative log-likelihood of the observed accelerations under the predicted conditional distribution $p(a|s)$:

$L_{NLL} = -\sum_{i=1}^{N} \log p(a_i | s_i)$   (4)
Fig. 3: The framework for the trajectory predictions is divided into two steps. The first step is to train the proposed policy model on a dataset based on two different approaches: deterministic and probabilistic learning (off-line). The second step is to predict the trajectories of target vehicles by iterative predictions of human policies and propagations of the longitudinal vehicle dynamics over the prediction horizon (on-line).
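For reference, both objectives are straightforward to write out; a minimal NumPy sketch of Eqs. 3 and 4, vectorized over a batch and assuming the per-sample mixture parameters are given as arrays of shape [batch, K]:

```python
import numpy as np

def mse_loss(a_true, a_pred):
    """Eq. 3: summation of squared errors between observed and predicted
    accelerations over the training batch."""
    return np.sum((np.asarray(a_true) - np.asarray(a_pred)) ** 2)

def gmm_nll_loss(a_true, weights, means, sigmas):
    """Eq. 4: summed negative log-likelihood of the observed accelerations
    under per-sample Gaussian mixtures (parameter arrays of shape [batch, K])."""
    a = np.asarray(a_true)[:, None]                        # [batch, 1]
    comp = weights * np.exp(-0.5 * ((a - means) / sigmas) ** 2) \
           / (np.sqrt(2.0 * np.pi) * sigmas)               # [batch, K]
    return -np.sum(np.log(np.sum(comp, axis=1) + 1e-12))
```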

II-E Implementation Details

We implemented both the deterministic and the probabilistic model in Keras. Fig. 2 shows example networks for the deterministic and the probabilistic policy. The implemented deterministic policy neural network consists of 3 hidden layers with ReLU [18] activation functions, followed by a softmax function. The probabilistic policy is a conditional distribution obtained using an MDN; the network consists of 2 hidden layers with ReLU activation functions and 1 MDN layer. The MDN layer is a fully connected neural network which takes an input from the second hidden layer and outputs the parameters of a mixture model. Since Gaussian mixtures were used as the mixture model, the MDN layer outputs a parameter set which contains the following three parameter sets of the Gaussian mixture: the mixture weights $\alpha_i$, the means of the components $\mu_i$, and the variances of the components $\sigma_i^2$, for $i = 1, \dots, K$, where $K$ is the number of components. In the case of $K = 2$, the output dimension is 6. Both models were trained on the same data using the ADAM optimizer [19].
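A sketch of how such networks can be set up in Keras: the hidden-layer widths, the simplified output head of the deterministic network, the way the MDN output is split into its three parameter sets, and the log-standard-deviation parameterization are our assumptions, while the layer counts, ReLU activations, K = 2, and the ADAM optimizer follow the description above.

```python
import math
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

STATE_DIM = 5   # [d, v, TL, t_el, TOD]
K = 2           # number of Gaussian mixture components

# Deterministic (most-likelihood) policy: 3 hidden layers with ReLU activations.
det_policy = keras.Sequential([
    keras.Input(shape=(STATE_DIM,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                      # predicted longitudinal acceleration
])
det_policy.compile(optimizer="adam", loss="mse")

# Probabilistic policy: 2 hidden layers with ReLU + an MDN layer producing
# 3*K raw outputs (mixture weights, means, scales); 6 outputs for K = 2.
inp = keras.Input(shape=(STATE_DIM,))
h = layers.Dense(64, activation="relu")(inp)
h = layers.Dense(64, activation="relu")(h)
raw = layers.Dense(3 * K)(h)
mdn_policy = keras.Model(inp, raw)

def mdn_nll(y_true, raw_out):
    """Per-sample negative log-likelihood of a 1-D Gaussian mixture (Eq. 4)."""
    logits, mu, log_sigma = tf.split(raw_out, 3, axis=-1)
    alpha = tf.nn.softmax(logits, axis=-1)        # mixture weights
    sigma = tf.exp(log_sigma)                     # component standard deviations
    norm = 1.0 / math.sqrt(2.0 * math.pi)
    comp = alpha * norm / sigma * tf.exp(-0.5 * tf.square((y_true - mu) / sigma))
    return -tf.math.log(tf.reduce_sum(comp, axis=-1) + 1e-12)

mdn_policy.compile(optimizer="adam", loss=mdn_nll)
```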

III Prediction Framework

Fig. 3 illustrates the prediction framework used to obtain trajectories. The first part of the framework is the off-line learning of the human policy model, which was described in the previous section. The second part is an iterative process in which we alternate instantaneous predictions and propagations of the vehicle dynamics. The propagation of the longitudinal vehicle dynamics (one-step propagation) is done by applying a zero-order hold to the discrete dynamics equations described in Eqs. 5 and 6.

$x_{k+1} = x_k + v_k \Delta t + \frac{1}{2} a_k \Delta t^2$   (5)
$v_{k+1} = v_k + a_k \Delta t$   (6)

where $x_k$, $v_k$, and $a_k$ represent the longitudinal position, velocity, and acceleration at $t_k$ predicted at $t_0$, with $a_k = \pi(s_k)$ for the deterministic policy and $a_k \sim p(a|s_k)$ for the probabilistic policy.

Given an initial state $s_0$ of a vehicle and a sequence of future states of the traffic light, the predicted state of the vehicle at $t_{k+1}$ is obtained by:

$a_k = \pi(s_k)$   (7)
$s_{k+1} = f(s_k, a_k)$   (8)

where $f$ denotes the one-step propagation of Eqs. 5 and 6 combined with the given traffic signal states.

We can obtain the predicted states up to the end of the horizon (the final state is obtained when $t_k = t_0 + T$, where $T$ is the prediction horizon) given $s_0$ and the future traffic signal states by iterating the predictions and propagations. In the deterministic prediction, $a_k$ is deterministic for an arbitrary $s_k$; thus, the predicted trajectory is deterministic. For the probabilistic prediction, the resulting distribution at each step is a mixture of Gaussian distributions; however, we are not able to directly obtain the probability density function of the propagated states. Instead, we utilize Monte Carlo simulation [20] to obtain the resulting pdf from the rollout trajectories.
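Putting Eqs. 5-8 together, the on-line part of the framework reduces to a short loop that alternates a policy evaluation with a one-step propagation. A sketch with a 0.2 s step and a 5 s horizon, in which the policy and signal-lookup interfaces are placeholders and the sign convention for the distance follows Section II-C:

```python
import numpy as np

DT = 0.2        # prediction / propagation step [s]
HORIZON = 5.0   # prediction horizon [s]

def rollout(d0, v0, t0, policy, signal_at):
    """Alternate policy predictions (Eq. 7) with one-step propagations of the
    longitudinal dynamics under a zero-order hold (Eqs. 5-6, 8).

    policy(state)  -> longitudinal acceleration [m/s^2]
    signal_at(t)   -> (TL phase, elapsed time, time of day) at time t, e.g.
                      taken from SPaT messages received over V2I
    """
    d, v = d0, v0
    traj = [(d, v)]
    for k in range(int(round(HORIZON / DT))):
        tl, t_el, tod = signal_at(t0 + k * DT)
        a = policy(np.array([d, v, tl, t_el, tod]))
        step = v * DT + 0.5 * a * DT ** 2   # distance traveled over one step
        d = d - step                        # d shrinks as the vehicle advances
        v = max(v + a * DT, 0.0)            # non-negative speed (practical safeguard)
        traj.append((d, v))
    return np.array(traj)
```

For the probabilistic policy, the same loop is repeated many times with the acceleration sampled from the predicted mixture at each step, yielding the rollout trajectories used in the Monte Carlo estimate.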

IV Results

Fig. 4: Six samples of the most-likelihood longitudinal trajectories of human-driven vehicles with through movements for scenarios G (left plots), Y (middle plots), and R (right plots). Each scenario includes two samples, and each sample includes two plots (the upper for the predicted distance, the lower for the predicted speed and acceleration). All trajectories were obtained at the initial prediction time by running the iterative predictions and propagations of the dynamics every 0.2 s over the prediction horizon (5 s).
Fig. 5: Six samples of the most-likelihood longitudinal trajectories of human-driven vehicles for scenarios GY (the left two plots), YR (the middle plots), and RG (the right two plots). All trajectories are the prediction results obtained at the initial prediction time. The prediction horizon is 5 s, and predictions were made every 0.2 s.

In this section, prediction results are presented for all the scenarios defined in Table I. This section consists of 6 sub-sections. In Section IV-A, results of the predictions from the deterministic policy model are presented. Specifically, the resulting trajectories of the most-likelihood policy for scenarios G, Y, and R are presented in Fig. 4, and those for scenarios GY, YR, and RG are depicted in Fig. 5. We also present a resulting trajectory for a sample trip of scenario GYR in Fig. 6. In Section IV-B, three performance metrics are defined and are used in the following sections to evaluate prediction performance. In Section IV-C, performance comparisons are made between our baseline deterministic model and models available from the literature for the two scenarios A0 and D0. Section IV-D elaborates on the error statistics of the prediction results for all the scenarios. In Section IV-E, our probabilistic prediction algorithm is utilized to tackle a scenario with competing policies. In Section IV-F, the results of an ablation study are presented to quantify the influence of each feature on the accuracy of our human policy model.

IV-A Individual Results

From the historical data, we obtained a large number of distinct sample trips for each scenario. Among all the trips, 6 sample trips for scenarios GY, YR, and RG were selected, and their prediction results are depicted in Fig. 5. Prediction results for the other scenarios are in the supplementary materials.

Fig. 4 shows most-likelihood trajectory predictions on sample trips for scenarios G, Y, and R. The two scenario G examples depict vehicles coasting in a green phase through the signalized intersection. The top scenario Y example depicts an instance in which a vehicle slows down as it approaches the intersection. The bottom scenario Y example describes an instance where a vehicle passes through the intersection, maintaining its speed. The top scenario R example describes an instance in which a vehicle is at a stop. The bottom scenario R example describes a prediction instance where a vehicle slows down as it approaches the intersection.

Fig. 5 shows most-likelihood trajectory predictions on sample trips for scenarios GY, YR, and RG. The top scenario GY example presents an instance in which a vehicle reacts to a phase shift to yellow and decides to stop before the intersection. The bottom scenario GY example depicts an instance in which a vehicle decides to pass through the intersection by speeding up. The top scenario YR example depicts an instance in which a vehicle slows down as it approaches the intersection. The bottom scenario YR example is an interesting case where the human driver chose to pass through the intersection in a red phase; our prediction algorithm was able to predict this behavior, which violates a traffic rule. The top scenario RG example describes a vehicle departing as the phase shifts to green. Here, one could guess that the vehicle is in a queue, judging from its position and the length of time it remains stationary. Again, our prediction algorithm was able to predict the moment when the vehicle started the departure, capturing the existence of a queue formed near the intersection. The bottom scenario RG example describes an instance where the phase was originally red and then shifted to green, which made the vehicle slow down for the first few seconds.

Fig. 6 shows a most-likelihood trajectory prediction on a sample trip for the GYR scenario. Although the prediction horizon (15 s) is much longer than those of the sample trips in Figs. 4 and 5, our most-likelihood position, speed, and acceleration predictions were qualitatively almost identical to the actual position, speed, and acceleration profiles of the human driver. Note that we were able to capture the transient response of the human driver in the phase shift from green to yellow (shown as the delayed deceleration), as well as the general trends in the GY and YR phase shifts.

Fig. 6: The most-likelihood trajectory prediction on a sample trip for the GYR scenario. The prediction horizon is 15 s, meaning that the 15 s long trajectory was obtained at the initial prediction time. Predictions were made every 0.2 s.

IV-B Performance Measures

In order to make fair performance comparisons, we used the following three evaluation metrics: mean absolute error (MAE), time-weighted absolute error (TWAE), and absolute deviation at the end of the prediction window (ADN), defined in Eqs. 9, 10, and 11. For a graphical description, refer to Fig. 7.

Fig. 7: Calculation of the three performance evaluation metrics is based on the absolute error between the predicted and the actual trajectory. Given the prediction horizon, the metrics used in the evaluation are the mean absolute error (MAE), the time-weighted absolute error (TWAE), and the absolute deviation at the end of the prediction horizon (ADN). The number of samples used to calculate MAE and TWAE is obtained by dividing the prediction horizon by the sampling time.
Fig. 8: Performance comparison against the benchmark models. Note that comparisons are only made for scenarios A0 and D0, since no benchmark model is available for the other scenarios, including G, Y, R, GY, YR, RG, and GYR. The error distributions for our deterministic model are represented as *. The error distributions of the benchmark models for scenario A0 are those of Bennett (B1) [1], Wang (B2) [2], and Modified IDM (B3) [3]; those for scenario D0 are those of Bham (B1) [5] and Dey (B2) [7]. The scenario lengths are 15 s.
$\text{MAE} = \frac{1}{N} \sum_{k=1}^{N} \left| \hat{y}_k - y_k \right|$   (9)
$\text{TWAE} = \frac{\sum_{k=1}^{N} t_k \left| \hat{y}_k - y_k \right|}{\sum_{k=1}^{N} t_k}$   (10)
$\text{ADN} = \left| \hat{y}_N - y_N \right|$   (11)

where $y_k$ and $\hat{y}_k$ are the actual and predicted values (position or speed) at $t_k$, and $N$ is the number of samples over the prediction horizon.
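A short sketch of the three metrics for a single predicted trajectory, assuming the time-weighting in TWAE follows Eq. 10:

```python
import numpy as np

def prediction_metrics(y_pred, y_true, dt=0.2):
    """MAE, TWAE, and ADN (Eqs. 9-11) of one predicted trajectory against the
    ground truth, both sampled every dt over the prediction horizon."""
    err = np.abs(np.asarray(y_pred) - np.asarray(y_true))
    t = dt * np.arange(1, len(err) + 1)      # prediction times t_1 ... t_N
    mae = err.mean()
    twae = np.sum(t * err) / np.sum(t)       # errors further ahead weigh more
    adn = err[-1]                            # deviation at the end of the horizon
    return mae, twae, adn
```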

IV-C Benchmarks for Scenarios A0 and D0

In this subsection, performance comparisons are made for the two scenarios A0 and D0. It is important to mention that we only benchmark scenarios A0 and D0 because they are the only scenarios for which previous models are available; for all the other scenarios, including G, Y, R, GY, YR, RG, and beyond, no model exists in the literature. As shown in Fig. 8, our deterministic prediction model performs far better than all the currently available benchmarks, for both position and speed predictions, in all 3 metrics defined in the previous section.


IV-D Prediction Statistics, All Scenarios

This section elaborates on the most important result of this work: the statistics of the performance of our most-likelihood trajectory predictions for the 6 scenarios (G, Y, R, GY, YR, RG). The resulting box plots are shown in Fig. 9. Among the 3 metrics, ADN is the largest and MAE is the smallest in all the scenarios. This is because a prediction deviates further from the true value the further into the future it is made (i.e., the longer the prediction horizon becomes). Note that the median ADN with a prediction horizon of 5 s is as accurate as 0.9-2.3 m for the position and 0.3-0.9 m/s for the speed depending on the scenario, and 0.9-1.1 m in scenarios Y and R, the scenarios where the predictions were the most accurate (Fig. 9).

The outliers in Fig. 9, depicted as '+', occur mostly in the scenarios where there are two or more competing human policies, for example, 'pass' or 'stop' in the yellow dilemma zone. Indeed, the biggest outlier (based on the speed prediction ADN) of all the sample trips in the dataset is the one marked with a red rectangle in scenario YR (Fig. 9).

IV-E Probabilistic Prediction

The most-likelihood trajectory prediction of the aforementioned outlier trip is depicted in Fig. 10(a). This is a case where a probabilistic trajectory comes in handy. Our MDN model is capable of reproducing multi-modal distributions and was thus able to depict the other, competing policy of the outlier trip, as shown in Fig. 10(b). Our probabilistic prediction algorithm is not only capable of capturing competing policies, but is also able to provide context on the predictions through the most probable trajectories of the competing policies, the trajectories whose probability density is larger than a threshold, and prediction intervals, as depicted in Fig. 10(b).

For a scenario with a prediction horizon of 5 s, the computation time to obtain a most-likelihood trajectory prediction is 5-10 ms on a single-core personal laptop with an i7-6500U 2.50 GHz CPU and 8 GB RAM. However, it takes several seconds (5-10 s for 1,000 rollout trajectories) to construct the probabilistic predictions on the same machine due to the heavy computation of the Monte Carlo simulation.
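Once the rollout trajectories are available, the prediction intervals shown in Fig. 10(b) can be read off pointwise; a sketch assuming the rollouts are stacked into an array of shape [n_rollouts, n_steps] and that a central interval is desired:

```python
import numpy as np

def prediction_interval(rollouts, level=0.9):
    """Pointwise central prediction interval from Monte Carlo rollouts.

    rollouts : array of shape [n_rollouts, n_steps], e.g. 1,000 x 25 for a
               5 s horizon sampled every 0.2 s
    """
    lo = np.percentile(rollouts, 100.0 * (1.0 - level) / 2.0, axis=0)
    hi = np.percentile(rollouts, 100.0 * (1.0 + level) / 2.0, axis=0)
    return lo, hi
```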

IV-F Ablation Study

In order to investigate the influence of each feature on the performance of the deterministic policy, we conducted an ablation study in which we removed one feature at a time from the state vector. We used the same neural network architecture and the same training and testing process as our baseline most-likelihood neural network for the performance evaluation. As shown in Table III, the influence varies considerably across the features: removing the least influential feature barely changes the error, whereas removing the most influential features degrades both the MAE and $R^2$ substantially.
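Procedurally, the ablation amounts to deleting one column of the state matrix and retraining the same architecture; a sketch, where the feature ordering and the helper names (build_model, evaluate) are ours and are supplied by the caller:

```python
import numpy as np

FEATURES = ["d", "v", "TL", "t_el", "TOD"]    # assumed ordering of the state

def drop_feature(S, idx):
    """Remove the idx-th feature column from a state matrix S of shape [N, 5]."""
    return np.delete(S, idx, axis=1)

def ablation_study(S_train, a_train, S_test, a_test, build_model, evaluate):
    """Retrain the baseline architecture once per removed feature and collect
    the test metrics; build_model and evaluate are caller-supplied helpers."""
    results = {}
    for i, name in enumerate(FEATURES):
        model = build_model(input_dim=S_train.shape[1] - 1)
        model.fit(drop_feature(S_train, i), a_train, epochs=50, verbose=0)
        results[name] = evaluate(model, drop_feature(S_test, i), a_test)
    return results
```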

Features | MAE (m/s²) | R²
Baseline (all 5 features) | 0.30 | 0.68
One feature removed | 0.31 | 0.66
One feature removed | 0.32 | 0.58
One feature removed | 0.36 | 0.45
One feature removed | 0.36 | 0.46
One feature removed | 0.37 | 0.51
TABLE III: Ablation study on the deterministic policies
Fig. 9: Performance evaluation of our most-likelihood trajectory predictions on the 6 scenarios. The left three plots are box plots of the position prediction errors, based on the metrics described in Section IV-B. The right three plots are those of the speed prediction errors. The number of sample trips in each scenario is: G: 533, Y: 51, R: 359, GY: 57, YR: 42, RG: 74. The prediction horizon is 5 s, and predictions were made every 0.2 s.
Fig. 10: The comparison between the predicted and the actual trajectory of the biggest outlier trip (marked with red rectangles in Fig. 9) in the dataset. The left plot shows how the most-likelihood trajectory deviated far from the actual trajectory: our deterministic model predicted the driver to make a stop before the intersection, whereas in reality the driver passed through the intersection. The right plot shows our probabilistic trajectories. Specifically, the two peak trajectories and the prediction intervals are illustrated as dotted lines and shades. Predictions were made every 0.2 s.

V Conclusion

One of the remaining challenges for autonomous driving is how to accurately predict the longitudinal trajectories of human-driven vehicles near traffic lights. In this paper, we address this gap by proposing human policy models at traffic lights and a prediction framework which utilizes the most-likelihood human policy or the probabilistic human policy to make longitudinal trajectory predictions. Our models are built upon a simple idea: to utilize traffic signal phase and timing information obtained from V2I communication, together with the longitudinal vehicle kinematics, in modeling human policies and making predictions. The human policies are learned using supervised learning: a neural network for the most-likelihood predictions and a Gaussian-mixture-based mixture density network for the probabilistic, multi-modal predictions. We show that our models beat the benchmarks in the scenarios where a previous study is available and are able to produce accurate trajectory predictions in the scenarios where no previous model is capable of making a prediction. Our human policy model is a comprehensive model which is capable of predicting trajectories of human-driven vehicles in the vicinity of traffic lights in all the scenarios defined in Table I.

The proposed human policy model near traffic lights can be used to predict future trajectories of human-driven vehicles. The predicted trajectories can then be utilized in various applications in the decision making, trajectory planning, and control of a host vehicle (either a self-driving car or a human-driven car). Our current interest in applying the prediction models includes an extension of the work presented in [21]: we plan to improve the performance of the energy-efficient planning algorithm by leveraging our prediction model. In conclusion, our human policy model helps us better understand and predict the behaviors of human drivers in the vicinity of traffic signals, and it can be leveraged to improve autonomous driving in urban environments, including the decision-making, planning, and control of host vehicles.

References

  • [1] C. R. Bennett and R. Dunn, “Driver deceleration behavior on a freeway in New Zealand,” Transportation Research Record, no. 1510, 1995.
  • [2] J. Wang, K. K. Dixon, H. Li, and J. Ogle, “Normal deceleration behavior of passenger vehicles at stop sign–controlled intersections evaluated with in-vehicle global positioning system data,” Transportation Research Record, vol. 1937, no. 1, pp. 120–127, 2005.
  • [3] C. Sun, X. Shen, and S. Moura, “Robust optimal eco-driving control with uncertain traffic signal timing,” in 2018 Annual American Control Conference (ACC).   IEEE, 2018, pp. 5548–5553.
  • [4] R. Akçelik and D. Biggs, “Acceleration profile models for vehicles in road traffic,” Transportation Science, vol. 21, no. 1, pp. 36–54, 1987.
  • [5] G. Bham and R. Benekohal, “Development, evaluation, and comparison of acceleration models,” in 81st Annual Meeting of the Transportation Research Board, Washington, DC, vol. 6, 2002.
  • [6] ATS, “Acceleration/deceleration profiles at urban intersections.”
  • [7] P. P. Dey, S. Nandal, and R. Kalyan, “Queue discharge characteristics at signalised intersections under mixed traffic conditions,” European Transport, vol. 55, no. 7, pp. 1–12, 2013.
  • [8] J. Park, D. Li, Y. L. Murphey, J. Kristinsson, R. McGee, M. Kuang, and T. Phillips, “Real time vehicle speed prediction using a neural network traffic model,” in The 2011 International Joint Conference on Neural Networks.   IEEE, 2011, pp. 2991–2996.
  • [9] B. Jiang and Y. Fei, “Vehicle speed prediction by two-level data driven models in vehicular networks,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 7, pp. 1793–1801, 2016.
  • [10] K. Fadhloun, H. Rakha, and P. Eng, “A vehicle dynamics model for estimating typical vehicle accelerations,” Transportation Research Record: Journal of the Transportation Research Board, vol. 35, no. 36, p. 37, 2015.
  • [11] M. Treiber, A. Hennecke, and D. Helbing, “Congested traffic states in empirical observations and microscopic simulations,” Physical Review E, vol. 62, no. 2, p. 1805, 2000.
  • [12] P. G. Gipps, “A behavioural car-following model for computer simulation,” Transportation Research Part B: Methodological, vol. 15, no. 2, pp. 105–111, 1981.
  • [13] C. M. Bishop, “Mixture density networks,” Citeseer, Tech. Rep., 1994.
  • [14] T. J. Gates, D. A. Noyce, L. Laracuente, and E. V. Nordheim, “Analysis of driver behavior in dilemma zones at signalized intersections,” Transportation Research Record, vol. 2030, no. 1, pp. 29–39, 2007.
  • [15] I. El-Shawarby, H. Rakha, A. Amer, and C. McGhee, “Impact of driver and surrounding traffic on vehicle deceleration behavior at onset of yellow indication,” Transportation Research Record, vol. 2248, no. 1, pp. 10–20, 2011.
  • [16] G. Oh, D. J. LeBlanc, and H. Peng, “Vehicle Energy Dataset (VED), a large-scale dataset for vehicle energy consumption research,” arXiv preprint arXiv:1905.02081, 2019.
  • [17] R. W. Schafer et al., “What is a Savitzky-Golay filter?,” IEEE Signal Processing Magazine, vol. 28, no. 4, pp. 111–117, 2011.
  • [18] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, pp. 315–323.
  • [19] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [20] R. Y. Rubinstein and D. P. Kroese, Simulation and the Monte Carlo method.   John Wiley & Sons, 2016, vol. 10.
  • [21] G. Oh and H. Peng, “Eco-driving at signalized intersections: What is possible in the real-world?” in 2018 21st International Conference on Intelligent Transportation Systems (ITSC).   IEEE, 2018, pp. 3674–3679.