Structural Recurrent Neural Network for Traffic Speed Prediction

02/18/2019, by Youngjoo Kim, et al.

Deep neural networks have recently demonstrated traffic prediction capability using time series data obtained from sensors mounted on road segments. However, capturing the spatio-temporal features of traffic data often requires a significant number of trainable parameters, increasing the computational burden. In this work we demonstrate that embedding the topological information of the road network improves the process of learning traffic features. We combine a graph of the vehicular road network with recurrent neural networks (RNNs) to infer the interaction between adjacent road segments as well as the temporal dynamics. The topology of the road network is converted into a spatio-temporal graph to form a structural RNN (SRNN). The proposed approach is validated on traffic speed data from the road network of the city of Santander, Spain. The experiments show that the graph-based method outperforms state-of-the-art methods based on spatio-temporal images while requiring far fewer parameters to train.


1 Introduction

Large traffic networks generate a large volume of data and require predictions of future traffic states based on current and historical traffic data. Traffic data are usually obtained by magnetic induction loop detectors mounted on road segments. These data include traffic speed and flow, where the term traffic flow is used interchangeably with traffic volume and traffic counts. Machine learning approaches have recently been applied to traffic prediction tasks owing to the massive volume of traffic data that has become available. The sequence of traffic data on each road segment is essentially a time series, yet the time series of different road segments are spatially related to one another. Capturing the spatio-temporal patterns of vehicular traffic is therefore an important task and an integral part of traffic network control.

Preliminary results on traffic forecasting with convolutional neural networks (CNNs) have been reported [1, 2]. CNNs have been demonstrated to be effective in understanding spatial features: successive convolutional layers followed by max pooling operations increase the field of view of high-level layers and allow them to capture high-order features of the input data. Recurrent neural networks (RNNs) have also been incorporated, treating traffic prediction as time series forecasting. Different gating mechanisms, such as long short-term memories (LSTMs) [2, 3] and gated recurrent units (GRUs) [4], have been tested with various architectures. Instead of dealing with spatial and temporal features separately, a novel approach was proposed in [5] where the traffic data are converted into spatio-temporal images that are fed into a CNN; the deep neural network captures the spatio-temporal characteristics by learning from the images. Recently, a capsule network (CapsNet) architecture proposed in [6] has been demonstrated to outperform the state-of-the-art in complex road networks. The dynamic routing algorithm of the CapsNet replaces the max pooling operation of the CNN, resulting in more accurate predictions but more parameters to train. The Gaussian process (GP) [7] is another data-driven approach, regarded as a kernel-based learning algorithm. GPs have repeatedly been demonstrated to be powerful in exploring the implicit relationship among data to predict the value at an unseen point. Although comparative studies [8, 9] have shown that GPs are effective in short-term traffic prediction, they suffer from cubic time complexity in the size of the training data.

Inspired by ideas from [10, 11], this paper develops a structural RNN (SRNN) for traffic speed prediction by incorporating topological information into the sequence learning capability of RNNs. Considering each road segment as a node, the spatio-temporal relationship is represented by spatial edges and temporal edges. All the nodes and edges are associated with RNNs that are jointly trained. A computationally efficient SRNN is implemented in this paper and its performance is evaluated on real data.

The remainder of the paper is organized as follows. Section 2 describes the traffic speed prediction problem of interest and the proposed SRNN architecture. Section 3 validates the performance of the proposed approach. Finally, Section 4 concludes the paper.

2 Traffic Speed Prediction

2.1 Problem Formulation

In this study, we address the problem of short-term traffic speed prediction based on historical traffic speed data and a road network graph. Suppose we deal with $N$ road segments where the loop detectors are installed. Let $x_i^t$ represent the traffic speed on road segment $i$ at time step $t$. Given a sequence of traffic speed data $x_i^{t-T+1}, \dots, x_i^t$ for the $N$ road segments over the $T$ most recent time steps, we predict the future traffic speed $x_i^{t+1}$ on each road segment, where $t$ denotes the current time step and $T$ denotes the length of the data sequence under consideration.

2.2 Spatio-Temporal Graph Representation

We use a spatio-temporal graph representation as in [10, 11]. Let $\mathcal{G} = (\mathcal{V}, \mathcal{E}_S, \mathcal{E}_T)$ denote the spatio-temporal graph, where $\mathcal{V}$, $\mathcal{E}_S$, and $\mathcal{E}_T$ denote the set of nodes, the set of spatial edges, and the set of temporal edges, respectively.

Figure 1: An example spatio-temporal graph. (a) Spatio-temporal graph: nodes represent road segments and are linked by spatial edges $\mathcal{E}_S$ and temporal edges $\mathcal{E}_T$. (b) Unrolled over time: the spatio-temporal graph is unrolled over time using the temporal edges. The edges are labelled with the corresponding feature vectors.

In this study, the nodes in the graph correspond to the road segments of interest; thus $|\mathcal{V}| = N$. The spatial edges represent the dynamics of the traffic interaction between two adjacent road segments, and the temporal edges represent the dynamics of the temporal evolution of the traffic speed on each road segment. Fig. 1(a) shows an example spatio-temporal graph. Nodes represent road segments, and the connections between road segments are represented by spatial edges. Note that our approach differs from [11] in that spatial edges are established only if two road segments are connected, whereas [11] employs an attention model on a fully-connected graph. In addition, we use two spatial edges in opposite directions to link neighbouring nodes, which takes into account the directionality of the interaction between road segments. A temporal edge originating from a node at one time step points to the same node at the next time step. The spatial graph is unrolled over time using the temporal edges to form $\mathcal{G}$, as depicted in Fig. 1(b), where the edges are labelled with the corresponding feature vectors.

The feature of node $v_i$ at time step $t$ is $x_i^t$, the traffic speed on the road segment. The feature vector of spatial edge $e_{ij}$ at time step $t$ is $[x_i^t, x_j^t]$, obtained by concatenating the features of nodes $v_i$ and $v_j$ as a row vector. Two spatial edges linking nodes $v_i$ and $v_j$ in opposite directions therefore have different feature vectors, i.e., $[x_i^t, x_j^t]$ and $[x_j^t, x_i^t]$. The feature vector of temporal edge $e_{ii}$ at time step $t$ is $[x_i^{t-1}, x_i^t]$, obtained by concatenating the features of node $v_i$ at the previous and current time steps.
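As an illustration, a minimal sketch of how these feature vectors could be assembled is given below. The function and variable names are ours, assuming the road graph is given as an adjacency list and the speeds as an $N \times T$ array; this is not the authors' implementation.

```python
import numpy as np

def build_st_features(adjacency, speeds, t):
    """Build node, spatial-edge and temporal-edge features at time step t.

    adjacency : dict mapping segment index -> list of adjacent segment indices
    speeds    : np.ndarray of shape (N, T); speeds[i, t] is the speed on segment i
    """
    # Node feature: the traffic speed x_i^t on segment i.
    node_feats = {i: np.array([speeds[i, t]]) for i in adjacency}

    # Two directed spatial edges per pair of adjacent segments,
    # each with its own ordered feature vector [x_i^t, x_j^t].
    spatial_feats = {}
    for i, neighbours in adjacency.items():
        for j in neighbours:
            spatial_feats[(i, j)] = np.array([speeds[i, t], speeds[j, t]])

    # A temporal edge links the same node across consecutive time steps,
    # with feature [x_i^{t-1}, x_i^t].
    temporal_feats = {
        i: np.array([speeds[i, t - 1], speeds[i, t]]) for i in adjacency
    } if t > 0 else {}

    return node_feats, spatial_feats, temporal_feats
```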

2.3 Model Architecture

Figure 2: Architecture of the SRNN from the perspective of node $v_i$, drawn with the unrolled spatio-temporal graph.

In our SRNN architecture, the set of nodes $\mathcal{V}$, the set of spatial edges $\mathcal{E}_S$, and the set of temporal edges $\mathcal{E}_T$ are associated with RNNs, referred to as the nodeRNN, the spatial edgeRNN, and the temporal edgeRNN, respectively. The SRNN is derived from the factor graph representation [10]. Our architecture is the simplest case, in which all nodes share one factor and all spatial and temporal edges share one factor each. This amounts to assuming that the dynamics of the spatio-temporal interactions are semantically the same for all road segments, which keeps the overall parametrization compact and makes the architecture scalable to a varying number of road segments. Readers interested in the factor graph representation can refer to [12].

Fig. 2 visualises the overall architecture. For each node $v_i$, the sequence of node features $x_i^{t-T+1}, \dots, x_i^t$ is fed into the architecture. Each time a node feature enters, the SRNN predicts the node label, which corresponds to the traffic speed at the next time step, $x_i^{t+1}$. The input to each edgeRNN is the feature of an edge incident to node $v_i$ in the spatio-temporal graph. The node feature is concatenated with the outputs of the edgeRNNs and fed into the nodeRNN.

We use LSTMs for the RNNs. The hidden state of the nodeRNN has a dimension of 128, and that of the edgeRNNs has a dimension of 256. We employ embedding layers that convert the inputs into 128-dimensional vectors, with a rectified linear unit (ReLU) activation function to introduce nonlinearity.
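A minimal PyTorch sketch of this architecture is given below, for a single node and a single incident spatial edge at one time step. Module and variable names are ours, and the way the edgeRNN hidden states are concatenated with the embedded node feature is our reading of the description above, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SRNN(nn.Module):
    """Sketch of the structural RNN: one shared spatial edgeRNN, one shared
    temporal edgeRNN and one shared nodeRNN, as described in the text."""

    def __init__(self, embed_dim=128, node_hidden=128, edge_hidden=256):
        super().__init__()
        # Embedding layers with ReLU map raw features to 128-d vectors.
        self.node_embed = nn.Sequential(nn.Linear(1, embed_dim), nn.ReLU())
        self.sedge_embed = nn.Sequential(nn.Linear(2, embed_dim), nn.ReLU())
        self.tedge_embed = nn.Sequential(nn.Linear(2, embed_dim), nn.ReLU())

        self.sedge_rnn = nn.LSTMCell(embed_dim, edge_hidden)  # spatial edgeRNN
        self.tedge_rnn = nn.LSTMCell(embed_dim, edge_hidden)  # temporal edgeRNN
        # The nodeRNN consumes the embedded node feature concatenated with
        # the hidden states of the incident edgeRNNs.
        self.node_rnn = nn.LSTMCell(embed_dim + 2 * edge_hidden, node_hidden)
        self.out = nn.Linear(node_hidden, 1)  # predicted speed at t+1

    def forward(self, node_x, sedge_x, tedge_x, states):
        """One time step for a single node.
        node_x:  (B, 1) node feature x_i^t
        sedge_x: (B, 2) feature of a spatial edge incident to the node
        tedge_x: (B, 2) feature of the node's temporal edge
        states:  dict of (h, c) pairs for the three LSTM cells (None to reset)
        """
        hs, cs = self.sedge_rnn(self.sedge_embed(sedge_x), states["s"])
        ht, ct = self.tedge_rnn(self.tedge_embed(tedge_x), states["t"])
        node_in = torch.cat([self.node_embed(node_x), hs, ht], dim=1)
        hn, cn = self.node_rnn(node_in, states["n"])
        states = {"s": (hs, cs), "t": (ht, ct), "n": (hn, cn)}
        return self.out(hn), states
```

In general a node has several incident spatial edges; the sketch shows one for brevity, whereas a full implementation would feed each incident edge feature through the shared edgeRNN.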

3 Performance Validation

3.1 Dataset

We use a traffic speed dataset from the case studies of the SETA EU project [13]. Traffic speed measurements were taken every 15 minutes in the city centre of Santander, Spain, throughout 2016. The sparsely occurring missing measurements are imputed with the average of the speed data recorded at the same time of day on the other days. We use the data from the first 9 months as the training set and the data from the remaining 3 months as the evaluation set.
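The preprocessing described above can be sketched as follows; the file name and DataFrame layout (a 15-minute DatetimeIndex with one column per road segment) are assumptions for illustration, not the authors' pipeline.

```python
import pandas as pd

# Hypothetical file: one column per road segment, 15-minute DatetimeIndex.
df = pd.read_csv("santander_speeds_2016.csv", index_col=0, parse_dates=True)

# Impute each missing value with the mean speed recorded at the same
# time of day over the other days of the year.
time_of_day = df.index.time
df = df.groupby(time_of_day).transform(lambda col: col.fillna(col.mean()))

# First nine months for training, last three months for evaluation.
train = df.loc["2016-01-01":"2016-09-30"]
evaluation = df.loc["2016-10-01":"2016-12-31"]
```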

We compare the performance of the proposed SRNN with the CapsNet architecture in [6], which outperforms the state-of-the-art on this dataset. Both methods perform the following two speed prediction tasks:

  • Task 1: prediction based on 10-time-step data ($T = 10$)

  • Task 2: prediction based on 15-time-step data ($T = 15$)

for two different sets of road segments, as depicted in Fig. 3. The 50 road segments of interest, where the speed sensors are installed, are marked in red ($N = 50$).

Figure 3: Two sets of road segments used in the experiment, shown in panels (a) and (b). Each set contains 50 road segments marked in red.

3.2 Implementation Details

As one can see from Fig. 3, the road segments of interest are located sparsely. To construct the spatial graph in the proposed architecture, we consider road segments adjacent to segment $i$ if they have the shortest distance to segment $i$, where the distance is the number of links traversed from one node to another. Our model has been developed based on the PyTorch implementation of [11].
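A minimal sketch of this adjacency rule is given below, assuming the road network is available as an adjacency dictionary. The helper names are ours, and linking each segment of interest to the other segments of interest at minimum hop distance is our reading of the description above.

```python
from collections import deque

def hop_distances(graph, source):
    """Breadth-first search returning the number of links traversed from
    `source` to every reachable node in the road graph (adjacency dict)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def spatial_edges(graph, segments_of_interest):
    """Link each segment of interest to the nearest other segments of
    interest (minimum hop distance), with one directed edge each way."""
    edges = set()
    for i in segments_of_interest:
        dist = hop_distances(graph, i)
        others = [j for j in segments_of_interest if j != i and j in dist]
        if not others:
            continue
        d_min = min(dist[j] for j in others)
        for j in others:
            if dist[j] == d_min:
                edges.add((i, j))
                edges.add((j, i))
    return edges
```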

The proposed architecture and the CapsNet in [6] are given their best settings. Our network is trained with a batch size of 8, a starting learning rate of 0.001, and an exponential decay rate of 0.99. The CapsNet is trained with a batch size of 10, a starting learning rate of 0.0005, and an exponential decay rate of 0.9999. Both networks employ the mean squared error (MSE) as the loss function and the Adam optimizer [14]. The traffic speed data, measured in km/h, are scaled to a fixed range before being fed into the networks.
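The training setup for our network can be sketched as follows. The synthetic tensors and the single-step call into the SRNN sketch from Section 2.3 are illustrative simplifications, not the authors' training code; a full implementation would iterate over the $T$ time steps of each input sequence and over all nodes and edges.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data purely for illustration (random tensors).
node_x  = torch.rand(1000, 1)   # node feature x_i^t
sedge_x = torch.rand(1000, 2)   # spatial edge feature
tedge_x = torch.rand(1000, 2)   # temporal edge feature
target  = torch.rand(1000, 1)   # label x_i^{t+1}
train_loader = DataLoader(TensorDataset(node_x, sedge_x, tedge_x, target),
                          batch_size=8, shuffle=True)

model = SRNN()  # the sketch module from Section 2.3
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)  # decay rate 0.99

for epoch in range(20):
    for nx, sx, tx, y in train_loader:
        # A single SRNN step is shown for brevity; see the note above.
        pred, _ = model(nx, sx, tx, {"s": None, "t": None, "n": None})
        loss = criterion(pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```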

3.3 Performance Metrics

Statistical performance metrics are required to validate the overall performance of the networks. The mean relative error (MRE) is one of the most common metrics for quantifying the accuracy of prediction models. However, an error at a larger speed value can yield a smaller MRE and vice versa, producing inconsistent results, as witnessed in [6]. Thus, we employ the mean absolute error (MAE) and the root mean squared error (RMSE) as more intuitive metrics for assessing the speed prediction performance. These performance metrics are defined as:

$$\mathrm{MAE} = \frac{1}{M} \sum_{i=1}^{N} \sum_{t \in \mathcal{T}} \left| \hat{x}_i^t - x_i^t \right| \qquad (1)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{M} \sum_{i=1}^{N} \sum_{t \in \mathcal{T}} \left( \hat{x}_i^t - x_i^t \right)^2} \qquad (2)$$

where $\hat{x}_i^t$ and $x_i^t$ denote the speed prediction on road segment $i$ at time step $t$ and its true value, respectively. Here, $\mathcal{T}$ denotes the set of time steps in the evaluation set, and $M$ represents the number of speed data points in the evaluation set.
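For reference, a small sketch of these two metrics (our helper, not part of the original implementation):

```python
import numpy as np

def mae_rmse(pred, true):
    """MAE and RMSE over all road segments and evaluation time steps.
    `pred` and `true` are arrays of shape (N, |T|) in km/h."""
    err = np.asarray(pred) - np.asarray(true)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, rmse
```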

3.4 Results

Table 1 shows the resulting performance of the neural networks on the two tasks. The best performance over 20 epochs is reported for each method. The results indicate that the SRNN performs slightly better, by roughly 2% in RMSE, showing only a modest performance difference between the methods and between the tasks. The distinguishing difference lies in the number of trainable parameters, which translates into computational burden. The number of trainable parameters of the CapsNet grows with the input sequence length and therefore differs between Task 1 and Task 2, whereas the number of trainable parameters of the SRNN is independent of the sequence length $T$: the size of the trainable parameter set of the SRNN is determined only by the sizes of the RNNs. The image-based approaches [5, 6] would face a significant increase in computation time as the sequence length $T$ and the number of road segments $N$ increase. In contrast, the SRNN is scalable to varying $T$ and $N$, and can learn the spatio-temporal traffic characteristics with far fewer parameters, given the topological information.

             CapsNet             SRNN
             MAE      RMSE       MAE      RMSE
  Task 1     5.720    9.133      5.632    8.906
  Task 2     5.741    9.172      5.588    8.975

Table 1: Speed prediction performance (unit: km/h).
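As a quick sanity check of the parameter argument above, the trainable parameters of the SRNN sketch from Section 2.3 can be counted directly; the count depends only on the layer sizes, not on $T$ or $N$. Note this counts our illustrative module, not the authors' exact network.

```python
# Count trainable parameters of the sketch SRNN module; the result does not
# depend on the sequence length T or the number of road segments N.
model = SRNN()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params}")
```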

4 Conclusion

This paper presents an SRNN architecture that combines the road network map with traffic speed data to predict future traffic speed. The proposed architecture captures the spatio-temporal relationships in the traffic data with far fewer parameters than the image-based state-of-the-art methods.

Existing methods generally provide predictions on road segments where the traffic history is available. Our future work will focus on predictions in road networks with sparse data.

References

  • [1] Y. Lv, Y. Duan, W. Kang, Z. Li, and F.-Y. Wang, “Traffic flow prediction with big data: A deep learning approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 2, pp. 865–873, 2015.
  • [2] J. Zhang, Y. Zheng, and D. Qi, “Deep spatio-temporal residual networks for citywide crowd flows prediction,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2017, pp. 1655–1661.
  • [3] X. Ma, H. Yu, Y. Wang, and Y. Wang, “Large-scale transportation network congestion evolution prediction using deep learning theory,” PloS one, vol. 10, no. 3, pp. 1–17, 2015.
  • [4] Y. Wu, H. Tan, L. Qin, B. Ran, and Z. Jiang, “A hybrid deep learning based traffic flow prediction method and its understanding,” Transportation Research Part C: Emerging Technologies, vol. 90, pp. 166–180, 2018.
  • [5] X. Ma, Z. Dai, Z. He, J. Ma, Y. Wang, and Y. Wang, “Learning traffic as images: a deep convolutional neural network for large-scale transportation network speed prediction,” Sensors, vol. 17, no. 4, p. 818, 2017.
  • [6] Y. Kim, P. Wang, Y. Zhu, and L. Mihaylova, “A capsule network for traffic speed prediction in complex road networks,” in Proceedings of the Symposium Sensor Data Fusion: Trends, Solutions, and Applications, 2018.
  • [7] P. Wang, Y. Kim, L. Vaci, H. Yang, and L. Mihaylova, “Short-term traffic prediction with vicinity Gaussian process in the presence of missing data,” in Proceedings of the Symposium Sensor Data Fusion: Trends, Solutions, and Applications, 2018.
  • [8] Y. Xie, K. Zhao, Y. Sun, and D. Chen, “Gaussian processes for short-term traffic volume forecasting,” Transportation Research Record, vol. 2165, no. 1, pp. 69–78, 2010.
  • [9] J. Chen, K. H. Low, Y. Yao, and P. Jaillet, “Gaussian process decentralized data fusion and active sensing for spatiotemporal traffic modeling and prediction in mobility-on-demand systems,” IEEE Transactions on Automation Science and Engineering, vol. 12, no. 3, pp. 901–921, 2015.
  • [10] A. Jain, A. R. Zamir, S. Savarese, and A. Saxena, “Structural-RNN: Deep learning on spatio-temporal graphs,” in Proceedings of the Conference on Computer Vision and Pattern Recognition, 2016, pp. 5308–5317.
  • [11] A. Vemula, K. Muelling, and J. Oh, “Social attention: Modeling attention in human crowds,” in Proceedings of the International Conference on Robotics and Automation, IEEE, 2018, pp. 1–7.
  • [12] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498–519, 2001.
  • [13] SETA EU Project, A ubiquitous data and service ecosystem for better metropolitan mobility, Horizon 2020 Programme, 2016. Available: http://setamobility.weebly.com/.
  • [14] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.