Traffic forecasting approaches are critical to developing adaptive strategies for mobility. Traffic patterns have complex spatial and temporal dependencies that make accurate forecasting on large highway networks a challenging task. Recently, diffusion convolutional recurrent neural networks (DCRNNs) have achieved state-of-the-art results in traffic forecasting by capturing the spatiotemporal dynamics of the traffic. Despite the promising results, adopting DCRNN for large highway networks still remains elusive because of computational and memory bottlenecks. We present an approach to apply DCRNN for a large highway network. We use a graph-partitioning approach to decompose a large highway network into smaller networks and train them simultaneously on a cluster with graphics processing units (GPUs). For the first time, we forecast the traffic of the entire California highway network with 11,160 traffic sensor locations simultaneously. We show that our approach can be trained within 3 hours of wall-clock time using 64 GPUs to forecast speed with high accuracy. Further improvements in the accuracy are attained by including overlapping sensor locations from nearby partitions and by finding high-performing hyperparameter configurations for the DCRNN using DeepHyper, a hyperparameter tuning package. We demonstrate that a single DCRNN model can be used to train and forecast speed and flow simultaneously and that the results preserve fundamental traffic flow dynamics. We expect our approach of modeling a large highway network in short wall-clock time to be a potential core capability in advanced highway traffic monitoring systems, where forecasts can be used to adjust traffic management strategies proactively given anticipated future conditions.
Deep learning, Graph neural networks, Diffusion, Traffic forecasting, Graph partitioning
In the United States alone, the estimated loss in economic value due to traffic congestion reaches into the tens or hundreds of billions of dollars, reflecting not only the productivity lost to additional travel time but also the additional inefficiencies and energy required for vehicle operation. To address these issues, Intelligent Transportation Systems (ITS) [bishop2005intelligent] seek to better manage and mitigate congestion and other traffic-related issues via a range of data-informed strategies and highway traffic monitoring systems. Near-term traffic forecasting is a foundational component of these strategies, and accurate forecasting across a range of normal, elevated, and extreme levels of congestion is critical for improved traffic control, routing optimization, incident probability prediction, and identification of other approaches for handling emerging patterns of congestion [teklu2007genetic, tang2005traffic]. Furthermore, these predictions, together with the machine learning configurations and weights of a highly accurate model, can be used to delve more deeply into the dynamics of a particular transportation network in order to identify additional areas of improvement above and beyond those enabled by improved prediction and control [fadlullah2017state, abdulhai2003reinforcement, lv2014traffic]. These forecasting methodologies are also expected to enable new forms of intelligent transportation system strategies as they become integrated into larger optimization and control approaches and highway traffic monitoring systems [pang1999adaptive, decorla1997total]. For example, highly dynamic route guidance and real-time alternative transit mode pricing would be greatly aided by improved traffic forecasting.
Traffic forecasting is a challenging problem: The key traffic metrics, such as flow (an estimate of the number of vehicles that passed over a detector on the highway in a given time period) and speed (the estimated rate of motion at which a detector records drivers operating their vehicles), exhibit complex spatial and temporal correlations that are difficult to model with classical forecasting approaches [williams2003modeling, chan2012neural, karlaftis2011statistical, castro2009online]. From the spatial perspective, locations that are close geographically in the Euclidean sense (for example, two locations in opposite directions of the same highway) may not exhibit a similar traffic pattern, whereas locations in the highway network that are far apart (for example, two locations separated by a mile in the same direction of the same highway) can show strong correlations. Many traditional predictive modeling approaches cannot handle these types of correlation. From the temporal perspective, because of differing traffic conditions across locations (e.g., diverse peak-hour patterns, varying traffic flow and volume, highway capacity, incidents, and interdependencies), the time series data become nonlinear and nonstationary, rendering many statistical time series modeling approaches ineffective.
Recently, deep learning (DL) approaches have emerged as high-performing methods for traffic forecasting. In particular, Li et al. [li2017diffusion] developed a diffusion convolution recurrent neural network (DCRNN) that models complex spatial dependencies using a diffusion process (in physics, diffusion is the movement of particles from a region of higher concentration to a region of lower concentration; on a graph, the diffusion process can be represented as a weighted combination of infinite random walks) and temporal dependencies using a sequence-to-sequence recurrent neural network. The authors reported forecasting performances for 15, 30, and 60 minutes on two data sets: a Los Angeles data set with 207 locations collected over 4 months and a Bay Area data set with 325 locations collected over 6 months. They showed improvement over state-of-the-art baseline methods such as a historical average [williams2003modeling], an autoregressive integrated moving average model with a Kalman filter [xu2017real, hamilton1995time], a linear support vector regression, a feed-forward neural network [raeesi2014traffic], and an encoder-decoder framework using long short-term memory [sutskever2014sequence]. Despite these results, modeling large highway networks with DCRNN remains challenging because of computational and memory bottlenecks.
We focus on developing and applying DCRNN to a large highway network with thousands of traffic sensor locations. Our study is motivated by the fact that the highway network of a state such as California is 30 times larger than the Los Angeles or Bay Area dataset. Training a DCRNN with 30 times more data poses two main challenges. First, the training data for thousands of locations are too large to fit in a single computer's memory. Second, the time required for training a DCRNN on a large data set can be prohibitive, rendering the method ineffective for large highway networks. Two common approaches to overcome these challenges in the deep learning literature are distributed data-parallel training and model-parallel training [dean2012large]. In data-parallel training, different computing nodes train the same copy of the model on different subsets of the data and synchronize the information from these models. The number of trainable parameters is the same as for single-instance training because the whole highway network graph is considered together; speedup is achieved only by the reduced amount of training data per compute node. In model-parallel training, the model is split across different computing nodes, and each node estimates a different part of the model parameters. It is used mostly when the model is too large to fit in a single node's memory. Implementation, fault tolerance, and cluster utilization are easier with data-parallel training than with model-parallel training; therefore, data-parallel training is arguably the preferred approach for distributed systems [hegde2016parallel]. On the other hand, in traditional high-performance computing (HPC) domains, a common approach for scaling is domain decomposition, wherein the problem is divided into a number of subproblems that are then distributed over different compute nodes.
While domain decomposition is not applicable to scaling typical DL training tasks such as image and text classification, it is well suited to the traffic forecasting problem with DCRNN. The reason is that traffic flow in one part of the highway network does not affect another part when the two parts are separated by a large driving distance.
In this paper, we develop a graph-partitioning-based DCRNN for traffic forecasting on a large highway network. The main contributions of our work are as follows.
We demonstrate the efficacy of the graph-partitioning-based DCRNN approach to model the traffic on the entire California highway network with 11,160 sensor locations. We show that our approach can be trained within 3 hours of wall-clock time to forecast speed with high accuracy.
We develop two improvement strategies for the graph-partitioning-based DCRNN. The first is an overlapping sensor location approach that includes data from partitions that are geographically close to a given partition. The second is the adoption of DeepHyper, a scalable hyperparameter search package, for finding high-performing hyperparameter configurations of DCRNN to improve the forecast accuracy of multiple sensor locations.
We adopt and train a single DCRNN model to forecast both flow and speed simultaneously, as opposed to the previous DCRNN implementation, which predicted either speed or flow.
In this section, we describe the DCRNN approach for traffic modeling, followed by graph partitioning for DCRNN, the overlapping node method, and the hyperparameter search approach.
3.1 Diffusion convolution recurrent neural network
Formally, the problem of traffic forecasting can be modeled as spatiotemporal time series forecasting defined on a weighted directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathbf{A})$, where $\mathcal{V}$ is a set of $N$ nodes that represent sensor locations, $\mathcal{E}$ is the set of edges connecting the sensor locations, and $\mathbf{A} \in \mathbb{R}^{N \times N}$ is the weighted adjacency matrix that represents the connectivity between the nodes in terms of highway network distance. Given the graph $\mathcal{G}$ and the time series data $\mathbf{X}^{t-T'+1}$ to $\mathbf{X}^{t}$, the goal of the traffic forecasting problem is to learn a function $h(\cdot)$ that maps $T'$ steps of historical data at given time $t$ to $T$ future time steps:
$$[\mathbf{X}^{t-T'+1}, \ldots, \mathbf{X}^{t}; \mathcal{G}] \xrightarrow{h(\cdot)} [\mathbf{X}^{t+1}, \ldots, \mathbf{X}^{t+T}].$$
In DCRNN, the temporal dependency of the historical data is captured by the encoder-decoder architecture [cho2014learning, sutskever2014sequence] of recurrent neural networks. The encoder steps through the input historical time series data and encodes the entire sequence into a fixed-length vector. The decoder predicts the output of the next time steps while reading from the vector. Along with the encoder-decoder architecture of the RNN, a diffusion convolution process is used to capture the spatial dependencies. The diffusion process [teng2016scalable] can be described by a random walk on $\mathcal{G}$ with a state transition matrix $\mathbf{D}_O^{-1}\mathbf{A}$, where $\mathbf{D}_O$ is the diagonal out-degree matrix. The traffic flow from one node to its neighboring nodes can be represented as a weighted combination of infinite random walks on the graph. The diffusion kernel is used in the convolution operation to map the features of a node to the result of the diffusion process beginning at that node. A filter learns the features for graph-structured data during training as a result of the diffusion convolution operation over a graph signal.
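To make the diffusion step concrete, the sketch below is an illustration rather than the DCRNN implementation: it builds the random-walk transition matrix $\mathbf{D}_O^{-1}\mathbf{W}$ for a hypothetical three-node weighted graph and applies a truncated $K$-step diffusion convolution with fixed filter weights (in the actual model, the weights are learned during training).

```python
import numpy as np

# Hypothetical weighted adjacency matrix W for three sensor locations.
W = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 1.5],
              [0.0, 1.0, 0.0]])

# Random-walk state transition matrix D_O^{-1} W (rows normalized by out-degree).
D_O = W.sum(axis=1)
P = W / D_O[:, None]

def diffusion_conv(P, x, theta):
    """Truncated K-step diffusion convolution of a graph signal x:
    sum_{k=0}^{K-1} theta_k * P^k @ x, with fixed weights theta."""
    out = np.zeros_like(x)
    Pk = np.eye(P.shape[0])
    for theta_k in theta:
        out += theta_k * (Pk @ x)
        Pk = Pk @ P          # advance the random walk by one step
    return out

x = np.array([60.0, 30.0, 55.0])          # e.g., speeds at the three sensors
y = diffusion_conv(P, x, theta=[0.5, 0.3, 0.2])
```

Each diffusion step mixes a node's signal with that of nodes reachable by one more hop of the random walk, which is how spatial correlation along the highway enters the model.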
During the training phase, historical time series data and the graph are fed into the encoder, and the final state of the encoder is used to initialize the decoder. The decoder predicts the output of the next $T$ time steps, and the layers of DCRNN are trained by using backpropagation through time. During testing, the ground truth observations are replaced by previously predicted outputs. The discrepancy between the input distributions of training and testing can cause performance degradation. In order to resolve this issue, scheduled sampling [bengio2015scheduled] has been used, where at the $i$th iteration the model is fed a ground truth observation with probability $\epsilon_i$ or the prediction by the model with probability $1 - \epsilon_i$. The model is trained with the MAE loss function, defined as $MAE = \frac{1}{s}\sum_{i=1}^{s} |x_i - \hat{x}_i|$, where $x_i$ is the observed value, $\hat{x}_i$ corresponds to the forecast value, and $s$ is the number of training samples.
3.2 Graph-partitioning-based DCRNN
To scale DCRNN, we adopt a divide-and-conquer approach for solving a large problem by solving subproblems defined on smaller subdomains. The overall idea of scaling is shown in Figure 1. Here, the graph has been divided into multiple subgraphs shown as partition 1 to partition M. Each of the partitions is then trained on one of M compute nodes simultaneously. Simultaneous training of subgraphs on multiple GPUs speeds up the overall training time in comparison with single-node training. The speedup with graph partitioning can be expressed as $S = T_1 / T_M$, and the efficiency can be expressed as $E = S / M$. Here, $T_1$ is the time to execute an algorithm on a single node, and $T_M$ is the time to execute the same algorithm on $M$ nodes; $E = 1$ in a perfectly parallel algorithm.
We use Metis [metis], a graph-partitioning package, to decompose the large network graph into smaller subgraphs. First, to reduce the size of the input graph, Metis coarsens the graph iteratively by collapsing connected nodes into supernodes. The process of coarsening helps reduce the edge-cut. Then, the coarsened graph is partitioned by using either multilevel $k$-way partitioning [karypis1998multilevelk] or multilevel recursive bisection algorithms [karypis1998fast]. The next step is to map the partitions onto the original graph by backtracking through the coarsened graph. In order to reduce the edge-cut, nodes are swapped between partitions by using the Kernighan-Lin algorithm [hendrickson1995multi] during uncoarsening. The method produces roughly equally sized partitions. Metis's multilevel $k$-way partitioning algorithm provides additional capabilities such as minimizing the resulting subdomain connectivity graph, enforcing contiguous partitions, and minimizing alternative objectives. Therefore, we use the $k$-way partitioning algorithm in our work. Metis is extremely fast and provides high-quality partitions in a few seconds. For example, to generate 64 partitions of a graph with 11,160 nodes, Metis takes only 0.030 seconds.
Various graph clustering and community detection methods have been developed [liu2015empirical], such as spectral clustering, Louvain, SlashBurn [koutra2015summarizing], and core-based clustering [giatsidis2011evaluating]. Compared with these methods, Metis is a fast graph-partitioning algorithm [liu2015empirical] that is capable of partitioning a million-node graph into a few tightly connected clusters within seconds, and it generates roughly equally sized partitions. Our approach is agnostic to the graph-partitioning method adopted.
3.3 Overlapping nodes
An issue that affects the prediction accuracy of the graph-partitioning-based DCRNN is that spatially correlated nodes can end up in different partitions. While graph-partitioning methods try to minimize this effect, nodes at the boundary of a partition will lack some of their nearby, spatially correlated nodes. To address this issue, we develop an overlapping nodes approach wherein, for each partition, we find and include spatially correlated nodes from other partitions. Consequently, nodes near the boundary of a partition will appear in more than one partition. A naive approach for finding these nodes consists of computing the nearest neighbors of each node in the partition based on the driving distance and including those not already in the partition. The disadvantage of this approach is that it can include, for a given node, several spatially correlated nodes that are close to each other. This can increase the number of nodes per partition and, consequently, the training time and memory requirement. Therefore, we downsample the spatially correlated nodes from other partitions as follows: given two spatially correlated overlapping nodes from a different partition, we select only one and remove the other if they are within a driving distance of $\delta$ miles, where $\delta$ is a parameter.
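The down-sampling rule above can be sketched as a greedy filter. This is a minimal illustration with hypothetical node IDs and driving distances, not the paper's implementation; the actual code operates on the partitioned highway graph with OSRM-derived distances.

```python
def downsample_overlap(candidates, pairwise_dist, delta):
    """Greedily keep candidate overlapping nodes so that no two kept
    nodes are within `delta` miles driving distance of each other.

    candidates: candidate node IDs from neighboring partitions
    pairwise_dist: dict mapping a sorted node pair to miles of driving distance
    """
    kept = []
    for node in candidates:
        if all(pairwise_dist[tuple(sorted((node, k)))] > delta for k in kept):
            kept.append(node)
    return kept

# Hypothetical candidates from a neighboring partition, delta = 1 mile.
dist = {(1, 2): 0.4, (1, 3): 2.0, (2, 3): 1.8}
print(downsample_overlap([1, 2, 3], dist, delta=1.0))  # node 2 is dropped
```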
3.4 Hyperparameter tuning
The forecasting accuracy of the DCRNN depends on a number of hyperparameters, such as the batch size, filter type (i.e., random walk, Laplacian), maximum diffusion steps, number of RNN layers, number of RNN units per layer, a threshold max_grad_norm to clip the gradient norm to avoid the exploding gradient problem of RNNs [pascanu2013difficulty], initial learning rate, and learning rate decay. Li et al. [li2017diffusion] used a tree-structured Parzen estimator [bergstra2011algorithms] for tuning the hyperparameters of the DCRNN; the obtained values are used as the default configuration. However, our dataset has much more variability because we consider all the districts of California. Therefore, finding appropriate hyperparameter values is critical in our setting.
We use DeepHyper [balaprakash2018deephyper], a scalable hyperparameter search (HPS) package for neural networks, to search for high-performing hyperparameter values for DCRNN. DeepHyper adopts an asynchronous model-based search (AMBS) method, which relies on fitting a surrogate model that learns the relationship between the hyperparameter configurations and their corresponding model validation errors. The surrogate model is then used to prune the search space and identify promising regions. The surrogate model is iteratively refined in the promising regions of the hyperparameter search space by obtaining new outputs at inputs that the model predicts to be high performing.
Given that we use a graph partition approach, finding the best hyperparameter configuration for each partition, although feasible, will be computationally expensive. Therefore, we select an arbitrary partition, run a hyperparameter search on it, and use the same best hyperparameter configuration for all the partitions.
3.5 Multi-output forecasting with a single model
In the previous study, DCRNN was used to forecast only speed based on historical speed data. In this paper, we customize the input and output layers of the DCRNN for multi-output forecasting and demonstrate that a single DCRNN model can be trained and used for forecasting speed and flow simultaneously. The three key modifications for multi-output forecasting are as follows. 1) Normalization of speed and flow: to bring speed and flow to the same scale, normalization is done separately on the two features using the standard scaler transformation. The normalized values of speed are given by $\tilde{x} = (x - \mu_x)/\sigma_x$, where $\mu_x$ is the mean and $\sigma_x$ is the standard deviation of the speed values. The same method is applied for normalizing the flow values ($\tilde{y} = (y - \mu_y)/\sigma_y$, where $\mu_y$ and $\sigma_y$ are the mean and standard deviation of the flow values). We apply an inverse transformation to the normalized speed and flow forecast values to return them to the original scale (for computing the error on the test data). 2) Multiple output layers in the DCRNN: in the previous study of DCRNN, the convolution filter learns the graph-structured features from the input graph signal $\mathbf{X} \in \mathbb{R}^{N \times P}$. This filter is parameterized by $\theta$ to take a $P$-dimensional input (such as speed and flow) and predict a $Q$-dimensional output (such as speed and flow). Although multiple-output prediction is reported as a capability of DCRNN, the available implementation takes only a one-dimensional input and predicts the same as output. We changed the input/output format in our implementation so that a $P$-dimensional input can be given to predict a $Q$-dimensional output. 3) Loss function: for multioutput training, we use a loss function of the form $\frac{1}{s}\sum_{i=1}^{s}\left(|x_i - \hat{x}_i| + |y_i - \hat{y}_i|\right)$, where $x_i$ and $y_i$ are observed speed and flow values, $\hat{x}_i$ and $\hat{y}_i$ are the corresponding forecast values for the training data, and $s$ is the total number of training points.
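The per-feature scaling and the joint loss can be sketched as follows. The arrays are hypothetical toy values; the real pipeline applies this to the full 11,160-node time series.

```python
import numpy as np

speed = np.array([65.0, 60.0, 55.0, 70.0])     # hypothetical speeds (mph)
flow = np.array([300.0, 280.0, 350.0, 310.0])  # hypothetical flows (vehicles)

def scale(x):
    """Standard-scale one feature; return scaled values plus (mean, std)
    so the forecast can later be inverse-transformed."""
    return (x - x.mean()) / x.std(), x.mean(), x.std()

speed_n, mu_s, sd_s = scale(speed)
flow_n, mu_f, sd_f = scale(flow)

# Inverse transform recovers the original units after forecasting.
assert np.allclose(speed_n * sd_s + mu_s, speed)

def joint_mae(s_true, s_pred, f_true, f_pred):
    """Multioutput loss: mean over samples of |speed err| + |flow err|."""
    return np.mean(np.abs(s_true - s_pred) + np.abs(f_true - f_pred))
```

Scaling each feature separately keeps the speed and flow error terms comparable in magnitude inside the joint loss.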
4 California highway network
For modeling the California highway network, we used data from PeMS [pems], which provides access to real-time and historical performance data from over 39,000 individual sensors. The data from individual sensors placed on the highways are aggregated across several lanes and fed into vehicle detector stations. The PeMS dataset contains raw detector data for over 18,000 vehicle detector stations, which include a variety of sensors such as inductive loops, side-fire radar, and magnetometers. The sensors may be located on high-occupancy vehicle (HOV) lanes, mainlines, on-ramps, and off-ramps. The dataset covers 9 districts of California: D3 (North Central) with 1,212 stations, D4 (Bay Area) with 3,880 stations, D5 (Central Coast) with 382 stations, D6 (South Central) with 624 stations, D7 (Los Angeles) with 4,864 stations, D8 (San Bernardino) with 2,115 stations, D10 (Central) with 1,195 stations, D11 (San Diego) with 1,502 stations, and D12 (Orange County) with 2,539 stations. A total of 18,313 stations are listed by site. Detectors capture samples every 30 seconds. PeMS then aggregates the data to granularities of 5 minutes, an hour, and a day. The data include the timestamp, station ID, district, freeway, direction of travel, total flow, and average speed (mph). The time series data are available from 2001 to 2019.
PeMS lists the station IDs, district, freeway, direction of travel, and absolute postmile markers. This list does not contain the latitude and longitude of the station IDs, which are essential to defining the connectivity matrix used by the DCRNN. In the PeMS database, the latitude and longitude are associated with the postmile markers of every freeway in a given direction. We downloaded the entire time series data of the California highway network and found the latitude and longitude of each sensor ID by matching the absolute postmile markers of the corresponding freeway. We used linear interpolation to find the latitude and longitude when the absolute postmile markers did not match exactly.
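The interpolation step can be sketched in a few lines. The postmile markers and coordinates below are hypothetical; the real pipeline reads them from the PeMS metadata for each freeway and direction.

```python
# Hypothetical bracketing postmile markers with known coordinates
# along one freeway: (absolute postmile, latitude, longitude).
markers = [(10.0, 34.000, -118.000),
           (12.0, 34.020, -118.030)]

def interp_latlon(postmile, markers):
    """Linearly interpolate (lat, lon) for a station whose absolute
    postmile falls between two known markers on the same freeway."""
    (pm0, lat0, lon0), (pm1, lat1, lon1) = markers
    t = (postmile - pm0) / (pm1 - pm0)
    return lat0 + t * (lat1 - lat0), lon0 + t * (lon1 - lon0)

lat, lon = interp_latlon(11.0, markers)   # halfway between the markers
```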
The official PeMS website shows that 69.59% of the roughly 18,000 stations are in good working condition; the remaining 30.41% do not capture time series data throughout the year and are excluded from our dataset. Our final dataset has 11,160 stations for the year 2018 with a granularity of 5 minutes. We observed that flow and speed values are missing for multiple time periods in the time series data. We compute a missing value by taking the average of the preceding weeks' data at that particular timestamp. Holidays are handled separately from normal working days.
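The imputation rule admits more than one reading; below is a minimal sketch under one plausible interpretation: a gap is filled with the mean of the readings at the same time-of-week slot in the preceding weeks. The constant and helper name are ours, not PeMS's, and holiday handling is omitted.

```python
import numpy as np

STEPS_PER_WEEK = 7 * 24 * 12   # 5-minute samples in one week

def impute(series):
    """Fill NaNs with the mean of the same time-of-week slot in the
    preceding weeks; a gap with no earlier reading is left as-is."""
    out = series.copy()
    for i in np.flatnonzero(np.isnan(series)):
        # One candidate reading per preceding week, same slot as index i.
        past = out[i % STEPS_PER_WEEK : i : STEPS_PER_WEEK]
        past = past[~np.isnan(past)]
        if past.size:
            out[i] = past.mean()
    return out
```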
5 Experimental results
We represent the highway network of 11,160 detector stations as a weighted directed graph. The speed and flow data of each node of the graph were collected over one year, from January 1, 2018, to December 31, 2018, from PeMS [pems]. From the one-year data, we used the first 70% of the data (approx. 36 weeks) for training and the next 10% (approx. 5 weeks) and 20% (approx. 10 weeks) of the data for validation and testing, respectively. Given 60 minutes of time series data on the nodes in the graph, we forecast for the next 60 minutes. We prepared the dataset to look back ($T'$, as mentioned in Section 3.1) for 60 minutes or 12 time steps (the granularity of the data is 5 minutes, as mentioned in Section 4) in order to predict ($T$) the next 60 minutes or 12 time steps. The look-back window slides by 5 minutes or 1 time step and repeats until the whole data set is consumed. The forecasting performance of the models was evaluated on the test data using MAE $= \frac{1}{n}\sum_{i=1}^{n}|x_i - \hat{x}_i|$, where $x_1, \ldots, x_n$ represent the observed values, $\hat{x}_1, \ldots, \hat{x}_n$ represent the corresponding predicted values, and $n$ denotes the number of prediction samples.
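The sliding-window preparation above can be sketched as follows; the toy single-node series stands in for the real `(time steps, 11160)` array, and the helper name is ours.

```python
import numpy as np

def make_windows(data, lookback=12, horizon=12, stride=1):
    """Slice a (T, num_nodes) series into (X, Y) pairs: `lookback` steps
    of history predict the next `horizon` steps, sliding by `stride`
    (one 5-minute step)."""
    X, Y = [], []
    for start in range(0, len(data) - lookback - horizon + 1, stride):
        X.append(data[start : start + lookback])
        Y.append(data[start + lookback : start + lookback + horizon])
    return np.array(X), np.array(Y)

series = np.arange(30, dtype=float).reshape(-1, 1)  # toy single-node series
X, Y = make_windows(series)
# 30 steps yield 7 (12-in, 12-out) windows: X.shape == (7, 12, 1)
```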
The adjacency matrix for DCRNN requires the highway network distance between the nodes. We used the Open Source Routing Machine (OSRM) [osrm] running locally for the area of interest to compute the highway network distance. Given the latitude and longitude of two nodes, OSRM gives the shortest driving distance between them using OpenStreetMap data [osm]. To speed up the highway network distance computation, we first find the 30 nearest neighbors of each node using the Euclidean distance and then limit the OSRM queries to only those neighbors. As in the original DCRNN work, we compute the pairwise highway network distances between nodes and build the adjacency matrix using a thresholded Gaussian kernel [shuman2012emerging]: $W_{ij} = \exp\left(-\frac{\operatorname{dist}(v_i, v_j)^2}{\sigma^2}\right)$ if $\operatorname{dist}(v_i, v_j) \le \kappa$, otherwise $0$, where $W_{ij}$ represents the edge weight between node $v_i$ and node $v_j$; $\operatorname{dist}(v_i, v_j)$ denotes the highway network distance from node $v_i$ to node $v_j$; $\sigma$ is the standard deviation of the distances; and $\kappa$ is the threshold, which introduces sparsity in the adjacency matrix.
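The thresholded Gaussian kernel can be sketched directly from its definition. The distance matrix below is hypothetical (and asymmetric, as driving distances generally are); the real one comes from the OSRM queries.

```python
import numpy as np

def gaussian_adjacency(dist, kappa):
    """Thresholded Gaussian kernel: W_ij = exp(-dist_ij^2 / sigma^2)
    if dist_ij <= kappa, else 0, with sigma the std of the distances."""
    sigma = dist.std()
    W = np.exp(-np.square(dist / sigma))
    W[dist > kappa] = 0.0   # prune long links to sparsify the matrix
    return W

# Hypothetical driving distances (miles) between three sensors.
dist = np.array([[0.0, 1.0, 6.0],
                 [1.2, 0.0, 2.0],
                 [6.5, 2.1, 0.0]])
W = gaussian_adjacency(dist, kappa=5.0)   # links > 5 miles are zeroed
```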
For the experimental evaluation, we used Cooley, a GPU-based cluster at the Argonne Leadership Computing Facility. It has 126 compute nodes, where each node consists of two 2.4 GHz Intel Haswell E5-2620 v3 processors (6 cores per CPU, 12 cores total), one NVIDIA Tesla K80 (two GPUs per node), 384 GB RAM per node, and 24 GB GPU RAM per node (12 GB per GPU). The compute nodes are interconnected via an InfiniBand fabric. We used Python 3.6.0, TensorFlow 1.3.1, and Metis 5.1.0. We customized the DCRNN code of [li2017diffusion], which is available on GitHub [li2018dcrnn_traffic]. Given $M$ partitions of the highway network, we trained $M$ partition-specific DCRNNs simultaneously on Cooley GPU nodes. We used two MPI ranks per node, where each rank ran a partition-specific DCRNN using one GPU. The input data for the different partitions (time series and adjacency matrix of the graph) were prepared offline and loaded into the partition-specific DCRNNs before training started.
We used a bidirectional graph random walk [lovasz1993random] to model the stochastic nature of highway traffic. A random walk on a directed graph is a random process that yields a path composed of successive random steps on the graph. The default hyperparameter configuration for the DCRNN is as follows: batch size: 64; filter type: random walk; number of diffusion steps: 2; RNN layers: 2; RNN units per layer: 16; threshold for gradient clipping: 5; initial learning rate: 0.01; and learning rate decay: 0.1. We trained our model by minimizing the MAE using the Adam optimizer [kingma2014adam].
5.1 Impact of number of graph partitions on accuracy and training time
Here, we experiment with different numbers of graph partitions and show that partitions with larger numbers of nodes require longer training time, whereas partitions with fewer nodes can reduce the forecasting accuracy.
We used Metis to obtain 2, 4, 8, 16, 32, 64, and 128 partitions of the California highway network graph. The average number of nodes per partition in each case is 5,580, 2,790, 1,395, 697, 348, 174, and 87, respectively. Results for 1 partition (the whole network) and 2 partitions are not presented because the training data was too large to fit in the memory of a single K80 node of Cooley. Given $M$ partitions, we used $M$ GPUs (two per Cooley node) to run the partition-specific DCRNNs simultaneously. We consider the training time to be the maximum time taken by any partition-specific DCRNN training (excluding the data loading time).
Figure 2 shows the distribution of the MAE of all nodes using box-and-whisker plots. Each box represents the distribution of the MAE of the 11,160 nodes. The ends of each box are the 25% (bottom) and 75% (top) quantiles of the distribution, the median of the distribution is shown as the horizontal line in the middle of the box, the two whiskers represent the 5% and 95% levels of the distribution, and the diamonds mark the outliers of the distribution. From the results we observe that the medians, 75% quantiles, and maximum MAE values show a trend in which an increase in the number of partitions decreases the MAE. From 4 to 64 partitions, the median of the MAE decreases from 2.11 to 2.02. The increase in accuracy can be attributed to the effectiveness of Metis's graph partitioning, which separates nodes that are not temporally and spatially correlated; for smaller numbers of partitions, the presence of such nodes increases the MAE. For 128 partitions (with only 87 nodes per partition), the observed MAE values are higher than those of 64 partitions. This is because the graph partitioning results in a significant number of spatially correlated nodes ending up in different partitions. This can be considered a tipping point for graph partitioning, which relates to the size and spread of the actual network.
Figure 3 shows the training time required for different numbers of partitions. We observe that the time decreases significantly with an increase in the number of partitions. Our approach reduces the training time from 2,820 minutes on 4 partitions (= 4 GPUs) to 178.67 minutes on 64 partitions (= 64 GPUs), resulting in a 15.78x speedup. Up to 64 partitions, we observe an almost linear speedup, where doubling the number of partitions (and GPUs) results in a 2x speedup. However, the speedup gains drop significantly with 128 partitions. This can be attributed to the reduction in the workload per GPU: there is not enough work for each GPU given that there are only 87 nodes per partition.
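The reported speedup can be checked in a few lines. Note that the baseline here is the 4-partition run rather than single-node training (which did not fit in memory), so the speedup and efficiency are relative to 4 GPUs.

```python
def speedup(t_base, t_parallel):
    """Speedup relative to a baseline run."""
    return t_base / t_parallel

def efficiency(t_base, t_parallel, scale):
    """Fraction of ideal speedup achieved; 1.0 means perfect scaling."""
    return speedup(t_base, t_parallel) / scale

# Training times reported above: 2,820 min on 4 GPUs vs 178.67 min on 64.
s = speedup(2820.0, 178.67)              # ~15.78x going from 4 to 64 GPUs
e = efficiency(2820.0, 178.67, 64 / 4)   # close to 1.0, i.e., near-linear
```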
Since the best forecasting accuracy and speedup were obtained with 64 partitions, we used it as the default number of partitions in the rest of the experiments.
5.2 Impact of training data size
Here, we assess the impact of training data size and show that it has a significant impact on the predictive accuracy.
From the full 36 weeks of training data, we selected the last 1, 2, 4, 12, and 20 weeks of data for training the DCRNN. The most recent weeks of data were chosen to minimize the impact of highway and sensor upgrades. Figure 4 shows the distribution of the MAE of all nodes using box-and-whisker plots. From the plots we observe that the medians, the 75% quantiles, and the maximum MAE values show that increasing the training data size decreases the MAE. These results show that DCRNN, like other state-of-the-art neural networks [cai2018cascade, al2019character], can leverage large amounts of data to improve accuracy. Therefore, we use the entire 36 weeks of training data in the rest of the experiments.
5.3 Impact of overlapping nodes and hyperparameter tuning
Here, we demonstrate that the graph-partitioning-based DCRNN achieves high forecasting accuracy using overlapping nodes and hyperparameter search.
We trained the graph-partitioning-based DCRNN with 64 partitions for the California highway network on 32 nodes of Cooley (two DCRNNs per node; 64 GPUs). We refer to this variant as DCRNN_64_naive. The total training time was 178.67 minutes. After training, we forecast the speed for 60 minutes on the test data and calculated the MAE for each node. The results are summarized in the first row of Table 1. We observe that the MAE values of 1,716, 6,729, 2,266, and 449 nodes are less than 1, between 1 and 3, between 3 and 5, and greater than 5, respectively.
Next, we trained the graph-partitioning-based DCRNN with 64 partitions with overlapping nodes, as described in Section 3.3. We downsampled nodes with different distance threshold ($\delta$) values: 0.5, 1, 1.5, 2, and 3 miles. The results showed no significant improvement beyond the 1-mile threshold; therefore, we used 1 mile as the distance threshold in our experiments. In a given partition, while calculating the MAE for each node, we did not consider the overlapping nodes because they originally belong to a different partition, where their MAE values are computed. We refer to this variant as DCRNN_64_overlap. The results are shown in row 2 of Table 1. We observe that DCRNN_64_overlap consistently outperforms DCRNN_64_naive. With reference to the latter, the number of nodes with MAE values less than 1 increased from 1,716 to 1,837; the numbers of nodes with MAE values between 1 and 3, between 3 and 5, and greater than 5 decreased from 6,729 to 6,687, from 2,266 to 2,204, and from 449 to 432, respectively. The training time increased from 178.67 minutes to 221.04 minutes, which can be attributed to the increase in the number of nodes per partition.
Finally, we ran a hyperparameter search with DeepHyper for DCRNN_64_naive and DCRNN_64_overlap. We used 5 months of data (from May 2018 to October 2018) from partition 1 and ran the search on 32 nodes of Cooley with 12 hours of wall-clock time as the stopping criterion. DeepHyper sampled 518 and 478 hyperparameter configurations for the naive and overlapping approaches, respectively. The best hyperparameter configuration from each search was then used for training and inference. We refer to these two variants as DCRNN_64_naive_hps and DCRNN_64_overlap_hps. The results are shown in rows 3 and 4 of Table 1. We observe that DCRNN_64_naive_hps outperforms DCRNN_64_naive: hyperparameter tuning improved the accuracy of several nodes. The number of nodes with MAE values less than 1 and between 1 and 3 increased from 1,716 to 1,920 and from 6,729 to 6,897, respectively, while the number of nodes with MAE values between 3 and 5 and greater than 5 decreased from 2,266 to 1,980 and from 449 to 363, respectively. We did not see a significant improvement with DCRNN_64_overlap_hps: the number of nodes in each MAE bin is similar to that of DCRNN_64_overlap. Moreover, hyperparameter tuning increased the number of trainable parameters, which increased the training time from 221.04 minutes to 461.57 minutes.
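DeepHyper's asynchronous search is more sophisticated than plain random sampling, and its actual API differs from what is shown here; the sketch below only illustrates the sample-evaluate-select loop over a hypothetical DCRNN-like search space, with a stand-in objective in place of real model training.

```python
import random

# Hypothetical search space loosely mirroring DCRNN hyperparameters;
# the actual space and DeepHyper's API differ.  This only sketches the
# sample-evaluate-select loop of a hyperparameter search.
SPACE = {
    "batch_size": [32, 64, 128],
    "max_diffusion_step": [1, 2, 3],
    "num_rnn_layers": [1, 2, 3],
    "rnn_units": [16, 32, 64, 128],
    "base_lr": [1e-2, 1e-3, 1e-4],
}

def sample_config(rng):
    """Draw one random configuration from the search space."""
    return {name: rng.choice(choices) for name, choices in SPACE.items()}

def random_search(evaluate, n_samples, seed=0):
    """Evaluate n_samples random configurations; return the best (cfg, score)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_samples):
        cfg = sample_config(rng)
        score = evaluate(cfg)  # e.g., validation MAE of a trained model
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in objective: pretend validation MAE improves with more RNN units.
best_cfg, best_mae = random_search(lambda c: 3.0 - 0.01 * c["rnn_units"], 20)
print(best_cfg, best_mae)
```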
We did not notice a significant difference in the time required for forecasting on the test data. The exception is DCRNN_64_overlap_hps, where the larger number of trainable parameters increased the forecasting time by about 1 minute (to 5.83 minutes).
To summarize, we can improve the graph-partitioning-based DCRNNs either by using overlapping nodes from other partitions or by tuning the hyperparameters of DCRNN. Combining both did not show any benefit in our study.
5.4 Multioutput forecasting
Here, we show that a single DCRNN model can be used to predict the speed and flow simultaneously and the forecasting results preserve the fundamental properties of traffic flow.
Figure 5 shows the distribution of the MAE of all nodes using box-and-whisker plots. The first and second box plots show the speed forecasts from DCRNN models trained to forecast only speed and to forecast speed and flow simultaneously; similarly, the third and fourth box plots are for flow forecasts. The median MAE of the speed-only model (first box plot) is 2.02, which decreased to 1.98 with the multioutput model (second box plot). Similarly, the median MAE of the flow-only model (third box plot) is 21.20, which decreased to 20.64 with the multioutput model (fourth box plot). We used a paired t-test to check whether the observed differences in MAE between the two models are significant and found that the multioutput model obtains MAE values significantly better than those of the speed-only and flow-only models. The superior performance of multioutput forecasting can be attributed to multitask learning [sener2018multi]. The key advantage is that it leverages the commonalities and differences across the speed and flow learning tasks, resulting in improved learning efficiency and, consequently, forecasting accuracy compared with training the models separately.
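The paired t-test pairs the two models' MAE values node by node and tests whether the mean difference is zero. Below is a self-contained sketch of the statistic (in practice, `scipy.stats.ttest_rel` computes the same quantity along with a p-value); the toy data are illustrative, not the paper's results.

```python
import numpy as np

def paired_t_statistic(errors_a, errors_b):
    """Paired t-statistic over per-node MAE values of two models.

    errors_a, errors_b: per-node MAE arrays from the single-output and
    multioutput models, paired by sensor node.  A large positive t
    indicates model B's errors are systematically lower.
    """
    d = np.asarray(errors_a) - np.asarray(errors_b)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Toy example: model B is consistently about 0.05 better per node.
rng = np.random.default_rng(0)
mae_a = rng.uniform(1.5, 2.5, size=200)
mae_b = mae_a - 0.05 + rng.normal(0, 0.01, size=200)
print(paired_t_statistic(mae_a, mae_b))  # large positive t-statistic
```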
In Figure 6, we show the speed and flow forecasting results of a congested node (ID: 717322, located on highway 60-E in the Los Angeles area) as a scatter plot. The speed and flow forecast values closely follow the fundamental flow diagram, with its three distinct phases of congested, bounded, and free flow. This pattern shows that the model has learned and preserved the properties of traffic flow.
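The three phases of the fundamental diagram can be illustrated with a toy classifier over (speed, flow) observations. The thresholds below are hypothetical, not values calibrated in this work:

```python
def flow_regime(speed_mph, flow_vph, free_flow_speed=60.0, capacity_flow=1800.0):
    """Classify a (speed, flow) observation into a fundamental-diagram phase.

    The thresholds are illustrative only: near free-flow speed ->
    "free flow"; otherwise high flow near capacity -> "bounded";
    otherwise low speed with reduced flow -> "congested".
    """
    if speed_mph >= 0.9 * free_flow_speed:
        return "free flow"
    if flow_vph >= 0.8 * capacity_flow:
        return "bounded"
    return "congested"

print(flow_regime(62, 900))   # free flow
print(flow_regime(45, 1700))  # bounded
print(flow_regime(20, 600))   # congested
```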
6 Related work
Modeling the flow and speed patterns of traffic in a highway network has been studied for decades. Capturing the spatiotemporal dependencies of the highway network is a crucial task for traffic forecasting. Methods for traffic forecasting are broadly classified into two main categories: knowledge-driven and data-driven approaches. In transportation and operational research, knowledge-driven methods usually apply queuing theory [cascetta2013transportation, romero2018queuing, lartey2014predicting, yang2014application] and Petri nets [ricci2008petri] to simulate the behavior of traffic. Usually, these approaches estimate the traffic flow of one intersection at a time; traffic prediction for the full highway system of an entire state has not been attempted to date using knowledge-driven approaches.
Data-driven approaches have received notable attention in recent years. Traditional methods include statistical techniques such as autoregressive time series models [williams2003modeling] and Kalman filtering techniques [kumar2017traffic]. These models are mostly used to forecast at a single sensor location and rely on a stationarity assumption about the time series data. Therefore, they often fail to capture nonlinear temporal dependencies and cannot predict overall traffic in a large-scale network [li2017diffusion]. Recently, statistical models have been challenged by machine learning methods that can model more complex data, such as artificial neural networks (ANNs) [chan2012neural, karlaftis2011statistical] and support vector machines (SVMs) [castro2009online, ahn2016highway].
However, SVMs are computationally expensive for large networks, and ANNs cannot capture the spatial dependencies of the traffic network. Furthermore, the shallow architecture of ANNs makes them less expressive than deep learning architectures. Recently, deep learning models such as deep belief networks [huang2014deep] and stacked autoencoders [lv2015traffic] have been used to capture effective features for traffic forecasting. Recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) networks [ma2015long, fu2016using], show effective forecasting [cui2018deep, yu2017deep]
because of their ability to capture temporal dependencies. RNN-based methods can capture contextual dependency in the temporal domain, but spatial dynamics are often missed. To capture the spatial dynamics, researchers have used convolutional neural networks (CNNs). Ma et al. [ma2017learning] proposed an image-based traffic speed prediction method using CNNs, whereas Yu et al. [yu2017spatiotemporal] proposed spatiotemporal recurrent convolutional networks for traffic forecasting, in which spatial dynamics are captured by deep CNNs and temporal dynamics are learned by LSTM networks. In both, the highway network is represented as an image, and the speed of each link is mapped to a color in the image; the model was tested on 278 links of the Beijing transportation network. Zhang et al. [zhang2016dnn, zhang2017deep] also represented the flow of crowds in a traffic network using a grid-based Euclidean space. The temporal closeness, period, and trend of the traffic were modeled by using a residual neural network framework, evaluated on Beijing and New York City crowd flows with two datasets: (1) trajectories from taxicab GPS data over four time intervals and (2) trajectories from NYC bike data over one time interval. Trip data included trip duration, starting and ending sensor IDs, and start and end times. The key limitation of these approaches is that they do not capture non-Euclidean spatial connectivity. Du et al. [du2018hybrid] proposed a model combining one-dimensional CNNs and gated recurrent units (GRUs) with an attention mechanism to forecast traffic flow on UK traffic data. The contribution of this method is multimodal learning through the fusion of multiple features (flow, speed, events, weather, and so on) on a single time series of one year (34,876 timestamps at 15-minute intervals). However, the proposed approach is limited to a narrow spatial dimension.
Recently, CNNs have been generalized from 2D grid-based convolution to graph-based convolution in non-Euclidean space. Yu et al. [yu2017spatio] modeled the sensor network as an undirected graph and proposed a deep learning framework, called spatiotemporal graph convolutional networks, for speed forecasting. They applied graph convolution and gated temporal convolution through spatiotemporal convolutional blocks. The experiments were done on two datasets, BJER4 and PeMSD7, collected by the Beijing Municipal Traffic Commission and the California Department of Transportation, respectively; the larger dataset covered 1,026 sensors of California District 7. However, these spectral-based convolution methods require the graph to be undirected. Moving from a spectral-based to a vertex-based method, Atwood and Towsley [atwood2016diffusion] first proposed convolution as a diffusion process across the nodes of a graph. Later, Hechtlinger et al. [hechtlinger2017generalization] extended convolution to graphs by convolving every node with its closest neighbors selected by a random walk. However, none of these methods capture temporal dependencies. Li et al. [li2017diffusion] first proposed the diffusion convolutional recurrent neural network (DCRNN) to capture the spatiotemporal dynamics of a highway network.
Our approach differs from these works in several respects. From the problem perspective, none has addressed a problem of the size of 11,160 sensor locations covering the fully monitored California highway system. From the solution perspective, a graph-partitioning-based approach for large-scale traffic forecasting, the use of multiple GPU nodes, and multioutput forecasting had not been investigated before.
7 Conclusion and future work
We described a traffic forecasting approach for a large highway network: the entire state of California with 11,160 sensor locations. We developed a graph-partitioning approach to partition the large highway network into a number of small networks and trained them simultaneously on a moderately sized GPU cluster. We studied the impact of the number of partitions on training time and accuracy and showed that 64 partitions gave the best forecasting accuracy and GPU resource usage efficiency, with a training time of 178 minutes. We demonstrated that our approach leverages large amounts of training data to improve forecasting accuracy. We developed an overlapping-nodes approach to include spatially correlated nodes from neighboring partitions and showed a significant improvement in accuracy. We tuned the hyperparameters of the graph-partitioning-based DCRNN using DeepHyper and showed a further improvement in forecasting accuracy. We adapted and trained a single DCRNN model to forecast speed and flow simultaneously and showed that its accuracy is better than that of models predicting either speed or flow alone, and that the forecasts preserve fundamental traffic flow dynamics. Once trained, the DCRNN model can be run for forecasting on traditional hardware such as CPUs, without the need for multiple GPUs, and could be readily integrated into a traffic management center. There, the scale and accuracy of the forecasting techniques discussed in this work would enable more proactive and better-informed traffic management decisions based on anticipated future traffic states.
Our current and future work includes (1) extending the approach to large-scale traffic forecasting with mobile device data, where our goal will be to determine whether mobile device data can act as a proxy for inductive loop data, either as a substitute for poorly working loops or to extend monitoring to areas where loops would be prohibitively expensive; (2) combining DCRNN with large-scale simulation to integrate realistic speed and flow forecasts into active traffic management decision algorithms; and (3) developing models for route and policy scenario evaluation in adaptive traffic routing and management studies.
This material is based in part upon work supported by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357. This report and the work described were sponsored by the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) under the Big Data Solutions for Mobility Program, an initiative of the Energy Efficient Mobility Systems (EEMS) Program. The following DOE Office of Energy Efficiency and Renewable Energy (EERE) managers played important roles in establishing the project concept, advancing implementation, and providing ongoing guidance: David Anderson and Prasad Gupte.
The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory ("Argonne"). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan. http://energy.gov/downloads/doe-public-access-plan.