Shortest path query is a fundamental operation in road network routing and navigation. A road network can be denoted as a directed graph G = (V, E), where V is the set of road intersections and E is the set of road segments. Each edge is associated with a numerical weight representing the length of the road segment or the time required to travel through it. The road network is static if both the structure and the edge weights do not change over time. Since the traffic condition of a real-life road network changes almost all the time, we model the road network as a dynamic graph. We treat this dynamic graph as a series of snapshots, each of which is static by itself but differs from the others, and answer path queries using the corresponding snapshot graphs.
The shortest path problem has been extensively studied, and the approaches can be grouped into two categories depending on whether an index is created. The index-free methods, such as Dijkstra's, A* [1, 2], and cache-based methods [3], find the path using only the graph information. Therefore, they can adapt to the dynamic environment by simply running on the new graph, but they suffer from low query efficiency or inaccurate results, so various index-based methods [4, 5, 6] have been proposed to speed up query answering. However, these indexes all take time to construct, and the traffic condition may have already changed before their construction finishes. Therefore, there are two extreme ways to use an index on a dynamic graph. The first is building an index for each snapshot, which is not space efficient and stores much redundant information. The other is creating one big time-dependent index for the entire time domain, like TCH [8] and T2Hop [9]. However, their index sizes are huge, and they essentially require the graph to be static from the perspective of "change". Therefore, we aim to strike a balance between the two extremes by identifying some typical snapshots from the dynamic graph and building indexes only for them. Given queries under a specific current traffic condition, we first match the traffic condition to the most similar typical snapshot and use its index to answer the queries. When none of the existing snapshots is similar enough, we regard the current road network as a new snapshot and construct an index for it.
However, it is unclear how to choose those typical snapshots and how to classify the current traffic condition. We try to represent multiple similar snapshot graphs as one typical graph and then process queries on it at the cost of some query accuracy. Two lines of study focus on graph similarity measurement. The first is the graph edit distance [10, 11, 12], which uses the minimum number of edit operations required to transform one graph into another. The other is the feature-based distance, where the similarity is measured not on the graph directly but on an abstraction of the graph. Existing methods in both lines consider either attribute similarity or structural similarity. However, the road network is a special graph whose topological structure does not evolve frequently, because road construction and closure are not very common. In addition, the edges and vertices in the road network are not associated with labels; it is the edge weight, or the speed, that varies with time, and we suppose that the structure remains stable. Therefore, the existing graph similarity measurements can hardly solve our problem. Like the feature-based methods, we represent the graph as a feature vector, but we use the edge weight vector and the vertex vector, and we focus on the change of the speed profile rather than the change of the topological structure or associated labels. To the best of our knowledge, this is the first work to specify the representation and similarity measurement of the road network. The snapshots are then clustered, and a representative snapshot from each cluster is selected as the typical snapshot. When a query arrives, we classify the current road network to the most similar typical snapshot and process the shortest path query on it.
To support accurate clustering and classification of the snapshots, three challenges must be addressed. The first challenge is that the original representation of the road network suffers from the curse of dimensionality. Various dimension reduction techniques such as PCA and LDA exist, but they are general methods and none is tailored to the road network. Here, we reduce the dimension by first computing the coefficient of variation of each edge and filtering out the edges whose values are smaller than a threshold. We also propose another two representations with much lower dimension: edge-based and vertex-based. The second challenge is how to incorporate road network features, such as region property and road network continuity, into the selection. The two proposed graph representation methods focus on different aspects of these features: the edge-based representation uses paths to capture the continuity of the network, while the vertex-based representation selects the "hot spots" (typical vertices) in the network by evaluating the fluctuation of the traffic condition around each vertex. The third challenge is how to choose the typical graph given the current traffic condition. When a query arrives, we convert the graph in the two proposed ways and then match the current graph with one typical graph using a classification algorithm. The contributions of this work can be summarized as follows:
We formally study the problem of shortest path query in dynamic graphs.
We propose two categories of methods to select the typical snapshots: the time-based approaches that choose the snapshots directly, and the graph-representation approaches (edge-based and vertex-based) that consider features such as continuity and region condition.
We present how to perform graph clustering and classification under our graph representations.
We conduct extensive evaluations using a large real-world road network with real traffic conditions. The experimental results verify the effectiveness of our approaches.
The remainder of this paper is organized as follows: We first discuss the literature on pathfinding and graph similarity measurement in Section 2. Section 3 introduces common notions and defines the sub-problems: typical snapshot selection and snapshot matching. For the first sub-problem, we propose two time-based approaches in Section 4 and two graph representation-based methods in Section 5. The actual typical snapshot selection and the snapshot matching sub-problem are discussed in Section 5.3. Evaluations of the proposed methods on a real-life dynamic road network are presented in Section 6. Finally, Section 7 concludes the paper.
2 Related Work
2.1 Shortest Path Algorithm
In the past decades, various techniques have been proposed for shortest path calculation in road networks. The fundamental shortest path algorithms are Dijkstra's and A*. Dijkstra's is inefficient as it needs to traverse the entire network during the search, while A* improves the query efficiency by directing the traversal towards the destination with the help of a heuristic distance. A later line of research accelerates query answering by pre-computing an index. In particular, algorithms such as Contraction Hierarchies [4] prune the search space by referring to information stored in the index. Other algorithms, like 2-Hop Labeling [5] and the SILC framework [6], attempt to materialize all pairwise shortest path results in a concise or compressed manner such that a given shortest path query can be answered directly via a simple table lookup or join. These index-based algorithms are usually efficient enough to return the query result within microseconds, but the major premise behind them is that the road network is static. Since index construction is often time-consuming and the road network evolves almost all the time, a rebuilt index cannot always reflect the current network condition. Therefore, these algorithms do not adapt well to the dynamic environment.
Another line of research processes queries in the dynamic environment directly, using functions to describe the road condition [7, 14]. However, the complexity of finding the fastest path grows with a large factor related to the complexity of these functions, and this lower bound makes fastest path finding much slower than in the static environment. To further improve query efficiency, time-dependent indexes like TCH [8] and T2Hop [9] have been proposed. However, these time-dependent algorithms essentially view the dynamic environment statically, because their time-dependent functions are fixed: any change to the functions invalidates the existing index and forces a time-consuming reconstruction. Therefore, some works drop the time-dependent functions and run queries on the dynamic graph directly. Because it is hard to build an index for the dynamic graph, shared computation [1, 2, 16] is introduced to improve the query efficiency. Nevertheless, their efficiency is still not comparable with index-based approaches. In this work, we aim to bring the index back to the dynamic environment with the help of snapshots.
2.2 Graph Similarity Measurement
The graph distance can be measured mainly in two ways: graph edit distance [10, 12] and feature-based distance [17, 18]. Graph edit distance has been widely accepted for graph similarity measurement, and two graphs whose distance is less than a similarity threshold are considered similar. It is a metric applicable to various kinds of graphs: directed or undirected, labeled or unlabeled, single or multi-graphs. The distance is calculated as the minimal number of graph edit operations, including the insertion, deletion, or alteration of vertices or edges, needed to transform one graph into another. In this way, it reflects the topological differences between graphs. However, it is not applicable to our problem because the topological structure of the road network does not change often and is supposed to be static here. For the feature-based distance, most existing works focus on structure-based, attribute-based, or combined structural/attribute distance. In one line of work, some labeled edges are selected as features and a graph is represented as a feature vector where each dimension indicates the existence or frequency of the corresponding edge. The neighborhood random walk model has been proposed to combine structural closeness and attribute similarity for graph clustering. However, in our scenario, the edges are associated with length or travel time rather than labels, and the structure is supposed to be static as mentioned above. In this work, we aim at distinguishing multiple snapshots by their speed profiles, and we need to measure graph similarity from the combination of edge weights and graph structure. Therefore, the existing graph similarity measurements are difficult to apply here.
3 Problem Definition
(Road network). A road network is formalized as a dynamic weighted graph G = (V, E), where each vertex v in V (resp. edge e in E) denotes a road intersection (resp. road segment), and the edge weights can change with time.
If we take a snapshot of the dynamic graph at some time point t, then each edge of the snapshot is associated with only one weight value. Supposing the road traffic condition is constant within a small time interval around the snapshot, the dynamic graph can be treated as a set of timestamped snapshots, that is, G = {G_1, G_2, ..., G_n}.
We focus on the shortest path query in the dynamic road network. Given the typical snapshots with their corresponding indexes, and shortest path queries on the current road network, we match the current graph to the most similar typical graph and use its index for query answering. Therefore, two sub-problems arise, typical snapshot selection and snapshot matching, which are defined as follows.
(Sub-Problem 1: Typical Snapshots Selection). Given the n snapshots of a road network, typical snapshot selection puts them into k (k &lt;= n) clusters such that the snapshots in the same cluster are similar and those in different clusters are dissimilar. One representative snapshot is taken from each cluster as the typical snapshot.
(Sub-Problem 2: Snapshot Matching). Given one snapshot G' and the set T of typical snapshots of the road network, where G' is not necessarily in T, snapshot matching finds the snapshot in T that is most similar to G'.
Both sub-problems require a graph similarity measurement. We measure the similarity of road networks by first abstracting their features and then using the distance between the feature vectors as the graph similarity.
To evaluate the difference between a selected typical graph and the actual graph, we need an error measurement. For each trajectory, we compute the traveling times using the edge weights of the typical graph and of the actual graph, respectively. The error of one trajectory is the relative difference between the two traveling times, and the error between the two graphs is aggregated over all trajectories. Because the trajectories are collected from taxis, this measurement focuses on the actual impact on real-life traveling.
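The path-based error above can be sketched in a few lines. This is a minimal illustration under the assumption that each trajectory is an edge sequence and each snapshot is a map from edge to travel time, with per-trajectory errors averaged; the names `path_error`, `typical_w`, and `actual_w` are ours, not from the paper.

```python
def path_error(typical_w, actual_w, trajectories):
    """Average relative travel-time error of a typical snapshot.

    typical_w / actual_w: dict mapping edge id -> travel time.
    trajectories: list of edge sequences driven on the actual snapshot.
    """
    errors = []
    for traj in trajectories:
        t_typ = sum(typical_w[e] for e in traj)   # time under typical graph
        t_act = sum(actual_w[e] for e in traj)    # time under actual graph
        errors.append(abs(t_typ - t_act) / t_act)
    return sum(errors) / len(errors)

# Toy example: two edges, one trajectory traversing both.
typical = {"e1": 2.0, "e2": 3.0}
actual = {"e1": 2.0, "e2": 4.0}
err = path_error(typical, actual, [["e1", "e2"]])
```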
4 Time-Based Typical Snapshot Selection
The dynamic road network can be viewed as a time series of snapshots. Because traffic on the road network changes incrementally in real life, several consecutive snapshots can be approximately the same. Based on this observation, we can select the typical snapshots by sampling along the time dimension. In the following, we present two baseline selection methods: uniform sampling and non-uniform sampling.
4.1 Uniform Sampling
Suppose the total number of snapshots is n. The uniform sampling method selects snapshots with a fixed step s starting from the first snapshot. When s = 1, all snapshots are selected and the error is 0; when s = 2, every other snapshot is selected, with some error; when s = n, only one snapshot is selected, with the largest error. The number of typical graphs is about n/s. Obviously, the error tends to be inversely proportional to the number of typical snapshots, and we test the performance under different step sizes. This method can control the number of typical snapshots, but it cannot guarantee the worst-case error. The time complexity is linear in n.
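As a sketch, uniform sampling is just slicing the snapshot sequence with a fixed step; the helper name is ours.

```python
def uniform_sample(snapshots, step):
    """Pick every `step`-th snapshot as a typical one (step >= 1)."""
    return snapshots[::step]

snaps = list(range(12))                 # 12 timestamped snapshots
typical = uniform_sample(snaps, 4)      # about n/step typical snapshots
```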
4.2 Non-Uniform Sampling
The change rate of the traffic condition differs across time periods. For example, the road network is almost unchanged from midnight to the early morning because there is little traffic on the road, but it can change dramatically during peak hours. Therefore, we select the typical snapshots non-uniformly according to how the traffic changes over time, which is captured by the path-based error.
The sampling works in a sliding window fashion. First of all, an error threshold is set. After that, we visit the snapshots in increasing order of timestamp and put the currently visited snapshot into the current window. For each snapshot in the window, we compute its error as the largest error between it and any other snapshot in the window, and the snapshot with the minimum such error is selected as the representative of the current window. If this minimum error is no larger than the threshold, the window keeps expanding to the next snapshot. Otherwise, the representative is taken as the typical graph of the previous window, and a new window is created with the current snapshot as its first member. This procedure runs until all snapshots are visited. This method can control the worst-case error, but it cannot determine the number of typical snapshots in advance, and its time complexity is higher than uniform sampling because errors are computed within each window.
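The sliding-window procedure can be illustrated with scalar stand-ins for snapshots and absolute difference as the pairwise error; the function name, the tie-breaking, and the stand-in error are our assumptions, not the paper's exact formulation.

```python
def nonuniform_sample(snapshots, error, tau):
    """Sliding-window selection: grow the window while some member
    represents every snapshot in it within error threshold tau."""
    typical = []
    window = [snapshots[0]]
    best = snapshots[0]
    for g in snapshots[1:]:
        cand = window + [g]
        # representative minimising the worst error inside the window
        rep, rep_err = min(
            ((r, max(error(r, o) for o in cand)) for r in cand),
            key=lambda x: x[1])
        if rep_err <= tau:
            window, best = cand, rep
        else:
            typical.append(best)       # close the previous window
            window, best = [g], g      # start a new window at g
    typical.append(best)
    return typical

# Scalars stand in for snapshots; |a - b| stands in for the path error.
chosen = nonuniform_sample([1.0, 1.1, 1.2, 5.0, 5.1],
                           lambda a, b: abs(a - b), 0.2)
```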
5 Graph Representation-Based Selection
5.1 Edge-Based Representation
Suppose the road network structure does not change, i.e., V and E are fixed; then only the weight vectors differ across snapshots. Therefore, in our first type of representation, we use the weight vector and the delta weight vector to denote a graph.
5.1.1 Single Edge Representation
If we denote a graph by its weight vector, then one snapshot G_i is directly represented as w_i, which contains every edge's weight. Its variant is the delta weight vector: we can express one snapshot as w_i - w_0, where w_0 is the weight vector of a referenced graph G_0. Because both the weight vector and the delta weight vector have dimension |E|, which can reach hundreds of thousands in real life and thus suffers from the curse of dimensionality, we have to reduce the dimension before computing the similarity.
The first dimension reduction approach we apply is PCA. However, it is a general algorithm and does not perform well in our scenario. In the road network, it is the edges that change dramatically over time that distinguish a typical snapshot. Hence, we use the coefficient of variation (standard deviation divided by the mean) of each edge to measure how variable the edge is, and use a threshold to identify the variable ones to construct a lower-dimensional weight vector.
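The coefficient-of-variation filter can be sketched as follows, assuming one weight row per snapshot in a fixed edge order and using the population standard deviation; the names are ours.

```python
import statistics

def high_variation_edges(weight_rows, threshold):
    """Keep the indices of edges whose coefficient of variation
    (stdev / mean) across snapshots is at least `threshold`.

    weight_rows: list of weight vectors, one per snapshot, same edge order.
    """
    n_edges = len(weight_rows[0])
    keep = []
    for j in range(n_edges):
        col = [row[j] for row in weight_rows]
        cv = statistics.pstdev(col) / statistics.mean(col)
        if cv >= threshold:
            keep.append(j)
    return keep

rows = [[10, 10, 30],
        [10, 12, 60],
        [10, 11, 90]]   # edge 0 never changes, edge 2 fluctuates strongly
kept = high_variation_edges(rows, 0.1)
```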
5.1.2 Aggregated Edge Representation
The weight vector records the weight of every edge in a graph, but it loses the information about the connection and continuity of road segments. Usually, it is continuous road segments in some areas that are busy or congested, rather than individual road segments or all road segments in one area. Therefore, we use the aggregated edge lengths of multiple paths to represent one graph, and we call these paths typical paths.
Each path is a sequence of connected edges, and the length of a path is the sum of its edge weights. Suppose there are m typical paths; then one snapshot is represented as the vector of its m typical path lengths.
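Under this representation, computing a snapshot's feature vector is just summing edge weights along each typical path; a sketch with our own names follows.

```python
def path_length_vector(weights, typical_paths):
    """Represent one snapshot as the travel times of its typical paths.

    weights: dict edge -> travel time for this snapshot.
    typical_paths: list of edge sequences (the typical paths).
    """
    return [sum(weights[e] for e in path) for path in typical_paths]

paths = [["a", "b"], ["c"]]
vec = path_length_vector({"a": 1.0, "b": 2.5, "c": 4.0}, paths)
```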
To better represent a graph, the typical path set should meet the following conditions: 1) the coverage of the typical paths should be as large as possible to represent the graph completely; 2) the similarity between typical paths should be small to avoid redundant representation; 3) the length (calculated as the total time to pass through) of the same path should vary greatly, so as to distinguish multiple snapshots. According to the observation in daily traffic that congested road segments are usually within local areas, such as the discontinuous red or yellow segments along one long path, we bound the static length (calculated as the total distance) of the candidate typical paths by a minimum l_min and a maximum l_max.
To meet the first condition of path selection, we partition the graph evenly into regions. An example of the selected paths in each region is shown in Figure 2. The number of paths selected in a region is proportional to the number of vertices in it, relative to the total number of typical paths. The paths generated at this step are the candidates. To meet the second condition, we compute the similarity between typical paths in a region and remove one of any pair whose similarity is larger than a threshold. The similarity here is the Jaccard coefficient over the edge sets. In the following, we present different ways to select the candidate typical paths.
Random Selection The simplest way is to randomly select paths in each region with path lengths between l_min and l_max. First of all, a length threshold is determined randomly. After that, a starting edge is selected randomly, and we repeatedly choose one of its neighbors at random. The path keeps growing until its length exceeds the threshold. Repeated edges are avoided for better coverage.
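One way to sketch the random growth step, assuming an edge adjacency map and static edge lengths; all names are ours, and the paper's exact validation rules may differ.

```python
import random

def random_path(adj, lengths, l_min, l_max, rng):
    """Grow one candidate typical path by a random walk over edges.

    adj: edge -> list of neighbouring edges; lengths: edge -> static length.
    Returns a path whose total static length lies in [l_min, l_max],
    or None if the walk gets stuck or overshoots l_max.
    """
    target = rng.uniform(l_min, l_max)     # random length threshold
    edge = rng.choice(list(adj))
    path, total = [edge], lengths[edge]
    while total < target:
        nexts = [e for e in adj[path[-1]] if e not in path]  # no repeats
        if not nexts:
            return None
        edge = rng.choice(nexts)
        path.append(edge)
        total += lengths[edge]
    return path if total <= l_max else None

# Toy line graph e1 - e2 - e3 - e4, every edge of static length 1.
adj = {"e1": ["e2"], "e2": ["e1", "e3"], "e3": ["e2", "e4"], "e4": ["e3"]}
lengths = {e: 1.0 for e in adj}
p = random_path(adj, lengths, 2.0, 4.0, random.Random(7))
```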
Edge-Constrained Selection To increase the representativeness of the typical paths, we select paths whose edges' coefficients of variation are no less than a threshold. However, it is inefficient to generate candidates first and validate them afterwards, so we perform the coefficient-of-variation filtering first and construct a sub-graph containing only the highly changing edges. The candidates are then generated on this sub-graph instead of the original graph.
Edge-Greedy Selection The random selection ignores the weight variation entirely, so it has to generate candidates repeatedly until qualified ones appear, while the edge-constrained selection is limited to a small sub-graph, so it suffers from high similarity between candidates. Therefore, we propose a greedy method that generates candidates considering both weight variation and path distinction. This approach also runs on the original graph and selects the starting edge randomly. When growing a candidate path, the next out-edge selected is the one with the largest coefficient of variation. To maintain the effectiveness of the path, a smaller threshold is applied to validate each edge.
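The greedy growth rule, always taking the unvisited out-edge with the highest coefficient of variation, can be sketched as follows; the names, the stopping rule, and the precomputed `cv` map are our assumptions.

```python
def greedy_path(adj, cv, start, target_len, lengths):
    """Grow a path by always taking the unvisited out-edge with the
    largest coefficient of variation (greedy sketch)."""
    path, total = [start], lengths[start]
    while total < target_len:
        nexts = [e for e in adj[path[-1]] if e not in path]
        if not nexts:
            break
        edge = max(nexts, key=lambda e: cv[e])  # most variable edge first
        path.append(edge)
        total += lengths[edge]
    return path

adj = {"a": ["b", "c"], "b": [], "c": ["d"], "d": []}
cv = {"a": 0.5, "b": 0.1, "c": 0.4, "d": 0.3}
lengths = {e: 1.0 for e in adj}
p = greedy_path(adj, cv, "a", 3.0, lengths)  # prefers "c" over "b"
```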
5.2 Vertex-Based Representation
In real life, there always exist temporal hot spots in the road network, such as unavoidable road intersections during rush hour, scenic spots on weekends, and business areas after work, while the traffic conditions in other "cold" areas stay normal at the same time. Can we detect these "key" vertices (called typical vertices) and use the aggregated traffic conditions around them to represent the traffic condition of the whole road network? Consequently, two problems need to be solved: 1) how to find the typical vertices, and 2) how to use the typical vertices to represent one snapshot for similarity measurement among different snapshots.
5.2.1 Graph Representation
Suppose the typical vertices in a road network are already known in advance. Inspired by the tree-based q-gram approach for the graph similarity join problem [19, 10], we represent the traffic condition around each typical vertex v in snapshot G_i as the set S_i(v) of vertices that can be reached from v by a breadth-first search within a fixed time period t (for example, 2 minutes). If a driver arrives at a hot spot, he or she is likely to be blocked by the traffic flow and will pass through fewer road intersections within the time period. Usually, the smoother the traffic condition around v in G_i, the larger |S_i(v)| will be, and vice versa. Since we cannot learn much from the absolute value of |S_i(v)|, and we care more about congestion than smoothness, we define the block coefficient of a vertex v in G_i as

bc_i(v) = 1 - |S_i(v)| / M(v),

where M(v) represents the maximum reachable vertex number from v among all snapshots; in other words, it reflects the non-blocked traffic condition around v. The larger the block coefficient bc_i(v), the more congested it is around v in snapshot G_i.
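The reachable set and the block coefficient can be sketched as follows. Since edges carry travel times, the time-bounded "BFS" is implemented here as a small bounded Dijkstra-style search, and the normalization bc = 1 - |S_i(v)| / max_j |S_j(v)| is our reading of the definition; the function names are ours.

```python
import heapq

def reachable_set(graph, src, budget):
    """Vertices reachable from src within `budget` travel time.

    graph: vertex -> list of (neighbour, travel_time) pairs.
    """
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd <= budget and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return set(dist)

def block_coefficient(sets_over_time, i):
    """bc_i(v) = 1 - |S_i(v)| / max_j |S_j(v)| (larger = more congested)."""
    m = max(len(s) for s in sets_over_time)
    return 1.0 - len(sets_over_time[i]) / m

graph = {"v": [("a", 1.0), ("b", 3.0)], "a": [("b", 1.0)], "b": []}
reach2 = reachable_set(graph, "v", 2.0)   # smooth traffic: v, a, b
reach1 = reachable_set(graph, "v", 1.0)   # congested: only v, a
bc = block_coefficient([reach2, reach1], 1)
```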
In the first type of vertex-based representation, we denote one snapshot by the block coefficients of the typical vertices (called the vertex-bc representation), that is, the vector of bc_i(v) over all typical vertices v. We can also represent one snapshot by the vertex sets of the typical vertices (called the vertex-set representation), that is, the collection of S_i(v) over all typical vertices v, with S_i(v) denoting the vertex set reached from v within time t in G_i.
5.2.2 Typical Vertices Selection
The difference between the hot spots and the "cold" vertices is that the traffic condition fluctuates more dramatically around the hot spots. Hence, we define the traffic fluctuation of a vertex v as the coefficient of variation of its block coefficient:

f(v) = sigma(v) / mu(v),

where sigma(v) and mu(v) denote the standard deviation and the mean of v's block coefficients over all snapshots.
To select the typical vertices, we visit the vertices in decreasing order of their traffic fluctuation and choose the top ones. During the selection, we exclude vertices that are close to already selected typical vertices, because they are likely to capture the traffic condition of an overlapping local area or have a similar traffic fluctuation pattern.
Specifically, we first compute the vertex set S(v) for each vertex v on each snapshot using BFS. However, the search does not stop at time t but at a larger time t' > t, which also generates a larger coverage set C(v). S(v) is used to compute the block coefficient and the traffic fluctuation, while C(v) is used to prevent the typical vertices from being too close to each other. As shown in Figure 2, the vertex coverage sets of the selected vertices have no intersection with each other. The procedure stops when enough typical vertices are selected. Because the complexity of each BFS depends only on the small radius t', the overall time complexity is linear in the number of vertices times the number of snapshots.
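The selection with coverage-based exclusion can be sketched as follows, assuming the fluctuation values and coverage sets are precomputed; the names are ours.

```python
def select_typical_vertices(fluctuation, coverage, k):
    """Pick up to k vertices in decreasing traffic-fluctuation order,
    skipping any vertex already covered by a selected one.

    fluctuation: vertex -> coefficient of variation of its block coefficient.
    coverage: vertex -> set of nearby vertices (the larger BFS coverage set).
    """
    chosen, covered = [], set()
    for v in sorted(fluctuation, key=fluctuation.get, reverse=True):
        if v in covered:
            continue
        chosen.append(v)
        covered |= coverage[v]
        if len(chosen) == k:
            break
    return chosen

fluct = {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.1}
cover = {"a": {"a", "b"}, "b": {"b"}, "c": {"c"}, "d": {"d"}}
chosen = select_typical_vertices(fluct, cover, 2)  # "b" is covered by "a"
```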
5.3 Graph Clustering and Snapshot Matching
In this section, we discuss the methods to solve the two sub-problems. The previous section introduced two types of graph representations, and we present how to utilize them to select the typical snapshots. Section 5.3.1 presents how the typical snapshots are determined and 5.3.2 solves the snapshot matching sub-problem with graph classification.
5.3.1 Graph Clustering
Because the graphs are represented as low-dimensional vectors, we can utilize general clustering methods to group similar ones. However, methods that tend to produce clusters of arbitrary shape, like DBSCAN [20, 21], are not suitable for this task because their errors are not guaranteed. Therefore, we use two types of methods that support a distance threshold: adaptive K-means-based clustering and agglomerative hierarchical clustering.
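A naive sketch of distance-thresholded agglomerative (single-link) clustering over the representation vectors; here 1-D points and absolute difference stand in for the real feature vectors and distance, and the function name is ours.

```python
def threshold_clustering(points, dist, tau):
    """Naive agglomerative clustering: repeatedly merge the closest pair
    of clusters (single link) while their distance is below tau."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d >= tau:
            break                      # no pair is close enough to merge
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

clusters = threshold_clustering([1.0, 1.1, 5.0, 5.2],
                                lambda a, b: abs(a - b), 0.5)
```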
5.3.2 Graph Classification
When the traffic condition changes, we receive a new snapshot G'. First, G' is converted into one of the graph representations. Then it is compared with the existing typical snapshots to obtain the most similar one. If their similarity satisfies the threshold, we use the index of the matched typical snapshot directly to answer the path queries. Otherwise, G' is considered a new typical snapshot, and a new index is built for it.
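The classification step reduces to a nearest-neighbour search with a rejection threshold. A sketch follows; the Euclidean distance and the -1 "build a new index" signal are our choices, not the paper's.

```python
def match_snapshot(vec, typical, dist, tau):
    """Return the index of the most similar typical snapshot, or -1 when
    even the best match exceeds tau (a new typical snapshot is needed)."""
    best_i, best_d = -1, float("inf")
    for i, t in enumerate(typical):
        d = dist(vec, t)
        if d < best_d:
            best_i, best_d = i, d
    return best_i if best_d <= tau else -1

def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

typical = [[1.0, 1.0], [5.0, 5.0]]
idx = match_snapshot([1.1, 0.9], typical, l2, tau=1.0)  # close to vector 0
new = match_snapshot([9.0, 9.0], typical, l2, tau=1.0)  # too far from all
```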
6 Experiments
In this section, we experimentally evaluate the performance (in terms of accuracy and efficiency) of the proposed typical snapshot selection and snapshot matching approaches using a real-life road network with real traffic conditions.
6.1 Experimental Setup
We execute the experiments on the Beijing road network with 312,350 vertices and 403,228 edges. There are 288 snapshots, sampled every 5 minutes from the traffic conditions on 1 April 2015. These snapshots are obtained from the taxi trajectories collected during that day. The original trajectory dataset contains 532,868 trajectories and 17,698,668 GPS points. We follow the same process as previous work to generate the speed profile.
All the algorithms are implemented in C++, compiled with full optimizations, and tested on a Dell R730 PowerEdge rack-mount server with two Xeon E5-2630 2.2GHz CPUs (each with 10 cores and 20 threads) and 378GB of memory. The data are stored on twelve 4TB disks in RAID-50.
6.2 Typical Snapshot Selection
6.2.1 Edge-Based Representation
Figure 3(a) shows the results of the single-edge representation test. These snapshots are clustered by K-means. The edge vector and the edge delta vector with PCA dimension reduction are denoted as EdgePCA and EdgeDeltaPCA, and we use EdgeCovaPCA and EdgeDeltaCovaPCA to denote the combination of coefficient-of-variation filtering and PCA. The performance of the single-edge representation is better than the uniform sampling method only when the number of typical snapshots exceeds 40. This is because this kind of representation only considers the edge weights and ignores the connectivity of edges and the underlying topological structure. Although the graph structure stays the same for each snapshot, it has a great impact on the location of the shortest path.
Figure 3(b) shows the performance of the aggregated-edge representation. The random/edge-constrained/edge-greedy path selections are denoted as RandomPath, ConstrainedPath, and GreedyPath. The aggregated-edge representation performs slightly better than the uniform sampling method, and the performance of the three variants is nearly the same, which indicates that the typical paths are still not sufficient to represent the snapshot. But since edge connectivity is considered in this representation, it performs better than the single-edge representation (all its lines are below the sampling baseline, while half of the single-edge lines are above it).
6.2.2 Vertex-Based Representation
For the vertex-set representation, we cluster the snapshots by hierarchical clustering, and the resulting performance is labeled vertex-set. For the vertex-bc representation, we use both K-means and hierarchical clustering, and the results are denoted as vertex-bc-Kmeans and vertex-bc-Hier, as shown in Figure 4.
In terms of vertex-based representation, the shortest path error of the vertex-set representation is always smaller than that of the vertex-bc representation, regardless of the clustering method. In the vertex-set representation, we consider both the reachable vertex number and the overlap of the vertex set distributions, which these experimental results prove reasonable. In terms of the typical vertex number, the error decreases distinctly when the number rises from 50 to 150. This makes sense because more typical vertices represent the snapshot and its traffic characteristics more completely, generating more accurate clustering results. When the typical vertex number increases from 150 to 200, the errors are almost the same for all three methods, indicating that around 150 typical vertices are enough to represent the snapshot. Moreover, fewer typical vertices improve the snapshot matching efficiency. It is interesting that the performance with 50 typical vertices is almost the same as with 200, which again shows the superiority of the vertex-set representation.
6.2.3 Time-Based Selection
In this section, we compare the performance of the time-based methods and the graph representation-based methods. For the representation methods, we choose ConstrainedPath from the edge-based category and vertex-set from the vertex-based category because they are the best in their own categories. The result is shown in Figure 5. The worst method is uniform sampling, followed by non-uniform sampling. The graph representation-based methods are all better than the time-based methods; in particular, the vertex-based method is better than the edge-based one.
6.3 Snapshot Matching
In this section, we evaluate the running time of the snapshot matching procedure. Because the graph representation-based methods have higher accuracy than the time-based methods, we only show their results. The matching time is made up of the representation time, which converts the current snapshot into one of the representations, and the similarity computation time, which compares it with the existing typical snapshots to find the most similar one. Specifically, the matching time is the representation time plus k times the per-pair similarity computation time, where k is the number of typical snapshots.
The results are shown in Table 1. The edge-based method is the fastest because it only needs edge weight concatenation. The vertex-based method is slower because it has to run hundreds of Dijkstra's searches to collect the vertex sets. Nevertheless, all of these methods finish within one second, and the matching process only needs to run once when the traffic condition changes.
7 Conclusion
In this paper, we study the problem of supporting index-based shortest path query answering in dynamic road networks. Because of the dynamic nature of real-life traffic conditions, none of the existing index structures can adapt to a truly dynamic environment. On the other hand, although the traffic condition changes over time, it does not change dramatically within a short period. Therefore, we view the dynamic road network as a series of snapshots and build indexes only on the typical ones. The first problem is how to determine whether a snapshot is typical. We propose two sets of approaches, time-based and graph representation-based, to deal with it. After that, when facing a new traffic condition snapshot, we use snapshot matching to find the most similar typical snapshot and use its index to answer the path queries. Our extensive experiments on a real-life road network with real traffic conditions validate the effectiveness of our methods.
-  M. Zhang, L. Li, W. Hua, and X. Zhou, “Batch processing of shortest path queries in road networks,” in Australasian Database Conference. Springer, 2019, pp. 3–16.
-  ——, “Efficient batch processing of shortest path queries in road networks,” in 2019 20th IEEE International Conference on Mobile Data Management (MDM). IEEE, 2019, pp. 100–105.
-  J. R. Thomsen, M. L. Yiu, and C. S. Jensen, “Effective caching of shortest paths for location-based services,” in Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data. ACM, 2012, pp. 313–324.
-  R. Geisberger, P. Sanders, D. Schultes, and D. Delling, “Contraction hierarchies: Faster and simpler hierarchical routing in road networks,” in International Workshop on Experimental and Efficient Algorithms. Springer, 2008, pp. 319–333.
-  D. Ouyang, L. Qin, L. Chang, X. Lin, Y. Zhang, and Q. Zhu, “When hierarchy meets 2-hop-labeling: Efficient shortest distance queries on road networks,” in Proceedings of the 2018 International Conference on Management of Data. ACM, 2018, pp. 709–724.
-  H. Samet, J. Sankaranarayanan, and H. Alborzi, “Scalable network distance browsing in spatial databases,” in Proceedings of the 2008 ACM SIGMOD international conference on Management of data. ACM, 2008, pp. 43–54.
-  L. Li, W. Hua, X. Du, and X. Zhou, “Minimal on-road time route scheduling on time-dependent graphs,” Proceedings of the VLDB Endowment, vol. 10, no. 11, pp. 1274–1285, 2017.
-  G. V. Batz, D. Delling, P. Sanders, and C. Vetter, “Time-dependent contraction hierarchies,” in Proceedings of the Meeting on Algorithm Engineering & Experiments. Society for Industrial and Applied Mathematics, 2009, pp. 97–105.
-  L. Li, S. Wang, and X. Zhou, “Time-dependent hop labeling on road network,” in 2019 IEEE 35th International Conference on Data Engineering (ICDE), April 2019, pp. 902–913.
-  X. Zhao, C. Xiao, X. Lin, and W. Wang, “Efficient graph similarity joins with edit distance constraints,” in 2012 IEEE 28th International Conference on Data Engineering. IEEE, 2012, pp. 834–845.
-  K. Gouda and M. Hassaan, “CSI_GED: An efficient approach for graph edit similarity computation,” in 2016 IEEE 32nd International Conference on Data Engineering (ICDE). IEEE, 2016, pp. 265–276.
-  Z. Li, X. Jian, X. Lian, and L. Chen, “An efficient probabilistic approach for graph similarity search,” in 2018 IEEE 34th International Conference on Data Engineering (ICDE). IEEE, 2018, pp. 533–544.
-  L. Chen, Y. Gao, Y. Zhang, C. S. Jensen, and B. Zheng, “Efficient and incremental clustering algorithms on star-schema heterogeneous graphs,” in 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019, pp. 256–267.
-  L. Li, K. Zheng, S. Wang, W. Hua, and X. Zhou, “Go slow to go fast: minimal on-road time route scheduling with parking facilities using historical trajectory,” The VLDB Journal, vol. 27, no. 3, pp. 321–345, 2018.
-  G. V. Batz, R. Geisberger, S. Neubauer, and P. Sanders, “Time-dependent contraction hierarchies and approximation,” in International Symposium on Experimental Algorithms. Springer, 2010, pp. 166–177.
-  L. Li, M. Zhang, W. Hua, and X. Zhou, “Fast query decomposition for batch shortest path processing in road networks,” in 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 2020.
-  X. Yan, P. S. Yu, and J. Han, “Substructure similarity search in graph databases,” in Proceedings of the 2005 ACM SIGMOD international conference on Management of data. ACM, 2005, pp. 766–777.
-  Y. Zhou, H. Cheng, and J. X. Yu, “Graph clustering based on structural/attribute similarities,” Proceedings of the VLDB Endowment, vol. 2, no. 1, pp. 718–729, 2009.
-  G. Wang, B. Wang, X. Yang, and G. Yu, “Efficiently indexing large sparse graphs for similarity search,” IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 3, pp. 440–451, 2010.
-  M. Ester, H.-P. Kriegel, J. Sander, X. Xu et al., “A density-based algorithm for discovering clusters in large spatial databases with noise,” in KDD, vol. 96, no. 34, 1996, pp. 226–231.
-  J. Gan and Y. Tao, “DBSCAN revisited: mis-claim, un-fixability, and approximation,” in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data. ACM, 2015, pp. 519–530.
-  D. Defays, “An efficient algorithm for a complete link method,” The Computer Journal, vol. 20, no. 4, pp. 364–366, 1977.