People are interested in how objects move between locations, and many research questions relate to object movement, such as human mobility, traffic flow, and animal migration. Objects of interest fall into two main types: unsplittable and splittable. Unsplittable objects like humans, animals, and vehicles can only be in one location at a time, while splittable objects like diseases, information, and ideas can appear at different locations at the same time.
In this work, we focus on the location order recovery problem for unsplittable objects. Knowing the correct location order of unsplittable objects is crucial for many research questions. For example, a medical system may record all the health services a patient has visited previously. From such health records, we can learn a patient's past health conditions, and a doctor may analyze the influence of previous treatments on the patient. In such a situation, the order of health treatments is important for the doctor's analysis. Another example is city planning. When planning city routes, people first analyze the traffic flows in the city; from vehicles' movement histories, researchers learn the popular routes between important locations. Discovering the correct movement order is the first step in solving these problems.
However, research studying unsplittable object movement is built on the assumption that a location sensor can accurately record, with high temporal resolution, when an object visits a location. Here a location sensor is simply a location recording system that records which location an object visits and when. A sensor with high temporal resolution can record the current time in seconds or even microseconds, while one with low temporal resolution may only record the current time in hours or days. Because of low temporal resolution, a series of movements may appear in the same time slot, which implies that an unsplittable object visited several different locations simultaneously. The correct order of movements is missing, which can be viewed as a missing data issue.
To demonstrate this problem, we show an example of a trail in high resolution versus the same trail in low resolution. Table I below shows one trail recorded from Apr. 9 to Apr. 12. The temporal resolution on the left is at the minute level, so we can read off the exact location sequence this object visited. On the right is the same trail, but with temporal resolution at the day level. We know that this object visited location A on April 9 and then moved from A to B on April 10. However, we can only learn that this object visited locations B and C on April 11; we do not know the movement order in the time period between April 10 and April 13. Figure 1 is a network representation of the trail in the right table. Each node in the graph is a location record. The dashed directed edges represent undetermined potential movements between April 10 and April 13. We approach this problem by finding the most probable route in the dashed network shown in Figure 1.
Even though high resolution data is often available today, there are still many cases where we cannot obtain it. A typical example is disease transmission: patients usually do not know the exact time they were infected, so it is hard to determine the transmission path of the disease without knowing the accurate infection times. Low temporal resolution is also common in survey research. Respondents in a survey often cannot recall the exact time when an event happened; as a result, it is hard for a respondent to remember the correct order of a series of events that occurred even a week earlier. Data storage and privacy concerns can also result in data collected at high resolution being collapsed to a lower resolution level.
In this paper, we study the problem of recovering the correct location order using a Markov transition network between locations. By considering the movement histories, our goal is to find the location order with the highest probability. Based on the Markov property, we show this problem is similar to the asymmetric traveling salesman problem (ATSP) in the probabilistic space. Still, our problem is distinct from ATSP. First, objects enter and leave the network of locations at many spots; hence, we need to incorporate the entering and exiting information in our framework, which can help the location order recovery. Second, an object may exit the location recording system and re-enter after a long period; in such a situation, a trail should be treated as two separate cases. Third, even though we need to find the correct order among several locations, some locations may always precede others because of time constraints. In our previous example, location B always precedes locations D and E. These features require a methodology that is distinct from ATSP.
To the best of our knowledge, our research is the first attempt to solve this location order recovery problem. We propose an efficient framework to recover the order of movements that consists of three distinct processing phases. We demonstrate that adding additional nodes to trails can effectively capture the starting and ending information, and we show that partitioning is a good strategy for overcoming re-entry issues. Using the modified and partitioned data, we show that an exact algorithm can solve the broken point problem when the number of locations in the same time slot is small. However, this method does not scale as the number of "simultaneous locations" increases, so we propose utilizing an ant colony system algorithm. We provide experimental results that demonstrate the effectiveness of this framework under various temporal resolutions and with broken points that vary in the number of locations per time slot.
The rest of this paper is organized as follows. In Section II, we summarize related work on studying object movements. A detailed description of our framework is given in Section III. We present a description of our datasets in Section IV. We describe our experimental setup and the corresponding results in Section V. Finally, the conclusions and the implications for future work are presented in Section VI.
There are various ways of representing trails. One straightforward way is to treat a trail as a location sequence. Many algorithms have been proposed to find frequent subsequences in a set of trails, such as PrefixSpan, GSP, and FreeSpan. These algorithms work well in sequential pattern mining, but they are limited in that they do not take time intervals into account. Another way is to transform trails into hierarchical tree structures. Hung proposed using the probabilistic suffix tree (PST) to represent a trail for each user. Representing a trail as a PST makes it possible to measure the similarity between two trails using edit distance; as such, communities with similar trails can be found.
The representation we choose is to build a transition network in which nodes represent locations and edges represent objects moving between locations. Transition networks have been used for similar but different problems. For example, Chen et al. tried to find popular routes leading to a given destination. They used a maximum probability product algorithm to discover the popular route, which is similar to Dijkstra's algorithm. Similarly, Luo et al. designed an algorithm to find the most frequent path in a transition network.
The research previously described is based on trails collected with high temporal resolution. In reality, datasets come at various levels of temporal resolution, and in some of them the resolution is not precise enough to determine the accurate order of movements. Insurance claim data for medical procedures often has this limitation. Such low temporal resolution results in a type of missing data problem. When researchers try to assess the movements of an object of interest, obstructions such as limited storage volume, data collapsed to preserve privacy, and the cost of higher resolution sensors will result in data that appears as though it came from a low resolution sensor. Missing data of some form is hard to avoid. Researchers in numerous fields have developed techniques for handling missing data [24, 7]. Examples include image noise reduction, user attribute inference [14, 21], network traffic recovery by tensor factorization, and compressive sensing signal reconstruction. However, the specific issue of recovering the original sequence given missing temporal resolution has not been addressed.
The missing data here is the location order lost to low temporal resolution. Even though many papers try to recover missing data in general [1, 5, 27], most of them assume that what is missing is the data value, not the order; hence previous methods cannot be applied directly to our problem. The closest study was done by Merrill et al., who found this low temporal resolution issue when analyzing health record data. They used transition networks to model patients' health records and resolved the broken point issue by determining a patient's movement opportunistically by record number, which presumes that record number is a proxy for the true order. Such a solution is not the optimal way to find the correct location order and will not work across contexts. In contrast, our approach is more general.
In the next section, we will show that recovering location order under low temporal resolution is similar, but not equivalent, to an asymmetric traveling salesman problem (ATSP) in the probabilistic space. The goal of ATSP is to find a Hamiltonian circuit of a graph whose path length is minimum. Various algorithms have been proposed to deal with ATSP. In this paper, we use the ant colony system (ACS) to deal with long location sequences with missing order; it is a distributed algorithm that uses a set of cooperating ants to find good solutions to ATSP. There are important distinctions between our location order recovery problem and ATSP. First, implicitly, several locations are more likely to be visited first or last; we need to incorporate such entering and exiting information in our framework. Second, an object may exit the location recording system and re-enter after a long period; in such a situation, a trail should be treated as two separate cases. Third, even though we need to find the correct order among several locations, some locations may always precede others because of time constraints.
Problem definition: Formally, a trail is a location sequence represented as T = ⟨(l_1, t_1), (l_2, t_2), ..., (l_n, t_n)⟩, where l_i is a location, t_i is a timestamp, and t_1 ≤ t_2 ≤ ... ≤ t_n. If there exists a subsequence (l_i, t_i), (l_{i+1}, t_{i+1}) with t_i = t_{i+1} where l_i and l_{i+1} are not the same location, then we cannot know the location order between l_i and l_{i+1}. We call such a trail broken. If two or more different locations appear in one time slot, we call this time slot a broken point. It is also possible that there are multiple broken points in one trail, and they can appear continuously. The goal of this paper is to recover the true order of movements in all the broken points.
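As a minimal sketch of this definition (assuming a trail is stored as a list of (location, timestamp) pairs sorted by timestamp; names are illustrative, not the paper's implementation), broken points can be detected by grouping records that share a timestamp:

```python
from itertools import groupby

def find_broken_points(trail):
    """Group a trail's records by timestamp; a time slot holding two or
    more distinct locations is a broken point (its internal order is lost).

    trail: list of (location, timestamp) pairs, sorted by timestamp.
    Returns a list of (timestamp, [locations]) for each broken point.
    """
    broken = []
    for ts, group in groupby(trail, key=lambda rec: rec[1]):
        locs = [loc for loc, _ in group]
        if len(set(locs)) >= 2:          # order within this slot is unknown
            broken.append((ts, locs))
    return broken

# Day-level trail: two distinct locations in slot 3 form a broken point.
trail = [("A", 1), ("B", 2), ("B", 3), ("C", 3), ("D", 4)]
```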
In this paper, we consider object movement as a Markov process in which the probability of an object moving to the next location depends only on the current location. Based on this assumption, we build a Markov transition network where an edge from l_i to l_j represents p(l_j | l_i), the probability of an object moving to location l_j given current location l_i.
Our framework can be roughly divided into three steps. The first step is adding BEGIN/END nodes as well as partitioning trails into sub-trails.
Because we also know the time intervals between consecutive location records, we would like to incorporate this information in our recovery framework. In a location recording system, an abnormally large time interval between two consecutive location records may imply that the object left the system and entered it again after a long period. For each consecutive location record pair (l_i, t_i) and (l_{i+1}, t_{i+1}), if the interval t_{i+1} - t_i is larger than a threshold, we call it a gap point. We first partition the trails at each gap point; as a result, we get several separate trails from one original trail.
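The gap-point partitioning step can be sketched as follows (the data layout and the 28-day threshold in the example are assumptions for illustration, not the paper's exact implementation):

```python
def partition_at_gaps(trail, gap_threshold):
    """Split a trail wherever the time interval between consecutive
    records exceeds gap_threshold (a 'gap point'), since a long absence
    suggests the object left the recording system and re-entered later.

    trail: list of (location, timestamp) pairs, sorted by timestamp.
    Returns a list of sub-trails.
    """
    parts, current = [], [trail[0]]
    for prev, curr in zip(trail, trail[1:]):
        if curr[1] - prev[1] > gap_threshold:   # gap point: start a new sub-trail
            parts.append(current)
            current = []
        current.append(curr)
    parts.append(current)
    return parts

# A 37-day absence between timestamps 3 and 40 splits the trail in two.
trail = [("A", 1), ("B", 3), ("C", 40), ("D", 41)]
```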
In order to capture the starting and ending information in a location sequence, we add a "_BEGIN_" node and an "_END_" node at the beginning and end of each trail. As a result, the joint probability for a location sequence changes from p(l_1) ∏_{i=2}^{n} p(l_i | l_{i-1}) to p(l_1 | _BEGIN_) ∏_{i=2}^{n} p(l_i | l_{i-1}) p(_END_ | l_n).
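A sketch of the BEGIN/END wrapping and the resulting joint (log-)probability computation; the transition dictionary and helper names here are hypothetical:

```python
import math

BEGIN, END = "_BEGIN_", "_END_"

def add_begin_end(locations):
    """Wrap a location sequence with artificial BEGIN/END nodes so the
    model can also learn which locations tend to start or end trails."""
    return [BEGIN] + list(locations) + [END]

def log_prob(seq, p):
    """Log joint probability of a wrapped sequence under a first-order
    Markov model; p maps (current, next) -> transition probability."""
    return sum(math.log(p[(a, b)]) for a, b in zip(seq, seq[1:]))

# Hypothetical transition probabilities including the artificial nodes.
p = {(BEGIN, "A"): 0.5, ("A", "B"): 0.4, ("B", END): 0.25}
```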
The second step is extracting the transition probabilities from the unbroken subsequences in the trails obtained after the first step. From those unbroken subsequences, we can estimate the probability for each location pair edge as p(l_j | l_i) = n_{ij} / n_i, where n_i is the total number of moves out of location l_i in the unbroken subsequences and n_{ij} is the number of objects moving from l_i to l_j. Based on the Markov transition probabilities, we can build a transition network among the locations in the broken points.
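The counting described above can be sketched as follows (assuming trails are lists of location labels; function names are illustrative):

```python
from collections import defaultdict

def extract_transition_probs(subsequences):
    """Estimate first-order Markov transition probabilities from the
    unbroken subsequences of all trails: p(l_j | l_i) is the number of
    observed moves l_i -> l_j divided by all moves out of l_i."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in subsequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(outs.values()) for b, n in outs.items()}
            for a, outs in counts.items()}

# Two toy unbroken subsequences over locations A, B, C.
probs = extract_transition_probs([["A", "B", "C"], ["A", "B", "A"]])
```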
After the second step, the problem is transformed into finding the location order in a broken point with the highest probability product p(l_1 | l_s) p(l_2 | l_1) ⋯ p(l_t | l_k), where l_s and l_t are the locations preceding and following the broken point respectively. Basically, this is similar to an asymmetric traveling salesman problem (ATSP), which is NP-hard, because finding a route that passes all the locations in a single broken point with the highest probability is equivalent to an asymmetric traveling salesman problem. In our problem setting, the distance from location l_i to l_j is -log p(l_j | l_i), so maximizing the probability product is equivalent to minimizing the total distance. When there are consecutive broken points, the problem becomes more complex. Figure 2 shows two examples of broken trails: one has only one broken point and the other has three continuous broken points. Each t_i is the timestamp for the location records in the corresponding layer. In the first example, an object only needs to pass all the locations in a single broken layer, which is similar to ATSP. However, in the second example, an object must pass all the locations in the first broken layer, then the second, then the third, which differs from ATSP. For simplicity, we did not draw all the possible edges between layers. The location order in a later layer depends on the movement history in the earlier layers. In this case, some distances between certain location pairs are infinity; e.g., an object cannot skip a layer, and it cannot move backwards to an earlier layer.
We compared three different algorithms to find such a route with the highest transition probability. The first one is a brute force algorithm, which simply enumerates all the possible routes. Such an exact algorithm achieves the most accurate estimation of the joint transition probability. However, the time complexity is O((m!)^n), where m is the maximum number of locations in a broken point and n is the number of continuous broken points.
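A sketch of the exact algorithm for a single broken point; the Markov model `p` is a plain dictionary here, and the names and probabilities are illustrative:

```python
import math
from itertools import permutations

def exact_recover(source, locations, target, p):
    """Enumerate every ordering of the locations in one broken point and
    keep the one maximizing p(first|source) * ... * p(target|last).
    O(m!) in the broken point size m, so only feasible when m is small."""
    def score(order):
        path = [source, *order, target]
        return math.prod(p.get((a, b), 0.0) for a, b in zip(path, path[1:]))
    return max(permutations(locations), key=score)

# Hypothetical transition probabilities; S precedes and T follows the
# broken point containing locations A and B.
p = {("S", "A"): 0.9, ("A", "B"): 0.8, ("B", "T"): 0.7,
     ("S", "B"): 0.1, ("B", "A"): 0.2, ("A", "T"): 0.3}
```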
The second algorithm we use is called ant colony system (ACS), which was developed to deal with the traditional asymmetric traveling salesman problem. As discussed above, the distance from location A to location B is -log p(B | A). Except for the determined or dashed potential edges, all other edge distances are infinity. We use the ACS algorithm to find the minimum-distance route through the broken points. The time complexity of ACS is O(a · t · m^2), where a is the number of ants chosen manually, t is the number of iterations, and m is the number of locations. We used 10 ants and 300 iterations in our algorithm. Following prior work, we set the ACS parameters to the recommended values; please refer to the original ACS paper for the meaning of these parameters. One thing to note is that even though we already know the true starting and ending locations in our problem, we still need to randomize the starting locations for the ants, because in our experiments limiting the starting location to the source node decreased the exploration ability of the algorithm and made the performance worse.
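A much-simplified ant-colony sketch for a single broken point is given below. The pseudo-random proportional rule and pheromone update follow the general ACS scheme, but the parameter names and default values are illustrative, not the paper's exact settings:

```python
import math
import random

def acs_recover(source, locations, target, p, n_ants=10, n_iter=300,
                beta=2.0, q0=0.9, rho=0.1, seed=0):
    """Simplified ant-colony search for the location order inside one
    broken point.  Edge distances are -log p(next | current), so a
    minimum-length tour has the maximum Markov probability product."""
    rng = random.Random(seed)

    def dist(a, b):
        pr = p.get((a, b), 0.0)
        return -math.log(pr) if pr > 0 else float("inf")

    tau = {}                                   # pheromone, defaults to 1.0
    def pher(a, b):
        return tau.get((a, b), 1.0)

    best, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            order, cur, todo, length = [], source, set(locations), 0.0
            while todo:
                # desirability = pheromone * (1/distance)^beta
                cands = [(n, pher(cur, n) / (dist(cur, n) + 1e-9) ** beta)
                         for n in todo]
                if rng.random() < q0:          # exploit the best edge
                    nxt = max(cands, key=lambda c: c[1])[0]
                else:                          # explore proportionally
                    total = sum(w for _, w in cands) or 1.0
                    r, acc, nxt = rng.random() * total, 0.0, cands[-1][0]
                    for n, w in cands:
                        acc += w
                        if acc >= r:
                            nxt = n
                            break
                length += dist(cur, nxt)
                order.append(nxt)
                todo.discard(nxt)
                cur = nxt
            length += dist(cur, target)
            if length < best_len:
                best, best_len = order, length
        if best is not None and 0 < best_len < float("inf"):
            # reinforce pheromone along the best tour found so far
            path = [source, *best, target]
            for a, b in zip(path, path[1:]):
                tau[(a, b)] = (1 - rho) * pher(a, b) + rho / best_len
    return best

# Same hypothetical transition probabilities as in the exact example.
p = {("S", "A"): 0.9, ("A", "B"): 0.8, ("B", "T"): 0.7,
     ("S", "B"): 0.1, ("B", "A"): 0.2, ("A", "T"): 0.3}
```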
The last algorithm is a greedy algorithm, which can be viewed as a baseline for comparison. It simply starts from the source location and repeatedly moves to the unvisited location with the highest transition probability until arriving at the target location. The time complexity of this algorithm is O(m^2), where m is the number of locations in the broken point.
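The greedy baseline can be sketched in a few lines (using the same hypothetical transition dictionary as above):

```python
def greedy_recover(source, locations, target, p):
    """Baseline: repeatedly hop to the unvisited location with the highest
    transition probability from the current one; the target is reached
    once every location in the broken point has been placed."""
    order, cur, todo = [], source, set(locations)
    while todo:
        nxt = max(todo, key=lambda n: p.get((cur, n), 0.0))
        order.append(nxt)
        todo.discard(nxt)
        cur = nxt
    return order

# Hypothetical transition probabilities with source S and target T.
p = {("S", "A"): 0.9, ("A", "B"): 0.8, ("B", "T"): 0.7,
     ("S", "B"): 0.1, ("B", "A"): 0.2, ("A", "T"): 0.3}
```

Unlike the exact and ACS algorithms, the greedy choice never reconsiders an early hop, which is why it degrades on long broken points.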
We also used a random strategy as a simple baseline. It randomly guesses the order of the locations that appear in each broken point.
The overall workflow of our framework is shown in Figure 3. To sum up, the framework consists of the following three phases:
Preparation phase: each trail is partitioned at gap points. We treat each partition as an independent trail and add BEGIN/END nodes for each partition.
Probability extraction phase: We extract transition probabilities from the unbroken subsequences in trail partitions we got in phase 1. Then we build Markov transition networks among locations in broken points.
Recovery phase: find the location order with the maximum transition probability.
There are two datasets we used in this paper. One is data supplied by Columbia University Medical Center. It contains de-identified administrative data for patients who visited in-patient and/or out-patient service sites between July 1, 2011 and June 30, 2012. There are 94885 records in this data; each record is a single visit by one patient on one specific day. The basic statistics of this dataset are shown in Table II. The temporal resolution of the health data is at the day level, so we cannot know the true location order when a patient visits several different health services on the same day. Among 5055 patients' health trails, 4241 patients have broken health trails. Because we do not know the true location order for these broken health trails, we only used the unbroken ones to test our framework.
The other dataset consists of program function call records retrieved from a Java program called FindBugs (http://findbugs.sourceforge.net/). There are 61 classes involved in this part of the program records. Each class in the program can be viewed as a location visited by the computer processor. In the original dataset, each row is a function call record in the following format:
call; source class; target class; method called; timestamp in nanoseconds
call-end; source class; target class; method called; timestamp in nanoseconds
These records represent the movement of the computer processor from the source class to the target class, or its return from the target class. The temporal resolution of this data is at the nanosecond level, so the function trail for this program is unbroken.
[Table II lists the number of records, agents, and locations for each dataset.]
There are three factors that affect the efficiency of our algorithms: the number of locations in a broken point, the number of broken points in continuity, and the total number of location records in continuous broken points. We examined these three factors in our health record data. As shown in Figure 4, most of the broken points contain only two locations and few of them appear in continuity. The average length of broken points is 2.506 with a standard deviation of 0.921. The average length of continuous broken points is 3.534 with a standard deviation of 2.802. The average number of broken points in continuity is 1.410 with a standard deviation of 0.92. Therefore, an exact algorithm is feasible for most of the broken points, whose lengths are less than 6. However, there are still a few continuous broken points with length greater than 10. (Consider a single broken point of length 10: the time complexity is 10! = 3628800, which is 1814400 times higher than for length 2.) Given a big dataset, using an exact algorithm on these long instances would consume a lot of machine time. When the length of continuous broken points gets larger, an approximation algorithm like ACS is a better option regarding running time.
As shown in Figure 5, the time interval distribution in the health record data roughly follows a power law. However, there are also weekly periodic patterns in the distribution: patients tend to make their next visit on the same weekday. Only in very rare cases is the time interval between two consecutive health services larger than one month. That is why we decided to partition health trails when the time interval between two consecutive locations was larger than 28 days.
V-A Experiment setup
Original unbroken health trail     Broken health trail after change
7/12/2011   Adult Medicine         7/11/2011   Adult Medicine
7/15/2011   Adult Medicine         7/15/2011   Adult Medicine
7/17/2011   Adult Medicine         7/17/2011   Adult Medicine
Temporal resolution (days)                2      3      4      5      6      7      8      9
# of broken trails                      164    213    250    263    268    278    294    297
# of broken points                      178    251    296    319    340    353    374    389
Avg. length of broken points          2.073  2.223  2.284  2.426  2.482  2.586  2.591  2.627
Avg. length of cont. broken points    2.121  2.364  2.449  2.651  2.749  2.917  2.910  3.042
Max. length of broken points              4      5      6      5      6      6      6      7
In order to validate the effectiveness of our location order recovery algorithms, we used the unbroken trails in our two datasets as the ground truth. We used two different strategies to create trails with artificial broken points. The first one is manually changing the temporal resolution of the unbroken trails, which can easily be achieved by collapsing timestamps into coarser time bins. We applied this method to the health dataset. An example is shown in Table III, where we broke a health trail by changing its temporal resolution from one day to two days.
The original temporal resolution of the health data is one day. In our experiment, we changed the temporal resolution of all unbroken trails and used them as our test data. We broke the health trails by creating eight different low temporal resolution conditions, ranging from 2 days to 9 days. Then we extracted the transition probabilities among locations from the resolution-changed health trails. Table IV shows the statistics of these artificial broken points under the different temporal resolutions.
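Collapsing day-level timestamps into coarser bins, as described above, can be sketched as follows (the integer day indices and location names are assumptions for illustration):

```python
def coarsen_resolution(trail, k):
    """Collapse day-level timestamps into k-day bins, simulating a sensor
    with lower temporal resolution: records falling in the same bin lose
    their relative order."""
    return [(loc, day // k) for loc, day in trail]

# Two visits on consecutive days fall into the same 2-day bin.
trail = [("Adult Medicine", 0), ("Pediatrics", 1), ("Cardiology", 4)]
```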
The second test data creation strategy is called order mutation. We first fix the number of locations in a broken point to a number k. Then we randomly select a location visiting record (l_i, t_i). After that, we set the timestamps of the k - 1 following location records to t_i and mutate the order of these k records. In this way, we can create broken points of the desired sizes. To ensure a consistent quality of transition probabilities, we randomly mutated only 20% of the location records in the program function call data. Table V shows an example of a trail with a broken point created by order mutation. We selected broken point lengths ranging from 2 to 15 and mutated 20% of the location records in the program data. Table VI shows the number of broken points for each size.
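A sketch of the order mutation procedure (the data layout and names are illustrative, not the authors' exact code):

```python
import random

def mutate_order(trail, start, k, rng=None):
    """Create an artificial broken point of size k: give the k records
    starting at index `start` the timestamp of the first of them, then
    shuffle their order.  The original order is kept as ground truth."""
    rng = rng or random.Random(0)
    trail = list(trail)
    t0 = trail[start][1]
    block = [(loc, t0) for loc, _ in trail[start:start + k]]
    rng.shuffle(block)
    trail[start:start + k] = block
    return trail

# Mutate three records starting at index 1 into one broken point.
trail = [("C1", 10), ("C2", 20), ("C3", 30), ("C4", 40)]
broken = mutate_order(trail, 1, 3)
```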
Original unbroken program trail         Broken program trail after change
Timestamp          Java class           Timestamp          Java class
553987065457672    Class 1              553987065457672    Class 1
553987065574331    Class 2              553987065574331    Class 3
553987065768508    Class 3              553987065574331    Class 4
553987065819048    Class 4              553987065574331    Class 2
553987100679470    Class 1              553987100679470    Class 1
553987100679860    Class 2              553987100679860    Class 2
Broken point size             2      5      7     10     12     15
Number of broken points    6810   2772   1980   1387   1155    925
We used two metrics to measure the performance of the location order recovery framework. The first one is the overall recovery accuracy, which is the percentage of broken points recovered with the correct location order. The second one is the average Hamming distance between the recovered location sequences in broken points and the correct ones; it measures the minimum number of substitutions required to change one location sequence into another. We applied it to the order mutation experiment to examine the effectiveness of our framework over broken points of the same size.
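Both metrics are straightforward to compute; a minimal sketch (names are illustrative):

```python
def hamming(seq_a, seq_b):
    """Number of positions at which two equal-length location sequences differ."""
    assert len(seq_a) == len(seq_b)
    return sum(a != b for a, b in zip(seq_a, seq_b))

def recovery_accuracy(recovered, truth):
    """Fraction of broken points whose full location order is recovered exactly."""
    return sum(r == t for r, t in zip(recovered, truth)) / len(truth)
```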
V-B Experiment results
We first examined the effectiveness of adding BEGIN/END locations as well as trail partitioning on the health record data. The accuracies are evaluated per broken point; i.e., 80% accuracy means that 80 out of 100 broken points are recovered correctly. In Figure 6, we can see that when we add BEGIN/END locations to the trails, we get better recovery accuracy than when we do not. This implies that adding BEGIN/END locations provides additional information for the exact algorithm as the temporal resolution decreases. We did a similar experiment to evaluate the effect of including the time interval factor. As shown in Figure 6, trail partitioning consistently and dramatically improves the location order recovery accuracy. These two cases show that simply treating the location order recovery problem as an ATSP is not optimal: adding nodes and partitioning trails dramatically increase performance compared with treating it as an ATSP and using an exact algorithm without these two methods.
We further compared the location order recovery performance of the different algorithms. Figure 7 shows the location order recovery accuracies on the health trail data with BEGIN/END nodes and partitioning. Of the three algorithms, the exact and ACS algorithms work better, and their performance is almost the same. The greedy search algorithm's accuracy is similar to the exact algorithm's when the temporal resolution is high. Interestingly, when we compared the accuracy for readmission patients with that for normal patients, we found that the recovery accuracy for readmission patients' health trails at 2-day resolution is 77.5%, while the accuracy for normal patients without readmission is 86.8%. This suggests that normal patients have more predictable movement, consistent with their being either healthy and/or having accurately diagnosed and well-managed conditions.
To examine the effectiveness of our framework on broken points of different fixed sizes, we calculated the recovery accuracy and the average Hamming distance for broken points of different sizes in the simulated trails. We only used the simulated trails in this experiment because it requires long trails, and the health trails in the test data are too short. As Table VII shows, the accuracies of the exact algorithm and the ACS algorithm are almost the same, and ACS is sometimes better when the size of the broken points is less than 12. The exact algorithm does not scale, and its time to completion prevents us from including its results for cases where the size of the broken points is large. We find that for larger broken points, the accuracies of the ACS and greedy algorithms get closer. However, on examining the average Hamming distance, we found, as shown in Figure 8, that there is a big performance difference between the ACS and greedy algorithms.
[Table VII: recovery accuracy by broken point size; per-size values not reproduced here.]
V-C Case study
In this section, a case study using the health record data is described. Knowing the sequence in which services are used in a health system is critical to improving health services, reducing costs, and improving health outcomes. One of the key questions is which services are critical wayports. In this case study we examine whether improved estimation of the sequence, obtained by resolving broken points, alters our understanding of which services are the critical wayports. We first build a directed location network, where the edge weight represents the number of patients moving from one location to another. We use inverted betweenness to measure the importance of a location; nodes high in inverted betweenness are wayports. Inverted betweenness is similar to standard betweenness centrality except that the edge weights are inverted prior to calculation. The higher the inverted betweenness, the more important that location is as a wayport.
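A brute-force sketch of inverted betweenness for a small location network is shown below; this naive all-paths enumeration is only for illustration (a real implementation would use an efficient betweenness algorithm such as Brandes'):

```python
def inverted_betweenness(edge_counts):
    """Betweenness centrality on a small directed location network after
    inverting edge weights, so edges carrying many patients act as short
    (preferred) paths; high scores mark 'wayport' services.

    edge_counts: {(src, dst): number of patients moving src -> dst}.
    """
    dist = {e: 1.0 / n for e, n in edge_counts.items()}   # invert counts
    nodes = sorted({x for e in dist for x in e})

    def all_paths(s, t, seen):
        # enumerate all simple directed paths from s to t
        if s == t:
            yield [t]
            return
        for (a, b) in dist:
            if a == s and b not in seen:
                for rest in all_paths(b, t, seen | {b}):
                    yield [s] + rest

    def length(p):
        return sum(dist[(a, b)] for a, b in zip(p, p[1:]))

    score = {v: 0.0 for v in nodes}
    for s in nodes:
        for t in nodes:
            if s == t:
                continue
            paths = list(all_paths(s, t, {s}))
            if not paths:
                continue
            best = min(length(p) for p in paths)
            shortest = [p for p in paths if length(p) == best]
            for v in nodes:
                if v not in (s, t):
                    # fraction of shortest s->t paths passing through v
                    score[v] += sum(v in p for p in shortest) / len(shortest)
    return score

# Toy network A -> B -> C: B is the only intermediate node, i.e. the wayport.
scores = inverted_betweenness({("A", "B"): 5, ("B", "C"): 3})
```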
Rank   Truth                    With recovery            Without recovery
2      Internal Medicine        General Cardiology       Electrophysiology
3      General Cardiology       Internal Medicine        Echocardiography
4      Adult Medical-Surgical   Emergency                Circulatory physiology
7      Circulatory physiology   Circulatory physiology   Adult Medical-Surgical
10     Pediatric Cardiology     Pediatric Cardiology     Unknown
In Table VIII, we show the top ranked services, i.e., which services are the best wayports, under three conditions. Condition 1 uses only those trails in the health record dataset that are unbroken; it can be viewed as the true rank. For condition 2, we changed the temporal resolution to 9 days, which resulted in 297 broken trails, ignored the broken points, and obtained the inverted betweenness rank without recovery. Finally, for condition 3, we applied our framework to these broken trails and obtained the rank after order recovery. As shown in the table, the true rank is very similar to the rank after location order recovery: the Spearman rank-order correlation coefficient between these two ranks is 0.976, much larger than the correlation between the true rank and the rank without recovery (0.491).
VI Conclusion and Discussion
In this paper, we studied location order recovery for trails with low temporal resolution. This is a missing data issue that prevents us from knowing the correct movement order. After defining the concept of a broken point, we examined the prevalence of broken points in a real health record dataset.
In our experiments, we designed two strategies to create artificial broken points on two datasets, so that we know the ground truth for these broken points. Experiments on these two datasets have shown the effectiveness of our framework. We showed that adding BEGIN/END nodes to the original trails effectively captures the beginning and ending information, and that trail partitioning dramatically increases location order recovery accuracy by overcoming the re-entry issue. After these two procedures, we find the transition route with the highest probability. From the distribution of the lengths of continuous broken points, we showed that an exact algorithm is feasible for a large portion of them. However, there are still a few broken points with extra-large sizes that cannot be handled efficiently by an exact algorithm, hence we proposed an ant colony algorithm to recover the true location orders in those very long broken points. In our case study, we showed that the location recovery framework can effectively identify the important locations, and that the result is highly correlated with the rank obtained from trails with high temporal resolution.
As with any research, there are limitations. One limitation is that some error patterns occur frequently in our experiments that our framework cannot handle; for example, our approach cannot distinguish between two location sequences whose joint probability products are the same. A second limitation is that our framework is based on first-order Markov transition probabilities between locations; using higher-order Markov transition probabilities may improve the recovery accuracy. The third limitation is the Markov assumption itself. Future work might consider how to model trails without the Markov assumption.
This paper presented a framework for sequence recovery when portions of a sequence are missing or collapsed due to a lack of temporal resolution. Such a framework is applicable to many scenarios involving object movements, such as human mobility research and traffic flow studies. Despite its limitations, this framework provides a powerful approach for accurately inferring order despite missing data. Future work should explore the application of this method to diverse datasets. An advantage of the proposed framework is that researchers can plug a variety of algorithms designed for the asymmetric traveling salesman problem into it, which makes it more generalizable and gives it the potential for further performance improvements. We anticipate that such improvements, in conjunction with this framework, will reduce the fragility of machine learning techniques that are predicated on knowing sequences and so pave the way for improved prediction.
The authors would like to thank all members of the Center for Computational Analysis of Social and Organizational Systems for their valuable advice. These findings were derived from data supplied by Columbia University Medical Center. The authors thank Jacqueline Merrill, Professor of Nursing in Biomedical Informatics, and her research team for their advice and insight.