Scalable Spatial Scan Statistics for Trajectories

06/04/2019 · by Michael Matheny, et al. · The University of Utah

We define several new models of anomalous regions among enormous sets of trajectories. These are based on spatial scan statistics, and identify a geometric region which captures a subset of trajectories that is significantly different in a measured characteristic from the background population. Each model is determined by how an overlapping trajectory contributes to a geometric region: the contribution can be the full trajectory, proportional to the time spent in the spatial region, or dependent on the flux across the boundary of that region. Our methods are based on, and significantly extend, a recent two-level sampling approach which provides high accuracy at enormous scales of data. We support these new models and algorithms with extensive experiments on millions of trajectories, as well as theoretical guarantees.


1 Introduction

Large data sets of trajectories have driven much recent research with the goal of understanding the intentions and causes of various correlations hidden within the data. These data objects are structured to capture movements, interactions, and possibly the intentions of humans and other objects. Yet, when one considers these trajectories in unison, they usually appear as just a tangled mess. And as the data grows (e.g., millions of objects with billions of data points), it does not aggregate and statistically simplify; rather it just becomes more unwieldy and unmanageable.

We consider new methods that use the underlying geometry of the trajectories to identify statistically significant spatial anomalies. Critically, these models deviate from density-based models (like DBScan [9]) which would only identify populous regions (of course a lot of traffic exists in New York or Beijing!). Rather, our new models and algorithms identify geometric regions where some labeled aspect (e.g., trajectories of sick people) significantly deviates from what is expected, or is in contrast to the trajectories of the background population. And our approaches work at enormous scale – required for modern large data sets, and for the statistics to be meaningful.

Developing such models for trajectories, comparing to a background population, is highly motivated: identifying significant population or demographic shifts, pinpointing the likely location responsible for disease due to prolonged exposure among a dynamic population, or geolocating a nefarious wifi access point affecting cell phones which transiently pass by. There are no existing mechanisms for addressing some of these goals specific to trajectory data, and certainly not for massive data sets.

When the base objects are points (not trajectories), such comparative anomaly tasks are typically resolved with a Spatial Scan Statistic (SSS) [14]. This is one of the most common tasks within Geographic Information Science, with applications to detecting hotspots with elevated levels of disease, crime, or demographic traits [14, 10, 22, 26, 4]. These identify a region $A^* = \arg\max_{A \in \mathcal{S}} \Phi(A)$ with maximum log-likelihood score, out of a large prescribed family of regions $\mathcal{S}$, and have been shown empirically and theoretically to have high statistical power [14, 4]. But this search over all regions (usually associated with a geometric family of shapes) is computationally onerous, and the most common software SaTScan [15] is only able to scale by restricting the class of regions to ones specifically chosen by the user. Moreover, there are only a few limited extensions towards trajectory data [23, 17], and these rely heavily on heuristic aggregation.

Our Contributions. We introduce three new models to allow for geometric analysis of trajectory data. These models (derived in Section 3) identify geometric regions where many trajectories of interest have passed through (full model), spent time in (partial model), or began or ended in (flux model). These models are new and well-motivated, but until now they have not been computationally feasible objectives at scale.

We design sampling and scanning algorithms (in Section 6) that allow for extremely scalable methods for identifying anomalous patterns captured in the above geometric models. Our methods are not limited by the data size; instead they depend on the spatial and combinatorial error to which researchers are willing to approximate the final statistical quantity. The scalability as well as statistical power of these models are demonstrated on enormous data sets containing up to several million trajectories with over 1 billion waypoints (in Section 7).

Of the three models, we develop reductions for the flux and partial models to a scalable point-based scanning framework (see Section 6). However, for the third model, full, such reductions are not possible, and considerable new scanning and sampling mechanisms are developed: Section 4 designs compact coresets for each trajectory that preserve spatial guarantees and convert trajectories to labeled point sets. Section 5 provides trajectory-specific sampling theorems (VC-dimension bounds for trajectory-based range spaces) necessary to apply the underlying sampling approach. Section 6.2 develops algorithms that use new data structures to scan the data resulting from trajectory coresets and their samples in a scalable way. The overall accuracy and runtime bounds for our sampling, coreset, and scanning algorithms are ultimately summarized in Table 3.

2 Preliminaries and Overview

We begin with an overview of the mathematical modeling, geometric, statistical, and algorithmic preliminaries to frame our new contributions.

Trajectory models. We model a trajectory $\tau$ via waypoints as the tracing out of the ordered sequence of connected segments $\langle \overline{q_1 q_2}, \overline{q_2 q_3}, \ldots, \overline{q_{k-1} q_k} \rangle$, with each waypoint $q_i \in \mathbb{R}^2$. Trajectory $\tau$ has total arclength $L(\tau)$ (or when $\tau$ is clear, just $L$). Computationally, and also as a way of defining regions, it is usually necessary to consider a trajectory via just some set of ordered waypoints constructed via one of the methods discussed in Section 4, instead of as an ordered sequence of connected segments. For the partial model the parameterization of the trajectories will be important, and we use arclength by default. We could alternatively use a time-based parametrization, but we otherwise do not explicitly model timing information in this paper, leaving this potential extension to future work.

Range spaces. To study spatial anomalies applied to trajectories we need a way to model how or when a trajectory interacts or intersects with a potential region of interest. To do this we review the definition of a range space. A range space $(X, \mathcal{A})$ is a pair consisting of a set of objects $X$ (the ground set) and a set $\mathcal{A}$ of subsets of $X$ (the ranges), that is a subset of all possible subsets $2^X$. Range spaces are essential objects in both data structures (e.g., for range searching [2]) and machine learning (for sample complexity of learning [27, 11]). For example, classically let $X$ be points in $\mathbb{R}^2$, and let $\mathcal{A}$ be the subsets induced by intersection with a disk.

For this paper, we will define new families induced by a set of shapes $\mathcal{S}$, focusing on those induced by halfspaces, disks, or axis-aligned rectangles. In particular, we are interested in when the ground set is a set of trajectories $T$ (or derived appropriately from $T$). The new models (we will define in Section 3) will specify the definition of intersection $\tau \cap A$, for prescribed spatial regions $A \in \mathcal{S}$, to induce a set $T \cap A$. That is, each induced subset of trajectories corresponds with a potentially anomalous region of interest.

Statistical discrepancy. In all of the forthcoming models, each trajectory $\tau$ has a recorded value $m(\tau)$ and a baseline value $b(\tau)$. It is typically sufficient to consider that all trajectories have $b(\tau) = 1$ (that is, they are all part of an observed background population) and that $m(\tau) = 1$ only for trajectories in a measured set $M \subset T$, where $M$ are the trajectories of interest – although these can change in accordance with the objective function in a variety of statistical settings [15, 1]. Even when we study parts of trajectories and segments of trajectories, these traits (especially the recorded value $m(\tau)$) are held fixed for the entire trajectory.

Then let $m_A$ (resp. $b_A$) be the fraction of all recorded trajectories (resp. all trajectories) within the range $A$. Modeling as Poisson, the log-likelihood of the recorded rate being different than the baseline rate is $\Phi_K(m_A, b_A) = m_A \ln \frac{m_A}{b_A} + (1 - m_A) \ln \frac{1 - m_A}{1 - b_A}$ when $m_A > b_A$, and $0$ otherwise. However, in general, we can computationally reduce to a simpler subproblem [1, 19] with a “linear” model, e.g., $\Phi(m_A, b_A) = |m_A - b_A|$. Ultimately, the SSS is $\Phi^* = \max_{A \in \mathcal{A}} \Phi(m_A, b_A)$. It uses the magnitude of $\Phi^*$ (and permutation testing) to determine if the most anomalous region is statistically significant. We will specifically be interested in an approximate variant:

Definition 2.1 ($\varepsilon$-Approx. Spatial Scan Statistic)

Consider a range space $(T, \mathcal{A})$ defined by a family of shapes $\mathcal{S}$, and a discrepancy function $\Phi$. An $\varepsilon$-approximate spatial scan anomaly is a shape $\hat{A} \in \mathcal{S}$ so that

$\Phi(\hat{A}) \ge \Phi(A^*) - \varepsilon$, where $A^* = \arg\max_{A \in \mathcal{S}} \Phi(A)$. Then the corresponding $\varepsilon$-approximate spatial scan statistic is $\Phi(\hat{A})$.
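To make the discrepancy functions above concrete, here is a minimal Python sketch of both the Kulldorff log-likelihood and the simpler linear model, assuming $m_A$ and $b_A$ have already been computed as fractions in $(0,1)$ (the function names are ours, not from the paper's library):

```python
import math

def kulldorff(m_a, b_a):
    """Poisson log-likelihood discrepancy; 0 unless the recorded
    rate m_a exceeds the baseline rate b_a."""
    if m_a <= b_a or not (0 < m_a < 1 and 0 < b_a < 1):
        return 0.0
    return (m_a * math.log(m_a / b_a)
            + (1 - m_a) * math.log((1 - m_a) / (1 - b_a)))

def linear(m_a, b_a):
    """The simpler "linear" discrepancy used as a computational subproblem."""
    return abs(m_a - b_a)
```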

Figure 1: A sparse sample of the data defines a set of regions and these regions can be efficiently measured on a larger sample that maintains the density of the original data set.

Two-level sampling. The algorithmic goal in this paper is to find an $\varepsilon$-approximate spatial scan anomaly for trajectory range spaces (with forthcoming definitions in Section 3). Our scanning algorithms are based on recent work for calculating $\varepsilon$-approximate spatial scan statistics on simple geometric shapes over point sets at scale [20, 18]. The main idea is to construct a two-level sampling of a large data set into sets $N$ and $S$; see Figure 1. The larger “sample” set $S = S_m \cup S_b$ (where $S_m$ is the recorded set and $S_b$ is the baseline set) is used as a proxy for the density of the data in any range, and the smaller “net” set $N$ is used to define the combinatorial set of ranges we will consider. That is, $N$ defines a subset of shapes $\mathcal{S}_N \subset \mathcal{S}$: those which include combinatorially distinct subsets of points in $N$; then $S$ is used to estimate the value $\Phi(A)$ via $m_A$ and $b_A$ measured on $S$. To achieve error $\varepsilon$ in $\Phi$, we (roughly) need to set $|N| = O(\nu/\varepsilon)$ and $|S| = O(\nu/\varepsilon^2)$, where $\nu$ is the VC-dimension of the range space [18]. Then there exist shape-specific methods to quickly scan and evaluate the ranges (induced by $N$) and values (from $S$) [18].
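A minimal sketch of this two-level construction, assuming a flat list of data objects and ignoring the shape-specific scanning step (constants are illustrative, not the tuned values from [18]):

```python
import random

def two_level_sample(data, eps, nu=3, seed=0):
    """Draw the small "net" N (defines candidate ranges) and the larger
    "sample" S (estimates m_A, b_A in any range) at the rough sizes the
    framework requires: |N| ~ nu/eps and |S| ~ nu/eps^2."""
    rng = random.Random(seed)
    n_size = min(len(data), int(nu / eps))
    s_size = min(len(data), int(nu / eps ** 2))
    N = rng.sample(data, n_size)
    S = rng.sample(data, s_size)
    return N, S
```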

Remaining challenges. Several significant tasks remain to adapt this framework to the new trajectory scanning models defined next. First, we require a way to spatially approximate the trajectories – the raw trajectories are either unnecessary or too difficult to work with; we describe our methods in Section 4. Second, we need to formalize the definitions of regions from the set $\mathcal{S}$, and bound the VC-dimension of the resulting range spaces; we do so in Section 5. Next, for the partial and flux models, we devise reductions to point set variants in Section 6.1. Given these reductions, we can invoke existing fast scanning algorithms [18]. However, for the full model such reductions are not possible, and we need to develop new methods to quickly iterate over all these ranges on $N$; that is, how to quickly “scan” over the net. This varies with the geometric properties of the shapes; we focus on halfspace, disk, and rectangle ranges, and devise new scanning methods for each of them in Section 6.2.

3 New Trajectory Range Models

We introduce three new models of how to define range spaces when applied to trajectory-derived ground sets and geometric ranges which define the region of interest. These models capture: (i) regions with a high percentage of measured trajectories passing from inside to outside (the flux model), (ii) regions with a high percentage of the total arclength of measured trajectory data (the partial range model), and (iii) regions with a high percentage of measured trajectories passing through them (the full range model).

Figure 2: Models of how trajectories intersect (in red) with circular shape (in green). From left to right: point-based model; partial range model; full range model; flux model.

3.1 Flux Model

The simplest version of this problem is the flux model. We search for a shape $A$ where a proportionally high number of trajectories start inside $A$ and end outside of it, or vice versa.

Mathematical Definition. In this setting, we can reduce each trajectory to two waypoints: the first and the last. To satisfy a range intersection, the first must be inside and the last must be outside (or vice-versa, with opposite effect). That is, we define two sets $T_B$ (the beginning points) and $T_E$ (the end points). Then we attempt to find the range $A$ where many trajectories have their beginning point inside $A$ but their end point outside, or vice-versa. Since we only care about the endpoints of trajectories, we can directly reduce this problem to highly-scalable point-based algorithms (see Section 6.1).
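A small sketch of the flux count for a single candidate shape, where `inside` is any point-membership predicate for the shape (the helper names are ours):

```python
def flux_count(trajectories, inside):
    """Count trajectories whose endpoints straddle the shape boundary.
    Each trajectory is a list of (x, y) waypoints; `inside` maps a
    point to True/False for the candidate shape A."""
    in_to_out = out_to_in = 0
    for t in trajectories:
        s, e = inside(t[0]), inside(t[-1])
        if s and not e:
            in_to_out += 1
        elif e and not s:
            out_to_in += 1
    return max(in_to_out, out_to_in)
```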

Motivating scenarios. This model arises when finding a boundary that differentiates two types of traffic. For instance, say a city would like to place a toll fee that mostly affects tourists (using rental cars as a proxy) versus locals (using other GPS databases as a proxy). The shape boundaries with high flux are potentially good choices. Alternatively, if trajectories document addresses of people over time (for instance the Utah Population Database, which tracks addresses for 50 years), this can be used to identify significant migratory patterns of parts of the population. Regions of unusually high flux over any fixed window of time may also be useful for managing crowd control in packed sporting or music events.


Flux Model Example

In the image above, let the trajectories represent fare paths of a Kakao Taxi driver in Seoul over a day. The red trajectories represent fares with significant tips, and the blue ones all other fares. Identifying a region (in green) where red (high-tip) routes started, and left from, without too many blue routes of the same form, will indicate a good place to try to find profitable customers. This would correspond with the disk maximizing $\Phi$ under the flux model. (Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL.)

3.2 Partial Range Model

In the partial range model, we want to find a shape $A$ where the weight a trajectory contributes is proportional to the normalized length of the trajectory inside. A trajectory's intersection with a shape is fractional; specifically, it is the fraction of the total arclength of $\tau$ within $A$. That is, if half of a trajectory's arclength intersects $A$, its contribution is $1/2$. The contribution depends on the parametrization of $\tau$. This could be by arclength, by time, by fuel used, or any other quantity. In this paper we simply use arclength in our experiments.
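A numeric sketch of this fractional contribution for a disk, approximating the arclength fraction by evenly sampling along the polyline rather than exactly clipping each segment (a simplification we make for brevity; helper names are ours):

```python
import math

def frac_inside_disk(waypoints, center, radius, samples=200):
    """Approximate the fraction of a polyline's arclength inside a disk."""
    segs = list(zip(waypoints, waypoints[1:]))
    lens = [math.dist(a, b) for a, b in segs]
    total = sum(lens)
    if total == 0:
        return 0.0
    inside = 0
    for i in range(samples):
        target = (i + 0.5) * total / samples  # arclength position of sample
        for (a, b), l in zip(segs, lens):
            if target <= l:                   # sample falls on this segment
                t = target / l
                p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
                inside += math.dist(p, center) <= radius
                break
            target -= l
    return inside / samples
```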


Partial Range Model Example

Let the image above model the daily commutes of patients that have checked into Paris hospitals. The red curves correspond with patients who reported serious lung and stomach symptoms when they arrived at work, consistent with exposure to a dangerous chemical. The blue trajectories are other patients with more standard and mainly benign symptoms. A region where more red trajectories spent significant time, and blue trajectories did not spend much time, would help pinpoint the region with the largest potential to be the source of these symptoms. Finding such a region corresponds with the disk that maximizes $\Phi$ under the partial range model. (Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL.)

Mathematical Definition. More formally, the set $\tau \cap A$ corresponds to the set of points along $\tau$ which are inside $A$. The true ground set is then $X = \bigcup_{\tau \in T} \tau$; that is, $X$ is a subset of $\mathbb{R}^2$ so any $x \in X$ lies on some $\tau \in T$. This corresponds to an infinite (not combinatorially defined) range space [27]. A random sample of points from this infinite point set falls into the existing theory from [18, 20]. Therefore existing scanning algorithms can be directly applied, even ignoring the relationship between these sampled points and the trajectories, since the ground set is a subset of $\mathbb{R}^2$.

Motivating scenarios. The partial range model is important to help automatically identify regions which statistically lead to some measured characteristic, proportional to how long the object generating the trajectory spends in that region. For instance, consider a mysterious sickness that health officials suspect is tied to prolonged exposure to some chemical event. By finding a compact region where many of the afflicted people spend a considerable amount of time, compared to all people in that region, this will provide a candidate location for the epicenter of that exposure. Alternatively, consider a measured set of unprofitable (or highly profitable) taxi/Uber drivers; can we identify regions of a city where they spend a proportionally higher percentage of their time? These and similar scenarios are directly modeled by the partial range scan statistic, and hence demand scalable solutions.

3.3 Full Range Model

In the full range model, we seek the shape which intersects the most trajectories of interest, compared to some baseline set of trajectories. A trajectory's contribution to a range is not proportional to the intersection size, but is instead all or nothing.


Full Range Model Example

Consider the set of taxi trajectories from a day in Beijing in the image above. The red trajectories may represent taxis which reported navigation systems malfunctioning, while the blue ones did not. You suspect a jamming device, with a fixed radius, was placed by a competitor. Detecting the circular shape (in green) which maximizes the fraction of red trajectories which pass through it, while minimizing the fraction of blue, is the most likely model for where a jamming device could have been located. This corresponds precisely to finding the disk with the largest $\Phi$ value under the full range model. (Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL.)

Mathematical Definition. The contribution of a trajectory $\tau$ in a shape $A$ is binary. It is $1$ if there is any point where the trajectory enters the shape, and is $0$ only if the trajectory is never inside the shape. As such, ranges are defined $T \cap A = \{\tau \in T \mid \tau \cap A \neq \emptyset\}$, for some shape $A \in \mathcal{S}$. There is no need to parameterize the trajectories in this scenario.

We will typically simplify trajectories by creating a small set of labeled points $Q_\tau$ to represent each one. In this scenario, we say $\tau$ intersects shape $A$ if any $q \in Q_\tau$ intersects $A$.

Trajectory simplification is in fact necessary for several reasons: trajectories with an unbounded number of waypoints do not satisfy VC-dimension based approximation guarantees (see Section 5), for certain range spaces there is no obvious way to define a set of regions to enumerate when considering line segments directly, and it is computationally easier than checking for full intersection with $A$ in most scenarios. Unfortunately this reduction does not allow direct application of some of the previous approaches in [18], since it needs to count a trajectory as intersecting a range only once, even if multiple of its points are inside.
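The once-per-trajectory counting rule is simple to state in code; here is a minimal sketch, where `point_sets` holds one simplified point set $Q_\tau$ per trajectory and `inside` is a membership predicate for the shape (names are ours):

```python
def full_model_count(point_sets, inside):
    """Count each trajectory at most once, even if several of its
    representative points land in the shape."""
    return sum(1 for Q in point_sets if any(inside(q) for q in Q))
```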

Motivating scenarios.

This model arises when just the fleeting intersection with a spatial region is enough to trigger a measured event for that entire trajectory. Consider a set of cars with slow leaks from nails in their tires (found at the end of the day); a region many of them passed through would more likely be someplace with nails on the road. Or consider a set of people's cell phones which have a virus, suspected to be contracted when they pinged some wifi access point; then just passing near that access point may be enough to trigger the event, and finding a region with a high density of cases of the virus would provide a probable location of the offending access point. Or consider tracking a set of animals (e.g., cows, migrating birds) where a subset become sick; then a full range spatial anomaly may indicate a contaminated watering hole.

4 Spatial Approximation

As described, for a shape $A$ and trajectory $\tau$, the critical operation in the full model is determining whether $\tau \cap A \neq \emptyset$. This is far more efficient if we can approximate $\tau$ by a set of approximate waypoints $Q_\tau$, and then use $Q_\tau \cap A$ as a proxy for $\tau \cap A$. The critical aspects of such an approximation are to keep the size of the approximation small, even for long trajectories, and to ensure that the answer to the intersection between a shape $A$ and the point set $Q_\tau$ approximates that with $\tau$.

Specifically, we desire an $\varepsilon$-spatial approximation (or just $\varepsilon$-approximation for short). For a trajectory $\tau$ and any range $A \in \mathcal{S}$, we say $Q_\tau$ is an $\varepsilon$-spatial approximation of $\tau$ under two conditions (see Figure 3(Left)):

  1. (no false positives) If $\tau$ does not intersect $A$, then $Q_\tau$ does not intersect $A$.

  2. (limited false negatives) If for all unit vectors $u$, the shifted trajectory $\tau + \varepsilon u$ intersects $A$, then $Q_\tau$ intersects $A$; here $\tau + \varepsilon u$ is a shift of the entire trajectory in direction $u$ by $\varepsilon$.

Figure 3: Left: Converting a trajectory into a set of points which approximates its shape preserves an $\varepsilon$-spatial guarantee, since we can find a nearby disk that still intersects the trajectory. Right: The relation between the relevant quantities in the proof of Lemma 4.3, for some line segment on the boundary of one of our kernels.
Figure 4: Full scanning coresets from left to right: DP for halfplanes, Approx Hull for halfspaces, Grid Kernel for disks at two different radii, and Even.

We next list a series of trajectory approximations we study; all bounds assume all trajectories lie in a $[0,1]^2$ domain, otherwise $\varepsilon$ is scaled accordingly.

All Waypoints. This baseline simply sets $Q_\tau$ to all waypoints of $\tau$. This does not deal with long segments well, and does not achieve an $\varepsilon$-approximation except for halfspaces, but may be appropriate for data collected at regular and short intervals.

Random Sampling. This baseline randomly samples points from the segments of $\tau$ proportional to arclength. Let $L$ be the total arclength of a trajectory $\tau$. Based on VC-dimension [27] and $\varepsilon$-net [11] arguments, if the range space has constant VC-dimension (as with disks, halfspaces, and rectangles), then for $|Q_\tau| = O\big(\frac{L}{\varepsilon}\log\frac{L}{\varepsilon}\big)$, with constant probability it is an $\varepsilon$-approximation.

Even. In this sketch, we select $O(L/\varepsilon)$ points evenly spaced according to arclength, where again $L$ is the total arclength of the trajectory. This deterministically creates an $\varepsilon$-approximation. To preserve the proportionality property for the partial case, we treat the trajectories as if they are chained together to adjust the first selected point from each trajectory – so the first point on trajectory $\tau_{i+1}$ is placed at the arclength offset remaining from the last selected point on trajectory $\tau_i$.
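A minimal version of the Even sketch for a single trajectory (the cross-trajectory chaining is omitted; `eps` is the arclength spacing):

```python
import math

def even_simplify(waypoints, eps):
    """Select points spaced eps apart along arclength, plus the first point."""
    out = [waypoints[0]]
    carried = 0.0  # arclength accumulated since the last emitted point
    for a, b in zip(waypoints, waypoints[1:]):
        l = math.dist(a, b)
        pos = eps - carried
        while pos <= l:                      # emit points along this segment
            t = pos / l
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
            pos += eps
        carried = (carried + l) % eps
    return out
```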

DP algorithm. The Douglas-Peucker (DP) algorithm [12] is frequently used in practice as a compression step for trajectory simplification. This method iteratively removes waypoints from the original trajectory in a greedy fashion until removing another one would cause the Hausdorff distance between the original and the simplified trajectory to exceed a chosen parameter $\varepsilon$. This ensures, for instance, that no query shape can intersect the original trajectory at a depth $\varepsilon$ without also intersecting the simplified trajectory. For halfspaces this provides an $\varepsilon$-approximation (see Lemma 4.1). However, for rectangles and disks, this guarantee only holds over the trajectory's segments, not the retained waypoints $Q_\tau$, so it does not provide an $\varepsilon$-approximation as desired.
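For reference, a compact recursive implementation of the classic DP simplification (the standard algorithm; the code itself is our sketch):

```python
import math

def douglas_peucker(pts, eps):
    """Simplify a polyline to within Hausdorff distance eps of the original."""
    def dist_to_seg(p, a, b):
        dx, dy = b[0] - a[0], b[1] - a[1]
        if dx == dy == 0:
            return math.dist(p, a)
        t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        return math.dist(p, (a[0] + t * dx, a[1] + t * dy))

    if len(pts) < 3:
        return list(pts)
    # find the waypoint farthest from the segment joining the endpoints
    i, d = max(((i, dist_to_seg(pts[i], pts[0], pts[-1]))
                for i in range(1, len(pts) - 1)), key=lambda x: x[1])
    if d <= eps:
        return [pts[0], pts[-1]]
    return douglas_peucker(pts[:i + 1], eps)[:-1] + douglas_peucker(pts[i:], eps)
```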

Convex Hull. This sets $Q_\tau$ to the vertices of the convex hull of the waypoints of $\tau$. This is a $0$-approximation (it has no error) for halfspaces.

Lemma 4.1

A halfspace intersects a trajectory if and only if it intersects at least one of its waypoints.

A halfspace $h$ intersects part (but not all) of a trajectory if and only if its boundary intersects one (or more) of its segments. If it intersects all of the trajectory, it must contain all waypoints. For a boundary plane to intersect a segment, it must be that one of the segment's endpoints is inside the halfspace and the other is not, since the segment is a convex object with these points as its only extreme points. Hence, we can check intersection of a trajectory with $h$ by checking if any of its waypoints are in $h$; otherwise all of the waypoints, and the entire trajectory, must be outside of $h$.

Approx Hull. We create $Q_\tau$ as the $\varepsilon$-kernel coreset which approximates the convex hull of $\tau$ [3]. This provides an $\varepsilon$-approximation for halfspaces with only $O(1/\sqrt{\varepsilon})$ points, independent of arclength, but with the restriction that all trajectories are in $[0,1]^2$.

Lifting and Convex Hull for Disks. For disks, there is a reduction via a data transformation (the Veronese map) that provides similar approximations as the convex hull approach for halfspaces. Given a point set $P \subset \mathbb{R}^2$, the intersection of that point set with a disk is preserved under a map to $\mathbb{R}^3$ where disks are mapped to halfspaces. For $p = (p_x, p_y)$ we replace it with $(p_x, p_y, p_x^2 + p_y^2)$. Every disk becomes a halfspace in $\mathbb{R}^3$ which contains the same subset of points as the disk did in $\mathbb{R}^2$.

After this transformation, set $Q_\tau$ as the points on the convex hull (or in the $\varepsilon$-kernel coreset [3]) in $\mathbb{R}^3$. However, because Lemma 4.1 does not apply to disks, this does not have any approximation guarantee. That is, a disk in $\mathbb{R}^2$ which intersects a segment but no waypoints transforms to a halfspace which contains part of a segment (these segments are now quadratic curves, and are not straight), but still no waypoints.
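The lifting itself is one line; to see why it works, note $p$ lies in the disk of center $(a,b)$ and radius $r$ exactly when $p_x^2 + p_y^2 - 2 a p_x - 2 b p_y + a^2 + b^2 - r^2 \le 0$, which is linear in the lifted coordinates $(p_x, p_y, p_x^2 + p_y^2)$:

```python
def lift(points):
    """Veronese map: (x, y) -> (x, y, x*x + y*y).  A disk in the plane
    corresponds to a halfspace in 3D containing exactly the lifted
    images of the points inside the disk."""
    return [(x, y, x * x + y * y) for x, y in points]
```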

Lemma 4.2

The number of cells of a regular grid, with grid cells of side length $g$, that a polyline of arclength $L$ can enter is $O(1 + L/g)$.

We will group cells into 9 groups and analyze each separately. Each cell is in the same group as the cells two hops over in one of the 8 directions (left, right, up-left, etc.). Each cell touches 8 other cells which are not in its group, and each one is in a distinct group. Now within each group, to intersect a cell and enter another one, the trajectory must travel a distance of at least $g$, since it will have to pass the complete vertical or diagonal extent of a cell. Thus, within a group, a trajectory of length $L$ can touch at most $1 + L/g$ cells. And in 9 groups, the total number of cells is at most $9(1 + L/g)$.

Grid Kernel. For disks it makes sense to adjust the approximation for different radii, as smaller values of $\varepsilon$ are needed for very large radii disks, which can potentially intersect many trajectories. By constructing multiple approximations and scanning each with different radii, significant speedups can be realized in practice.

We adjust the gridding technique for disks, and specifically for a family $\mathcal{S}_r$ which only considers disks of radius at least $r$. We consider grid cells of side length $g$ proportional to $r$. Within each grid cell we retain multiple points in $Q_\tau$, specifically those on an $\varepsilon$-kernel coreset of the points in that cell.

Lemma 4.3

For a trajectory with arclength $L$, at most $O((1 + L/g)/\sqrt{\varepsilon})$ points are put in $Q_\tau$ for Grid Kernel, and it is an $\varepsilon$-approximation for $\mathcal{S}_r$.

The maximum distance between two points in a grid cell is $\sqrt{2}\, g$. Thus the $\varepsilon$-kernel incurs at most $O(\varepsilon g)$ error between the convex hull of all points in that cell and the hull of the approximate ones.

However, a disk may intersect part of a trajectory without intersecting any of the waypoints on the convex hull. But, if the longest possible edge in the convex hull is $\ell$, then a disk of radius $r$ not containing a waypoint can protrude into the hull at most a distance proportional to $\ell^2/r$ (see Figure 3(Right)).

The sum of these two errors is at most $\varepsilon$, as desired, for an appropriate choice of $g$ proportional to $r$.

The total number of points associated with a trajectory of length $L$ is the number of cells it intersects, $O(1 + L/g)$ by Lemma 4.2, times the number of points in each cell, $O(1/\sqrt{\varepsilon})$. In total this is $O((1 + L/g)/\sqrt{\varepsilon})$.

5 Trajectory Sampling

To enable efficient scanning for the full model, we require that the number of regions grows polynomially with the number of trajectories, as otherwise random sampling cannot be used to attain an additive approximation bound. For the trajectory range spaces we consider, it was not previously known if this was true. A bound on the VC-dimension of these range spaces would ensure this. It turns out the VC-dimension bounds are tied in some manner to $k$, the number of points representing each trajectory; the number of possible subsets is then a function of the range complexity (e.g., the VC-dimension $\nu$ of the base range space) and the trajectory complexity ($k$).

We now consider that each trajectory is represented by exactly $k$ labeled points (if it has fewer than $k$, we can duplicate the last point to increase to $k$); call the resulting ground set $T_k$. Next consider an alternative range space where the ground set is the approximate waypoints. The number of ranges induced on a set of labeled points (two ranges are the same if they contain the same set of labels) is upper bounded by the number of unique subsets on the unlabeled points. Therefore the growth function on $n$ trajectories is upper bounded by $O((nk)^{\nu})$, where the base range space has VC-dimension $\nu$.

Lemma 5.1

The growth function for $(T_k, \mathcal{A})$ is at most that of the base range space on $nk$ points, i.e., $O((nk)^{\nu})$, where $\nu$ is the VC-dimension of the base range space. Hence the VC-dimension of $(T_k, \mathcal{A})$ is $O(\nu \log k)$.

In our context, this means that after trajectories in $T$ are represented by at most $k$ points each, the range spaces defined by disks, halfspaces, or rectangles have VC-dimension $O(\log k)$.

So as long as the number of trajectory waypoints is bounded by $k$, the sample sizes for $N$ and $S$ (needed in the two-level framework [20, 18]) are increased by a rather benign factor, near-logarithmic in $k$. We will invoke this in the context of scanning algorithms in Section 6.2.

Bounded $k$ is needed. It seems hopeful that a better bound independent of $k$ may be possible, but we can show that for halfspaces, disks, and rectangles it is possible to construct cases where the complexity of the trajectories, and hence the VC-dimension, is unbounded. To see this we first restrict to the set of ranges induced by halfspaces in $\mathbb{R}^2$. Indeed, we can replace each trajectory with a convex set by Lemma 4.1, so we only need to work with a ground set of convex sets. However, if the complexity of the trajectories, and hence the convex sets, is unbounded, then so is the VC-dimension, even in $\mathbb{R}^2$, as the next lemma shows.

Because disks in $\mathbb{R}^2$ are special cases of halfspaces in $\mathbb{R}^3$ (by the Veronese map), this bound holds for disks as well.

Lemma 5.2

The VC-dimension of halfspaces over a ground set of convex sets in $\mathbb{R}^2$ is unbounded, and hence so is the VC-dimension of the full-model trajectory range space with no restriction on $k$.

For any integer $n$, we can design a set of $n$ convex sets in $\mathbb{R}^2$ so that we can shatter the set. Let each be a nearly identical convex polygon with $2^n$ vertices. Each of the vertices, which are in nearly the same location for each polygon, corresponds with one of the $2^n$ different subsets we seek to define. If the vertex is in a polygon that should be in the subset, shift it slightly counter-clockwise; if it is not in the subset, shift it slightly clockwise. Then a halfspace can include the vertex with all points shifted counter-clockwise (those intended for the subset) and nothing else in any polygon. Since we can do this independently for all subsets for any $n$, we can shatter a set of any size $n$.

Now because a halfspace intersecting a trajectory is equivalent to it intersecting any of its waypoints (see Lemma 4.1), it is equivalent to intersecting the convex hull of those waypoints. Thus, we can generate the same construction with trajectories with $k$ waypoints for any value $k$, and thus if we have no bound on $k$, we have no bound on the VC-dimension.

For rectangles we can construct a similar proof.

Lemma 5.3

The VC-dimension of the full-model trajectory range space for axis-aligned rectangles is unbounded if there is no restriction on $k$.

Again, for any integer $n$, we can design a set of $n$ trajectories in $\mathbb{R}^2$ so that we can shatter the set. Each trajectory has vertices on two horizontal lines: on one line each trajectory's vertices take part in a set of permutations of the $n$ trajectories, and similarly on the other line for their own set of permutations. Between the two lines every permutation can be constructed. The corner of a rectangle can then cut off a subset of each permutation independently of intersecting another subset, and since all possible subsets are contained in the permutations, every subset of trajectories can be induced. Thus if we have no bound on $k$, we have no bound on the VC-dimension.

6 Scalable Algorithms for Finding Trajectory Anomalies

We next describe how to efficiently scan over the trajectory range spaces to find $\varepsilon$-approximate spatial scan anomalies on the various range spaces defined for trajectories and statistical discrepancy functions $\Phi$. In the case of the flux and partial models, we provide new direct reductions to the point-set based scanning algorithms. For the full model these reductions are not possible, and we require the development of several new insights – different ones for each scanning shape. In particular, for scanning under the full model with disks (perhaps the most intuitive definition) we develop new ways (the MultiScale Disk approach) to represent and approximate the range space, which becomes much more efficient than what was previously known even for point-set based scanning.

6.1 Reductions for Partial and Flux Models

We first describe two reductions for the flux model and partial range model to algorithms for scanning over points instead of trajectories.

Flux model reduction. For the flux model, the reduction starts again by sampling trajectory subsets $N, S \subset T$. Now we convert each trajectory in $S$ (or in $N$) to a point set as follows. We first convert every trajectory $\tau$ into only its two endpoints $s_\tau$ and $e_\tau$, and place both of these points in the corresponding set. Each point requires $m$ (recorded) and $b$ (baseline) values, and we focus on the simpler variant which considers the linear model $\Phi(m_A, b_A) = |m_A - b_A|$. We then set $m(s_\tau) = m(\tau)$ and $b(s_\tau) = b(\tau)$, and $m(e_\tau) = -m(\tau)$ and $b(e_\tau) = -b(\tau)$. Note now that if both $s_\tau, e_\tau \in A$, then the total contribution to $m_A$ and $b_A$ is $0$; the points cancel each other out. When only $s_\tau \in A$, then the contribution is $+m(\tau)$ as desired, and if only $e_\tau \in A$, then the contribution is $-m(\tau)$, also as desired.
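A sketch of this signed-endpoint conversion (each trajectory given as (waypoints, m, b); helper names are ours):

```python
def flux_point_reduction(trajs):
    """Signed endpoint reduction for the linear discrepancy: summing the
    m-weights (resp. b-weights) of the points inside a range yields m_A
    (resp. b_A), with trajectories fully inside cancelling to zero."""
    pts = []
    for wps, m, b in trajs:
        pts.append((wps[0], m, b))      # beginning point keeps + weights
        pts.append((wps[-1], -m, -b))   # end point carries - weights
    return pts
```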

Theorem 6.1

A flux model scan statistic for the linear statistical discrepancy function on trajectories can be reduced to a point-based scan statistic on the endpoints.

Partial model reduction. For the partial range model, the key quantities for $\Phi$ are $m_A$ and $b_A$, the fractions of all possible contributions from trajectories from the recorded and baseline sets. Since in the partial range model we restrict to parts of trajectories, independently of which trajectory they are part of, we can convert to a point set input as follows. Given the full sets of trajectories, we denote the continuous set of points in these trajectories as $X$. We then take uniform (or weighted) samples of $X$ to construct $N$ and $S$.

Since the contribution of a point in $X$ towards $\Phi$ is independent of the contribution of other points on the trajectory (unlike the full model), running a point-based scanning model on $N$ and $S$ will return the same value for any range $A$ as the trajectory-based partial range model.

Theorem 6.2

A partial range model scan statistic on trajectories can be reduced to a point-based scan statistic on a uniform sample over the trajectories by arclength.

6.2 Scanning under the Full Model

There are three major challenges in extending scanning algorithms to the full model – even after first converting each trajectory into a point set of size $k$. The resulting approaches are multi-faceted, differ for each scanning shape, and are summarized in Table 3.

First, in order to obtain the runtime bounds of point-based two-level sampling algorithms, the sets $N$ and $S$, which were of size roughly $1/\varepsilon$ and $1/\varepsilon^2$ in the point-based model, now need to contain roughly $k/\varepsilon$ and $k/\varepsilon^2$ points, respectively. Each object placed into the “net” or “sample” set is now a set of $k$ points (on average) from each of the trajectories sampled. The scanning algorithms are linear in $|S|$ and moderately polynomial in $|N|$, so this increases the runtimes beyond the point-based setting by a moderate polynomial factor in $k$. The results for the various spatial approximation techniques are summarized in Table 3. We will demonstrate that for halfspace and rectangular ranges this increase is tolerable if the right $\varepsilon$-spatial approximation is used, but for disks we design a new multi-level approximate scanning approach which works in tandem with the $\varepsilon$-spatial approximation.

Second, extra bookkeeping is required to maintain which point sets are already intersecting a shape during the scanning process – so that a trajectory is not overcounted when multiple of its points intersect the shape. For this we use a global integer counter array $c$, where a non-zero counter $c[\tau]$ serves as an indicator that trajectory $\tau$ is in the shape in question. We maintain $m_A$ and $b_A$, and only update them when a counter toggles between zero and non-zero. This extra bookkeeping may seem a trivial change, but it prevents the use of some approaches, in particular for rectangle scanning.
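A toggle-aware update might look like the following sketch (our own illustration of the bookkeeping, not the paper's implementation):

```python
def update_counter(c, totals, traj, delta, m_val, b_val):
    """Adjust trajectory traj's counter by delta (+1 when one of its points
    enters the shape, -1 when one leaves); m_A/b_A change only when the
    counter toggles between zero and non-zero."""
    old = c[traj]
    c[traj] = old + delta
    if old == 0 and c[traj] > 0:        # trajectory enters the shape
        totals["m_A"] += m_val
        totals["b_A"] += b_val
    elif old > 0 and c[traj] == 0:      # trajectory fully leaves the shape
        totals["m_A"] -= m_val
        totals["b_A"] -= b_val
```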

Third, to extend the approximation guarantee [20, 19] based on two-level sampling to this setting, we need to bound the VC-dimension of the range space. In Section 5 we showed that for halfspaces, disks, and rectangles, if there is no bound on $k$, then the VC-dimension is unbounded. However, we also showed for all of these shapes that when each trajectory is represented by a point set of size at most $k$ (the ground set is $T_k$), then the VC-dimension is only $O(\log k)$ in constant dimensions. That is, when the objects are point sets of size at most $k$, they only increase the VC-dimension by a benign factor, and hence also the coreset sizes for $N$ and $S$. These sample bounds are incorporated into Table 3.

Runtime Overview in the Full Model. SSS runtimes for the full model vary by shape, error parameters, and method of spatial approximation. From the scanning perspective they depend on the net size $|N|$ and sample size $|S|$, shown in the middle column of Table 1, which allow $\varepsilon$-error in $\Phi$. The spatial approximation size $|Q_\tau|$ depends on the method used to achieve an $\varepsilon$-approximation, and is described in Table 2; the sample size bounds are described in Table 3. The best runtime bounds for $\varepsilon$-error on $\Phi$ and $\varepsilon$-spatial error are shown in the right column of Table 1.

Table 1: Overall algorithm runtimes (for rectangles, halfspaces, and disks) in terms of sample sizes ($|N|$ and $|S|$) or error parameters (statistical error $\varepsilon$, spatial error, and arclength), hiding logarithmic factors and for constant probability of success. Derivations of the bounds are in Section 6.
Table 2: The $\varepsilon$-spatial approximation bounds for the trajectory coreset algorithms (Convex Hull, DP Algorithm, Approx Hull, Grid Kernel, Random Sample, Even, and Gridding) with output $Q_\tau$; trajectories are of arclength $L$ with $k$ waypoints, all in a $[0,1]^2$ domain. Derivations of the bounds are in Section 4.
Table 3: Sample size bounds for the sparse net $N$ and dense sample $S$ needed to approximate the true maximum discrepancy within $\varepsilon$ with constant probability, for trajectories represented by $k$ points. Derivations of the bounds are in Section 5 and Section 6.2.

Rectangles. We extend a slower exact algorithm for scanning rectangles [18] to our setting. The faster algorithms from that paper [18] do not extend, because they require a special decomposition for implicit processing of the ranges which cannot accommodate the maintenance of the counters.

This algorithm defines a non-uniform grid by using a scan line in the x and y directions to ensure each row and column contains a bounded number of net trajectories or has width at least $\varepsilon$, whichever is larger. We recommend the Gridding approach for the $\varepsilon$-approximation, since then many of these coordinates are duplicated, reducing the effective grid size. Then all of the points in $S$ are mapped into the appropriate grid cells, and duplicates for the same trajectory can be removed. Then we consider each of the rectangles defined on this grid. We can scan them efficiently by fixing every possible top, bottom, and left side of the rectangle, and then sliding the right side of the rectangle from the left edge until it hits the end of the grid. The entire scan of the right edge updates the counters a bounded number of times per trajectory.

Spatially-approximated Rectangles. An important optimization for rectangles takes advantage of the $\varepsilon$-spatial approximation. Instead of considering all rectangles, we only need to consider rectangles whose sides differ by at least $\varepsilon$, and there are no more than $O(1/\varepsilon)$ distinct coordinates per side. This variant is used in the experiments.

We can also add a restriction where we short circuit the algorithm if the resulting subgrid will have height or width greater than some set value. This restriction can significantly decrease the runtime.

Halfspaces. To create the approximate sets for each trajectory we use the Convex Hull and the Approx Hull methods to reduce to small point sets. The former induces no spatial error, and the latter provides an absolute bound on the size (at $O(1/\sqrt{\varepsilon})$ points).

Let $n_k$ represent the total number of net points required, with trajectories sampled into $N$ and then on average requiring $k$ points each to approximate. Using a combinatorial arrangement view of halfspaces and point sets, these can be scanned in roughly $O(n_k^2)$ time [6], using advanced techniques from computational geometry; this can be converted into a bound in our setting. We review a simpler model here that takes $O(n^2 \log n)$ time on point sets of size $n$. Let $P_N$ be the points approximating the trajectories in $N$, and similarly let $P_S$ be the points from $S$. For each point $p \in P_N$, sort all points radially around $p$. Then consider halfspaces with $p$ on the boundary, and scan through them radially around $p$, updating the counters and $\Phi$ as necessary.
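A brute-force stand-in for this scan, which states the range space exactly (every pair of net points defines a candidate halfplane boundary) at the cost of an extra factor over the radial sweep; weights are assumed pre-normalized so the summed m/b values are fractions (helper names are ours):

```python
def halfplane_scan(net_pts, sample, phi):
    """Evaluate phi over halfplanes bounded by lines through pairs of net
    points.  `sample` holds ((x, y), traj_id, m, b); each trajectory is
    counted once per range via a seen-set.  O(|net|^2 * |sample|)."""
    best = 0.0
    for i, p in enumerate(net_pts):
        for q in net_pts[i + 1:]:
            seen, m_a, b_a = set(), 0.0, 0.0
            for (x, y), tid, m, b in sample:
                # strictly left of the directed line p -> q
                side = (q[0] - p[0]) * (y - p[1]) - (q[1] - p[1]) * (x - p[0])
                if side > 0 and tid not in seen:
                    seen.add(tid)
                    m_a += m
                    b_a += b
            best = max(best, phi(m_a, b_a))
    return best
```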

Higher dimensional halfspaces can be reduced to lower dimensional halfplane problems by doing an affine projection down into a 2-dimensional space, and using the halfplane algorithm to solve these subproblems. An optimization we call the “Hull Trick” does an additional pruning step after each projection to 2 dimensions; it creates the convex hull (in 2 dimensions) of each trajectory's projected points, and only retains the points on the hull before scanning.

Disks. After approximating trajectories by point sets, disk scanning can be implemented as halfspace scanning in $\mathbb{R}^3$ using the Veronese map, with correspondingly higher runtime bounds.

However, these shapes are significantly more sensitive to the $\varepsilon$-spatial approximation technique, especially if there are long edges. For instance, in our experimental data we found disks might need orders of magnitude more points to generate an $\varepsilon$-approximation for a trajectory (see Table 4). As such, we design a new scanning method that works in concert with the Grid Kernel approach.

MultiScale Disks. The previous disk scanning algorithm is not tractable or scalable, but we can combine a large number of tricks to handle these issues.

We consider scanning over disks with radius in a bounded range $[r_{\min}, r_{\max}]$. We decompose this into geometrically growing subranges, so in each the radius varies by at most a constant factor, and handle these subranges independently. In experiments, we use a small number of such subranges, which covers a natural and intuitive set of radii.

Then we scan in concert with the $\varepsilon$-spatial approximation method Grid Kernel using the matching radius parameter. This reduces the size of each trajectory approximation $|Q_\tau|$, especially for large radii; see Table 4. Assuming the data has been mapped to a $[0,1]^2$ range, we can create a grid with cell edge length proportional to the minimum radius of the subrange. We now consider local neighborhoods of grid cells where the center cell has at least one point in the net. Then for each such center cell point $p$ (a “pivot” point), we consider only disks with $p$ on the boundary, and other boundary points among those in this neighborhood. Especially for small radii, the restriction to this local subset of the data greatly reduces the number of points considered when constructing disks with respect to each pivot $p$: from all net points to only those inside this neighborhood. Furthermore, this also significantly reduces the number of sample points in $S$ that must be tested for inclusion in each disk.
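A sketch of the grid-bucketing step that generates these pivot neighborhoods. We assume cells of side length equal to the minimum radius r and a 5-cell reach (two boundary points of a disk of radius at most 2r are within 4r of each other, so with cell size r their cell indices differ by at most 5); both constants are our illustrative choices:

```python
from collections import defaultdict

def pivot_neighborhoods(net_pts, r):
    """Bucket net points into a grid of cell size r; for each non-empty
    "pivot" cell, yield its points plus all points within a 5-cell reach --
    the only candidates for disks with radius in [r, 2r] through the pivot."""
    grid = defaultdict(list)
    for p in net_pts:
        grid[(int(p[0] // r), int(p[1] // r))].append(p)
    for (cx, cy), pivots in grid.items():
        local = [q for dx in range(-5, 6) for dy in range(-5, 6)
                 for q in grid.get((cx + dx, cy + dy), [])]
        yield pivots, local
```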

Together, the effects of lower bounding the radius (to reduce $|Q_\tau|$) and upper bounding the radius (to effectively reduce the number of considered regions and the number of points to scan for each pivot) make every range of this MultiScale Disk approach quite tractable.

This method can be further improved with the Hull Trick, especially for complex trajectories. Consider the disk scanning algorithm for all disks passing through a pivot $p$; this maps to the problem of scanning halfspaces in a lifted parameter space, as discussed in Section 4. This reduction allows us to apply the Hull Trick from the halfspace algorithm, or another halfspace coreset method from Table 2, to significantly reduce the number of points that must be considered.

7 Experiments

In this section we show the scalability of all of the proposed algorithms, and their effect on the statistical power of the scan statistics. We show that for the partial and flux models, through our new direct reductions to recent work, we can compute scan statistics in a scalable yet statistically powerful way. For the full model, the most naive reductions to existing methods are not viable, but our proposed geometric and algorithmic observations for each scanning shape lead to significant speedups.

We demonstrate these improvements with two types of measurements. The first directly shows the runtime of the algorithms as a function of either the statistical error parameter or the spatial error parameter. We also measure the discrepancy error, where we attempt to find a large $\Phi$ value, and we show, at increasing parameter settings, how the value of the largest region found grows as a function of the runtime. This demonstrates that these algorithms are not just fast, but that they become statistically powerful in tractable runtimes.

7.1 Setup and Data

All of our code and the scripts used to generate experiments are publicly available on GitHub and our project website (https://mmath.dev/pyscan). All of our code is written in C++ with an easy-to-use Python wrapper that we hope will allow researchers to apply these algorithms to many other data sets. Current algorithm implementations are serial, but the algorithms are embarrassingly parallel, so converting to a parallel implementation would be trivial.

Experiments were conducted using the Python wrapper on computers with an Intel Core i7-3820 and 64GB of memory, running Ubuntu 14.04 with kernel version 3.13.0-147. Code was compiled with GCC version 8.1.0, Boost version 1.69.0, and Python version 3.4.3. Experiments were run in successive fashion on a per-node basis; no experiments were run together on the same node at the same time, to minimize the effect of other processes on runtime.

We perform experiments on two large trajectory data sets, described next, which have very different conditions. One is diverse, and has trajectories of significantly different sizes and lengths, but the overlap is clustered. The other has more uniform trajectory sizes and lengths, but they are all intermingled.

Open Street Map Trace Data. This data set is our default, and consists of close to 6 million traces from OpenStreetMap with 1.282 billion total waypoints. We restrict the set to traces contained completely inside a rectangular latitude-longitude region in Europe. The large extent of these traces means that there are comparatively sparse regions corresponding to rural areas and also densely packed regions such as cities. Many trajectories are restricted to small regions compared to the full domain size.

Beijing Taxi Data. This is a densely packed and highly overlapping set of roughly 3 million trajectories with 129 million total waypoints collected from taxi drivers in Beijing [16]. This data set has a very high sampling frequency per trajectory, with a large percentage of the points collected less than 1 minute apart. Many roads have been driven over by hundreds to even thousands of separate traces. Since small regions can be densely packed with trajectories, local scanning approaches that restrict the region of interest to be of small size work comparatively worse. There are comparatively few sparse regions. A set of only 25 of these trajectories is shown in the Full Range Model Example above. We have restricted the trajectories to a rectangular latitude-longitude region around Beijing.

Table 4: The average value of $|Q_\tau|$ (size of the approximation) for the $\varepsilon$-spatial approximation algorithms – Grid Kernel at several radii, DP, Approx Hull, Convex Hull, Gridding, and Even – on the Beijing and OSM data sets.

7.2 Spatial Approximation Size

We first empirically evaluate how well the various $\varepsilon$-spatial approximation algorithms work on average. We simplify all trajectories in each data set with $\varepsilon$ set to about one city block (when normalized to the $[0,1]^2$ domain). We show the average value of $|Q_\tau|$ for the various algorithms in Table 4. We observe that the OSM data is much easier to approximate than the Beijing data – and in fact we use a larger $\varepsilon$ as the default later, which would lead to even smaller values on average. Next we observe that the methods designed for halfspaces (Convex Hull, Approx Hull, and DP) generate very small average values of $|Q_\tau|$ (somewhat larger for DP on Beijing). For large values of $r$, Grid Kernel can start to approach similarly small sizes on OSM, but otherwise the methods for rectangles and disks (Grid Kernel, Gridding, and Even) have considerably larger average $|Q_\tau|$ on OSM, and larger still on Beijing. This large difference in $|Q_\tau|$ can cause dramatic slow-downs in the experiments.

Figure 5: Runtime for flux model algorithms (left) and partial model algorithms (right) in terms of input net size $n$.

7.3 Scalability Experiments

We next demonstrate how various parameters affect the runtime, and how the algorithms compare in scalability. The runtime as a function of the net size $n$ for flux model and partial model algorithms is shown in Figure 5, with the other sample sizes set as suggested [20]. The partial-model data is sampled using the Even mechanism. Some runtime curves become linear once $S$ becomes the entire data set (12 million points) and only $n$ increases.

Figure 6: Runtime for full model scanning as a function of inverse spatial error, or inverse statistical error, on OSM data; rows of panels show Baseline, Halfplane, Disk, and Best in Class times (in seconds). The inverse spatial error and inverse statistical error act as a size parameter for our algorithm, since our guarantees and runtime do not depend on the initial data size – only on the statistical or spatial resolution.
Figure 7: Runtime for full model scanning as a function of inverse spatial error, or inverse statistical error, on Beijing data; rows of panels show Disk and Best in Class times (in seconds).

We observe that the rectangle-based algorithms are quite scalable, and are able to handle large net sizes while still completing in about a minute. The generic two-level sampling algorithm for halfspace scanning is almost as scalable, and performs better than rectangles when the “Ham” coreset [19] is used. However, the scanning algorithms for disks require several minutes even for modest net sizes, becoming intractable for anything larger. In summary, using our new reductions to the point-based algorithms under the partial and flux models, rectangle and halfplane scanning for trajectory anomalies is already scalable.

Full Scanning on OSM. The algorithms for the full intersection model are significantly more nuanced. These runtimes are shown as a function of the inverse spatial error and the inverse statistical error in Figure 6. While one parameter is varied, the other is held fixed (at different defaults for the Baseline, Halfplane, Disk, and Best in Class rows). Rectangles are restricted to a maximum side length to make them roughly the same size as the restricted disk variants in all OSM plots. Note that all runtimes are on a logarithmic scale.

The Baseline row shows the scalability of the basic algorithms using the Even $\varepsilon$-spatial approximation. Again, for halfspace and rectangle scanning the runtimes are tolerable, but disk scanning already becomes intractable for moderate parameter values. Note that the runtimes are quite noisy due to the high variance in trajectory lengths in the OSM data – even if $|Q_\tau|$ is small on average, for some sampled trajectories it might be quite large.

The Halfplane row shows the difference in runtimes for the All Waypoints, Convex Hull, and Approx Hull methods for spatial approximation. All Waypoints and Convex Hull have no spatial error (by Lemma 4.1), so horizontal lines with error bars are shown. Convex Hull provides a dramatic improvement over All Waypoints, of 2 to 3 orders of magnitude (i.e., from several minutes to less than a second). The Approx Hull approach shows a smaller but tangible improvement over using all hull points, but adds some spatial error.

The Disk row shows another dramatic improvement in scalability as we restrict the radii considered and apply other improvements. With no bound on the radius, the (Disk + Even) algorithms are intractable. Still invoking Even, but using a grid to prune the scanning to a bounded radius range (Small Disk + Even), allows moderate error parameter values to complete in minutes. But combining the MultiScale Disk approach with the adaptive Grid Kernel spatial approximation allows the same moderate error parameter runs to complete in tens of seconds, and in a few minutes this approach can scale to much finer parameters. Adding the Hull Trick does not induce significant gains here.

Finally, in the Best in Class row, we show the best algorithm's runtime for each scanning shape – the improvement over the baseline for halfspaces and disks is dramatic. With these algorithms it is now possible to run with moderate error parameters and complete in seconds for any shape. Halfspaces and radius-restricted disks can use much finer statistical error and complete in about a minute or two; they are now very scalable in the statistical error. Rectangles can do the same at somewhat coarser settings; on the other hand, the algorithm for rectangles is more tolerant of very small values of the spatial error.

Full Scanning on Beijing. The runtimes for the algorithms under the full model on the Beijing data set are shown in Figure 7. We restrict the radius range for radius-restricted disks, and for rectangles the side length is restricted. The first (Disk) row shows the improvement when scanning with radius-restricted disks. Fixing the statistical error, only the MultiScale Disk scanning with Approx Hull and the Hull Trick can complete in under an hour, and indeed it can scale to finer spatial error in that time. Here the consistently long trajectories greatly benefit from the extra Hull Trick pruning. Fixing the spatial error, only this disk scanning variant can reach a moderate statistical error in under an hour.

The second (Best in Class) row shows that now the rectangle scanning times are strictly worse than the radius restricted disk times. The halfspace algorithms have similar runtimes as with the OSM data, and are now much more scalable than the MultiDisk approach. Thus the tangled nature of the Beijing data affects the rectangle scanning the most and the halfspace the least.

Figure 8: Statistical power vs time for partial and flux.

7.4 Statistical Power Tests

In this section we measure the statistical power of these scanning algorithms. We fix the spatial layout of trajectories from the OSM or Beijing data sets, and introduce a spatially anomalous region under the various models by adjusting the $m$ and $b$ values of trajectories. Such regions (either a halfplane, disk, or rectangle, to match the scanning region) are “planted” by choosing a shape and setting desired recorded rates inside and outside the region, as well as the anomaly size (the fraction of data inside). The optimal shape found may be different (usually slightly shifted), and so we evaluate these tests by tracking the $\Phi$ value (the “Measured Discrepancy”) found for various parameter settings. When this maximum value plateaus, it indicates those parameter settings and runtimes have high power and are sufficient to recover the anomalous shapes.

This process is much noisier than just measuring runtime as a function of parameter settings. So in plots we show many data points and fit a trend line using a local average. In the same plots, different scanning shapes naturally converge to distinct Measured Discrepancy values.

For all of these experiments we use a lower recorded rate for the data outside of the planted region and a higher recorded rate for the data inside of the region.

Flux Scanning Power. The time required to achieve statistical power for the flux scanning model is shown in Figure 8. It shows the Measured Discrepancy values for disk, rectangle, and halfspace algorithms under the flux model on the Beijing data. We fix the planted shape to contain a small fraction of the data. As observed earlier, disk regions are very slow to scan under this model, and do not achieve high power until hours of runtime. However, the rectangle and halfspace scanning algorithms converge to a high-power setting in about a minute or less – hence for the flux model, we recommend these shapes.

Partial Scanning Power. Figure 8 also shows the time required to achieve high statistical power under the partial model on OSM data. We plant small anomalous ranges. Each scanning shape (rectangle, disk, halfplane) is in a separate chart. They all eventually achieve high statistical power, but the halfplane scanning algorithms only take seconds, while the disk and rectangle algorithms require minutes. We can also observe that the Even spatial approximation consistently converges faster than Random Sampling.

Full Scanning Power. Finally, Figure 9 shows how long it takes to achieve high statistical power under the full model; we plant regions in the OSM data. We show the results of scaling runtime by varying the spatial error $\alpha$ (fixing $\varepsilon$) and by varying the statistical error $\varepsilon$ (fixing $\alpha$). For halfplanes, we use Convex Hull, which has no spatial error, so its runtime varies only as a function of $\varepsilon$; it achieves high power after about an hour. For rectangles, high power is achieved at the larger parameter values within minutes under both scalings. For radius-restricted disks, we show the MultiDisk scanning with Grid Kernel (and the Hull Trick); it appears less tied to the choice of $\alpha$, as this setting is tied to the resolution of the grid approximation. For a fixed setting it sometimes, but not consistently, finds a high-scoring region, indicating that the setting might be too small; but as the setting grows, after about 10 minutes of runtime it achieves high power.

Figure 9: Statistical power on OSM data under the full model. (Panels: Inverse Spatial Error and Inverse Statistical Error; y-axis: Measured Discrepancy; x-axes: Time (sec).)

8 Previous and Related Work

Our algorithms build upon the recent two-level sampling framework for spatial scan statistics by Matheny et al. [20, 18]. That work focused on making scalable $\varepsilon$-approximate spatial scan statistics over point sets. This line of work provides improvements over non-approximate variants [14, 15], in terms of the number of samples and the runtime of the scanning algorithms, for the same shapes we study: halfspaces, disks, and rectangles. The introduction of two-level sampling makes these approaches tractable, and better coresets and faster scanning make them extremely efficient. However, these fast SSS methods only apply to point sets. A minimal sketch of the two-level idea follows.
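As a rough sketch of that framework (not the authors' code): a small first-level "net" sample defines the candidate regions, while a larger second-level sample estimates each region's measured and baseline fractions. Here it is instantiated for halfplanes, reusing kulldorff() from the planting sketch above; all names and sample sizes are illustrative.

    import numpy as np
    from itertools import combinations

    def two_level_scan_halfplanes(pts, labels, n_net=64, n_samp=2048, rng=None):
        # Level 1: a small net whose point pairs define candidate halfplanes.
        # Level 2: a larger sample on which each candidate is scored.
        rng = rng or np.random.default_rng()
        net = pts[rng.choice(len(pts), n_net, replace=False)]
        idx = rng.choice(len(pts), n_samp, replace=False)
        samp, slab = pts[idx], labels[idx]
        best = 0.0
        for a, b in combinations(net, 2):
            normal = np.array([b[1] - a[1], a[0] - b[0]])  # normal to line ab
            side = (samp - a) @ normal >= 0                # sample points on one side
            m_r = slab[side].sum() / max(slab.sum(), 1)    # measured fraction inside
            b_r = side.mean()                              # baseline fraction inside
            best = max(best, kulldorff(m_r, b_r))          # as defined earlier
        return best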

There exist other mechanisms for finding anomalous behavior among trajectories; these involve clustering by density and then identifying outliers, or training learning models on pre-labeled data [21, 5, 13, 25, 28]. These approaches do not specifically identify spatial regions from characteristics of the trajectories, nor do they compare against a background population.

A few papers have attempted to port scan statistics to trajectories, with the goal of finding spatially anomalous regions. Pang et al. [23] discretized cities into grids and recorded traffic conditions in each cell; they then computed a likelihood ratio test over all sub-grids to detect the most anomalous region (see the sketch below). Liu et al. [17] partition a city into regions defined by the road network, whose adjacency defines a graph; spatially anomalous links in this graph are then scanned over various times to find a time-region of high traffic. But neither of these approaches directly operates on the trajectories, and both fix regions ahead of time, which restricts the set and nature of possible anomalies.
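For contrast with our shape-based scanning, a grid-based scan in this style can be sketched as follows: counts are aggregated per cell, 2D prefix sums are built, and every axis-aligned sub-grid is scored in constant time. We reuse kulldorff() as a stand-in scorer; the specific likelihood ratio statistic in [23] differs, and the square g x g grid is an illustrative assumption.

    import numpy as np

    def best_subgrid(measured, baseline):
        # Prefix sums padded with a zero row/column so any rectangle
        # [i1,i2) x [j1,j2) reduces to a four-term expression.
        M = np.pad(measured, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        B = np.pad(baseline, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        box = lambda P, i1, j1, i2, j2: P[i2, j2] - P[i1, j2] - P[i2, j1] + P[i1, j1]
        g, best, arg = measured.shape[0], 0.0, None
        for i1 in range(g):
            for i2 in range(i1 + 1, g + 1):
                for j1 in range(g):
                    for j2 in range(j1 + 1, g + 1):
                        m_r = box(M, i1, j1, i2, j2) / M[-1, -1]
                        b_r = box(B, i1, j1, i2, j2) / B[-1, -1]
                        score = kulldorff(m_r, b_r)  # as defined earlier
                        if score > best:
                            best, arg = score, (i1, j1, i2, j2)
        return best, arg  # O(g^4) sub-grids, each scored in O(1)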

An alternative approach (for point sets) removes the notion of shapes, and focuses on clustering the measured objects to find potential anomalies [7, 8, 24, 26, 10]. However, this approach lacks guarantees about statistical accuracy even for point sets, and the task of clustering trajectories appropriately (while guarding against over-fitting) has its own set of challenges, which are beyond the scope of this paper.

9 Conclusion

We introduce three new models for quantifying spatial anomalies among large sets of trajectories, using spatial scan statistics defined by geometric shapes. These identify regions which exhibit high flux, contain a large percentage of the total arclength of a measured quantity, or through which a large percentage of a set of trajectories of interest pass. These models have numerous applications in traffic analysis, disease outbreak monitoring, epidemiology, and demography.

Through combinatorial and geometric reductions, and new scanning algorithms, we are able to identify these anomalous regions efficiently on data sets containing millions of trajectories with billions of waypoints. This efficiency requires various insights tuned to the families of shapes we consider: halfplanes, disks, and rectangles. These include careful ways to approximate trajectories by point sets, and fast enumeration methods. The approximations are backed by a theoretical analysis, which shows guaranteed bounded error in the spatial trajectory approximation ($\alpha$) as well as in the measurement of the statistical quantities ($\varepsilon$). The runtime depends only on these parameters.

Most importantly, we also measure the statistical power of the scanning algorithms. That is, if we plant an anomalous region under each of the models, our scanning algorithms can, with high probability, recover that region (or a similarly anomalous one) in tractable amounts of time. Even on millions of trajectories, high statistical power is often achieved within minutes, or for more challenging variants, within hours.

References

  • [1] Deepak Agarwal, Jeff M. Phillips, and Suresh Venkatasubramanian. The hunting of the bump: On maximizing statistical discrepancy. SODA, 2006.
  • [2] Pankaj K. Agarwal and Jeff Erickson. Geometric range searching and its relatives. In Advances in Discrete and Computational Geometry, 1999.
  • [3] Pankaj K. Agarwal, Sariel Har-Peled, and Kasturi R. Varadarajan. Approximating extent measures of points. J. ACM, 51(4):606–635, 2004.
  • [4] Ery Arias-Castro, Rui M. Castro, Ervin Tánczos, and Meng Wang. Distribution-free detection of structured anomalies: Permutation and rank-based scans. JASA, 113:789–801, 2018.
  • [5] Hannah M. Dee and Sergio A. Velastin. How close are we to solving the problem of automated visual surveillance? Machine Vision and Applications, 19(5):329–343, Oct 2008.
  • [6] David P. Dobkin, David Eppstein, and Don P. Mitchell. Computing the discrepancy with applications to supersampling patterns. ACM Trans. Graph., 15(4):354–376, October 1996.
  • [7] Luiz Duczmal and Renato Assunção. A simulated annealing strategy for the detection of arbitrarily shaped spatial clusters. Computational Statistics & Data Analysis, 45(2):269–286, 2004.
  • [8] Luiz Duczmal, André L.F. Cançado, Ricardo H.C. Takahashi, and Lupércio F. Bessegato. A genetic algorithm for irregularly shaped spatial scan statistics. Computational Statistics & Data Analysis, 52(1):43–52, 2007.
  • [9] Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, KDD’96, pages 226–231. AAAI Press, 1996.
  • [10] Dylan Fitzpatrick, Yun Ni, and Daniel Neill. Support vector subset scan for spatial outbreak detection. Online Journal of Public Health Informatics, 9(1), 2017.
  • [11] David Haussler and Emo Welzl. ε-nets and simplex range queries. Discrete and Computational Geometry, 2:127–151, 1987.
  • [12] John Hershberger and Jack Snoeyink. Speeding up the Douglas-Peucker line-simplification algorithm. Technical report, UBC, 1992.
  • [13] Weiming Hu, Xuejuan Xiao, Zhouyu Fu, D. Xie, Tieniu Tan, and S. Maybank. A system for learning statistical motion patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(9):1450–1464, Sept 2006.
  • [14] Martin Kulldorff. A spatial scan statistic. Communications in Statistics: Theory and Methods, 26:1481–1496, 1997.
  • [15] Martin Kulldorff. SatScan User Guide. http://www.satscan.org/, 9.6 edition, 2018.
  • [16] Jing Lian and Lin Zhang. One-month Beijing taxi GPS trajectory dataset with taxi IDs and vehicle status. In Proceedings of the First Workshop on Data Acquisition To Analysis, DATA ’18, pages 3–4, New York, NY, USA, 2018. ACM.
  • [17] Wei Liu, Yu Zheng, Sanjay Chawla, Jing Yuan, and Xing Xie. Discovering spatio-temporal causal interactions in traffic data streams. In KDD ’11, New York, NY, USA, 2011. ACM.
  • [18] Michael Matheny and Jeff M. Phillips. Computing approximate statistical discrepancy. ISAAC, 2018.
  • [19] Michael Matheny and Jeff M. Phillips. Practical low-dimensional halfspace range space sampling. ESA, 2018.
  • [20] Michael Matheny, Raghvendra Singh, Liang Zhang, Kaiqiang Wang, and Jeff M. Phillips. Scalable spatial scan statistics through sampling. In SIGSPATIAL, 2016.
  • [21] B. T. Morris and M. M. Trivedi. A survey of vision-based trajectory learning and analysis for surveillance. IEEE Trans. Circuits and Sys for Video Tech., 18:1114–1127, Aug 2008.
  • [22] Daniel B. Neill and Andrew W. Moore. Rapid detection of significant spatial clusters. In KDD, 2004.
  • [23] Linsey Xiaolin Pang, Sanjay Chawla, Wei Liu, and Yu Zheng. On mining anomalous patterns in road traffic streams. In International Conference on Advanced Data Mining and Applications, 2011.
  • [24] G. P. Patil and C. Taillie. Upper level set scan statistic for detecting arbitrarily shaped hotspots. Environmental and Ecological Statistics, 11(2):183–197, Jun 2004.
  • [25] C. Piciarelli, C. Micheloni, and G. L. Foresti. Trajectory-based anomalous event detection. IEEE Trans. Circuits and Sys for Video Tech., 18:1544–1554, Nov 2008.
  • [26] Toshiro Tango and Kunihiko Takahashi. A flexibly shaped spatial scan statistic for detecting clusters. Inter. J. of Health Geographics, 4:11, 2005.
  • [27] Vladimir Vapnik and Alexey Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16:264–280, 1971.
  • [28] Dragomir Yankov, Eamonn Keogh, and Umaa Rebbapragada. Disk aware discord discovery: Finding unusual time series in terabyte sized datasets. Knowl. Inf. Syst., 17(2):241–262, November 2008.