GLoG: Laplacian of Gaussian for Spatial Pattern Detection in Spatio-Temporal Data

09/09/2019 ∙ by Luis Gustavo Nonato, et al. ∙ Universidade de São Paulo ∙ UFES ∙ NYU

Boundary detection has long been a fundamental tool for image processing and computer vision, supporting the analysis of static and time-varying data. In this work, we build upon the theory of Graph Signal Processing to propose a novel boundary detection filter in the context of graphs, having as its main application scenario the visual analysis of spatio-temporal data. More specifically, we propose the equivalent for graphs of the so-called Laplacian of Gaussian edge detection filter, which is widely used in image processing. The proposed filter is able to reveal interesting spatial patterns while still enabling the definition of the entropy of time slices. The entropy reveals the degree of randomness of a time slice, helping users to identify expected and unexpected phenomena over time. We show the effectiveness of our approach in applications involving synthetic and real data sets, which demonstrate that the proposed methodology is able to uncover interesting spatial and temporal phenomena. The provided examples and case studies make clear the usefulness of our approach as a mechanism to support visual analytic tasks involving spatio-temporal data.




1 Introduction

Feature extraction and transformation comprise fundamental steps in the visualization and visual analytic pipeline. In the particular case of spatio-temporal data, such preprocessing steps are of critical importance, as many visual analytic tools rely on features to enable meaningful visualizations of patterns and trends hidden in the data [1, 2, 3]. Over the years, a number of techniques devoted to extracting and processing features from spatio-temporal data have been proposed, ranging from simple aggregation schemes [4] to sophisticated topological [5] and tensor decomposition methods [6]. Those techniques are designed to capture specific properties or phenomena present in the data. For instance, topological methods look for locations where the data assume extreme values, while techniques based on tensor decomposition aim to decompose the data so as to identify regions and time slices with common properties.

Identifying locations where data changes abruptly has long been the goal of many feature extraction methods. Regions of sharp variation typically correspond to locations where the data transitions from one "state" to another, thus being of great relevance for analytical purposes. Feature extraction methods able to detect local changes in a signal have been extensively used in fields such as image processing and computer vision [7], where they are called edge detection methods. In fact, edge detection has been the building block of methods designed for recognizing and segmenting objects, enhancing details, and performing feature-preserving denoising in images and videos. Despite their importance, little has been done towards developing techniques similar to edge detection to assist visual analytic tasks, mainly in applications involving data defined on unstructured domains. A reason for this gap is that most edge detection techniques developed in the context of image processing rely on mathematical tools such as derivatives and convolution, which can hardly be defined on domains that are not regular grids.

This work brings an alternative to the issue pointed out above, proposing an operator able to identify abrupt changes in a signal defined on the nodes of a graph. More specifically, we propose a novel filter, called GLoG, which is the counterpart for graphs of the so-called Laplacian of Gaussian edge detection method (also known as the Marr-Hildreth operator) widely used in image processing [8]. Our approach relies on the theory of Graph Signal Processing (GSP) [9], which provides a solid mathematical framework for adapting and extending well-known tools from the classical signal processing field to the more general context of graphs [10, 11]. The proposed GLoG filter can identify spatial locations of abrupt changes in a signal, uncovering regions (boundaries) where the signal changes its properties. Moreover, we build upon the GLoG filter to highlight time intervals where "expected" and "unexpected" boundaries take place. In other words, we rely on the GLoG filter to define the concept of entropy of time slices, from which we derive a temporal entropy diagram. The latter allows the visual identification of time instants where observed boundaries are likely to happen (lower-entropy time instants) as well as moments where observed boundaries are less expected (higher-entropy time instants). The resulting analysis eases the visual identification of expected and unexpected patterns over time. Furthermore, the Gestalt law of proximity [12] states that groups of points spatially close to each other are pre-attentively perceived as a common set of abstract features. The boundaries extracted from a signal fit naturally in this concept, turning them into a valuable visual resource to reveal spatial patterns and trends.

We show the effectiveness of our approach in synthetic and real data sets containing information about taxi trips in Manhattan, NYC, and geo-referenced criminal activities in the city of São Paulo, Brazil. The provided examples and case studies make clear the usefulness of the proposed boundary detection and entropy diagram as basic tools to support the visual analysis of spatio-temporal data.

In summary, the main contributions of this work are:

  • GLoG (Graph Laplacian of Gaussian), a boundary detection filter that is the graph counterpart of the so-called Laplacian of Gaussian filter typically used in image processing. As far as we know, this is the first time a boundary detection filter is proposed in the context of Graph Signal Processing;

  • the concept of entropy of time slices and associated entropy diagram, which enables the visual identification of expected and unexpected spatial phenomena over time;

  • two case studies that attest to the usefulness and potential of the proposed methodology to support visualization-assisted analysis of spatio-temporal data. These case studies show the potential of the GLoG operator as a feature extraction method in the context of spatio-temporal data.

We emphasize that the proposed GLoG filter should not be seen as a competitor to other feature extraction techniques. In fact, GLoG aims to detect a specific property (abrupt change) of a signal, which is not captured by existing methods. Therefore, GLoG can be combined with other techniques to produce features that capture a wide variety of traits present in spatio-temporal data.

2 Related Work

The literature about visualization assisted spatio-temporal data analysis is extensive, ranging from georeferenced time-varying data analysis [1, 13] to dynamic networks [14, 15]. The different methodologies have been reviewed and organized in a number of surveys [16, 17, 18, 19] and books [4, 20]. To better contextualize our contribution, we focus on techniques that rely on feature extraction and data transformation mechanisms to leverage the visualization of spatial and temporal phenomena present in the data. We group spatio-temporal feature extraction and/or data transformation techniques in three categories: temporal, spatial, and spatio-temporal.

Temporal methods aim to uncover patterns by transforming the temporal component of the data associated with each spatial location. In visual analysis, transformations such as temporal aggregation [21, 22] and moving averages [23] figure among the most common temporal approaches. There are also alternatives that rely on prediction mechanisms to detect unexpected temporal events, which are highlighted during visualization [24]. Classical signal processing methods have also been employed to extract features from time series associated with specific spatial locations so as to make the visual identification of similar temporal patterns an easier task [25]. When regions made up of several locations have to be handled, spatial data aggregation is first employed to combine multiple time series into a single one, which is then processed to identify patterns and trends [26]. Aggregation tends to attenuate high-frequency events, hampering the identification of patterns associated with abrupt variations of the signal in particular locations. In fact, aggregation can be seen as a low-pass filter applied prior to a feature extraction mechanism. The approach proposed in this work naturally combines a low-pass filter and feature extraction in a single operator.

Spatial transformation schemes aim to uncover spatial patterns present in the data. More specifically, given a time slice (which can correspond to aggregated data from a time interval), spatial methods typically split the spatial domain into subregions, extracting and/or emphasizing features in each subregion. Subregions can be defined based on density information [27] or by grouping spatial locations with similar content [1, 28, 29]. A large number of methods have been proposed to extract features from spatial locations to support visualization tasks. Some examples are the interchangeable matrices [30], which uncover co-occurring spatial events, and topological approaches [5, 3], which identify spatial locations that bear "peculiar" behavior. Topic modeling has been exploited as a mechanism to characterize spatial locations [31]. Graph signal processing tools such as the windowed graph Fourier transform have been employed to extract features from spatial locations [32]. Most of the spatial methods described above are designed to extract and identify patterns in a fixed time slice, resorting to visualization as the main analytical resource to understand patterns and their dynamics over time. The boundary detection approach proposed in the current work can be seen as a spatial method, although the derived entropy diagram is a temporal visual tool.

Fig. 1: Method pipeline: our methodology applies the GLoG filter to each time slice of the graph signal to compute edge node configurations and define the entropy diagram.

Spatio-Temporal transformation techniques operate on temporal and spatial dimensions so as to extract meaningful information that is emphasized during visualization. A typical alternative is to compute separate temporal and spatial features, resorting to visual analytic tools to unify the spatial and temporal analysis. This is the methodology implemented in the MobilityGraphs technique [33], where spatial or temporal signatures are used to cluster trajectories in order to reduce data complexity and visual clutter. Another common use of independent spatial and temporal transformations is the use of spatial signatures to identify regions of interest and temporal features to analyze the behavior of such regions over time [34]. Some techniques dynamically extract temporal and spatial features according to user specified queries, enabling complex analysis of time-stamped trajectories [35, 36], frequent trips [37], and spatial/temporal traffic patterns [38, 39].

More elaborate techniques operate on spatial and temporal information simultaneously, allowing for the identification of spatial, temporal, and spatio-temporal patterns. Feature vectors made up of temporal and spatial attributes are an alternative that has been applied in several scenarios [40]. Optimization procedures designed to detect periodic patterns in spatial events have also been employed to visually identify trends in epidemiological data [41]. Spatio-temporal wavelet-based signatures [11] have turned out to be effective to simultaneously characterize spatial locations and their temporal behavior. More recently, transformations based on tensor decomposition are being used as a resource to assist the visualization of spatio-temporal data [6, 2].

The proposed feature extraction and transformation techniques bear a number of properties not simultaneously present in any existing technique. Similarly to topological methods for spatio-temporal analysis [5, 3], the proposed approach is scale invariant, thus producing identical spatial "signatures" for signals that only differ by a scaling factor. However, the capability of generating spatial signatures on regions of large variation of the signal distinguishes our approach from topological schemes. Topological approaches are able to point out the locations where a signal reaches its extremal values, but they demand very sophisticated machinery (such as the computation of Morse-Smale complexes) to account for where the underlying phenomenon is transitioning, a trait naturally captured by our approach. Moreover, the proposed methodology allows for transforming spatial patterns into a single scalar value (entropy) that indicates the degree of randomness of each time slice. Entropy values are much easier to interpret than, for example, patterns derived from tensor decomposition, which typically demand sophisticated visualization resources to become meaningful [6, 2].

3 Edge Node Configuration and Entropy Diagram to Support Visual Analytic Tasks

The proposed spatio-temporal analytic tool comprises two main steps: i) the detection of boundary nodes in each time slice and ii) the computation of the entropy of the time slices, as illustrated in Figure 1. The former relies on the GLoG operator described in Section 3.2, while the computation of entropy derives from the concept of edge node probability discussed in Section 3.3. Before presenting both concepts, though, we describe the mathematical foundations of the Graph Fourier Transform (GFT) and graph filtering, which are the basis of our approach.

3.1 Graph Signal Processing: Basic Concepts

This section presents basic concepts of Graph Signal Processing from which our approach derives. A more detailed discussion about basic concepts of Graph Signal Processing can be found in the work by Shuman et al. [9].

Graph Fourier Transform.   We denote by G = (V, E, w) a graph made up of a node set V, an edge set E, and a weight function w that associates a non-negative scalar to each edge in E. In our context, G is assumed to be connected, that is, for every pair of nodes there is a sequence of adjacent edges connecting those nodes.

The weighted adjacency matrix of G, denoted by W, is the n × n matrix satisfying W_ij = w(e_ij) if e_ij ∈ E and W_ij = 0 otherwise. This matrix is used to define the (non-normalized) graph Laplacian, which is given by L = D − W, where D is a diagonal matrix with entries D_ii = Σ_j W_ij and n is the number of nodes in G. The graph Laplacian is a real, symmetric, and positive semi-definite matrix, which ensures a complete set of orthonormal real eigenvectors u_0, …, u_{n−1}, with corresponding non-negative real eigenvalues λ_0, …, λ_{n−1}. Moreover, zero is always an eigenvalue of L, whose corresponding eigenvector is a constant vector.

The eigenvalues and eigenvectors of the graph Laplacian play a similar role to frequencies and basis functions in classical Fourier theory. More specifically, eigenvalues closer to zero correspond to low frequencies while large eigenvalues correspond to high frequencies. Moreover, eigenvectors associated with small eigenvalues tend to have a less oscillatory behavior than eigenvectors associated with large eigenvalues. A more detailed discussion about the relation between the spectrum of Laplacian matrices and Fourier theory can be found in the works by Shuman et al. [42] and Dal Col et al. [10].

A signal f defined on the nodes of G is a function f: V → ℝ that associates a scalar f(i) to each node i ∈ V. Denoting the eigenvalues and corresponding eigenvectors of the Laplacian matrix of G by λ_ℓ and u_ℓ, respectively, and assuming the eigenvalues are sorted in non-decreasing order, 0 = λ_0 < λ_1 ≤ … ≤ λ_{n−1} (the first strict inequality is due to the assumption that G is connected), the Graph Fourier Transform (GFT) of a signal f, denoted by f̂: Λ → ℝ, where Λ = {λ_0, …, λ_{n−1}} is the spectral domain (set of eigenvalues), is defined as:

f̂(λ_ℓ) = ⟨f, u_ℓ⟩ = Σ_{i=1}^{n} f(i) u_ℓ(i).   (1)

Given the GFT f̂, the original signal f can be recovered via the inverse Graph Fourier Transform (iGFT), which is defined as:

f(i) = Σ_{ℓ=0}^{n−1} f̂(λ_ℓ) u_ℓ(i).   (2)

If we denote by U the (orthogonal) matrix with columns given by the eigenvectors u_0, …, u_{n−1}, the GFT and iGFT can be obtained via matrix multiplication as follows:

f̂ = Uᵀ f,   f = U f̂.   (3)
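The matrix form of the GFT/iGFT pair can be sketched with a dense eigendecomposition (a minimal NumPy illustration; the toy path graph and signal below are assumptions for illustration, not from the paper):

```python
import numpy as np

# Toy graph: a 4-node path with unit edge weights (illustrative assumption)
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
D = np.diag(W.sum(axis=1))
L = D - W                           # non-normalized graph Laplacian, L = D - W

lam, U = np.linalg.eigh(L)          # eigenvalues in non-decreasing order,
                                    # columns of U are orthonormal eigenvectors

f = np.array([0.0, 0.2, 0.9, 1.0])  # a signal on the nodes
f_hat = U.T @ f                     # GFT:  f_hat = U^T f
f_rec = U @ f_hat                   # iGFT: f = U f_hat (perfect reconstruction)
```

Since U is orthogonal, the iGFT recovers the signal exactly, and the smallest eigenvalue is zero with a constant eigenvector, as stated above.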
Spectral Filtering.   A graph spectral filter is a function ĝ: Λ → ℝ defined in the spectral domain that associates a scalar value ĝ(λ_ℓ) to each eigenvalue λ_ℓ. The GFT can be seen as a particular instance of a graph spectral filter.

A graph spectral filtering of a signal f is defined as:

f̃ = iGFT(ĝ ⊙ f̂),   (4)

where f̂ is the GFT of f, ĝ is a graph spectral filter, and ⊙ is the element-wise multiplication. Using the matrix notation defined in Eq. (3) and some algebraic manipulation, one can obtain the filtered version of f in the graph domain by computing:

f̃ = U Ĝ Uᵀ f,   (5)

where Ĝ is a diagonal matrix with entries Ĝ_ℓℓ = ĝ(λ_ℓ).

The design of a proper filter is application dependent. A particularly useful filter in our context is the Gaussian filter given by:

ĝ_σ(λ) = e^{−σ²λ²/2},   (6)

where λ is the independent variable and σ is a parameter. The Gaussian filter is a low-pass filter that preserves low frequencies while attenuating the higher ones. Figure 2 illustrates the effect of applying ĝ_σ to a noisy step function with two different values of σ.
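The Gaussian smoothing of a noisy step, as in Figure 2, can be sketched as follows (assuming the Gaussian spectral form above; the path graph, noise level, and σ are illustrative assumptions):

```python
import numpy as np

def gaussian_smooth(L, f, sigma):
    # f_tilde = U diag(g(lam)) U^T f, with g(lam) = exp(-sigma^2 lam^2 / 2)
    lam, U = np.linalg.eigh(L)
    g = np.exp(-(sigma ** 2) * lam ** 2 / 2.0)
    return U @ (g * (U.T @ f))

# Path graph carrying a noisy step signal
n = 40
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(0)
f = (np.arange(n) >= n // 2).astype(float) + 0.1 * rng.standard_normal(n)
f_smooth = gaussian_smooth(L, f, sigma=2.0)
```

Because ĝ_σ(0) = 1, the constant (DC) component of the signal is preserved exactly, while high-frequency noise is attenuated; increasing σ yields a smoother result, matching the behavior illustrated in Figure 2.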

Fig. 2: Noisy step function (left), taking one value on the yellowish nodes and another on the purplish ones, plus a random noise. Gaussian smoothing with two different values of σ (top right and bottom right). The two plots on the left show the shape of the related Gaussian filters.

3.2 GLoG for Boundary Detection

The boundaries (or edges) of a signal correspond to the locations of abrupt change in the signal. Many different approaches have been proposed to identify edges in the context of image processing and computer vision [8]. Among the existing edge detection methods, the Laplacian of Gaussian (LoG) is one of the most important, mainly due to its solid mathematical foundation and optimality criteria [43].

GLoG filter.   The classical LoG is a filter built on two main principles: 1) boundaries take place at locations where the first derivative (or gradient) of a signal is maximum or, equivalently, at locations where the Laplacian of the signal is zero, which are called the zero-crossings of the Laplacian; 2) zero-crossings may be caused by noise, so a smoothing filter must be applied to the signal before computing the Laplacian. The chosen smoothing filter is the Gaussian filter, as it provides optimal localization in the space and frequency domains [43].

In mathematical terms, the classical LoG filter can be defined as

LoG(f) = Δ(g_σ ∗ f) = (Δg_σ) ∗ f,   (7)

where Δ is the Laplacian operator, g_σ is the Gaussian function, f is the signal, and ∗ is the convolution operator. The rightmost expression is a consequence of the derivative rule for convolution.

There are two main issues when adapting Eq. (7) to the context of graphs: how to define the convolution operator and how to compute Δg_σ on a graph structure. The convolution operator is not well defined in graph domains, as it demands a shift mechanism that cannot be directly defined on graphs. However, convolution becomes multiplication in the spectral domain; thus, with the help of Graph Signal Processing theory, the convolution between two functions f₁ and f₂ can be defined as [9]:

f₁ ∗ f₂ = iGFT(f̂₁ ⊙ f̂₂),   (8)

where ⊙ is the element-wise multiplication, f̂₁ and f̂₂ are the GFTs of f₁ and f₂ (defined in Eq. (1)), and iGFT is the inverse GFT given in Eq. (2). Using Equations (7) and (8), we can define the Graph Laplacian of Gaussian (GLoG) filter as:

Definition (GLoG):

GLoG_σ(f) = iGFT(ĥ_σ ⊙ f̂),   (9)

where ĥ_σ denotes the spectral representation of Δg_σ.
Fig. 3: GLoG filter applied to the noisy step function depicted in Figure 2 (left). Blue and red colors correspond to nodes where the GLoG filter is negative and positive, respectively.
Fig. 4: GLoG for boundary detection (Section 3.2). From left to right, signal defined on a linear graph, GLoG of the signal, zero-crossing edges, the strongest edges, and the edge nodes obtained by algorithm (highlighted in red over graph signal).

In order to make Eq. (9) of practical use, we must properly define ĥ_σ. We know that the classical 2D Fourier transform of Δg_σ is given by −ω² ĝ_σ(ω) [43]. If we interpret the variable ω, which is a complex variable in the classical Fourier transform theory, as a real variable in the graph spectral domain, then ĥ_σ can be given by the following graph spectral filter:

ĥ_σ(λ) = −λ² e^{−σ²λ²/2},   (10)

where σ is a user-defined parameter and λ is the independent variable in the graph spectral domain. Defining ĥ_σ as a graph spectral filter makes the definition and computation of GLoG feasible, being one of the contributions of this work.

Having defined ĥ_σ, the GLoG filter (Eq. (9)) becomes a band-pass filter in the spectral graph domain. Moreover, in the particular case where G is a regular 2D grid, the GLoG matches the LoG filter as defined in classical signal processing.
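A sketch of this spectral GLoG on a toy graph (assuming the band-pass form −λ² times a Gaussian, consistent with Eq. (10); the path graph and step signal are illustrative assumptions):

```python
import numpy as np

def glog(L, f, sigma):
    # Spectral GLoG: h(lam) = -lam^2 * exp(-sigma^2 lam^2 / 2), applied as in Eq. (9)
    lam, U = np.linalg.eigh(L)
    h = -(lam ** 2) * np.exp(-(sigma ** 2) * lam ** 2 / 2.0)
    return U @ (h * (U.T @ f))

# Clean step on a path graph: the GLoG response changes sign at the step
n = 40
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

f = (np.arange(n) >= n // 2).astype(float)
r = glog(L, f, sigma=1.0)
```

The pair of adjacent nodes straddling the step is a zero-crossing pair (r changes sign there), which is exactly the boundary criterion used next. Note also that h(0) = 0, so constant signals produce a zero response.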

Figure 3 shows the result of the GLoG filter when applied to the noisy step function depicted in Figure 2 (left) using three different values of σ. Blueish and reddish colors correspond to negative and positive values of the GLoG, respectively. Notice that the larger the value of σ, the smoother the result of the GLoG filter.

Fig. 5: Edge node configurations obtained by keeping the zero-crossing pairs whose score is greater than the average plus a smaller (left) and a larger (right) multiple of the standard deviation of the scores in the signals depicted in Figure 3 (middle).

Extracting the Strongest Edge Nodes.   The result of applying a GLoG filter to a signal f is another signal defined on the nodes of the graph, which we denote by f_GLoG to make clear its dependence on the input signal f. A pair of adjacent nodes (i, j) is a zero-crossing pair if f_GLoG(i) f_GLoG(j) < 0. The nodes belonging to a zero-crossing pair are called edge nodes. We can associate the score |f_GLoG(i) − f_GLoG(j)| to each zero-crossing pair; the larger the score, the stronger the signal variation in that pair. "Weak" pairs can be filtered out by considering only zero-crossing pairs whose score is among the largest ones, giving rise to the strong edge nodes (or simply edge nodes). After computing the strongest edge nodes, one can generate a binary signal b where b(i) = 1 if i is an edge node and b(i) = 0 otherwise. The signal b is called an edge node configuration of f. Figure 4 illustrates all the steps involved in the computation of the edge nodes of a signal f.

Figure 5 shows edge node configurations for zero-crossing pairs whose score is greater than the mean plus a multiple of the standard deviation of the zero-crossing pair score distribution (a smaller multiple on the left and a larger one on the right). Such edge nodes have been computed from the GLoG signal depicted in Figure 3. Notice that, as expected, the higher the threshold, the smaller the number of edge nodes.
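The zero-crossing extraction and score thresholding can be sketched as follows (a hedged implementation; the cutoff is passed as a percentile, with the default matching the third-quartile choice used in this paper):

```python
import numpy as np

def get_edge_nodes(A, r, q=75):
    """A: boolean adjacency matrix; r: GLoG-filtered signal.
    Returns the binary edge node configuration, keeping only strong pairs."""
    n = len(r)
    pairs, scores = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] and r[i] * r[j] < 0:          # zero-crossing pair
                pairs.append((i, j))
                scores.append(abs(r[i] - r[j]))      # pair score
    b = np.zeros(n, dtype=bool)
    if pairs:
        thr = np.percentile(scores, q)               # q=75: third quartile
        for (i, j), s in zip(pairs, scores):
            if s >= thr:
                b[i] = b[j] = True                   # strong edge nodes
    return b

# Tiny example: a 4-node path whose filtered signal alternates in sign
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=bool)
r = np.array([1.0, -1.0, 2.0, -2.0])
b = get_edge_nodes(A, r)   # only the strongest pair (2, 3) survives the threshold
```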

Computational Aspects.   The pseudocode for the GLoG filter computation is stated in Algorithm 1. The method get_edge_nodes computes the edge nodes by first identifying the zero-crossing pairs, that is, pairs of adjacent nodes where f_GLoG changes its sign. The zero-crossing pairs whose score is larger than a threshold (strong edge node pairs) are returned by the method. In our implementation, the threshold is the third quartile of the zero-crossing pair score distribution.

1:  Compute the eigenvector matrix U and the eigenvalues λ_0, …, λ_{n−1} from the Laplacian L
2:  f̂ ← Uᵀ f                       # GFT, Eq. (1)
3:  f̂_h ← ĥ_σ ⊙ f̂                 # ĥ_σ as in Eq. (10)
4:  f_GLoG ← U f̂_h                 # iGFT, Eq. (2)
5:  b ← get_edge_nodes(f_GLoG)     # find edge nodes
6:  return b
Algorithm 1 GLoG(f, σ)

The spectral filtering can be computed via a Chebyshev polynomial approximation [44], which avoids the computation of the whole set of eigenvalues and eigenvectors of L, thus making it possible to handle large graphs in reasonable computational times. If the Chebyshev polynomial approximation is used, only the largest eigenvalue has to be computed in step 1 of the algorithm, and steps 2 and 3 are merged into a single one (see [44] for details). Our implementation makes use of the Chebyshev polynomial approximation.
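The Chebyshev approach of [44] can be sketched as follows (a simplified, dense illustration of the general idea, not the authors' implementation; production code uses sparse matrix-vector products and only an upper bound on the largest eigenvalue):

```python
import numpy as np

def chebyshev_filter(L, f, h, lmax, K=30):
    """Approximate U diag(h(lam)) U^T f with a degree-K Chebyshev
    expansion of h on [0, lmax], avoiding the eigendecomposition."""
    m = np.arange(K + 1)
    theta = np.pi * (m + 0.5) / (K + 1)
    nodes = (np.cos(theta) + 1.0) * lmax / 2.0        # quadrature nodes in [0, lmax]
    c = np.array([(2.0 / (K + 1)) * np.sum(h(nodes) * np.cos(j * theta))
                  for j in range(K + 1)])             # Chebyshev coefficients of h
    Lt = 2.0 * L / lmax - np.eye(L.shape[0])          # Laplacian rescaled to [-1, 1]
    t_prev, t_cur = f, Lt @ f                         # T0(Lt) f and T1(Lt) f
    out = 0.5 * c[0] * t_prev + c[1] * t_cur
    for j in range(2, K + 1):
        t_prev, t_cur = t_cur, 2.0 * Lt @ t_cur - t_prev   # Chebyshev recurrence
        out = out + c[j] * t_cur
    return out
```

Each iteration costs one matrix-vector product with the (sparse) Laplacian, so the overall cost is O(K·|E|), independent of the eigendecomposition.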

3.3 Edge Node Probability and Time Slice Entropy

In this section we show how the boundary detection methodology described in the previous section can be used to assist spatio-temporal data visualization. Let us assume a time series is associated with each node of a graph G, that is, given a set of time slices {1, …, T}, there is a function f_t that associates a scalar value f_t(i) to each node i in the time slice t.

Edge Node Probability.   The GLoG filter (Eq. (9)) can be applied to each time slice t in order to identify the edge nodes in f_t. Assuming that only the strongest edge nodes are kept in each time slice (score larger than the third quartile of the score distribution), one can estimate the probability p(i) of a node i being an edge node as follows:

p(i) = (1/T) Σ_{t=1}^{T} b_t(i),   (11)

where T is the number of time slices and b_t is the edge node configuration of time slice t. The probability of a node i not being an edge node is given by 1 − p(i). Notice that the probability p(i) is simply the number of time slices where i appears as an edge node divided by the total number of time slices.

Time Slice Entropy.   The following function

q_t(i) = p(i) b_t(i) + (1 − p(i)) (1 − b_t(i))   (12)

computes how probable the observed configuration of a node i (in time slice t) is in terms of it being or not an edge node. The entropy of the edge node configuration in time slice t is given by:

Definition (Entropy):

E(t) = −Σ_{i ∈ V} log(q_t(i)).   (13)
The entropy measures the degree of randomness of a time slice: the larger the entropy, the more unpredictable the time slice is in terms of its edge node configuration. Time slices where the edge node configuration has low probability present larger entropy. Therefore, by simply plotting the entropy over time, which we call the entropy diagram, one can visualize time slices where the signal presents unexpected (high entropy) or predictable (low entropy) edge node configurations. The definitions of entropy and entropy diagram are another contribution of this work. The spatial distribution of edge nodes in each time slice is also an important feature that can be used to understand a signal.
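Once the edge node configurations are stacked, the probability and entropy computations reduce to a few array operations (a hedged NumPy sketch; the negative-log entropy form is our reading of the definition, and the toy data are illustrative):

```python
import numpy as np

def entropy_diagram(B):
    """B: T x n binary array, B[t, i] = 1 if node i is an edge node in slice t.
    Returns the entropy of each time slice."""
    p = B.mean(axis=0)                                 # edge node probability, Eq. (11)
    q = p * B + (1.0 - p) * (1.0 - B)                  # probability of the observed state
    return -np.log(np.maximum(q, 1e-12)).sum(axis=1)   # per-slice entropy

# Toy example: node 0 is always an edge node, node 2 only in slice 3
B = np.zeros((10, 5))
B[:, 0] = 1.0
B[3, 2] = 1.0
E = entropy_diagram(B)    # the atypical slice 3 gets the largest entropy
```

Nodes that are always (or never) edge nodes contribute nothing to the entropy; only configurations that deviate from a node's usual state make a slice look unpredictable.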

3.4 Synthetic time-varying data

In order to illustrate how edge nodes and entropy information can help in the visual analysis of spatio-temporal data, we have manufactured a data set by randomly placing points within a 2D unitary square and connecting the points using the Delaunay triangulation. The points correspond to the graph nodes and the Delaunay links to the graph edges. All graph edges are assumed to have unitary weight. A spatio-temporal signal f_t is associated with the nodes of the graph. In each time slice t, the signal is set to a constant value for nodes lying inside a circle with radius 0.1 centered at a randomly perturbed reference position, and to a different constant otherwise. The random perturbation of the circle center, drawn independently in each coordinate from a small interval, prevents the signal from being exactly the same in each time slice. A small random noise is also added to f_t in each time slice. One hundred time slices are generated using the procedure described above; however, in a few time slices the center of the circle is further translated towards the top right or bottom left corner of the unitary square domain, forcing an abrupt change in the signal in those time slices. Figure 6 shows the signal generated as described above. The top left image corresponds to a time slice where the circle is centered close to the reference position, while the bottom left and top right images correspond to time slices where the center of the circle has been shifted towards the top right or bottom left corner of the unitary square. For each time slice, the corresponding edge node configuration is illustrated in the inset on the bottom right.
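A synthetic domain of this kind can be reproduced along the following lines (a sketch using SciPy's Delaunay triangulation; the point count, noise level, and shifted center are illustrative assumptions, as the exact values are not reproduced here):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
pts = rng.random((300, 2))              # random points in the unit square
tri = Delaunay(pts)

edges = set()                           # graph edges = Delaunay links, unit weight
for simplex in tri.simplices:
    for a in range(3):
        i, j = sorted((int(simplex[a]), int(simplex[(a + 1) % 3])))
        edges.add((i, j))

def circle_signal(center, radius=0.1, noise=0.05):
    # Higher value inside the (perturbed) circle, baseline outside, plus noise
    inside = np.linalg.norm(pts - center, axis=1) < radius
    return inside.astype(float) + noise * rng.random(len(pts))

f_normal = circle_signal(np.array([0.5, 0.5]) + 0.02 * rng.standard_normal(2))
f_shifted = circle_signal(np.array([0.8, 0.8]))   # an "abrupt change" slice
```

Feeding each slice through the GLoG pipeline then yields one edge node configuration per slice, from which the entropy diagram is computed.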

Fig. 6: Three time slices of the synthetic signal and corresponding edge node configurations (bottom right insets). The bottom right image shows the edge nodes (red circled nodes) jointly with the signal.

Figure 7 presents the entropy diagram of the signal described above. Notice that the time slices where the center of the circle is displaced towards the top right and bottom left corners of the unit square are revealed, as they have larger entropy values when compared to the remaining time slices. Moreover, since each edge node configuration can be seen as a binary vector in an n-dimensional space, we can cluster the high-dimensional vectors in order to identify time slices with similar edge node configurations. The color of the dots in Figure 7 indicates the cluster each time slice belongs to (we have clustered the edge node configuration vectors into three groups using a simple k-means). The red group corresponds to time slices where the center of the circle has been shifted towards the bottom left of the unit square, and the green group to the time slices where the center moved towards the top right corner. The blue group gathers the time slices where the center of the circle is close to the reference position.

Fig. 7: Entropy over time. The three patterns revealed correspond to time slices where the signal is shifted towards the top right corner, time slices where the signal is shifted to the bottom left corner, and the remaining time slices, where the signal is centered close to the reference position. The time slices holding abrupt changes in the signal are clearly revealed by the entropy diagram.

Although simple, the synthetic example discussed in this section illustrates the potential of the GLoG filter and entropy diagram as visual analytic tools for spatio-temporal data. Such usefulness will become clearer in the case studies presented in the following section.

4 Case Studies

In this section we apply our methodology in two case studies involving real data. In both studies we strongly rely on the proposed GLoG filter and the associated entropy diagram to assist the visual analysis and identification of spatio-temporal patterns and phenomena. The first case study covers the analysis of taxi pick ups during one week in downtown Manhattan, NYC. The second case study deals with crime data in the city of São Paulo, Brazil.

Fig. 8: Total number of taxi pick ups (left), edge node probability (center), and enhanced total taxi pick up visualization (right).
Fig. 9: Taxi pick up entropy diagram. Entropy tends to increase along the day on weekdays, reaching its maximum around midnight. On Saturday, the gradual increase along the day is not observed, but an abrupt growth in the late evening is. On Sunday, entropy tends to decrease along the day. Therefore, taxi pick up is more random at dawn, especially on weekends. The edge node configurations (see Figure 10) of the time slices have been clustered into seven groups, revealing the different behaviors of taxi pick up along the day.
Fig. 10: Examples of edge node configurations along the day on weekdays. The edge node configurations reveal the dynamics of the city, with a concentration of edge nodes in midtown in the early morning, extending to the east side in the early afternoon, and moving to downtown in the evening.

4.1 NYC Taxi Data Analysis

The graph domain in this case study is the downtown Manhattan street network, where street intersections correspond to graph nodes and the street blocks connecting the intersections make up the set of graph edges. The graph of downtown Manhattan is made up of 4694 nodes and 6350 edges. The taxi data set [38] contains information about one week of taxi pick ups in each intersection of downtown Manhattan, from August 11th to August 18th, 2013. The data was aggregated into half-hour intervals in each node, giving rise to 336 time slices. The GLoG filter is applied to each time slice independently.

Figure 8 left shows the total number of taxi pick up in each node of the graph (the sum, over the 336 time slices, of the number of taxi pick ups in each node). A few nodes are highlighted in the neighborhood of Penn Station, a major train and metro station hub in NYC, and a few blocks uptown, close to Port Authority, the main bus station terminal in Manhattan. Since the number of taxi pick ups in those two regions are much larger than in other areas of the city, other important locations are overshadowed, becoming difficult to visualize regions where the number of taxi pick ups is relevant but not so large as in the neighborhood of Penn Station and Port Authority. Figure 8 middle shows the probability (see Eq. 11) of each edge node . Since the GLoG filter is scale invariant, it reveals a number of nodes that are not highlighted in Figure 8 left, all corresponding to nodes with high probability of being edge nodes. Nodes presenting a not so high signal intensity but with high probability of being an edge node are usually associated to interesting spatial events that deserve to be analyzed. Figure 8 right corroborates this claim, showing an enhanced total taxi pick up visualization generated by adding the probability to the normalized total number of taxi pick ups. The combined visualization is the equivalent of what is called “image enhancement” in the context of image processing [7]. As annotated in Figure 8 right, besides Port Authority (A), the whole neighborhood of Penn Station, including Korea Town (B), Times Square (C), Grand Central Station (D), the luxurious hotel neighborhood (E), and the region that concentrates most of the consulates in NY (F) are also visible in the enhanced visualization. Even smaller spots like the Meatpacking District (G) and the 9/11 memorial (H) show up after the enhancement process. 
The region indicated by (K) is also interesting to analyze: it corresponds to Jersey St., a very quiet two-block street in the middle of Soho where the number of taxi pick-ups is much smaller than in its neighborhood. Notice that the enhanced visualization reveals not only important areas in Manhattan but also locations with almost no pick-ups, which can indicate quiet spots or traffic issues such as blocked streets and accidents.
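The enhancement step described above amounts to a simple combination of the normalized signal with the scale-invariant edge probabilities; a minimal sketch (the normalization to [0, 1] is an assumption about how the signal is rescaled):

```python
import numpy as np

def enhance(signal, edge_prob):
    """Add edge-node probabilities to the normalized signal,
    analogous to image enhancement: scale-free features lift
    regions overshadowed by dominant peaks."""
    s = np.asarray(signal, dtype=float)
    norm = (s - s.min()) / (s.max() - s.min())  # rescale signal to [0, 1]
    return norm + np.asarray(edge_prob, dtype=float)

# A dominant peak (node 0) overshadows a small but "edgy" node 3.
total_pickups = np.array([1000.0, 10.0, 8.0, 40.0])
edge_prob     = np.array([0.2, 0.05, 0.05, 0.9])
enhanced = enhance(total_pickups, edge_prob)
```

In the toy example, node 3 is nearly invisible in the raw counts but stands out after enhancement, mirroring how smaller hotspots like the Meatpacking District emerge in Figure 8 (right).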

Fig. 11: Enhanced taxi pick up visualization for some of the time slices depicted in Figure 10.

Figure 9 shows the taxi pick-up entropy diagram. Notice that entropy behaves similarly on weekdays, reaching its maximum after midnight, dropping at dawn, presenting a local maximum right after 6am, and increasing consistently along the day. Weekend days also present maximum entropy at dawn; however, the gradual increase along the day is not observed. Instead, an abrupt growth is observed in the late evening on Saturday, while entropy decreases along the day on Sunday. Therefore, the entropy diagram shows that taxi pick-ups become more random at dawn, especially on weekends.
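One plausible way to score the randomness of a single time slice, consistent with the idea that high-entropy slices contain many low-probability edge nodes, is the surprisal (negative log-likelihood) of its binary edge node configuration under node-wise edge probabilities. This is a hedged reading for illustration; the paper's exact entropy definition may differ.

```python
import numpy as np

def slice_surprisal(config, p, eps=1e-12):
    """Surprisal of one time slice's edge node configuration.

    config : (n,) 0/1 array, 1 where the node is an edge node in this slice
    p      : (n,) probability of each node being an edge node (Eq. 11)
    """
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    c = np.asarray(config, dtype=float)
    # Bernoulli log-likelihood per node, summed and negated.
    return -np.sum(c * np.log(p) + (1 - c) * np.log(1 - p))

# A configuration matching the probable nodes scores low (predictable);
# one flagging only improbable nodes scores high (random-looking).
p = np.array([0.9, 0.1])
expected   = slice_surprisal(np.array([1, 0]), p)
unexpected = slice_surprisal(np.array([0, 1]), p)
```

Under this reading, a dawn slice whose edge nodes appear in rarely flagged locations would receive a high score, matching the behavior reported in Figure 9.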

The edge node configurations associated with the time slices have been clustered into seven groups using k-means (as explained in Section 3.4). The number of clusters has been defined by analyzing the silhouette coefficient [45], which indicated seven as the “ideal” number of clusters to balance quality and number of groups. It is possible to see that the groups nicely split the behavior of taxi pick-ups according to the periods of the day, indicating that the edge node configuration captures patterns of taxi pick-ups. Moreover, the particular behavior taking place during the weekend is also captured by the edge node configuration. Figure 10 shows randomly chosen examples of edge node configurations from six of the seven groups. The Early Morning group concentrates edge nodes mostly in the neighborhood of the main public transportation hubs, such as Penn Station, Port Authority, Grand Central, and other smaller PATH train and metro terminals. The Morning group shows that taxi pick-ups spread to the northeast of downtown Manhattan, in the area of luxury hotels and embassies. In the Afternoon group, one can see that taxi pick-ups start to move down toward 14th Street, with a concentration around Union Square Park. Weekday Evening is characterized by a concentration of edge nodes in Soho and the Meatpacking District, well-known commercial areas with famous stores, markets, and bars. In the Late Evening group, edge nodes spread to the East Village, an area with many bars and restaurants that is known for its nightlife. Edge nodes in the Dawn group are mostly concentrated in the southern part of the island, with a reduction in the number of edge nodes in the northeast area. As pointed out by the entropy diagram, late night and dawn have larger entropy, meaning that the pattern of taxi pick-ups is more random during those periods.
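The clustering step can be sketched as follows: treat each time slice's binary edge node configuration as a feature vector, run k-means for a range of k, and keep the k with the best silhouette coefficient. The distance on binary vectors (Euclidean, as k-means uses) and the candidate range of k are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_configurations(configs, k_range=range(2, 10), seed=0):
    """Cluster edge node configuration vectors (one per time slice),
    choosing the number of clusters by the silhouette coefficient."""
    best_k, best_score, best_labels = None, -1.0, None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(configs)
        score = silhouette_score(configs, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

# Two clearly separated groups of synthetic 30-node configurations.
rng = np.random.default_rng(0)
a = (rng.random((20, 30)) < 0.05).astype(float); a[:, :5] = 1.0
b = (rng.random((20, 30)) < 0.05).astype(float); b[:, -5:] = 1.0
configs = np.vstack([a, b])
k, labels = cluster_configurations(configs)
```

On the real data this selection would be run over the 336 half-hour slices, yielding the seven groups discussed above.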

The visualization of edge node configuration patterns depicted in Figure 10 reveals the dynamics of the city: people tend to arrive at the public transportation terminals during the morning, stay concentrated in the north and northeast areas during the day, and migrate to the south and east sides of Manhattan in the evening and late night to enjoy the bars and restaurants located in those areas. Figure 11 shows the enhanced visualization of taxi pick-ups in six of the time slices shown in Figure 10, one per group of edge node configurations. Notice the agreement between the enhanced visualization of taxi pick-ups and the patterns of edge node configuration, showing that the edge node configuration can be an interesting alternative when analyzing the behavior of a signal over time, since even small variations in the signal can be identified as edge nodes.

4.2 Crime Data

In this case study we apply the proposed methodology to analyze crime data in the city of São Paulo, Brazil. The data is available for download in the São Paulo Open Data repository and provides, among other attributes, the date and time of each criminal event, the type of crime, and the census region where the event took place. In this study we focus on eleven years of data, from 2007 to 2017, accounting only for passerby robbery as the crime type. Since the geolocation of each criminal activity is given by census regions, we built a graph where nodes correspond to census regions and edges link census regions that geographically intersect each other. Only census regions within five kilometers of the center of São Paulo are considered. We applied a monthly aggregation to the data, producing one time slice per month.
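The monthly aggregation that turns raw crime records into graph signals can be sketched with pandas; the column names and toy records below are hypothetical stand-ins for the Open Data schema.

```python
import pandas as pd

# Hypothetical schema: one row per crime event, with a timestamp and
# the census region where it occurred (column names are assumptions).
events = pd.DataFrame({
    "date":   pd.to_datetime(["2007-01-03", "2007-01-20", "2007-02-11",
                              "2007-02-15", "2007-03-02"]),
    "region": ["r1", "r2", "r1", "r1", "r2"],
})

# Month aggregation: count events per (month, census region); each row
# of the resulting table is one time slice of the graph signal.
slices = (events
          .groupby([events["date"].dt.to_period("M"), "region"])
          .size()
          .unstack(fill_value=0))
```

Each row of `slices` would then be fed to the GLoG filter exactly as the half-hour taxi slices are.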

Fig. 12: Total number of passerby robberies from 2007 to 2017 (left) and the probability of each node being an edge node (right).
Fig. 13: Entropy diagram of São Paulo passerby robbery. The highlighted peaks correspond to some of the highest and lowest entropy time slices.
Fig. 14: São Paulo crime: high probability edge nodes in low entropy time slices (left) and low probability edge nodes in high entropy time slices (right). In the left column, purple points indicate high probability edge nodes, which are locations where criminal activity is common. In the right column, blue points indicate low probability edge nodes, marking neighborhoods where the criminal activity is unexpected.

Figure 12 shows a visualization of the total number of passerby robberies along the years (left) and the probability (see Eq. 11) of each node being an edge node (right), where purple colors indicate larger probabilities. As in the case of NYC taxi pick-ups, the number of passerby robberies in the center of São Paulo is much larger than in other regions, overshadowing nodes and regions where the number of passerby robberies is not of the same magnitude as in the city center but is still important to identify and analyze.

The entropy diagram of São Paulo passerby robbery is depicted in Figure 13. Three high and three low entropy time slices are chosen to be investigated in detail (highlighted in the entropy diagram). High entropy time slices are associated with (edge node) patterns that are less likely to happen. In the context of crime data, this means that crimes are taking place in “unexpected” locations. In fact, an edge node configuration with high entropy tends to bear a considerable number of low probability edge nodes. Therefore, by analyzing the low probability edge nodes that show up in high entropy time slices, one can identify locations where the number of passerby robberies changes abruptly, which can indicate the action of gangs or the emergence of juvenile offenders in those locations, or even a sudden reduction in crime rates, which is also interesting to analyze. The images in the right column of Figure 14 show low probability edge nodes present in the high entropy time slices highlighted in Figure 13. Notice that a number of locations and neighborhoods far from the city center become evident. By analyzing the number of crime events in those locations, we noticed that they do not present a large number of passerby robberies but, in the indicated high entropy time slices, the number of criminal events increases substantially. In contrast, the edge node configurations associated with low entropy time slices tend to reveal (edge node) patterns that are probable to happen, thus indicating neighborhoods where criminal activities are expected. The left column of Figure 14 shows high probability edge nodes present in the low entropy time slices indicated in Figure 13. It is easy to see that the most probable edge nodes are located in downtown São Paulo, indicating a high frequency of passerby robbery in that area.
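Selecting the “unexpected” locations amounts to intersecting a slice's edge node configuration with the nodes whose overall edge probability is low; a minimal sketch (the probability threshold is an assumed value, not one stated in the paper):

```python
import numpy as np

def flag_unexpected_nodes(config, p, prob_threshold=0.1):
    """Indices of nodes flagged as edge nodes in a given time slice
    whose overall edge-node probability is low -- candidate
    'unexpected' crime locations.

    config : (n,) 0/1 array, 1 where the node is an edge node in the slice
    p      : (n,) probability of each node being an edge node (Eq. 11)
    """
    config = np.asarray(config, dtype=bool)
    p = np.asarray(p, dtype=float)
    return np.flatnonzero(config & (p < prob_threshold))

# Node 1 is an edge node in this slice but rarely one overall.
unexpected = flag_unexpected_nodes([1, 1, 0, 1], [0.8, 0.05, 0.02, 0.5])
```

Swapping the comparison to `p > prob_threshold` on low-entropy slices would analogously recover the “expected” downtown hotspots in the left column of Figure 14.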
The analysis above shows the potential of GLoG as a basic tool to support visual analytic tasks, mainly in applications where scale invariance is a basic requirement, as is the case of crime analysis.

5 Discussion and Limitations

The case studies presented in Section 4 show the effectiveness of the proposed GLoG filter and entropy diagram in supporting the visual analysis of spatio-temporal data. The entropy diagram reveals the degree of randomness of each time slice, allowing an intuitive visual identification of predictable and unpredictable time slices in terms of their edge node configuration.

The edge node configuration is, by itself, a human interpretable signature from which one can understand the spatial behavior of a signal. With the help of clustering schemes, edge node configuration can uncover time intervals where a signal presents similar behavior, thus helping users to understand spatial phenomena over time.

Our methodology demands essentially two parameters: σ, which controls the strength of the Gaussian filter, and the threshold used to filter out weak edge node pairs. The latter is defined in terms of the standard deviation of the distribution of edge node pair scores. In the case studies we fixed σ and set the edge node pair threshold at the third quartile, as those values resulted in interesting analyses and visual patterns. However, those parameters impact the edge node configuration and thus the entropy and cluster analysis: the larger σ and the threshold, the smaller the number of edge nodes that tend to be present in the edge node configuration. Although we have not experienced any difficulty in setting those parameters in the two case studies discussed in Section 4, that might not always be the case, and fine tuning may be necessary depending on the application.

Regarding computational times, the whole process, including the computation of the largest eigenvalue (involved in the Chebyshev approximation), GLoG filtering, and edge node pair identification and thresholding, takes only a few seconds in both case studies.
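The low running time hinges on the fact that the largest Laplacian eigenvalue, which bounds the spectral interval for the Chebyshev approximation, can be obtained with a few Lanczos iterations on a sparse matrix. A minimal sketch (the random graph is a stand-in for the Manhattan or São Paulo networks):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Build a random sparse weighted graph and its combinatorial Laplacian.
adj = sp.random(500, 500, density=0.01, format="csr", random_state=0)
adj = adj + adj.T                        # symmetrize
deg = np.asarray(adj.sum(axis=1)).ravel()
lap = sp.diags(deg) - adj

# Largest eigenvalue via sparse Lanczos: cheap even for city-scale graphs.
lmax = eigsh(lap, k=1, which="LA", return_eigenvectors=False)[0]
```

By Gershgorin's theorem the result is bounded by twice the maximum weighted degree, a useful sanity check (and a common fallback estimate when even the Lanczos step is skipped).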

The proposed methodology opens a wide set of future directions. For instance, it would be interesting to investigate techniques able to track edge nodes over time in order to visualize how the spatial “boundaries” evolve, allowing a better understanding of the dynamics of the data. Another interesting problem is how to build upon boundary detection to design spatial segmentation techniques that respect the edge nodes; this kind of segmentation has been successfully employed in image processing and computer vision. Properly segmenting spatial locations based on data is a major problem in visualization that can greatly benefit from the methodology proposed in this work. The GLoG filter can also be combined with other feature extraction mechanisms, such as topological methods, allowing the analysis of “extremal” and transitional data locations. Moreover, the proposed methodology can also be applied to problems beyond geo-spatial data, for example high-dimensional data analysis, mainly through dimensionality reduction.

6 Conclusion

In this work we have introduced a novel methodology to process and visualize spatio-temporal data based on the concepts of boundary detection and entropy diagram. The proposed methodology strongly relies on spectral filtering from graph signal processing theory, which enables the precise definition of a Laplacian of Gaussian boundary detection filter that operates on graph domains. Edge nodes computed by the proposed machinery can be used as feature vectors as well as to define the notion of entropy of time slices, allowing the identification of low and high probability time slices. Visual analytic tasks involving synthetic and real data sets show the effectiveness and usefulness of the proposed methodology in supporting visualization applications. Moreover, the proposed methodology can be combined with other feature extraction methods so as to capture different facets of time-varying data, opening a number of possibilities for future work.


This work was supported by: the Moore-Sloan Data Science Environment at NYU; NASA; NSF awards CNS-1229185, CCF-1533564, CNS-1544753, CNS-1626098, CNS-1730396, CNS-1828576; 302643/2013-3 CNPq-Brazil and 2016/04391-2 and 2014/12236-1 São Paulo Research Foundation (FAPESP) - Brazil. C. T. Silva is partially supported by the DARPA MEMEX and D3M programs. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA and São Paulo Research Foundation.


  • [1] G. Andrienko, N. Andrienko, G. Fuchs, and J. Wood, “Revealing patterns and trends of mass mobility through spatial and temporal abstraction of origin-destination movement data,” IEEE Trans. Vis. Comp. Graph., vol. 23, no. 9, pp. 2120–2136, 2017.
  • [2] D. Liu, P. Xu, and L. Ren, “Tpflow: Progressive partition and multidimensional pattern extraction for large-scale spatio-temporal data analysis,” IEEE Trans. Vis. Comp. Graph., vol. 25, no. 1, pp. 1–11, 2019.
  • [3] F. Miranda, H. Doraiswamy, M. Lage, K. Zhao, B. Gonçalves, L. Wilson, M. Hsieh, and C. T. Silva, “Urban pulse: Capturing the rhythm of cities,” IEEE Trans. Vis. Comp. Graph., vol. 23, no. 1, pp. 791–800, 2017.
  • [4] N. Andrienko and G. Andrienko, Exploratory analysis of spatial and temporal data: a systematic approach.   Springer Science & Business Media, 2006.
  • [5] H. Doraiswamy, N. Ferreira, T. Damoulas, J. Freire, and C. T. Silva, “Using topological analysis to support event-guided exploration in urban data,” IEEE Trans. Vis. Comp. Graph., vol. 20, no. 12, pp. 2634–2643, 2014.
  • [6] N. Cao, C. Lin, Q. Zhu, Y.-R. Lin, X. Teng, and X. Wen, “Voila: Visual anomaly detection and monitoring with streaming spatiotemporal data,” IEEE Trans. Vis. Comp. Graph., vol. 24, no. 1, pp. 23–33, 2018.
  • [7] L. Shapiro and G. C. Stockman, Computer vision.   Prentice Hall, 2001.
  • [8] R. Maini and H. Aggarwal, “Study and comparison of various image edge detection techniques,” International journal of image processing (IJIP), vol. 3, no. 1, pp. 1–11, 2009.
  • [9] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 83–98, 2013.
  • [10] A. Dal Col, P. Valdivia, F. Petronetto, F. Dias, C. T. Silva, and L. G. Nonato, “Wavelet-based visual analysis for data exploration,” Computing in Science & Engineering, vol. 19, no. 5, pp. 85–91, 2017.
  • [11] P. Valdivia, F. Dias, F. Petronetto, C. T. Silva, and L. G. Nonato, “Wavelet-based visualization of time-varying data on graphs,” in IEEE VAST.   IEEE, 2015, pp. 1–8.
  • [12] C. Ware, Information visualization: perception for design.   Elsevier, 2012.
  • [13] M. Candela, M. Di Bartolomeo, G. Di Battista, and C. Squarcella, “Radian: visual exploration of traceroutes,” IEEE Trans. Vis. Comp. Graph., vol. 24, no. 7, pp. 2194–2208, 2018.
  • [14] A. Dal Col, P. Valdivia, F. Petronetto, F. Dias, C. T. Silva, and L. G. Nonato, “Wavelet-based visual analysis of dynamic networks,” IEEE transactions on visualization and computer graphics, vol. 24, no. 8, pp. 2456–2469, 2018.
  • [15] S. van den Elzen, D. Holten, J. Blaas, and J. J. van Wijk, “Reducing snapshots to points: A visual analytics approach to dynamic network exploration,” IEEE Trans. Vis. Comp. Graph., vol. 22, no. 1, pp. 1–10, 2016.
  • [16] L. An, M.-H. Tsou, S. E. Crook, Y. Chun, B. Spitzberg, J. M. Gawron, and D. K. Gupta, “Space–time analysis: Concepts, quantitative methods, and future directions,” Annals of the Association of American Geographers, vol. 105, no. 5, pp. 891–914, 2015.
  • [17] G. Andrienko, N. Andrienko, W. Chen, R. Maciejewski, and Y. Zhao, “Visual analytics of mobility and transportation: State of the art and further research directions,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 8, pp. 2232–2249, 2017.
  • [18] B. Bach, P. Dragicevic, D. Archambault, C. Hurter, and S. Carpendale, “A Review of Temporal Data Visualizations Based on Space-Time Cube Operations,” in EuroVis - STARs, R. Borgo, R. Maciejewski, and I. Viola, Eds.   The Eurographics Association, 2014.
  • [19] W. Chen, F. Guo, and F.-Y. Wang, “A survey of traffic data visualization,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 6, pp. 2970–2984, 2015.
  • [20] N. Cressie and C. K. Wikle, Statistics for spatio-temporal data.   John Wiley & Sons, 2015.
  • [21] G. Andrienko, N. Andrienko, G. Fuchs, A.-M. O. Raimond, J. Symanzik, and C. Ziemlicki, “Extracting Semantics of Individual Places from Movement Data by Analyzing Temporal Patterns of Visits,” in Proc. SIGSPATIAL.   ACM, 2013, pp. 9:9–9:16.
  • [22] S. Chen, X. Yuan, Z. Wang, C. Guo, J. Liang, Z. Wang, X. Zhang, and J. Zhang, “Interactive visual discovering of movement patterns from sparsely sampled geo-tagged social media data,” IEEE Trans. Vis. Comp. Graph., vol. 22, no. 1, pp. 270–279, 2016.
  • [23] M. Itoh, D. Yokoyama, M. Toyoda, Y. Tomita, S. Kawamura, and M. Kitsuregawa, “Visual fusion of mega-city big data: an application to traffic and tweets data analysis of metro passengers,” in IEEE Big Data, 2014, pp. 431–440.
  • [24] C. Xia, R. Schwartz, K. Xie, A. Krebs, A. Langdon, J. Ting, and M. Naaman, “Citybeat: Real-time social media visualization of hyper-local city data,” in Proc. WWW, 2014, pp. 167–170.
  • [25] C. Zhang, Y. Zheng, X. Ma, and J. Han, “Assembler: efficient discovery of spatial co-evolving patterns in massive geo-sensory data,” in ACM SIGKDD, 2015, pp. 1415–1424.
  • [26] G. Andrienko and N. Andrienko, “A general framework for using aggregation in visual exploration of movement data,” The Cartographic Journal, vol. 47, no. 1, pp. 22–40, 2010.
  • [27] R. Scheepens, N. Willems, H. van de Wetering, G. Andrienko, N. Andrienko, and J. van Wijk, “Composite density maps for multivariate trajectories,” IEEE Trans. Vis. Comp. Graph., vol. 17, no. 12, pp. 2518–2527, 2011.
  • [28] S. Thakur and A. J. Hanson, “A 3d visualization of multiple time series on maps,” in Information Visualisation.   IEEE, 2010, pp. 336–343.
  • [29] W. Wu, J. Xu, H. Zeng, Y. Zheng, H. Qu, B. Ni, M. Yuan, and L. M. Ni, “Telcovis: Visual exploration of co-occurrence in urban human mobility based on telco data,” IEEE Trans. Vis. Comp. Graph., vol. 22, no. 1, pp. 935–944, 2016.
  • [30] W. Zeng, C.-W. Fu, S. M. Arisona, and H. Qu, “Visualizing interchange patterns in massive movement data,” Computer Graphics Forum, vol. 32, no. 3pt3, pp. 271–280, 2013.
  • [31] D. Chu, D. A. Sheets, Y. Zhao, Y. Wu, J. Yang, M. Zheng, and G. Chen, “Visualizing hidden themes of taxi movement with semantic transformation,” in IEEE PacificVis, 2014, pp. 137–144.
  • [32] D. I. Shuman, B. Ricaud, and P. Vandergheynst, “Vertex-frequency analysis on graphs,” Applied and Computational Harmonic Analysis, vol. 40, no. 2, pp. 260–291, 2016.
  • [33] T. von Landesberger, F. Brodkorb, P. Roskosch, N. Andrienko, G. Andrienko, and A. Kerren, “Mobilitygraphs: Visual analysis of mass mobility dynamics via spatio-temporal graphs and clustering,” IEEE Trans. Vis. Comp. Graph., vol. 22, no. 1, pp. 11–20, 2016.
  • [34] R. Maciejewski, S. Rudolph, R. Hafen, A. M. Abusalah, M. Yakout, M. Ouzzani, W. S. Cleveland, S. J. Grannis, and D. S. Ebert, “A visual analytics approach to understanding spatiotemporal hotspots,” IEEE Trans. Vis. Comp. Graph., vol. 16, no. 2, pp. 205–220, 2010.
  • [35] F. Giannotti, M. Nanni, F. Pinelli, and D. Pedreschi, “Trajectory pattern mining,” in ACM SIGKDD, 2007, pp. 330–339.
  • [36] G. Di Lorenzo, M. Sbodio, F. Calabrese, M. Berlingerio, F. Pinelli, and R. Nair, “Allaboard: Visual exploration of cellphone mobility data to optimise public transport,” IEEE Trans. Vis. Comp. Graph., vol. 22, no. 2, pp. 1036–1050, 2016.
  • [37] L. Yu, W. Wu, X. Li, G. Li, W. S. Ng, S.-K. Ng, Z. Huang, A. Arunan, and H. M. Watt, “iviztrans: Interactive visual learning for home and work place detection from massive public transportation data,” in IEEE VAST.   IEEE, 2015, pp. 49–56.
  • [38] N. Ferreira, J. Poco, H. T. Vo, J. Freire, and C. T. Silva, “Visual exploration of big spatio-temporal urban data: A study of new york city taxi trips,” IEEE Trans. Vis. Comp. Graph., vol. 19, no. 12, pp. 2149–2158, 2013.
  • [39] H. Guo, Z. Wang, B. Yu, H. Zhao, and X. Yuan, “Tripvista: Triple perspective visual trajectory analytics and its application on microscopic traffic data at a road intersection,” in IEEE PacificVis, 2011, pp. 163–170.
  • [40] J. Zhang, E. Yanli, J. Ma, Y. Zhao, B. Xu, L. Sun, J. Chen, and X. Yuan, “Visual analysis of public utility service problems in a metropolis,” IEEE Trans. Vis. Comp. Graph., vol. 20, no. 12, pp. 1843–1852, 2014.
  • [41] Y. Matsubara, Y. Sakurai, W. G. Van Panhuis, and C. Faloutsos, “Funnel: automatic mining of spatially coevolving epidemics,” in ACM SIGKDD, 2014, pp. 105–114.
  • [42] D. I. Shuman, B. Ricaud, and P. Vandergheynst, “Vertex-frequency analysis on graphs,” Applied and Computational Harmonic Analysis, vol. 40, no. 2, pp. 260–291, 2016.
  • [43] D. Marr and E. Hildreth, “Theory of edge detection,” Proc. R. Soc. Lond. B, vol. 207, no. 1167, pp. 187–217, 1980.
  • [44] D. K. Hammond, P. Vandergheynst, and R. Gribonval, “Wavelets on graphs via spectral graph theory,” Applied and Computational Harmonic Analysis, vol. 30, no. 2, pp. 129–150, 2011.
  • [45] P. J. Rousseeuw, “Silhouettes: a graphical aid to the interpretation and validation of cluster analysis,” Journal of computational and applied mathematics, vol. 20, pp. 53–65, 1987.