We propose δ-MAPS, a method that analyzes spatio-temporal data to first identify the distinct spatial components of the underlying system, referred to as "domains", and second to infer the connections between them. A domain is a spatially contiguous region of highly correlated temporal activity. The core of a domain is a point or subregion at which a metric of local homogeneity is maximum across the entire domain. We compute a domain as the maximum-sized set of spatially contiguous cells that include the detected core and satisfy a homogeneity constraint, expressed in terms of the average pairwise cross-correlation across all cells in the domain. Domains may be spatially overlapping. Different domains may have correlated activity, potentially at a lag, because of direct or indirect interactions. The proposed edge inference method examines the statistical significance of each lagged cross-correlation between two domains, infers a range of lag values for each edge, and assigns a weight to each edge based on the covariance of the two domains. We illustrate the application of δ-MAPS on data from two domains: climate science and neuroscience.
Spatio-temporal data are becoming increasingly prevalent and important both for science (e.g., climate, systems neuroscience, seismology) and for enterprises (e.g., the analysis of geotagged social media activity). The spatial scale of the available data is often determined by an arbitrary grid, which is typically larger than the true dimensionality of the underlying system. One major task is to identify the distinct semi-autonomous components of this system and to infer their (potentially lagged and weighted) interconnections from the available spatio-temporal data. Traditional dimensionality reduction methods, such as PCA, ICA or clustering, have been used successfully for many years, but they have known limitations when the objective is to infer the functional network between all spatial components of the system.
We propose δ-MAPS, an inference method that first identifies these spatial components, referred to as “domains”, and then the connections between them (§3). Informally, a functional domain (or simply domain) is a spatially contiguous region that somehow participates in the same dynamic effect or function. The exact mechanism that creates this effect or function varies across application domains; however, the key idea is that the functional relation between the grid cells of a domain results in highly correlated temporal activity. If we accept this premise, it follows that we should be able to identify the “epicenter” or core of a domain as a point (or subregion) at which the local homogeneity is maximum across the entire domain. Instead of searching for the discrete boundary of a domain, which may not exist in reality, we compute a domain as the maximum possible set of spatially contiguous cells that include the detected core, and that satisfy a homogeneity constraint, expressed in terms of the average pairwise cross-correlation across all cells in the domain. Domains may be spatially overlapping. Also, some cells may not belong to any domain.
After we identify all domains, δ-MAPS infers a functional network between them. Different domains may have correlated activity, potentially at a lag, because of direct or indirect interactions. The proposed edge inference method examines the statistical significance of each lagged cross-correlation between two domains, applies a multiple-testing process to control the rate of false positives, infers a range of potential lag values for each edge, and assigns a weight to each edge based on the covariance of the corresponding two domains.
δ-MAPS is related to clustering, parcellation (or regionalization), network community detection, multivariate statistical methods for dimensionality reduction such as PCA and ICA, as well as functional network and lag inference methods. However, as we discuss in §2 and show with synthetic data experiments in §4, δ-MAPS is also significantly different from all these methods. δ-MAPS does not require the number of domains as an input parameter, the resulting domains are spatially contiguous and potentially overlapping, and the inferred connections between domains can be lagged and positively or negatively weighted. Further, the distinction between grid cells that are correlated within the same domain and grid cells that are correlated across two distinct domains allows δ-MAPS to distinguish local diffusion (or dispersion) phenomena from remote interactions that may be due to underlying structural connections (e.g., a white-matter fiber between two brain regions).
We illustrate the application of δ-MAPS on data from two domains: climate science (§5) and neuroscience (§6). First, the sea-surface temperature (SST) climate network identifies some well-known climate “tele-connections” (such as the lagged connection between the El Niño Southern Oscillation and the Indian ocean), but it also captures less well-known lagged connections that deserve further investigation by the domain experts. Second, the analysis of resting-state fMRI cortical data confirms the presence of three well-known functional brain “networks” (default-mode, occipital, and motor/somatosensory), and shows that the cortical network includes a backbone of relatively few regions that are densely interconnected.
A common approach to reduce the dimensionality of spatio-temporal data is to apply PCA (standard or rotated) or ICA techniques. For instance, in climate science, PCA (also known as Empirical Orthogonal Function (EOF) analysis) has been used to identify teleconnections between distinct climate regions . The orthogonality between PCA components complicates the interpretation of the results, making it difficult to identify the distinct underlying modes of variability and to separate their effects, as clearly discussed in . ICA analysis is more common in the neuroscience literature, aiming to identify independent rather than orthogonal components . However, ICA does not provide a relative significance for each component, and the number of independent components must be chosen based on some additional information about the underlying system.
Another broad family of spatio-temporal dimensionality reduction methods is based on unsupervised clustering. Such algorithms can be grouped into region-growing (e.g., [5, 23]), spectral (e.g., the NCUT method often applied in fMRI analysis [10, 38] – but also see a discussion of their limitations ), hierarchical (e.g., [6, 37]), probabilistic (e.g., ) or density based methods . These groups of algorithms are quite different, but they share some common characteristics: the resulting clusters may not be spatially contiguous [34, 38], every grid cell needs to belong to a cluster (potentially excluding only outliers) [5, 23], and the number of clusters is often required as an input parameter [10, 6]. Moreover, none of these algorithms accounts for the fact that clusters may overlap. In particular, the lack of spatial contiguity makes it hard to distinguish correlations due to spatial diffusion (or dispersion) phenomena from correlations that are due to remote (structural) interactions between distinct effects.
An approach of increasing popularity is to first construct a correlation-based network between individual grid cells, after pruning cross-correlations that are not statistically significant. Then, some of these methods analyze the (binary or weighted) cell-level network directly based on various centrality metrics, k-core decomposition, spectral analysis, etc. (e.g., [12, 39]), or they first apply a community detection algorithm (potentially able to detect overlapping communities, e.g., [1, 22, 26]) on the cell-level network and then analyze the resulting communities in terms of size, density, location, overlap, etc. (e.g., [25, 27, 35, 36]). A community, however, may group together two regions that are, first, not spatially contiguous, and second, different in terms of how they are connected to other regions; an instance of this issue is illustrated in Fig. 4-C in the context of climate data analysis.
The input data is generated from a spatial field sampled on an arbitrary grid G. This grid can be modeled as a planar graph Γ, where each vertex is a grid cell and each edge represents the spatial adjacency between two neighboring cells. A set of cells is spatially contiguous if it forms a connected subgraph of Γ.
The K-neighborhood of a cell i, denoted by N_K(i), includes i and the set of K nearest neighbors of i according to an appropriate spatial distance metric (e.g., geodesic distance for climate data, Euclidean distance for fMRI data). The K-neighborhood of a cell is always spatially contiguous.
Each grid cell i is associated with a time series x_i(t) of length T (t = 1, …, T). We assume that x_i(t) is sampled from a stationary signal, and denote by μ_i and σ_i² its sample mean and variance, respectively. The similarity between the activity of two cells i and j is measured with Pearson’s cross-correlation at zero lag,

r_{i,j} = (1 / (T σ_i σ_j)) Σ_{t=1…T} (x_i(t) − μ_i)(x_j(t) − μ_j).

Other similarity metrics could be used instead.
The local homogeneity at cell i is defined as the average pairwise cross-correlation between the cells in N_K(i). Similarly, we define the homogeneity of a set of cells A as the average pairwise cross-correlation between all distinct cells in A.
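The homogeneity metric can be sketched in a few lines of NumPy. This is an illustrative reading of the definition above, not the authors’ implementation; cell time series are assumed to be rows of a 2-D array:

```python
import numpy as np

def homogeneity(X, cells):
    """Average pairwise zero-lag Pearson cross-correlation over the
    distinct cells in `cells`; X has shape (num_cells, T)."""
    R = np.corrcoef(X[list(cells)])      # pairwise correlation matrix
    n = R.shape[0]
    # mean of the off-diagonal entries, i.e., over all distinct pairs
    return (R.sum() - n) / (n * (n - 1))
```

The local homogeneity of a cell is then simply `homogeneity(X, neighborhood)` evaluated on whatever K-nearest-neighbor set the grid defines for that cell.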
Intuitively, a domain is a spatially contiguous set of cells that somehow participate in the same dynamic effect or function. The exact mechanism that creates this effect or function varies across application domains; however, the key premise is that the functional relation between the cells of a domain results in highly correlated temporal activity (at zero lag), and thus high values of the homogeneity metric. A domain A must therefore have homogeneity of at least a given homogeneity threshold δ (the selection of δ is discussed later in this section).
If we accept this premise, it follows that we should be able to identify the “epicenter” or core of a domain A as a cell at which the local homogeneity is maximum across all cells in A (and certainly larger than δ). In general, the core of a domain may not be a unique cell.
More formally now, suppose that we know that cell i is in the core of a domain. The domain A(i) rooted at i has to satisfy the following three properties: it should include cell i, be spatially contiguous, and have homogeneity of at least δ.
A domain may not have sharp spatial boundaries; instead, it may gradually “fade” into other domains or regions dominated by noise. So, instead of searching for the discrete boundary of a domain, it is more reasonable to compute a domain as the largest possible set of cells that satisfies the previous three constraints.
Domain identification problem: Given the field on the spatial grid G, a core cell i, and the threshold δ, the domain A(i) is a maximum-sized set of cells that satisfies the three constraints of (4). In Appendix-1 we prove that the decision version of this problem is NP-Hard.
A given spatial field may include several domains. The number of identified domains depends on the threshold δ. Domains may be spatially overlapping; this is the case when the cells of a region are significantly correlated with two or more distinct domain cores. Also, some cells of the grid may not belong to any domain, meaning that their signal can be thought of as mostly noise (at least for the given value of δ). Decreasing δ will typically result in a larger number of detected domain cores. Further, as δ decreases, the spatial extent of each domain will typically increase, resulting in larger overlaps between nearby domains.
The threshold δ can simply be a user-specified parameter for the minimum required average cross-correlation within a domain. Another way is to calculate δ based on a statistical test for the significance of the observed zero-lag cross-correlations. A summary of this method is given next (it is described in more detail in Appendix-2). We start with a random sample of pairs of grid cells. We then apply the statistical test described in §3.2 (see Equations 6 and 7) to examine whether the zero-lag cross-correlation between each of these pairs passes a given significance level (a fixed default unless specified otherwise). δ is then set to the average of the statistically significant cross-correlations in that sample. The rationale is that the average pairwise cross-correlation among cells that belong to the same domain should be higher than a sample average of statistically significant cross-correlations between cells that can be anywhere on the grid.
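A minimal sketch of this δ-selection procedure, with one simplification: it uses the plain normal approximation for the significance of a correlation between i.i.d. samples, whereas the paper’s test (§3.2) corrects the variance for autocorrelation via Bartlett’s formula. The function name and defaults are illustrative:

```python
import numpy as np

def estimate_delta(X, n_pairs=1000, seed=0):
    """Average of statistically significant zero-lag cross-correlations
    over a random sample of cell pairs; X has shape (num_cells, T).
    Simplified: assumes i.i.d. samples, so r*sqrt(T) ~ N(0,1) under the null."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    z_crit = 2.576                      # two-sided 1% significance level
    significant = []
    for _ in range(n_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        r = np.corrcoef(X[i], X[j])[0, 1]
        if abs(r) * np.sqrt(T) > z_crit:
            significant.append(abs(r))
    return float(np.mean(significant)) if significant else None
```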
Given the NP-Hardness of the previous problem, we propose a greedy algorithm that runs in two phases. In the first phase, we identify a set of cells, referred to as seeds; each seed is a candidate core for a domain. In the second phase, each seed is initially considered a distinct domain. Then, an iterative and greedy algorithm attempts to identify the largest possible domains that satisfy the three constraints of (4) through a sequence of expansion and merging operations. The two phases are described next, while the complete pseudocode is presented in Appendix-3. The source code (including supporting documentation) will be available on GitHub before the final publication of this paper.
Seed selection. Recall that the core of a domain is a cell of maximum local homogeneity across all cells of that domain. So, one way to detect potential core cells, while the domains are still unknown, is to identify points at which the local homogeneity field is locally maximum. Specifically, cell i is a seed if its local homogeneity exceeds δ and is maximum within its K-neighborhood. Let S be the set of all identified seeds.
In general, a single domain may produce more than one seed because the local homogeneity field can be noisy, and so it may include multiple local maxima greater than δ. Further, additional seeds can appear in regions where domains overlap. Consequently, it is necessary to include a merging operation in which two or more seeds are eventually merged into the same domain.
Note that as K decreases, the local homogeneity field becomes noisier, and so we may detect more seeds in the same domain. On the other hand, larger values of the neighborhood size K can oversmooth the homogeneity field, removing seeds and potentially hiding entire domains. The latter is more likely if the spatial extent of a domain is smaller than K+1 cells. This observation implies that the spatial resolution of the given grid sets a lower bound on the size of the functional domains that can be detected.
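Seed selection thus reduces to finding local maxima of the homogeneity field that exceed δ. A sketch for a rectangular grid with a 4-cell neighborhood, assuming the field `H` is precomputed as a 2-D array:

```python
import numpy as np

def find_seeds(H, delta):
    """Cells where the local-homogeneity field H is a local maximum
    over its 4-neighborhood and exceeds the threshold delta."""
    rows, cols = H.shape
    seeds = []
    for r in range(rows):
        for c in range(cols):
            if H[r, c] <= delta:
                continue
            neighbors = [H[rr, cc]
                         for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                         if 0 <= rr < rows and 0 <= cc < cols]
            if all(H[r, c] >= v for v in neighbors):
                seeds.append((r, c))
    return seeds
```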
Domain-merging operation. Two candidate domains A and B can be merged if their union is spatially contiguous and if the homogeneity of their union is sufficiently high, i.e., at least δ. Whenever there is more than one pair of domains that can be merged, we greedily choose the pair with the maximum union homogeneity; this greedy choice makes the merged domain more likely to expand further.
The merging operation is performed initially on the set of seeds. It is also performed after each domain-expansion operation, whenever it is possible to do so.
Domain-expansion operation. A domain A is expanded by considering all cells that are adjacent to A, and selecting the cell that maximizes the homogeneity of the expanded domain; again, this greedy choice makes the expanded domain more likely to expand further.
The expansion operation is repeated in rounds. At the start of each round, domains are sorted in decreasing order of homogeneity. Then, each domain is expanded by one cell at a time, as previously described, in that order. After every expansion operation, we check whether one or more merging operations are possible. A round is complete when we have attempted to expand each domain once.
A domain can no longer expand if that would violate the homogeneity constraint or if there are no other adjacent cells that can be added into the domain. The domain identification algorithm terminates when no further expansion or merging operations are possible.
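To make the expansion operation concrete, here is a simplified single-domain version of the greedy growth loop (no merging and no round-robin over multiple domains); the row-major indexing scheme and helper names are ours:

```python
import numpy as np

def grow_domain(R, grid_shape, seed, delta):
    """Greedily expand one seed into a domain (simplified: single domain,
    no merging). R is the cell-pair zero-lag correlation matrix, with
    cells indexed row-major on a grid of shape grid_shape."""
    rows, cols = grid_shape

    def neighbors(cell):
        r, c = divmod(cell, cols)
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < rows and 0 <= cc < cols:
                yield rr * cols + cc

    def homog(cells):
        cl = list(cells)
        if len(cl) < 2:
            return 1.0
        sub = R[np.ix_(cl, cl)]
        n = len(cl)
        return (sub.sum() - n) / (n * (n - 1))

    domain = {seed}
    while True:
        frontier = {nb for cell in domain for nb in neighbors(cell)} - domain
        best, best_h = None, delta       # candidates must keep homogeneity >= delta
        for cand in frontier:
            h = homog(domain | {cand})
            if h >= best_h:              # greedy: pick the max-homogeneity cell
                best, best_h = cand, h
        if best is None:                 # no admissible cell left: stop
            break
        domain.add(best)
    return domain
```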
Given the identified domains, the next step is to construct a network between them. Different domains may have correlated activity because of direct or indirect interactions. We refer to this as a functional network to emphasize that the edges between domains are based on functional activity and correlations instead of structural or physical connections (“structural network”) or causal interactions (“effective network”).
We associate a domain-level signal with each domain. The definition of this signal depends on the specific application field. For instance, when we analyze climate anomaly time series, the domain-level signal is defined as the cumulative anomaly across all cells of that domain, where the contribution of each cell’s signal is weighted by the relative size of that cell (which depends on the cell’s latitude). For fMRI data, the domain-level signal is defined as the average BOLD signal across the cells of that domain.
Two different domains may be located at some distance from each other, and so they may be correlated at a non-zero lag τ. For this reason, we examine whether there is a significant cross-correlation between different domains over a range of lags (−τ_max ≤ τ ≤ τ_max). The sample cross-correlation between domains A and B at lag τ ≥ 0 can be estimated as:

r_{A,B}(τ) = (1 / ((T − τ) σ_A σ_B)) Σ_{t=1…T−τ} (x_A(t) − μ_A)(x_B(t + τ) − μ_B),

where μ and σ denote sample mean and standard deviation estimates, respectively. The selection of τ_max should be large enough to include the typical signal propagation delays in the underlying system, but at the same time it should be much lower than T. The cross-correlations for a pair of domains can be represented with a correlogram; an example based on climate sea-surface temperature data (see §5) is shown in Fig. 1.
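A direct NumPy rendering of the lagged sample cross-correlation and the resulting correlogram (a sketch consistent with the definition above; variable names are ours):

```python
import numpy as np

def lagged_corr(x, y, tau):
    """Sample cross-correlation between x(t) and y(t + tau), integer lag tau."""
    if tau >= 0:
        a, b = x[: len(x) - tau], y[tau:]
    else:
        a, b = x[-tau:], y[: len(y) + tau]
    return np.corrcoef(a, b)[0, 1]

def correlogram(x, y, tau_max):
    """Cross-correlation at every lag in [-tau_max, tau_max]."""
    return {tau: lagged_corr(x, y, tau) for tau in range(-tau_max, tau_max + 1)}
```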
The next step is to examine the statistical significance of the measured cross-correlation between two domains A and B. Two uncorrelated signals can still produce a considerable sample cross-correlation if they have a strong autocorrelation structure. This is captured by Bartlett’s formula , which gives an estimator for the variance of r_{A,B}(τ) (for a fixed value of τ). Under the null hypothesis that the domain-level signals of A and B are uncorrelated,

Var[r_{A,B}(τ)] ≈ (1 / (T − τ)) Σ_k ρ_A(k) ρ_B(k),

where ρ_A(k) is the autocorrelation of the time series of domain A at lag k.
Under the previous null hypothesis, the expected value of r_{A,B}(τ) is zero, and the following statistic approximately follows the standard normal distribution:

z = r_{A,B}(τ) / √Var[r_{A,B}(τ)].

The approximation is due to the fact that r_{A,B}(τ) is bounded between −1 and 1. So, we can now perform hypothesis testing for every pair of domains, computing a corresponding p-value based on z.
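The significance test can be sketched as follows; this uses a truncated, symmetric form of Bartlett’s variance approximation for a pair of uncorrelated autocorrelated series (the truncation length and the exact normalization are our simplifications):

```python
import numpy as np

def autocorr(x, max_k):
    """Sample autocorrelation of x at lags 0..max_k."""
    x = (x - x.mean()) / x.std()
    T = len(x)
    return np.array([1.0] + [np.dot(x[:T - k], x[k:]) / T for k in range(1, max_k + 1)])

def bartlett_z(x, y, r, max_k=50):
    """z-statistic for a sample cross-correlation r under the null that
    x and y are uncorrelated; variance via a truncated Bartlett sum."""
    T = min(len(x), len(y))
    ra, rb = autocorr(x, max_k), autocorr(y, max_k)
    var = (1.0 + 2.0 * np.dot(ra[1:], rb[1:])) / T   # sum over k = -max_k..max_k
    return r / np.sqrt(var)
```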
Given that there may be several domains, we need to control the number of false positive edges that may result from the multiple testing problem. We do so using the False Discovery Rate (FDR) method of Benjamini and Hochberg . Specifically, we need to perform one test for each potential edge and for each possible lag value, and compute the p-value for each test based on (7). Given a False Discovery Rate q (the expected value of the fraction of tests that are false positives), the Benjamini-Hochberg procedure ranks the p-values (p_(i) being the i’th lowest p-value) and only keeps the k tests (edges) with the lowest p-values, where k is the highest rank such that p_(k) ≤ q·k/m, with m the total number of tests. (This formula assumes that the p-values are independent, which is often not true in practice. The case of correlated p-values can be handled by replacing q with a smaller, corrected value, but that approach is very conservative, resulting in many false negatives .)
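The Benjamini-Hochberg step-up procedure itself is only a few lines; a self-contained sketch (the index bookkeeping is ours):

```python
def benjamini_hochberg(pvals, q):
    """Indices of the tests kept by the Benjamini-Hochberg procedure at FDR q:
    rank the p-values and keep the k lowest, where k is the highest rank
    such that p_(k) <= q * k / m."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])
```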
Lag inference and edge directionality. We infer the domain-level network as follows. Two domains are connected if there is at least one lag value at which the cross-correlation has passed the FDR test. The standard approach in lag inference is to consider the lag value τ* that maximizes the absolute cross-correlation; the corresponding correlation is denoted as r*. There are two problems with this approach. First, it is harder to examine the statistical significance of r* because it is the maximum of a set of random variables. (An analytic approach based on extreme-value statistics was proposed in , but it relies on several approximations; numerical approaches based on frequency-domain bootstrapping, on the other hand, are computationally expensive [21, 24, 31].) Second, it is often the case that there is a range of lag values that produce “almost maximum” cross-correlations, say within one standard deviation from each other. Focusing on τ* and ignoring the rest of the statistically significant and almost equal cross-correlations is not well justified.
Instead, we follow a more robust approach in which an edge of the domain-level network may be associated with a range of lag values. (In principle, it may be a set of lag values; in practice, though, significant correlations result for a continuous range of lag values.) The lag range that we associate with the edge between A and B, denoted R_{A,B}, is defined as the range of lags that produce significant cross-correlations within one standard deviation from r*. If R_{A,B} includes τ = 0, the edge is represented as undirected. If R_{A,B} includes only positive lags, the edge is directed from A to B, meaning that A’s signal precedes B’s by the given lag range; otherwise, we associate the opposite direction with that edge. We emphasize that the directionality of the edges does not imply causality; it only refers to temporal ordering.
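The lag-range rule can be sketched as follows, given the FDR-surviving correlations per lag and an estimate sigma of their standard deviation (the function shape and return convention are ours):

```python
def lag_range_and_direction(sig_corrs, sigma):
    """sig_corrs: {lag: r} for lags that passed the FDR test.
    Keep the lags whose |r| is within one standard deviation of the maximum.
    Direction: 0 = undirected (range includes lag 0), +1 = A leads B, -1 = B leads A."""
    r_star = max(abs(r) for r in sig_corrs.values())
    lags = sorted(tau for tau, r in sig_corrs.items() if abs(r) >= r_star - sigma)
    lo, hi = lags[0], lags[-1]
    if lo <= 0 <= hi:
        return (lo, hi), 0
    return (lo, hi), (1 if lo > 0 else -1)
```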
Edge weight and domain strength. How should we assign a weight to each domain-level edge? A common approach is to consider the (signed) magnitude of the maximum cross-correlation r*. This is reasonable if all domain signals have approximately the same signal power. In addition, we propose a new edge weight that is based on the covariance of the two domains, i.e., the cross-correlation scaled by the standard deviations of the two domain signals. The cross-correlation is computed at lag τ*, but we could use the average of all cross-correlations in R_{A,B} instead. The weight of an edge can be positive or negative depending on the sign of the corresponding cross-correlation.
Finally, the strength of a network node (domain) is defined as the sum of the absolute weights of all edges of that node (ignoring edge directionality).
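Under the covariance reading above, the edge weight and node strength are straightforward (a sketch; the dictionary-of-edges representation is ours):

```python
def edge_weight(r, std_a, std_b):
    """Covariance-based weight: cross-correlation scaled by both
    standard deviations; keeps the sign of the correlation."""
    return r * std_a * std_b

def node_strength(edge_weights, node):
    """Sum of absolute weights over all edges incident to `node`,
    ignoring edge directionality."""
    return sum(abs(w) for (a, b), w in edge_weights.items() if node in (a, b))
```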
The two objectives of this section are to illustrate how the δ-MAPS method works, and to contrast its results with those of commonly used methods such as PCA, ICA, spatial clustering, and overlapping community detection. We rely on synthetic data so that the ground truth is known.
We construct five domains on a 50×70 spatial grid. Each domain i is associated with a “mother” time series (i = 1, …, 5). To make the experiment more realistic in terms of autocorrelation structure and marginal distribution, each mother time series is a real fMRI time series of length T = 1200 (see §6). The five mother time series are uncorrelated (absolute cross-correlation below 0.05 at all lags), and they are normalized to zero mean and unit variance. To create correlations between domains (i.e., domain-level edges), we construct five new time series based on linear combinations of two or more mother time series. For instance, if the signal of one domain includes a lagged and scaled copy of another domain’s mother time series, the two domains become correlated at that lag, and the correlation increases with the scaling coefficient. The new time series are again normalized to zero mean and unit variance. We then scale the time series of each domain by a factor that controls the variance of that domain.
For simplicity, each domain is a circle. A domain has a “core region” with the same center and a smaller radius; the core is supposed to be the epicenter of that domain. Every point in the core has the same signal (before we add random noise). Outside the core, the signal attenuates with the distance from the center of the domain, decaying to zero at the domain’s periphery.
Finally, we superimpose white Gaussian noise of zero mean and unit variance on the entire grid. The parameters of the five synthetic domains are shown in Table I. The domains differ in terms of size and power (variance). The spatial extent of the domains is shown in Fig.2-A; domains 1 and 3 overlap with domain 2, while domains 4 and 5 also overlap to a smaller extent. Further, there is a strong and lagged anti-correlation between domains 1 and 3, a weaker positive correlation at zero lag between domains 4 and 5, and an even weaker positive correlation at zero lag between domains 3 and 5. The edges of the domain-level network are also shown in Fig.2-A.
The parameters of δ-MAPS are set as follows: the neighborhood consists of the K = 4 nearest cells (up-down-left-right), and δ = 0.55 (which corresponds to the chosen significance level). In the edge inference step, the FDR threshold is q = 10%.
Fig.2-B shows the local homogeneity field as well as the identified seeds (blue dots), while Fig.2-C shows the five discovered domains. As expected, we often identify more than one seed in the core of each domain due to noise; those seeds are eventually merged into the same domain. The local homogeneity field is weaker in domains 4 and 5 (due to their lower variance), but a seed is still detected in those domains. Seeds also appear in the two overlapping regions between domains (1,2) and (2,3), but those seeds gradually merge with one of the domains in which they appear.
Each inferred domain is a subset of the corresponding domain’s true expanse. The reason is that some cells close to the periphery of each domain have very low signal-to-noise ratio (recall that the signal decays to zero at the periphery, and so the average correlation between those cells and the rest of their domain does not exceed the threshold). More quantitatively, the inferred domains include about 80%-90% of the ground-truth cells in each domain. In non-overlapping regions this fraction is higher (85%-95% of the cells), while in overlapping regions it drops to 45%-80%. The extent of overlapping regions is harder to correctly identify, especially when a domain (e.g., domain 2) overlaps with a stronger domain (e.g., domains 1 or 3); the stronger domain effectively masks the signal of the weaker domain. The average pairwise cross-correlation of the cells in each domain varies between 55%-70% in the ground-truth data, while the inferred domains have slightly higher average cross-correlation (65%-75%) due to their smaller expanse.
Finally, Fig. 2-C shows the inferred domain-level network. δ-MAPS correctly identifies the three edges and their polarity (positive versus negative correlations). The lag ranges always include the correct value (e.g., the edge between domains 1 and 3 has a lag range of [14,15]). Also, the three edges are correctly ordered in terms of absolute cross-correlation magnitude: (1,3), followed by (4,5), followed by (3,5).
We apply EOF analysis using Matlab’s PCA toolbox. Fig. 2-E,F,G show the first three principal components, which collectively account for about 90% of the total variance. A first observation is that domains 4 and 5 are not even visible in these components; they only appear in the next two components, which account for about 5% of the variance each. This is because domains 4 and 5 are smaller and have lower variance. This is a general limitation of PCA: the variance of the analyzed field can be dominated by a small number of “modes of variability”, completely masking smaller or weaker regions of interest and their connections. Second, the first three components do not provide consistent evidence that domains 1 and 3 are strongly anti-correlated; this is due to their lagged correlation, which is missed by PCA. Third, the first component, which accounts for 40% of the total variance, can be misinterpreted to imply that domain 2 is somehow positively correlated with domains 1 and 3, even though it is actually generated by an uncorrelated signal. This is due to the overlap of domain 2 with domains 1 and 3.
We apply ICA on the synthetic data using Matlab’s FastICA toolbox. To help ICA perform better, we specified the right number of independent components, which is two (domains 1,3,4,5 are indirectly correlated – domain 2 is not correlated with any other). The two independent components are shown in Fig. 2-H,I. Note that only a rough “shadow” of each domain is visible. Domains 1 and 3 appear in different colors, providing a hint that they are anti-correlated, while domains 3 and 5 appear in the same color because they are positively correlated. Overall, however, the components are quite noisy and it would be hard in practice to discover the functional structure of the underlying system if we did not know the ground-truth. The results are even harder to interpret when we request a larger number of components.
We apply the most well-known clustering method, k-means, on our synthetic data. As commonly done with correlation-based clustering, the distance between two cells i and j is determined by the maximum absolute cross-correlation across all considered lags, d(i,j) = 1 − max_τ |r_{i,j}(τ)|. Fig. 2-J,K shows the resulting clusters for k = 5 (the number of synthetic domains) and k = 6, respectively. For k = 5, domains 1 and 3 form a single cluster because of their strong anti-correlation; the same happens with domains 4 and 5. Further, two of the five clusters (green and brown) cover just noise. The situation changes completely when we request k = 6 clusters. In that case, the overlapping regions in domain 2 form a single cluster, while domains 1 and 3 are separated into different clusters. Another clustering algorithm, which results in spatially contiguous clusters , is illustrated in §5 in the context of climate data analysis (see Fig. 4-D).
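The correlation-based dissimilarity used for this clustering comparison (one minus the maximum absolute lagged cross-correlation) can be sketched as:

```python
import numpy as np

def corr_distance(x, y, tau_max):
    """1 - max |cross-correlation| over integer lags in [-tau_max, tau_max]."""
    best = 0.0
    for tau in range(-tau_max, tau_max + 1):
        if tau >= 0:
            a, b = x[: len(x) - tau], y[tau:]
        else:
            a, b = x[-tau:], y[: len(y) + tau]
        best = max(best, abs(np.corrcoef(a, b)[0, 1]))
    return 1.0 - best
```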
We apply a state-of-the-art overlapping community detection method, referred to as OSLOM , with the default parameter values. The input to OSLOM is a positively weighted graph: each vertex is a grid cell, and the weight of an edge between two vertices corresponds to the maximum absolute cross-correlation across all lags of interest. Absolute correlations less than 30% are considered insignificant, and the corresponding edges are pruned. (We have experimented with other pruning thresholds between 20%-50%, and the results are very similar at the first two hierarchy levels.) Like most community detection methods, OSLOM does not distinguish between positive and negative correlations. OSLOM provides a hierarchy of communities. When applied to our synthetic data, the first level of the hierarchy (not shown) simply groups domains 1, 2, 3 together in one community (even though domain 2 is uncorrelated with domains 1 and 3), and domains 4, 5 in another community. The connection between domains 3 and 5 is missed. The second level of the hierarchy is shown in Fig. 2-L. Overall, OSLOM does a better job than PCA/ICA/clustering in detecting the spatial extent of each domain. A small overlap between domains (1,2) and (2,3) is discovered, but to a smaller extent than with δ-MAPS. However, a community in OSLOM is not constrained to be spatially contiguous. This is the reason we see some black dots in regions 4 and 5; these are non-contiguous overlaps between the communities that correspond to these two domains.
We first apply δ-MAPS in the context of climate science. Climate scientists are interested in teleconnections between different regions, and they often rely on EOF analysis to uncover them . Here, we analyze the monthly Sea-Surface Temperature (SST) field from the HadISST dataset , covering 50 years (1956-2005), and we focus on a latitudinal range that avoids sea-ice covered regions. Following standard practice, we pre-process the time series to form anomalies, i.e., we remove the seasonal cycle, remove any long-term trend at each grid point (using the Theil-Sen estimator), and transform the signal to zero mean at each grid point.
δ-MAPS is applied as follows. First, we set the local neighborhood to the K = 4 nearest cells, so that we can identify the smallest possible domains at the given spatial resolution. Second, the homogeneity threshold is set to δ = 0.37 (which corresponds to the chosen significance level). In the edge inference stage, the lag range is τ_max = 12 months (a reasonable value for large-scale changes in atmospheric wave patterns), and the FDR threshold is set to q = 3% (we identify about 30 edges, and so we expect no more than one false positive).
Fig. 3-A shows the identified domains (the color code will be explained shortly). The spatial dimensionality has been reduced from about 6000 grid cells to 18 domains. 65% of the sea-covered cells belong to at least one domain; the overlapping regions are shown in black and they cover 2% of the grid cells that belong to a domain. The largest domain corresponds to the El Niño Southern Oscillation (ENSO), which is also the most important in terms of node strength (see Fig. 3-B). Other strong nodes are part of the “horseshoe pattern” surrounding ENSO, the Indian Ocean, and the sub-tropical Atlantic. The strengths of the edges associated with ENSO are shown in Fig. 3-C. These findings are consistent with known facts in climate science regarding ENSO and its positive correlation with the Indian Ocean and north tropical Atlantic, and its negative correlations with the regions that surround it in the Pacific (the horseshoe pattern).
Fig. 3-D shows the inferred domain-level network. The color code represents the (signed) cross-correlation of each edge. The lag range associated with each edge is shown in Fig. 3-E; recall that some edges are not directed because their lag range includes lag 0. The network consists of five weakly-connected components. If we analyze the largest component (which includes ENSO) as a signed network (i.e., some edges are positive and some negative), we see that it is structurally balanced. A graph is structurally balanced if it does not contain cycles with an odd number of negative edges (for instance, if two friends are both enemies with a third person, they form a balanced social triangle). A structurally balanced network can be partitioned into a “dipole”, so that positive edges appear only within each pole and negative edges appear only between the two poles. In Fig. 3-A, the nodes of these two poles are colored blue and green (the smaller disconnected components are shown in other colors).
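Testing structural balance and recovering the dipole amounts to two-coloring the signed graph: positive edges force the two endpoints into the same pole, negative edges into opposite poles. A minimal sketch (function and variable names are our own):

```python
# Sketch: check structural balance of a signed network via BFS 2-coloring.
# A conflict during coloring corresponds to a cycle with an odd number
# of negative edges, i.e., an unbalanced graph.
from collections import deque

def balanced_partition(nodes, signed_edges):
    """Return (pole_a, pole_b) if the signed graph is balanced, else None.

    signed_edges: dict mapping (u, v) -> +1 or -1.
    """
    adj = {n: [] for n in nodes}
    for (u, v), s in signed_edges.items():
        adj[u].append((v, s))
        adj[v].append((u, s))
    color = {}
    for start in nodes:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                want = color[u] if s > 0 else 1 - color[u]
                if v not in color:
                    color[v] = want
                    queue.append(v)
                elif color[v] != want:
                    return None  # odd negative cycle found
    pole_a = {n for n in nodes if color[n] == 0}
    return pole_a, set(nodes) - pole_a
```

A triangle with one positive and two negative edges is balanced (the social-triangle example above), while an all-negative triangle is not.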
Focusing on the lag range of each edge, one domain seems to play a unique role, as it temporally precedes all other domains in the inferred network. Specifically, its activity precedes that of several other domains by about 5-10 months. The lead of south tropical Atlantic SSTs on ENSO has recently received significant attention in climate science. Our results suggest that SST anomalies in this domain may impact a large portion of the climate system.
Switching to lag inference, we say that a triangle is lag-consistent if there is at least one value in the lag range associated with each edge that places the three nodes at consistent temporal distances with respect to each other. For instance, the first triangle of Fig. 3-F is lag-consistent if one of its edges has a lag of 8 months and another a lag of -2 months (meaning that its direction would be reversed); several other lag values would also make this triangle lag-consistent. We have verified the lag-consistency of every triangle in the climate network. One exception is the triangle shown at the bottom of Fig. 3-F. However, the large lag of one of its edges can be explained through an adjacent triangle, which is lag-consistent. We emphasize that the temporal ordering that results from these lag relations should not be misinterpreted as causality; we expect that several of the edges we identify are due only to indirect correlations, not associated with a causal interaction between the corresponding two nodes.
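One way to formalize this check, under the simplifying assumption (our own formulation) that each edge carries an integer lag interval and that consistency means the lags along two edges of the triangle sum to a lag inside the third edge's range:

```python
# Sketch: a triangle (u, v, w) is lag-consistent if we can pick one lag
# from each edge's inferred range such that lag(u->v) + lag(v->w) falls
# inside the range of lag(u->w).
from itertools import product

def lag_consistent(range_uv, range_vw, range_uw):
    """Each range is an inclusive (lo, hi) pair of integer lags."""
    for l1, l2 in product(range(range_uv[0], range_uv[1] + 1),
                          range(range_vw[0], range_vw[1] + 1)):
        if range_uw[0] <= l1 + l2 <= range_uw[1]:
            return True
    return False
```

For example, with ranges [6, 10], [-3, -1] and [5, 8], picking lags 8 and -2 yields 6, which lies in [5, 8], so the triangle is lag-consistent.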
For comparison purposes, Fig. 4 shows the results of EOF analysis, community detection, and spatial clustering on the same dataset. The first EOF explains only about 19% of the variance, implying that the SST field is too complex to be understood with only one spatial component. On the other hand, the joint interpretation of multiple EOF components is problematic due to their orthogonality. The anti-correlation between ENSO and the horseshoe-pattern regions is well captured by the first component, but several other important connections, such as the negative and lagged relation between the south subtropical Atlantic and ENSO, are missed.
Fig. 4-C shows the results of the overlapping community detection method OSLOM. Following prior work, the input to OSLOM is a correlation-based cell-level network: correlations below the pruning threshold are ignored, and the weight of each edge is set to the maximum absolute correlation between the corresponding two cells across all considered lags. One of the communities OSLOM identifies is not spatially contiguous; it covers ENSO, the Indian Ocean, a region in the north tropical Atlantic, and a region in the south Pacific. This is a general problem with community detection methods: they cannot distinguish high correlations due to a remote connection from correlations due to spatial proximity. In the context of climate, the former may be due to atmospheric waves or large-scale ocean currents, while the latter may be due to local circulations.
Finally, Fig. 4-D shows the results of a spatial clustering method, with the same homogeneity threshold we use in δ-MAPS. That method ensures that every cluster (referred to as an “area”) is spatially contiguous, but it also requires that there is no overlap between areas and it attempts to assign each grid cell to an area. Consequently, it results in more areas than the number of domains, some of which are just artifacts of the spatial parcellation process. Further, the spatial expanse of an area constrains the computation of subsequent areas because no overlaps are allowed.
Functional magnetic resonance imaging (fMRI) measures fluctuations of the blood oxygenation level dependent (BOLD) signal in the brain. The dynamics of the BOLD signal in gray matter are generally correlated with the level of neural activity. The resulting spatio-temporal field is often analyzed using ICA, clustering or network-based methods to infer brain functional networks .
Here, we illustrate δ-MAPS on cortical resting-state fMRI data from a single subject (healthy young male adult, subject-ID: 122620) from the WU-Minn Human Connectome Project (HCP). The data acquisition parameters are described in the HCP documentation. The spatial resolution is 2mm in each voxel dimension. The pre-processing of fMRI data requires several steps; we use the “fix-extended” HCP minimal processing pipeline, which includes head motion correction, registration to a structural image, masking of non-brain voxels, etc. MELODIC ICA and FIX are used to remove non-neuronal artifacts (e.g., physiological noise due to cardiac and respiratory cycles). We also perform bandpass filtering in the range 0.01-0.08Hz, as commonly done in resting-state fMRI.
In this paper, we analyze two scanning runs of the same subject (“scan-1” and “scan-2”). Each scan lasts about 14 minutes and results in a time series of length 1200 (repetition time TR=720 msec). We emphasize that major differences across scanning sessions of the same subject are common in fMRI; studies of functional brain networks often report only group-level averages. The entire cortical volume is projected to a surface mesh (Conte69 32K), resulting in about 65K gray-ordinate points (as opposed to volumetric voxels). Each point of this mesh is adjacent to six other points; for this reason we set the local neighborhood size to 6. The homogeneity threshold is set to 0.37 (the value that corresponds to the chosen significance level). The maximum lag range is set to 3 samples, i.e., about 2.2 seconds, and the FDR threshold is set so that we expect one out of 10K edges to be a false positive. The signal of a domain is defined as the average across all voxels in that domain.
The application of δ-MAPS results in a network with about 850 domains in scan-1 (1120 domains in scan-2). 80% of the domains are smaller than 30-40 voxels (depending on the scan) and 5% of the domains are larger than 250 voxels. The number of edges is 4285 in scan-1 (4200 in scan-2). The absolute value of the cross-correlation associated with each edge is typically larger than 0.5. The fraction of negative edge correlations is about 5% in scan-1 and 20% in scan-2, suggesting that the polarity of some network edges may be time-varying. The lag that corresponds to the maximum cross-correlation is 0 for 70% of the edges and 1 in almost all other cases. 13% of the edges are directed, meaning that lag 0 does not produce a significant correlation for that pair of domains. There is a positive correlation between the degree of a domain and its physical size (the correlation coefficient between degree and size is 0.70 for scan-1 and 0.66 for scan-2). Further, the network is assortative, meaning that domains tend to connect to other domains of similar degree (assortativity coefficient about 0.7 in both scans).
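For reference, a common way to compute a degree assortativity coefficient is as the Pearson correlation between the degrees at the two endpoints of every edge; a minimal sketch under that definition (function name is our own):

```python
# Sketch: degree assortativity as the Pearson correlation between the
# degrees at the two endpoints of each edge (each edge counted in both
# directions so the measure is symmetric).
import math

def assortativity(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = math.sqrt(sum((a - mx) ** 2 for a in xs) *
                    sum((b - my) ** 2 for b in ys))
    return num / den
```

A star graph, where a hub of high degree connects only to degree-1 leaves, is the canonical disassortative case and yields a coefficient of -1.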
An important question is whether the δ-MAPS networks are consistent with what neuroscientists currently know about resting-state activity in the brain. During rest, certain cortical regions that are collectively referred to as the Default-Mode Network (DMN) are persistently active across age and gender. Other known resting-state networks are the occipital network (part of the visual system) and the motor/somatosensory network (associated with planning and execution of voluntary body motion). In the terminology of network theory, the previous “networks” would be referred to as communities within the larger functional brain network. To identify communities in the δ-MAPS network, we applied OSLOM. OSLOM identifies two hierarchical levels in both scans. The first level consists of highly overlapping communities that cover almost the entire cortex. The second hierarchical level is more interesting, resulting in eight communities for scan-1 (nine for scan-2). Fig. 5 shows the three communities (C.1, C.2, C.3) for each scan that have the highest resemblance to the three previously mentioned resting-state networks: C.1 corresponds to the DMN, C.2 to the occipital resting-state network, and C.3 to the motor/somatosensory network. C.1 is quite similar across the two scanning sessions and clearly captures the DMN. In C.2, the extent of the network is smaller in scan-2, which is not too surprising given the known inter-scan variability of resting-state fMRI. C.3 is also quite similar across the two scans and consistent with the motor/somatosensory network.
To further investigate the structure of those higher-degree (and typically larger) domains, we perform k-core decomposition: a process that starts with the original network (k=0) and iteratively removes all nodes of degree k or less in each round, so that after the extraction of the k'th core all remaining nodes have degree larger than k. The density of the remaining network shows a sudden increase, by a factor of two, after the extraction of 14 cores from the scan-1 network (16 cores in scan-2). This suggests that the network includes a densely inter-connected backbone, also known as a “rich club”. The size of this backbone is small relative to the entire network: 130 domains in scan-1 (90 in scan-2). Similar observations about the resting-state brain, but using voxel-level network analysis methods, have been previously reported. Fig. 6 shows the location of the backbone domains for each hemisphere and for each scan. The regions that are usually associated with the DMN dominate the backbone in both sessions. Interestingly though, the backbone of scan-1 includes the regions of the motor/somatosensory network, while the backbone of scan-2 is missing those regions. One possible explanation for this discrepancy is that the subject was more relaxed during scan-2, not exerting the mental effort to stay still.
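The peeling process described above can be sketched as follows (a simplified illustration; names are our own):

```python
# Sketch: extract the k-core by repeatedly deleting every node whose
# degree is k or less, until all remaining nodes have degree larger
# than k (matching the peeling process described in the text).
def k_core(adj, k):
    """adj: dict node -> set of neighbours. Returns the nodes of the k-core."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        doomed = [u for u, vs in adj.items() if len(vs) <= k]
        for u in doomed:
            for v in adj.pop(u):
                adj[v].discard(u)
            changed = True
    return set(adj)
```

For a 4-clique with one pendant node attached, peeling with k=1 removes only the pendant node, while k=3 dissolves the whole graph.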
δ-MAPS results in a correlation-based functional network. A next step would be to infer a causal, or effective, network leveraging the framework of probabilistic graphical models. Instead of attempting to learn the graph structure from raw data, one could use the δ-MAPS network as the underlying structure and then apply conditional independence tests to remove non-causal edges. Additionally, in many real systems the underlying temporal dynamics are non-stationary. Instead of relying on sliding-window approaches, which are often sensitive to the duration of the window, an important extension of δ-MAPS will be to construct dynamic networks by automatically detecting the time periods during which the network remains constant. It would also be interesting to combine the inferred functional network with a structural network that shows the physical connectivity between the identified domains. This is not hard in the case of communication networks, and it is becoming feasible for brain networks as well using diffusion-weighted MRI. Projecting the observed dynamics on the underlying structure can help characterize the actual function and delay of each system component.
R. C. Craddock, G. A. James, P. E. Holtzheimer, X. P. Hu, and H. S. Mayberg, “A whole brain fMRI atlas generated via spatially constrained spectral clustering,” Hum. Brain Mapp., 33(8):1914-1928, 2012.
A. Hyvärinen, “Fast and robust fixed-point algorithms for independent component analysis,” IEEE Trans. Neural Netw., 10(3):626-634, 1999.
We are given a spatio-temporal field on a grid G, a pairwise similarity metric w between pairs of grid cells, and a threshold δ. Starting from a grid cell r, the goal is to find the largest subset of grid cells that includes r, forms a single spatially connected component, and whose average pairwise similarity exceeds the threshold δ. The spatial grid can be represented as a planar graph in which each grid cell is a node and edges connect adjacent grid cells. Formally, we have the following graph optimization problem:
Definition 1. Rooted Largest Connected δ-Dense Subgraph Problem (rooted LCDS). Given a regular (grid) graph G = (V, E), a symmetric weight function w : V × V → ℝ, a threshold δ, and a root node r ∈ V, find a maximum cardinality set of nodes S ⊆ V such that r ∈ S, the induced subgraph G[S] is connected, and the average pairwise weight of S is at least δ, i.e., (2 / (|S|(|S|-1))) Σ_{u,v ∈ S, u ≠ v} w(u, v) ≥ δ.
To show that rooted LCDS is NP-hard, we first consider a variant of the problem in which the induced subgraph has to satisfy two conditions: it has to be a connected subgraph of G, and the average pairwise weight of its nodes has to exceed δ. More formally:
Definition 2. Largest Connected δ-Dense Subgraph Problem (LCDS). Given a regular (grid) graph G = (V, E), a symmetric weight function w : V × V → ℝ, and a threshold δ, find a maximum cardinality set of nodes S ⊆ V such that the induced subgraph G[S] is connected and the average pairwise weight of S is at least δ.
To show that LCDS is NP-hard we use a reduction from the Densest Connected k-Subgraph problem.
Definition 3. Densest Connected k-Subgraph Problem (DCkS). Decision version: Given a graph G = (V, E) and positive integers k and m, does there exist a connected induced subgraph on k vertices with at least m edges?
DCkS (also referred to as the connected h-clustering problem) has been shown to be NP-complete on general graphs, as well as on planar graphs. DCkS is polynomial-time solvable for subclasses of planar graphs of bounded tree width.
Grid graphs, which are the type of graphs that arise in our application domains, are planar bipartite graphs with unbounded tree width, and no positive results are known for this subclass of planar graphs.
The work on approximating densest/heaviest connected k-subgraphs is very limited. It is easy to reduce DCkS to an instance of the decision version of the LCDS problem, and hence the latter is also NP-complete even on planar graphs.
LEMMA 1. The decision version of the LCDS problem is NP-complete on planar graphs.
PROOF. This can be shown via a reduction from DCkS. We reduce an instance of DCkS to an LCDS instance by using the same graph G, setting w(u, v) = 1 if and only if the pair of nodes u, v is connected by an edge (and 0 otherwise), and choosing δ as a function of k and m so that the average-weight constraint encodes the requirement of at least m edges among k vertices.
Now it is easy to show that rooted LCDS is also NP-hard. If a polynomial-time algorithm existed for rooted LCDS, then by calling it |V| times, once for each node of the graph, we would obtain in polynomial time a solution to the NP-hard LCDS.
The threshold δ intuitively determines the minimum degree of homogeneity that the underlying field must have within each domain. The higher the threshold, the higher the required homogeneity and, therefore, the smaller the size of the identified domains.
To select δ, we propose the following heuristic. We start with a random sample of pairs of grid cells and for each pair (i, j) we compute the Pearson correlation r_{i,j} at zero lag. To assess the significance of each correlation we use Bartlett's formula. Under the null hypothesis of no coupling, r_{i,j} should have zero mean, and a reasonable estimate of its variance is given by Var(r_{i,j}) ≈ (1/T) Σ_τ ρ_i(τ) ρ_j(τ), where ρ_i(τ) is the autocorrelation of the time series of grid cell i at lag τ and T is the length of the time series. The scaled values r_{i,j} / sqrt(Var(r_{i,j})) should approximately follow a standard normal distribution. To assess the significance of each correlation we perform a one-sided z-test at a given significance level α.
The threshold δ is set as the average of all significant correlations. A domain is a set of spatially contiguous grid cells; thus we require the mean pairwise correlation of the cells belonging to the same domain to be higher than the mean pairwise correlation of randomly picked pairs of grid cells. δ depends on the choice of the significance level α, on the autocorrelation structure of the underlying time series, and on the correlation distribution of the field.
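The heuristic above can be sketched end-to-end as follows (a simplified reading: the truncation lag of Bartlett's sum, the sample estimators, and the function names are our own choices, and the sample of cell pairs is passed in explicitly rather than drawn at random):

```python
# Sketch of the threshold-selection heuristic: for each sampled pair of
# cell time series, test the zero-lag correlation with a one-sided
# z-test using Bartlett's variance estimate; delta is the average of
# the correlations deemed significant.
import math

def autocorr(x, lag):
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / var

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def estimate_delta(sampled_pairs, z_crit=1.645, max_lag=10):
    """Average of the zero-lag correlations that pass the Bartlett z-test."""
    significant = []
    for x, y in sampled_pairs:
        T = len(x)
        var = sum(autocorr(x, abs(t)) * autocorr(y, abs(t))
                  for t in range(-max_lag, max_lag + 1)) / T
        r = pearson(x, y)
        if r / math.sqrt(var) > z_crit:  # one-sided z-test
            significant.append(r)
    return sum(significant) / len(significant) if significant else None
```

In the toy example below, a perfectly self-correlated pair passes the test while a near-zero correlation between unrelated series does not, so δ is driven only by the significant pair.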