Localized Spatio-Temporal Models
We address the problem of predicting spatio-temporal processes with temporal patterns that vary across spatial regions, when data is obtained as a stream. That is, when the training dataset is augmented sequentially. Specifically, we develop a localized spatio-temporal covariance model of the process that can capture spatially varying temporal periodicities in the data. We then apply a covariance-fitting methodology to learn the model parameters which yields a predictor that can be updated sequentially with each new data point. The proposed method is evaluated using both synthetic and real climate data which demonstrate its ability to accurately predict data missing in spatial regions over time.
Many real-world processes of interest, ranging from climate variables to brain signals, are spatio-temporal in nature, cf. Cressie & Wikle (2011). That is, they can be described as a random quantity that varies over some fixed spatial and temporal domain. Suppose we obtain training points from a real-valued spatio-temporal process,
where denotes the quantity of interest observed at the training point, with spatial coordinate and time . For notational convenience, let denote an unobserved test point in space-time where is unknown. Then a common goal is to predict in unobserved space-time regions using . Specifically, certain spatial regions may have limited data coverage over extended periods of time, as illustrated in Figure 1.
In real-world applications, need not be gathered in a single batch but obtained in parts over time from various sensors, stations, satellites, etc. That is, the dataset is augmented sequentially, i.e., . In these streaming data scenarios, we are interested in continuous refinement of the prediction of at as new data is augmented into .
The unknown data-generating process is often assumed to belong to a class of data models indexed by a parameter . Each model in the class yields a predictor of at test point . A specific set of model parameters is learned using . Examples of commonly used model classes include Gaussian Processes (GP) Rasmussen & Williams (2006), spatio-temporal random effects models Cressie et al. (2010), dynamic factor analysis models Lopes et al. (2008); Fox & Dunson (2015), and spatial random effects models extended to incorporate time as an additional dimension Zammit-Mangion & Cressie (2017) (cf. the related work section below). For many spatio-temporal applications, the model class should be capable of expressing temporal patterns that change across different spatial regions. Moreover, for streaming data scenarios, the learned parameter and the resulting predictor should be updated in a sequential manner.
Our contribution in this paper is two-fold:
we develop a non-stationary, localized covariance model capable of capturing temporal patterns that change across space, as illustrated in Figure 2 below.
we show how to sequentially learn the covariance model parameters and update the predictor from streaming spatio-temporal data, with a runtime that is linear in .
In Section 2, we relate our work to existing approaches, and in Section 3 we introduce a commonly used model class. In Section 4 we develop a localized spatio-temporal covariance model to be used in conjunction with a covariance-fitting learning approach. Finally, the proposed method is evaluated using synthetic and real climate data in Sections 5 and 6, respectively.
Notation:
 stacks both elements into a single column vector. , , and denote the Kronecker product, Kronecker delta function, weighted -norm and Moore-Penrose inverse, respectively. Finally, the sample mean is denoted by .

A popular model class is the family of GPs, specified by a mean and covariance function Rasmussen & Williams (2006). This approach is computationally prohibitive in its basic form, since both learning the model parameters and implementing the predictor require a runtime on the order of , where is typically large in spatio-temporal applications. The predictor implementation can be approximated using various techniques. One popular approach is to approximate the training data using inducing points, which reduces the runtime to Quiñonero-Candela & Rasmussen (2005); Bijl et al. (2015). Moreover, by assuming Kronecker covariance functions, it is possible to obtain even shorter runtimes by exploiting the Kronecker structure of the GP covariance matrix Saatçi (2012). If the model class is restricted to stationary covariance functions, the runtimes can be reduced further, cf. Saatçi (2012); Wilson et al. (2014). In the space-time domain, such models are also equivalent to dynamical system models, so that can be approximated using a basis expansion and implemented by a Kalman smoother Särkkä et al. (2013). In the above cases, however, and are not updated jointly when obtaining streaming data.
The restriction to stationary covariance models is, moreover, not always adequate to capture temporal patterns that differ across spatial regions. This modeling limitation is addressed by Cressie et al. (2010), where a discrete-time model class is partially specified using a spatial basis function expansion with time-varying expansion coefficients. These are modeled as a first-order vector auto-regressive process. The coefficients thus determine a spatial pattern of the process that evolves at each discrete time instant. This model class can capture patterns localized to specific regions in space, unlike stationary covariance models. The predictor can be viewed as a spatial fixed-rank kriging method that is updated via a Kalman filter and is thus applicable to streaming data (cf. Cressie & Johannesson (2008)). The model parameter, however, is learned using a moment-fitting approach that operates on batch rather than streaming datasets. Other work using dynamic factor analysis models Lopes et al. (2008); Fox & Dunson (2015) similarly allows for time-varying coefficients but with more flexible, data-adaptive bases. However, these are implemented using Markov chain Monte Carlo methods, which are computationally prohibitive for the scenarios considered herein.
Moreover, a first-order auto-regressive structure may not accurately capture more complex temporal patterns observed in real spatio-temporal processes. The approach taken by Zammit-Mangion & Cressie (2017) circumvents this limitation by using basis functions that are localized in both space and time. Time locality cannot, however, capture the periodic patterns or trends necessary for interpolation over longer periods. The model parameters are learned using an expectation-maximization method, which is not readily applicable to streaming data scenarios.
We begin by defining the data vector obtained from . For the test point , we consider the unbiased predictor of as a linear combination of the data Stein (2012):
(1)
where is a vector of weights which naturally depend on the test point . The weight vector is defined as the minimizer of the conditional mean square prediction error. That is,
(2)
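Since the inline symbols of (1) and (2) did not survive extraction, the following is a hedged reconstruction under assumed standard notation: a stacked vector of training observations, an unknown test value, and a weight vector applied to the data.

```latex
% Assumed notation: \mathbf{y} stacked training data, y_* unknown test value,
% \boldsymbol{\lambda} the weight vector discussed in the text.
\begin{gather}
\widehat{y}_{*} = \boldsymbol{\lambda}^{\top}\mathbf{y}, \tag{1}\\
\boldsymbol{\lambda}_{*} = \operatorname*{arg\,min}_{\boldsymbol{\lambda}}
  \mathbb{E}\!\left[\left(y_{*} - \boldsymbol{\lambda}^{\top}\mathbf{y}\right)^{2}
  \,\middle|\, \mathbf{y}\right]. \tag{2}
\end{gather}
```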
Since the conditional error is determined by the unknown distribution , we specify a class of data-generating models, using only the mean and covariance Cressie & Wikle (2011):
(3)
The function captures the expected trend of the entire spatio-temporal process , and when there is no such general trend we set . The function captures the smoothness of the process in space-time and is of dimension . The parameter matrix is diagonal and specifies the relevance of each dimension of similar to the way in which automatic relevance determination is sometimes used within the GP Tipping (2001); Faul & Tipping (2002). Taken together, (3) specifies a class of models, each of which is indexed by the parameters . The covariance parameters , which we collectively
denote by for notational convenience, determine the spatio-temporal covariance structure which depends on the function . In the next section, we will specify to develop a suitable covariance model to capture local spatial and periodic temporal patterns.
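Because the symbols in (3) were also lost in extraction, here is a hedged sketch of such a second-order (mean–covariance) model class, with assumed names: u for the trend regressors, θ for the mean parameters, φ for the basis function described above, Λ for the diagonal relevance matrix, and σ² for the noise variance.

```latex
% Hedged sketch of (3) under assumed notation; not the paper's exact symbols.
\begin{gather}
\mathbb{E}\!\left[y(\mathbf{s},t)\right]
  = \mathbf{u}(\mathbf{s},t)^{\top}\boldsymbol{\theta}, \tag{3a}\\
\operatorname{Cov}\!\left[y(\mathbf{s},t),\,y(\mathbf{s}',t')\right]
  = \boldsymbol{\phi}(\mathbf{s},t)^{\top}\,\boldsymbol{\Lambda}\,
    \boldsymbol{\phi}(\mathbf{s}',t')
  + \sigma^{2}\,\delta\!\left[(\mathbf{s},t)-(\mathbf{s}',t')\right]. \tag{3b}
\end{gather}
```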
For a given model in the class, the optimal weights (2) are given in closed form Stein (2012) as
where the subindex highlights the model parameter dependence. The quantities in are determined by the regressor matrix
and the following covariance matrix
(5)
with being an oblique projector onto .
The optimal weights are invariant to the mean parameters and to uniform scaling of the covariance parameters . By learning up to an arbitrary scale factor, the predictor (1) is given by the linear combiner weights . If we assume that the process is Gaussian, the model can be learned using the maximum likelihood framework. However, this yields neither a convex problem nor one that is readily solved sequentially as is augmented. In the next section, we apply a convex covariance-fitting framework to learn the spatio-temporal model from streaming data.
Below we specify the function in (3) such that the spatio-temporal covariance structure can express local spatial patterns with varying temporal periodicities as illustrated in Figure 2. Subsequently, we apply a covariance-fitting methodology for learning the model parameters such that the predictor (1) can be updated sequentially for each new observation Zachariah et al. (2017).
The function varies over a space-time domain and its elements can be thought of as basis functions. It is formulated as a Kronecker product of temporal and spatial bases,
(6)
for compactness.
We begin by specifying the spatial function as
(7)
where the basis vector for the spatial dimension,
(8)
is composed of localized components with a finite support . For notational simplicity, we consider and to be the same for each dimension . Based on their computational attractiveness and local approximation properties, we use a cubic spline basis Rasmussen & Williams (2006); Wasserman (2006). Then (8) is given by (4), where determines the location of a component. Figure 2(a) illustrates the components as a function of the spatial dimension. We place the centers of the components uniformly across the spatial dimensions.
Using allows for covariance structures that are localized in space in such a way that neighbouring points have a nonnegative correlation and points far from each other have no correlation, as determined by the support size . Hence, for a given , the resulting covariance structure can capture local spatial patterns of a certain scale, and it can easily be extended to cover multiple scales by replacing (6) with, for example,
that accommodates two different support sizes and . The number of basis functions is chosen such that adjacent localized components have overlapping support, to cater for points in between them. This requirement is fulfilled by choosing , where is the range of the spatial dimension. For example, when , the adjacent components have percent overlap. The maximum value of is limited by the number of training points and the computational resources that are available.
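As a concrete illustration of such localized spatial components, the sketch below builds a one-dimensional cubic B-spline basis with finite support and uniformly placed centers. The support width, number of components, and spatial domain used here are illustrative assumptions, not the paper's values.

```python
import numpy as np

def cubic_bspline(r):
    """Cubic B-spline bump: nonnegative, with finite support |r| < 2,
    where r is measured in units of half the support width."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    m1 = r <= 1
    m2 = (r > 1) & (r < 2)
    out[m1] = 2.0 / 3.0 - r[m1] ** 2 + r[m1] ** 3 / 2.0
    out[m2] = (2.0 - r[m2]) ** 3 / 6.0
    return out

def spatial_basis(s, centers, support):
    """Evaluate localized components at 1-D spatial points s.
    `support` is the width of each component; centers are placed
    uniformly by the caller, giving overlapping adjacent components."""
    h = support / 2.0  # each bump spans roughly 2h around its center
    return cubic_bspline((np.asarray(s)[:, None] - centers[None, :]) / h)

# Illustrative usage: 6 components uniformly covering the domain [0, 10].
centers = np.linspace(0.0, 10.0, 6)
Phi_s = spatial_basis(np.linspace(0.0, 10.0, 101), centers, support=4.0)
```

Neighbouring points then share overlapping components (nonnegative correlation), while points farther apart than the support share none, mirroring the locality property described above.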
The temporal function is also specified by a basis
(9)
However, to be able to predict missing data of the type illustrated in Figure 1, we cannot rely on a localized basis for extended interpolations over space-time. Due to its good approximation properties, we instead apply the periodic basis developed by Solin & Särkkä (2014), defined over a range :
(10)
Similar to a Fourier basis, allows for periodic covariance structures that capture both fixed and periodic patterns in the data along time, with different frequencies. Moreover, as grows, any temporally stationary covariance structure can be captured, cf. Solin & Särkkä (2014). Using (10), the maximum frequency in the model is . Hence, depending on the data and the highest-frequency periodic patterns we may expect in it, an appropriate value of can be chosen.
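A minimal sketch of assembling the space-time basis in (6) follows, using a plain Fourier basis as a stand-in for the periodic basis of Solin & Särkkä (2014); the exact basis functions, the range, and the component counts here are assumptions for illustration.

```python
import numpy as np

def temporal_basis(t, n_freq, period):
    """Fourier-type periodic basis over a range `period` -- a stand-in for
    the periodic basis of Solin & Sarkka (2014). The highest modeled
    frequency is n_freq / period."""
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t)]  # constant term captures fixed patterns
    for k in range(1, n_freq + 1):
        w = 2.0 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.stack(cols, axis=-1)  # shape (len(t), 1 + 2 * n_freq)

def spacetime_basis(phi_t, phi_s):
    """Row-wise Kronecker product phi = phi_t (x) phi_s, as in eq. (6)."""
    n = phi_t.shape[0]
    return np.einsum('ni,nj->nij', phi_t, phi_s).reshape(n, -1)

# Illustrative usage: 3 temporal frequencies over an assumed period of 12.
Phi_t = temporal_basis(np.linspace(0.0, 12.0, 50), n_freq=3, period=12.0)
```

The resulting basis dimension is the product of the temporal and spatial basis sizes, matching the Kronecker structure of (6).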
In summary, the proposed spatio-temporal basis in (6) is of dimension and yields a covariance function that may vary temporally with different frequencies specific to different spatial regions, as illustrated in Figure 2. The covariance structure is determined by the parameter , which we learn using a covariance-fitting methodology considered next.
We describe a covariance-fitting approach for learning the model parameter , up to an arbitrary scale factor, from streaming data. Given a training dataset , this approach enables us to update the predictor from (1) in a streaming fashion as . We consider fitting the model covariance structure of the training data , which is parameterized by in (5), to the empirical structure. Let us first define a normalized sample covariance matrix of the training data,
Here corresponds to using . Then the optimal model parameters are given by a covariance-fitting criterion (cf. Cressie (1985); Anderson (1989); Cressie & Johannesson (2008); Stoica et al. (2011)) with minimizer:
(11)
Here the matrix norm corresponds to a weighted norm which penalizes correlated residuals. The learned parameter is invariant with respect to the mean parameter and can be rescaled by an arbitrary scale factor Zachariah et al. (2017). Moreover, the resulting predictor corresponding to in equation (1) can be written in the equivalent form:
(12)
where . The -dimensional weight vector is defined as the minimizer
(13)
where the elements of are given by
For proofs of these relations and a derivation of the computational properties, see Zachariah et al. (2017).
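The exact per-sample updates are those of Zachariah et al. (2017), but the overall streaming pattern can be sketched as follows: sufficient statistics are updated with each new sample, and the weight vector is re-solved by coordinate descent on a weighted l1-regularized surrogate of (13). The class name, the fixed regularization weight `lam`, and the LASSO-type surrogate are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

class StreamingL1Predictor:
    """Sketch of a sequentially updated linear predictor in the spirit of
    the covariance-fitting approach: the statistics Phi^T Phi and Phi^T y
    are updated per sample, and the weights are re-solved by cyclic
    coordinate descent on a weighted-LASSO surrogate."""

    def __init__(self, dim, lam=0.1):
        self.A = np.zeros((dim, dim))  # running Phi^T Phi
        self.b = np.zeros(dim)         # running Phi^T y
        self.w = np.zeros(dim)         # current weight vector
        self.lam = lam

    def update(self, phi, y, n_sweeps=20):
        """Absorb one new sample (phi, y), then refine the weights."""
        self.A += np.outer(phi, phi)
        self.b += phi * y
        d = np.diag(self.A)
        for _ in range(n_sweeps):  # coordinate descent sweeps
            for k in range(len(self.w)):
                if d[k] == 0.0:
                    continue
                r = self.b[k] - self.A[k] @ self.w + d[k] * self.w[k]
                # soft-thresholding step of the l1-regularized problem
                self.w[k] = np.sign(r) * max(abs(r) - self.lam, 0.0) / d[k]
        return self.w

    def predict(self, phi):
        return phi @ self.w
```

Because only fixed-size statistics are stored, the memory footprint is constant in the number of samples, echoing the runtime and memory properties claimed for the method.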
The resulting predictor in (12) is called the Spice (sparse iterative covariance-based estimation) predictor. It is computed via a convex, sparsity-inducing regularized minimization problem that can be solved using coordinate descent with recursively updated quantities at each new training point. By exploiting this structure, our predictor can now be updated with streaming data as . A pseudocode implementation is provided in Algorithm 1. The key recursively updated quantities passed from one update to the next are the symmetric matrix and the vectors and of dimension , along with the scalar . Here is the weight vector at sample , which is initialized at zero along with the above variables in Algorithm 1. The runtime is linear in and constant in memory. That is, for a fixed training data size , the total runtime of the algorithm is on the order of and its memory requirement is . For further details, we refer the reader to the supplementary material. Code is available at github.

The proposed method has been derived for prediction using large and/or streaming datasets. We now demonstrate its predictive properties using synthetic data and, for the sake of reference, compare it with a Gpr (Gaussian process regression) method using different covariance functions .
To illustrate a dynamically evolving process, we consider a planar wave in one-dimensional space and time, cf. Figure 3(a). The unknown process is generated according to:
(14)
where is the speed of the wave along space in units per second, is the wavelength in units of space, and is a zero-mean white Gaussian process with standard deviation . Note that the process decays exponentially as it propagates through space. For our experiments, we set [spatial units/sec], [spatial units] and . Synthetic data is generated over a uniform grid and a subset of training points is used. Different contiguous space-time blocks are selected as test regions to resemble realistic scenarios in which the coverage of sensors, satellites or other measurement equipment is incomplete. For example, the dashed white boxes in Figure 4 emulate cases where data over a small region is missing most of the time. By contrast, the dashed black boxes correspond to cases where data over large spatial regions is missing some of the time.
The process in these test regions, as well as at other randomly missing points, is predicted using the proposed method with , and a spatial basis support set to spatial units. This results in being of dimension . The mean-square error (MSE) of the prediction is shown in Figure 3(b), evaluated using 25 Monte Carlo simulations. The region in the white box extends over almost the entire time dimension, hence there are very few neighbouring training points in time to draw upon for prediction and no information about the periodicities in the region. Instead, our method leverages the neighbouring spatial information to obtain a good prediction, resulting in a low MSE. Both black boxes are test regions that have neighbouring training points providing temporal information about the process. However, the left region has training points both before and after, whereas the right region only has points before, yielding a more challenging prediction problem. Nevertheless, the proposed method is able to learn both the periodic and the local damping patterns to provide accurate predictions in both regions.
We also include the MSE of Gpr using two different covariance functions learned by a numerical maximum likelihood search. While this method is not applicable to the streaming data of interest here, it provides a performance reference. First, we use a Matérn ARD covariance model Rasmussen & Nickisch (2010) to carefully adapt to both space and time dimensions. In Figure 3(c) it is seen that the resulting prediction errors are markedly worse for the large missing spatial regions, and the method naturally fails to capture the periodic pattern of the process. Next, we use a periodic Matérn ARD covariance model to also capture space-time periodicity. However, the MSE (Figure 3(d)) is degraded throughout, possibly due to the non-convex optimization problem used to learn the model parameters, which can lead to local minima issues, including learning erroneous periods.
Here we generate a process that emulates scenarios in which temporal periodicities vary across spatial regions, as occurs, e.g., in climate data. Figure 4(a) shows a realization of a process generated according to
(15)
where the period differs across space and is a zero-mean white Gaussian process with standard deviation . In the upper region of the spatial domain , i.e., , the process has a constant mean. In the middle and bottom regions, is large and small, respectively. The data is generated over a uniform grid and a subset of points is used for training. A contiguous space-time block, marked by the dashed black box in Figure 5, forms a test region to emulate scenarios where data can be missing over a large spatial region for some time.
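A process of this type can be sketched as below. The region boundaries, the two periods, and the normalization of the spatial coordinate to [0, 1] are illustrative assumptions, since the paper's values in (15) were lost in extraction: a constant mean in the upper region, a long period in the middle, and a short period at the bottom.

```python
import numpy as np

def varying_period_process(s, t, sigma=0.1, rng=None):
    """Hedged stand-in for the process in (15): the temporal period depends
    on the spatial region. s is assumed normalized to [0, 1]; the region
    boundaries (1/3, 2/3) and the periods (8, 2) are illustrative."""
    rng = np.random.default_rng(rng)
    s, t = np.broadcast_arrays(np.asarray(s, float), np.asarray(t, float))
    clean = np.ones_like(s)                        # upper region: constant mean
    mid = (s >= 1.0 / 3.0) & (s < 2.0 / 3.0)
    bot = s >= 2.0 / 3.0
    clean[mid] = np.sin(2.0 * np.pi * t[mid] / 8.0)  # long period
    clean[bot] = np.sin(2.0 * np.pi * t[bot] / 2.0)  # short period
    return clean + sigma * rng.standard_normal(s.shape)

# Illustrative usage (noise-free for inspection).
S, T = np.meshgrid(np.linspace(0.0, 1.0, 30),
                   np.linspace(0.0, 16.0, 40), indexing='ij')
Y = varying_period_process(S, T, sigma=0.0)
```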
For the proposed method we use , and a support of for the spatial basis, so that . For the Gpr we use the periodic Matérn ARD kernel. Figures 4(b) and 4(c) show the MSE performance of the proposed method and Gpr, respectively, obtained using 25 Monte Carlo simulations. The MSE of the proposed method is overall lower than that of Gpr, both in the dashed test region and outside it. Unlike the proposed method, Gpr has one parameter to fit to an overall periodic pattern and is thus unable to learn spatially localized patterns. Thus, after learning, the process is predicted to be nearly constant along time for all parts of the spatial region.
We now demonstrate the proposed method for much larger, and possibly streaming, real-world datasets.
As a first application example, we use tropical Pacific Sea Surface Temperature (SST) data Wikle (2011). These data represent gridded monthly SST anomalies, in °C, from January 1970 through March 2003, over a spatial region from S to N and E to W. The spatial resolution of the data is in both latitude and longitude.
Here we consider data from the first 36 months, making the total number of space-time data points equal to . In the first experiment, training points are sampled randomly across space-time and the missing data constitute the test points. Here we set as the number of training points. For the proposed method we set , and the spatial support to be half of each spatial dimension. Then . Figure 5(a) shows the prediction error histogram of all test points across the spatio-temporal domain. We see that it is centered around zero and its dispersion is considerably narrower than the dynamic range of the data.
In the second experiment, we select a contiguous space-time block as a test region, in addition to other test points, to evaluate the performance in scenarios where data over entire spatial regions are missing for a period of time. Data falling within the spatial region marked by the black dashed box in Figure 5(c) is missing beyond month 26, as indicated by the black dashed line in Figure 5(d). Here, is the number of training points. The prediction error histogram for this second experiment is shown in Figure 5(b) and remains fairly narrow. Figure 5(c) illustrates the predicted SST anomalies [°C] for a spatial slice at month . We pick a spatial point in a region where the El Niño effect, i.e., the periodic warming of the equatorial Pacific sea surface Sarachik & Cane (2010), is known to be noticeable. The prediction of the SST anomalies at this spatial location across time, along with the true SST, is illustrated in Figure 5(d). Note that the predictor is able to track the rising temperature deviation also for the missing data.
As a second application example, we use precipitation data from the Climate Research Unit (CRU) time series datasets of climate variations Jones & Harris (2013). The precipitation data consist of monthly rainfall in millimeters over a period from to , obtained with high spatial resolution (0.5 by 0.5 degree) over the whole planet. Here we consider a five-year period from to and between spatial coordinates W to W and N to N. This yields a total of data points.
The spatial region indicated by the black dashed box in Figure 6(b) beyond month , as seen in Figure 6(c), constitutes a contiguous test region, in addition to other randomly selected test points. The remaining points are used for training.
For the proposed method we set , and the spatial support to be half of each spatial dimension. Then . Figure 6(a) shows the prediction error histogram for the precipitation test data. It is centered around zero and its dispersion is narrower than the dynamic range of the data. Figure 6(b) shows the contour plot of predicted precipitation for a spatial slice at month . The red cross and plus markers indicate spatial points whose actual and predicted time series are compared in Figures 6(c) and 6(d), respectively. Note that the estimated precipitation tracks the true precipitation well, even to the right of the black dashed line where the data was not seen during training. Note also the ability of the predictor to track the different seasonal patterns in the missing regions.
We proposed a method in which a spatio-temporal predictor can be learned and updated sequentially as spatio-temporal data is obtained as a stream. It is capable of capturing spatially varying temporal patterns, using a non-stationary covariance model that is learned using a covariance-fitting approach. We demonstrated, using both simulated and real climate data, that it is capable of producing accurate predictions in large unobserved space-time test regions. In future work, we intend to further improve the computational efficiency of the method by exploiting the spatially localized structure of the covariance model.
(a) Histogram of the prediction error of all test points in the first scenario with randomly sampled training points. The plot in red is a fitted Gaussian distribution. Note that the dynamic range of the data is °C. (b) Histogram of the prediction error of all test points for the second scenario with a contiguous space-time block as test region. The plot in red is a fitted Gaussian distribution. (c) Contour plot of predicted SST for a single spatial slice at time . The red dots denote training points and the black dashed box indicates a contiguous test region. The red cross denotes a point of interest in which the El Niño effect can be observed. (d) Comparison of time series of actual and estimated SST for the point marked by the red cross in 5(c). The data to the right of the black dashed line is part of the contiguous test region and is not used during training.

This research was financially supported by the project NewLEADS - New Directions in Learning Dynamical Systems (contract number: 621-2016-06079), funded by the Swedish Research Council.
By expanding the objective function in (11), we obtain the equivalent form
(16)
where when . Next we define an auxiliary variable that satisfies
or equivalently
(17)
Using the auxiliary variable and the definition of , we can therefore express the objective function as:
(18)
where are nonnegative and satisfies the constraint. The minimizing is the learned model parameter. This problem is identified as a convex, semidefinite program, cf. Lobo et al. (1998). We may also add the following normalization constraint,
to match the normalized covariance matrix. This merely adds a linear constraint to problem (18) with a constrained minimizer denoted . We now prove that .
Begin by defining a constant , such that at the minimum of (18). We show that is the only possible value and so both terms in (18) equal each other at the minimum.
Let , and observe that the cost (18) is then bounded by
Thus must satisfy , or . Therefore is the only solution and both terms must be equal at the minimum. We can thus re-write the minimization of (18) as the following problem
(19)
with minimizer and where is an auxiliary variable.
Next, consider an equivalent problem to (19) obtained by re-defining the variables as . Then and , so that the equivalent problem becomes
(20)
where . The minimizer of the equivalent problem (20) is therefore . Problem (20) is, however, identical to the constrained problem
(21)
whose minimizer is when , which follows from expanding the cost in (11) and the normalization constraint.
Thus we have proved that , and since the predictor is invariant to uniform scaling of , that is, , the normalization constraint does not affect the result.
For further details, see Zachariah et al. (2017).
Consider the following augmented problem
(22)
Solving for and yields the minimizer
It can be shown that by inserting the minimizing back into (22), we obtain a concentrated cost function which is equal to that in (18). Thus we obtain the sought model parameter from the augmented problem.
Moreover, we can identify . Thus we obtain both and the weights from the augmented problem. Using these facts, we may alternatively solve for first. The second and third terms in (22) can be written as
and
respectively. Then the minimizing hyperparameters in (22) can be expressed in closed form. Inserting the expression back into (22) yields a concentrated cost function which, after dividing by , equals that in (14). Thus, using the minimizing weights after concentrating the augmented problem with respect to , yields