Learning Localized Spatio-Temporal Models From Streaming Data

02/09/2018 · by Muhammad Osama et al. · Uppsala universitet

We address the problem of predicting spatio-temporal processes with temporal patterns that vary across spatial regions, when data is obtained as a stream. That is, when the training dataset is augmented sequentially. Specifically, we develop a localized spatio-temporal covariance model of the process that can capture spatially varying temporal periodicities in the data. We then apply a covariance-fitting methodology to learn the model parameters which yields a predictor that can be updated sequentially with each new data point. The proposed method is evaluated using both synthetic and real climate data which demonstrate its ability to accurately predict data missing in spatial regions over time.




1 Introduction

Many real-world processes of interest, ranging from climate variables to brain signals, are spatio-temporal in nature, cf. Cressie & Wikle (2011). That is, they can be described as a random quantity that varies over some fixed spatial and temporal domain. Suppose we obtain training points from a real-valued spatio-temporal process,

where denotes the quantity of interest observed at the training point, with spatial coordinate and time . For notational convenience, let denote an unobserved test point in space-time where is unknown. Then a common goal is to predict in unobserved space-time regions using . Specifically, certain spatial regions may have limited data coverage over extended periods of time, as illustrated in Figure 1.

Figure 1: Example of training points (dots) in over a bounded space-time domain with one spatial dimension. Note that the sampling pattern may be irregular and that it is not possible to provide complete spatial coverage at all times. A typical problem is to predict the process in unobserved regions (shaded).

In real-world applications, need not be gathered in a single batch but is obtained in parts over time from various sensors, stations, satellites, etc. That is, the training dataset is augmented sequentially with each new data point. In these streaming data scenarios, we are interested in continuously refining the prediction at the test point as new data is added to the training set.

The unknown data-generating process is often assumed to belong to a class of data models indexed by a parameter . Each model in the class yields a predictor of at test point . A specific set of model parameters is learned using . Examples of commonly used model classes include Gaussian Processes (GP) Rasmussen & Williams (2006), spatio-temporal random effects models Cressie et al. (2010), dynamic factor analysis models Lopes et al. (2008); Fox & Dunson (2015), spatial random effect models extended to incorporate time as an additional dimension Zammit-Mangion & Cressie (2017) (cf. related work section below). For many spatio-temporal applications, the model class should be capable of expressing temporal patterns that change across different spatial regions. Moreover, for streaming data scenarios, the learned parameter and the resulting predictor should be updated in a sequential manner.

Our contribution in this paper is two-fold:

  • we develop a non-stationary, localized covariance model capable of capturing temporal patterns that change across space, as illustrated in Figure 2 below.

  • we show how to sequentially learn the covariance model parameters and update the predictor from streaming spatio-temporal data, with a runtime that is linear in .

In Section 2, we relate our work to already existing approaches and introduce a commonly used model class in Section 3. In Section 4 we develop a localized spatio-temporal covariance model to be used in conjunction with a covariance-fitting learning approach. Finally, the proposed method is evaluated using synthetic and real climate data in Sections 5 and 6, respectively.


Notation: we use operators for stacking two elements into a single column vector, the Kronecker product, the Kronecker delta function, the weighted norm, and the Moore-Penrose pseudoinverse, as well as the sample mean.

2 Related work

A popular model class is the family of GPs, specified by a mean and covariance function Rasmussen & Williams (2006). This approach is computationally prohibitive in its basic form since both learning the model parameters and implementing the predictor require a runtime on the order of , where is typically large in spatio-temporal applications. The predictor implementation can be approximated using various techniques. One popular approach is to approximate the training data using inducing points, which reduces the runtime to Quiñonero-Candela & Rasmussen (2005); Bijl et al. (2015). Moreover, by assuming Kronecker covariance functions it is possible to obtain even shorter runtimes by utilizing the Kronecker structure of the GP covariance matrix Saatçi (2012). If the model class is restricted to stationary covariance functions, the runtimes can be reduced further, cf. Saatçi (2012); Wilson et al. (2014). In the space-time domain, such models are also equivalent to dynamical system models, so that the process can be approximated using a basis expansion and implemented by a Kalman smoother Särkkä et al. (2013). In the above cases, however, the learned parameters and the predictor are not updated jointly when obtaining streaming data.

The restriction to stationary covariance models is, moreover, not always adequate to capture temporal patterns that differ across spatial regions. This modeling limitation is addressed by Cressie et al. (2010), where a discrete-time model class is partially specified using a spatial basis function expansion with time-varying expansion coefficients. These are modeled as a first-order vector auto-regressive process. The coefficients thus determine a spatial pattern of the process that evolves at each discrete time instant. This model class can capture patterns localized to specific regions in space, unlike stationary covariance models. The predictor can be viewed as a spatial fixed-rank kriging method that is updated via a Kalman filter and is thus applicable to streaming data (cf. Cressie & Johannesson (2008)). The model parameter, however, is learned using a moment-fitting approach that operates on batch rather than streaming datasets. Other work using dynamic factor analysis models Lopes et al. (2008); Fox & Dunson (2015) similarly allows for time-varying coefficients but with a more flexible, data-adaptive basis. However, these models are implemented using Markov chain Monte Carlo methods, which are computationally prohibitive for the scenarios considered herein.

Moreover, a first-order auto-regressive structure may not accurately capture more complex temporal patterns observed in real spatio-temporal processes. The approach taken by Zammit-Mangion & Cressie (2017) circumvents this limitation by using basis functions that are localized in both space and time. Time locality cannot, however, capture the periodic patterns or trends necessary for interpolation over longer periods. The model parameters are learned using an expectation-maximization method, which is not readily applicable to streaming data scenarios.

3 Spatio-temporal model class

We begin by defining the data vector obtained from . For the test point , we consider the unbiased predictor of as a linear combination of the data Stein (2012):


where is a vector of weights which naturally depend on the test point . The weight vector is defined as the minimizer of the conditional mean square prediction error. That is,


Since the conditional error is determined by the unknown distribution , we specify a class of data-generating models, using only the mean and covariance Cressie & Wikle (2011):


The function captures the expected trend of the entire spatio-temporal process, and when there is no such general trend we set it to zero. The function captures the smoothness of the process in space-time and is of dimension . The parameter matrix is diagonal and specifies the relevance of each dimension, similar to the way in which automatic relevance determination is sometimes used within GPs Tipping (2001); Faul & Tipping (2002). Taken together, (3) specifies a class of models, each of which is indexed by its mean and covariance parameters. The covariance parameters, which we collectively denote by a single vector for notational convenience, determine the spatio-temporal covariance structure, which depends on the basis function. In the next section, we will specify this function to develop a suitable covariance model that captures local spatial and periodic temporal patterns.
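Concretely, a covariance of this low-rank form can be sketched as follows. The variable names and toy numbers are ours, not the paper's; the only structure taken from the text is that the covariance is a basis inner product weighted by a nonnegative diagonal relevance matrix:

```python
import numpy as np

def covariance(phi_x, phi_xp, theta):
    """Model covariance k(x, x') = phi(x)^T diag(theta) phi(x'):
    theta >= 0 sets the relevance of each basis dimension, and a
    zero entry switches that dimension off entirely."""
    return np.sum(theta * phi_x * phi_xp)

# toy 3-dimensional basis evaluated at two space-time points
phi_a = np.array([1.0, 0.5, 0.0])
phi_b = np.array([1.0, 0.25, 0.8])
theta = np.array([2.0, 4.0, 0.0])   # third dimension deemed irrelevant

k_ab = covariance(phi_a, phi_b, theta)  # 2*1*1 + 4*0.5*0.25 + 0 = 2.5
```

Because the relevance weights enter linearly, fitting them is what later makes the covariance-fitting problem convex.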

For a given model in the class, the optimal weights (2) are given in closed form Stein (2012) as

where the subindex highlights the model parameter dependence. The quantities in are determined by the regressor matrix

and the following covariance matrix


with being an oblique projector onto .

The optimal weights are invariant to the mean parameters and to uniform scaling of the covariance parameters. By learning the covariance parameters up to an arbitrary scale factor, the predictor (1) is given by the linear combiner weights. If we assume that the process is Gaussian, the model can be learned using the maximum likelihood framework. However, this yields neither a convex problem nor one that is readily solved as the dataset is augmented sequentially. In the next section, we apply a convex covariance-fitting framework to learn the spatio-temporal model using streaming data.
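The closed-form weights above can be illustrated in their simplest special case. This is a sketch of zero-mean (simple kriging) weights only; the paper's full expression also involves the regressor matrix and an oblique projector for the unknown mean, which we omit here as an assumption:

```python
import numpy as np

def kriging_weights(K, k_star, noise_var=1e-8):
    """MSE-optimal linear-combiner weights for a zero-mean model:
    lambda = (K + noise I)^{-1} k_*, where K is the training
    covariance matrix and k_* the covariances to the test point."""
    n = K.shape[0]
    return np.linalg.solve(K + noise_var * np.eye(n), k_star)

# low-rank covariance built from a basis, as in the model class (3)
Phi = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 1.0]])  # n x d regressors
theta = np.array([1.0, 2.0])                          # diagonal relevance
K = Phi @ np.diag(theta) @ Phi.T

lam = kriging_weights(K, K[:, 0])   # weights for predicting point 0
```

The prediction itself is then the inner product of `lam` with the observed data vector.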

4 Proposed method

Below we specify the function in (3) such that the spatio-temporal covariance structure can express local spatial patterns with varying temporal periodicities as illustrated in Figure 2. Subsequently, we apply a covariance-fitting methodology for learning the model parameters such that the predictor (1) can be updated sequentially for each new observation Zachariah et al. (2017).

Figure 2: Example illustrations of spatio-temporal covariance structures that are possible using the proposed function . Contour plots of the covariance between a test point (red cross) and all other coordinates in the space-time domain. For the sake of illustration, the spatial dimension is one. (a) The test point is positively correlated with only neighbouring points in space and has a constant covariance with points across time. (b) The covariance is local in space as in (a), but across time the covariance decays slowly. (c) The covariance is periodic.

4.1 Local-periodic space-time basis

The function varies over the space-time domain and its elements can be thought of as basis functions. For compactness, it is formulated as a Kronecker product of temporal and spatial bases.
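The Kronecker construction can be sketched in a few lines. The toy dimensions are placeholders; the point is only that every temporal component multiplies every spatial component:

```python
import numpy as np

# phi(s, t) = phi_t(t) ⊗ phi_s(s): each temporal basis component
# multiplies every spatial component, giving N_t * N_s space-time
# basis functions in total
phi_s = np.array([0.5, 1.0])         # spatial basis at some location s
phi_t = np.array([1.0, 0.0, 2.0])    # temporal basis at some time t
phi = np.kron(phi_t, phi_s)          # combined basis, dimension 6
```

This separable structure is what lets the model mix local spatial components with periodic temporal components in the sections below.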

We begin by specifying the spatial function as


where the basis vector for the spatial dimension,


is composed of localized components with a finite support . For notational simplicity, we consider the support and number of components to be the same for each dimension. Based on their computational attractiveness and local approximation properties, we use a cubic spline basis Rasmussen & Williams (2006); Wasserman (2006). Then (8) is given by (4), where a center parameter determines the location of each component. Figure 3(a) illustrates the components as functions of the spatial dimension. We place the centers of the components uniformly across the spatial dimensions.

Figure 3: (a) Components of the local spatial basis and , respectively. Each component is centered at red crosses on the spatial axes and . (b) Components of the periodic temporal basis .

Using this basis allows for covariance structures that are localized in space, in such a way that neighbouring points have a nonnegative correlation and points far from each other have no correlation, as determined by the support size . Hence, for a given support, the resulting covariance structure can capture local spatial patterns of a certain scale, and it can easily be extended to cover multiple scales by replacing (6) with, for example, a concatenated basis that accommodates two different support sizes. The number of basis functions is chosen such that adjacent localized components have overlapping support, to cater for points in between them. This requirement is fulfilled by choosing the number of components in proportion to the range of the spatial dimension, in which case adjacent components have a fixed percentage of overlap. The maximum number of components is limited by the number of training points and the computational resources that are available.
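A localized cubic-spline component can be sketched as follows. The bump formula is the standard cubic B-spline; the scaling convention tying its width to a `support` parameter is our assumption, not the paper's exact parameterization:

```python
import numpy as np

def cubic_bspline(u):
    """Standard cubic B-spline bump: nonzero only for |u| < 2."""
    u = np.abs(np.asarray(u, dtype=float))
    out = np.zeros_like(u)
    near, far = u < 1, (u >= 1) & (u < 2)
    out[near] = (4 - 6 * u[near]**2 + 3 * u[near]**3) / 6
    out[far] = (2 - u[far])**3 / 6
    return out

def spatial_basis(s, centers, support):
    # one localized component per center; a component is exactly zero
    # outside |s - center| < support / 2
    return cubic_bspline((s - centers) / (support / 4.0))

centers = np.linspace(0.0, 10.0, 6)   # uniformly placed centers
phi_s = spatial_basis(3.0, centers, support=4.0)
```

The finite support is what makes far-apart points exactly uncorrelated under the resulting covariance.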

The temporal function is also specified by a basis


However, to be able to predict missing data of the type illustrated in Figure 1 we cannot rely on a localized basis for extended interpolations over space-time. Due to its good approximating properties we instead apply the periodic basis developed by Solin & Särkkä (2014) defined over a range :


Similar to a Fourier basis, allows for periodic covariance structures that capture both fixed and periodic patterns in the data along time with different frequencies. Moreover, as grows, any temporally stationary covariance structure can be captured, cf. Solin & Särkkä (2014). Using (10), the maximum frequency in the model is . Hence, depending on the data and the highest frequency periodic patterns we may expect in it, an appropriate value of can be chosen.
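A Fourier-type periodic basis of this kind can be sketched as follows. Treat the exact functional form as an assumption: the paper uses the basis of Solin & Särkkä (2014), while this sketch uses plain cosine/sine pairs that share the key property that the maximum modelled frequency is the number of pairs divided by the range:

```python
import numpy as np

def periodic_basis(t, n_freq, t_range):
    """Fourier-type periodic temporal basis over a range t_range:
    a constant plus cosine/sine pairs at frequencies j / t_range,
    j = 1..n_freq, so the highest modelled frequency is
    n_freq / t_range."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    cols = [np.ones_like(t)]
    for j in range(1, n_freq + 1):
        cols.append(np.cos(2 * np.pi * j * t / t_range))
        cols.append(np.sin(2 * np.pi * j * t / t_range))
    return np.stack(cols, axis=-1)

B = periodic_basis([0.0, 2.5], n_freq=2, t_range=10.0)  # shape (2, 5)
```

The constant column captures fixed (non-periodic) levels, while the pairs capture periodicities at increasing frequencies.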

In summary, the proposed spatio-temporal basis in (6) is of dimension and yields a covariance function that may vary temporally with different frequencies specific to different spatial regions, as illustrated in Figure 2. The covariance structure is determined by the parameter , which we learn using a covariance-fitting methodology considered next.

4.2 Learning method for streaming data

We describe a covariance-fitting approach for learning the model parameter, up to an arbitrary scale factor, from streaming data. Given a training dataset, this approach enables us to update the predictor from (1) in a streaming fashion. We consider fitting the model covariance structure of the training data, which is parameterized by in (5), to the empirical structure. Let us first define a normalized sample covariance matrix of the training data,

Here corresponds to using . Then the optimal model parameters are given by a covariance-fitting criterion (cf. Cressie (1985); Anderson (1989); Cressie & Johannesson (2008); Stoica et al. (2011)) with minimizer:


Here the matrix norm corresponds to a weighted norm which penalizes correlated residuals. The learned parameter is invariant with respect to the mean parameter and can be rescaled by an arbitrary scale factor Zachariah et al. (2017). Moreover, the resulting predictor corresponding to in equation (1) can be written in the equivalent form:


where . The -dimensional weight vector is defined as the minimizer


where the elements of are given by

For proofs of these relations and a derivation of their computational properties, see Zachariah et al. (2017).
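The equivalence established in Zachariah et al. (2017) identifies the covariance fit with a square-root-LASSO-type regression. The following sketch evaluates such an objective; the exact scaling of the penalty is our assumption, and this is an illustration of the equivalent problem's shape rather than the paper's implementation:

```python
import numpy as np

def sqrt_lasso_cost(w, Phi, y):
    # non-squared residual norm plus a column-norm-weighted l1 penalty;
    # the 1/sqrt(n) scaling of the penalty is an assumption here
    n = len(y)
    fit = np.linalg.norm(y - Phi @ w)
    penalty = np.sum(np.linalg.norm(Phi, axis=0) * np.abs(w)) / np.sqrt(n)
    return fit + penalty

Phi = np.eye(2)
y = np.array([1.0, 0.0])
cost = sqrt_lasso_cost(np.array([1.0, 0.0]), Phi, y)  # 0 + 1/sqrt(2)
```

The non-squared residual norm makes the problem convex and self-scaling, which is why no separate noise-variance parameter needs to be tuned.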

The resulting predictor in (12) is called the Spice (sparse iterative covariance-based estimation) predictor. It is computed via a convex, sparsifying regularized minimization problem that can be solved using coordinate descent with recursively updated quantities at each new training point. By exploiting this structure, our predictor can be updated with streaming data. A pseudocode implementation is provided in Algorithm 1. The key recursively updated quantities passed from one update to the next are a symmetric matrix and two vectors, along with a scalar. The weight vector is initialized at zero along with these variables in Algorithm 1. The runtime is linear in the number of samples and constant in memory; that is, for a fixed training data size, the total runtime of the algorithm is linear in that size and its memory requirement is constant. For further details, we refer the reader to the supplementary material. Code is available on GitHub.

  Input: and
     if  then
     end if
  until number of iterations equals L
Algorithm 1 Learning from streaming datasets
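The sequential structure of Algorithm 1 can be sketched as follows. Only the structure is taken from the text (rank-one updates of stored statistics, per-sample cost independent of the number of points seen); the solver is a stand-in assumption, using a plain ridge solve in place of the paper's Spice coordinate-descent iterations:

```python
import numpy as np

class StreamingPredictor:
    """Sketch of sequential updating: the sufficient statistics
    Phi^T Phi and Phi^T y are rank-one updated per sample, so each
    update costs O(d^2) regardless of how many points have streamed
    by. A ridge solve stands in for the paper's coordinate-descent
    iterations (a simplifying assumption)."""

    def __init__(self, dim, reg=1e-3):
        self.A = np.zeros((dim, dim))   # running Phi^T Phi
        self.b = np.zeros(dim)          # running Phi^T y
        self.reg = reg

    def update(self, phi, y):
        self.A += np.outer(phi, phi)    # rank-one update, O(d^2)
        self.b += y * phi
        return self

    def predict(self, phi_test):
        d = len(self.b)
        w = np.linalg.solve(self.A + self.reg * np.eye(d), self.b)
        return float(phi_test @ w)

sp = StreamingPredictor(dim=2)
for phi, y in [(np.array([1.0, 0.0]), 2.0),
               (np.array([0.0, 1.0]), -1.0),
               (np.array([1.0, 0.0]), 2.0)]:
    sp.update(phi, y)

pred = sp.predict(np.array([1.0, 0.0]))  # close to 2
```

Because only fixed-size statistics are stored, memory stays constant as the stream grows, matching the claim above.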

5 Synthetic data

The proposed method has been derived for predictions using large and/or streaming datasets. We now demonstrate its predictive properties using synthetic data and, for reference, compare it with a Gpr (Gaussian process regression) method using different covariance functions.

5.1 Damped planar wave

To illustrate a dynamically evolving process, we consider a planar wave in one-dimensional space and time, cf. Figure 4(a). The unknown process is generated as a travelling sinusoid with a given speed along space, in units per second, and a given wavelength, in units of space, corrupted by a zero-mean white Gaussian process. Note that the process decays exponentially as it propagates through space. For our experiments, we set the speed [spatial units/sec] and the wavelength [spatial units]. Synthetic data is generated over a uniform grid and a subset of training points is used. Different contiguous space-time blocks are selected as test regions to resemble realistic scenarios in which the coverage of sensors, satellites, or other measurement equipment is incomplete. For example, the dashed white boxes in Figure 4 emulate cases where data over a small region is missing most of the time. By contrast, the dashed black boxes correspond to cases where data over large spatial regions is missing some of the time.
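A generator of this kind can be sketched as follows. The functional form and all constants are placeholders (the paper's exact equation and parameter values did not survive extraction); what is kept is the described structure: a sinusoid travelling at speed `v` that decays exponentially in space, plus white Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def damped_plane_wave(s, t, v=1.0, wavelength=4.0, decay=10.0,
                      noise_sd=0.1):
    # travelling sinusoid, exponentially damped in space, plus
    # zero-mean white Gaussian noise (placeholder constants)
    s, t = np.asarray(s), np.asarray(t)
    clean = np.exp(-s / decay) \
        * np.cos(2 * np.pi * (s - v * t) / wavelength)
    return clean + noise_sd * rng.standard_normal(np.shape(clean))

# uniform space-time grid; training points would be a random subset
S, T = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
Y = damped_plane_wave(S, T)
```

Sampling a random subset of grid points and masking out contiguous blocks then reproduces the test setup described above.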

The process in these test regions, as well as at other randomly missing points, is predicted using the proposed method with , and a spatial basis support set to spatial units. This results in being of dimension . The mean-square error (MSE) of the prediction is shown in Figure 4(b) and evaluated using 25 Monte Carlo simulations. The region in the white box extends over almost the entire time dimension; hence there are very few neighbouring training points in time to draw upon for prediction and no information about the periodicities in the region. Instead, our method leverages the neighbouring spatial information to obtain a good prediction, resulting in a low MSE. Both black boxes are test regions with neighbouring training points that provide temporal information about the process. However, the left region has training points both before and after, whereas the right region only has points before, yielding a more challenging prediction problem. Nevertheless, the proposed method is able to learn both the periodic and the local damping patterns and provides accurate predictions in both regions.

We also include the MSE of Gpr using two different covariance functions learned by a numerical maximum likelihood search. While this method is not applicable to the streaming data of interest here, it provides a performance reference. First, we use a Matérn ARD covariance model Rasmussen & Nickisch (2010) to carefully adapt both space and time dimensions. In Figure 4(c) it is seen that the resulting prediction errors are markedly worse for the large missing spatial regions, and the method naturally fails to capture the periodic pattern of the process. Next, we use a periodic Matérn ARD covariance model to also capture space-time periodicity. However, the MSE (Figure 4(d)) is degraded throughout, possibly due to the non-convex optimization problem used to learn the model parameters, which may lead to local minima issues, including learning erroneous periods.

Figure 4: (a) Realization of the damped planar wave process defined in (14). (b) MSE of the proposed method. The red dots denote training points. The white and black dashed boxes represent contiguous space-time test regions where the data is missing. (c) MSE of Gpr using the Matérn ARD covariance model. (d) MSE of Gpr using the periodic Matérn ARD covariance function.

5.2 Varying seasonalities across space

Here we generate a process that emulates scenarios in which temporal periodicities vary across spatial regions. This occurs, e.g., in climate data. Figure 5(a) shows a realization of a process generated according to


where the period differs across space and is a zero-mean white Gaussian process with standard deviation . In the upper region of the spatial domain, the process has a constant mean. In the middle and bottom regions, the period is large and small, respectively. The data is generated over a uniform grid and a subset of points is used for training. A contiguous space-time block, marked by the dashed black box in Figure 5, forms a test region to emulate scenarios where data can be missing over a large spatial region for some time.
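A generator with spatially varying seasonality can be sketched as follows. The region boundaries and periods are placeholders (the paper's exact values did not survive extraction); the kept structure is a constant mean in the upper region, a long period in the middle, and a short period at the bottom:

```python
import numpy as np

rng = np.random.default_rng(1)

def seasonal_process(s, t, noise_sd=0.1):
    # spatial domain [0, 1]: constant mean on top, slow oscillation
    # in the middle, fast oscillation at the bottom (placeholder
    # region boundaries and periods)
    if s > 2.0 / 3.0:
        clean = 1.0                              # constant mean
    elif s > 1.0 / 3.0:
        clean = np.sin(2 * np.pi * t / 12.0)     # large period
    else:
        clean = np.sin(2 * np.pi * t / 3.0)      # small period
    return clean + noise_sd * rng.standard_normal()

y = seasonal_process(0.9, 5.0)
```

A spatially localized basis can fit a different temporal pattern in each of these regions, which is exactly what a single global periodic kernel cannot do.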

For the proposed method we use , and a support of for the spatial basis, so that . For the Gpr we use the periodic Matérn ARD kernel. Figures 5(b) and 5(c) show the MSE performance of the proposed method and Gpr, respectively, obtained using 25 Monte Carlo simulations. The MSE of the proposed method is overall lower than that of Gpr, both in the dashed test region and outside it. Unlike the proposed method, Gpr has a single parameter fit to an overall periodic pattern and is thus unable to learn spatially localized patterns. After learning, the process is therefore predicted to be nearly constant along time for all parts of the spatial region.

Figure 5: (a) Realization of the process defined in (15) with varying periods across space. (b) MSE of the proposed method which is able to learn different periodic temporal patterns across space. The red dots denote training points. The black dashed box marks a contiguous test region. (c) MSE of Gpr using periodic Matérn ARD covariance model.

6 Real data

We now demonstrate the proposed method for much larger, and possibly streaming, real-world datasets.

6.1 Pacific Sea Surface Temperature

As a first application example, we use tropical Pacific Sea Surface Temperature (SST) data Wikle (2011). These data represent gridded monthly SST anomalies, in °C, from January 1970 through March 2003, over a spatial region from S to N and E to W. The spatial resolution of the data is in both latitude and longitude.

Here we consider data from the first 36 months, making the total number of space-time data points equal to . In the first experiment, training points are sampled randomly across space-time and the missing data constitute the test points. Here we set as the number of training points. For the proposed method we set , and the spatial support to be half of each spatial dimension. Then . Figure 6(a) shows the prediction error histogram of all test points across the spatio-temporal domain. We see that it is centered around zero and its dispersion is considerably narrower than the dynamic range of the data.

In the second experiment, we select a contiguous space-time block as a test region, in addition to other test points, to evaluate the performance in scenarios where data over entire spatial regions are missing for a period of time. Data falling within the spatial region marked by the black dashed box in Figure 6(c) is missing beyond month 26, as indicated by the black dashed line in Figure 6(d). Here is the number of training points. The prediction error histogram for this second experiment is shown in Figure 6(b) and remains fairly narrow. Figure 6(c) illustrates the predicted SST anomalies [°C] for a spatial slice at month . We pick a spatial point in a region where the El Niño effect, i.e., the periodic warming of the equatorial Pacific sea surface Sarachik & Cane (2010), is known to be noticeable. The prediction of the SST anomalies at this spatial location across time, along with the true SST, is illustrated in Figure 6(d). Note that the predictor is able to track the rising temperature deviation also for the missing data.

6.2 Precipitation data

As a second application example, we use precipitation data from the Climate Research Unit (CRU) time series datasets of climate variations Jones & Harris (2013). The precipitation data consists of monthly rainfall in millimeters over a period from to , obtained with high spatial resolution (0.5 by 0.5 degrees) over the whole planet. Here we consider a five-year period from to and between spatial coordinates W to W and N to N. This yields a total number of data points.

The spatial region indicated by the black dashed box in Figure 7(b) beyond month , as seen in Figure 7(c), constitutes a contiguous test region, in addition to other randomly selected test points. The remaining points are used for training.

For the proposed method we set , and the spatial support to be half of each spatial dimension. Then . Figure 7(a) shows the prediction error histogram for the precipitation test data. It is centered around zero and its dispersion is narrower than the dynamic range of the data. Figure 7(b) shows the contour plot of predicted precipitation for a spatial slice at month . The red cross and plus markers indicate spatial points whose actual and predicted time series are compared in Figures 7(c) and 7(d), respectively. Note that the estimated precipitation tracks the true precipitation well, even to the right of the black dashed line where the data was not seen during training. Note also the ability of the predictor to track the different seasonal patterns in the missing regions.

7 Conclusion

We proposed a method in which a spatio-temporal predictor can be learned and updated sequentially as spatio-temporal data is obtained as a stream. It is capable of capturing spatially varying temporal patterns, using a non-stationary covariance model that is learned using a covariance-fitting approach. We demonstrated, using both simulated and real climate data, that it is capable of producing accurate predictions in large unobserved space-time test regions. In future work, we intend to further improve the computational efficiency of the method by exploiting the spatially localized structure of the covariance model.

Figure 6: (a) Histogram of the prediction error of all test points in the first scenario with randomly sampled training points. The plot in red is a fitted Gaussian distribution. Note the dynamic range of the data in °C. (b) Histogram of the prediction error of all test points for the second scenario with a contiguous space-time block as test region. The plot in red is a fitted Gaussian distribution. (c) Contour plot of predicted SST for a single spatial slice at time . The red dots denote training points and the black dashed box indicates a contiguous test region. The red cross denotes a point of interest at which the El Niño effect can be observed. (d) Comparison of time series of actual and estimated SST for the point marked by the red cross in 6(c). The data to the right of the black dashed line is part of the contiguous test region and is not used during training.
Figure 7: (a) Histogram of the prediction error for all test points of the precipitation data. The dynamic range of the data is in millimeters. (b) Contour plot of predicted precipitation for a single spatial slice at time . Data inside the black dashed box marks the contiguous test region and is not seen during training. The red dots denote training points. (c) The actual and estimated precipitation time series for the spatial point marked by the red cross in Figure 7(b). (d) The actual and estimated precipitation time series for the spatial point marked by the red plus sign in Figure 7(b). The data to the right of the black dashed line is part of the contiguous test region and is not used during training.


This research was financially supported by the project NewLEADS - New Directions in Learning Dynamical Systems (contract number: 621-2016-06079), funded by the Swedish Research Council.


  • Anderson (1989) Anderson, Theodore Wilbur. Linear latent variable models and covariance structures. Journal of Econometrics, 41(1):91–119, 1989.
  • Bijl et al. (2015) Bijl, Hildo, van Wingerden, Jan-Willem, Schön, Thomas B, and Verhaegen, Michel. Online sparse Gaussian process regression using FITC and PITC approximations. IFAC-PapersOnLine, 48(28):703–708, 2015.
  • Cressie (1985) Cressie, Noel. Fitting variogram models by weighted least squares. Journal of the International Association for Mathematical Geology, 17(5):563–586, 1985.
  • Cressie & Johannesson (2008) Cressie, Noel and Johannesson, Gardar. Fixed rank kriging for very large spatial data sets. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):209–226, 2008.
  • Cressie & Wikle (2011) Cressie, Noel and Wikle, Christopher K. Statistics for spatio-temporal data. John Wiley & Sons, 2011.
  • Cressie et al. (2010) Cressie, Noel, Shi, Tao, and Kang, Emily L. Fixed rank filtering for spatio-temporal data. Journal of Computational and Graphical Statistics, 19(3):724–745, 2010.
  • Faul & Tipping (2002) Faul, Anita C and Tipping, Michael E. Analysis of sparse Bayesian learning. In Advances in neural information processing systems, pp. 383–389, 2002.
  • Fox & Dunson (2015) Fox, Emily B and Dunson, David B. Bayesian nonparametric covariance regression.

    Journal of Machine Learning Research

    , 16:2501–2542, 2015.
  • Jones & Harris (2013) Jones, P. and Harris, I. Climatic research unit (CRU) time-series (ts) version 3.21 of high resolution gridded data of month-by-month variation in climate (jan. 1901–dec. 2012). NCAS British Atmospheric Data Centre, 2013.
  • Lobo et al. (1998) Lobo, Miguel Sousa, Vandenberghe, Lieven, Boyd, Stephen, and Lebret, Hervé. Applications of second-order cone programming. Linear algebra and its applications, 284(1-3):193–228, 1998.
  • Lopes et al. (2008) Lopes, Hedibert Freitas, Salazar, Esther, and Gamerman, Dani. Spatial dynamic factor analysis. Bayesian Analysis, 3(4):759–792, 2008.
  • Quiñonero-Candela & Rasmussen (2005) Quiñonero-Candela, Joaquin and Rasmussen, Carl Edward. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6(Dec):1939–1959, 2005.
  • Rasmussen & Nickisch (2010) Rasmussen, Carl Edward and Nickisch, Hannes. Gaussian processes for machine learning (gpml) toolbox. Journal of Machine Learning Research, 11(Nov):3011–3015, 2010.
  • Rasmussen & Williams (2006) Rasmussen, Carl Edward and Williams, Chris. Gaussian processes for machine learning, volume 1. MIT press Cambridge, 2006.
  • Saatçi (2012) Saatçi, Yunus. Scalable inference for structured Gaussian process models. PhD thesis, Citeseer, 2012.
  • Sarachik & Cane (2010) Sarachik, Edward S. and Cane, Mark A. The El Niño-Southern Oscillation phenomenon. Cambridge University Press, 2010.
  • Särkkä et al. (2013) Särkkä, Simo, Solin, Arno, and Hartikainen, Jouni. Spatiotemporal learning via infinite-dimensional Bayesian filtering and smoothing: A look at Gaussian process regression through Kalman filtering. IEEE Signal Processing Magazine, 30(4):51–61, 2013.
  • Solin & Särkkä (2014) Solin, Arno and Särkkä, Simo. Hilbert space methods for reduced-rank Gaussian process regression. arXiv preprint arXiv:1401.5508, 2014.
  • Stein (2012) Stein, Michael L. Interpolation of spatial data: some theory for kriging. Springer Science & Business Media, 2012.
  • Stoica et al. (2011) Stoica, Petre, Babu, Prabhu, and Li, Jian. New method of sparse parameter estimation in separable models and its use for spectral analysis of irregularly sampled data. IEEE Trans. Signal Processing, 59(1):35–47, 2011.
  • Tipping (2001) Tipping, Michael E. Sparse Bayesian learning and the relevance vector machine. Journal of machine learning research, 1(Jun):211–244, 2001.
  • Wasserman (2006) Wasserman, L. All of Nonparametric Statistics. Springer Texts in Statistics. New York: Springer-Verlag, 2006.
  • Wikle (2011) Wikle, Christopher K. Sea surface temperature anomaly data, 2011. URL http://faculty.missouri.edu/~wiklec/datasets.html.
  • Wilson et al. (2014) Wilson, Andrew G, Gilboa, Elad, Nehorai, Arye, and Cunningham, John P. Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, pp. 3626–3634, 2014.
  • Zachariah et al. (2017) Zachariah, Dave, Stoica, Petre, and Schön, Thomas B. Online learning for distribution-free prediction. arXiv preprint arXiv:1703.05060, 2017.
  • Zammit-Mangion & Cressie (2017) Zammit-Mangion, Andrew and Cressie, Noel. FRK: An R package for spatial and spatio-temporal prediction with large datasets. arXiv preprint arXiv:1705.08105, 2017.

Learning Localized Spatio-Temporal Models From Streaming Data: Supplementary Material

8 Reformulation of the fitting criterion

By expanding the objective function in (11), we obtain the equivalent form


where when . Next we define an auxiliary variable that satisfies

or equivalently


Using the auxiliary variable and the definition of , we can therefore express the objective function as:


where are nonnegative and satisfies the constraint. The minimizing is the learned model parameter. This problem is identified as a convex, semidefinite program, cf. Lobo et al. (1998). We may also add the following normalization constraint,

to match the normalized covariance matrix. This merely adds a linear constraint to problem (18) with a constrained minimizer denoted . We now prove that .

Begin by defining a constant , such that at the minimum of (18). We show that is the only possible value and so both terms in (18) equal each other at the minimum.

Let , and observe that the cost (18) is then bounded by

Thus must satisfy , or . Therefore is the only solution and both terms must be equal at the minimum. We can thus re-write the minimization of (18) as the following problem


with minimizer and where is an auxiliary variable.

Next, consider an equivalent problem to (19) obtained by re-defining the variables as . Then and , so that the equivalent problem becomes


where . The minimizer of the equivalent problem (20) is therefore . Problem (20) is however identical to the constrained problem


whose minimizer is when , which follows from expanding the cost in (11) and the normalization constraint.

Thus we proved that and since the predictor is invariant to uniform scaling of , that is, , we see that the normalization constraint is not relevant for the result.

For further details, see Zachariah et al. (2017).

9 Equivalent form of the predictor

Consider the following augmented problem


Solving for and yields the minimizer

It can be shown that by inserting the minimizing back into (22), we obtain a concentrated cost function which is equal to that in (18). Thus we obtain the sought model parameter from the augmented problem.

Moreover, we can identify . Thus we obtain both and the weights from the augmented problem. Using these facts, we may alternatively solve for first. The second and third terms in (22) can be written as


respectively. Then the minimizing hyperparameters

in (22) can be expressed in closed-form:

Inserting the expression back in to (22) yields a concentrated cost function

which, after dividing by , equals that in (14). Thus using minimizing weights , after concentrating the augmented problem with respect to , yields