Gaussian Process Kernels for Popular State-Space Time Series Models

In this paper we investigate a link between state-space models and Gaussian processes (GPs) for time series modeling and forecasting. In particular, several widely used state-space models are transformed into continuous-time form and the corresponding Gaussian process kernels are derived. Experimental results demonstrate that the derived GP kernels are correct and appropriate for Gaussian process regression. An experiment with a real-world dataset shows that the modeling is identical with state-space models and with the proposed GP kernels. The considered connection allows researchers to look at their models from a different angle and facilitates sharing ideas between these two different modeling approaches.


I Introduction and Motivation

Time series modeling and prediction is one of the oldest topics in statistics. The very first statisticians already dealt with time-dependent data. For example, the Beveridge wheat price index (years 1500 to 1869) and Wolfer's sunspot numbers (years 1610 to 1960) [1] are examples of very early time series. Nowadays time series analysis and forecasting is ubiquitous in many fields of science and engineering. Econometricians, physicists, statisticians, biologists, climatologists etc. encounter time-dependent data in their daily work.

Since this problem is very old and widespread, different fields of science have developed their own sets of methods for the analysis and forecasting of time series. For instance, in the statistics and econometrics domains the most common models are state-space (SS) models [2][3]. In the physics domain the dominating class of models is nonlinear dynamical models [4]. In the machine learning area time series are usually modeled by neural networks, fuzzy systems and Gaussian processes. An overview of time series forecasting can be found in [5].

One historically important subclass of state-space models is the autoregressive integrated moving average (ARIMA) model. It is still widely used and considered one of the best [6] in time series analysis. A structural time series model (STM) is a version of ARIMA where some time series components, like trends and periodicities, are imposed explicitly. It has the advantage over the pure ARIMA methodology that model misspecification is much less probable [3]. Moreover, an STM is a way to introduce prior information and desired behavior into a time series model. Often a practitioner finds it difficult to consider and comprehend different forecasting methods from different domains. This paper is intended to bridge the gap between widely used STM models and the Gaussian processes (GPs) used in machine learning. The terms structural time series model and state-space time series model are used interchangeably in this paper [2].

Basic state-space models are usually presented in the books [2][3] as discrete-time models with Gaussian errors. The structural time series framework allows combining several basic state-space models into more complex ones. There are generalizations of discrete-time SS models to continuous time [3, Chap. 9] which, after a certain procedure, may be converted back to discrete time. Since the errors in the basic SS models are assumed to be Gaussian, these are also GP models; however, a direct systematic connection to the Gaussian processes used in machine learning is unknown to the authors. The goal of this paper is to provide explicit connections between GP models and structural time series models.

Gaussian processes are an important class of models in machine learning [7]. Modeling of time series has been widely addressed by the GP community [8]. The modeling principles differ significantly from the state-space models. Modeling is done in continuous time and the main object to model is the covariance function (and optionally the mean function). There exists a known connection between the continuous-discrete state-space model and the Gaussian process [9]. The advantage of representing a GP in SS form is that the inference can be done in O(n) time, where n is the number of data points, while classic GP regression requires O(n³) operations. However, if the number of data points is relatively small, or we use some modification of the standard GP, the difference in computational time can become negligible [10] on modern computers.

In this paper we derive several GP covariance functions which correspond to the main structural time series models. This explicit connection is useful for researchers with different backgrounds. State-space modelers can see that their methods are equivalent to certain Gaussian processes, and they can try to use various extensions developed in the GP literature. GP specialists, on the other hand, can analyze the covariance functions corresponding to state-space models and borrow ideas from there.

II Structural Time Series Models and Gaussian Processes

A random (stochastic) process is a collection of random variables x(t), t ∈ T, parametrized by the set T. If T is a set of integers then the random process is discrete-time. If T is real-valued the process is continuous-time.

The random process can be completely described by the infinite collection of distribution functions of the form F(x(t_1), …, x(t_n)) for any positive integer n and arbitrarily selected time points t_1, …, t_n. Although this description is complete, it is cumbersome. Therefore, in practice often only the first two distribution functions are taken into account.

These first two distribution functions allow us to define the first two moments of the random process: the mean and the covariance. Using these first two moments we can define an important class of random processes: wide-sense stationary (WSS) random processes. For a random process to be WSS it is sufficient that the mean is constant, the variance is finite, and the covariance function depends only on the difference between time points. More detailed information can be found in any book about stochastic processes, e.g. [1].

II-A Gaussian Process (GP)

A Gaussian process is a random process where, for arbitrarily selected time points t_1, …, t_n, the joint distribution of (x(t_1), …, x(t_n)) is multivariate Gaussian.

To define a Gaussian process it is necessary to define a mean function m(t) and a covariance function k(t, t').

II-B State-Space Models

The state-space model is the model of the form:

$$
\begin{aligned}
z_n &= A_{n-1} z_{n-1} + q_n &&\text{(state / dynamic equation)}\\
y_n &= H_n z_n + \epsilon_n &&\text{(measurement equation)}
\end{aligned}
\tag{1}
$$

It is assumed that y_n (scalar) are the observed values of this random process. The noise terms q_n and ϵ_n are, in the basic case, assumed to be Gaussian; this is the assumption we make in this paper. When the noise terms are Gaussian the random process y_n is also Gaussian, and we find the explicit form of the covariance function for the most popular state-space models.

The Kalman filter algorithm allows making inference about the model (1). It computes the various conditional distributions of the hidden state z_n as well as the likelihood of the model [11].

In the model (1) the state variable evolves in discrete time. There exist equivalent versions where the state evolves in continuous time; these are called continuous-discrete state-space models [3]. The relationships between continuous-discrete state-space models and Gaussian processes have been recently highlighted [9]. In this paper the connection is made more explicit and clear.

II-C Combining Models / Structural Time-Series (STS) Models

The structural time series framework is a way to construct state-space models and incorporate desired properties or prior information into them. These properties are a fixed level, trend, periodicity and quasi-periodicity (cyclicity) [2][3]. The ability to incorporate prior information is an advantage of the STS modeling framework over the more general ARIMA approach. A certain state-space model corresponds to each aforementioned property. Let's show how to combine these models additively. Suppose that y_n = y_n^(tr) + y_n^(per), so y_n is a sum of a trend and a periodic component. It is possible to write this as a single state-space model:

$$
\begin{aligned}
\begin{bmatrix} z_n^{(tr)} \\ z_n^{(per)} \end{bmatrix} &=
\begin{bmatrix} A_{n-1}^{(tr)} & 0 \\ 0 & A_{n-1}^{(per)} \end{bmatrix}
\begin{bmatrix} z_{n-1}^{(tr)} \\ z_{n-1}^{(per)} \end{bmatrix} +
\begin{bmatrix} q_n^{(tr)} \\ q_n^{(per)} \end{bmatrix}\\
y_n &= \begin{bmatrix} H_n^{(tr)} & H_n^{(per)} \end{bmatrix}
\begin{bmatrix} z_n^{(tr)} \\ z_n^{(per)} \end{bmatrix} + \epsilon_n
\end{aligned}
\tag{2}
$$

It can be easily seen that z_n^(tr) and z_n^(per) are uncorrelated random processes if their noise terms are uncorrelated. In this case the covariance function of y_n is:

$$
\begin{aligned}
Cov[y_k, y_{k+n}] &= H^{(tr)} Cov[z_k^{(tr)}, z_{k+n}^{(tr)}]\, (H^{(tr)})^T\\
&+ H^{(per)} Cov[z_k^{(per)}, z_{k+n}^{(per)}]\, (H^{(per)})^T + \delta(n=0)\,\sigma_\epsilon^2
\end{aligned}
\tag{3}
$$

Here δ(n=0) is a Kronecker delta which equals 1 when n = 0. So, the covariance is a sum of two covariances (the measurement matrices H often reduce these to scalars) and a white noise term from the measurement equation. This useful property will be utilized in the subsequent sections.
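As a quick numerical illustration of this additivity (a sketch with arbitrary 2 × 2 covariance blocks standing in for the trend and periodic components), one can check that the stacked measurement matrix of the combined model recovers the sum of the per-component covariances:

```python
import numpy as np

# Sketch of the additivity in Eq. (3): for the stacked model (2) with
# uncorrelated components, H blkdiag(C_tr, C_per) H^T splits into the sum
# of the per-component covariances. The covariance blocks are arbitrary.
rng = np.random.default_rng(1)
M1, M2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
C_tr, C_per = M1 @ M1.T, M2 @ M2.T          # random PSD covariance blocks
H_tr = np.array([[1.0, 0.0]])                # typical measurement rows
H_per = np.array([[1.0, 0.0]])

H = np.hstack([H_tr, H_per])                 # stacked measurement matrix
C = np.block([[C_tr, np.zeros((2, 2))],
              [np.zeros((2, 2)), C_per]])    # block-diagonal joint covariance

lhs = H @ C @ H.T
rhs = H_tr @ C_tr @ H_tr.T + H_per @ C_per @ H_per.T
assert np.allclose(lhs, rhs)                 # covariances simply add
```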

III Bayesian Linear Regression in State-Space Form

First, recall the Bayesian linear regression (BLR) in state-space form. Assume that we have n measurements y_1, …, y_n, which are observed at time points t_1, …, t_n. Further, assume that there is a linear dependency between measurements and time:

$$
\begin{aligned}
y_k &= \theta t_k + \epsilon_k\\
\theta &\sim \mathcal{N}(m_0, P_0) &&\text{(prior of the parameter } \theta)\\
\epsilon_k &\sim \mathcal{N}(0, \sigma_0^2) &&\text{(Gaussian white noise)}
\end{aligned}
\tag{4}
$$

θ is a parameter of the model with prior θ ∼ N(m_0, P_0), and ϵ_k is Gaussian white noise: ϵ_k ∼ N(0, σ_0²). In this formulation, BLR provides the posterior distribution of θ, which we are not currently interested in. Besides, it provides the posterior predictive distribution which, for any set of time points, yields the distribution of the corresponding measurements. It is well known [7] that the same posterior predictive distribution can be obtained by Gaussian process regression (GPR) with the kernel:

$$
y \sim \mathcal{GP}\left(m_0 t,\; P_0\, t t^T + \sigma_0^2 I\right)
\tag{5}
$$

We are interested in representing the BLR model in state-space form because it allows us to look at the model sequentially, as data arrive one by one. Moreover, Kalman-filter-type inference, which is standard for linear state-space models, scales linearly with the number of samples, while Gaussian process or batch BLR inference scales cubically [7]. There are several ways to express BLR in state-space form; the one we are interested in is written below [11, p. 37]:

$$
\begin{cases}
\begin{bmatrix} x_k \\ \theta_k \end{bmatrix} =
\begin{bmatrix} 1 & \Delta t_{k-1} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x_{k-1} \\ \theta_{k-1} \end{bmatrix}\\[4pt]
y_k = \begin{bmatrix} 1 & 0 \end{bmatrix}
\begin{bmatrix} x_k \\ \theta_k \end{bmatrix} + \epsilon_k
\end{cases}
\tag{6}
$$

where x_0 = 0 (i.e. x_0 ∼ N(0, 0)), θ_0 ∼ N(m_0, P_0), ϵ_k ∼ N(0, σ_0²), Δt_{k-1} = t_k − t_{k-1}, and it is assumed that t_0 = 0.

Now let's check that the state-space model listed above is indeed equivalent to Bayesian linear regression. Looking at the equation for θ_k we see that θ_k = θ_{k-1} for all k, so it does not change with time. Since θ_k = θ we have that:

$$
\begin{aligned}
x_1 &= t_1 \theta_0 = \theta t_1\\
x_2 &= x_1 + (t_2 - t_1)\theta_1 = t_2 \theta_1 + t_1(\theta_0 - \theta_1) = t_2 \theta_1 = \theta t_2\\
&\;\;\vdots\\
x_k &= x_{k-1} + (t_k - t_{k-1})\theta_{k-1} = t_k \theta_{k-1} + t_{k-1}(\theta_{k-2} - \theta_{k-1}) = t_k \theta_{k-1} = \theta t_k
\end{aligned}
$$

So, we see that x_k = θ t_k, and if we insert the obtained result into the equation for y_k we get y_k = θ t_k + ϵ_k, which exactly coincides with the original BLR formulation. Using the obtained state-space model we can find the covariance matrix of y. It is the same as the one in Eq. (5). We will explicitly derive the covariance function for the more general state-space model in the next section.

In this section we have shown the equivalence of Gaussian process regression with the covariance matrix in Eq. (5) and the state-space formulation in Eq. (6). Both models are also equivalent to Bayesian linear regression.
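This equivalence can be checked numerically. The sketch below (with hypothetical parameter values) propagates the covariance exactly through the noiseless dynamics of Eq. (6) and compares it with the GP kernel of Eq. (5):

```python
import numpy as np

# Hypothetical parameter values for illustration
m0, P0, sig2 = 0.5, 2.0, 0.1
t = np.array([0.3, 1.0, 1.7, 2.5])
n = len(t)

def A(dt):
    # transition matrix of Eq. (6)
    return np.array([[1.0, dt], [0.0, 1.0]])

# GP kernel of Eq. (5): K = P0 * t t^T + sigma0^2 * I
K_gp = P0 * np.outer(t, t) + sig2 * np.eye(n)

# Exact covariance propagation through the noiseless dynamics:
# Cov[z_i, z_i] = A[t_i] Cov[z_0] A[t_i]^T (with t_0 = 0), and for j >= i
# Cov[z_j, z_i] = A[t_j - t_i] Cov[z_i, z_i] since z_j = A[t_j - t_i] z_i.
Pz0 = np.array([[0.0, 0.0], [0.0, P0]])  # x_0 = 0, theta_0 ~ N(m0, P0)
K_ss = np.empty((n, n))
for i in range(n):
    Pi = A(t[i]) @ Pz0 @ A(t[i]).T
    for j in range(i, n):
        Cij = A(t[j] - t[i]) @ Pi            # Cov[z_j, z_i]
        K_ss[j, i] = K_ss[i, j] = Cij[0, 0]  # pick the x component
K_ss += sig2 * np.eye(n)                      # measurement noise

assert np.allclose(K_ss, K_gp)
```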

IV General State-Space Model with Random Noise

In this section we derive the covariance function for a more general state-space model than in the previous section. In the literature this model is called the local linear trend model (LLLM). It is shown that, under special settings of the parameters, this general state-space model becomes equivalent to well-known time series models: the local level model and BLR; the connection with the quasi-periodic (cyclic) model is very close as well. The derivation of the covariance function provides a useful connection to Gaussian process regression for the aforementioned models. The general state-space model is:

$$
\begin{cases}
\begin{bmatrix} x_k \\ \theta_k \end{bmatrix} =
\begin{bmatrix} 1 & \Delta t_{k-1} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x_{k-1} \\ \theta_{k-1} \end{bmatrix} +
\begin{bmatrix} q_k^{(1)} \\ q_k^{(2)} \end{bmatrix}\\[4pt]
y_k = \begin{bmatrix} 1 & 0 \end{bmatrix}
\begin{bmatrix} x_k \\ \theta_k \end{bmatrix} + \epsilon_k,
\quad \epsilon_k \sim \mathcal{N}(0, \sigma_0^2)
\end{cases}
\tag{7}
$$

where Δt_{k-1} = t_k − t_{k-1}, it is assumed that t_0 = 0, and

$$
\begin{bmatrix} x_0 \\ \theta_0 \end{bmatrix} \sim
\mathcal{N}\!\left(\begin{bmatrix} c_0 \\ m_0 \end{bmatrix},
\begin{bmatrix} K_0 & 0 \\ 0 & P_0 \end{bmatrix}\right),
\qquad
\begin{bmatrix} q_k^{(1)} \\ q_k^{(2)} \end{bmatrix} \sim
\mathcal{N}\!\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix},
\begin{bmatrix} q_0^2 \Delta t_{k-1} & 0 \\ 0 & g_0^2 \Delta t_{k-1} \end{bmatrix}\right)
$$

As we can see, the difference from the state-space model of the previous section consists of the extra noise terms in the dynamic (or state) equation. Another difference is the non-zero prior distribution for the initial state variable x_0; it is now distributed as a Gaussian random variable: x_0 ∼ N(c_0, K_0).

IV-A Noise in Dynamic Equation

In this subsection the extra noise terms which appear in the dynamic equation are briefly discussed. In the two-dimensional noise term q_k the two components are independent and Gaussian distributed. Consider, for example, the model driven only by the first component q_k^(1). Then x_k is a classical Wiener process [7], also called standard Brownian motion, which is a generalization of a simple random walk to continuous time where time measurements are not necessarily equidistant. Its covariance function is Cov[x_k, x_{k+n}] = q_0² min(t_k, t_{k+n}) and it is a basic example of a nonstationary Gaussian process.
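Since x at t_k is a sum of independent Gaussian increments, its covariance matrix is L diag(q_0² Δt) Lᵀ with L a lower-triangular matrix of ones. The following sketch (with a hypothetical noise intensity and time grid) confirms that this reproduces q_0² min(t, t'):

```python
import numpy as np

# Sketch: the Wiener-process covariance q0^2 * min(t, t') follows from
# summing independent Gaussian increments q_k ~ N(0, q0^2 * Delta t_{k-1}).
q0_sq = 1.5                          # hypothetical noise intensity
t = np.array([0.2, 0.5, 1.1, 3.0])   # uneven time grid, with t_0 = 0
dt = np.diff(np.concatenate(([0.0], t)))

# x at t_k is the sum of the first k increments: x = L @ increments,
# where L is lower-triangular ones; hence Cov[x] = L diag(q0^2 dt) L^T
L = np.tril(np.ones((len(t), len(t))))
K = L @ np.diag(q0_sq * dt) @ L.T

assert np.allclose(K, q0_sq * np.minimum.outer(t, t))
```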

IV-B Covariance Function Derivation

Before commencing the derivation of the covariance function we consider an important property of the state-space model in Eq. (7). Denote:

$$
A[\Delta t_{k-1}] = \begin{bmatrix} 1 & \Delta t_{k-1} \\ 0 & 1 \end{bmatrix}
\tag{8}
$$

We can easily verify that:

$$
A[\Delta t_k]\, A[\Delta t_{k-1}] =
\begin{bmatrix} 1 & \Delta t_k \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & \Delta t_{k-1} \\ 0 & 1 \end{bmatrix} =
\begin{bmatrix} 1 & \Delta t_k + \Delta t_{k-1} \\ 0 & 1 \end{bmatrix} =
A[\Delta t_k + \Delta t_{k-1}]
\tag{9}
$$

This is a convenient property which will be utilized during the derivation and later in Sec. V. Consider the covariance function:

$$
\begin{aligned}
Cov[y_k, y_{k+n}] &= E[(y_k - E[y_k])(y_{k+n} - E[y_{k+n}])]\\
&= E[(x_k + \epsilon_k - E[x_k])(x_{k+n} + \epsilon_{k+n} - E[x_{k+n}])]\\
&= Cov[x_k, x_{k+n}] + \delta(n=0)\,\sigma_0^2
\end{aligned}
\tag{10}
$$

Therefore, we see that in order to find the covariance function of y it is enough to find the covariance function of x and add the Kronecker-delta term mentioned above. So, we can ignore the measurement equation for now and write our state-space model in vector form:

$$
z_k = A[\Delta t_{k-1}]\, z_{k-1} + q_k; \qquad
z_k = \begin{bmatrix} x_k \\ \theta_k \end{bmatrix}; \qquad
q_k = \begin{bmatrix} q_k^{(1)} \\ q_k^{(2)} \end{bmatrix}
\tag{11}
$$

Let's express z_k through the initial conditions and the noise terms:

$$
\begin{aligned}
z_k &= A[\Delta t_{k-1}]\, z_{k-1} + q_k\\
&= A[\Delta t_{k-1}]\left(A[\Delta t_{k-2}]\, z_{k-2} + q_{k-1}\right) + q_k\\
&= A[\Delta t_{k-1} + \Delta t_{k-2}]\, z_{k-2} + A[\Delta t_{k-1}]\, q_{k-1} + q_k = \ldots\\
&= A[\Delta t_{k-1} + \Delta t_{k-2} + \ldots + \Delta t_0]\, z_0
 + A[\Delta t_{k-1} + \ldots + \Delta t_1]\, q_1 + \ldots + A[\Delta t_{k-1}]\, q_{k-1} + q_k
\end{aligned}
\tag{12}
$$

Here we used the property of the transition matrix from Eq. (9). We see that z_k is a sum of k+1 terms, each of which is a matrix A with a different argument times a vector. The vectors are z_0, q_1, …, q_k. The argument of each matrix A is also a sum of terms, and the number of terms decreases one by one: from k terms in front of z_0 to zero terms (A = I) in front of q_k. We can easily compute the mean of z_k, taking into account the fact that the noise means E[q_i] = 0, and expanding the expression for z_k:

$$
E[z_k] = A[\Delta t_{k-1} + \Delta t_{k-2} + \ldots + \Delta t_0]\, E[z_0]
= A[t_k] \begin{bmatrix} c_0 \\ m_0 \end{bmatrix}
\tag{13}
$$

Having the expression for z_k and its mean we can compute the covariance Cov[z_k, z_{k+n}]. The computation is quite straightforward using Eq. (12) and the fact that z_0 and all q_i are mutually independent. The final answer is presented below:

$$
\begin{aligned}
Cov[z_k, z_{k+n}] &= A[\Delta t_{k-1} + \ldots + \Delta t_0]\, Cov[z_0, z_0]\, A[\Delta t_{k+n-1} + \ldots + \Delta t_0]^T\\
&+ A[\Delta t_{k-1} + \ldots + \Delta t_1]\, Cov[q_1, q_1]\, A[\Delta t_{k+n-1} + \ldots + \Delta t_1]^T\\
&+ \cdots + I\, Cov[q_k, q_k]\, A[\Delta t_{k+n-1} + \ldots + \Delta t_k]^T
\end{aligned}
\tag{14}
$$

As we see, the expression is a sum of k+1 terms where, in each term, the arguments of the left and right A differ while the arguments of both Cov factors coincide.

Now suppose we want to compute all possible covariances up to some maximal time index N, i.e. Cov[z_i, z_j] for 0 ≤ i, j ≤ N. These covariances can be written in a matrix consisting of (N+1) × (N+1) blocks (because each z_k is 2-dimensional, one block is 2 × 2), so in total it is a 2(N+1) × 2(N+1) matrix. In the next formula we present the form of this matrix; later, expressions for the individual components are provided. To simplify the notation and make it more vivid, suppose N = 3:

$$
\begin{bmatrix}
Cov[z_0,z_0] & Cov[z_0,z_1] & Cov[z_0,z_2] & Cov[z_0,z_3]\\
Cov[z_1,z_0] & Cov[z_1,z_1] & Cov[z_1,z_2] & Cov[z_1,z_3]\\
Cov[z_2,z_0] & Cov[z_2,z_1] & Cov[z_2,z_2] & Cov[z_2,z_3]\\
Cov[z_3,z_0] & Cov[z_3,z_1] & Cov[z_3,z_2] & Cov[z_3,z_3]
\end{bmatrix}
= P\{T\}\, D_0\, (P\{T\})^T
\tag{15}
$$

We are not interested in computing the first row and column of this covariance matrix, since the variable z_0 does not correspond to any real observation; it is just an initial random variable. In the above formula, P{T} equals:

$$
P\{T\} =
\begin{bmatrix}
A[0] & 0 & 0 & 0\\
A[\Delta t_0] & A[0] & 0 & 0\\
A[\Delta t_1 + \Delta t_0] & A[\Delta t_1] & A[0] & 0\\
A[\Delta t_2 + \Delta t_1 + \Delta t_0] & A[\Delta t_2 + \Delta t_1] & A[\Delta t_2] & A[0]
\end{bmatrix}
\tag{16}
$$

Here each element of the matrix is a 2 × 2 block. The notation P{T} means that some matrix operator P is applied to the matrix T. Currently we are not specifying what P and T are; this is done later in this section when we obtain the covariances of x_k.

The matrix D_0 in Eq. (15) is the block diagonal matrix written below:

$$
D_0 =
\begin{bmatrix}
Cov[z_0,z_0] & 0 & 0 & 0\\
0 & Cov[q_1,q_1] & 0 & 0\\
0 & 0 & Cov[q_2,q_2] & 0\\
0 & 0 & 0 & Cov[q_3,q_3]
\end{bmatrix}
\tag{17}
$$

It can be verified that the expressions in Eq. (14) and Eq. (15) are equal. The covariances Cov[q_i, q_i] are the diagonal matrices shown in Eq. (7). Thus, we have derived the expression for Cov[z_i, z_j]. However, we are not interested in it as is. We would like to know the covariances Cov[x_i, x_j], because they are directly related to the covariances of the observed variable y, as shown in Eq. (10). This means that we are interested in the covariance matrix consisting of the odd columns and rows of the matrix in Eq. (15). To derive it, consider the structure of the expression which is the main building block of the covariance functions in Eq. (14) and Eq. (15):

$$
\begin{aligned}
A\Big[\sum \Delta t_m\Big]
\begin{bmatrix} q_0^2 \Delta t_{i-1} & 0 \\ 0 & g_0^2 \Delta t_{i-1} \end{bmatrix}
A^T\Big[\sum \Delta t_n\Big]
&= \begin{bmatrix} 1 & \sum \Delta t_m \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} q_0^2 \Delta t_{i-1} & 0 \\ 0 & g_0^2 \Delta t_{i-1} \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ \sum \Delta t_n & 1 \end{bmatrix}\\
&= \begin{bmatrix}
q_0^2 \Delta t_{i-1} + g_0^2 \Delta t_{i-1} \left(\sum \Delta t_m\right)\left(\sum \Delta t_n\right) & \cdot\\
\cdot & \cdot
\end{bmatrix}
\end{aligned}
\tag{18}
$$

In the above formula we are interested in the top left element, because it gives exactly the covariances of x from the covariances of z. We see that the required covariance consists of two parts which correspond to the two terms in the sum above. The first term is affected only by the top diagonal entry of the middle matrix in the initial product. The second term is affected by the bottom diagonal entry and the arguments of the matrices A. Now we are ready to write the required covariances by looking at Eq. (14), analyzing the contribution of each term there, and taking into account Eq. (18). Representation (15) is also useful in deriving the second part of the following result:

$$
\begin{bmatrix}
Cov[x_1,x_1] & Cov[x_1,x_2] & Cov[x_1,x_3]\\
Cov[x_2,x_1] & Cov[x_2,x_2] & Cov[x_2,x_3]\\
Cov[x_3,x_1] & Cov[x_3,x_2] & Cov[x_3,x_3]
\end{bmatrix}
= Cov_1[\cdot] + Cov_2[\cdot]
\tag{19}
$$

where:

$$
Cov_1[x_k, x_{k+n}] = K_0 + q_0^2 \Delta t_0 + q_0^2 \Delta t_1 + \ldots + q_0^2 \Delta t_{k-1} = K_0 + q_0^2 t_k
\tag{20}
$$

Another way to write Cov_1 is:

$$
Cov_1[x_k, x_{k+n}] = K_0 + q_0^2 \min(t_k, t_{k+n})
\tag{21}
$$

The expression for Cov_2 is written below:

$$
Cov_2[\cdot] = T D T^T
\tag{22}
$$

where:

$$
T = \begin{bmatrix}
0 & 0 & 0 & 0\\
\Delta t_0 & 0 & 0 & 0\\
\Delta t_1 + \Delta t_0 & \Delta t_1 & 0 & 0\\
\Delta t_2 + \Delta t_1 + \Delta t_0 & \Delta t_2 + \Delta t_1 & \Delta t_2 & 0
\end{bmatrix},
\qquad
D = \begin{bmatrix}
P_0 & 0 & 0 & 0\\
0 & g_0^2 \Delta t_0 & 0 & 0\\
0 & 0 & g_0^2 \Delta t_1 & 0\\
0 & 0 & 0 & g_0^2 \Delta t_2
\end{bmatrix}
$$

The matrix T can also be represented as:

$$
T = \begin{bmatrix}
0 & 0 & 0 & 0\\
1 & 1 & 0 & 0\\
1 & 1 & 1 & 0\\
1 & 1 & 1 & 1
\end{bmatrix}
\begin{bmatrix}
0 & 0 & 0 & 0\\
\Delta t_0 & 0 & 0 & 0\\
\Delta t_1 & \Delta t_1 & 0 & 0\\
\Delta t_2 & \Delta t_2 & \Delta t_2 & 0
\end{bmatrix}
\tag{23}
$$

In Eq. (22) we must ignore the first row and the first column, so that the resulting matrix is 3 × 3 (in general, N × N). It is possible to write this formula directly with 3 × 3 matrices, but this form is useful for the derivation of the quasi-periodic (cyclic) covariance in the next section. Eq. (19) is the final answer for the covariance function of the model stated in Eq. (7). Now, given time points, we can compute the covariance function and the mean function, which is given in Eq. (13), and use Gaussian process regression in the regular way. Sample paths from a GP with this covariance function are presented in Fig. 1.
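The result can be verified numerically. The sketch below (hypothetical parameter values) evaluates Cov_1 from Eq. (21) and Cov_2 = T D Tᵀ from Eq. (22), and compares the sum with exact covariance propagation through the recursion of Eq. (7):

```python
import numpy as np

# Sketch verifying the LLLM covariance of Eq. (19): Cov1 (Eq. 21) plus
# Cov2 = T D T^T (Eq. 22, first row/column dropped) is checked against
# exact covariance propagation through the state-space recursion (7).
# Parameter values are hypothetical.
K0, P0, q_sq, g_sq = 0.4, 1.2, 0.7, 0.3
t = np.array([0.5, 1.3, 2.0])           # t_1..t_3, with t_0 = 0
tt = np.concatenate(([0.0], t))
n = len(t)

# Cov1[x_i, x_j] = K0 + q0^2 * min(t_i, t_j)
Cov1 = K0 + q_sq * np.minimum.outer(t, t)

# T[k, j] = t_k - t_j for j < k, else 0;  D = diag(P0, g^2 dt_0, ...)
T = np.tril(np.subtract.outer(tt, tt), k=-1)
D = np.diag(np.concatenate(([P0], g_sq * np.diff(tt))))
Cov2 = (T @ D @ T.T)[1:, 1:]            # drop the z_0 row and column
K_formula = Cov1 + Cov2

# Exact propagation: P_k = A P_{k-1} A^T + Q_k, and for j >= i
# Cov[z_j, z_i] = A[t_j - t_i] Cov[z_i, z_i] (later noise is independent)
def A(dt):
    return np.array([[1.0, dt], [0.0, 1.0]])

K_ss = np.empty((n, n))
P = np.array([[K0, 0.0], [0.0, P0]])    # Cov[z_0]
Ps = []
for k in range(n):
    dt = tt[k + 1] - tt[k]
    P = A(dt) @ P @ A(dt).T + np.diag([q_sq * dt, g_sq * dt])
    Ps.append(P)
for i in range(n):
    for j in range(i, n):
        Cij = A(t[j] - t[i]) @ Ps[i]
        K_ss[j, i] = K_ss[i, j] = Cij[0, 0]

assert np.allclose(K_formula, K_ss)
```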

Several standard structural time series models are actually versions of the general model described above; they are listed later in this section. The Bayesian linear regression model considered in Eq. (6) in Section III is a lucid representative as well. If we set q_0 = 0, g_0 = 0, c_0 = 0 and K_0 = 0, as in the expression for BLR, then Cov_1 vanishes and in the diagonal matrix D only the first element P_0 is non-zero. Then, expanding everything, we easily get that the covariance becomes equal to the one of BLR (Eq. 5).

IV-C Local Level Model

The local level model (LLM) is the simplest of the structural time series models [2]. Its standard representation in the literature is:

$$
\begin{cases}
x_k = x_{k-1} + q_k; & q_k \sim \mathcal{N}(0, q_0^2)\\
y_k = x_k + \epsilon_k; & \epsilon_k \sim \mathcal{N}(0, \sigma_0^2)
\end{cases}
\qquad x_0 \sim \mathcal{N}(c_0, K_0)
\tag{24}
$$

As we can see, this is a random walk expressed by the dynamic variable x_k, additionally submerged into the white noise ϵ_k. The covariance of this model, as was mentioned in Sec. IV-A, is Cov[x_k, x_{k+n}] = K_0 + q_0² min(t_k, t_{k+n}). If we generalize this model to arbitrary time intervals it can be written as:

$$
\begin{cases}
\begin{bmatrix} x_k \\ \theta_k \end{bmatrix} =
\begin{bmatrix} 1 & \Delta t_{k-1} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x_{k-1} \\ \theta_{k-1} \end{bmatrix} +
\begin{bmatrix} q_k^{(1)} \\ q_k^{(2)} \end{bmatrix}\\[4pt]
y_k = \begin{bmatrix} 1 & 0 \end{bmatrix}
\begin{bmatrix} x_k \\ \theta_k \end{bmatrix} + \epsilon_k,
\quad \epsilon_k \sim \mathcal{N}(0, \sigma_0^2)
\end{cases}
\tag{25}
$$

where Δt_{k-1} = t_k − t_{k-1}, it is assumed that t_0 = 0, and

$$
\begin{bmatrix} x_0 \\ \theta_0 \end{bmatrix} \sim
\mathcal{N}\!\left(\begin{bmatrix} c_0 \\ \boxed{0} \end{bmatrix},
\begin{bmatrix} K_0 & 0 \\ 0 & \boxed{0} \end{bmatrix}\right),
\qquad
\begin{bmatrix} q_k^{(1)} \\ q_k^{(2)} \end{bmatrix} \sim
\mathcal{N}\!\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix},
\begin{bmatrix} q_0^2 \Delta t_{k-1} & 0 \\ 0 & \boxed{0} \end{bmatrix}\right)
$$

The parameters which are nullified with respect to the general model (LLLM) are denoted by boxes. We can see that the equation for θ_k is redundant, because θ_0 is initialized as zero and the corresponding noise term is also zero. The covariance function could also be obtained by using the formula for the general covariance function and setting the corresponding coefficients to zero.

At the end of this section it is worth mentioning that although the LLM is the simplest structural time series model, it can be successfully applied to real-world data [2, p. 16].
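As a small illustration (toy data and hypothetical parameters), the derived LLM covariance k(t, t') = K_0 + q_0² min(t, t') can be plugged directly into standard GP regression:

```python
import numpy as np

# Sketch: GP regression with the local-level kernel derived above,
# k(t, t') = K0 + q0^2 * min(t, t'). Data and parameters are hypothetical.
K0, q_sq, sig2 = 1.0, 0.5, 1e-6
t_train = np.array([0.5, 1.0, 2.0, 3.0])
y_train = np.array([0.1, 0.4, 0.3, 0.8])
t_test = np.array([1.5, 2.5])

def k(a, b):
    return K0 + q_sq * np.minimum.outer(a, b)

Kxx = k(t_train, t_train) + sig2 * np.eye(len(t_train))
Ksx = k(t_test, t_train)
mu = Ksx @ np.linalg.solve(Kxx, y_train)   # posterior predictive mean

# with near-zero noise the posterior mean interpolates the data
mu_train = k(t_train, t_train) @ np.linalg.solve(Kxx, y_train)
assert np.allclose(mu_train, y_train, atol=1e-4)
```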

IV-D Local Linear Trend Model (LLLM)

The next model we consider is called the local linear trend model (LLLM). As was previously said, it is the same as the general model discussed in this section. The slope variable θ_k changes by a random walk, and the coordinate variable x_k also has a random walk component, similarly to the LLM. One can consider a simplification of the LLLM in which only θ_k changes by a random walk but x_k does not. As stated in [2, p. 44], this simplified model produces smoother sample paths than the general LLLM.

V Periodic and Quasi-Periodic (Cyclic) Modeling

In the structural time series framework there are several models for periodicities and cycles (quasi-periodicities). We consider here the most popular model which is frequently used for cyclic modeling [2, p. 44]:

$$
\begin{cases}
\begin{bmatrix} x_k \\ x_k^\star \end{bmatrix} =
\begin{bmatrix} \cos(\omega_c \Delta t_{k-1}) & \sin(\omega_c \Delta t_{k-1}) \\
 -\sin(\omega_c \Delta t_{k-1}) & \cos(\omega_c \Delta t_{k-1}) \end{bmatrix}
\begin{bmatrix} x_{k-1} \\ x_{k-1}^\star \end{bmatrix} +
\begin{bmatrix} q_k^{(1)} \\ q_k^{(2)} \end{bmatrix}\\[4pt]
y_k = \begin{bmatrix} 1 & 0 \end{bmatrix}
\begin{bmatrix} x_k \\ x_k^\star \end{bmatrix} + \epsilon_k,
\quad \epsilon_k \sim \mathcal{N}(0, \sigma_0^2)
\end{cases}
\tag{26}
$$

where Δt_{k-1} = t_k − t_{k-1}, it is assumed that t_0 = 0, and

$$
\begin{bmatrix} x_0 \\ x_0^\star \end{bmatrix} \sim
\mathcal{N}\!\left(\begin{bmatrix} m_0 \\ m_0 \end{bmatrix},
\begin{bmatrix} P_0 & 0 \\ 0 & P_0 \end{bmatrix}\right),
\qquad
\begin{bmatrix} q_k^{(1)} \\ q_k^{(2)} \end{bmatrix} \sim
\mathcal{N}\!\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix},
\begin{bmatrix} g_0^2 \Delta t_{k-1} & 0 \\ 0 & g_0^2 \Delta t_{k-1} \end{bmatrix}\right)
$$

The presented equations are already a generalization of the discrete-time model usually encountered in the books [2][3] to continuous time. The Δt_{k-1} are used to express the uneven time sampling. If the sampling is even, all the Δt_{k-1} are equal to one.

Notice that the model is completely symmetric with respect to the components of the vector [x_k, x_k^⋆]^T: the initial conditions are symmetric and the noise is symmetric. If we suppose there is no noise in the model, then it is straightforward to show that the covariance function of x_k is:

$$
Cov[x_k, x_{k+n}] = P_0 \cos[\omega_c (t_{k+n} - t_k)]
\tag{27}
$$

So, it is a periodic covariance function. The process can be considered as a random process where randomness originates only from the initial conditions. This process is also wide-sense stationary, since the covariance function depends on the difference of the time points. Again, if we suppose that the noise vector is absent from the dynamic model, then the variable x_k is just a cosine wave. This can be deduced by considering x_k = cos(ω_c t_k) x_0 + sin(ω_c t_k) x_0^⋆, which is a sum of a cosine and a sine whose coefficients are the initial values. This sum can be represented as a single cosine wave whose phase depends on those coefficients. Also, we need to take into account the property (28), which is discussed shortly. Hence, without extra white noise x_k is a cosine wave; with white noise present, deviations from strict periodicity are possible.

V-A Quasi-Periodic (Cyclic) Covariance Function

Let’s consider the dynamic matrix. Its spectral decomposition is written below:

$$
A[\Delta t_{k-1}] =
\begin{bmatrix} \cos(\omega_c \Delta t_{k-1}) & \sin(\omega_c \Delta t_{k-1}) \\
 -\sin(\omega_c \Delta t_{k-1}) & \cos(\omega_c \Delta t_{k-1}) \end{bmatrix}
= \frac{1}{2}
\begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}
\begin{bmatrix} e^{i\omega_c \Delta t_{k-1}} & 0 \\ 0 & e^{-i\omega_c \Delta t_{k-1}} \end{bmatrix}
\begin{bmatrix} 1 & -i \\ 1 & i \end{bmatrix}
\tag{28}
$$

Using this, it is easy to show that the property (9) is valid again. Therefore, we conclude that all the results derived in Section IV which are based on property (9) are also valid. In particular, expressions (14) and (15) hold, which already give us the results for the covariance matrices of z_k. Repeating the same steps as were done to derive the covariance formula (19), we can derive a similar formula for the cyclic model (26). The derived covariance function consists of two parts as in Eq. (19); however, we must exclude the first row and the first column from the covariance matrix provided below, similarly to formula (22). The two parts Cov_1 and Cov_2 are written below:

$$
Cov_1[\cdot] = L\{\mathrm{Cos}\{T\}\}\; D\; \left(L\{\mathrm{Cos}\{T\}\}\right)^T
\tag{29}
$$

In this expression the matrices T and D are exactly the same as in Eq. (22). There are two new nested matrix operations: L{·} and Cos{·}. The first one leaves the lower-triangular part (including the main diagonal) of the argument matrix intact and puts zeros in the upper-triangular part. The second one applies the cos function element-wise to the matrix.

Similarly,

$$
Cov_2[\cdot] = L\{\mathrm{Sin}\{T\}\}\; D\; \left(L\{\mathrm{Sin}\{T\}\}\right)^T
\tag{30}
$$

Here, Sin{·} is used instead of Cos{·} with a similar meaning: element-wise application of the sin function to the argument matrix.

Thus, we obtained the expression for the covariance matrix of the quasi-periodic model in Eq. (26). Hence, it is now possible to model this cyclic state-space model as a Gaussian process with the obtained covariance function. GP sample paths with the cyclic covariance function are shown in panels (a) and (b) of the corresponding figure.
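The quasi-periodic construction can also be verified numerically. The sketch below (hypothetical parameters; Cos{·} and Sin{·} are taken as cos(ω_c ·) and sin(ω_c ·) applied element-wise) compares Eqs. (29) and (30) with exact covariance propagation through model (26):

```python
import numpy as np

# Sketch of the quasi-periodic covariance of Eqs. (29)-(30):
# Cov = L{Cos{T}} D (L{Cos{T}})^T + L{Sin{T}} D (L{Sin{T}})^T,
# checked against exact covariance propagation through model (26).
# Parameter values are hypothetical.
P0, g_sq, omega = 0.8, 0.25, 2.0
t = np.array([0.4, 1.0, 1.9])
tt = np.concatenate(([0.0], t))
n = len(t)

T = np.subtract.outer(tt, tt)            # T[k, j] = t_k - t_j
D = np.diag(np.concatenate(([P0], g_sq * np.diff(tt))))
LC = np.tril(np.cos(omega * T))          # L{Cos{T}}: keep lower triangle
LS = np.tril(np.sin(omega * T))          # L{Sin{T}}
K_formula = (LC @ D @ LC.T + LS @ D @ LS.T)[1:, 1:]   # drop z_0 row/col

def A(dt):                               # rotation dynamics of Eq. (26)
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    return np.array([[c, s], [-s, c]])

K_ss = np.empty((n, n))
P = P0 * np.eye(2)                       # Cov[z_0]
Ps = []
for k in range(n):
    dt = tt[k + 1] - tt[k]
    P = A(dt) @ P @ A(dt).T + g_sq * dt * np.eye(2)
    Ps.append(P)
for i in range(n):
    for j in range(i, n):
        Cij = A(t[j] - t[i]) @ Ps[i]     # Cov[z_j, z_i] for j >= i
        K_ss[j, i] = K_ss[i, j] = Cij[0, 0]

assert np.allclose(K_formula, K_ss)
```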

If the data contain several frequencies or periodicities, then the corresponding state-space models can be combined in the measurement equations for y_k; see subsection II-C. In GP regression this is equivalent to the summation of covariance functions.

Also, if the periodic pattern in the data is not close to a cosine wave, then we need to take more harmonics to model this pattern. In that case we combine several frequencies ω_c, 2ω_c, …, kω_c (k harmonics) as described in the previous paragraph.

V-B Gaussian Periodic Covariance Function

It is interesting to compare the periodicity modeling approach proposed above with the approach used in Gaussian process regression (GPR). In GPR there exists a periodic covariance function [7, p. 92] which is expressed as:

$$
Cov[t_1, t_2] = \exp\left(-\sin^2\left(\frac{\omega_c (t_1 - t_2)}{2}\right)\right)
\tag{31}
$$

This is also a periodic covariance function with frequency ω_c. Sample paths from a GP with the periodic covariance are presented in panel (c) of the corresponding figure. Since the covariance function is periodic, it is possible to represent it as a Fourier series with harmonics kω_c. This is exactly the case which can be represented by combining state-space models, as described in the previous subsection. Thus, the periodic covariance function used in GPR can be represented by an equivalent random process in state-space form. This is done in the paper [12].
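For concreteness, here is a minimal sketch of the covariance in Eq. (31); the unit amplitude and lengthscale are simplifying assumptions (practical forms add a variance scale and a lengthscale inside the exponent):

```python
import numpy as np

# Minimal sketch of the periodic covariance of Eq. (31); unit amplitude
# and lengthscale are assumed for simplicity.
def periodic_cov(t1, t2, omega_c):
    return np.exp(-np.sin(omega_c * (t1 - t2) / 2.0) ** 2)

omega_c = 2.0 * np.pi                      # hypothetical frequency: period 1
t = np.linspace(0.0, 3.0, 7)
K = periodic_cov(t[:, None], t[None, :], omega_c)

# shifting one argument by a full period leaves the covariance unchanged
assert np.allclose(periodic_cov(t, t + 1.0, omega_c),
                   periodic_cov(t, t, omega_c))
```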

In the same paper the question of representing the quasi-periodic covariance function is also discussed. The quasi-periodic covariance function is a product of some stationary covariance function (e.g. the Matérn covariance) [12] and the periodic one in Eq. (31). The random process modeled by a quasi-periodic covariance function has no fixed period; the period length fluctuates. Sample paths of the quasi-periodic covariance are shown in panel (d) of the corresponding figure. By using noise in model (26) we also deviate from strict periodicity; however, there is no direct correspondence between the model in Eq. (26) and the quasi-periodic covariance function in the paper [12]. This question requires further investigation and is not touched upon further here.

VI Damped Trend Model

In this section we consider the damped trend model. It is similar to the general model in Eq. (7), except that the slope gradually decreases. Here we present only the dynamic equation of this model, because the rest is the same as in Eq. (7).

$$
\begin{bmatrix} x_k \\ \theta_k \end{bmatrix} =
\begin{bmatrix} 1 & \Delta t_{k-1} \\ 0 & \boxed{\phi} \end{bmatrix}
\begin{bmatrix} x_{k-1} \\ \theta_{k-1} \end{bmatrix} +
\begin{bmatrix} q_k^{(1)} \\ q_k^{(2)} \end{bmatrix}
\tag{32}
$$

The damping factor ϕ is denoted by the box around it. It must satisfy 0 < ϕ < 1 so that the trend is damped.

Next we present the covariance function of this damped trend. The derivation is omitted because it is very similar to the derivation of the LLLM covariance in Eq. (19). The first part of the covariance is the same as in Eq. (21). The second part is also similar to Eq. (22), except that the matrix T must be substituted by:

 T= (33) ⎡⎢ ⎢ ⎢ ⎢⎣000010001ϕ