# Efficient parameter estimation for parabolic SPDEs based on a log-linear model for realized volatilities

We construct estimators for the parameters of a parabolic SPDE with one spatial dimension based on discrete observations of a solution in time and space on a bounded domain. We establish central limit theorems for a high-frequency asymptotic regime. The asymptotic variances are shown to be substantially smaller than those of existing estimation methods. Moreover, asymptotic confidence intervals are directly feasible. Our approach builds upon realized volatilities and their asymptotic representation as the response of a log-linear model with a spatial explanatory variable. This yields efficient estimators based on realized volatilities with optimal rates of convergence and minimal variances. We demonstrate efficiency gains compared to previous estimation methods numerically and in Monte Carlo simulations.


## 1 Introduction

Dynamic models based on stochastic partial differential equations (SPDEs) have recently attracted great interest, in particular regarding their statistical calibration; see, for instance, Hambly and Søjmark (2019), Fuglstad and Castruccio (2020), Altmeyer and Reiß (2021) and Altmeyer et al. (2022). Bibinger and Trabs (2020), Cialenco and Huang (2020) and Chong (2020) have, independently of one another, studied parameter estimation for parabolic SPDEs based on power variation statistics of time increments when a solution of the SPDE is observed discretely in time and space. Bibinger and Trabs (2020) pointed out the relation of their estimators to realized volatilities, which are well-known as key statistics for financial high-frequency data in econometrics. We develop estimators based on these realized volatilities which significantly improve upon the M-estimation from Bibinger and Trabs (2020). Our new estimators attain smaller asymptotic variances, they are explicit functions of realized volatilities, and we can readily provide asymptotic confidence intervals. Since generalized estimation approaches for small noise asymptotics in Kaino and Uchida (2021a), rate-optimal estimation for more general observation schemes in Hildebrandt and Trabs (2021), long-span asymptotics in Kaino and Uchida (2021b), and estimation with two spatial dimensions in Tonaki et al. (2022) have been built upon the M-estimator from Bibinger and Trabs (2020), we expect that our new method is of interest to the community to further improve parameter estimation for SPDEs. Our theoretical framework is the same as in Bibinger and Trabs (2020). We consider the linear parabolic SPDE

$$\mathrm{d}X_t(y)=\Big(\theta_2\frac{\partial^2 X_t(y)}{\partial y^2}+\theta_1\frac{\partial X_t(y)}{\partial y}+\theta_0 X_t(y)\Big)\,\mathrm{d}t+\sigma\,\mathrm{d}B_t(y),\tag{1}$$

with one space dimension. The bounded spatial domain is the unit interval [0,1], which can easily be generalized to an arbitrary bounded interval. Although estimation methods in case of an unbounded spatial domain are expected to be similar, the theory is significantly different, see Bibinger and Trabs (2019). Here, (B_t) is a cylindrical Brownian motion in a Sobolev space on [0,1]. The initial value ξ = X₀ is assumed to be independent of (B_t). We work with Dirichlet boundary conditions: X_t(0) = X_t(1) = 0, for all t ≥ 0. A specific example is the SPDE model

$$\mathrm{d}X_t(y)=\Big(\frac{\partial X_t(y)}{\partial y}+\frac{\kappa}{2}\frac{\partial^2 X_t(y)}{\partial y^2}\Big)\,\mathrm{d}t+\sigma\,\mathrm{d}B_t(y),\tag{2}$$

used for the term structure model of Cont (2005).

We focus on parameter estimation based on discrete observations of a solution on the unit square [0,1]². The spatial observation points y_j, j = 1,…,m, have at least distance δ > 0 from the boundaries, at which the solution is zero by the Dirichlet conditions.

###### Assumption 1.

We assume equidistant high-frequency observations in time, t_i = iΔ_n, i = 0,…,n, where Δ_n → 0, asymptotically. We consider the same asymptotic regime as in Bibinger and Trabs (2020), where m = m_n → ∞, such that m_n = O(n^ρ), for some ρ ∈ (0, 1/2), and m_n·(y_{j+1} − y_j) is uniformly in n and j bounded from below and from above.

This asymptotic regime with more observations in time than in space is natural for most applications. Hildebrandt and Trabs (2021) and Bibinger and Trabs (2020) showed that in this regime the realized volatilities

$$\mathrm{RV}_n(y_j)=\sum_{i=1}^{n}\big(X_{i\Delta_n}(y_j)-X_{(i-1)\Delta_n}(y_j)\big)^2,\quad j=1,\ldots,m,$$

are sufficient to estimate the parameters with optimal rate √(nm_n), while Hildebrandt and Trabs (2021) establish different optimal convergence rates when this condition on the asymptotic regime is violated and propose rate-optimal estimators for that setup based on double increments in space and time.
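The realized volatilities above are plain sums of squared time increments per spatial point. A minimal sketch of their computation, assuming the observations are stored in an (n+1) × m array `X` with `X[i, j] = X_{t_i}(y_j)` (array layout and names are our own choice, not from the paper):

```python
import numpy as np

def realized_volatilities(X):
    """Realized volatilities RV_n(y_j) = sum_i (X_{t_i}(y_j) - X_{t_{i-1}}(y_j))^2,
    computed column-wise from an (n+1) x m array X[i, j] = X_{t_i}(y_j)."""
    increments = np.diff(X, axis=0)       # time increments at each spatial point
    return np.sum(increments**2, axis=0)  # one realized volatility per y_j

# toy usage: Brownian-motion-like columns on a unit time horizon, for which
# the realized volatility approximates the quadratic variation 1
rng = np.random.default_rng(0)
n, m = 1000, 5
X = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), size=(n + 1, m)), axis=0)
rv = realized_volatilities(X)
print(rv.shape)
```

Note that for the SPDE the increments scale like Δ_n^{1/4} rather than Δ_n^{1/2}, so RV_n is of order n√Δ_n instead of order one; the computation itself is identical.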

The natural parameters which are identifiable under high-frequency asymptotics are

$$\sigma_0^2:=\sigma^2/\sqrt{\theta_2}\quad\text{and}\quad\kappa:=\theta_1/\theta_2,\tag{3}$$

the normalized volatility parameter σ₀² and the curvature parameter κ. While Bibinger and Trabs (2020) focused first on estimating the volatility when the parameters θ₁ and θ₂ are known, we consider the estimation of the curvature parameter κ in Section 2. We present an estimator for known σ₀² and a robustification for the case of unknown σ₀². In Section 3 we develop a novel estimator for both parameters, (κ, σ₀²), which significantly improves upon the M-estimator from Section 4 of Bibinger and Trabs (2020). It is based on a log-linear model for rescaled realized volatilities with the spatial explanatory variable y_j. Section 4 is on the implementation and numerical results. We draw a numerical comparison of asymptotic variances and show the new estimators' improvement over existing methods. We demonstrate significant efficiency gains for finite-sample applications in a Monte Carlo simulation study. All proofs are given in Section 5.

## 2 Curvature estimation

Section 3 of Bibinger and Trabs (2020) addressed the estimation of σ² in (1) when θ₁ and θ₂ are known. Here, we focus on the estimation of κ from (3), first when σ₀² is known and then for unknown volatility. The volatility estimator by Bibinger and Trabs (2020), based on observations in one spatial point y, used the realized volatility RV_n(y) = Σ_{i=1}^n (X_{iΔ_n}(y) − X_{(i−1)Δ_n}(y))². The central limit theorem with rate √n for this estimator from Theorem 3.3 in Bibinger and Trabs (2020) yields equivalently that

$$\sqrt{n}\,\Big(\frac{\sqrt{\pi}\,e^{\kappa y}}{\sigma_0^2\,n\sqrt{\Delta_n}}\,\mathrm{RV}_n(y)-1\Big)\ \xrightarrow{d}\ \mathcal{N}(0,\Gamma\pi),\tag{4}$$

with a numerical constant Γ analytically determined by a series of covariances. Since the marginal processes of X in time have regularity 1/4, the scaling factor n√Δ_n for RV_n(y) is natural. To estimate κ consistently we need observations in at least two distinct spatial points. A key observation in Bibinger and Trabs (2020) was that under Assumption 1, realized volatilities in different spatial observation points de-correlate asymptotically. From (4) we can hence write

$$\frac{\sqrt{\pi}\,e^{\kappa y_j}}{\sigma_0^2\,n\sqrt{\Delta_n}}\,\mathrm{RV}_n(y_j)=1+\sqrt{\Gamma\pi\Delta_n}\,Z_j+R_{n,j},\quad j=1,\ldots,m,\tag{5}$$

with i.i.d. standard normal random variables Z_j and remainders R_{n,j}, which turn out to be asymptotically negligible for the asymptotic distribution of the estimators. The equation

$$\log\Big(\frac{\mathrm{RV}_n(y_j)}{n\sqrt{\Delta_n}}\Big)=\log\Big(\frac{\sigma_0^2}{\sqrt{\pi}}\Big)-\kappa y_j+\log\big(1+R_{n,j}+\sqrt{\Gamma\pi\Delta_n}\,Z_j\big),\tag{6}$$

and an expansion of the logarithm yield an approximation

$$\log\Big(\frac{\mathrm{RV}_n(y_j)}{n\sqrt{\Delta_n}}\Big)\approx\log\Big(\frac{\sigma_0^2}{\sqrt{\pi}}\Big)-\kappa y_j+\sqrt{\Gamma\pi\Delta_n}\,Z_j.\tag{7}$$

In the upcoming example we briefly discuss optimal estimation in a related simple statistical model.

###### Example 1.

Assume independent observations Y_i ∼ N(μ, ς_i²), i = 1,…,m, where μ is unknown and the variances ς_i² are known. The maximum likelihood estimator (mle) is given by

$$\hat{\mu}=\frac{\sum_{i=1}^m Y_i\,\varsigma_i^{-2}}{\sum_{i=1}^m \varsigma_i^{-2}}.\tag{8}$$

The expected value and variance of this mle are

$$\mathbb{E}[\hat{\mu}]=\mu\quad\text{and}\quad\mathrm{Var}(\hat{\mu})=\Big(\sum_{i=1}^m\varsigma_i^{-2}\Big)^{-1}.$$

Note that ς_i⁻² can be viewed as the Fisher information of observing Y_i. The efficiency of the mle in this model is implied by standard asymptotic statistics.

If we have independent observations with the same expectations and variances as in Example 1, but not necessarily normally distributed, the estimator from (8) can be shown to be the linear unbiased estimator with minimal variance.
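The inverse-variance weighting of Example 1 can be checked directly by simulation. A minimal sketch (the parameter values are illustrative only, not from the paper):

```python
import numpy as np

def weighted_mle(Y, sigmas):
    """MLE of a common mean mu from independent Y_i ~ N(mu, sigma_i^2):
    the inverse-variance weighted average, cf. (8)."""
    w = 1.0 / sigmas**2
    mu_hat = np.sum(w * Y) / np.sum(w)
    var_mu = 1.0 / np.sum(w)   # Var(mu_hat) = (sum_i sigma_i^{-2})^{-1}
    return mu_hat, var_mu

# Monte Carlo sanity check with mu = 2 and heteroscedastic noise levels
rng = np.random.default_rng(1)
sigmas = np.array([0.5, 1.0, 2.0, 4.0])
draws = np.array([weighted_mle(2.0 + sigmas * rng.normal(size=4), sigmas)[0]
                  for _ in range(20000)])
print(draws.mean(), draws.var())  # close to 2 and to 1/sum(sigmas**-2)
```

The empirical variance of the draws matches the closed form (Σ ς_i⁻²)⁻¹, which is exactly how the Fisher-information interpretation enters the curvature estimators below.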

If σ₀² is known, this and (7) motivate the following curvature estimator:

$$\hat{\kappa}_{n,m}=-\frac{\sum_{j=1}^{m}y_j\Big(\log\big(\mathrm{RV}_n(y_j)/(n\sqrt{\Delta_n})\big)-\log\big(\sigma_0^2/\sqrt{\pi}\big)\Big)}{\sum_{j=1}^{m}y_j^2}.\tag{9}$$
###### Theorem 1.

Grant Assumptions 1 and 2 with , , and . Then, the estimator (9) satisfies, as , the central limit theorem (clt)

$$\sqrt{nm_n}\,\big(\hat{\kappa}_{n,m}-\kappa\big)\ \xrightarrow{d}\ \mathcal{N}\Big(0,\ \frac{3\Gamma\pi}{1-\delta+\delta^2}\Big).\tag{10}$$

Typically δ will be small and the asymptotic variance close to 3Γπ. Assumption 2 poses a mild restriction on the initial condition and is stated at the beginning of Section 5. The logarithm yields a variance-stabilizing transformation for (4), and the delta method readily gives a clt for log realized volatilities with constant asymptotic variances. This implies a clt for the estimator as n → ∞, even when m is fixed. The proof of (10) is, however, not obvious and is based on an application of a clt for weakly dependent triangular arrays by Peligrad and Utev (1997).

If σ₀² is unknown, the estimator (9) is infeasible. Considering differences for different spatial points in (6) yields a natural generalization of Example 1 and of the estimator (9) for this case:

$$\hat{\varkappa}_{n,m}=-\frac{\sum_{j<l}(y_j-y_l)\big(\log\big(\mathrm{RV}_n(y_j)\big)-\log\big(\mathrm{RV}_n(y_l)\big)\big)}{\sum_{j<l}(y_j-y_l)^2}.\tag{11}$$

This estimator achieves the parametric rate of convergence √(nm_n) as well; it is asymptotically unbiased and satisfies a clt. Its asymptotic variance is, however, much larger than the one in (10).

###### Theorem 2.

Grant Assumptions 1 and 2 with , , and . Then, the estimator (11) satisfies, as , the clt

$$\sqrt{nm_n}\,\big(\hat{\varkappa}_{n,m}-\kappa\big)\ \xrightarrow{d}\ \mathcal{N}\Big(0,\ \frac{12\Gamma\pi}{(1-2\delta)^2}\Big).\tag{12}$$

The clts (10) and (12) are feasible, i.e. the asymptotic variances are known constants and do not hinge on any unknown parameters. Hence, asymptotic confidence intervals can be constructed based on the theorems.
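Since the limit variances in the feasible clts are known constants, an asymptotic level-(1−a) confidence interval is simply the estimate plus/minus a normal quantile times √(avar/(n·m)). A minimal sketch, where the numeric values of Γπ, δ and the point estimate are placeholders and not from the paper:

```python
from statistics import NormalDist

def asymptotic_ci(estimate, n, m, avar, level=0.95):
    """CI from a feasible clt: sqrt(n*m)*(estimator - parameter) -> N(0, avar),
    with a known constant limit variance avar."""
    z = NormalDist().inv_cdf((1 + level) / 2)   # standard normal quantile
    half = z * (avar / (n * m)) ** 0.5
    return estimate - half, estimate + half

# hypothetical numbers: Gamma*pi = 1.5, delta = 0.05, variance as in (10)
avar = 3 * 1.5 / (1 - 0.05 + 0.05**2)
lo, hi = asymptotic_ci(2.0, n=10000, m=100, avar=avar)
print(lo, hi)
```

For the estimator (11) one would plug in the larger variance from (12) instead.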

## 3 Asymptotic log-linear model for realized volatilities and least squares estimation

Applying the logarithm to (5) and a first-order Taylor expansion

$$\log(a+x)=\log(a)+\frac{x}{a}+O\Big(\frac{x^2}{a^2}\Big),\quad x\to 0,$$

yield an asymptotic log-linear model

$$\log\Big(\frac{\mathrm{RV}_n(y_j)}{n\sqrt{\Delta_n}}\Big)=\alpha(\sigma_0^2)-\kappa y_j+\sqrt{\Gamma\pi\Delta_n}\,Z_j+R_{n,j},\quad j=1,\ldots,m,\tag{LLM}$$

for the rescaled realized volatilities, with i.i.d. standard normal Z_j and remainders R_{n,j}, which turn out to be asymptotically negligible for the asymptotic distribution of the estimators. When we ignore the remainders R_{n,j}, the estimation of κ is directly equivalent to estimating the slope parameter in a simple ordinary linear regression model with normal errors. The intercept parameter in the model (LLM) is a strictly monotone transformation

$$\alpha(\sigma_0^2)=\log\Big(\frac{\sigma_0^2}{\sqrt{\pi}}\Big)\tag{13}$$

of σ₀². To exploit the analogy of (LLM) to a log-linear model, it is useful to recall some standard results on least squares estimation for linear regression.

###### Example 2.

In a simple ordinary linear regression model

$$Y_i=\alpha+\beta x_i+\epsilon_i,\quad i=1,\ldots,m,$$

with white noise (ε_i), homoscedastic with variance σ², least squares estimation yields

$$\hat{\beta}=\frac{\sum_{j=1}^m(x_j-\bar{x})(Y_j-\bar{Y})}{\sum_{j=1}^m(x_j-\bar{x})^2},\tag{14a}$$
$$\hat{\alpha}=\bar{Y}-\hat{\beta}\,\bar{x},\tag{14b}$$

with the sample averages $\bar{Y}=m^{-1}\sum_{j=1}^m Y_j$ and $\bar{x}=m^{-1}\sum_{j=1}^m x_j$. The estimators (14a) and (14b) are known to be BLUE (best linear unbiased estimators) by the famous Gauß-Markov theorem, i.e. they have minimal variances among all linear and unbiased estimators. In the normal linear model, if $\epsilon_i\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\sigma^2)$, the least squares estimator coincides with the mle and standard results imply asymptotic efficiency. The variance-covariance matrix of $(\hat{\alpha},\hat{\beta})$ is well-known and

$$\mathrm{Var}(\hat{\beta})=\frac{\sigma^2}{\sum_{j=1}^m(x_j-\bar{x})^2},\tag{14c}$$
$$\mathrm{Var}(\hat{\alpha})=\frac{\sigma^2\sum_{j=1}^m x_j^2}{m\sum_{j=1}^m(x_j-\bar{x})^2},\tag{14d}$$
$$\mathrm{Cov}(\hat{\alpha},\hat{\beta})=-\frac{\sigma^2\,\bar{x}}{\sum_{j=1}^m(x_j-\bar{x})^2}.\tag{14e}$$

For the derivation of (14a)-(14e) in this example see, for instance, Example 7.2-1. of Zimmerman (2020).

We give this elementary example, since our estimator and the asymptotic variance-covariance matrix of our estimator will be in line with the translation of the example to our model (LLM).
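The closed forms (14a)-(14e) can be verified numerically against a generic least squares solver. A minimal sketch with simulated data (all parameter values are illustrative):

```python
import numpy as np

# Simple linear regression: verify the closed forms (14a)-(14e) against
# the matrix least squares solution for one simulated data set.
rng = np.random.default_rng(2)
m, alpha, beta, sigma = 50, 1.0, -2.0, 0.3
x = np.linspace(0.05, 0.95, m)
Y = alpha + beta * x + sigma * rng.normal(size=m)

xbar, Ybar = x.mean(), Y.mean()
Sxx = np.sum((x - xbar) ** 2)
beta_hat = np.sum((x - xbar) * (Y - Ybar)) / Sxx          # (14a)
alpha_hat = Ybar - beta_hat * xbar                        # (14b)

# known variance-covariance entries (14c)-(14e)
var_beta = sigma**2 / Sxx
var_alpha = sigma**2 * np.sum(x**2) / (m * Sxx)
cov_ab = -sigma**2 * xbar / Sxx

# cross-check the point estimates against numpy's least squares
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(m), x]), Y, rcond=None)
print(alpha_hat, beta_hat, coef)
```

The same slope/intercept formulas, applied with response log(RV_n(y_j)/(n√Δ_n)) and regressor y_j, are what the estimators of Section 3 amount to.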

The M-estimation of Bibinger and Trabs (2020) was based on the parametric regression model

$$\frac{\mathrm{RV}_n(y_j)}{n\sqrt{\Delta_n}}=\frac{\sigma_0^2}{\sqrt{\pi}}\,e^{-\kappa y_j}+\epsilon_{n,j},\quad j=1,\ldots,m,\tag{15}$$

with non-standard observation errors ε_{n,j}. The proposed estimator

$$\big(\hat{\sigma}_0^2,\hat{\kappa}\big)=\operatorname*{arg\,min}_{(\sigma_0^2,\kappa)}\ \sum_{j=1}^{m}\Big(\frac{\mathrm{RV}_n(y_j)}{n\sqrt{\Delta_n}}-\frac{\sigma_0^2}{\sqrt{\pi}}\,e^{-\kappa y_j}\Big)^2\tag{16}$$

was shown to be rate-optimal and asymptotically normally distributed in Theorem 4.2 of Bibinger and Trabs (2020). In view of the analogy of (LLM) to an ordinary linear regression model, however, it appears clear that the estimation method by Bibinger and Trabs (2020) is inefficient, since ordinary least squares is applied to a model with heteroscedastic errors. In fact, generalized least squares could render a more efficient estimator related to our new methods. In model (15), the variances of the errors ε_{n,j} depend on y_j via the factor e^{−2κy_j}. This induces, moreover, that the asymptotic variance-covariance matrix of the estimator (16) depends on the parameter κ. In line with the least squares estimator from Example 2, the asymptotic distribution of our estimator will not depend on the parameter.
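The inefficiency of ordinary least squares under heteroscedasticity can be illustrated by a small Monte Carlo experiment. The sketch below is not the paper's estimator; it compares plain OLS with inverse-variance weighted (generalized) least squares for a slope, using hypothetical error standard deviations proportional to e^{−κx}, mimicking the y_j-dependent error variances in (15)-type models:

```python
import numpy as np

rng = np.random.default_rng(3)
kappa, m, reps = 3.0, 20, 5000
x = np.linspace(0.05, 0.95, m)
sd = 0.2 * np.exp(-kappa * x)          # strongly heteroscedastic noise levels
w = 1.0 / sd**2                        # GLS weights (inverse variances)

def slopes(Y):
    # plain OLS slope
    ols = np.sum((x - x.mean()) * (Y - Y.mean())) / np.sum((x - x.mean()) ** 2)
    # weighted least squares slope with inverse-variance weights
    xw = np.sum(w * x) / np.sum(w)
    yw = np.sum(w * Y) / np.sum(w)
    gls = np.sum(w * (x - xw) * (Y - yw)) / np.sum(w * (x - xw) ** 2)
    return ols, gls

sims = np.array([slopes(1.0 - 2.0 * x + sd * rng.normal(size=m))
                 for _ in range(reps)])
print(sims.var(axis=0))  # the GLS column shows a markedly smaller variance
```

Both estimators are unbiased for the slope −2, but only the weighted version exploits the known error structure, which is the efficiency gap the log-linear model removes by construction.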

Writing L_j := log(RV_n(y_j)/(n√Δ_n)) and L̄ := m⁻¹Σ_{j=1}^m L_j, our estimator for κ reads

$$\hat{\kappa}^{LS}_{n,m}=-\frac{\sum_{j=1}^{m}(y_j-\bar{y})(L_j-\bar{L})}{\sum_{j=1}^{m}(y_j-\bar{y})^2}.\tag{17a}$$

The estimator for the intercept is

$$\hat{\alpha}^{LS}(\sigma_0^2)=\bar{L}+\hat{\kappa}^{LS}_{n,m}\,\bar{y}.\tag{17b}$$

We shall prove that the OLS-estimator (17a) for κ in our log-linear model is in fact identical to the estimator from (11).

###### Theorem 3.

Grant Assumptions 1 and 2 with , , and . The estimators (17a) and (17b) satisfy, as , the bivariate clt

$$\sqrt{nm_n}\left(\begin{pmatrix}\hat{\kappa}^{LS}_{n,m}\\[1mm] \hat{\alpha}^{LS}(\sigma_0^2)\end{pmatrix}-\begin{pmatrix}\kappa\\[1mm] \alpha(\sigma_0^2)\end{pmatrix}\right)\ \xrightarrow{d}\ \mathcal{N}(0,\Sigma),$$

with the asymptotic variance-covariance matrix

$$\Sigma=\begin{pmatrix}\dfrac{12\Gamma\pi}{(1-2\delta)^2} & \dfrac{6\Gamma\pi}{(1-2\delta)^2}\\[3mm] \dfrac{6\Gamma\pi}{(1-2\delta)^2} & \dfrac{4\Gamma\pi(1-\delta+\delta^2)}{(1-2\delta)^2}\end{pmatrix}.$$

Different to the typical situation with an unknown noise variance in Example 2, the noise variance in (LLM) is a known constant. Therefore, different to Theorem 4.2 of Bibinger and Trabs (2020), our central limit theorem is readily feasible and provides asymptotic confidence intervals.

An application of the multivariate delta method yields the bivariate clt for the estimation errors of the two point estimators.

###### Corollary 4.

Under the assumptions of Theorem 3, it holds that

$$\sqrt{nm_n}\left(\begin{pmatrix}\hat{\kappa}^{LS}_{n,m}\\[1mm] (\hat{\sigma}_0^2)^{LS}\end{pmatrix}-\begin{pmatrix}\kappa\\[1mm] \sigma_0^2\end{pmatrix}\right)\ \xrightarrow{d}\ \mathcal{N}(0,\tilde{\Sigma}),$$

where (σ̂₀²)^{LS} is obtained from α̂^{LS}(σ₀²) with the inverse of (13), with

$$\tilde{\Sigma}=\begin{pmatrix}\dfrac{12\Gamma\pi}{(1-2\delta)^2} & \dfrac{6\sigma_0^2\Gamma\pi}{(1-2\delta)^2}\\[3mm] \dfrac{6\sigma_0^2\Gamma\pi}{(1-2\delta)^2} & \dfrac{4\sigma_0^4\Gamma\pi(1-\delta+\delta^2)}{(1-2\delta)^2}\end{pmatrix}.$$

Here, the parameter σ₀² naturally occurs in the asymptotic variance of the estimated volatility and in the asymptotic covariance. The transformation or plug-in, however, still readily yields asymptotic confidence intervals.
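The structure of Σ̃ can be checked numerically with the delta method: assuming the intercept transformation α(σ₀²) = log(σ₀²/√π) from (13), the map g(κ, α) = (κ, √π·e^α) has Jacobian diag(1, σ₀²), and J Σ Jᵀ reproduces the matrix of Corollary 4. A minimal sketch with placeholder values for Γπ, δ and σ₀²:

```python
import numpy as np

Gpi, delta, sigma0_sq = 1.5, 0.05, 2.0      # placeholder values
d2 = (1 - 2 * delta) ** 2
Sigma = np.array([[12 * Gpi / d2, 6 * Gpi / d2],
                  [6 * Gpi / d2, 4 * Gpi * (1 - delta + delta**2) / d2]])

# d(sqrt(pi)*exp(alpha))/d(alpha) evaluated at alpha(sigma0^2) equals sigma0^2
J = np.diag([1.0, sigma0_sq])
Sigma_tilde = J @ Sigma @ J.T

expected = np.array([[12 * Gpi / d2, 6 * sigma0_sq * Gpi / d2],
                     [6 * sigma0_sq * Gpi / d2,
                      4 * sigma0_sq**2 * Gpi * (1 - delta + delta**2) / d2]])
print(np.allclose(Sigma_tilde, expected))  # True
```

The covariance picks up one factor σ₀² and the volatility variance a factor σ₀⁴, exactly as in Σ̃.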

## 4 Numerical illustration and simulations

### 4.1 Numerical comparison of asymptotic variances

The top panel of Figure 1 gives a comparison of the asymptotic variances for curvature estimation of our new estimators to the minimum contrast estimator of Bibinger and Trabs (2020), for a fixed value of δ. While the asymptotic variance-covariance matrix of our new estimator is rather simple and explicit, the one in Bibinger and Trabs (2020) is more complicated but can be explicitly computed from their Eq. (21)-(23).

The uniformly smallest variance is the one of the estimator from (9). It is visualized with the yellow curve, which is constant in κ, i.e. the asymptotic variance does not hinge on the parameter. This estimator, however, requires that the volatility σ₀² is known. It is thus fair to compare the asymptotic variance of the minimum contrast estimator from Bibinger and Trabs (2020) only to the least squares estimator based on the log-linear model, since both work for unknown σ₀². While the asymptotic variance of the new estimator, visualized with the grey curve, does not hinge on the parameter value, the variance of the estimator by Bibinger and Trabs (2020) (brown curve) is large in particular when κ has a larger distance from zero. None of the curves in the top panel of Figure 1 depends on the value of σ₀². Our least squares estimator in the log-linear model uniformly dominates the estimator from Bibinger and Trabs (2020). For κ = 0, the asymptotic variances of the two least squares estimators would coincide. However, due to the different dependence on the parameters, the asymptotic variance of the estimator from Bibinger and Trabs (2020) is larger than the one of our new estimator also in this case. The lower panel of Figure 1 shows the ratios of the asymptotic variances of the two least squares estimators for both unknown parameters. On the left we see the ratio for curvature estimation, determined by the ratio of the grey and brown curves from the graphic in the top panel. On the right we see the ratio of the asymptotic variances for estimating σ₀², as a function of κ. This ratio does not hinge on the value of σ₀².

### 4.2 Monte Carlo simulation study

The simulation of the SPDE is based on its spectral decomposition (20) and an exact simulation of the Ornstein-Uhlenbeck coordinate processes. In Bibinger and Trabs (2020) it was suggested to approximate the infinite series in (20) by a finite sum up to some spectral cut-off frequency which needs to be set sufficiently large. In Kaino and Uchida (2021b) this procedure was adopted, but they observed that choosing the cut-off frequency too small results in a strong systematic bias of simulated estimates. A sufficiently large cut-off frequency depends on the number of observations, and even for moderate sample sizes a rather large cut-off frequency was recommended by Kaino and Uchida (2021b). This leads to tedious, long computation times as reported in Kaino and Uchida (2021b). A nice improvement for observations on an equidistant grid in time and space has been presented by Hildebrandt (2020), using a replacement method instead of a truncation of large frequencies. The replacement method approximates addends with large frequencies in the Fourier series using a suitable set of independent random vectors instead of simply cutting off these terms. The spectral frequency at which replacement starts can be set much smaller than a cut-off for truncation. We thus use Algorithm 3.2 from Hildebrandt (2020) here, which allows simulating a solution of the SPDE with the same precision as the cut-off method while reducing the computation time from several hours to less than one minute. In Hildebrandt (2020) we also find bounds for the total variation distance between approximated and true distribution, allowing to select an appropriate trade-off between precision and computation time. We simulate observations on equidistant grid points in time and space, with spatial and temporal resolutions in line with Assumption 1. We simulated results for other resolutions as well. Although the ratio of spatial to temporal observations is more critical then, the normalized estimation results were similar. If the condition of Assumption 1 is violated, however, we see that the variances of the estimators decrease at a slower rate. The implementation of our estimators is simple, since they rely on explicit, simple transformations of the observed data.
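For illustration, a minimal truncation-based simulation sketch (not Hildebrandt's replacement method): the first K coordinate Ornstein-Uhlenbeck processes from (21) are simulated exactly via their Gaussian transition law and summed against the eigenfunctions (18). All parameter values and the cut-off K are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)
theta0, theta1, theta2, sigma = 0.0, 0.2, 0.05, 0.5   # hypothetical parameters
n, m, K = 200, 10, 2000                               # grid sizes, cut-off K
dt = 1.0 / n
y = np.linspace(0.05, 0.95, m)
k = np.arange(1, K + 1)

# eigenvalues (19) and eigenfunctions (18) evaluated on the spatial grid
lam = -theta0 + theta1**2 / (4 * theta2) + np.pi**2 * k**2 * theta2
e_ky = (np.sqrt(2) * np.sin(np.pi * np.outer(k, y))
        * np.exp(-theta1 / (2 * theta2) * y))         # shape (K, m)

# exact OU transitions: x_k(t+dt) = a_k x_k(t) + s_k N(0,1)
a = np.exp(-lam * dt)
s = sigma * np.sqrt((1 - np.exp(-2 * lam * dt)) / (2 * lam))
x = sigma * np.sqrt(1 / (2 * lam)) * rng.normal(size=K)  # stationary start
X = np.empty((n + 1, m))
X[0] = x @ e_ky
for i in range(1, n + 1):
    x = a * x + s * rng.normal(size=K)
    X[i] = x @ e_ky
print(X.shape)
```

This reproduces the systematic-bias issue discussed above when K is chosen too small, which is exactly what the replacement method of Hildebrandt (2020) avoids at much lower cost.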

Figure 2 compares empirical distributions of normalized estimation errors

1. of the least squares estimator (17a) (grey) vs. the estimator of Bibinger and Trabs (2020) (brown) and

2. of the estimator (9) with known σ₀² (yellow) compared to the least squares estimator (17a) (grey),

for small curvature in the left two columns, and larger curvature in the right two columns. The plots are based on a Monte Carlo simulation with 1000 iterations. We use the standard R density plots with Gaussian kernels and bandwidths selected by Silverman's rule of thumb. The dotted lines give the corresponding densities of the asymptotic limit distributions. We can report that analogous plots for different parameter values look (almost) identical. With increasing values of n and m, the fit of the asymptotic distributions becomes more accurate; otherwise the plots look similar as well.
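Silverman's rule of thumb, the default bandwidth selector `bw.nrd0` behind R's `density()`, is a simple closed-form expression. A minimal sketch of it in Python:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb as in R's bw.nrd0:
    h = 0.9 * min(sample sd, IQR/1.34) * n^(-1/5)."""
    x = np.asarray(x, dtype=float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    spread = min(x.std(ddof=1), iqr / 1.34)
    return 0.9 * spread * len(x) ** (-0.2)

rng = np.random.default_rng(5)
h = silverman_bandwidth(rng.normal(size=1000))
print(h)
```

For 1000 standard normal draws the rule gives a bandwidth of roughly 0.9·n^{−1/5} ≈ 0.23, shrinking slowly with the sample size.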

As expected, the efficiency gains of the new method are much more relevant for larger curvature. In particular, in the third plot from the left, for the larger curvature, the new estimator significantly outperforms the one from Bibinger and Trabs (2020). In the first plot from the left, for the small curvature, instead, the two estimators have similar empirical distributions. The fit of the asymptotic normal distributions is reasonably good for all estimators. This is illustrated more clearly in the QQ-normal plots in Figure 4. Using the true value of σ₀², as expected, the estimator (9) outperforms the other methods. We compare it to our new least squares estimator in the second and fourth plots from the left.

Figure 3 draws a similar comparison for the estimated volatilities, for unknown κ, using the estimator (17b) from the log-linear model and the estimator from Bibinger and Trabs (2020). While for the small curvature in the left panel the performance of both methods is almost similar, for the larger curvature in the right panel our new estimator outperforms the previous one. Figure 5 gives the QQ-normal plots for the estimation of σ₀². All plots are based on 1000 Monte Carlo iterations. The QQ-normal plots compare standardized estimation errors to the standard normal. For the estimator from Bibinger and Trabs (2020) we use an estimated asymptotic variance based on plug-in, while for our new estimators the asymptotic variances are known constants.

## 5 Proofs of the theorems

### 5.1 Preliminaries

The asymptotic analysis is based on the eigendecomposition of the SPDE. The eigenfunctions (e_k) and eigenvalues (−λ_k) of the differential operator

$$A_\theta:=\theta_0+\theta_1\frac{\partial}{\partial y}+\theta_2\frac{\partial^2}{\partial y^2}$$

are given by

$$e_k(y)=\sqrt{2}\,\sin(\pi k y)\,\exp\Big(-\frac{\theta_1}{2\theta_2}\,y\Big),\quad y\in[0,1],\tag{18}$$
$$\lambda_k=-\theta_0+\frac{\theta_1^2}{4\theta_2}+\pi^2k^2\theta_2,\quad k\in\mathbb{N},\tag{19}$$

where the (e_k) form an orthonormal basis of the Hilbert space with

$$\langle f,g\rangle_\theta:=\int_0^1 e^{y\theta_1/\theta_2}f(y)g(y)\,\mathrm{d}y\quad\text{and}\quad\|f\|_\theta^2:=\langle f,f\rangle_\theta.$$

Let ξ = X₀ be the initial condition. We impose the same mild regularity condition on ξ as in Assumption 2.2 of Bibinger and Trabs (2020).

###### Assumption 2.

In (1) we assume that

1. either for all and holds true or
;

2. are independent.

We refer to Section 2 of Bibinger and Trabs (2020) for more details on the probabilistic structure and the discussion of Assumption 2.

For a mild solution of (1), we have the spectral decomposition

$$X_t(y)=\sum_{k\geq 1}x_k(t)\,e_k(y),\quad\text{with }x_k(t)=\langle X_t,e_k\rangle_\theta,\tag{20}$$

in that the coordinate processes satisfy the Ornstein-Uhlenbeck dynamics:

$$\mathrm{d}x_k(t)=-\lambda_k x_k(t)\,\mathrm{d}t+\sigma\,\mathrm{d}W_t^k,\quad x_k(0)=\langle\xi,e_k\rangle_\theta,\tag{21}$$

with independent one-dimensional Wiener processes (W^k)_{k≥1}.

For an integrable random variable Z, we denote its compensated version by

$$\bar{Z}:=Z-\mathbb{E}[Z].$$

### 5.2 Proof of Theorem 1

A first-order Taylor expansion of the logarithm and Proposition 3.1 of Bibinger and Trabs (2020) yield that

 (22)

for some spatial point y. The remainders, called R_{n,j} in (5) and in (LLM), are contained in the last two addends. This yields for the estimator (9) that

 (23)

where we conclude the order of the remainder, since under Assumption 1 it holds that

$$\frac{\sum_{j=1}^{m_n}y_j}{\sum_{j=1}^{m_n}y_j^2}\,\Delta_n=O(\Delta_n).$$

Since the remainder is negligible under Assumption 1, it suffices to prove a clt for the leading term from above, which includes the squared time increments of the realized volatilities. Although this leading term is linear in the realized volatilities, we cannot directly adopt a clt from Bibinger and Trabs (2020) due to the different structure of the weights. Thus, we require an original proof of the clt, for which we can reuse some ingredients from Bibinger and Trabs (2020).

We begin with the asymptotic variance. We can adopt Lemma 6.4 and Proposition 6.5 from Bibinger and Trabs (2020) and obtain that

$$=\frac{\Gamma\sigma_0^4}{n}\big(1+O\big(\Delta_n^{\eta}+\Delta_n^{1/2}\delta^{-1}\big)\big),\tag{24}$$
$$=O\big(\Delta_n^{3/2}\big(|y-u|^{-1}+\delta^{-1}\big)\big),\tag{25}$$

for any spatial points and . We obtain that

$$=\lim_{n\to\infty}\frac{nm_n\pi}{\sigma_0^4\big(\sum_{j=1}^{m_n}y_j^2\big)^2}\bigg(\frac{\Gamma\sigma_0^4\sum_{j=1}^{m_n}y_j^2}{n}\big(1+O(\Delta_n^{\eta})\big)+O\Big(\Delta_n^{3/2}\Big(\sum_{j\neq l}\frac{y_jy_l}{|y_j-y_l|}+\frac{m_n^2}{\delta}\Big)\Big)\bigg)=\frac{3\Gamma\pi}{1-\delta+\delta^2}.$$

The assumption on the spatial observation points is used only for the convergence of the Riemann sum in the last step. For the covariances, we used Assumption 1 and an elementary estimate

$$\sum_{j\neq l}\frac{y_jy_l}{|y_j-y_l|}=O\Big(\sum_{r=1}^{m_n}\sum_{l=1}^{m_n}\frac{(l+r)\,l}{m_n\,r}\Big)=O\big(m_n^2+m_n^2\log(m_n)\big).\tag{26}$$

Under Assumption 1, the remainders are hence negligible.

Next, we establish a covariance inequality for the empirical characteristic function. There exists a constant C, such that for all t:

$$\big|\mathrm{Cov}\big(\exp(\mathrm{i}tQ_a^b),\exp(\mathrm{i}tQ_{b+u}^v)\big)\big|\leq C\,t^2\,u^{-3/4}\sqrt{\mathrm{Var}(Q_a^b)\,\mathrm{Var}(Q_{b+u}^v)},\tag{27}$$

where Q_a^b denotes the corresponding partial sums of the triangular array, for natural numbers a ≤ b < b+u ≤ v.

We consider the main case; the remaining case can be derived separately and is in fact easier. We can write

$$Q_{b+u}^{v}=\frac{\sqrt{m_n}\,\sqrt{\pi}}{\sigma_0^2\sum_{j=1}^{m_n}y_j^2}\sum_{j=1}^{m_n}\big(A_1(y_j)+A_2(y_j)\big)\,y_j\,e^{\kappa y_j}=B_1+B_2,$$

where A₁(y_j) and A₂(y_j) include the sums over the time increments and have already been considered in Proposition 6.6 of Bibinger and Trabs (2020). This decomposition is useful, since B₂ is independent of Q_a^b. From the proof of Proposition 6.6 in Bibinger and Trabs (2020), we have for all j that

$$\mathrm{Var}\big(A_1(y_j)\big)\leq\tilde{C}\sigma^4(v-b-u+1)\Delta_n(u-1)^{-3/2},$$

with some constant C̃, and from Eq. (59) of Bibinger and Trabs (2020) that

$$\mathrm{Cov}\big(A_1(y_j),A_1(y_l)\big)=O\Big(\Delta_n^{3/2}(v-b-u+1)(u-1)^{-3/2}\frac{1}{|y_j-y_l|}\Big).$$

Thereby, we obtain that

$$\begin{aligned}\mathrm{Var}(B_1)&=\frac{\pi m_n}{\sigma_0^4\big(\sum_{j=1}^{m_n}y_j^2\big)^2}\Big(\sum_{j=1}^{m_n}e^{2\kappa y_j}y_j^2\,\mathrm{Var}\big(A_1(y_j)\big)+\sum_{j\neq l}e^{\kappa(y_j+y_l)}y_jy_l\,\mathrm{Cov}\big(A_1(y_j),A_1(y_l)\big)\Big)\\&\leq\frac{\pi m_n}{\sigma_0^4\big(\sum_{j=1}^{m_n}y_j^2\big)^2}\,\tilde{C}\sigma^4(v-b-u+1)\Delta_n(u-1)^{-3/2}e^{2\kappa}\sum_{j=1}^{m_n}y_j^2\\&\qquad+O\Big(\frac{1}{m_n}\Delta_n^{3/2}(v-b-u+1)(u-1)^{-3/2}m_n^2\log(m_n)\Big)\\&\leq C'(v-b-u+1)\Delta_n(u-1)^{-3/2}+O\big(\Delta_n^{3/2}(v-b-u+1)(u-1)^{-3/2}m_n\log(m_n)\big),\end{aligned}$$

with a constant C′, where we use that the spatial domain is bounded, and (26). Since m_n√Δ_n log(m_n) is bounded under Assumption 1, we find a constant C″, such that

$$\mathrm{Var}(B_1)\leq C''(v-b-u+1)\Delta_n(u-1)^{-3/2}.\tag{28}$$

With the variance-covariance structure of the triangular array, we obtain with some constants C₁, C₂, C₃ that

$$\mathrm{Var}\big(Q_{b+u}^{v}\big)\geq C_1\frac{m_n\pi}{\sigma_0^4\big(\sum_{j=1}^{m_n}y_j^2\big)^2}\sum_{j=1}^{m_n}y_j^2\sum_{i=b+u}^{v}\mathrm{Var}(\zeta_{n,i})\,e^{2\kappa y_j}=C_2\frac{\Delta_n m_n(v-b-u+1)}{\sum_{j=1}^{m_n}y_j^2}\geq C_3(v-b-u+1)\Delta_n.\tag{29}$$

Applying Eq. (54) from Bibinger and Trabs (2020) to our decomposition with B₁ and B₂, the bounds (28) and (29) imply (27).

A Lindeberg condition for the triangular array is obtained by the stronger Lyapunov condition. It is satisfied, since

$$\begin{aligned}&\leq\frac{\pi^2}{m_n^2\,\sigma_0^8\big(m_n^{-1}\sum_{j=1}^{m_n}y_j^2\big)^4}\,e^{4\kappa}\sum_{i=1}^{n}\sum_{j,k,u,v=1}^{m_n}\mathbb{E}\Big[(X_{i\Delta_n}-X_{(i-1)\Delta_n})^2(y_j)\,(X_{i\Delta_n}-X_{(i-1)\Delta_n})^2(y_k)\\&\qquad\times(X_{i\Delta_n}-X_{(i-1)\Delta_n})^2(y_u)\,(X_{i\Delta_n}-X_{(i-1)\Delta_n})^2(y_v)\Big]\\&\leq\frac{\pi^2}{m_n^2\,\sigma_0^8\big(m_n^{-1}\sum_{j=1}^{m_n}y_j^2\big)^4}\,e^{4\kappa}\sum_{i=1}^{n}\sum_{j,k,u,v=1}^{m_n}\Big(\mathbb{E}\big[(X_{i\Delta_n}-X_{(i-1)\Delta_n})^8(y_j)\big]\,\mathbb{E}\big[(X_{i\Delta_n}-X_{(i-1)\Delta_n})^8(y_k)\big]\\&\qquad\times\mathbb{E}\big[(X_{i\Delta_n}-X_{(i-1)\Delta_n})^8(y_u)\big]\,\mathbb{E}\big[(X_{i\Delta_n}-X_{(i-1)\Delta_n})^8(y_v)\big]\Big)^{1/4}\\&=O\big(m_n^2\,n\,\Delta_n^2\big)=O\big(m_n^2\Delta_n\big).\end{aligned}$$

In the last step the inner sum is estimated using the temporal regularity 1/4 of the marginal processes. As m_n²Δ_n → 0, we conclude the Lyapunov condition which, together with (27) and the asymptotic analysis of the variance, yields the clt for the centered triangular array by an application of Theorem B from Peligrad and Utev (1997).

### 5.3 Proof of Theorem 3

We establish first the asymptotic variances and covariance of the estimators before proving a bivariate clt. With (22), we obtain that

since for the remainders it holds true that

$$\max\bigg(\frac{\sum_{j=1}^{m_n}y_j\,O_P(\Delta_n)}{m_n(\bar{y})^2-\sum_{u=1}^{m_n}y_u^2},\ \frac{\bar{y}\,O_P(m_n\Delta_n)}{m_n(\bar{y})^2-\sum_{u=1}^{m_n}y_u^2}\bigg)=O_P(\Delta_n).$$

We can use (24) and (25) to compute the asymptotic variance:

$$=\lim_{n\to\infty}\frac{nm_n\pi}{\sigma_0^4\big(\sum_{u=1}^{m_n}y_u^2-m_n(\bar{y})^2\big)}\cdot\frac{\Gamma\sigma_0^4}{n}\big(1+O(\Delta_n^{\eta})\big)=\lim_{n\to\infty}\frac{\Gamma\pi}{m_n^{-1}\sum_{u=1}^{m_n}y_u^2-(\bar{y})^2}.$$

We used that the sum of covariances is of order

$$O\Big(m_n^{-1}\Delta_n^{1/2}\sum_{j\neq l}\frac{(y_j-\bar{y})(y_l-\bar{y})}{|y_j-y_l|}\Big)=O\big(\Delta_n^{1/2}m_n\log(m_n)\big)=O(1).\tag{30}$$

For the estimator (17b), we obtain that

such that

With (24) and (25) and analogous steps as above, the asymptotic variance yields

$$=\lim_{n\to\infty}\frac{\Gamma\pi\sum_{u=1}^{m_n}y_u^2}{\sum_{u=1}^{m_n}y_u^2-m_n(\bar{y})^2}=\frac{\Gamma\pi\,(1-2\delta)^{-1}\int_\delta^{1-\delta}y^2\,\mathrm{d}y}{(1-2\delta)^{-1}\int_\delta^{1-\delta}y^2\,\mathrm{d}y-\big((1-2\delta)^{-1}\int_\delta^{1-\delta}y\,\mathrm{d}y\big)^2}.$$

The covariance terms for distinct spatial points are asymptotically negligible by a similar estimate as in (30). The asymptotic covariance between both estimators yields

$$\begin{aligned}\lim_{n\to\infty}nm_n\,\mathrm{Cov}\big(\hat{\alpha}^{LS}(\sigma_0^2),\hat{\kappa}^{LS}_{n,m}\big)&=\lim_{n\to\infty}\frac{m_n\Gamma\pi\,\bar{y}\big(\sum_{u=1}^{m_n}y_u^2-m_n(\bar{y})^2\big)}{\big(\sum_{u=1}^{m_n}y_u^2-m_n(\bar{y})^2\big)^2}=\lim_{n\to\infty}\frac{\Gamma\pi\,\bar{y}}{m_n^{-1}\sum_{u=1}^{m_n}y_u^2-(\bar{y})^2}\\&=\frac{\Gamma\pi\,(1-2\delta)^{-1}\int_\delta^{1-\delta}y\,\mathrm{d}y}{(1-2\delta)^{-1}\int_\delta^{1-\delta}y^2\,\mathrm{d}y-\big((1-2\delta)^{-1}\int_\delta^{1-\delta}y\,\mathrm{d}y\big)^2}.\end{aligned}$$

Computing the elementary integrals and applying simple transformations yields the asymptotic variance-covariance matrix in Theorem 3.

To establish the bivariate clt, it suffices to prove the clt for the ℝ²-valued triangular array

Here, we use the notation for compensated random variables introduced in Section 5.1 for the squared time increments. The first entry of this vector is the leading term of , and the second entry of