Dynamic models based on stochastic partial differential equations (SPDEs) have recently been of great interest, in particular their calibration based on statistics, see, for instance, Hambly and Søjmark (2019), Fuglstad and Castruccio (2020), Altmeyer and Reiß (2021) and Altmeyer et al. (2022). Bibinger and Trabs (2020), Cialenco and Huang (2020) and Chong (2020) have, independently of one another, studied parameter estimation for parabolic SPDEs based on power variation statistics of time increments when a solution of the SPDE is observed discretely in time and space. Bibinger and Trabs (2020) pointed out the relation of their estimators to realized volatilities, which are well known as key statistics for financial high-frequency data in econometrics. We develop estimators based on these realized volatilities which significantly improve upon the M-estimation from Bibinger and Trabs (2020). Our new estimators attain smaller asymptotic variances, they are explicit functions of realized volatilities, and we can readily provide asymptotic confidence intervals. Since generalized estimation approaches for small noise asymptotics in Kaino and Uchida (2021a), rate-optimal estimation for more general observation schemes in Hildebrandt and Trabs (2021), long-span asymptotics in Kaino and Uchida (2021b), and estimation with two spatial dimensions in Tonaki et al. (2022) have been built upon the M-estimator from Bibinger and Trabs (2020), we expect that our new method is of interest to the community for further improving parameter estimation for SPDEs. Our theoretical framework is the same as in Bibinger and Trabs (2020). We consider for a linear parabolic SPDE
with one space dimension. The bounded spatial domain is the unit interval , which can be easily generalized to some arbitrary bounded interval. Although estimation methods in case of an unbounded spatial domain are expected to be similar, the theory is significantly different, see Bibinger and Trabs (2019). is a cylindrical Brownian motion in a Sobolev space on . The initial value is assumed to be independent of . We work with Dirichlet boundary conditions: , for all . A specific example is the SPDE model
used for the term structure model of Cont (2005).
We focus on parameter estimation based on discrete observations of a solution on the unit square . The spatial observation points , , have at least distance from the boundaries at which the solution is zero by the Dirichlet conditions.
We assume equidistant high-frequency observations in time , , where , asymptotically. We consider the same asymptotic regime as in Bibinger and Trabs (2020), where , such that , for some , and is uniformly in bounded from below and from above.
This asymptotic regime with more observations in time than in space is natural for most applications. Hildebrandt and Trabs (2021) and Bibinger and Trabs (2020) showed that in this regime the realized volatilities
are sufficient to estimate the parameters with optimal rate , while Hildebrandt and Trabs (2021) establish different optimal convergence rates when the condition is violated and propose rate-optimal estimators for this setup based on double increments in space and time.
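To fix ideas, the realized volatility at a spatial point is simply the sum of squared time increments of the observed solution at that point. A minimal sketch, assuming the observations are stored as an array with rows indexed by the equidistant observation times and columns by the spatial points (the paper's exact normalization constants are not reproduced here):

```python
import numpy as np

def realized_volatilities(X):
    """Realized volatility per spatial point: sum of squared time increments.

    X: array of shape (n + 1, m); rows are observation times,
    columns are spatial points y_1, ..., y_m.
    Returns an array of length m with RV(y_k) = sum_i (X_i(y_k) - X_{i-1}(y_k))^2.
    """
    increments = np.diff(X, axis=0)      # time increments, shape (n, m)
    return np.sum(increments**2, axis=0)
```

Under the asymptotic regime above, these statistics, suitably rescaled, concentrate around a deterministic function of the parameters, which the estimators below exploit.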
The natural parameters which are identifiable under high-frequency asymptotics are
the normalized volatility parameter , and the curvature parameter . While Bibinger and Trabs (2020) focused first on estimating the volatility when the parameters and are known, we consider the estimation of the curvature parameter in Section 2. We present an estimator for known and a robustification for the case of unknown . In Section 3 we develop a novel estimator for both parameters, , which improves the M-estimator from Section 4 of Bibinger and Trabs (2020) significantly. It is based on a log-linear model for with explanatory spatial variable . Section 4 is on the implementation and numerical results. We draw a numerical comparison of asymptotic variances and show the new estimators’ improvement over existing methods. We demonstrate significant efficiency gains for finite-sample applications in a Monte Carlo simulation study. All proofs are given in Section 5.
2 Curvature estimation
Section 3 of Bibinger and Trabs (2020) addressed the estimation of in (1) when and are known. Here, we focus on the estimation of from (3), first when is known and then for unknown volatility. The volatility estimator by Bibinger and Trabs (2020), based on observations in one spatial point , used the realized volatility . The central limit theorem with rate for this estimator from Theorem 3.3 in Bibinger and Trabs (2020) yields equivalently that
with a numerical constant analytically determined by a series of covariances. Since the marginal processes of in time have regularity 1/4, the scaling factor for is natural. To estimate consistently we need observations in at least two distinct spatial points. A key observation in Bibinger and Trabs (2020) was that under Assumption 1, realized volatilities in different spatial observation points de-correlate asymptotically. From (4) we can hence write
with i.i.d. standard normal and remainders , which turn out to be asymptotically negligible for the asymptotic distribution of the estimators. The equation
and an expansion of the logarithm yield an approximation
In the upcoming example we briefly discuss optimal estimation in a related simple statistical model.
Assume independent observations , where is unknown and are known. The maximum likelihood estimator (mle) is given by
The expected value and variance of this mle are
Note that can be viewed as Fisher information of observing . The efficiency of the mle in this model is implied by standard asymptotic statistics.
If we have independent observations with the same expectation and variances as in Example 1, but not necessarily normally distributed, the estimator from (8) can be shown to be the linear unbiased estimator with minimal variance.
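A hedged, concrete reading of Example 1 (the stripped formulas are assumed to describe independent observations with a common mean and known, possibly different variances): the mle is then the inverse-variance weighted average, with variance equal to the reciprocal of the summed Fisher informations.

```python
import numpy as np

def weighted_mle(y, sigma2):
    """MLE of the common mean theta from independent Y_i ~ N(theta, sigma_i^2)
    with known variances sigma_i^2: inverse-variance weighted average.

    Returns (theta_hat, var_theta_hat); 1 / sum(1 / sigma_i^2) is the
    Fisher-information bound attained by this estimator.
    """
    w = 1.0 / np.asarray(sigma2, float)
    theta_hat = np.sum(w * np.asarray(y, float)) / np.sum(w)
    return theta_hat, 1.0 / np.sum(w)
```

With equal variances this reduces to the sample mean; observations with larger variances are down-weighted accordingly.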
If is known, this and (7) motivate the following curvature estimator:
Typically will be small and the asymptotic variance close to . Assumption 2 poses a mild restriction on the initial condition and is stated at the beginning of Section 5. The logarithm yields a variance-stabilizing transformation for (4), and the delta-method readily yields a clt for log realized volatilities with constant asymptotic variances. This implies a clt for the estimator as , and when is fixed. The proof of (10) is, however, not obvious and is based on an application of a clt for weakly dependent triangular arrays by Peligrad and Utev (1997).
This estimator also achieves the parametric rate of convergence , it is asymptotically unbiased and it satisfies a clt. Its asymptotic variance is, however, much larger than the one in (10).
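The known-volatility case admits a particularly simple sketch. Under the working assumption (hedging the stripped formulas) that the log realized volatilities satisfy an approximate linear relation with known intercept determined by the known volatility, curvature can be estimated by least squares with fixed intercept; the sign and scaling conventions here are illustrative, not necessarily the paper's:

```python
import numpy as np

def curvature_known_intercept(rv, y, a):
    """Schematic curvature estimator when the intercept is known.

    Working model (illustrative): log RV_k = a - kappa * y_k + noise,
    with a determined by the known volatility. Least squares with the
    intercept fixed at a gives kappa_hat = sum(y_k * (a - z_k)) / sum(y_k^2).
    """
    z = np.log(np.asarray(rv, float))
    y = np.asarray(y, float)
    return np.sum(y * (a - z)) / np.sum(y**2)
```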
3 Asymptotic log-linear model for realized volatilities and least squares estimation
Applying the logarithm to (5) and a first-order Taylor expansion
yield an asymptotic log-linear model
for the rescaled realized volatilities, with i.i.d. standard normal and remainders , which turn out to be asymptotically negligible for the asymptotic distribution of the estimators. When we ignore the remainders , the estimation of
is then directly equivalent to estimating the slope parameter in a simple ordinary linear regression model with normal errors. The intercept parameter in the model (LLM) is a strictly monotone transformation
of . To exploit the analogy of (LLM) to a log-linear model, it is useful to recall some standard results on least squares estimation for linear regression.
In a simple ordinary linear regression model
with white noise, homoscedastic with variance , least squares estimation yields
with the sample averages , and . The estimators (14a) and (14b) are known to be BLUE (best linear unbiased estimators) by the famous Gauß-Markov theorem, i.e. they have minimal variances among all linear and unbiased estimators. In the normal linear model, if , the least squares estimator coincides with the mle and standard results imply asymptotic efficiency. The variance-covariance matrix of is well known and
We give this elementary example, since our estimator and the asymptotic variance-covariance matrix of our estimator will be in line with the translation of the example to our model (LLM).
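The least squares formulas of Example 2 in code form, a routine sketch using the textbook expressions with sample averages:

```python
import numpy as np

def ols(x, y):
    """Least squares estimates in the simple linear model y_i = a + b * x_i + eps_i.

    Returns (intercept_hat, slope_hat), with
    slope_hat = sum((x - xbar) * (y - ybar)) / sum((x - xbar)^2)
    intercept_hat = ybar - slope_hat * xbar.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    xbar, ybar = x.mean(), y.mean()
    slope = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    return ybar - slope * xbar, slope
```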
The M-estimation of Bibinger and Trabs (2020) was based on the parametric regression model
with non-standard observation errors . The proposed estimator
was shown to be rate-optimal and asymptotically normally distributed in Theorem 4.2 of Bibinger and Trabs (2020). In view of the analogy of (LLM) to an ordinary linear regression model, however, it appears clear that the estimation method by Bibinger and Trabs (2020) is inefficient, since ordinary least squares is applied to a model with heteroscedastic errors. In fact, generalized least squares could render a more efficient estimator related to our new methods. In model (15), the variances of depend on via the factor . Moreover, this implies that the asymptotic variance-covariance matrix of the estimator (16) depends on the parameter . In line with the least squares estimator from Example 2, the asymptotic distribution of our estimator will not depend on the parameter.
Writing , our estimator for reads
The estimator for the intercept is
Different to the typical situation with an unknown noise variance in Example 2, the noise variance in (LLM) is a known constant. Therefore, different to Theorem 4.2 of Bibinger and Trabs (2020), our central limit theorem is readily feasible and provides asymptotic confidence intervals.
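Up to the paper's exact normalizations (which are stripped from this version), the new estimators amount to ordinary least squares applied to the logarithms of the realized volatilities against the spatial coordinate. A schematic sketch under the working assumption that the model reads log RV_k ≈ intercept − κ·y_k + noise; the sign and scaling conventions of the paper may differ:

```python
import numpy as np

def log_linear_estimates(rv, y):
    """Schematic least squares estimator in the asymptotic log-linear model.

    Regresses z_k = log RV_k on the spatial coordinate y_k. Working
    assumption: z_k = intercept - kappa * y_k + noise, so kappa_hat is
    minus the fitted slope. The volatility would be recovered from the
    intercept via the paper's monotone transformation, omitted here.
    """
    z = np.log(np.asarray(rv, float))
    y = np.asarray(y, float)
    ybar, zbar = y.mean(), z.mean()
    slope = np.sum((y - ybar) * (z - zbar)) / np.sum((y - ybar) ** 2)
    intercept = zbar - slope * ybar
    return -slope, intercept   # (kappa_hat, intercept_hat)
```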
An application of the multivariate delta method yields the bivariate clt for the estimation errors of the two point estimators.
Under the assumptions of Theorem 3, it holds that
where is obtained from with the inverse of (13), with
Here, naturally the parameter occurs in the asymptotic variance of the estimated volatility and in the asymptotic covariance. The transformation or plug-in, however, still readily yield asymptotic confidence intervals.
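Since the limiting distribution is normal with a variance that is known or obtained by plug-in, the resulting confidence intervals take the standard form. A generic sketch (the function name and arguments are illustrative, not from the paper):

```python
import math
from statistics import NormalDist

def asymptotic_ci(estimate, avar, n_eff, alpha=0.05):
    """Two-sided asymptotic confidence interval from a clt of the form
    sqrt(n_eff) * (estimate - parameter) -> N(0, avar).

    Returns (lower, upper) at asymptotic level 1 - alpha.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)   # standard normal quantile
    half = z * math.sqrt(avar / n_eff)
    return estimate - half, estimate + half
```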
4 Numerical illustration and simulations
4.1 Numerical comparison of asymptotic variances
The top panel of Figure 1 gives a comparison of the asymptotic variances for curvature estimation, , of our new estimators to the minimum contrast estimator of Bibinger and Trabs (2020). We fix . While the asymptotic variance-covariance matrix of our new estimator is rather simple and explicit, the one in Bibinger and Trabs (2020) is more complicated but can be explicitly computed from their Eq. (21)-(23).
The uniformly smallest variance is that of from (9). It is visualized by the yellow curve, which is constant in , i.e. the asymptotic variance does not hinge on the parameter. This estimator, however, requires that the volatility is known. It is thus fair to compare the asymptotic variance of the minimum contrast estimator from Bibinger and Trabs (2020) only to the least squares estimator based on the log-linear model, since both work for unknown . While the asymptotic variance of the new estimator, visualized by the grey curve, does not hinge on the parameter value, the variance of the estimator by Bibinger and Trabs (2020) (brown curve) is large in particular when is far from zero. None of the curves in the top panel of Figure 1 depends on the value of . Our least squares estimator in the log-linear model uniformly dominates the estimator from Bibinger and Trabs (2020). For , the asymptotic variances of the two least squares estimators would coincide in . However, due to the different dependence on , the asymptotic variance of the estimator from Bibinger and Trabs (2020) is larger than that of our new estimator also in . The lower panel of Figure 1 shows the ratios of the asymptotic variances of the two least squares estimators for both unknown parameters. On the left we see the ratio for curvature estimation, determined by the ratio of the grey and brown curves from the top panel. On the right we see the ratio of the asymptotic variances for estimating , as a function of different values of . This ratio does not hinge on the value of .
4.2 Monte Carlo simulation study
The simulation of the SPDE is based on its spectral decomposition (20) and an exact simulation of the Ornstein-Uhlenbeck coordinate processes. In Bibinger and Trabs (2020) it was suggested to approximate the infinite series in (20) by a finite sum up to some spectral cut-off frequency which needs to be set sufficiently large. In Kaino and Uchida (2021b) this procedure was adopted, but they observed that choosing the cut-off frequency too small results in a strong systematic bias of simulated estimates. A sufficiently large cut-off frequency depends on the number of observations, but even for moderate sample sizes a cut-off frequency of was recommended by Kaino and Uchida (2021b). This leads to tedious, long computation times as reported in Kaino and Uchida (2021b). A nice improvement for observations on an equidistant grid in time and space has been presented by Hildebrandt (2020)
using a replacement method instead of truncation of large frequencies. The replacement method approximates addends with large frequencies in the Fourier series using a suitable set of independent random vectors instead of simply cutting off these terms. The spectral frequency at which the replacement starts can be set much smaller than a cut-off for truncation. We thus use Algorithm 3.2 from Hildebrandt (2020) here, which allows us to simulate a solution of the SPDE with the same precision as the cut-off method while reducing the computation time from several hours to less than one minute. In Hildebrandt (2020) we also find bounds for the total variation distance between the approximated and the true distribution, allowing to select an appropriate trade-off between precision and computation time. We implement the method with as the spectral frequency at which the replacement starts. We simulate observations on equidistant grid points in time and space. We illustrate results for a spatial resolution with , and a temporal resolution with . This is in line with Assumption 1. We simulated results for and , as well. Although the ratio of spatial and temporal observations is more critical then, the normalized estimation results were similar. If the condition is violated, however, we see that the variances of the estimators decrease at a slower rate than . The implementation of our estimators is simple, since they rely on explicit, simple transformations of the observed data.
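For orientation, a bare-bones version of the plain cut-off simulation (the slower truncation approach that the replacement method of Hildebrandt (2020) improves upon) can be sketched as follows. The model here is a simplified heat-type SPDE with Dirichlet sine eigenfunctions and exact Ornstein-Uhlenbeck updates for the coordinate processes; the eigenstructure of the full model in the paper contains additional terms that are omitted, so all constants are illustrative:

```python
import numpy as np

def simulate_spde_cutoff(n, m, K, sigma=1.0, theta2=1.0, rng=None):
    """Truncated spectral simulation of a heat-type SPDE on [0, 1] with
    Dirichlet boundary conditions (illustrative, simplified model):

        X_t(y) = sum_{k <= K} x_k(t) e_k(y),  e_k(y) = sqrt(2) sin(k pi y),

    where the x_k are independent OU processes with mean-reversion speeds
    lambda_k = theta2 * (pi k)^2, simulated exactly on the time grid i/n.
    Returns an array of shape (n + 1, m) over m interior spatial points.
    """
    rng = np.random.default_rng(rng)
    dt = 1.0 / n
    y = np.arange(1, m + 1) / (m + 1)                 # interior spatial grid
    k = np.arange(1, K + 1)
    lam = theta2 * (np.pi * k) ** 2
    e = np.sqrt(2) * np.sin(np.pi * np.outer(y, k))   # (m, K) eigenfunctions
    # exact OU transition: x(t + dt) = a * x(t) + N(0, v), coordinatewise
    a = np.exp(-lam * dt)
    v = sigma**2 * (1 - np.exp(-2 * lam * dt)) / (2 * lam)
    x = rng.normal(0.0, np.sqrt(sigma**2 / (2 * lam)))  # stationary start
    X = np.empty((n + 1, m))
    X[0] = e @ x
    for i in range(1, n + 1):
        x = a * x + rng.normal(0.0, np.sqrt(v))
        X[i] = e @ x
    return X
```

Even this toy version shows why the cut-off K must be large relative to the observation frequencies, which is exactly the computational bottleneck that the replacement method avoids.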
Figure 2 compares empirical distributions of normalized estimation errors of (grey) vs. Bibinger and Trabs (2020) (brown), and of with known (yellow) compared to (grey), for small curvature in the left two columns and larger curvature in the right two columns. The plots are based on a Monte Carlo simulation with 1000 iterations, and for , , and . We use the standard R density plots with Gaussian kernels and bandwidths selected by Silverman's rule of thumb. The dotted lines give the corresponding densities of the asymptotic limit distributions. We can report that analogous plots for different parameter values of look (almost) identical. With increasing values of and , the fit of the asymptotic distributions becomes more accurate; otherwise the plots look similar as well, as long as .
As expected, the efficiency gains of the new method are much more relevant for larger curvature. In particular, in the third plot from the left, for , the new estimator outperforms the one from Bibinger and Trabs (2020) significantly. In the first plot from the left, for , the two estimators instead have similar empirical distributions. The fit of the asymptotic normal distributions is reasonably good for all estimators. This is illustrated more clearly in the QQ-normal plots in Figure 4. Using the true value of , the estimator outperforms the other methods, as expected. We compare it to our new least squares estimator in the second and fourth plots from the left.
Figure 3 draws a similar comparison of estimated volatilities , for unknown , using the estimator (17b) from the log-linear model and the estimator from Bibinger and Trabs (2020). While for in the left panel the performances of both methods are very similar, for in the right panel our new estimator outperforms the previous one. Figure 5 gives the QQ-normal plots for the estimation of . All plots are based on 1000 Monte Carlo iterations. The QQ-normal plots compare standardized estimation errors to the standard normal distribution. For the estimator from Bibinger and Trabs (2020) we use an estimated asymptotic variance based on plug-in, while for our new estimators the asymptotic variances are known constants.
5 Proofs of the theorems
and eigenvalues of the differential operator
are given by
where form an orthonormal basis of the Hilbert space with
Let be the initial condition. We impose the same mild regularity condition on as in Assumption 2.2 of Bibinger and Trabs (2020).
In (1) we assume that
either for all and holds true or
5.2 Proof of Theorem 1
A first-order Taylor expansion of the logarithm and Proposition 3.1 of Bibinger and Trabs (2020) yield that
where we conclude the order of the remainder, since under Assumption 1 it holds that
Since under Assumption 1, , it suffices to prove a clt for the leading term from above:
where includes the th squared increment of the realized volatility . Although this leading term is linear in the realized volatilities, we cannot directly adopt a clt from Bibinger and Trabs (2020) due to the different structure of the weights. Thus, an original proof is required for the clt, for which we can reuse some ingredients from Bibinger and Trabs (2020).
We begin with the asymptotic variance. We can adopt Lemma 6.4 from Bibinger and Trabs (2020) and Proposition 6.5 and obtain for any that
for any spatial points and . We obtain that
The assumption that , , is used only for the convergence of the Riemann sum in the last step. For the covariances, we used Assumption 1 and an elementary estimate
Since under Assumption 1, the remainders are negligible.
Next, we establish a covariance inequality for the empirical characteristic function. There exists a constant such that for all :
where , for natural numbers .
Let ; the case can be treated separately and is in fact easier. We can write
where , , include the sums over , and have already been considered in Proposition 6.6 of Bibinger and Trabs (2020). This decomposition is useful, since is independent of . From the proof of Proposition 6.6 in Bibinger and Trabs (2020), we have for all that
with some constant , and from Eq. (59) of Bibinger and Trabs (2020) that
Thereby, we obtain that
with a constant , where we use that is bounded and (26). Since , we find a constant , such that
With the variance-covariance structure of , we obtain with some constants , , that
Since Eq. (54) from Bibinger and Trabs (2020) applied to our decomposition with and , yields that
A Lindeberg condition for the triangular array is obtained by the stronger Lyapunov condition. It is satisfied, since
In the last step the inner sum is estimated with a factor , and we just use the regularity of . As , we conclude the Lyapunov condition which together with (27) and the asymptotic analysis of the variance yields the clt for the centered triangular array by an application of Theorem B from Peligrad and Utev (1997).
5.3 Proof of Theorem 3
We establish first the asymptotic variances and covariance of the estimators before proving a bivariate clt. With (22), we obtain that
since for the remainders it holds true that
We used that the sum of covariances is of order
For the estimator (17b), we obtain that
The covariance terms for spatial points are asymptotically negligible by a similar estimate as in (30). For the asymptotic covariance between the two estimators, we obtain
The covariance terms for spatial points are asymptotically negligible by a similar estimate as in (30). Computing the elementary integrals and simple transformations yield the asymptotic variance-covariance matrix in Theorem 3.
To establish the bivariate clt, it suffices to prove the clt for the -valued triangular array
Here, we use the notation for compensated random variables introduced in Section 5.1 for the squared time increments. The first entry of this vector is the leading term of , and the second entry of