
Change point inference on volatility in noisy Itô semimartingales

11/23/2017
by Markus Bibinger, et al.

This work is concerned with tests on structural breaks in the spot volatility process of a general Itô semimartingale based on discrete observations contaminated with i.i.d. microstructure noise. We construct a consistent test building up on infill asymptotic results for certain functionals of spectral spot volatility estimates. A weak limit theorem is established under the null hypothesis relying on extreme value theory which allows for the construction of confidence intervals. A simulation study illustrates the finite-sample performance of the method and efficiency gains compared to a skip-sampling approach.



1 Introduction

Inference on structural breaks for discrete-time stochastic processes, particularly in time series analysis, is a very active research field within mathematical statistics. Whereas the latter is usually concerned with i.i.d. data, important contributions beyond that case are presented in Wu and Zhao (2007), proving limit theorems for nonparametric change-point analysis under weak dependence. These results serve as an important ingredient for the present work. So far inference on structural breaks for continuous-time stochastic processes has attracted less attention. Let us mention the very recent work by Bücher et al. (2017), which also deals with questions of detecting structural breaks of certain continuous-time stochastic processes. Our target of inference is the volatility process. Understanding the structure and dynamics of stochastic volatility processes is a highly important issue in finance and econometrics. Due to the outstanding role of volatility for quantifying financial risk, there is a vast literature on these topics.
Motivated by fundamental results in financial mathematics, the process X modeling the logarithmic price of an asset belongs to the class of semimartingales. Whereas statistics for general semimartingales is less developed, a lot of work has been done if X is an Itô semimartingale, that is, a semimartingale with a characteristic triplet which is absolutely continuous with respect to the Lebesgue measure. An overview of the existing theory is given in Jacod and Protter (2012). More precisely, our continuous-time model is

X_t = C_t + J_t,   (1)

with C the continuous part,

C_t = X_0 + ∫_0^t b_s ds + ∫_0^t σ_s dW_s,   (2)

with a standard Brownian motion W, the volatility process (σ_t) and the drift process (b_t). We define the pure-jump process J through the Grigelionis representation

J_t = ∫_0^t ∫ δ(s, z) 1_{|δ(s, z)| ≤ 1} (μ − ν)(ds, dz) + ∫_0^t ∫ δ(s, z) 1_{|δ(s, z)| > 1} μ(ds, dz),   (3)

with μ a Poisson random measure having a compensator ν of the form ν(ds, dz) = ds ⊗ λ(dz) with a σ-finite measure λ.

In this paper we are going to work with discrete observations within the framework of infill asymptotics. That is, our data is generated by discretizing a path of the continuous-time stochastic process on a regular, equidistant grid. Though the model (1) is quite flexible, empirical evidence suggests that recorded financial high-frequency data does not follow a ‘true’ semimartingale. Therefore, an extension of model (1) incorporating microstructure noise is necessary. Market microstructure noise is caused by various trading mechanisms such as the discreteness of prices and bid-ask spread bounce effects. The observed data is modeled through

Y_{iΔ_n},  i = 0, 1, …, n,   (4)

as a discretization, on the regular grid {iΔ_n}, of a continuous-time stochastic process given by the superposition

Y_t = X_t + ε_t,   (5)

with (ε_t) a centered white noise process modeling the microstructure noise. This prominent additive noise model has attained considerable attention in the econometrics and statistics literature; let us refer to the book by Aït-Sahalia and Jacod (2014) for an overview. Infill asymptotics implies n → ∞ or, equivalently, Δ_n → 0. Whereas the drift process is not identifiable in a high-frequency framework, also without noise, quantities such as the spot volatility process (σ_t²) and the integrated volatility process ∫_0^t σ_s² ds, respectively, are identifiable. Since they constitute key quantities for an econometric risk analysis, there exists a rich literature on estimation theory. We refer to Jacod and Protter (2012) for a comprehensive presentation of these topics. This work aims to increase the understanding of the structure of the spot volatility process and to complement the existing literature. The recent work by Bibinger et al. (2017) presents results on change-point detection for the model (1) without noise. We focus on a test that distinguishes continuous volatility paths from paths with volatility jumps. Inference on volatility jumps is currently of great interest in the literature, see, for instance, Jacod and Todorov (2010) and Tauchen and Todorov (2011). Moreover, it provides a necessary ingredient to analyze possible discontinuous leverage effects, see Aït-Sahalia et al. (2017) for a recent approach to this question. Inference on the volatility poses a challenging statistical problem, the volatility being latent and not directly observable. In the model (5) with microstructure noise, this becomes even more involved. The only work we are aware of addressing inference on volatility jumps in this model is by Bibinger and Winkelmann (2018), who extend the test for contemporaneous price and volatility jumps by Jacod and Todorov (2010) to noisy observations. Since their results are restricted to finitely many price-jump times, they do not render general inference on volatility jumps. In this work we extend the methods and results presented in Bibinger et al. (2017) in order to construct a general test for volatility jumps based on the model (5). Our statistics are functionals of spectral spot volatility estimates building on the local Fourier method of moments in Altmeyer and Bibinger (2015), which extends the volatility estimation approach introduced by Reiß (2011). While several linear estimators for functionals of the volatility have by now been generalized to noise-robust approaches, the considered change-point test is based on maximum statistics, and its extension to an efficient method under noise requires new techniques.

The key theorem for the test is a limit theorem under the null hypothesis with an extreme value limit distribution of Gumbel type. In particular, a clever rescaling of differences of local spot volatility estimates, quite different from the statistics considered in Bibinger et al. (2017), yields an asymptotically distribution-free test. In a certain sense, our test for volatility jumps complements the prominent Gumbel test for price jumps proposed by Lee and Mykland (2008) and further studied by Palmes and Woerner (2016a) and Palmes and Woerner (2016b). An extension of the Gumbel test for price jumps to noisy observations is given in Lee and Mykland (2012). We prove that our Gumbel test for volatility jumps is consistent. Similar to the price-jump test, it also facilitates detection of the jump times – the change points. One main difficulty in proving the limit theorem is to uniformly control the spot volatility estimation errors.

The paper is organized as follows. Section 2 introduces the testing problem and the assumptions. Section 3 constructs the test. We begin with the test for a continuous semimartingale, which is then extended to the general case utilizing truncation techniques. Section 4 establishes the asymptotic theory, including the limit theorem under the null hypothesis, consistency of the test and consistent estimation of the change point under the alternative hypothesis. In Section 5 we conduct a Monte Carlo simulation study. The main insight is that the new test considerably increases the power compared to (optimally) skip sampling the noisy data to lower frequencies and applying the non-noise-robust method by Bibinger et al. (2017) directly. Section 6 gathers the proofs.

2 Testing problem and theoretical setup

We will develop a test for volatility jumps. For the càdlàg squared volatility process (σ_t²), we aim to test hypotheses of the form

H₀: the path of σ² on [0, 1] is continuous   vs.   H₁: the path of σ² has at least one jump.   (6)

It is standard in the theory of statistics of high-frequency data to address such questions path-wise. This means that H₀ and H₁ are formulated for one particular path of the squared volatility, and we strive to make a decision based on discrete observations of the given path. The semimartingale X is defined on a filtered probability space (Ω, F, (F_t)_{t≥0}, P). We need further assumptions on the coefficient processes of X.

Assumption 2.1.

The processes (b_t) and (σ_t) are locally bounded. The volatility is almost surely strictly positive, that is, inf_{t ∈ [0,1]} σ_t² > 0 almost surely.

Our notation for jump processes follows Jacod and Protter (2012).

Assumption 2.2.

Suppose that |δ(t, z)|/γ(z) is locally bounded for some deterministic non-negative function γ which satisfies, for some r ∈ [0, 2]:

∫ (γ^r(z) ∧ 1) λ(dz) < ∞.   (7)

The smaller r, the more restrictive is Assumption 2.2. The case r = 0 is tantamount to jumps of finite activity.
Under the null hypothesis, we allow for very general and rough continuous stochastic volatility processes.

Hypothesis (H-a).

Under the null hypothesis, the modulus of continuity of the squared volatility,

w_δ(σ²) = sup_{t,s ∈ [0,1], |t−s| ≤ δ} |σ_t² − σ_s²|,

is locally bounded in the sense that there exists a sequence of stopping times (τ_k) with τ_k → ∞, such that w_δ(σ²) ≤ L_k δ^a on [0, τ_k], for some regularity exponent a ∈ (0, 1] and some (almost surely finite) random variables L_k.

The regularity exponent a is selected for the testing problem; the test can also be repeated for different values of a. The regularity exponent coincides with a usual Hölder exponent when L_k is a fixed constant. Allowing for a sequence of random variables (L_k) enables us to include stochastic volatility processes in our theory. Since stochastic processes such as Brownian motion are not contained in a fixed Hölder class, it is crucial to work with (slightly) more general smoothness classes determined by the exponent a and by (L_k). Observe that if the squared volatility satisfies a corresponding moment bound, then the Kolmogorov–Čentsov theorem implies that the hypothesis holds if the bounding sequence grows arbitrarily slowly. In particular, we can impose such a slowly growing sequence for our derivation of upper bounds in the sections below. The null hypothesis is the same as in Assumption 3.1 of Bibinger et al. (2017). Our test distinguishes the null hypothesis from alternative hypotheses of the following type.

Alternative (A-a).

Under the alternative hypothesis, there exists at least one t ∈ (0, 1) such that the squared volatility jumps at t.

We suppose that the squared volatility decomposes into a continuous part, which satisfies the regularity of the null hypothesis, plus a jump component. The jump component is a pure-jump semimartingale which satisfies Assumption 2.2.

In particular, the alternative hypothesis is not restricted to only one jump. We establish a consistent test when at least one non-negligible jump is present. Multiple jumps and quite general jump components are possible. Consistency of our test only requires that, in a small vicinity of a jump time, the continuous and jump parts of the volatility are sufficiently regular such that the jump is detected. Bibinger et al. (2017) impose in their Theorem 4.3 the condition that all volatility jumps are positive. This condition is replaced here by the semimartingale assumption on the jump component. Both ensure that a jump cannot be compensated by opposite jumps in an asymptotically small vicinity. In order to incorporate microstructure noise, we have to extend the original probability space to accommodate the noise process. The data generating process is defined on the extended filtered probability space. The construction can be pursued such that the process X remains a semimartingale on the extension with the same characteristic triplet and the same Grigelionis representation. For the details of the construction we refer to Chapter 16 in Jacod and Protter (2012). For the noise process, we impose further assumptions.

Assumption 2.3.

The stochastic process (ε_t) is defined on the extended probability space and fulfills the following conditions.

  1. (ε_t) is a centered white noise process with variance E[ε_t²] = η² > 0.

  2. The following moment condition holds:

    (8)

It is well known that the noise variance η² can be estimated in this model with √n-rate, either by a rescaled realized volatility or from the negative first-lag autocovariances of the noisy increments. Under Assumption 2.3, Zhang et al. (2005) provide a rate-optimal consistent estimator for η²:

η̂² = (2n)⁻¹ Σ_{i=1}^{n} (Y_{iΔ_n} − Y_{(i−1)Δ_n})².   (9)
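The rescaled realized volatility estimator (9) is straightforward to implement. The sketch below is ours; the function name and the simulated data are illustrative, not from the paper.

```python
import numpy as np

def noise_variance_zma(y):
    """Rescaled-realized-volatility estimator of the noise variance eta^2.

    Under additive i.i.d. noise, the realized volatility of the noisy
    observations is dominated by the noise term 2*n*eta^2, so dividing
    the sum of squared increments by 2n gives a sqrt(n)-consistent
    estimator of eta^2 (Zhang et al., 2005).
    """
    dy = np.diff(y)
    n = dy.size
    return np.sum(dy ** 2) / (2.0 * n)

# quick check on simulated noisy observations of a flat price path
rng = np.random.default_rng(1)
n = 100_000
eta = 0.01
y = 1.0 + eta * rng.standard_normal(n + 1)   # pure noise around a constant
print(noise_variance_zma(y))                  # close to eta^2 = 1e-4
```

At realistic sampling frequencies the noise term dominates the signal in the realized volatility, which is why the simple rescaling by 2n already works well.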
Remark 2.4.

The moment condition (8) is standard in the related literature, see, for instance, Assumption (WN) of (Aït-Sahalia and Jacod, 2014, page 221) or Assumption 16.1.1 of Jacod and Protter (2012), but in a certain sense purely technical. Let us stress that in our setting, we impose as few assumptions as possible on the volatility process. More precisely, allowing for an arbitrarily small regularity exponent a requires the existence of all moments in (8): the smaller a, the larger the moment order has to be chosen. Nevertheless, we point out that the moment condition is not that restrictive for standard models of volatility. In the usual case, for instance, where the volatility itself is assumed to be an Itô semimartingale, only the existence of moments up to a fixed order has to be imposed.

Remark 2.5.

While Assumption 2.3 is in line with standard conditions on the additive noise component in the literature, possible generalizations with respect to the structure of the noise process in three directions are of interest: serial dependence, heterogeneity and endogeneity. Such generalizations are also motivated by stylized facts in econometrics, see Hansen and Lunde (2006) for a detailed discussion. For instance, Chapter 16 in Jacod and Protter (2012) includes conditionally i.i.d. noise, endogenous as it may depend (in a certain way) on X, in the theory of pre-averaging estimators. This allows one to model phenomena such as noise caused by price discreteness (rounding). Bibinger and Winkelmann (2018) provide some first extensions of spectral spot volatility estimation to serially correlated and heterogeneous noise. Though the possible extensions appear to be relevant for applications, we work in the framework formulated in Assumption 2.3, mainly due to the lack of sufficient groundwork for the present work. Since we exploit some ingredients from previous works on spectral volatility estimation, particularly the form of the efficient asymptotic variance based on Altmeyer and Bibinger (2015), a generalization of our results requires non-trivial generalizations of these ingredients first. Furthermore, more general noise processes ask for extensive work on the estimation of the local long-run variance replacing (9). This topic, however, is beyond the scope of this work. Let us remark that it is likewise not obvious how to apply strong embedding principles in these cases to generalize our proofs. Since Wu and Zhao (2007) provide strong approximation results for weakly dependent time series, we nevertheless conjecture that certain generalizations in the three directions are possible.

3 The statistical methods

3.1 The continuous case

In this paragraph, we construct the test first for the model without jumps, that is, we assume that X = C is continuous. The construction of the test is based on a combination of the techniques by Altmeyer and Bibinger (2015) and Bibinger et al. (2017). In order to do so, we pick a sequence of bin lengths h_n with

(10)

and n h_n → ∞.
The observation interval [0, 1] is split into bins of length h_n, such that each bin is given by [k h_n, (k + 1) h_n), k = 0, …, h_n⁻¹ − 1.

Furthermore, we consider the orthonormal systems of sine functions, given by

Φ_{jk}(t) = √(2/h_n) sin(jπ h_n⁻¹ (t − k h_n)) 1_{[k h_n, (k+1) h_n)}(t),  j ≥ 1,

with spectral frequencies j on the bins k. We define, for any stochastic process Z, the increments by Δ_i Z = Z_{iΔ_n} − Z_{(i−1)Δ_n}, and the spectral statistics as weighted sums of the increments, with the weight functions Φ_{jk} evaluated at the midpoints of the increment intervals. The squared volatility can be estimated locally by a parametric estimator through oracle versions of bias-corrected linear combinations of the squared spectral statistics,

(11)

with variance-minimizing oracle weights, given by

(12)

The empirical scalar products and norms are the natural discrete analogues of the L²([0, 1]) scalar product, evaluated on the observation grid.
The order of h_n in (10) ensures that the error due to discretization of the signal part and the error due to noise are balanced.
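The bin-wise projection onto the sine system can be sketched as follows. This is a simplified Python illustration; the paper's statistics additionally involve noise bias corrections and the optimal weights (12), and all names here are ours.

```python
import numpy as np

def spectral_statistics(dy, n_bins, n_freq):
    """Bin-wise spectral statistics of increments on a regular grid over [0, 1].

    dy: increments of the observed process.
    For each bin k and frequency j, the increments are projected onto the
    sine function Phi_jk evaluated at the midpoints of the increment
    intervals (sketch of the local Fourier method of moments).
    """
    n = dy.size
    h = 1.0 / n_bins
    t_mid = (np.arange(n) + 0.5) / n          # midpoints of the increments
    S = np.zeros((n_bins, n_freq))
    for k in range(n_bins):
        in_bin = (t_mid >= k * h) & (t_mid < (k + 1) * h)
        t_loc = t_mid[in_bin] - k * h
        for j in range(1, n_freq + 1):
            phi = np.sqrt(2.0 / h) * np.sin(j * np.pi * t_loc / h)
            S[k, j - 1] = np.sum(dy[in_bin] * phi)
    return S
```

For noise-free Brownian increments with unit volatility, the squared spectral statistics fluctuate around one, reflecting the orthonormality of the sine system on each bin.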
In a second step we split the observation interval into “big blocks” of a larger length:

where the big-block length is some sequence fulfilling, as n → ∞:

(13)

for some constant and the regularity exponent a under the null hypothesis. Using the spectral estimators and averaging within each big block provides a consistent estimator for the local squared volatility:

(14)

A feasible adaptive estimation is obtained by a two-stage method, where the noise variance estimator from (9) and the pilot volatility estimator

(15)

are inserted in the oracle weights to derive feasible estimated weights. The result (15) has been established and used in previous works on spectral volatility estimation, see Bibinger and Winkelmann (2018). The pilot volatility estimator (15) is an average of squared bias-corrected spectral statistics over Fourier frequencies and bins. For a fixed number of frequencies and an optimal choice of the bin length, it renders a rate-optimal estimator. A sub-optimal choice will not affect our results, however. Weights other than (12) do not yield an asymptotically efficient estimator with minimal asymptotic variance. With estimated versions of the optimal weights (12), Altmeyer and Bibinger (2015) show that a Riemann sum over the estimates (11) yields a quasi-efficient estimator for the integrated squared volatility. Hence, we use the statistics (11) with exactly these weights and the orthogonal sine basis motivated by the efficiency results of Reiß (2011). Finally, with adaptive versions of the local volatility estimators (14),

(16)

our test statistic is given by

(17)

where the normalization involves the noise variance estimator from (9). We take the absolute value in the denominator since, due to the bias correction in (11), the local volatility statistics are not guaranteed to be positive.

Remark 3.1.
  1. The construction of the test statistic (17) is based on the idea of comparing the values of the spot volatility process on adjacent blocks and rejecting the null hypothesis of no jumps if the test statistic exceeds a suitable critical value sequence.

  2. The statistic (17) significantly differs from the statistic given in Equation (13) of Bibinger et al. (2017), beyond replacing spot volatility estimates by noise-robust spot volatility estimates. Though both statistics are quotients, their underlying structure is different. Whereas in Bibinger et al. (2017) the simple structure of the (asymptotic) variance of spot volatility estimates allows the use of statistics based on their quotients, (17) is based on differences rescaled with their estimated variances. The statistics which are used to wipe out the influence of the noise process imply that the volatility does not simply “cancel out” in our case as in Proposition A.3 of Bibinger et al. (2017). The construction of (17) is particularly appropriate from an implementation point of view, since its scaling yields an asymptotically distribution-free test and makes it possible to avoid pre-estimation of higher-order moments.∎
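The idea behind the test statistic, namely taking the maximum of absolute differences of adjacent local volatility estimates, each rescaled by an estimate of its standard deviation, can be sketched as follows. This is a schematic illustration, not the paper's exact scaling.

```python
import numpy as np

def max_jump_statistic(spot_var, avar):
    """Maximum of rescaled differences of adjacent local volatility estimates.

    spot_var: block-wise squared-volatility estimates.
    avar: estimated asymptotic variances of the corresponding differences
          (length len(spot_var) - 1).
    Large values indicate a volatility jump (sketch of the idea behind the
    test statistic; the paper's exact normalization differs).
    """
    diffs = np.abs(np.diff(spot_var))
    return float(np.max(diffs / np.sqrt(avar)))
```

On a path with constant local estimates the statistic is zero, while a single level shift of the estimates produces a large value at the corresponding block boundary.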

In order to increase the performance of the statistic, we also include a statistic based on overlapping big blocks:

(18)

with the overlapping-block local statistics given by

3.2 The discontinuous case

In this paragraph, we generalize the method to be robust in the presence of jumps in (1). When the volatility is our target of inference, the jumps of X are a nuisance quantity. In order to eliminate the jumps of X in the approach, we consider truncated spot volatility estimates

(19)

with a truncation exponent. Truncated volatility estimators were first introduced for integrated volatility estimation by Mancini (2009) and Jacod (2008). We define the test statistics with the truncated spot volatility estimates (19):

(20a)
(20b)
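A minimal sketch of the truncation step: increments larger than a threshold u_n are discarded before local volatility estimation. The threshold constants below are illustrative choices of ours, not the paper's calibration.

```python
import numpy as np

def truncate_increments(dy, n, tau=0.49, alpha=5.0):
    """Eliminate price jumps by truncating large increments.

    Increments exceeding the threshold u_n = alpha * n**(-tau) are set
    to zero before volatility estimation, following the truncation idea
    of Mancini (2009) and Jacod (2008); alpha and tau are illustrative
    tuning parameters.
    """
    u_n = alpha * n ** (-tau)
    return np.where(np.abs(dy) <= u_n, dy, 0.0)
```

Small diffusive increments pass through unchanged, while an increment dominated by a jump is removed, so the subsequent spot volatility estimates only see the continuous part.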

4 Asymptotic theory

4.1 Limit theorem under the null hypothesis

The hypothesis test formulated in Section 2 is based on asymptotic results for the statistics and , constructed in Section 3.

Theorem 4.1.

Assume the setting of Section 3. If Assumptions 2.1 and 2.3 hold and the big-block sequence satisfies condition (13), then we have under the null hypothesis that

(21)

where the limit variable follows an extreme value distribution of Gumbel type with distribution function G(x) = exp(−e^{−x}).

Theorem 4.1 is a key tool tackling the testing problem which is based on non-overlapping big blocks. The following result covers the case of overlapping big blocks.

Corollary 4.2.

Given the assumptions of Theorem 4.1, the following weak convergence holds under the null hypothesis:

(22)

with as in Theorem 4.1.

We extend this result to the setup with jumps in X when using truncated functionals.

Proposition 4.2.

Let the normalizing sequences be defined as in Theorem 4.1. Suppose that the truncation threshold in (19) is chosen with a fixed constant and truncation exponent, and that Assumption 2.1, Assumption 2.3 and Assumption 2.2 hold with

(23)

Then we have under the null hypothesis that

(24a)
(24b)

with as in Theorem 4.1.

It is natural that we derive the same limit results as above, since the truncation aims to eliminate the nuisance jumps. Proposition 4.2 gives rather minimal conditions, in particular (23), under which we can guarantee that the truncation works in this sense.

Remark 4.3.

Condition (23) ensures that different error terms in the proof of Proposition 4.2 are asymptotically negligible. Though we state it in terms of upper bounds on the jump activity r, it rather puts restrictions on the interplay between the jump activity, the truncation exponent and the smoothing parameters. Given the regularity a from the null hypothesis, we choose the big-block length close to the upper bound in (13) to attain the highest possible power of the test. The case of a semimartingale volatility process appears the most relevant one, including a test for jumps in such a process. Rewritten in terms of bounds on the truncation exponent, (23) gives explicit admissible ranges.

For jumps of finite activity, we only have mild lower bounds on the choice of the truncation exponent. Usually, a choice of the exponent close to 1 is advocated in previous works on truncated volatility estimation. The different error terms under noise for the maximum obtained here actually suggest that a smaller choice is even better when only mild jump activity has to be accommodated. Overall, the conditions on the jumps are not much more restrictive than those required for central limit theorems of linear volatility estimators, see Chapter 13 of Jacod and Protter (2012). Compared to Proposition 3.5 of Bibinger et al. (2017), we relax the conditions on the jumps by a more sophisticated strategy in our proof. In particular, we do not have to restrict to a Lévy-type process with independent increments, since we work with Doob's submartingale maximal inequality instead of Kolmogorov's maximal inequality. With this strategy it is also possible to generalize the result in Proposition 3.5 of Bibinger et al. (2017).

4.2 Key ideas of the proof of the limit results

Since the proofs of the results stated in Section 4.1 are quite long, we briefly sketch the key ideas. The details are worked out in Section 6.
Starting with the continuous case, for the results given in Theorem 4.1 and Corollary 4.2, the main ingredients are described as follows. In the first step we carry out the crucial approximation where we show that the error from replacing the true log-price increments of X by Brownian increments multiplied with a locally constant approximation of the volatility is negligible. More precisely, we show that the spectral statistics are adequately approximated by counterparts in which the volatility is constant over the big blocks. The analogues of the spectral statistics after this approximation are given in (32).
In the second step, we conduct a time shift with respect to the volatility to approximate the volatility by the same constant in the differences of adjacent local estimates.
The third step is to replace the estimated asymptotic standard deviation in the denominator in (17) by its stochastic limit. The latter step is essentially completed by a Taylor expansion. Finally, we establish in a fourth step that the difference between the statistics using (14) with oracle weights and the statistics using (16) with adaptive weights is sufficiently small to extend the results to the feasible statistics.
The approximation steps combine Fourier analysis for the spectral estimation with methods from stochastic calculus. Disentangling the approximation errors of maximum statistics requires a deeper study than for linear statistics. After an appropriate decomposition of the terms, we frequently use the Burkholder, Jensen, Rosenthal and Minkowski inequalities to derive upper bounds.
The final step is to apply strong invariance principles by Komlós et al. (1976) and results from Sakhanenko (1996) to conclude with Lemma 1 and Lemma 2, respectively, in Wu and Zhao (2007). For the non-overlapping statistics we need Lemma 1, whereas the overlapping case needs the more involved limit result presented in Lemma 2 of Wu and Zhao (2007).
In order to prove Proposition 4.2, we show that under the stated conditions the jump-robust statistics have the same limit as in the continuous case. That is, the jumps do not affect the limit at all. We decompose the additional error term induced by truncation into several terms of different structure, which we prove to be asymptotically negligible under the mild conditions (23) on the jump activity and its interplay with the truncation and smoothing parameters. We use Doob's submartingale maximal inequality to bound one crucial remainder without imposing a more restrictive Lévy structural assumption as used in Bibinger et al. (2017).

4.3 Rejection rules and consistency

Based on the limit results presented in Section 4.1, we can summarize the following rejection rules. To this end, let q_{1−α} be the (1−α)-quantile of the Gumbel-type limit law of the limit variable in the limit theorems. Since the latter is absolutely continuous with respect to the Lebesgue measure, there is a unique solution, given by q_{1−α} = −log(−log(1 − α)) for the standard Gumbel law.
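For a standard Gumbel limit law, the quantile needed for the rejection rules has a closed form; the sketch below assumes the standard Gumbel normalization (the paper's limit law is of Gumbel type but may involve an affine rescaling).

```python
import math

def gumbel_quantile(alpha):
    """(1 - alpha)-quantile of the standard Gumbel law exp(-exp(-x)).

    Solving exp(-exp(-q)) = 1 - alpha gives q = -log(-log(1 - alpha)).
    """
    return -math.log(-math.log(1.0 - alpha))

print(gumbel_quantile(0.05))  # ~ 2.97 for a 5% level
```

The quantile grows only slowly as the level shrinks, which is typical for extreme value limits of maximum statistics.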

  1. Based on Theorem 4.1 and the notation used there, we reject the null hypothesis if the non-overlapping statistic exceeds the critical value induced by q_{1−α}:

    (25)
  2. Based on Corollary 4.2 and the notation used there, we reject the null hypothesis if the overlapping-block statistic exceeds its critical value:

    (26)
  3. Based on Proposition 4.2 and the notation used there, we reject the null hypothesis in the presence of jumps if the truncated non-overlapping statistic exceeds its critical value:

    (27)
  4. Based on Proposition 4.2 and the notation used there, we reject the null hypothesis in the presence of jumps if the truncated overlapping statistic exceeds its critical value:

    (28)
Theorem 4.4.

Suppose Assumption 2.1, Assumption 2.3, and, in the case with jumps, Assumption 2.2 with (23). The decision rules (25), (26), (27) and (28) provide consistent tests to distinguish the null hypothesis from the alternative hypothesis for the testing problem (6).

Consistency of the test means that under the alternative hypothesis, if the squared volatility jumps at some time with a fixed minimal jump size, the power of the test, for instance by (25), tends to one as n → ∞.

Theorem 4.1 ensures that (25) facilitates an asymptotic level-α test that correctly controls the type I error, that is, the probability of falsely rejecting the null hypothesis is asymptotically at most α.

Thereby, even for small levels α, the test can asymptotically distinguish continuous volatility paths from paths with jumps.

Remark 4.5.

The rate in (21), (22), (24a) and (24b) determines how fast the power of the test increases in the sample size n. The convergence rate, for the big-block length close to the upper bound in (13), is close to the optimal convergence rate for spot volatility estimation under noise, see Munk and Schmidt-Hieber (2010). In light of the lower bound for the testing problem without noise established in Bibinger et al. (2017) and the relation of the models with and without noise studied in Gloter and Jacod (2001), we conjecture that the above test yields an asymptotically minimax-optimal decision rule. A formal generalization of the proof for the detection boundary from Theorem 4.1 of Bibinger et al. (2017) to our setting, however, appears not to be feasible, since it heavily exploits simple approximations of squared increments.

4.4 Consistent estimation of the change point

In this subsection, we present an estimator for the change point, which is of importance once we have decided to reject the null hypothesis. To this end, we suppose the alternative hypothesis and that there exists one jump time. The aim is to estimate this time, in general referred to as the change point or break date in change-point statistics, which here gives the time of the volatility jump. We suggest the estimator given by

(29)

where the maximization runs over the non-rescaled block-wise differences of the local volatility estimates.

It is sufficient to use these modified, non-rescaled versions of the statistics in (18). We prove the following consistency result for our estimator.
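The change-point estimator picks the block boundary with the largest absolute difference of adjacent local volatility estimates. A minimal sketch follows; the function name and the simple boundary convention are ours.

```python
import numpy as np

def change_point_estimate(spot_var, block_len):
    """Locate the volatility jump as the argmax of absolute differences
    of adjacent local squared-volatility estimates (non-rescaled sketch
    of the estimator in (29); block_len is the big-block length).
    """
    diffs = np.abs(np.diff(spot_var))
    k_star = int(np.argmax(diffs))
    return (k_star + 1) * block_len   # boundary between blocks k* and k*+1

# example: a jump between the 3rd and 4th block of length 0.1
est = change_point_estimate(np.array([1.0, 1.1, 0.9, 4.0, 4.1]), 0.1)
print(est)  # ~ 0.3
```

Consistency then means that this block boundary converges to the true jump time as the block length shrinks.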

Proposition 4.5.

Given the assumptions of Theorem 4.1, that is, Assumptions 2.1 and 2.3 and condition (13), assume that the alternative hypothesis applies with one jump time. Then it holds that

In particular,

Remark 4.6.

Put another way, we can detect jump times associated with shrinking sequences of jump sizes, in the sense of weak consistency, as long as the jump sizes do not vanish too fast relative to the convergence rate. Choosing the big-block length as small as possible, such that (13) is satisfied, yields the best possible rate, while for the testing problem in Theorem 4.1 we select it as large as possible. In the optimal case, a jump of fixed size can be detected with a convergence rate close to the optimal one. This provides important information on how precisely volatility jump times can be located under noisy observations. With jumps in X, we conjecture that an analogous result holds true under the conditions of Proposition 4.2. A sequential application of our methods allows for testing and the estimation of multiple change points. The extension of the estimation from the one-change to the multiple change-point alternative is accomplished similarly to Algorithm 4.9 in Section 4.2.2 of Bibinger et al. (2017).

5 Simulations and a bootstrap adjustment

In this section we investigate the finite-sample performance of the new method in a simulation study. We also analyze the efficiency gains of our noise-robust approach based on the spectral volatility estimation methodology in comparison to simply skip sampling the data and applying the non-noise-robust method from Bibinger et al. (2017). Skip sampling the data, which means we only consider every 60th data point, reduces the dilution by the noise and is a standard way to deal with high-frequency data in practice. We consider observations of (5), a typical sample size of high-frequency returns over one trading day. The noise is centered and normally distributed with a realistic magnitude, see, for instance, Bibinger et al. (2018). We implement the same volatility model as in Section 5 of Bibinger et al. (2017), where

(30)

is a semimartingale volatility process fluctuating around the seasonality function

(31)

 

Figure 1: Left: Histogram of the statistics on the left-hand side of (24b) under the null hypothesis and the alternative hypothesis, with the limit law density marked by the line. Right: Histogram of the corresponding non-noise-robust statistics from Bibinger et al. (2017), applied after (most efficient) skip sampling to a subset of the observations, with the limit law density marked by the line.

where the random component of the volatility is driven by a standard Brownian motion independent of W. We fix the remaining parameters and the drift as in that reference. We perform the simulations in R using an Euler–Maruyama discretization scheme.
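The simulations in the paper are run in R; a Python analogue of an Euler–Maruyama scheme for a noisy stochastic-volatility model is sketched below. The mean-reverting volatility dynamics and all parameter values are illustrative stand-ins of ours, not the paper's model (30)-(31).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 23_400                 # e.g. one observation per second over 6.5 hours (illustrative)
dt = 1.0 / n
eta = 0.01                 # noise magnitude (illustrative)

# Euler-Maruyama discretization of a toy mean-reverting volatility model:
# d(sigma_t^2) = kappa*(theta - sigma_t^2) dt + xi*sigma_t dB_t,
# price increments dX = sigma dW, with B independent of W (no leverage).
kappa, theta, xi = 5.0, 0.04, 0.5
sig2 = np.empty(n + 1); sig2[0] = theta
x = np.empty(n + 1); x[0] = 0.0
dW = rng.standard_normal(n) * np.sqrt(dt)
dB = rng.standard_normal(n) * np.sqrt(dt)
for i in range(n):
    sig2[i + 1] = max(sig2[i] + kappa * (theta - sig2[i]) * dt
                      + xi * np.sqrt(sig2[i]) * dB[i], 1e-8)
    x[i + 1] = x[i] + np.sqrt(sig2[i]) * dW[i]

# noisy observations as in model (5)
y = x + eta * rng.standard_normal(n + 1)
```

Note that the realized volatility of the noisy path is dominated by the noise term of order 2*n*eta^2, which is exactly the dilution that the spectral method and, less efficiently, skip sampling are designed to handle.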

5.1 Performance of the test, comparison to skip sampling, bootstrap adjustment and sensitivity analysis

Concerning the jumps of the price and of the volatility under the alternative hypothesis, we implement two different model configurations. In order to allow a fair comparison to Bibinger et al. (2017) in the evaluation of the efficiency gains of our method over a skip-sample approach, we adopt in Section 5.1 the setup from Section 5 of Bibinger et al. (2017). There, under the alternative hypothesis, the volatility admits one jump whose size equals the range of the expected continuous movement. Under the alternative hypothesis, the price admits a jump at the same time. Under both the null hypothesis and the alternative hypothesis, the price also jumps at some uniformly drawn time. All price jumps are normally distributed. More general jumps are considered in Section 5.2.

We consider the test statistic (20b) with overlapping blocks and truncation. Section 5.2 confirms that it outperforms the non-overlapping version (20a). Robustness with respect to different choices of the tuning parameters is discussed below. For the truncation, we choose the exponent according to Remark 4.3. In all cases, we compute the adaptive feasible statistics and do not make use of the generated volatility paths to derive the weights (12). We rather rely on the two-stage method and insert (15) and (9) in the statistics. The spectral estimates from (11) are computed as sums up to a spectral cut-off, since the fast decay of the weights (12) in the frequency, compare also (48), renders higher frequencies completely negligible. The investigated test statistics are thus feasible in exactly the same way in data applications.

Figure 1 visualizes the empirical distribution from the Monte Carlo iterations under the null hypothesis and the alternative hypothesis. The left plot shows our statistics, while the right plot gives the results for the statistics from Bibinger et al. (2017) applied to a skip sample of 500 observations. The skip-sampling frequency has been chosen to maximize the performance of these statistics. While they are reasonably robust to minor modifications, too large samples lead to an explosion of the statistics also under the null hypothesis, and much smaller samples result in poor power. The length of the smoothing window for the statistics given in Equation (24) of Bibinger et al. (2017) is adopted from the simulations in Bibinger et al. (2017). In the optimal case, the null and alternative hypothesis are reasonably well distinguished by the skip-sampling method – but the two plots confirm that our approach improves the finite-sample power considerably. For the spectral approach, a large fraction of the outcomes under the alternative exceeds the upper decile of the empirical distribution under the null; for the optimized skip-sample approach this fraction is considerably smaller. The approximation of the limit law appears somewhat imprecise. The relevant high quantiles, however, fit their empirical counterparts quite well.
Nevertheless, we propose a bootstrap procedure to fit the distribution of the statistic under the null hypothesis with improved finite-sample accuracy. We start with an estimator for the spot volatility