In change point detection, the null hypothesis is typically stationarity, but there are different types of alternatives, such as at most one change point or multiple change points. In this article, we are interested in testing stationarity against the so-called epidemic change or changed segment alternative: We have a random sample (with values in a sample space and distributions
) and we wish to test the null hypothesis
versus the alternative
Under the sample constitutes a changed segment starting at and having the length , and is then the corresponding distribution of the changed segment. This type of alternative is of special relevance in epidemiology and was first studied by Levin and Kline  in the case of a change in mean. Their test statistic is a generalization of the CUSUM (cumulative sum) statistic. Simultaneously, epidemic-type models were introduced by Commenges, Seal and Pinatel  in connection with experimental neurophysiology.
If the changed segment is rather short compared to the sample size, tests that give higher weight to short segments have more power. Asymptotic critical values for such tests were derived by Siegmund  in the Gaussian case (see also ). The logarithmic case was treated by Kabluchko and Wang , and the regularly varying case by Mikosch and Račkauskas . Yao  and Hušková  compared tests with different weightings. Račkauskas and Suquet ,  have suggested a compromise weighting that allows one to express the limit distribution of the test statistic as a function of a Brownian motion. However, in order to apply the continuous mapping theorem for this statistic, it is necessary to establish the weak convergence of the partial sum process to a Brownian motion with respect to a Hölder norm.
It is well known that the CUSUM statistic is sensitive to outliers in the data, see e.g. Prášková and Chochola . The problem becomes worse if higher weights are given to shorter segments. A common strategy to obtain a robust change point test is to adapt a robust two-sample test like the Wilcoxon test. This was first used by Darkhovsky  and by Pettitt  in the context of detecting at most one change in a sequence of independent observations. For a comparison of different change point tests see Wolfe and Schechtman . The results on the Wilcoxon type change point statistic were generalized to long range dependent time series by Dehling, Rooch, Taqqu . The Wilcoxon statistic can be expressed either as a rank statistic or as a (two-sample) U-statistic. This motivated Csörgő and Horváth  to study more general U-statistics for change point detection, followed by Ferger  and Gombay . Orasch  and Döring  have studied U-statistics for detecting multiple change points in a sequence of independent observations. Results for change point tests based on general two-sample U-statistics were given for short range dependent time series by Dehling, Fried, Garcia, Wendler , and for long range dependent time series by Dehling, Rooch, Wendler .
Gombay  has suggested using a Wilcoxon type test also for the epidemic change problem. The aim of this paper is to generalize these results in three aspects: to study more general
U-statistics, to allow the random variables to exhibit some form of short range dependence, and to introduce weightings to the statistic. This way, we obtain a robust test which still has good power for detecting short changed segments. To obtain asymptotic critical values, we will prove a functional central limit theorem for U-processes in Hölder spaces.
The article is organized as follows. Section 2 introduces U-statistic type test statistics for the epidemic change point problem. In Section 3 some experimental results are presented and discussed, whereas Section 4 deals with a concrete data set. Sections 5 and 6 constitute the theoretical part of the paper, where asymptotic results are established under the null hypothesis. Consistency under the changed segment alternative is discussed in Section 7. Finally, in Section 8, we present a table with asymptotic critical values for the tests under consideration.
2 Tests for a changed segment based on U-statistics
A general approach for constructing procedures to detect a changed segment is to use a measure of heterogeneity between two segments
where and . As neither the beginning nor the end of the changed segment is known, the statistics
may be used to test for the presence of a changed segment in the sample , where is a factor controlling the influence of either too short or too long data windows. In this paper we consider a class of U-statistic type measures of heterogeneity defined via a measurable function by
and the corresponding test statistics
Although other weighting functions are possible, our choice is limited by the application of a functional central limit theorem in Hölder spaces.
Recall that the kernel is symmetric if and antisymmetric if for all . Any non-symmetric kernel can be antisymmetrized by considering
Note that the kernel is antisymmetric if and only if for any independent random variables with the same distribution such that the expectation exists. The “if” part follows by Fubini’s theorem and antisymmetry. To see the “only if” part, first consider the one-point distribution and almost surely to conclude that for all . Next, consider the two-point distribution and conclude that and thus . So a U-statistic with an antisymmetric kernel has expectation zero if the observations are independent and identically distributed; such statistics are therefore good candidates for change point tests. We only consider antisymmetric kernels in this paper.
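As a quick numerical illustration (a sketch of ours, not part of the paper's development), the Wilcoxon kernel h(x, y) = 1{x < y} − 1/2 is antisymmetric whenever ties have probability zero, and the zero-expectation property for i.i.d. observations can be checked by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def wilcoxon_kernel(x, y):
    """Wilcoxon kernel h(x, y) = 1{x < y} - 1/2 (antisymmetric
    whenever ties have probability zero)."""
    return (x < y) - 0.5

# Antisymmetry h(x, y) = -h(y, x) on continuous data (no ties a.s.):
x, y = rng.normal(size=1000), rng.normal(size=1000)
print(np.allclose(wilcoxon_kernel(x, y), -wilcoxon_kernel(y, x)))  # True

# Zero expectation for i.i.d. observations (Monte Carlo check):
u, v = rng.normal(size=100_000), rng.normal(size=100_000)
print(abs(wilcoxon_kernel(u, v).mean()) < 0.01)  # True: mean is near 0
```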
In the case of a real-valued sample, examples of antisymmetric kernels include the CUSUM kernel or, more generally,
for an odd function , and the Wilcoxon kernel . The kernel leads to a Wilcoxon type statistic
whereas with the kernel we get a CUSUM type statistic
where . As more general classes of kernels and corresponding statistics, we can consider the CUSUM test applied to transformed data (
) or a test based on two-sample M-estimators (for some monotone function; see Dehling et al. ).
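To make the construction concrete, here is a minimal sketch (our notation and normalization, not necessarily the paper's) of a U-statistic type measure of heterogeneity between a candidate segment and the rest of the sample, instantiated with the CUSUM and Wilcoxon kernels:

```python
import numpy as np

def segment_heterogeneity(x, k, m, h):
    """Sum of h(x_i, x_j) over i inside the segment x[k:m] and j
    outside it -- a U-statistic type measure of heterogeneity.
    (Normalization and weighting are omitted in this sketch.)"""
    inside = x[k:m]
    outside = np.concatenate([x[:k], x[m:]])
    return float(h(inside[:, None], outside[None, :]).sum())

cusum_kernel = lambda u, v: u - v             # h(x, y) = x - y
wilcoxon_kernel = lambda u, v: (u < v) - 0.5  # h(x, y) = 1{x < y} - 1/2

rng = np.random.default_rng(1)
x = rng.normal(size=200)
x[50:100] += 2.0  # plant a changed segment with a mean shift

# Both measures react strongly to the planted segment:
print(segment_heterogeneity(x, 50, 100, cusum_kernel) > 0)     # True
print(segment_heterogeneity(x, 50, 100, wilcoxon_kernel) < 0)  # True
```

Scanning this measure over all admissible segment boundaries, with the weighting discussed above, yields the test statistics.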
Based on invariance principles in Hölder spaces discussed in the next section, we derive the limit distribution of the test statistics . Theorems 1 and 2 provide examples of our results. Let be a standard Wiener process and the corresponding Brownian bridge. Define for ,
If are independent and identically distributed random elements and is an antisymmetric kernel with for some , then for any , we have
where the variance parameter is defined by and .
Note that in practice, the random variables
might not have high moments, but if we use a bounded kernel like the Wilcoxon kernel, we know that the condition of the theorem holds for any , so we have the convergence for any . Also, in practical applications, the variance parameter has to be estimated. This can be done by
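As a hedged sketch of how such an estimation can look in the i.i.d. case (the paper's exact estimator is not reproduced here, so normalization details may differ), one can estimate the variance of the kernel projection h₁(x) = E h(x, X₂) by plugging in its empirical counterpart:

```python
import numpy as np

def projection_variance(x, h):
    """Plug-in estimate of Var(h_1(X)), where h_1(x) = E h(x, X').
    Uses the empirical projection h1_hat(x_i) = n^{-1} sum_j h(x_i, x_j).
    (A sketch only; normalization may differ from the paper.)"""
    h1_hat = h(x[:, None], x[None, :]).mean(axis=1)
    return float(h1_hat.var(ddof=1))

wilcoxon_kernel = lambda u, v: (u < v) - 0.5

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
# For any continuous distribution, h_1(x) = F(x) - 1/2, so the target
# value is Var(F(X)) = Var(Uniform[0,1]) = 1/12, about 0.0833.
print(projection_variance(x, wilcoxon_kernel))
```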
For the case of a dependent sample, we consider absolutely regular sequences of random elements (also called -mixing). Recall that the coefficients of absolute regularity are defined by
where is the sigma-field generated by .
Let be a stationary, absolutely regular sequence and let be an antisymmetric kernel, and assume the following conditions are satisfied:
for some ;
and for some .
Then for any , we have
where the long run variance parameter is given by
For bounded kernels, condition (ii) on the decay of the coefficients of absolute regularity reduces to
for some .
Following Vogel and Wendler , can be estimated using a kernel variance estimator. For this, define autocovariance estimators by
with . Then, for some Lipschitz continuous function with and finite integral, we set
where is a bandwidth such that and as .
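The following sketch illustrates a kernel long run variance estimator of this type. The Bartlett weight ρ(t) = max(0, 1 − |t|) is our choice for the Lipschitz function with ρ(0) = 1 (the paper does not fix one), and applying the estimator directly to a univariate series is likewise an assumption of this illustration:

```python
import numpy as np

def long_run_variance(y, bandwidth):
    """Kernel estimate of the long run variance sigma^2 = sum_k gamma(k),
    built from empirical autocovariances gamma_hat(k) weighted by the
    Bartlett kernel (Lipschitz continuous, rho(0) = 1, finite integral)."""
    n = len(y)
    yc = y - y.mean()
    lags = min(int(bandwidth), n - 1)
    total = float(yc @ yc) / n              # gamma_hat(0)
    for k in range(1, lags + 1):
        gamma_k = float(yc[:-k] @ yc[k:]) / n
        total += 2.0 * (1.0 - k / (lags + 1)) * gamma_k
    return total

# Sanity check on an AR(1) series, whose long run variance is
# sigma_eps^2 / (1 - a)^2 (about 2.78 for a = 0.4, unit innovations):
rng = np.random.default_rng(3)
a, n = 0.4, 20_000
eps = rng.normal(size=n)
y = np.empty(n)
y[0] = eps[0] / np.sqrt(1 - a**2)           # stationary start
for t in range(1, n):
    y[t] = a * y[t - 1] + eps[t]
print(long_run_variance(y, bandwidth=n ** (1 / 3)))
```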
With the help of the limit distribution and the variance estimators, we obtain critical values for our test statistic. Simulated quantiles for the limit distribution can be found in Section 8.
To discuss the behavior of the test statistics under the alternative, we assume that for each
we have two probability measures and on and a random sample such that for ,
Let . Assume that for all , the random variables are independent and let be an antisymmetric kernel. If
then it holds
For dependent random variables, we get a similar theorem:
Assume that for all , the random variables are absolutely regular with mixing coefficients not depending on , such that for some . Let be an antisymmetric kernel, such that there exist such that for all , . Furthermore, let and assume that
Then (4) holds.
3 Simulation results
We compare the CUSUM type and the Wilcoxon type test statistics in a Monte Carlo simulation study. The model is an autoregressive process of order 1 with , where the innovations are distributed. We assume that the first observations are shifted, so that we observe
Under independence, the distribution of the change point statistics does not depend on the beginning of the changed segment, only on its length, so we restrict the simulation study to segments of the form . In Figure 1, the results for independent observations () are shown. In this case, we use the known variance of our observations and do not estimate the variance. The relative rejection frequency of 3,000 simulation runs under the alternative is plotted against the relative rejection frequency under the hypothesis for theoretical significance levels of 1%, 2.5%, 5% and 10%.
As expected, the CUSUM test has a better performance than the Wilcoxon test for normally distributed data. For the exponential and the distribution, the Wilcoxon type test has higher power. For the long changed segment (), the weighted tests with outperform the tests with . For the short changed segment (), the Wilcoxon type test has more power with weight . The same holds for the CUSUM type test under normality. For the other two distributions, however, the empirical size is also higher for , so that the size-corrected power is not improved.
In Figure 2, we show the results for dependent observations (AR(1) with ). In this case, we estimated the long run variance with a kernel estimator, using the quartic spectral kernel and the fixed bandwidth . Both tests now become too liberal, with typical rejection rates of 13% to 15% for a theoretical level of 10%. For the long changed segment () it is better to use the weight , for the short segment () the weight . Under normality, the CUSUM type test has a better performance, though the difference in power is not very large. For the other two distributions, the Wilcoxon type test has better power. Although we have done some simulations with different locations of the changed segment, we only report the results for a changed segment positioned directly at the beginning, as in the case of independent observations. Let us just mention that the starting and end points played only a minor role in the results.
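The simulation setup above can be sketched as follows (the shift size and the concrete parameter values here are ours, chosen for illustration; the study's settings may differ):

```python
import numpy as np

def ar1(n, a, rng):
    """Stationary AR(1) sample e_t = a * e_{t-1} + eps_t with
    standard normal innovations."""
    eps = rng.normal(size=n)
    e = np.empty(n)
    e[0] = eps[0] / np.sqrt(1 - a**2)  # draw from the stationary law
    for t in range(1, n):
        e[t] = a * e[t - 1] + eps[t]
    return e

def epidemic_sample(n, seg_len, shift, a, rng):
    """Observations with a changed (mean-shifted) segment of length
    seg_len placed at the beginning of the sample, as in the study."""
    x = ar1(n, a, rng)
    x[:seg_len] += shift
    return x

rng = np.random.default_rng(4)
x = epidemic_sample(n=600, seg_len=60, shift=1.0, a=0.4, rng=rng)
print(x[:60].mean() - x[60:].mean())  # roughly the planted shift
```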
4 Data example
We investigate the frequency of searches for the term “Harry Potter” from January 2004 until February 2019, obtained from Google Trends. The time series is plotted in Figure 3. We apply the CUSUM type and the Wilcoxon type change point tests with weight parameters . The lag one autocovariance is estimated as 0.457, so that we have to allow for dependence in our testing procedure. We estimate the long run variance with a kernel estimator, using the quartic spectral kernel and the fixed bandwidth .
The CUSUM type test does not reject the hypothesis of stationarity at a significance level of 5%, regardless of the choice of . In contrast, the Wilcoxon type test detects a changed segment for any , even at a significance level of 1%. The beginning and end of the changed segment are estimated differently for different values of : The unweighted Wilcoxon type test with leads to a segment from January 2008 to June 2016. For , we obtain January 2012 to June 2016 as an estimate. The choice leads to an estimated changed segment from January 2012 to May 2016.
By visual inspection of the time series, we come to the conclusion that the changed segment estimated for the values fits the data better, because this segment coincides with a period of low search frequency. Furthermore, the spikes of this time series can be explained by the release of movies, and the estimated changed segment lies between the release of the last Harry Potter movie in July 2011 and the release of “Fantastic Beasts and Where to Find Them” in November 2016.
5 Double partial sum process
Throughout this section we assume that the sequence is stationary and is the distribution of each . Consider for a kernel the double partial sums
and the corresponding polygonal line process defined by
where for a real number , , , denotes the floor function. So , is a random polygonal line with vertices , . As a functional framework for the process we consider Banach spaces of Hölder functions. Recall that the space of continuous functions on is endowed with the supremum norm
The Hölder space , of functions such that
is endowed with the norm
Both and are separable Banach spaces. The space is isomorphic to .
For a kernel and a number , we say that satisfies the -FCLT if there is a Gaussian process such that
In order to make use of results for partial sum processes, we decompose the U-statistic into a linear part and a so-called degenerate part. Hoeffding’s decomposition of the kernel reads
and leads to the splitting
is the polygonal line process defined by the partial sums of the random variables . Decomposition (7) reduces the -FCLT to a Hölderian invariance principle for these random variables via the following lemma.
If there exists a constant such that for any integers
for any .
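For reference, Hoeffding's decomposition for an antisymmetric kernel can be written explicitly as follows (standard notation, a sketch of ours; X′ denotes an independent copy of X₁, and all expectations are assumed to exist):

```latex
% Hoeffding's decomposition of an antisymmetric kernel h:
h(x,y) = h_1(x) - h_1(y) + g(x,y),
\qquad h_1(x) := \operatorname{E} h(x, X'),
```

where the degenerate part g(x, y) := h(x, y) − h₁(x) + h₁(y) satisfies E g(x, X′) = 0 for every x, since E h₁(X′) = E h(X, X′) = 0 by antisymmetry.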
Before we proceed with the proof of Lemma 6, we need some preparation. Let be the set of dyadic numbers of level in , that is, and for , . For set , , . For and define
The sequential norm on , defined by
is equivalent to the norm , see : there is a positive constant such that
Set . In what follows, we denote by the logarithm to base ().
For any there is a constant such that if is a polygonal line function with vertices , then
First we remark that for any ,
As and belong to , this gives,
and it follows by (10),
If and belong to the same interval, say , then, observing that the slope of in this interval is precisely , we have
where . If then
If , and , then
We apply these three configurations to and . If then only the first two configurations are possible and we deduce
If then we apply the third configuration to obtain
To complete the proof just observe that if and so . ∎
Proof of Lemma 6.
The following lemma gives general conditions for the tightness of the sequence in Hölder spaces.
Assume that the sequence is stationary and for a , there is a constant such that for any
Then for any the sequence is tight in the space .
Fix such that . By the Arzelà–Ascoli theorem, the embedding is compact; hence it is enough to prove
By Lemma 8,
with some constant