# Structural break analysis for spectrum and trace of covariance operators

This paper deals with analyzing structural breaks in the covariance operator of sequentially observed functional data. For this purpose, procedures are developed to segment an observed stretch of curves into periods for which second-order stationarity may be reasonably assumed. The proposed methods are based on measuring the fluctuations of sample eigenvalues, either individually or jointly, and traces of the sample covariance operator computed from segments of the data. To implement the tests, new limit results are introduced that deal with the large-sample behavior of vector-valued processes built from partial sample eigenvalue estimates. These results in turn enable the calibration of the tests to a prescribed asymptotic level. A simulation study and an application to Australian annual minimum temperature curves confirm that the proposed methods work well in finite samples. The application suggests that the variation in annual minimum temperature underwent a structural break in the 1950s, after which typical fluctuations from the generally increasing trend started to be significantly smaller.


## 1 Introduction

In functional data analysis, a natural way to measure the variability of a sample is through the covariance operator and its eigenvalues. This basic idea motivates perhaps the most widely used tool in the analysis of functional data: functional principal component analysis (FPCA). FPCA entails projecting functional observations into a lower-dimensional space spanned by a few functional principal components computed as eigenfunctions of an empirical covariance operator. Typically a small number of projections accounts for a large percentage of sample variation, often measured as the size of the corresponding eigenvalues of the empirical covariance operator relative to its trace. When functional data are obtained via randomized experiments, it is reasonable to assume that the covariance structure is homogeneous throughout the sample, and, in this case, the principal components and spectra computed from the sample covariance operator correspond to population quantities with well-known optimality properties for dimension reduction. The interested reader is referred to Ramsay and Silverman (2005), Ferraty and Vieu (2006), and Horváth and Kokoszka (2013) for textbook treatments of functional data analysis, and to Shang (2014) for a survey of FPCA.

Frequently, however, functional data are obtained not by simple random sampling, but rather sequentially. A common example is the generation of functional observations by parsing long, dense records of a continuous time phenomenon, such as historical temperature data, into a functional time series, such as annual temperature profiles; see Aue et al. (2018), Aue and van Delft (2018), and van Delft et al. (2018). Sequences of the same kind also arise from sequential observations of other complex functional phenomena, such as functional magnetic resonance imaging and DNA minicircle evolution; see Aston and Kirch (2012) and Panaretos et al. (2010), respectively. Consequently, functional data often naturally display time series characteristics.

With such sequences of functional observations it is also often evident that their variability is not stable throughout the entire sample; rather, they exhibit several periods of distinct levels and fluctuations. Providing a mechanism to identify data segments for which variability can be assumed stable is useful for several reasons. First, FPCA-based analyses using the entire sample might be misleading in the presence of inhomogeneity in the variability in that either (1) the basis computed from the sample covariance operator may not be estimating the optimal basis for dimension reduction, and/or (2) statistics used to determine how many principal components to use, often based on sample eigenvalue estimates, may not perform as expected. As a result, too few or too many principal components could be considered in subsequent analyses. Breaks in the variability, as measured by eigenvalues, might also be of independent interest since they may signal a relevant change to the system under study. An example is given by structural breaks in the variability of annual minimum temperature curves constructed from historical records in Australia. It is seen below that, after the removal of an increasing trend curve, variability begins to decrease in the 1950s. Methods for identifying and pinpointing the nature of such structural breaks in functional data, and further giving statistical significance to such findings, have not been developed, to the best of our knowledge.

In this paper, tests for the constancy of the d largest eigenvalues and the trace of the empirical covariance operator of a functional time series are proposed and studied. The tests are based on comparing maximally selected quadratic forms derived from partial sample estimates of the eigenvalues of the covariance operator to the quantiles of their limiting distribution under the hypothesis that the sample is taken from a weakly dependent functional time series. This asymptotic result follows from a weak invariance principle for the vector-valued process of partial sample eigenvalue estimates that might be of independent interest.

This work is inspired by, and builds upon, a number of recent contributions in both the probability and statistics literature. In the setting of separable Hilbert space-valued random variables, Mas (2002) and Mas and Menneteau (2003) showed via perturbation theory that the central limit theorem, law of large numbers, and law of iterated logarithm hold for the spectra if analogous results can be established for the operators themselves. Kokoszka and Reimherr (2013) showed, under conditions similar to those used here, that eigenfunctions of sample covariance operators of weakly dependent functional time series are asymptotically Gaussian. Beran and Liu (2016) established the central limit theorem for the eigenvalue and eigenfunction estimates in functional data models under both short- and long-memory error conditions. With the goal of performing structural break analysis with finite-dimensional time series, Aue et al. (2009) established a weak invariance principle for the process of partial sample estimates of the covariance matrix, which was applied to derive structural break tests for the second-order structure of a vector-valued time series. Their results were extended to include strong approximations for partial sample spectra and principal components in Kao et al. (2018), which may be viewed as a finite-dimensional counterpart to this paper. Horváth and Rice (2018+) considered similar methods in the context of high-dimensional linear factor models.

There are several recent papers on two-sample and analysis of variance problems for functional data relevant to the present work. Most closely related is Jaruškova (2013), who developed two-sample and structural break tests for the covariance operator of independent, identically distributed functional data based on principal component projections, and Zhang and Shao (2015), who considered a two-sample test for the covariance operator of dependent functional data based on self-normalized statistics derived from eigenvalue estimates. Beran et al. (2016) developed a two-sample test for the equivalence of eigenspaces of two-sample covariance operators, while Pigoli et al. (2014) considered various metrics and two-sample tests for covariance operators. Boente et al. (2010) developed multi-sample tests under a common principal component assumption. Finally, Fremdt et al. (2012) and Panaretos et al. (2010) studied two-sample tests for the second-order structure of Gaussian functional data.

The rest of the paper is organized as follows. In Section 2, the basic problem is formalized, and assumptions and asymptotic results for partial sample eigenvalue estimates are detailed. Applications of these results to test for the constancy of the eigenvalues of the covariance operator are developed in Section 3. The findings of a Monte-Carlo simulation study are presented in Section 4, while the outcomes of an application to annual minimum temperature curves are reported in Section 5. All procedures are implemented in the R package fChange (see Sönmez et al., 2017), which may be downloaded from the CRAN website. Section 6 concludes. Technical details and proofs are collected in Appendices A and B. Below, the following notation is used. Write L2([0,1]) for the space of square-integrable, real-valued functions defined on [0,1]. Let ∥⋅∥ denote the standard norm induced by the inner product ⟨⋅,⋅⟩, the dimension being clear from the input function. The notation X may be used in place of X(t), and (Xi : i∈Z) is short for a sequence indexed by the integers Z.

## 2 Framework

Suppose that functional observations are generated by the model

 Xi(t)=μ(t)+εi(t),t∈[0,1],i∈Z, (2.1)

where μ denotes the common mean function of the Xi and (εi : i∈Z) a sequence of centered error functions treated as stochastic processes with sample paths in L2([0,1]). In order to solidify concepts, assume that E[∥εi∥2]<∞, and let

 C(i)(t,t′)=Cov(Xi(t),Xi(t′)),t,t′∈[0,1],

denote the covariance kernel of Xi. On L2([0,1]), C(i) defines the symmetric and positive definite Hilbert–Schmidt integral operator c(i) given by

 c(i)(f)(t)=∫C(i)(t,t′)f(t′)dt′,f∈L2([0,1]),

whose eigenfunctions φ(i)1,φ(i)2,… are commonly termed the principal components of the process Xi. The associated nonnegative, real, and ordered eigenvalues λ(i)1≥λ(i)2≥⋯ define the “variance explained” by successive principal components. Given a sample X1,…,Xn following (2.1), one often wishes to estimate these principal components and eigenvalues in order to perform dimension reduction. Under the assumption that the sequence (Xi : i∈Z) is strictly stationary, which in light of (2.1) is equivalent to the strict stationarity of the errors (εi : i∈Z), it follows that C(i)=C for all i∈Z, where C=C(1). Similarly, c(i)=c, λ(i)j=λj and φ(i)j=φj for all i. These common principal components may be estimated using the sample covariance kernel

 ^C(t,t′)=1nn∑i=1(Xi(t)−¯X(t))(Xi(t′)−¯X(t′)),t,t′∈[0,1],

where ¯X=n−1∑ni=1Xi, which in turn yields estimates ^λj and ^φj as solutions to the equations

 ^λj^φj(t)=∫^C(t,t′)^φj(t′)dt′,t∈[0,1]. (2.2)
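In practice, the eigenproblem (2.2) is solved after discretizing the sample covariance kernel on a grid, where the integral becomes a Riemann sum and the operator eigenvalues are recovered from a matrix eigendecomposition. A minimal sketch in Python; the grid size and the two-mode toy process are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 101                          # number of curves, grid points on [0, 1]
t = np.linspace(0.0, 1.0, p)

# Toy sample: two orthogonal modes with decaying score variances (illustrative).
X = (rng.normal(0.0, 1.0, (n, 1)) * np.sin(2 * np.pi * t)
     + rng.normal(0.0, 0.5, (n, 1)) * np.cos(2 * np.pi * t))

Xc = X - X.mean(axis=0)                  # centred curves X_i - X̄
C_hat = Xc.T @ Xc / n                    # discretized kernel Ĉ(t, t')

# Riemann approximation of the integral in (2.2): dividing the matrix by the
# grid size makes its eigenvalues approximate the operator eigenvalues λ̂_j.
lam_hat = np.linalg.eigvalsh(C_hat / p)[::-1]
print(lam_hat[:3])
```

For this toy process the leading operator eigenvalues are close to 0.5 and 0.125 (score variance times the L2 norm of the mode), with the remainder numerically zero.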

A potential issue with this approach arises as follows: if the errors in (2.1) are non-stationary, for instance if their covariance changes within the sample, then principal components and eigenvalues defined in (2.2) may not lead to optimal dimension reduction and/or summaries of variability. Defining

 Λ(i)d=(λ(i)1,…,λ(i)d)⊤∈Rd,

with ⊤ signifying transposition, the foregoing motivates the study of the null hypothesis

 H0:Λ(1)d=⋯=Λ(n)d

and the alternative

 HA:Λ(1)d=⋯=Λ(k∗)d≠Λ(k∗+1)d=⋯=Λ(n)d,

where k∗=⌊θn⌋, with θ∈(0,1). The alternative hypothesis describes the situation in which there is a structural break in the d largest eigenvalues taking place at the unknown break point k∗. In order to test H0 against HA, consider partial sample estimates of C given by

 ^Cx(t,t′)=1n⌊nx⌋∑i=1(Xi(t)−¯X(t))(Xi(t′)−¯X(t′)),x,t,t′∈[0,1]. (2.3)

The estimate ^Cx may be used to define a partial sample estimate of c as

 ^cx(f)(t)=∫^Cx(t,t′)f(t′)dt′,x,t∈[0,1]. (2.4)

For x∈[δ,1], let ^λ1(x)≥⋯≥^λd(x) denote the ordered eigenvalues of ^cx with corresponding orthonormal eigenfunctions ^φ1(x),…,^φd(x). Throughout, the following assumptions will be invoked regarding strict stationarity of the underlying functional time series, the level of serial dependence between successive functions in the sample, and the spacing of the population eigenvalues λ1,…,λd+1.
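Note that (2.3) keeps the 1/n normalization rather than 1/⌊nx⌋, so that under stationarity ^λj(x) grows roughly linearly in x, approximating xλj. A sketch of the partial sample eigenvalue process; the rank-one toy process and grid are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 64
t = np.linspace(0.0, 1.0, p)
X = rng.normal(0.0, 1.0, (n, 1)) * np.sin(2 * np.pi * t)   # stationary toy sample

Xc = X - X.mean(axis=0)           # full-sample mean X̄, as in (2.3)

def lambda_hat(x, j=0):
    """(j+1)-th largest eigenvalue of ĉ_x; note the 1/n (not 1/⌊nx⌋) scaling."""
    k = int(np.floor(n * x))
    Cx = Xc[:k].T @ Xc[:k] / (n * p)    # extra 1/p for the Riemann integral
    return np.linalg.eigvalsh(Cx)[::-1][j]

print([round(lambda_hat(x), 3) for x in (0.25, 0.5, 1.0)])
```

For this process λ1 ≈ 0.5, so the printed values should be close to 0.125, 0.25 and 0.5, reflecting the x-linear drift of the partial sample estimates under stationarity.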

###### Assumption 2.1.

It is assumed that

(a) there is a measurable function g:S∞→L2([0,1]), where S is a measurable space, and independent, identically distributed (iid) innovations (ϵi : i∈Z) taking values in S such that εi=g(ϵi,ϵi−1,…) for i∈Z;

(b) there are ℓ-dependent sequences (εi,ℓ : i∈Z) such that, for some p>4,

 ∞∑ℓ=0(E[∥εi−εi,ℓ∥p])1/p<∞,

where εi,ℓ=g(ϵi,…,ϵi−ℓ+1,ϵ∗i,i−ℓ,ϵ∗i,i−ℓ−1,…), with the ϵ∗i,j being independent copies of ϵ0 independent of (ϵi : i∈Z).

Assumption 2.1(a) implies that (εi : i∈Z) is strictly stationary, and hence that H0 holds. Processes satisfying Assumption 2.1(b) were termed Lp-m-approximable processes by Hörmann and Kokoszka (2010), and cover most stationary functional time series models of interest, including functional AR and ARMA processes (see Aue et al., 2015; and Bosq, 2000). It is assumed that the underlying error innovations ϵi are elements of an arbitrary measurable space S. However, in many examples S is itself a function space, and εi=g(ϵi,ϵi−1,…) is a functional of iid error functions. In order to obtain a normal approximation for the sample eigenvalues of ^C, one must assume at least four moments for the norm of the observations, and so the assumption p>4 is nearly optimal in this sense.

###### Assumption 2.2.

There exists an integer d such that λ1>λ2>⋯>λd>λd+1.

Assumption 2.2 is standard in the FPCA literature. It ensures that the eigenspaces belonging to the d largest eigenvalues of c are one-dimensional and that the spacing between successive eigenvalues λ1,…,λd+1 is bounded away from zero. Under H0, denote the vector of the d largest eigenvalues of c by

 Λd=(λ1,…,λd)⊤∈Rd.

To consider tests based on the vector of partial sample estimates of Λd, define

 ^Λd(x)=(^λ1(x),…,^λd(x))⊤,x∈[δ,1],

and note that this gives rise to the process (^Λd(x):x∈[δ,1]) living in Dd([δ,1]), the d-dimensional Skorohod space on the interval [δ,1], with some δ>0; see Chapter 3 of Billingsley (1968). Let

 θi,j=⟨εi⊗εi−E[εi⊗εi],φj⊗φj⟩,j=1,…,d. (2.5)

The following theorem establishes the asymptotic properties of a suitably normalized version of the process (^Λd(x):x∈[δ,1]).

###### Theorem 2.1.

If model (2.1) and Assumptions 2.1 and 2.2 hold, then, for any δ∈(0,1),

 √n(^Λd(x)−⌊nx⌋nΛd:x∈[δ,1])⟹(Σ1/2dW(d)(x):x∈[δ,1])(n→∞),

where ⟹ denotes weak convergence in Dd([δ,1]), W(d) a standard d-dimensional Brownian motion, and Σd a covariance matrix with entries

 Σd(j,j′)=∞∑i=−∞Cov(θ0,j,θi,j′),j,j′=1,…,d.

For i∈Z, let Θi=(θi,1,…,θi,d)⊤, with θi,j as in (2.5). It is seen that Σd is the usual long-run covariance matrix (or spectral density matrix at frequency zero) of the stationary sequence (Θi : i∈Z) in Rd. Assuming for the moment that a series satisfying Assumptions 2.1 and 2.2 were iid, Σd in Theorem 2.1 would reduce to the matrix with entries Cov(θ0,j,θ0,j′), coinciding with standard asymptotic normality results for the eigenvalues computed from sample covariance operators based on a simple random sample; see Mas and Menneteau (2003). As a corollary to Theorem 2.1, the limiting distribution of the individual partial sample empirical eigenvalue estimates is obtained. These asymptotics are useful in evaluating whether individual eigenvalues have undergone a structural break.

###### Corollary 2.1.

If model (2.1) and Assumptions 2.1 and 2.2 hold, then, for j=1,…,d and any δ∈(0,1),

 √n(^λj(x)−⌊nx⌋nλj:x∈[δ,1])⟹(σjW(x):x∈[δ,1])(n→∞), (2.6)

where ⟹ denotes weak convergence in D([δ,1]), W a standard one-dimensional Brownian motion, and σ2j=Σd(j,j).

## 3 Structural breaks in the covariance operator

### 3.1 Testing for structural breaks in the spectrum

As documented in Aue and Horváth (2013) in univariate and multivariate contexts, a natural way to measure the validity of H0 is to consider the magnitude of the vector-valued cumulative sum process

 ^Λd(x)−⌊nx⌋n^Λd(1),

maximized over the partial sample parameter x∈[δ,1]. Large values of this magnitude would be interpreted as evidence of inhomogeneity of the eigenvalues. Theorem 2.1 may be used to determine the typical size of such a maximum. In order to pursue this goal, the following assumption is imposed.

###### Assumption 3.1.

The matrix Σd defined in Theorem 2.1 is invertible and there is an estimator ^Σd of Σd satisfying

 |Σd−^Σd|F=oP(1), (3.1)

where |⋅|F is the Frobenius norm.

Appendix B outlines a way to construct such a covariance estimator. There, a kernel lag-window type estimator

 ^Σd=∞∑ℓ=−∞w(ℓ/h)^Γℓ,θ,^Γℓ,θ=1n∑i∈Iℓ(^Θi−¯Θ)(^Θi+ℓ−¯Θ)⊤,

of Σd is discussed in some detail. Here, w denotes a weight function and h a bandwidth parameter, Iℓ={1,…,n−ℓ} if ℓ≥0 and Iℓ={1−ℓ,…,n} if ℓ<0, and ^Θi=(^θi,1,…,^θi,d)⊤ is the estimated score vector whose entries are given by

 ^θi,j=⟨(Xi−¯X)⊗(Xi−¯X)−^C1,^φj,1⊗^φj,1⟩, (3.2)

while ¯Θ is the sample mean of the ^Θi. It is shown in Appendix B that this estimator satisfies (3.1) under standard conditions on the weight function w and the bandwidth h. In order to test H0, consider then the quadratic form statistic

 Jn(δ)=Jd,n(δ)=supδ≤x≤1κ⊤n(x)^Σ−1dκn(x),

where

 κn(x)=√n(^Λd(x)−⌊nx⌋n^Λd(1)),x∈[0,1].

To evaluate the constancy of individual eigenvalues, consider the test statistic

 Ij,n(δ)=supδ≤x≤1√n^σj∣∣∣^λj(x)−⌊nx⌋n^λj(1)∣∣∣,j=1,…,d,

where ^σ2j=^Σd(j,j). The following result is a consequence of Theorem 2.1.
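To make the construction concrete, the first individual statistic can be computed by evaluating the CUSUM of partial sample leading eigenvalues. The sketch below uses a crude iid plug-in for the scale (the sample standard deviation of the estimated scores) in place of the full lag-window estimator of Appendix B, and the break-free toy data are an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, delta = 400, 48, 0.1
t = np.linspace(0.0, 1.0, p)
X = rng.normal(0.0, 1.0, (n, 1)) * np.sin(2 * np.pi * t)   # no break: H0 holds

Xc = X - X.mean(axis=0)

def lam1(k):
    """Leading eigenvalue of the partial sample operator, 1/n-normalized."""
    return np.linalg.eigvalsh(Xc[:k].T @ Xc[:k] / (n * p))[-1]

ks = np.arange(int(delta * n), n + 1)
lam = np.array([lam1(k) for k in ks])
cusum = np.sqrt(n) * np.abs(lam - (ks / n) * lam[-1])

# Crude scale estimate: sample std of the estimated scores θ̂_{i,1}
# (an iid simplification of the long-run variance in Assumption 3.1).
phi1 = np.linalg.eigh(Xc.T @ Xc / (n * p))[1][:, -1]   # Euclidean-unit vector
theta1 = (Xc @ phi1) ** 2 / p                          # ⟨X_i - X̄, φ̂_1⟩²
I1n = cusum.max() / theta1.std(ddof=1)
print(round(I1n, 3))
```

Under the null, the printed value behaves like the supremum of a standard Brownian bridge, so it should typically fall well below the 95% critical value of about 1.36.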

###### Theorem 3.1.

If the conditions of Theorem 2.1 and Assumption 3.1 are satisfied, then

 Jn(δ)D→J(δ)=supδ≤x≤1d∑j=1B2j(x)(n→∞),

and

 Ij,n(δ)D→I(δ)=supδ≤x≤1|Bj(x)|(n→∞),

where D→ indicates convergence in distribution and Bj, j=1,…,d, are iid standard Brownian bridges, noting that the distribution of I(δ) does not depend on j.

A test of asymptotic size α for H0 is to reject if Jn(δ) or Ij,n(δ) exceeds the (1−α) quantile of the limit distribution J(δ) or I(δ), respectively. These distributions can be obtained via Monte-Carlo simulation. Below, the test based on Jn(δ) is referred to as the joint test, the test based on Ij,n(δ) as the jth test or the jth individual test.
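The quantiles of the limit variables can be approximated by simulating Brownian bridges on a fine grid; a sketch, where the grid resolution and replication count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def sup_bridge_sq(d, delta, reps=4000, grid=400):
    """Monte Carlo draws of sup_{delta <= x <= 1} sum_{j=1}^d B_j(x)^2."""
    x = np.linspace(0.0, 1.0, grid + 1)
    out = np.empty(reps)
    for r in range(reps):
        steps = rng.normal(0.0, np.sqrt(1.0 / grid), (d, grid))
        W = np.concatenate([np.zeros((d, 1)), np.cumsum(steps, axis=1)], axis=1)
        B = W - x * W[:, -1:]                       # Brownian bridges B_j
        out[r] = (B[:, x >= delta] ** 2).sum(axis=0).max()
    return out

crit_J = np.quantile(sup_bridge_sq(d=3, delta=0.1), 0.95)          # joint test, d = 3
crit_I = np.quantile(np.sqrt(sup_bridge_sq(d=1, delta=0.0)), 0.95)  # sup|B|, ≈ 1.36
print(round(crit_J, 2), round(crit_I, 2))
```

The d = 1, δ = 0 case recovers the familiar Kolmogorov-type 95% critical value of about 1.36 (slightly attenuated by grid discretization), providing a check on the simulator.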

### 3.2 Testing for structural breaks in the trace

The eigenvalue λj is used to determine the variance of Xi explained by the jth principal component by comparing its magnitude to the cumulative variance of the function, measured by the trace of the covariance operator

 ∞∑j=1λj=∫C(t,t)dt=tr(c).

A common criterion for selecting the number of principal components for subsequent analysis is to take the minimal d that causes the total variance explained (TVE) by the first d principal components to exceed a user-selected threshold v, that is,

 d=dv=min{d:λ1+⋯+λdtr(c)≥v}. (3.3)
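The selection rule (3.3) amounts to a cumulative-sum search over the estimated spectrum; a toy illustration with a hypothetical set of eigenvalues:

```python
import numpy as np

def choose_d(eigvals, v):
    """Smallest d with (λ_1 + ... + λ_d) / tr(c) >= v, as in (3.3)."""
    tve = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.argmax(tve >= v)) + 1

lam = np.array([3.0, 1.5, 0.8, 0.4, 0.2, 0.1])   # hypothetical spectrum
print(choose_d(lam, 0.85))   # first three eigenvalues explain about 88% -> 3
```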

When performing principal component analysis for functional time series, it is often also of interest to determine if tr(c) is constant in conjunction with the constancy of the d largest eigenvalues. A partial sample estimator of the trace is given by

 Tn(x)=1n⌊nx⌋∑i=1∥Xi−¯X∥2,x∈[0,1]. (3.4)

The large-sample behavior of a centered version of the process (Tn(x):x∈[0,1]) is given next.

###### Theorem 3.2.

If Assumptions 2.1 and 2.2 hold, then

 √n(Tn(x)−xtr(c):x∈[0,1])⟹(σTW(x):x∈[0,1])(n→∞),

where ⟹ denotes weak convergence in D([0,1]), W a standard Brownian motion and, with ξi=∥εi∥2,

 σ2T=∞∑i=−∞Cov(ξ0,ξi). (3.5)

Utilizing Theorem 3.2 to test for a structural break in the trace of the covariance operator, one may set up the test statistic

 Mn=√n^σTsup0≤x≤1∣∣Tn(x)−xTn(1)∣∣,

with a consistent estimator ^σ2T of σ2T of the form

 ^σ2T=∞∑ℓ=−∞w(ℓ/h)^γℓ,^γℓ=1n∑i∈Iℓ(^ξi−¯ξ)(^ξi+ℓ−¯ξ),

where w is a weight function, h a bandwidth parameter, ^ξi=∥Xi−¯X∥2, and Iℓ is as above. The consistency of this estimator under standard assumptions on w and h is discussed in Appendix B. The following result is a consequence of Theorem 3.2.

###### Corollary 3.1.

If the conditions of Theorem 3.2 are satisfied and if ^σ2T is consistent for σ2T, then

 MnD→M=sup0≤x≤1|B(x)|(n→∞),

where B is a standard Brownian bridge.

As for the joint and the individual tests above, a test of asymptotic size α for the null of no structural break in the trace is to reject if Mn exceeds the (1−α) quantile of the limit distribution M. This test will be referred to as the trace test below.
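The trace test reduces to a scalar CUSUM on the squared norms of the centered curves. A sketch with an iid plug-in in place of the lag-window variance estimator; the toy data and scales are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 500, 32
t = np.linspace(0.0, 1.0, p)
X = rng.normal(0.0, 1.0, (n, 1)) * np.sin(2 * np.pi * t)   # stationary: H0 holds

Xc = X - X.mean(axis=0)
xi = (Xc ** 2).mean(axis=1)        # ξ̂_i = ||X_i - X̄||² via Riemann sum

Tn = np.cumsum(xi) / n             # partial sample trace process (3.4)
k = np.arange(1, n + 1)
cusum = np.abs(Tn - (k / n) * Tn[-1])

sigma_T = xi.std(ddof=1)           # iid plug-in for the long-run std
Mn = np.sqrt(n) * cusum.max() / sigma_T
print(round(Mn, 3))
```

Under the null, Mn is approximately the supremum of a standard Brownian bridge, so values above roughly 1.36 would reject at the 5% level.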

### 3.3 Consistency of test statistics

In this subsection, the test statistics proposed above are shown to be consistent under HA. To this end, suppose that the functional time series is stationary and weakly dependent before and after the break point k∗, and that an additional regularity condition is satisfied to ensure that the matrix estimate ^Σd does not have eigenvalues diverging to infinity under HA. All details are specified in the following assumption.

###### Assumption 3.2.

(a) There are measurable functions g1 and g2 mapping S∞ to L2([0,1]), where S is a measurable space, and iid innovations (ϵi : i∈Z) taking values in S such that

 εi={g1(ϵi,ϵi−1,…),i≤k∗,g2(ϵi,ϵi−1,…),i>k∗,

for i∈Z, where g1 and g2 satisfy Assumption 2.1(b). Let C1 and C2 denote the covariance kernels of the pre- and post-break errors, with corresponding covariance operators c1 and c2. Let (λ(1)j,φ(1)j)j∈N and (λ(2)j,φ(2)j)j∈N denote the eigenelements of c1 and c2, respectively.

(b) Let (λ∗j,φ∗j)j∈N denote the eigenelements of the integral operator c∗ with kernel

 C∗(t,t′)=τC1(t,t′)+(1−τ)C2(t,t′),

where τ∈(0,1) is such that k∗=⌊τn⌋. Assume then invertibility of the matrices Σ(1)d and Σ(2)d, whose entries are defined by

 Σ(k)d(j,j′)=∞∑i=−∞Cov(θ(k)0,j,θ(k)i,j′),j,j′=1,…,d;k=1,2,

with

 θ(k)i,j=⟨εi⊗εi−E[εi⊗εi],φ∗j⊗φ∗j⟩,εi=gk(ϵi,ϵi−1,…),  k=1,2.

Under (2.1), Assumption 3.2 guarantees that the sequence (Xi : i∈Z) is stationary and weakly dependent on the pre- and post-break segments. It is assumed that the first d eigenspaces associated with the pre- and post-break covariance operators are one-dimensional. One notable feature of Assumption 3.2 is that the eigenfunctions of pre- and post-break covariance operators need not necessarily align. In particular, all proposed tests are expected to be consistent if both eigenvalues and eigenfunctions undergo a structural break, so long as HA holds.

###### Theorem 3.3.

Let Assumption 3.2 and HA be satisfied.

(a) If Λ(1)d≠Λ(2)d, then Jn(δ)→∞ in probability as n→∞;

(b) If λ(1)j≠λ(2)j for some j∈{1,…,d}, then Ij,n(δ)→∞ in probability for that j as n→∞;

(c) If tr(c1)≠tr(c2), then Mn→∞ in probability as n→∞.

The proof of Theorem 3.3 is provided in Appendix B. The main difficulty in establishing the result is deriving the asymptotic behaviour of ^Σd and the eigenvalues of its inverse under HA.

## 4 Simulation Study

### 4.1 Setting

Data generating processes were considered following the setting of Aue et al. (2015, 2018). Specifically, functional data of sample size n were generated utilizing D Fourier basis functions v1,…,vD on the unit interval [0,1]. The results reported below remained largely invariant to the choice of larger values of D. Without loss of generality, the mean function in model (2.1) was assumed to be the zero function. Independent curves were then constructed according to

 ζi=D∑ℓ=1ξi,ℓvℓ,i=1,…,n,

where the ξi,ℓ are independent normal random variables with zero mean and standard deviations σℓ. Two standard deviation sequences σ=(σ1,…,σD)⊤ were chosen to mimic two different eigenvalue decays of the covariance operators, namely:

• Fast decay: ;

• Slow decay: .

To explore the finite-sample performance of the proposed tests, artificial breaks were inserted into the eigenvalue structures in (a) and (b) in the following way. For a fixed break location k∗, consider

 σ(1)=σandσ(2)=b∘σ,

where σ is as above, b=(b1,…,bD)⊤ is a vector of sensitivity parameters and ∘ denotes the Hadamard product (entry-wise multiplication). Then, σ(1) and σ(2) specify the eigenvalue structure of the pre- and post-break observations, with b controlling the magnitude of the break in a multiplicative fashion. For example, setting b=(1,…,1)⊤ results in the null hypothesis of structural stability, while b=(b1,1,…,1)⊤ restricts the break to occur only in the leading eigenvalue, with b1 determining the break size.
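The break insertion can be sketched as follows; the Fourier basis ordering, the decay of σ, the break size b1 = 2 and the midpoint break location are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, D, p = 200, 21, 64
t = np.linspace(0.0, 1.0, p)

# Fourier basis v_1, ..., v_D on [0, 1] (one assumed ordering).
V = np.ones((D, p))
for ell in range(1, (D - 1) // 2 + 1):
    V[2 * ell - 1] = np.sqrt(2.0) * np.sin(2 * np.pi * ell * t)
    V[2 * ell] = np.sqrt(2.0) * np.cos(2 * np.pi * ell * t)

sigma = 1.0 / np.arange(1, D + 1)       # illustrative eigenvalue decay
b = np.ones(D)
b[0] = 2.0                              # break only in the leading direction
k_star = n // 2                         # break location k*

sd = np.where(np.arange(n)[:, None] < k_star, sigma, b * sigma)  # σ(1), b ∘ σ
xi = rng.normal(0.0, 1.0, (n, D)) * sd  # independent scores ξ_{i,l}
X = xi @ V                              # curves ζ_i = Σ_l ξ_{i,l} v_l
```

Projecting the curves onto the first basis function recovers the leading scores, whose sample variance roughly quadruples after k∗ for this choice of b1.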

Both independent curves and functional time series curves were used, the latter to explore the effect of temporal dependence on the proposed tests. In particular, first-order functional autoregressions, FAR(1), were generated (using a burn-in period of initial curves that were discarded). The operator Ψ was set up as Ψ=κΨ0, where the random operator Ψ0 was represented by a D×D matrix whose entries consisted of independent, centered normal random variables with suitably chosen standard deviations. A scaling was applied to achieve ∥Ψ0∥=1. The constant κ can then be used to adjust the strength of temporal dependence. To ensure stationarity of the time series, κ<1 was selected.
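A sketch of the FAR(1) recursion on the basis scores with a norm-scaled random operator; the dimensions, innovation scales, burn-in length and κ = 0.5 are assumptions in the spirit of the setting above:

```python
import numpy as np

rng = np.random.default_rng(6)
D, n, burn = 21, 200, 50
kappa = 0.5                                   # dependence strength, < 1

Psi0 = rng.normal(0.0, 1.0, (D, D))
Psi = kappa * Psi0 / np.linalg.norm(Psi0, 2)  # scale so that ||Ψ|| = κ < 1

sigma = 1.0 / np.arange(1, D + 1)             # innovation std devs (illustrative)
scores = np.zeros((n + burn, D))
for i in range(1, n + burn):
    scores[i] = Psi @ scores[i - 1] + rng.normal(0.0, 1.0, D) * sigma
scores = scores[burn:]                        # discard burn-in curves
```

Keeping the spectral norm of Ψ strictly below one guarantees a stationary solution of the recursion.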

With the above in place, the following four settings were studied.

• Setting 1: b1 varies, with bℓ=1 for all ℓ≠1;

• Setting 2: b2 varies, with bℓ=1 for all ℓ≠2;

• Setting 3: b3 varies, with bℓ=1 for all ℓ≠3;

• Setting 4: b1, b2 and b3 vary jointly, with bℓ=1 for all ℓ>3.

Settings 1–3 correspond to a structural break individually affecting the first, second and third eigendirections, respectively. Setting 4 allows for the leading three eigendirections to jointly undergo a structural break. All settings include the null hypothesis by setting all bℓ to unity.

Combining the previous paragraphs, the functional curves Xi, i=1,…,n, were generated according to model (2.1). Simulations were run for both independent and FAR(1) curves for a range of sample sizes n across the different specifications above and break locations k∗=⌊θn⌋ with θ∈(0,1). For each data generating process, the individual test statistics Ij,n(δ), the joint test statistic Jn(δ) and the trace test statistic Mn were applied to detect structural breaks. All results reported in the next sections are based on 1,000 runs of the simulation experiments.

### 4.2 Level and power of the detection procedures

Empirical level and power of the proposed methods were evaluated relative to the nominal level. The results are presented in Table 4.1. It can be seen that, even for these rather small-to-moderate sample sizes, the tests kept their levels rather well across all specifications.

To examine the power of the tests, structural breaks were inserted as described in Section 4.1. The empirical rejection rates for each test statistic are reported as power curves in Figure 4.2 when the errors in model (2.1) are iid curves and the decay of the eigenvalues of the covariance operator is slow, as specified in setting (b) in the previous section. Further simulation evidence is provided in the Appendix. The findings may be summarized as follows.

• When the break is dominant in a single eigenvalue, the corresponding individual eigenvalue test tended to have reasonably high empirical power. The joint test was generally competitive with its individual counterparts, losing some power due to the estimation of eigenvalues not contributing to the structural break.

• Some care is required in the labeling of test statistics and settings. For instance, in the case that a sufficiently large break is inserted into the “second” eigendirection, this break will become dominant and constitute the leading mode of variation of the operator c∗ introduced in Assumption 3.2(b). It will therefore be picked up by the first individual test I1,n(δ). This effect is most clearly seen in Figure 4.2, with the first individual test predominantly picking up this break.

• When the break is not dominant but spread out across the three largest eigenvalues as prescribed in Setting 4, then the advantage of the joint test becomes more visible, especially for small sample sizes.

• The test for breaks in the trace displays higher empirical power when the break occurs in larger eigenvalues, since these contribute more to total variation. Once the break is inserted in smaller eigenvalues, the trace test loses some power. As expected, this phenomenon is even more evident when the eigenvalues of the covariance operator have a fast decay (results not shown here).

• The expected improvement in empirical power with increasing sample size was noted.

### 4.3 Performance of break date estimates

Once the null hypothesis of structural stability is rejected, it should be followed by an estimation of the break date. Assuming that model (2.1) and Assumptions 2.1 and 2.2 hold, the break date estimator accompanying the jth individual test can be specified through

 ^x∗n,j=argmaxδ≤x≤11^σj∣∣∣^λj(x)−⌊nx⌋n^λj(1)∣∣∣,

where ^σ2j=^Σd(j,j) and ^Σd is a consistent estimator of Σd as defined in Theorem 2.1. The break date estimator accompanying the joint test can be set up with

 ~x∗n=argmaxδ≤x≤1κn(x)⊤^Σ−1dκn(x),

where κn and ^Σd are defined in Section 3.1. Finally, in a similar fashion, the break date estimator for total variation is given by

 ¯x∗n=argmax0≤x≤11^σT|Tn(x)−xTn(1)|,

where Tn is given in (3.4).
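The trace break date estimator can be sketched on toy data with a variance drop at the midpoint; all scales, the break size and the break location are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 400, 32
t = np.linspace(0.0, 1.0, p)
k_star = 200

# Error scale drops from 1.0 to 0.5 at k*: a break in the trace.
scale = np.where(np.arange(n) < k_star, 1.0, 0.5)
X = scale[:, None] * rng.normal(0.0, 1.0, (n, 1)) * np.sin(2 * np.pi * t)

Xc = X - X.mean(axis=0)
xi = (Xc ** 2).mean(axis=1)        # squared norms of centred curves
Tn = np.cumsum(xi) / n             # partial sample trace process (3.4)
k = np.arange(1, n + 1)

k_hat = int(np.argmax(np.abs(Tn - (k / n) * Tn[-1]))) + 1   # CUSUM argmax
print(k_hat / n)                   # estimated break fraction
```

The scale estimate σ̂T cancels in the argmax, so it is omitted here; the CUSUM peaks near the true break fraction of one half.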

Settings 1–4 were used to insert eigenvalue breaks, with two choices of the scaling b. The slow decay of the eigenvalues in (b) above was considered. For each setting, sample size and choice of b, the break date estimators accompanying the joint, the first, second and third individual eigenvalue, and the trace tests were applied. The results are presented in the form of boxplots in Figure 4.2. Overall, the performance of the joint eigenvalue break date procedure is competitive with its marginal counterparts across all settings. However, the performance of the single eigenvalue break estimation procedures critically depends on the location and the magnitude of the break.

## 5 Application to annual temperature profiles

This section is devoted to demonstrating the practical relevance of the proposed methods using annual temperature curves from various measuring stations in Australia. The raw data consists of daily minimum temperature measurements recorded in degrees Celsius over about one hundred years. For each year, 365 (366 in leap years) raw data points were converted into a functional data object using Fourier basis functions. The data is available online and can be downloaded from www.bom.gov.au. Here, attention is focused on a measuring station located at the Post Office in Gayndah, a small town in Queensland. For this particular station, full annual temperature profiles were available from 1894 until 2007, resulting in the annual curves displayed in Figure 5.1.

Before attempting any structural break analysis for the covariance operator, the effect of potential nonstationarities in the mean function has to be taken into account. This can be done in several ways. Two approaches were discussed in Sönmez (2018), namely binary segmentation based on the method of Aue et al. (2018) and moving average smoothing. Since both methods led to almost identical conclusions in terms of the structural break analysis for the mean curve, thus indicating some robustness with respect to the method of detrending, only results for binary segmentation are reported here. Its application yielded three data segments for which the mean function is reasonably constant. The corresponding breaks were located at the years 1953 and 1972. Plots corroborating the findings are given in Figure 5.1, which indicate that the minimum temperature curves exhibit a generally increasing trend over the observed period.

After detrending, the joint structural break test and the trace test were applied. The dimension for jointly testing multiple eigenvalues was chosen based on the total variation explained (TVE) criterion in (3.3), setting v=0.85 so that at least 85% of the total functional variation was taken into account. As implied by the TVE plot in Figure 5.1, the temperature curves exhibit a slow decay of eigenvalues, and d was selected accordingly. The p-value of the joint eigenvalue test was 0.02 with a break date estimate of 1950. The test for a structural break in the trace led to the same conclusion, the procedure also identifying 1950 as the break date estimate.

It is evident from Table 5.1 that estimates of all eigenvalues decreased, often dramatically, after the estimated break location in 1950. This decrease also led to a significant structural break in the trace of the covariance operator: the total variation of the annual temperature curves shrank considerably after 1950. Taking mean function and covariance operator analyses together, it is seen that increasing annual minimum temperature profiles are accompanied by shrinking variation for this data set. To elucidate further, consider the variation explained by the first d eigendirections around the average minimum temperature curves before and after 1950. Total variation around the mean minimum temperature curve before 1950 can be represented through the eigenvalues λ(b)j, the superscript b signifying “before”. Similarly, total variation around the most recent average minimum temperature curve can be calculated from the eigenvalues λ(a)j, the superscript a signifying “after”. Here, λ(b)j and λ(a)j denote the jth eigenvalue of the covariance operator of the temperature curves before and after 1950, respectively. It is seen in Figure 5.2 that the annual minimum temperatures are rising while annual temperature variation is declining. This phenomenon is most pronounced in the months comprising the Australian winter season. As further visual evidence, Figure 5.2 displays the estimated pre- and post-break covariance kernels. Most of the differences can be seen to be along the diagonal and during the middle of the year.

A natural follow-up question is whether there are any dominant modes of variation driving the observed diminishing variation. To check this, individual structural break tests were applied to each of the leading eigenvalues. The results are presented in Table 5.2. After adjusting nominal levels for multiple testing, there is some evidence for individual breaks, but none of the directions, with one possible exception, exerted a dominant influence, indicating that differences across all directions compound to yield the strong rejection of the null observed for the trace test.

The remainder of this section offers a short discussion of whether the breaks in the spectrum of the covariance operator were accompanied by simultaneous breaks in the corresponding eigenfunctions. Dating and detecting structural breaks in the eigenfunctions, either jointly or marginally, is a rather complicated problem deserving of its own manuscript. Here, the problem is only briefly approached from the point of view of testing the equality of covariance operators in functional samples, as in Fremdt et al. (2012). These authors introduced a two-sample test which obeys a chi-squared asymptotic distribution with known degrees of freedom. To make use of these results in the present analysis, the (joint) effect of breaks in the eigenvalues was taken into account by standardizing the functional sample

through the transformation

 Y_i = \sum_{j=1}^{d} \frac{1}{\sqrt{\hat{\lambda}_j^{(\ell_i)}}} \, \langle X_i, \hat{\varphi}_j \rangle \, \hat{\varphi}_j,

where the label ℓ_i selects the pre-break eigenvalue estimates for observations up to the estimated break date and the post-break estimates thereafter, and the φ̂_j denote the sample eigenfunctions. The transformed data were then split into two subsamples at the estimated break date (1950). Since eigenvalue breaks have been removed from the Y_i, the two subsamples should have equal covariance structure if there was no break in the eigenfunctions. The test indeed yielded a large p-value, indicating covariance homogeneity. There was thus strong evidence that only the eigenvalues and total variation of the annual minimum temperature curves at Gayndah Post Office were subject to structural breaks, and that these breaks did not extend to the eigenfunctions. This indicates stability of the seasonal patterns apart from changes in their magnitude. For this particular data set, much of the structural break was captured by an increase in minimum temperatures during the Australian winter. It should finally be mentioned that the test of Fremdt et al. (2012) was designed for independent Gaussian functions. The authors discussed that when the normality and independence assumptions are violated, their test is rather conservative, in the sense that the probability of falsely rejecting the null hypothesis remains small. The large p-value obtained here adds further support to the conclusion of homogeneous eigenfunctions.
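The standardization above can be sketched on gridded data: project each curve onto the sample eigenfunctions and rescale the scores by the regime-specific eigenvalue estimates. This is a simplified finite-dimensional sketch, with segment-wise score variances standing in for the paper's partial-sample eigenvalue estimates; the function name is hypothetical.

```python
import numpy as np

def standardize_scores(X, k_star, d):
    """Remove eigenvalue breaks as in the transformation for Y_i:
    rescale each score by the pre-break eigenvalue estimate for
    observations i <= k_star and the post-break estimate otherwise.
    X is an (n x grid) array of discretized curves."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(X)
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1][:d]
    Phi = V[:, order]                 # columns: sample eigenfunctions
    scores = Xc @ Phi                 # projections <X_i, phi_j>
    lam_pre = scores[:k_star].var(axis=0)
    lam_post = scores[k_star:].var(axis=0)
    regime = np.arange(len(X))[:, None] < k_star
    lam = np.where(regime, lam_pre, lam_post)
    # Rebuild curves from standardized scores
    return (scores / np.sqrt(lam)) @ Phi.T
```

After this transformation the two subsamples can be fed to a two-sample covariance test; under no eigenfunction break, their covariance structures should agree.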

## 6 Conclusion

Several methods were proposed for detecting and localizing structural breaks in the covariance operator of a functional time series, based on measuring the fluctuations of partial sample estimates of its eigenvalues and trace. Collectively, the proposed tests provide a differentiated procedure for determining how variability in a functional time series changes, whether in specific eigenvalues, in several eigenvalues jointly, or in the trace of the operator. A simulation study showed that these methods perform well even in fairly small samples. In an application to functional data derived from daily minimum temperatures recorded in Australia, strong evidence was found that, after accounting for changes in the level, the variability of these curves decreased significantly, and moreover that this change appears to occur across all eigenvalues. The change in variability does not seem to affect the principal components/eigenfunctions, but a rigorous test for changes in the eigenfunctions is left as a direction for future research.

## References

• [1] Aue, A., Dubart Norinho, D., and Hörmann, S.: On the prediction of stationary functional time series. Journal of the American Statistical Association, 110 (2015), 378–392.
• [2] Aue, A., Hörmann, S., Horváth, L., and Reimherr, M.: Break detection in the covariance structure of multivariate time series models. The Annals of Statistics, 37 (2009), 4046–4087.
• [3] Aue, A. and Horváth, L.: Structural breaks in time series. Journal of Time Series Analysis, 34 (2013), 1–16.
• [4] Aue, A. and van Delft, A.: Testing for stationarity of functional time series in the frequency domain. Preprint, available at https://arxiv.org/abs/1701.01741.
• [5] Aue, A., Rice, G., and Sönmez, O.: Detecting and dating structural breaks in functional data without dimension reduction. Journal of the Royal Statistical Society, Series B, to appear.
• [6] Aston, J. and Kirch, C.: Estimation of the distribution of change-points with application to fMRI data, Annals of Applied Statistics, 6 (2012), 1906–1948.
• [7] Boente, G., Rodriguez, D., and Sued, M.: Inference under functional proportional and common principal component models, Journal of Multivariate Analysis, 101 (2010), 464–475.
• [8] Beran, J. and Liu, H.: Estimation of eigenvalues, eigenvectors and scores in FDA models with dependent errors. Journal of Multivariate Analysis, 147 (2016), 218–233.
• [9] Beran, J., Liu, H., and Telkmann K.: On two sample inference for eigenspaces in functional data analysis with dependent errors, Journal of Statistical Planning and Inference, 174 (2016), 20–37.
• [10] Bosq, D.: Linear Processes in Function Spaces. Springer, New York, 2000.
• [11] Brockwell, P.J. and Davis, R.A.: Time Series: Theory and Methods. Second Edition, Springer, New York, 2006.
• [12] Ferraty, F. and Vieu, P.: Nonparametric Functional Data Analysis, Theory and Practice, Springer, New York, 2006.
• [13] Fremdt, S., Horváth, L., Kokoszka, P., and Steinebach, J.: Testing equality of covariance operators in functional samples, Scandinavian Journal of Statistics, 40 (2012), 138–152.
• [14] Hörmann, S. and Kokoszka, P.: Weakly dependent functional data. The Annals of Statistics, 38 (2010) 1845–1884.
• [15] Horváth, L. and Kokoszka, P.: Inference for Functional Data with Applications. Springer, New York, 2012.
• [16] Horváth, L. and Rice, G.: Empirical eigenvalue based testing for structural breaks in linear panel data models, Preprint, available at https://arxiv.org/abs/1511.00284.
• [17] Jaruškova, D.: Testing for a change in the covariance operator, Journal of Statistical Planning and Inference, 143 (2013), 1500–1511.
• [18] Jirak, M.: On weak invariance principles for sums of dependent random functionals. Statistics & Probability Letters, 83 (2013), 2291–2296.
• [19] Kao, C., Trapani, L., and Urga, G.: Testing for instability in covariance structures, Bernoulli, 24 (2018), 740–771.
• [20] Kokoszka, P. and Reimherr, M.: Asymptotic normality of the principal components of functional time series. Stochastic Processes and their Applications, 123 (2013), 1546–1562.
• [21] Mas, A.: Weak convergence for the covariance operators of a Hilbertian linear process. Stochastic Processes and their Applications, 99 (2002), 117–135.
• [22] Mas, A. and Menneteau, L.: Perturbation approach applied to the asymptotic study of random operators. High dimensional probability III, Progress in Probability, 55 (2003), 127–134.
• [23] Panaretos, V., Kraus, D., and Maddocks, J.: Second-order comparison of Gaussian random functions and the geometry of DNA minicircles. Journal of the American Statistical Association, 105 (2010), 670–682.
• [24] Pigoli D., Aston, J., Dryden, I., and Secchi P.: Distance and inference for covariance operators. Biometrika, 101 (2014), 409–422.
• [25] Ramsay, J.O. and Silverman, B.W.: Functional Data Analysis. Springer, New York, 2005.
• [26] Shang, H.L.: A survey of functional principal component analysis, Advances in Statistical Analysis, 98 (2014), 121–142.
• [27] Sönmez, O.: Structural breaks in functional time series. PhD Dissertation, University of California, Davis, 2018.
• [28] Sönmez, O., Aue, A., and Rice, G.: fChange: Change point analysis in functional data. R package, version 0.1.0. Available at https://CRAN.R-project.org/package=fChange.
• [29] van Delft, A., Bagchi, P., Characiejus, V., and Dette, H.: A nonparametric test for stationarity in functional time series. Preprint, available at https://arxiv.org/abs/1708.05248.
• [30] Zhang, X. and Shao, X.: Two samples inference for the second order property of temporally dependent functional data, Bernoulli, 21 (2015), 909–929.

## Appendix A Proof of Theorems 2.1 and 3.2

The proof of Theorem 2.1 will be developed as a sequence of four lemmas. Throughout, generic absolute numeric constants whose precise values are unimportant are used without further comment. Under model (2.1) and Assumption 2.1, it may be assumed without loss of generality that the observations have mean zero. Define

 \tilde{C}_x(t,t') = \frac{1}{n} \sum_{i=1}^{\lfloor nx \rfloor} X_i(t) X_i(t'), \qquad x, t, t' \in [0,1].
###### Lemma A.1.

Under the conditions of Theorem 2.1,

 \sup_{\delta \le x \le 1} \big\| \hat{C}_x - \tilde{C}_x \big\| = O_P\Big(\frac{1}{n}\Big).
###### Proof.

The proof follows from standard arguments, some of which appear in subsequent lemmas, and so details are omitted. ∎
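The partial-sample kernels C̃_x underlying Lemmas A.1 and A.3 have a simple discrete analogue: running averages of the outer products X_i X_i^T over the first ⌊nx⌋ observations. The following sketch (illustrative names, gridded data in place of curves) computes them from a single cumulative sum.

```python
import numpy as np

def partial_sample_covariances(X, delta=0.1):
    """Discrete analogue of the partial-sample covariance kernels:
    C_x = n^{-1} * sum_{i <= floor(n*x)} X_i X_i^T for x on the grid
    {k/n : delta <= k/n <= 1}. X is an (n x p) array."""
    n, p = X.shape
    # Cumulative sums of outer products, one (p x p) slab per i
    cum = np.cumsum(np.einsum('ip,iq->ipq', X, X), axis=0)
    return {k / n: cum[k - 1] / n
            for k in range(int(np.ceil(delta * n)), n + 1)}
```

At x = 1 this recovers the full-sample covariance, and the supremum over x of the deviation from (⌊nx⌋/n)·C is exactly the quantity bounded in Lemma A.3.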

For x ∈ [δ, 1], let λ̃_j(x) and φ̃_j(x) denote the ordered eigenvalues and orthonormal eigenfunctions of the integral operator with kernel C̃_x. Since the eigenfunctions φ̃_j(x) and φ̂_j(x) are unique only up to a sign, assume without loss of generality that the signs are chosen so that their inner products with the corresponding population eigenfunctions are nonnegative.

###### Lemma A.2.

Under the conditions of Theorem 2.1,

 \sup_{\delta \le x \le 1} \big| \hat{\lambda}_j(x) - \tilde{\lambda}_j(x) \big| = O_P\Big(\frac{1}{n}\Big),

for j = 1, …, d, with d as defined in Assumption 2.2.

###### Proof.

Lemma 2.2 of Horváth and Kokoszka (2012) gives

 \big| \hat{\lambda}_j(x) - \tilde{\lambda}_j(x) \big| \le \big\| \hat{C}_x - \tilde{C}_x \big\|, \qquad \text{(A.1)}

so that the result follows from Lemma A.1. ∎

###### Lemma A.3.

Under the conditions of Theorem 2.1,

 \sup_{\delta \le x \le 1} \Big\| \tilde{C}_x - \frac{\lfloor nx \rfloor}{n} C \Big\| = O_P\Big(\frac{1}{\sqrt{n}}\Big).
###### Proof.

By the definition of \tilde{C}_x,

 \tilde{C}_x(t,t') - \frac{\lfloor nx \rfloor}{n} C(t,t') = \frac{1}{n} \sum_{i=1}^{\lfloor nx \rfloor} \rho_i(t,t'), \qquad x, t, t' \in [0,1], \qquad \text{(A.2)}

where