# A residual-based bootstrap for functional autoregressions

We consider the residual-based or naive bootstrap for functional autoregressions of order 1 and prove that it is asymptotically valid for, e.g., the sample mean and for empirical covariance operator estimates. As a crucial auxiliary result, we also show that the empirical distribution of the centered sample innovations converges to the distribution of the innovations with respect to the Mallows metric.

12/07/2017


## 1 Introduction

The seminal work of (Bosq, 2000) has initiated a lot of research on the theory, computational aspects and applications of functional data analysis. The recent monograph of (Horváth and Kokoszka, 2010) and, with a focus on functional time series, the review article of (Kokoszka, 2012) give an overview of the field of research. In this paper, we consider a time series X_0, X_1, … with values in a Hilbert space H, e.g. curves in a function space like L^2[0,1]. In particular, we are interested in functional autoregressions, also known as autoregressive Hilbertian models (ARH). As is well known, a functional autoregressive process of order p, or FAR(p)-process, can easily be written as a FAR(1)-process by an appropriate change of state vector and Hilbert space. Therefore, it essentially suffices to consider the case of order 1, where

 X_{t+1} = Ψ(X_t) + ϵ_{t+1}. (1)

Here, Ψ is a bounded linear operator on H, and the ϵ_t are independent, identically distributed (i.i.d.) innovations. Recently, several new statistical methods for data generated by (1) have been proposed, in particular regarding tests and forecasts. (Kokoszka et al., 2008) have investigated a test of the hypothesis Ψ = 0, i.e. of independence of the data. (Gabrys and Kokoszka, 2007) consider a related problem, a test of independence for general functional time series. (Horváth et al., 2010) propose a CUSUM test for a sudden change in the dependence structure of the data, i.e. for the presence of a point in time where the value of Ψ changes, which has been applied to neurophysiological data by (Franke et al., 2018). Other papers concentrate on the task of forecasting the data. (Didericksen et al., 2012) present an empirical study of forecasting X_{t+1} by ^Ψ_n(X_t), where ^Ψ_n denotes some estimate of Ψ. (Kargin and Onatski, 2008) develop an appropriate theory for a particular kind of estimate ^Ψ_n. Forecasting on the basis of FAR(1) models has also been used in many applications, some of which are discussed below in the context of the bootstrap.
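For intuition, the model (1) can be simulated after discretizing: curves become vectors on a grid, and Ψ becomes a kernel integral operator approximated by a Riemann sum. The kernel, scaling, and noise below are illustrative choices of ours, not taken from the paper; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                                   # grid resolution for curves on [0, 1]
grid = np.linspace(0.0, 1.0, d)

# Psi as an integral operator with a Gaussian kernel, rescaled so that the
# discretized operator norm is 0.5 < 1, which guarantees a stationary solution.
K = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.1)
K *= 0.5 / np.linalg.norm(K / d, 2)

def simulate_far1(n, burn=100):
    """Draw n curves from X_{t+1} = Psi(X_t) + eps_{t+1} after a burn-in."""
    X, out = np.zeros(d), []
    for t in range(n + burn):
        eps = rng.standard_normal(d) / np.sqrt(d)    # crude white-noise curve
        X = K @ X / d + eps                          # Riemann sum for Psi(X)
        if t >= burn:
            out.append(X.copy())
    return np.asarray(out)

X = simulate_far1(200)                   # 200 (approximately) stationary curves
```

The burn-in discards the transient so that the retained curves are close to draws from the stationary distribution.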

Asymptotics for the distribution of estimates of the autoregressive operator Ψ is involved, as pointed out by (Mas, 2007), and as, additionally, it frequently provides decent approximations only for large sample sizes, many applied papers use resampling techniques to derive critical values for tests or prediction intervals for forecasts (compare, e.g., (Shang, 2015) for an overview). The theory for bootstrapping functional data, which provides guidelines under which circumstances bootstrap approximations are valid, is, however, still rather incomplete. For example, only recently have (Paparoditis and Sapatinas, 2015) shown that bootstrap methods work for testing the equality of means and covariance operators in samples of independent functional data.

We are, in particular, interested in the residual-based bootstrap, where resampling is done on the basis of the centered sample residuals ~ϵ_1, …, ~ϵ_n. This kind of bootstrap is quite common in the context of scalar autoregressive and ARMA models (compare (Kreiss and Paparoditis, 2011)) and forms the starting point for the widely applicable autoregressive sieve bootstrap (compare (Kreiss et al., 2011)).

This kind of bootstrap has been investigated in the analogous, but, from the viewpoint of theory, considerably simpler regression situation. (González-Manteiga and Martínez-Calvo, 2011) discuss the linear functional regression model Y_j = Ψ(X_j) + ϵ_j, where Y_j is scalar and Ψ is a linear functional. Treating the regressors X_j as fixed, which is common in the regression context, they prove that the residual-based bootstrap and, for heteroscedastic residuals ϵ_j, the wild bootstrap work. In the same model, (González-Manteiga et al., 2014) apply the pairwise bootstrap and the wild bootstrap to a test of the hypothesis Ψ = 0. (Ferraty et al., 2010) consider the functional regression model with general, not necessarily linear operators Ψ and prove that the residual-based and the wild bootstrap work for nonparametric kernel estimates of Ψ. (Ferraty et al., 2012) extend those results to the case where the response variable is also of functional nature. (Zhu and Politis, 2017) and (Raña et al., 2016) discuss the analogous situation for nonparametric functional autoregressions, considering the regression bootstrap and the wild bootstrap respectively (compare (Franke et al., 2002) for these concepts, their advantages and drawbacks in the scalar case), but not the residual-based bootstrap.

Bootstrap techniques are also quite popular in approximating the distribution of statistics from functional time series data. (Horváth and Kokoszka, 2010) use in their section 14.1 the residual-based bootstrap for evaluating the performance of a test for a change in the autoregressive operator of a FAR(1)-process. (Aneiros-Pérez et al., 2011) consider a nonparametric functional autoregressive model, estimate the autoregression operator nonparametrically by kernel and local linear estimates and apply the residual-based bootstrap to get prediction intervals. (Mingotti et al., 2015) discuss the residual-based bootstrap for the integrated FAR(1)-model, i.e. for the special case Ψ = I, the identity on the Hilbert space. They derive bootstrap approximations of critical bounds for unit root tests where, under the hypothesis, Ψ is known. (Fernández de Castro et al., 2005) investigate, among other bootstrap techniques, a variant of the residual-based bootstrap in forecasting applications. They start from the centered sample residuals, but do not resample directly from their empirical distribution. They first consider a finite principal component decomposition of the sample residuals and then resample the coefficients of this decomposition separately. In a similar spirit, (Hyndman and Shang, 2009) assume from the start that the time series has a finite Karhunen-Loève expansion, which allows one to reduce the functional time series to the finite-dimensional time series of the coefficients. They derive bootstrap prediction intervals based on bootstrap confidence intervals for the scalar coefficient time series. All these papers focus on simulations and applications and do not consider the accompanying theory. This gap is filled for the stationary bootstrap, a variant of the well-known block bootstrap with random block lengths, in an early paper of (Politis and Romano, 1994).

Based on the thesis (Nyarige, 2016), we show in this paper that the residual-based bootstrap is applicable to FAR(1)-processes. The theory has direct practical implications as, e.g., the necessary centering of the lag-1 autocovariance operator in the bootstrap world is different from what one would naively expect, due to the particular nature of the estimate of Ψ. For the proof, we cannot use the approach of (González-Manteiga and Martínez-Calvo, 2011) for the residual-based bootstrap in regression or of (Politis and Romano, 1994) for the stationary bootstrap, both of whom, for the bootstrap data, mimic the proof of asymptotic normality of the corresponding functions of the real data. We have to use different methods, which are similar to the scalar situation presented by (Kreiss and Franke, 1992); more details will be given in section 4.

In section 2 we describe the details of our model including the relevant assumptions, and we introduce some estimates from the literature which we need later on.

In section 3 we present the crucial result that the empirical distribution of the centered sample innovations converges to the distribution of the innovations.

In section 4 we give the details for the residual-based bootstrap and, as an illustration, state that it works for estimates of the mean and of the first two covariance operators of the data.

Finally, technical results and proofs are given in the appendix.

## 2 The Model and the Estimates

In this section, we mainly collect some properties of our model and some estimates which are standard in the literature on functional autoregressions and which we need later on. This also serves to introduce notation.

Let H be a separable Hilbert space with scalar product ⟨·, ·⟩ and norm ∥·∥. As a norm for bounded linear operators from H to H, like Ψ, we use

 ∥Ψ∥L=sup{∥Ψ(x)∥;∥x∥=1}.

A sufficient condition for the existence of a stationary solution of (1) is ∥Ψ∥_L < 1 (compare (Bosq, 2000), section 3.2). We call a linear operator Ψ compact if, for two orthonormal bases (u_j)_{j≥1} and (v_j)_{j≥1} of H and a sequence (γ_j)_{j≥1} of real numbers converging to 0,

 Ψ(x)=∞∑j=1γj⟨x,vj⟩uj,x∈H.

Ψ is, in particular, a Hilbert-Schmidt operator if ∑_{j=1}^∞ γ_j^2 < ∞. The Hilbert-Schmidt norm ∥Ψ∥_S = (∑_{j=1}^∞ γ_j^2)^{1/2} is an upper bound for ∥Ψ∥_L. The Hilbert-Schmidt operators form a Hilbert space themselves with a scalar product given by

 ⟨A,B⟩S=∞∑j=1⟨A(uj),B(uj)⟩

for an arbitrary orthonormal basis (u_j)_{j≥1} of H (compare (Horváth and Kokoszka, 2010), section 2.1).

For the definition of covariance operators, it is convenient to introduce the Kronecker product y ⊗ z of y, z ∈ H, which is a linear operator on H defined by

 (y⊗z)(x)=⟨y,x⟩z,x∈H.

For later reference, we state two rules of calculation which we use repeatedly and which follow immediately from the definition:

 z ⊗ y = (y ⊗ z)^T, A(y) ⊗ B(z) = B (y ⊗ z) A^T, y, z ∈ H, (2)

where A, B are two bounded linear operators on H, and here and in the following A^T denotes the adjoint of the linear operator A, which is characterized by ⟨A(x), y⟩ = ⟨x, A^T(y)⟩ for all x, y ∈ H.
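In a finite-dimensional coordinate representation, y ⊗ z is simply the outer product z yᵀ (rows indexing z), so the definition and the rules (2) become matrix identities. The following small sketch is only meant to fix this convention, with arbitrary random vectors and matrices standing in for elements and operators of H:

```python
import numpy as np

def kron(y, z):
    """Kronecker product y ⊗ z of the text: (y ⊗ z)(x) = <y, x> z."""
    return np.outer(z, y)

rng = np.random.default_rng(1)
y, z, x = rng.standard_normal((3, 4))          # three vectors in R^4
A, B = rng.standard_normal((2, 4, 4))          # two linear operators on R^4

# Defining property: (y ⊗ z)(x) = <y, x> z
assert np.allclose(kron(y, z) @ x, np.dot(y, x) * z)
# Rules (2): z ⊗ y = (y ⊗ z)^T and A(y) ⊗ B(z) = B (y ⊗ z) A^T
assert np.allclose(kron(z, y), kron(y, z).T)
assert np.allclose(kron(A @ y, B @ z), B @ kron(y, z) @ A.T)
```

In the real Hilbert space case sketched here, the adjoint A^T is the matrix transpose.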

We assume throughout the paper that the data X_0, …, X_n are part of a stationary functional autoregression (1) with mean E X_t = 0. Correspondingly, the covariance operator Γ and the lag 1-autocovariance operator C are given by Γ = E(X_t ⊗ X_t) and C = E(X_t ⊗ X_{t+1}). Furthermore, we always assume that 0 is not an eigenvalue of Γ. Then, all eigenvalues λ_1 ≥ λ_2 ≥ … of Γ are positive. Let v_1, v_2, … denote the corresponding orthonormal eigenvectors in H. Γ and C are related to the autoregressive operator Ψ by the analogue of the scalar Yule-Walker equation

 ΨΓ = C. (3)

The mean is estimated as usual by the sample mean

 ¯Xn=1nn−1∑j=0Xj.

As estimates of Γ and C, we follow (Horváth and Kokoszka, 2010) and use the simplified sample versions

 ^Γn=1nn−1∑j=0Xj⊗Xj,^Cn=1nn−1∑j=0Xj⊗Xj+1.

We use the last observation X_n only in estimating C to streamline notation later on. For the same reason, we do not center the X_j around the sample mean in the definitions of ^Γ_n, ^C_n. Under our assumption E X_t = 0, this has an asymptotically negligible effect. All results remain true in the general case of nonzero mean, but then we of course have to center the data around ¯X_n in calculating the covariance estimates.
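With the curves stored as rows of an array and y ⊗ z represented as the outer product z yᵀ, the sample operators ^Γ_n and ^C_n are plain matrix products. A minimal sketch (function name hypothetical), assuming mean-zero data as in the text:

```python
import numpy as np

def cov_operators(X):
    """Empirical covariance and lag-1 autocovariance operators.

    X has shape (n + 1, d): rows are the observed (mean-zero) curves
    X_0, ..., X_n on a grid of d points.  With (y ⊗ z) represented as the
    outer product z y^T, the sums in the text become matrix products.
    """
    n = X.shape[0] - 1
    Gamma = X[:-1].T @ X[:-1] / n        # (1/n) sum_{j<n} X_j ⊗ X_j
    C = X[1:].T @ X[:-1] / n             # (1/n) sum_{j<n} X_j ⊗ X_{j+1}
    return Gamma, C

X = np.random.default_rng(2).standard_normal((101, 5))
Gamma, C = cov_operators(X)
```

Note that ^Γ_n is symmetric by construction, while ^C_n in general is not.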

Let ^λ_1 ≥ ^λ_2 ≥ … and ^v_1, ^v_2, … denote the eigenvalues and eigenvectors of ^Γ_n. Solving the Yule-Walker equation (3) is an ill-conditioned problem, as Γ^{-1} is not a bounded linear operator defined on the whole space H. Therefore, the inversion of ^Γ_n has to be regularized. We use the popular approach via a finite principal component expansion, compare (Bosq, 2000), (Horváth and Kokoszka, 2010), and consider

 ^Γ†n=kn∑j=11^λj^vj⊗^vj,

where k_n → ∞ slowly for n → ∞ to get a consistent estimate of Ψ. Note that ^v_j is an eigenvector of ^Γ_n, and (^v_j ⊗ ^v_j)(x) is the orthogonal projection of x onto the span of the eigenvector ^v_j. Then, we get as an estimate of Ψ

 ^Ψn=^Cn^Γ†n.
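Combining the eigendecomposition of ^Γ_n with the truncated inverse gives the estimate above. A finite-dimensional sketch (names hypothetical, truncation level k standing in for k_n), with curves as rows and y ⊗ z represented as an outer product:

```python
import numpy as np

def estimate_psi(X, k):
    """Principal-component estimate Psi_hat_n = C_hat_n Gamma_hat_n^dagger.

    X: sample curves as rows, shape (n + 1, d); k: number of principal
    components kept in the regularized inverse of Gamma_hat_n.
    """
    n = X.shape[0] - 1
    Gamma = X[:-1].T @ X[:-1] / n
    C = X[1:].T @ X[:-1] / n
    lam, V = np.linalg.eigh(Gamma)                  # eigenvalues ascending
    lam, V = lam[::-1], V[:, ::-1]                  # sort descending
    Gamma_dag = (V[:, :k] / lam[:k]) @ V[:, :k].T   # sum_j (1/lam_j) v_j ⊗ v_j
    return C @ Gamma_dag

Psi_hat = estimate_psi(np.random.default_rng(3).standard_normal((200, 6)), k=3)
```

Keeping only the leading k eigendirections is exactly the regularization: directions with small ^λ_j, where inversion would blow up noise, are discarded.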

## 3 Approximation of the innovation distribution by the empirical measure of sample residuals

The basis for residual-based bootstrapping in scalar regression and autoregression models is the approximability of the innovations by the bootstrap innovations, where the latter are drawn from the centered sample residuals. This is stated in the following theorem in terms of the Mallows metric d_2, which is discussed in detail by (Bickel and Freedman, 1981). For two distributions F, G on H, it is defined by

where the infimum is taken over all H-valued random variables X and Y with marginal distributions F resp. G. By Lemma 8.1 of (Bickel and Freedman, 1981), the infimum is attained.
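For real-valued distributions with finitely many equally weighted atoms, the coupling attaining the infimum pairs the order statistics, so d_2 between two empirical distributions of equal size reduces to sorting both samples. This is a standard property of the one-dimensional Wasserstein-2 distance, sketched here only to illustrate the metric:

```python
import numpy as np

def mallows_d2(x, y):
    """Mallows distance d_2 between the empirical distributions of two
    equally sized scalar samples: the optimal coupling sorts both samples."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    return np.sqrt(np.mean((x - y) ** 2))

# Shifting a distribution by a constant c moves it exactly c in d_2.
assert np.isclose(mallows_d2([0.0, 1.0, 2.0], [3.0, 4.0, 5.0]), 3.0)
# Two samples with the same atoms have distance 0.
assert mallows_d2([1.0, 2.0], [2.0, 1.0]) == 0.0
```

In the Hilbert space setting of the paper no such closed form is available; the definition via couplings above is the operative one.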

By F resp. ^F_n, we denote the distribution of the ϵ_t resp. the empirical distribution of the centered sample residuals ~ϵ_1, …, ~ϵ_n with

 ~ϵj=^ϵj−1nn∑k=1^ϵk,^ϵj=Xj−^Ψn(Xj−1),j=1,…,n. (4)
###### Theorem 3.1.

Let X_0, …, X_n be a sample from a stationary FAR(1) process satisfying
i) ϵ_t, t ≥ 1, i.i.d. with E ϵ_t = 0, E∥ϵ_t∥^4 < ∞,
ii) Ψ is a Hilbert-Schmidt operator with ∥Ψ∥_L < 1,
iii) the eigenvalues λ_1 > λ_2 > … of Γ are all positive and have multiplicity 1.
Then,

 d_2(^F_n, F) →_p 0 for n → ∞,

if k_n → ∞ and, with a_1 = λ_1 − λ_2, a_j = min{λ_{j−1} − λ_j, λ_j − λ_{j+1}}, j ≥ 2,

 (k_n/n) ∑_{j=1}^{k_n} 1/a_j^2 → 0 for n → ∞ and 1/λ_{k_n} = O(n^{1/4}/(log n)^β) for some β > 1/2. (5)

A fourth moment condition like i) is not unexpected, as ^Ψ_n depends on ^Γ_n, ^C_n, which are quadratic in the data and which we want to be √n-consistent estimates. Condition ii) may be relaxed to ∥Ψ^{j_0}∥_L < 1 for some j_0 ≥ 1, as in the work of (Bosq, 2000); we prefer the somewhat stronger assumption to simplify the proofs. The positivity of the eigenvalues in iii) is necessary to exclude singular cases. Assuming dimension 1 of all eigenspaces is standard in the literature on functional autoregressions to circumvent the notational problems with the nonuniqueness of eigenvectors generating a particular eigenspace, but it is not essential for the validity of the results.

The following lemma illustrates the meaning of the rate condition (5) for two particular examples where we impose lower bounds on the a_j, which are related to the rate of decrease of the eigenvalues. If the a_j are allowed to decrease exponentially fast, then k_n may increase at most logarithmically in n. If the a_j may converge to 0 only with a polynomial rate in j, then k_n may increase faster, like a power of n with an appropriate exponent. These kinds of relationships between k_n and the rate of decrease of the eigenvalues are quite plausible regarding the character of k_n as a regularization parameter. In similar situations, (Guillas, 2001) found the same kind of rate conditions in his study of the convergence rate of ^Ψ_n.

###### Lemma 3.1.

a) Let a_j ≥ b a^j for some b > 0, 0 < a < 1. Then, (5) is satisfied for k_n → ∞ if, for all large enough n,

 k_n ≤ (1/(4 log(1/a)) − δ) log n for some δ > 0.

b) Let a_j ≥ b j^{−a} for some b > 0, a ≥ 1. Then, (5) is satisfied for k_n → ∞ if

 k_n = O(n^{1/(4a) − δ}) for some δ > 0.
###### Proof.

a) From the condition of the lemma, we immediately have λ_{k_n} ≥ b a^{k_n}. Using the formula for geometric sums,

 (k_n/n) ∑_{j=1}^{k_n} 1/a_j^2 ≤ (k_n/(n b^2 a^{2k_n})) ∑_{j=1}^{k_n} a^{2(k_n−j)} ≤ (1/(b^2(1−a^2))) (k_n/n) a^{−2k_n} → 0

as n → ∞. Moreover, we have

 1/λ_{k_n} ≤ (1/b) a^{−k_n} ≤ (1/b) n^{1/4}/(log n)^β

for large enough n, as, for some δ′ > 0 and all β > 1/2, again for large enough n,

 k_n log(1/a) ≤ ((1/(4 log(1/a)) − δ) log(1/a)) log n ≤ (1/4 − δ′) log n ≤ (1/4) log n − β log log n.

b) The proof proceeds in a similar manner as for part a), using ∑_{j=1}^{k_n} 1/a_j^2 = O(k_n^{2a+1}) and 1/λ_{k_n} = O(k_n^a). ∎

## 4 The residual-based bootstrap

We start with a sample X_0, …, X_n from a stationary functional autoregression (1). The basic idea of the bootstrap is to replace the data by pseudodata X*_0, …, X*_n, calculated from the given sample, with two features:
i) The distribution of certain functions of the data can be approximated by the conditional distribution of the corresponding functions of the pseudodata given X_0, …, X_n.
ii) The conditional distribution of X*_0, …, X*_n given X_0, …, X_n is known, such that distributional characteristics like moments or quantiles can be numerically calculated by Monte Carlo simulation.

In this section, we generalize the well-known residual-based bootstrap for scalar ARMA-processes, compare, e.g., (Kreiss and Paparoditis, 2011), to the functional setting. Let ~ϵ_1, …, ~ϵ_n be the centered sample residuals given by (4), and let ^F_n be their empirical distribution function. The procedure for generating the pseudodata is the following:

1) Draw bootstrap innovations ϵ*_1, …, ϵ*_n purely randomly from the centered sample residuals:

 pr*(ϵ*_t = ~ϵ_k) = 1/n, t, k = 1, …, n,

such that the ϵ*_t are i.i.d. with distribution ^F_n conditional on the original data. Here and in the following, we write pr* and E* for conditional probabilities and expectations given X_0, …, X_n.

2) We generate the bootstrap data X*_1, …, X*_n recursively by

 X*_t = ^Ψ_n(X*_{t−1}) + ϵ*_t, t = 1, …, n,

for some suitable initial value X*_0.

If n is large, the choice of X*_0 is of minor importance due to the exponentially decreasing memory of our stationary FAR(1)-process. This follows from its representation as an infinite moving average process (e.g. Theorem 13.1 of (Horváth and Kokoszka, 2010)) together with ∥Ψ∥_L < 1 and, for large enough n, ∥^Ψ_n∥_L < 1. Popular choices are X*_0 = X_0, which is used in the simulations of (Nyarige, 2016), or X*_0 = ¯X_n.
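Steps 1) and 2) translate directly into code. The sketch below (names hypothetical) generates one bootstrap replicate from a sample of curves and an estimated operator, starting the recursion from the first observation as one possible initial value:

```python
import numpy as np

def bootstrap_replicate(X, Psi_hat, rng):
    """One residual-based bootstrap sample X*_0, ..., X*_n for a FAR(1) fit.

    X: observed curves as rows, shape (n + 1, d); Psi_hat: estimated
    operator as a (d, d) matrix acting by Psi_hat @ x.
    """
    n = X.shape[0] - 1
    eps_hat = X[1:] - X[:-1] @ Psi_hat.T        # eps_hat_j = X_j - Psi_hat(X_{j-1})
    eps_tilde = eps_hat - eps_hat.mean(axis=0)  # centered residuals, mean zero
    idx = rng.integers(0, n, size=n)            # draw eps*_t i.i.d. from F_hat_n
    Xb = np.empty_like(X)
    Xb[0] = X[0]                                # one choice of initial value X*_0
    for t in range(1, n + 1):
        Xb[t] = Psi_hat @ Xb[t - 1] + eps_tilde[idx[t - 1]]
    return Xb

rng = np.random.default_rng(4)
X = rng.standard_normal((51, 4))
Xb = bootstrap_replicate(X, 0.3 * np.eye(4), rng)
```

Repeating the call many times and recomputing the statistic of interest on each replicate yields the Monte Carlo approximation of its conditional bootstrap distribution.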

Let us remark that the theory of the residual bootstrap has already been studied for the quite similar functional linear regression model Y_j = Ψ(X_j) + ϵ_j with real-valued Y_j and functional regressors X_j by (González-Manteiga and Martínez-Calvo, 2011). Note that the situation there is much simpler, not only due to the lack of dependence, but equally due to the fact that, by construction, X*_j = X_j. Therefore, the regressors in the bootstrap world trivially satisfy exactly the same assumptions as the real regressors, which is quite useful in showing that the same kind of asymptotics holds for functions of the real resp. the bootstrap data. In particular, the critical covariance operator estimate ^Γ_n, for which we need a regularized inverse, and its eigenvalues and eigenfunctions are the same for the real and the bootstrap data, i.e. Theorem 4.2 below is trivially satisfied in the regression context. Obviously, for functional autoregressions, those assertions do not hold, and we cannot use the proof of validity of the bootstrap for the regression case at all, but have to use quite different arguments.

The regression bootstrap and the wild bootstrap, considered by (Zhu and Politis, 2017) resp. (Raña et al., 2016) for nonparametric functional autoregressions, also use X*_{t−1} = X_{t−1}, i.e. they do not mimic the whole time series in the bootstrap world but only the local predictor relationship. So, for proofs, they can rely on the same kind of simpler methods as in the case of regression with independent data.

### 4.1 Bootstrapping the sample mean

In this subsection we investigate the sample mean and its analogue in the bootstrap world

 ¯X_n = (1/n) ∑_{t=0}^{n−1} X_t, ¯X*_n = (1/n) ∑_{t=0}^{n−1} X*_t.

Note that E ¯X_n = 0. In the proof, we show that the analogous property asymptotically also holds for the bootstrap analogue. Therefore, we have to compare the distributions of ¯X_n and ¯X*_n without additional centering. In the next theorem and in the following, we use a common convention and write d_2(X, Y) for the Mallows distance between the marginal distributions of the random variables X resp. Y.

###### Theorem 4.1.

Under the assumptions of Theorem 3.1, and if k_n additionally satisfies

 (1/λ_{k_n}) ∑_{j=1}^{k_n} 1/a_j = O(n^{1/4}/(log n)^β) for some β > 1, (6)

we have for n → ∞

 n d_2^2(¯X_n, ¯X*_n) →_p 0.

The following lemma provides two examples of a sufficient rate condition for k_n, depending on the rate of decrease of the a_j. It is proven in the same manner as Lemma 3.1.

###### Lemma 4.1.

a) Let a_j ≥ b a^j for some b > 0, 0 < a < 1. Then, (5) and (6) are satisfied for k_n → ∞ if, for all large enough n,

 k_n ≤ (1/(8 log(1/a)) − δ) log n for some δ > 0.

b) Let a_j ≥ b j^{−a} for some b > 0, a ≥ 1. Then, (5) and (6) are satisfied for k_n → ∞ if

 k_n = O(n^{1/(4(2a+1)) − δ}) for some δ > 0.

### 4.2 Bootstrapping the covariance operators

In this section, we show that the bootstrap works for the covariance operator estimates ^Γ_n, ^C_n, too. We compare them with their bootstrap analogues

 ^Γ∗n=1nn−1∑t=0X∗t⊗X∗t,^C∗n=1nn−1∑t=0X∗t⊗X∗t+1.

We again consider the Mallows metric, which, for random bounded linear operators A, B, we define with respect to the operator norm ∥·∥_L:

 d_2^2(A, B) = inf_{A′,B′} E∥A′ − B′∥_L^2,

where the infimum is taken over all random operators A′ and B′ with the same marginal distributions as A resp. B.

Note that ^Γ_n is an unbiased estimate of Γ, as E(X_j ⊗ X_j) = Γ for all j. In the bootstrap world, we have an analogous property asymptotically. More precisely, we show in Lemma 5.4 that ^Γ_n is the asymptotically correct centering of ^Γ*_n under E*. Therefore, we have to compare the estimation error ^Γ_n − Γ with ^Γ*_n − ^Γ_n.

###### Theorem 4.2.

Under the assumptions of Theorem 4.1, we have for n → ∞

 n d_2^2(^Γ_n − Γ, ^Γ*_n − ^Γ_n) →_p 0.

The theorem, in particular, implies that √n(^Γ_n − Γ) and, conditional on X_0, …, X_n, √n(^Γ*_n − ^Γ_n) have the same asymptotic distribution by Lemma 8.3 of (Bickel and Freedman, 1981).

For the lag-1 autocovariance operator, we have, again from Lemma 5.4, that the asymptotically correct centering of ^C*_n under E* is ^C_n ^Π_{k_n}, where ^Π_{k_n} denotes the projection onto the span of the first k_n eigenvectors of ^Γ_n. So, this provides the appropriate reference point in the bootstrap world if we want to approximate the distribution of the estimation error ^C_n − C. More precisely,

###### Theorem 4.3.

Under the assumptions of Theorem 4.1, we have for n → ∞

 n d_2^2(^C_n − C, ^C*_n − ^C_n ^Π_{k_n}) →_p 0.

## 5 Appendix - Technical Lemmas and Proofs

Throughout this section,

 Π_p = ∑_{j=1}^p v_j ⊗ v_j, ^Π_p = ∑_{j=1}^p ^v_j ⊗ ^v_j

denote the projections onto the span of the first p orthonormal eigenfunctions v_1, …, v_p resp. empirical eigenfunctions ^v_1, …, ^v_p. As the eigenfunctions are only uniquely determined up to their sign, we have to compare v_j later on with ^c_j ^v_j, where

 ^c_j = sgn(⟨^v_j, v_j⟩).

The first two auxiliary results have been essentially used already by (Mas, 2007). We defer their proofs to the supplement 6.

###### Lemma 5.2.

^Ψ_n − Ψ ^Π_{k_n} = (1/n) S_n ^Γ†_n with S_n = ∑_{j=0}^{n−1} X_j ⊗ ϵ_{j+1}.

Next, we state that the well-known strong consistency of ^Ψ_n as an estimate of Ψ in particular holds under our set of assumptions, and we collect some immediate consequences for reference.

###### Lemma 5.3.

Let ∥Ψ∥_L < ρ < 1. Under the conditions of Theorem 4.1, we have

a) ∥^Ψ_n − Ψ∥_L → 0 a.s. for n → ∞,

b) ∥^Ψ_n∥_L ≤ ρ for all large enough n,

c) ∥^Ψ_n^k − Ψ^k∥_L ≤ k ρ^{k−1} ∥^Ψ_n − Ψ∥_L for all k ≥ 1 and all large enough n.

###### Proof.

a) The result is a slight modification of Theorem 8.7 of (Bosq, 2000), and the proof is deferred to the supplement 6.

b) From a) we immediately have ∥^Ψ_n∥_L ≤ ∥Ψ∥_L + ∥^Ψ_n − Ψ∥_L ≤ ρ for large enough n.

c) First, we note that

 ^Ψ_n^k − Ψ^k = ∑_{j=0}^{k−1} Ψ^j (^Ψ_n − Ψ) ^Ψ_n^{k−1−j}.

The assertion follows, using b) and ∥Ψ∥_L < ρ, from

 ∥∑_{j=0}^{k−1} Ψ^j (^Ψ_n − Ψ) ^Ψ_n^{k−1−j}∥_L ≤ ∥^Ψ_n − Ψ∥_L ∑_{j=0}^{k−1} ∥Ψ∥_L^j ∥^Ψ_n∥_L^{k−1−j} ≤ k ρ^{k−1} ∥^Ψ_n − Ψ∥_L

for all large enough n. ∎

###### Proof.

(Theorem 3.1)
Let F_n denote the empirical distribution of ϵ_1, …, ϵ_n. Then, from Lemma 8.4 of (Bickel and Freedman, 1981), we have d_2(F_n, F) → 0 a.s. Hence it suffices to show that d_2(F_n, ^F_n) →_p 0. Let

 U_0 = ϵ_J, V_0 = ~ϵ_J = ^ϵ_J − (1/n) ∑_{j=1}^n ^ϵ_j,

where J is uniformly (Laplace) distributed on {1, …, n}, i.e. pr(J = k) = 1/n, k = 1, …, n. The random variables U_0, V_0 have marginal distributions F_n respectively ^F_n. As in the proof of Theorem 3.1 of (Kreiss and Franke, 1992), we have from the definition of the Mallows metric

 d_2^2(F_n, ^F_n) ≤ (1/n) ∑_{k=1}^n ∥ϵ_k − ^ϵ_k + (1/n) ∑_{j=1}^n ^ϵ_j∥^2 ≤ (6/n) ∑_{k=1}^n ∥^ϵ_k − ϵ_k∥^2 + (3/n^2) ∥∑_{j=1}^n ϵ_j∥^2.

From the law of large numbers for i.i.d. random variables we have

 (1/n) ∑_{j=1}^n ϵ_j →_p E ϵ_1 = 0, n → ∞,

such that the second term on the right-hand side vanishes for n → ∞. For the first term, we show in the following parts a)-c) of the proof

 ∥^ϵ_t − ϵ_t∥^2 ≤ ∥X_{t−1}∥^2 R_n + 3∥Π_{k_n}(X_{t−1}) − X_{t−1}∥^2,

where R_n does not depend on t, and R_n →_p 0 for n → ∞. Hence, for n → ∞,

 1nn∑t=1∥^ϵt−ϵt∥2≤1nn∑t=1∥Xt−1∥2Rn+31nn∑t=1∥Πkn(Xt−1)−Xt−1∥2→p0,

as, by Corollary 6.2 of (Bosq, 2000), (1/n) ∑_{t=1}^n ∥X_{t−1}∥^2 converges a.s. to E∥X_0∥^2 < ∞, and, by stationarity of the X_t,

 E((1/n) ∑_{t=1}^n ∥Π_{k_n}(X_{t−1}) − X_{t−1}∥^2) = E ∑_{j=k_n+1}^∞ ⟨X_1, v_j⟩^2 → 0

for n → ∞, using a monotone convergence argument and k_n → ∞.

a) By definition of ^ϵ_t, we have

 ∥ϵ_t − ^ϵ_t∥^2 = ∥X_t − Ψ(X_{t−1}) − X_t + ^Ψ_n(X_{t−1})∥^2
 = ∥(^Ψ_n − Ψ ^Π_{k_n})(X_{t−1}) + Ψ(^Π_{k_n} − Π_{k_n})(X_{t−1}) + Ψ(Π_{k_n}(X_{t−1}) − X_{t−1})∥^2
 ≤ 3∥(^Ψ_n − Ψ ^Π_{k_n})(X_{t−1})∥^2 + 3∥(^Π_{k_n} − Π_{k_n})(X_{t−1})∥^2 + 3∥Π_{k_n}(X_{t−1}) − X_{t−1}∥^2,

using ∥Ψ∥_L < 1. We now show that the first and the second term are bounded in the required manner.

b) We split (^Π_{k_n} − Π_{k_n})(X_{t−1}) into two terms:

 (^Π_{k_n} − Π_{k_n})(X_{t−1}) = ∑_{j=1}^{k_n} ⟨X_{t−1}, ^v_j⟩ ^v_j − ∑_{j=1}^{k_n} ⟨X_{t−1}, v_j⟩ v_j
 = ∑_{j=1}^{k_n} ⟨X_{t−1}, ^c_j ^v_j⟩ (^c_j ^v_j − v_j) + ∑_{j=1}^{k_n} ⟨X_{t−1}, ^c_j ^v_j − v_j⟩ v_j.

As the v_j are orthonormal, we have for the second term

 ∥∑_{j=1}^{k_n} ⟨X_{t−1}, ^c_j ^v_j − v_j⟩ v_j∥^2 = ∑_{j=1}^{k_n} ⟨X_{t−1}, ^c_j ^v_j − v_j⟩^2 ≤ ∥X_{t−1}∥^2 ∑_{j=1}^{k_n} ∥^c_j ^v_j − v_j∥^2,

where the right-hand side converges to 0 in probability, as, from the remarks after Theorem 16.1 of (Horváth and Kokoszka, 2010) and (5),

 E ∑_{j=1}^{k_n} ∥^c_j ^v_j − v_j∥^2 = (1/n) ∑_{j=1}^{k_n} (1/a_j^2) O(1) → 0 for n → ∞.

For the first term, we have, as ∥∑_{j=1}^{k_n} y_j∥^2 ≤ k_n ∑_{j=1}^{k_n} ∥y_j∥^2 by the Cauchy-Schwarz inequality,

 ∥∑_{j=1}^{k_n} ⟨X_{t−1}, ^c_j ^v_j⟩ (^c_j ^v_j − v_j)∥^2 ≤ k_n ∑_{j=1}^{k_n} ⟨X_{t−1}, ^c_j ^v_j⟩^2 ∥^c_j ^v_j − v_j∥^2 ≤ ∥X_{t−1}∥^2 k_n ∑_{j=1}^{k_n} ∥^c_j ^v_j − v_j∥^2,

where again the right-hand side converges to 0 in probability as, from above,

 E k_n ∑_{j=1}^{k_n} ∥^c_j ^v_j − v_j∥^2 = (k_n/n) ∑_{j=1}^{k_n} (1/a_j^2) O(1) → 0 for n → ∞.

c) Using Lemma 5.2, we have

 ∥(^Ψ_n − Ψ ^Π_{k_n})(X_{t−1})∥^2 = ∥(1/n) S_n ^Γ†_n(X_{t−1})∥^2 ≤ ∥(1/n) S_n∥_L^2 ∥^Γ†_n(X_{t−1})∥^2 = ∥(1/n) S_n∥_L^2 ∥∑_{j=1}^{k_n} (1/^λ_j) ⟨X_{t−1}, ^v_j⟩ ^v_j∥^2 ≤ ∥(1/n) S_n∥_L^2 ∥X_{t−1}∥^2 ∑_{j=1}^{k_n} 1/^λ_j^2,

using the Cauchy-Schwarz inequality. Moreover, as C = ΨΓ and ∥Ψ∥_L < 1,

 ∥(1/n) S_n∥_L^2 = ∥^C_n − Ψ ^Γ_n∥_L^2 ≤ 2∥^C_n − C∥_L^2 + 2∥Ψ(Γ − ^Γ_n)∥_L^2 ≤ 2∥^C_n − C∥_L^2 + 2∥^Γ_n − Γ∥_L^2 = O_p(1/n),

as, from the remarks after Theorem 16.1 of (Horváth and Kokoszka, 2010), we have ∥^Γ_n − Γ∥_L^2 = O_p(1/n), and, from Theorem 3 of (Mas and Pumo, 2009), analogously ∥^C_n − C∥_L^2 = O_p(1/n).