A Uniform-in-P Edgeworth Expansion under Weak Cramér Conditions

This paper provides a finite sample bound for the error term in the Edgeworth expansion for a sum of independent, potentially discrete, nonlattice random vectors, using a uniform-in-P version of the weaker Cramér condition in Angst and Poly (2017). This finite sample bound is used to derive a bound for the error term in the Edgeworth expansion that is uniform over the joint distributions P of the random vectors, and eventually to derive a higher order expansion of resampling-based distributions in a unifying way. As an application, we derive a uniform-in-P Edgeworth expansion of bootstrap distributions and of randomized subsampling distributions, when the joint distribution of the original sample is absolutely continuous with respect to Lebesgue measure.


1. Introduction

Suppose that $X_1, \dots, X_n$ is a triangular array of independent random vectors taking values in $\mathbb{R}^d$ and having mean zero, and let $\mathcal{P}$ denote the collection of the joint distributions $P$ of $(X_1, \dots, X_n)$. We are interested in an Edgeworth expansion of the distribution of the normalized sum of $X_1, \dots, X_n$ that holds uniformly over $P \in \mathcal{P}$.
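To fix notation for what follows (the symbols are our choice of a standard normalization, since the paper's own display is not reproduced above), write

$$V_n = \frac{1}{n} \sum_{i=1}^n \mathrm{Cov}_P(X_i), \qquad S_n = \frac{1}{\sqrt{n}} \sum_{i=1}^n V_n^{-1/2} X_i,$$

so that $S_n$ has identity covariance under each $P$, and the goal is an expansion of $P\{S_n \in B\}$ with an error bound uniform over $P \in \mathcal{P}$.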

The Edgeworth expansion has long received attention in the literature; see Bhattacharya and Rao (2010) for a formal review of the results. The validity of the classical Edgeworth expansion is obtained under the Cramér condition, which requires that the average of the characteristic functions over the sample units stay bounded away from 1 in absolute value as the functions are evaluated at points of arbitrarily large norm. The Cramér condition fails when the random variable has a support consisting of a finite number of points, and hence does not apply to resampling-based distributions such as the bootstrap.
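In its classical form, the Cramér condition on a random vector $X$ with distribution $P$ reads

$$\limsup_{\|t\| \to \infty} \big| \mathbf{E}_P\, e^{i t^\top X} \big| < 1,$$

with the averaged version for a triangular array requiring the same for $n^{-1} \sum_{i=1}^n |\mathbf{E}_P e^{i t^\top X_i}|$. When $X$ is supported on finitely many points, its characteristic function is a trigonometric polynomial, hence almost periodic, and its modulus returns arbitrarily close to 1 along points of arbitrarily large norm; this is why the condition fails for resampling distributions, which are supported on finitely many atoms.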

Footnote 1: Despite the failure of the Cramér condition, the Edgeworth expansion for lattice distributions is well known; see Bhattacharya and Rao (2010), Chapter 5. See Kolassa and McCullagh (1990) for a general result on Edgeworth expansions for lattice distributions. Booth, Hall, and Wood (1994) provide the Edgeworth expansion for discrete yet non-lattice distributions. A standard approach to deal with this issue is to derive an Edgeworth expansion separately for the bootstrap distribution by using an expansion of the empirical characteristic functions (e.g., Singh (1981) and Hall (1992)).

The main contribution of this paper is to provide a finite sample bound for the remainder term in the Edgeworth expansion for a sum of independent random vectors. Using the finite sample bound, one can immediately obtain a uniform-in-P Edgeworth expansion, where the error bound for the remainder term in the expansion is uniform over a collection of probabilities. A notable feature of the Edgeworth expansion is that it admits random vectors with discrete, non-lattice distributions. From this result, as shown in the paper, a uniform-in-P Edgeworth expansion for various resampling-based discrete distributions follows as a corollary. To obtain such an expansion, this paper uses a uniform-in-P version of the weak Cramér conditions introduced by Angst and Poly (2017) and obtains a finite sample bound for the error term in the Edgeworth expansion by following the proofs of Theorems 20.1 and 20.6 of Bhattacharya and Rao (2010) and Theorem 4.3 of Angst and Poly (2017). This paper's finite sample bound reveals that we obtain a uniform-in-P Edgeworth expansion whenever we have a uniform-in-P bound for the moments of the order used in the Edgeworth expansion.

A uniform-in-P asymptotic approximation is naturally required in a testing set-up with a composite null hypothesis. By definition, a composite null hypothesis involves a collection of probabilities, and the size of a test in this case is its maximal rejection probability over all the probabilities admitted under the null hypothesis. Asymptotic control of size requires a uniform-in-P asymptotic approximation of the test statistic's distribution under the null hypothesis. One can apply the same notion to the control of the coverage probabilities of confidence intervals as well.

As for uniform-in-P Gaussian approximation, one can obtain the result immediately from a Berry-Esseen bound once appropriate moments are bounded uniformly in P. It is worth noting that uniform-in-P Gaussian approximation of empirical processes was studied by Giné and Zinn (1991) and Sheehy and Wellner (1992). There has been growing interest in uniform-in-P inference in various nonstandard set-ups in the econometrics literature, in connection with the finite sample stability of inference (see Mikusheva (2007), Linton, Song, and Whang (2010), and Andrews and Shi (2013), among others).

When a test based on a resampling procedure exhibits higher order asymptotic refinement properties, the uniform-in-P Edgeworth expansion can be used to establish higher order asymptotic size control for the test. A related work is Hall and Jing (1995), who used a uniform-in-P Edgeworth expansion to study the asymptotic behavior of confidence intervals based on a studentized t statistic. They used a certain smoothness condition on the distributions of the random vectors which excludes resampling-based distributions. A recent paper by the author (Song (2018)) uses this paper's result to compare two different testing procedures based on randomized subsampling inference, when observations are locally dependent with unknown dependence ordering.

2. A Uniform-in-P Edgeworth Expansion

2.1. Uniform-in-P Weak Cramér Conditions

Angst and Poly (2017) (hereafter, AP) introduced what they call a weak Cramér condition and a mean weak Cramér condition, both weaker than the classical Cramér condition. They showed that through this weakening one can obtain a classical Edgeworth expansion which accommodates the distributions of discrete random variables that arise in resampling methods in statistics. In this paper, we introduce uniform-in-P versions of their conditions. Let us prepare notation. Let $\|\cdot\|$ denote the Euclidean norm on $\mathbb{R}^d$. The following definition modifies the weak Cramér condition introduced by AP into a condition for a collection of probabilities. (Footnote 2: The original definition of the weak Cramér condition in AP specifies the bounds in (1) and (2) in a more general form. Since a simple choice of the constants suffices in our applications, we use this choice throughout the paper.)

Definition 2.1.

(i) Given $\kappa > 0$, a collection $\mathcal{P}$ of the distributions of a random vector $X$ taking values in $\mathbb{R}^d$ and having characteristic function $\varphi_P$ under $P \in \mathcal{P}$ is said to satisfy the weak Cramér condition with parameter $\kappa$ if, for all $t \in \mathbb{R}^d$ with $\|t\| \ge \kappa$,

(1)

(ii) Given $\kappa > 0$, a collection $\mathcal{P}$ of the joint distributions of a triangular array of random vectors $X_1, \dots, X_n$, with each $X_i$ taking values in $\mathbb{R}^d$ and having characteristic function $\varphi_{i,P}$ under $P \in \mathcal{P}$, is said to satisfy the mean weak Cramér condition with parameter $\kappa$ if, for all $t \in \mathbb{R}^d$ with $\|t\| \ge \kappa$,

(2)
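While the exact constants in (1) and (2) follow the footnote above, the general shape of a weak Cramér bound in AP allows the characteristic function to approach 1, but only at a controlled polynomial rate; a form of this kind (recalled here for orientation, with constants $\kappa, p, R > 0$, not reproducing the paper's exact display) is

$$\sup_{P \in \mathcal{P}} \big| \varphi_P(t) \big| \;\le\; 1 - \frac{\kappa}{\|t\|^{p}} \qquad \text{for all } \|t\| \ge R,$$

with the mean version in (2) imposing the analogous bound on the average $n^{-1} \sum_{i=1}^n |\varphi_{i,P}(t)|$ over the array. In contrast, the classical Cramér condition requires the left-hand side to stay below a fixed level strictly less than 1.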

As noted by AP, the weak Cramér condition is useful for dealing with distributions obtained from a resampling procedure. To clarify this in our context, let us introduce some notation. For any integers $n, d \ge 1$ and any given sequence of vectors $x_1, \dots, x_n$, where each $x_i \in \mathbb{R}^d$, let

$$\mu_n = \frac{1}{n} \sum_{i=1}^n \delta_{x_i},$$

where $\delta_x$ denotes the Dirac measure at $x$. Let $N_n(\kappa)$ be the collection of $(x_1, \dots, x_n)$'s such that $\mu_n$ does not satisfy the weak Cramér condition with parameter $\kappa$. The following proposition is due to AP (see Proposition 2.4 there).

Proposition 2.1 (Angst and Poly (2017)).

Suppose that $n \ge 1$ and $d \ge 1$. Then

$$\lambda^{nd}\Big( \bigcap_{\kappa > 0} N_n(\kappa) \Big) = 0,$$

where $\lambda^{nd}$ is Lebesgue measure on $\mathbb{R}^{nd}$.

Therefore, the weak Cramér condition is generically satisfied by $\mu_n$ with some parameter $\kappa > 0$ for almost all $(x_1, \dots, x_n)$'s. Angst and Poly (2017) give the proof only for the case $d = 1$; the proof for the general case $d \ge 1$ is provided in the appendix of this paper.

Let us illustrate how this proposition can be used to establish uniform-in-P inference based on resampling. Let $X_1, \dots, X_n$ be a triangular array of random vectors taking values in $\mathbb{R}^d$. Let the collection of the joint distributions of $(X_1, \dots, X_n)$ be denoted by $\mathcal{P}$, and assume that each $P \in \mathcal{P}$ is dominated by Lebesgue measure on $\mathbb{R}^{nd}$. Let $X_1^*, \dots, X_n^*$ be i.i.d. draws from the empirical measure of $X_1, \dots, X_n$. Let $\mathcal{G}_n$ be the $\sigma$-field generated by $X_1, \dots, X_n$, let the conditional distribution of the normalized sum of $X_1^*, \dots, X_n^*$ given $\mathcal{G}_n$ be denoted by $\mathbb{F}_n^*$, and let $\Psi_n^*$ be a signed conditional measure which we would like to show approximates $\mathbb{F}_n^*$. In particular, we are interested in showing that, for a collection $\mathcal{C}$ of convex subsets of $\mathbb{R}^d$ and for some decreasing sequence $r_n \downarrow 0$,

(3) $\sup_{P \in \mathcal{P}} P\Big\{ \sup_{C \in \mathcal{C}} \big| \mathbb{F}_n^*(C) - \Psi_n^*(C) \big| > r_n \Big\} \to 0$

as $n \to \infty$. For example, with an appropriate choice of $\mathcal{C}$, $\mathbb{F}_n^*$ yields the conditional CDF of a quadratic form of the normalized bootstrap sum.

Now, let us see how Proposition 2.1 can be useful here. Suppose that, for each outcome for which the empirical measure of $X_1, \dots, X_n$ satisfies the weak Cramér condition with some parameter $\kappa > 0$, we have the bound

(4) $\sup_{C \in \mathcal{C}} \big| \mathbb{F}_n^*(C) - \Psi_n^*(C) \big| \le g_n(X_1, \dots, X_n)$

for some sequence of Borel measurable functions $g_n$, such that

(5) $\sup_{P \in \mathcal{P}} \mathbf{E}_P\big[ g_n(X_1, \dots, X_n) \big] = o(r_n)$

as $n \to \infty$, where $\mathbf{E}_P$ denotes the expectation under $P$. Now, for each $n$, let $A_n$ be the event that the empirical measure of $X_1, \dots, X_n$ satisfies the weak Cramér condition with some parameter $\kappa > 0$. We bound the supremum in (3) by splitting the probability along $A_n$ and its complement. By Proposition 2.1 and the absolute continuity of each $P \in \mathcal{P}$, the second term is zero. Using Markov's inequality, (4), and (5), the leading term vanishes as $n \to \infty$, establishing (3). Thus, for the uniform-in-P approximation of $\mathbb{F}_n^*$ by $\Psi_n^*$, it is useful to obtain an explicit finite sample bound as in (4). For such a result, a uniform-in-P Edgeworth expansion is helpful.
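Schematically, the splitting reads (in the notation assumed above):

$$\sup_{P \in \mathcal{P}} P\Big\{ \sup_{C \in \mathcal{C}} \big| \mathbb{F}_n^*(C) - \Psi_n^*(C) \big| > r_n \Big\} \;\le\; \sup_{P \in \mathcal{P}} P\big\{ g_n(X_1, \dots, X_n) > r_n \big\} + \sup_{P \in \mathcal{P}} P(A_n^c) \;\le\; \sup_{P \in \mathcal{P}} \frac{\mathbf{E}_P[g_n(X_1, \dots, X_n)]}{r_n} + 0,$$

where the first inequality uses (4) on the event $A_n$, the second term vanishes because $A_n^c$ is contained in a Lebesgue-null set (Proposition 2.1) and each $P$ is dominated by Lebesgue measure, and the last expression tends to zero by (5).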

2.2. A Uniform-in-P Edgeworth Expansion under Weak Cramér Conditions

In this section, we present the main result that gives a finite sample bound for the error term in the Edgeworth expansion. Let us prepare notation first. Given each multi-index , with being a nonnegative integer, we let be the average of the -th cumulant of . For each , let be a polynomial in as given in (7.3) of Bhattacharya and Rao (2010) (BR, hereafter). This polynomial has degree , the smallest order of the terms in the polynomial is , and the coefficients in the polynomial involve only ’s with . (Lemma 7.2 of BR, p.52.) Following the convention, we define the derivative operators as follows:

for each multi-index $\nu$, $D^\nu$ denotes differentiation $\nu_j$ times with respect to the $j$-th coordinate, $j = 1, \dots, d$. For each $P \in \mathcal{P}$, let $Q_{n,P}$ be the distribution of $S_n$ under $P$, and define a signed measure $\Psi_{n,s}$ as follows: for any Borel set $B$, $\Psi_{n,s}(B)$ is the integral over $B$ of the Edgeworth density built from the polynomials $\tilde P_r$ applied to

$$\phi(z) = (2\pi)^{-d/2} \exp\big( -\|z\|^2 / 2 \big),$$

the density of the standard normal distribution on $\mathbb{R}^d$. For each $s \ge 2$, define the average $s$-th absolute moment of $X_1, \dots, X_n$. Let $\varphi_{n,P}$ be the characteristic function of $S_n$ under $P$, and define the associated truncated quantities, indexed by $t \in \mathbb{R}^d$ and $s$, that enter the bound below. We introduce notation for the modulus of continuity of functions: for any Borel measurable function $f$ on $\mathbb{R}^d$ and any measure $\mu$, we define, for $x \in \mathbb{R}^d$ and $\varepsilon > 0$, the local oscillation $\omega_f(x, \varepsilon)$ of $f$ over the $\varepsilon$-open ball $B(x, \varepsilon)$ in $\mathbb{R}^d$ around $x$, together with its average $\bar\omega_f(\varepsilon; \mu)$ under $\mu$. For any measurable function $f$ and for $s \ge 0$, we define the weighted sup-norm $M_s(f)$. Finally, for any constant $a > 0$ and integers $s$ and $n$, define the threshold quantities appearing in the statements below.
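For reference, the standard forms of these objects in BR (the notation is assumed on our part) are the local oscillation and its average,

$$\omega_f(x, \varepsilon) = \sup\big\{ |f(y) - f(z)| : y, z \in B(x, \varepsilon) \big\}, \qquad \bar\omega_f(\varepsilon; \mu) = \int \omega_f(x, \varepsilon) \, d\mu(x),$$

the weighted sup-norm

$$M_s(f) = \sup_{x \in \mathbb{R}^d} \frac{|f(x)|}{1 + \|x\|^{s}},$$

and the Edgeworth signed measure

$$\Psi_{n,s}(B) = \sum_{r=0}^{s-2} n^{-r/2} \int_B \big( \tilde P_r(-D)\, \phi \big)(z) \, dz,$$

whose leading term ($\tilde P_0 = 1$) is the standard normal measure.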

The theorem below is the main result of this paper; it is a modification of Theorem 4.3 of AP, with the bound made explicit in finite samples.

Theorem 2.1.

Suppose that for each $n$, the average covariance matrix $V_n$ is positive definite, and that there exist an integer $s \ge 3$ and a constant $\kappa > 0$ such that the following two conditions hold.

(i) There exists a number $\bar\rho > 0$ such that the moment bound

(6)

holds, bounding the average $s$-th absolute moments of $X_1, \dots, X_n$ by $\bar\rho$ uniformly over $P \in \mathcal{P}$.

(ii) $\mathcal{P}$ satisfies the mean weak Cramér condition with parameter $\kappa$ for all $n$, with $\kappa$ satisfying, for $s$ and $\bar\rho$ in (i),

(7)

Then, for any Borel measurable function $f$ on $\mathbb{R}^d$ such that $M_s(f) < \infty$, and for all $n$ such that

(8)

there exist constants $C_1$ and $C_2$ such that, for all $P \in \mathcal{P}$ and all $\varepsilon > 0$, the difference $\int f \, d(Q_{n,P} - \Psi_{n,s})$ obeys a finite sample bound uniform over $P \in \mathcal{P}$, where $Q_{n,P}$ denotes the distribution of $S_n$ under $P$, and $C_1$ and $C_2$ depend only on $s$, $d$, $\kappa$, $\bar\rho$, and the constants in (7) and (8).
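While the exact display involves the quantities defined above, its shape follows Theorem 20.1 of BR; a representative form (our sketch, with the oscillation term and constants as placeholders) is

$$\sup_{P \in \mathcal{P}} \Big| \int f \, d\big( Q_{n,P} - \Psi_{n,s} \big) \Big| \;\le\; C_1\, M_s(f)\, n^{-(s-2)/2} \;+\; C_2\, \bar\omega_f(\varepsilon; \Psi_{n,s}),$$

in which the first term collects the moment and mean weak Cramér contributions and the second accounts for the smoothness of $f$ through its average modulus of continuity.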

The bound above depends on $P$ only through the constants in (i) and (ii). By choosing a sequence $\varepsilon_n \downarrow 0$ and replacing $\varepsilon$ by $\varepsilon_n$, we obtain a bound for the error term in the Edgeworth expansion that is uniform in $P \in \mathcal{P}$ as $n \to \infty$. Thus it is revealed that the uniform-in-P Edgeworth expansion is essentially obtained by strengthening the mean weak Cramér condition to the same condition holding uniformly in $P$, and by strengthening the moment condition to one that is uniform in $P$ as in (6).

When we take $f$ to be the indicator function of a convex subset of $\mathbb{R}^d$, we obtain the following corollary, which is a version of Corollary 20.15 of BR with the finite sample bound made explicit here.

Corollary 2.1.

Suppose that the conditions of Theorem 2.1 hold, and let $\mathcal{C}$ be the collection of convex subsets of $\mathbb{R}^d$. Then, there exist constants $C_1$ and $C_2$, depending only on $s$, $d$, $\kappa$, and $\bar\rho$, such that for all $n$ satisfying the conditions of Theorem 2.1 and all $P \in \mathcal{P}$, the supremum $\sup_{C \in \mathcal{C}} |Q_{n,P}(C) - \Psi_{n,s}(C)|$ obeys the corresponding finite sample bound.

The last term in that bound follows because, for the indicator of a convex set, the average modulus of continuity under the standard normal is at most a constant multiple of $\varepsilon$, with the constant depending only on $d$ (see Corollary 3.2 of BR, p. 24).
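The convexity step rests on the standard Gaussian boundary estimate: for any convex $C \subseteq \mathbb{R}^d$ and $\varepsilon > 0$,

$$\Phi\big( (\partial C)^{\varepsilon} \big) \;\le\; c_d\, \varepsilon,$$

where $(\partial C)^{\varepsilon}$ is the $\varepsilon$-neighborhood of the boundary of $C$ and $c_d$ depends only on $d$. Since $\omega_{1_C}(x, \varepsilon)$ is nonzero only when $B(x, \varepsilon)$ meets both $C$ and its complement, that is, only for $x \in (\partial C)^{\varepsilon}$, integrating against $\Phi$ gives $\bar\omega_{1_C}(\varepsilon; \Phi) \le c_d\, \varepsilon$.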

The following result, for the case of indicator functions of sets defined by polynomials, is useful for establishing an Edgeworth expansion of a studentized sample mean. Later we use this result to establish a uniform-in-P Edgeworth expansion for the bootstrap distribution of the studentized sample mean. Let us define the class of sets as follows: for a polynomial $\pi$ on $\mathbb{R}^d$ and $u \in \mathbb{R}$,

$$B_{\pi, u} = \big\{ x \in \mathbb{R}^d : \pi(x) \le u \big\},$$

and let $\mathcal{B}_\pi = \{ B_{\pi, u} : u \in \mathbb{R} \}$. Then, we obtain the following result as a corollary from Theorem 2.1.

Corollary 2.2.

Suppose that the conditions of Theorem 2.1 hold. Then, for any polynomial $\pi$ on $\mathbb{R}^d$, there exist constants $C_1$ and $C_2$, depending only on $s$, $d$, $\kappa$, $\bar\rho$, and the degree of $\pi$, such that for all $n$ satisfying the conditions of Theorem 2.1 and all $P \in \mathcal{P}$, the supremum $\sup_{B \in \mathcal{B}_\pi} |Q_{n,P}(B) - \Psi_{n,s}(B)|$ obeys the corresponding finite sample bound.

The result immediately follows from Theorem 2.1 above, after we apply Lemma 5.3 of Hall (1992), p. 254, to bound the average modulus of continuity $\bar\omega_{1_B}(\varepsilon; \Phi)$ for indicators of the sets $B_{\pi, u}$.

3. Applications

3.1. Nonparametric Bootstrap Distributions

Let us illustrate how the previous results can be applied to obtain a uniform-in-P Edgeworth expansion of a bootstrap distribution of a sum of independent random variables when the random variables are continuous. The Edgeworth expansion of a bootstrap distribution for i.i.d. random variables is well known in the literature (Hall (1992)). The result in this paper is distinct for two reasons. First, the Edgeworth expansion is uniform in P, where P runs over the distributions of the random variables. Second, the Edgeworth expansion follows directly from the Edgeworth expansion for a sum of i.i.d. random variables, due to the use of the weak Cramér condition. On the other hand, this paper's result assumes that the random variables are continuous, whereas the standard bootstrap result requires only the classical Cramér condition for the random variables. This is due to our reliance on Proposition 2.1.

Suppose that $X_1, \dots, X_n$ is a triangular array of continuous random variables which are i.i.d. draws from a common distribution. Let us assume that this distribution belongs to a collection $\mathcal{P}$ of distributions. Let $X_1^*, \dots, X_n^*$ be the bootstrap sample drawn with replacement from the empirical distribution of $X_1, \dots, X_n$. Define the sample variance

$$\hat\sigma^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \bar X)^2,$$

where $\bar X = n^{-1} \sum_{i=1}^n X_i$. Then, we are interested in the uniform-in-P Edgeworth expansion of the bootstrap distribution of the studentized statistic

$$T_n^* = \frac{\sqrt{n}\,(\bar X^* - \bar X)}{\hat\sigma^*},$$

where $\bar X^* = n^{-1} \sum_{i=1}^n X_i^*$ and $\hat\sigma^{*2} = n^{-1} \sum_{i=1}^n (X_i^* - \bar X^*)^2$. For this, we adopt the approach in Chapter 5 of Hall (1992), and in deriving the finite sample bound, we apply Corollary 2.2. Let $\mathcal{G}_n$ denote the $\sigma$-field generated by $X_1, \dots, X_n$. It is not hard to see that we can write

(10) $T_n^* = \sqrt{n}\, A(\bar Z^*),$

where $\bar Z^* = n^{-1} \sum_{i=1}^n Z_i^*$ for random vectors $Z_i^*$ constructed from the bootstrap sample, and $A$ is a $\mathcal{G}_n$-measurable smooth function vanishing at the conditional mean $\mathbf{E}[\bar Z^* \mid \mathcal{G}_n]$.
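As an illustration of the representation (10) for the studentized mean (the choice of $Z_i^*$ and $A$ below is one concrete possibility consistent with Hall's smooth function model; the paper's own parametrization may differ), take

$$Z_i^* = \big( X_i^*, \, X_i^{*2} \big), \qquad \bar Z^* = \frac{1}{n} \sum_{i=1}^n Z_i^*, \qquad A(z_1, z_2) = \frac{z_1 - \bar X}{\sqrt{z_2 - z_1^2}},$$

so that $\sqrt{n}\, A(\bar Z^*) = \sqrt{n}(\bar X^* - \bar X)/\hat\sigma^* = T_n^*$. The function $A$ is $\mathcal{G}_n$-measurable (through $\bar X$), vanishes at $\mathbf{E}[\bar Z^* \mid \mathcal{G}_n] = (\bar X, \, n^{-1} \sum_{i=1}^n X_i^2)$, and is smooth in a neighborhood of that point whenever the sample variance is positive.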

More generally, suppose that we have a triangular array of random vectors $Z_1, \dots, Z_n$ taking values in $\mathbb{R}^d$ and the bootstrap sample $Z_1^*, \dots, Z_n^*$, and let the $\sigma$-field generated by $Z_1, \dots, Z_n$ be $\mathcal{G}_n$. Our focus is on the Edgeworth expansion of the bootstrap distribution of a test statistic of the form in (10) for a generic function $A$ which is $s$ times continuously differentiable at a point $z_0$, with $A(z_0) = 0$, and is $\mathcal{G}_n$-measurable.

The bootstrap distribution of $T_n^*$ (defined in (10)) is the conditional distribution of $T_n^*$ given $\mathcal{G}_n$, which we denote by $Q_n^*$. Let $\hat\chi_{n,\nu}$ be the $\nu$-th cumulant of the conditional distribution of $Z_i^*$ given $\mathcal{G}_n$. Define, for each Borel set $B$, the empirical Edgeworth signed measure $\hat\Psi_{n,s}(B)$ by replacing the average cumulants in $\Psi_{n,s}$ with their bootstrap counterparts $\hat\chi_{n,\nu}$.

Our purpose is to obtain a finite sample bound for the error term in the approximation of the bootstrap measure $Q_n^*$ by an Edgeworth expansion. We define the following three events: for constants $c_1, c_2 > 0$, an event on which the relevant empirical moments of $Z_1, \dots, Z_n$ are bounded by a given constant, an event on which all the eigenvalues of the empirical covariance matrix of $Z_1, \dots, Z_n$ lie in $[c_1, c_2]$, and an event on which the empirical distribution of $Z_1, \dots, Z_n$ satisfies the weak Cramér condition with a given parameter. Let $E_n$ denote the intersection of these three events.

Theorem 3.1.

Suppose that the $Z_i$'s are i.i.d., and each is absolutely continuous with respect to Lebesgue measure. Let $s \ge 3$ be a given integer and $\kappa > 0$ a constant. Then, there exist constants $C_1$, $C_2$, and $n_0$ such that, for all $n \ge n_0$ satisfying (8) and for all $P \in \mathcal{P}$, on the event $E_n$,

(11)

holds, where (11) is the analogue of the bound in Theorem 2.1 with the empirical Edgeworth measure $\hat\Psi_{n,s}$ in place of $\Psi_{n,s}$, and $C_1$ and $C_2$ depend only on $s$, $d$, and the constants defining $E_n$.

As for the probability of the complement of the event $E_n$, we can find a finite sample bound using standard arguments, and show that it vanishes uniformly over $P \in \mathcal{P}$ when there is a uniform bound for the population moments and uniform lower and upper bounds for the eigenvalues of the population covariance matrix. Details can be furnished following the same arguments as in Chapter 5 of Hall (1992).
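A minimal sketch of the moment part of such a bound, assuming a uniform moment bound $\sup_{P \in \mathcal{P}} \mathbf{E}_P \|Z_1\|^{s} \le \bar\rho$ (our notation): by Markov's inequality,

$$P\Big\{ \frac{1}{n} \sum_{i=1}^n \|Z_i\|^{s} > K \Big\} \;\le\; \frac{\bar\rho}{K},$$

uniformly over $P$, and the eigenvalue event can be handled similarly through Chebyshev bounds on the entries of the empirical covariance matrix, since the eigenvalues depend on those entries in a Lipschitz way.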

3.2. Randomized Subsampling Distributions

Let $X_1, \dots, X_n$ be a triangular array of continuous random vectors taking values in $\mathbb{R}^d$, having joint distribution $P$. Let $\mathcal{G}_n$ denote the $\sigma$-field generated by $X_1, \dots, X_n$. Let $f_n$ be a $\mathcal{G}_n$-measurable map, and let $\Pi_n$ be the collection of the permutations on $\{1, \dots, n\}$. Then, a test statistic built from randomized subsampling is based on the following form of a sum of conditionally i.i.d. random vectors: for each random vector of permutations $(\pi_1, \dots, \pi_R)$,

$$S_R^\pi = \frac{1}{\sqrt{R}} \sum_{r=1}^R f_n\big( X_{\pi_r(1)}, \dots, X_{\pi_r(b)} \big),$$

where $\pi_r \in \Pi_n$ and $b$ is the subsample size. Suppose that our test statistic is centered, so that the conditional mean of each summand given $\mathcal{G}_n$ is zero.

Our main focus in this section is an Edgeworth expansion of the conditional distribution of $S_R^\pi$ given $\mathcal{G}_n$, where the $\pi_r$'s are drawn i.i.d. from the uniform distribution on $\Pi_n$.

Let us enumerate the distinct values $a_1, \dots, a_m$ of $f_n(X_{\pi(1)}, \dots, X_{\pi(b)})$ as $\pi$ ranges over $\Pi_n$, with $m$ denoting the cardinality of the set on the left hand side, and let the empirical measure of $a_1, \dots, a_m$ be denoted by $\hat\mu_n$; i.e., $\hat\mu_n$ is a discrete distribution which gives a point mass of $1/m$ at $a_j$ for each $j = 1, \dots, m$. Then the summands $f_n(X_{\pi_r(1)}, \dots, X_{\pi_r(b)})$, $r = 1, \dots, R$, are i.i.d. draws from the distribution $\hat\mu_n$. Let the conditional distribution of $S_R^\pi$ given $\mathcal{G}_n$ be denoted by $Q_n^\pi$. We call $Q_n^\pi$ a randomized subsampling distribution.
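This construction places us exactly in the setting of Section 2.1: conditional on the data, $Q_n^\pi$ is the distribution of a normalized sum of i.i.d. draws from the purely atomic measure

$$\hat\mu_n = \frac{1}{m} \sum_{j=1}^m \delta_{a_j},$$

so the classical Cramér condition fails, while Proposition 2.1 guarantees that, for Lebesgue-almost every configuration $(a_1, \dots, a_m)$, $\hat\mu_n$ satisfies the weak Cramér condition with some parameter $\kappa > 0$. Under conditions ensuring that $(a_1, \dots, a_m)$ inherits a continuous distribution from the data (cf. Assumption 3.2 below), the exceptional null set is avoided almost surely, and Theorem 2.1 can be applied conditionally on $\mathcal{G}_n$.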

The randomized subsampling distribution is different from the subsampling distribution proposed in Politis and Romano (1994), in the sense that, conditional on $\mathcal{G}_n$, the distribution is that of a sum of i.i.d. random variables from an empirical distribution. In this sense, it is closer to the $m$ out of $n$ bootstrap distribution. The $m$ out of $n$ bootstrap focuses on the conditional distribution of a statistic of a resample given the data, with the resample size playing the role of $b$ here (Bickel, Götze, and van Zwet (1997)). In contrast, our focus is on the conditional distribution of $S_R^\pi$. A closely related concept is the bag of little bootstraps (BLB) recently proposed by Kleiner, Talwalkar, Sarkar, and Jordan (2014). Unlike the randomized subsampling distribution, the BLB method focuses on an average of bootstrap CDFs (or their functionals) computed on subsets of the data as an approximation of the CDF of the original statistic. The main motivation for BLB is to reduce the computational costs, which are substantial when $n$ is large and the computation of the statistic is complex. The use of the randomized subsampling distribution is proposed by Song (2018) as a device for inference on a parameter when the observations are locally dependent but the dependence ordering is not known to the researcher.

Given each multi-index $\nu$, with each component a nonnegative integer, we let $\hat\chi_{n,\nu}$ be the average of the $\nu$-th cumulants of the conditionally i.i.d. summands, and define $\hat V_n$ to be the corresponding conditional variance matrix. As before, for each $r \ge 1$, let $\tilde P_r$ be a polynomial as given in (7.3) of BR, p. 52, now built from the conditional cumulants. Let us define a signed measure $\Psi_n^\pi$ on the Borel $\sigma$-field of $\mathbb{R}^d$ as follows: for each Borel set $B$, $\Psi_n^\pi(B)$ integrates the corresponding Edgeworth density over $B$, where $\phi_{\hat V_n}$ denotes the multivariate normal density with mean zero and variance matrix $\hat V_n$. The measure $\Psi_n^\pi$ is the Edgeworth expansion of $Q_n^\pi$. Our first focus is on finding a finite sample bound for the error in the approximation of $Q_n^\pi$ by $\Psi_n^\pi$. For this, let us use the following assumptions.
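Under the notation assumed here, $\Psi_n^\pi$ takes the same form as $\Psi_{n,s}$ in Section 2.2, with $R$ in place of $n$, conditional cumulants in place of population ones, and $\phi_{\hat V_n}$ in place of the standard normal density:

$$\Psi_n^\pi(B) = \sum_{r=0}^{s-2} R^{-r/2} \int_B \big( \tilde P_r(-D)\, \phi_{\hat V_n} \big)(z) \, dz, \qquad B \subseteq \mathbb{R}^d \text{ Borel}.$$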

Assumption 3.1.

There exist an integer $s \ge 3$ and a constant $\bar\rho > 0$ such that, for all $n$ and all $P \in \mathcal{P}$, the average $s$-th absolute moments of the conditionally i.i.d. summands are bounded by $\bar\rho$.

Assumption 3.2.

The $nd$-dimensional random vector $(X_1, \dots, X_n)$ is absolutely continuous with respect to Lebesgue measure.

Assumption 3.1 requires a uniform moment bound. Assumption 3.2 is stronger than the classical Cramér condition for the original sampling distribution; the latter condition is often used for proving higher order refinements for bootstrap t tests (see Hall (1992)). Using Corollary 2.1, we provide an explicit finite sample bound for the error term, because we need to obtain a bound that is uniform over the distributions of the $X_i$'s.

Theorem 3.2.

Suppose that Assumptions 3.1 and 3.2 hold. Then there exist a constant $C$ and an integer $n_0$ such that, for any $n \ge n_0$, the approximation error of $Q_n^\pi$ by $\Psi_n^\pi$, measured uniformly over $\mathcal{C}$, obeys a finite sample bound, where $\mathcal{C}$ denotes the set of the convex subsets of $\mathbb{R}^d$, the constants depend only on $s$ and $d$, and the bound involves a nonnegative random variable whose mean is controlled uniformly over $P \in \mathcal{P}$.
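An indicative shape of this bound (our sketch, with $\Lambda_n$ a placeholder for the paper's random factor) is

$$\sup_{C' \in \mathcal{C}} \big| Q_n^\pi(C') - \Psi_n^\pi(C') \big| \;\le\; C\, R^{-(s-2)/2}\, \Lambda_n, \qquad \sup_{P \in \mathcal{P}} \mathbf{E}_P[\Lambda_n] < \infty,$$

which, combined with the Markov-inequality argument of Section 2.1, yields a uniform-in-P higher order approximation of the randomized subsampling distribution.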

Our next result is related to the modulus of continuity of the randomized subsampling distribution