# A Sheet of Maple to Compute Second-Order Edgeworth Expansions and Related Quantities of any Function of the Mean of an iid Sample of an Absolutely Continuous Distribution

We designed a completely automated Maple (≥ 15) worksheet for deriving Edgeworth and Cornish-Fisher expansions as well as the acceleration constant of the bootstrap bias-corrected and accelerated (BCA) technique. It is valid, for non-parametric or parametric bootstrap, for any (studentized) statistic that is a regular enough function of the mean of an iid sample of an absolutely continuous distribution. This worksheet allowed us to point out an error in the second-order Cornish-Fisher expansion of the studentized mean stated in Theorem 13.5 by Das Gupta in [8, p. 194], as well as to highlight the influence of a slight change of the normalizing constant when computing the second-order Edgeworth and Cornish-Fisher expansions of the t-distribution as stated in Theorem 11.4.2 by Lehmann and Romano in [14, p. 460]. In addition, we successfully applied the worksheet to a complex maximum likelihood estimator as a first step towards deriving more accurate confidence intervals in order to enhance quality controls. The worksheet also features export of Maple results into R code. In addition, we provide R code to plot these expansions as well as their increasing rearrangements. All these supplemental materials are available upon request.


## 1 Introduction

Even though the Edgeworth expansion is often, at first glance, viewed mostly as a theoretical tool, it has steadily gained interest from the practitioner's point of view for several decades. At present, there are too many examples of smart applications of these expansions to list them all. For instance, it has successfully been used to improve confidence intervals, either asymptotic or bootstrap ones, giving sharp insights on, among other properties, their coverage or expected length. Hall (1992) and Shao and Tu (1996) provide a detailed account of this topic. Moreover, the Edgeworth expansion is deeply rooted in some of the more recent outstanding studies dealing with confidence intervals for a binomial proportion (Brown, Cai, and Das Gupta, Brown et al. (2001, 2002)), in an exponential family (Brown, Cai, and Das Gupta, Brown et al. (2003)) and in discrete distributions (Cai, Cai (2005)).

The major drawback of these expansions is their long and tedious computations. Even when dealing with the simplest statistic, the studentized mean, a famous textbook can provide an incorrect formula, as the second-order Cornish-Fisher expansion stated in Theorem 13.5 by Das Gupta in (Das Gupta, 2008, p. 194). Make a slight change in the denominator of the usual raw variance estimator to use instead its unbiased counterpart, by simply replacing $n$ by $n-1$ to get an exactly $t$-distributed statistic in the Gaussian framework, and it leads to different second-order formulas. This accounts for the discrepancies of the Edgeworth expansions for the studentized mean when compared to the Edgeworth expansion of the $t$-distribution, for instance as stated in Theorem 11.4.2 by Lehmann and Romano in (Lehmann and Romano, 2005, p. 460).

For a smooth, or regular enough, model of the mean of an iid sample drawn from an absolutely continuous distribution, first and second-order Edgeworth expansions are valid. Closed, yet complex, formulas were derived by Withers (1983) and Hall (1992) to compute them from the partial derivatives of the model function and the central moments of the underlying distribution. In addition, in such a model that exhibits a valid second-order Edgeworth expansion, there exists a closed formula for the acceleration constant $a$ used in the bootstrap bias-corrected and accelerated (BCA) technique. Were we able to evaluate this formula, this would help to remedy "a disadvantage of the bootstrap BCA", as stated by Shao and Tu (Shao and Tu, 1996, p. 140), which is "that the determination of $a$ is not easy, especially when the problem under consideration is complex". Moreover, in some statistical models, as put forward in two cases in Sections 4 and 5, the acceleration constant is free of the unknown parameters of the underlying distribution and thus can be derived exactly.

All these reasons account for our use of the Maple computation engine Maple Team (2014), which combines numeric computations with symbolic capabilities, in order to design a tool able to automatically and reliably compute Edgeworth expansions as well as Cornish-Fisher ones or the acceleration constant used in the BCA bootstrap technique. It is this tool that we provide as a Maple worksheet.

Maple functions or expressions can be translated into MATLAB code, which in our setting is virtually R R Core Team (2014) code. The R language can then be used for display purposes, as we did with Figures 1 and 2, to apply bootstrap techniques of the boot package Canty and Ripley (2014); Davison and Hinkley (1997), or for any further statistical analysis, for instance to deal with the non-increasing cumulative distribution functions that may result from the above Edgeworth expansions using the rearrangement operator Hardy et al. (1952) implemented in the Rearrangement package Chernozhukov et al. (2010); Graybill et al. (2011). Using Withers (1983), it is easy to derive Cornish-Fisher expansions of any higher order and upgrade our worksheet to have Maple compute them.

Previous work by Finner and Dickhaus (2010) only addresses the computation of Edgeworth expansions up to any order, which, from an (applied) statistical point of view, is of limited use on its own. On the contrary, we not only carry out the Edgeworth expansions but also the Cornish-Fisher ones, the derivation of the BCA acceleration constant, the export to and import in R of the results, and the use of the R language to derive increasing rearrangements of the computed expansions, which adds significant value for statisticians and practitioners.

## 2 Edgeworth-Cornish-Fisher expansions and BCA bootstrap

### 2.1 Introducing the Edgeworth expansion

Let $(Z_n)_{n\geqslant 1}$ be a sequence of random variables with zero mean and unit variance. The cumulant generating function $g_n$ of $Z_n$ is defined by: $g_n(t)=\log \mathrm{E}[\exp(itZ_n)]$, $t\in\mathbb{R}$. The $j$th cumulant of $Z_n$ is given by: $\kappa_{j,n}=i^{-j}g_n^{(j)}(0)$, $j\geqslant 1$. Consequently, we show that:

$$\mathrm{E}[\exp(itZ_n)]=\exp\Bigg(\sum_{j=1}^{+\infty}\kappa_{j,n}\frac{1}{j!}(it)^j\Bigg),\quad t\in\mathbb{R}.\tag{1}$$

The $j$th derivative of the characteristic function of $Z_n$ exists at the point $0$ if and only if $\mathrm{E}|Z_n|^j<\infty$. Then the $j$th cumulant of $Z_n$ exists if and only if $\mathrm{E}|Z_n|^j<\infty$. The entire series expansion and the identification of the coefficients lead to the expression of the first four cumulants, see Hall (1992) for more details:

$$\kappa_{1,n}=\mathrm{E}[Z_n],\quad \kappa_{2,n}=\mathrm{Var}[Z_n],\quad \kappa_{3,n}=\mathrm{E}\big[Z_n-\mathrm{E}[Z_n]\big]^3,\quad \kappa_{4,n}=\mathrm{E}\big[Z_n-\mathrm{E}[Z_n]\big]^4-3\big(\mathrm{Var}[Z_n]\big)^2.$$

Assume that there exist four real numbers $k_{1,2}$, $k_{2,2}$, $k_{3,1}$, $k_{4,1}$ so that the first four cumulants satisfy the following asymptotic expansions:

$$\kappa_{1,n}=n^{-1/2}k_{1,2}+O(n^{-3/2}),\quad \kappa_{2,n}=1+n^{-1}k_{2,2}+O(n^{-3/2}),\quad \kappa_{3,n}=n^{-1/2}k_{3,1}+O(n^{-3/2}),\quad \kappa_{4,n}=n^{-1}k_{4,1}+O(n^{-3/2}).$$

Then Equation 1 becomes:

$$\mathrm{E}[\exp(itZ_n)]=\exp\Big(-\tfrac12 t^2\Big)\Big\{1+n^{-1/2}\Big(k_{1,2}\,it+\tfrac16 k_{3,1}(it)^3\Big)+n^{-1}\Big(\tfrac12 k_{2,2}(it)^2+\tfrac1{24}k_{4,1}(it)^4+\tfrac12\big(k_{1,2}\,it+\tfrac16 k_{3,1}(it)^3\big)^2\Big)+\cdots\Big\}.$$

A Fourier inversion of this expansion leads to:

$$P[Z_n\leqslant x]=\Phi(x)+\big(n^{-1/2}p_1(x)+n^{-1}p_2(x)+\cdots\big)\phi(x),$$

uniformly in $x$, where $\Phi$ is the cumulative distribution function and $\phi$ the probability density function of a standard Gaussian distribution and

$$p_1(x)=-\Big(k_{1,2}+\frac{k_{3,1}}{6}(x^2-1)\Big),\tag{2}$$
$$p_2(x)=-x\Bigg(\frac{k_{2,2}+k_{1,2}^2}{2}+\frac{k_{4,1}+4k_{1,2}k_{3,1}}{24}(x^2-3)+\frac{k_{3,1}^2}{72}(x^4-10x^2+15)\Bigg).\tag{3}$$
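These expressions are easy to evaluate numerically. Below is a minimal Python sketch (our own illustration, not part of the Maple worksheet) for the special case of the standardized mean, for which $k_{1,2}=k_{2,2}=0$, $k_{3,1}$ is the population skewness and $k_{4,1}$ the excess kurtosis; the function name and the Exponential(1) example are our assumptions.

```python
from math import exp, pi, sqrt
from statistics import NormalDist

def edgeworth_cdf_mean(x, n, gamma1, kappa1):
    """Second-order Edgeworth approximation of P[sqrt(n)(Xbar - mu)/sigma <= x],
    i.e. Equations (2)-(3) with k_{1,2} = k_{2,2} = 0, k_{3,1} = gamma1 (skewness)
    and k_{4,1} = kappa1 (excess kurtosis)."""
    phi = exp(-x * x / 2) / sqrt(2 * pi)          # standard Gaussian density
    p1 = -(gamma1 / 6) * (x ** 2 - 1)
    p2 = -x * (kappa1 / 24 * (x ** 2 - 3)
               + gamma1 ** 2 / 72 * (x ** 4 - 10 * x ** 2 + 15))
    return NormalDist().cdf(x) + (p1 / sqrt(n) + p2 / n) * phi

# Exponential(1) population: skewness 2, excess kurtosis 6
print(edgeworth_cdf_mean(0.0, 100, 2.0, 6.0))
```

For an Exponential(1) sample of size 100, the approximation at $x=0$ is about 0.5133 instead of the normal approximation 0.5, reflecting the skewness correction.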

### 2.2 A model valid for Edgeworth expansions

Let $X_1$, $X_2$, $X_3$, … be independent and identically distributed random column $d$-vectors with mean denoted by $\mu=\mathrm{E}[X_1]$, and let $\overline{X}=n^{-1}\sum_{i=1}^n X_i$. Let $A$ be a smooth function satisfying $A(\mu)=0$. Denote the $i$th element of a $d$-vector $x$ by $x^{(i)}$, and set

$$a_{i_1\ldots i_j}=\big(\partial^j/\partial x^{(i_1)}\cdots\partial x^{(i_j)}\big)A(X)\big|_{X=\mu}\tag{4}$$
$$\text{and}\quad \mu_{i_1\ldots i_j}=\mathrm{E}\big[(X-\mu)^{(i_1)}\cdots(X-\mu)^{(i_j)}\big].\tag{5}$$

We will focus on functions $A$ such as:

1. $A(x)=\{g(x)-g(\mu)\}/h(\mu)$, where $g(\mu)$ is the (scalar) parameter estimated by $g(\overline{X})$ and $h(\mu)^2$ is the asymptotic variance of $\sqrt{n}\,g(\overline{X})$;

2. $A(x)=\{g(x)-g(\mu)\}/h(x)$, where $h(\overline{X})$ is an estimate of $h(\mu)$.

We can assume that $h$ is a known function; as stated in (Hall, 1992), a function of the partial derivatives of $g$ and of the components of $x$ fulfills the previous requirements. Case 1 stands for the non-studentized case and case 2 for the studentized one. This "smooth function model" (Hall, 1992) allows one to study problems where $g(\mu)$ is a mean, or a variance, or a ratio of means or variances, or a difference of means or variances, or a correlation coefficient, etc.

###### Example 2.1.

Let $W_1,\ldots,W_n$ be a random sample from a univariate population with mean $m$ and variance $\beta^2$, and if we wished to estimate $m$, then we would take $d=2$, $X_i=(W_i,W_i^2)^{\mathrm{T}}$, $g(x)=x^{(1)}$, $h(x)=\{x^{(2)}-(x^{(1)})^2\}^{1/2}$. This would ensure that $g(\mu)=m$, $g(\overline{X})=\overline{W}$ (the sample mean), $h(\mu)=\beta$, and

$$h^2(\overline{X})=n^{-1}\sum_{i=1}^n X_i^{(2)}-\Big(n^{-1}\sum_{i=1}^n X_i^{(1)}\Big)^2=n^{-1}\sum_{i=1}^n\big(W_i-\overline{W}\big)^2=\hat\beta^2\quad\text{(the sample variance)}.$$
###### Example 2.2.

Let $W_1,\ldots,W_n$ be a random sample from a univariate population with mean $m$ and variance $\beta^2$, and if we wished to estimate $\beta^2$, then we would take $d=4$, $X_i=(W_i,W_i^2,W_i^3,W_i^4)^{\mathrm{T}}$ and $g(x)=x^{(2)}-(x^{(1)})^2$. In this case, we have:

$$g(\mu)=\beta^2,\quad g(\overline{X})=\hat\beta^2,\quad h^2(\mu)=\mathrm{E}[W-m]^4-\beta^4,\quad h^2(\overline{X})=n^{-1}\sum_{i=1}^n\big(W_i-\overline{W}\big)^4-\hat\beta^4.$$

The cases where $g(\mu)$ is a correlation coefficient (a function of five means), or a variance ratio (a function of four means), among others, may be treated similarly.

Since $A(\mu)=0$ and $\overline{X}\to\mu$, a Taylor expansion gives $\sqrt{n}\,A(\overline{X})=S_{nr}+O_p(n^{-r/2})$, where $Z^{(i)}=\sqrt{n}\,(\overline{X}-\mu)^{(i)}$, $O_p(n^{-r/2})$ means that the remainder is of order $n^{-r/2}$ in probability, and

$$S_{nr}=\sum_{i=1}^d a_i Z^{(i)}+\frac{n^{-1/2}}{2}\sum_{i_1=1}^d\sum_{i_2=1}^d a_{i_1i_2}Z^{(i_1)}Z^{(i_2)}+\cdots+\frac{n^{-(r-1)/2}}{r!}\sum_{i_1=1}^d\cdots\sum_{i_r=1}^d a_{i_1\ldots i_r}Z^{(i_1)}\cdots Z^{(i_r)}.$$
###### Theorem 2.3 (Hall (1992), Bertrand and Maumy (2007)).

Assume that the function $A$ has sufficiently many continuous derivatives in a neighborhood of $\mu$, that the moments of $X$ of sufficiently high order are finite, and that the characteristic function $\chi$ of $X$ satisfies Cramér's condition $\limsup_{\|t\|\to\infty}|\chi(t)|<1$. We have:

$$P\big[\sqrt{n}\,A(\overline{X})\leqslant x\big]=\Phi(x)+n^{-1/2}p_1(x)\phi(x)+n^{-1}p_2(x)\phi(x)+O(n^{-3/2}),\tag{6}$$

uniformly in $x$, where $p_1$, $p_2$ are defined by Equations 2 and 3, the values of the coefficients $k_{1,2}$, $k_{2,2}$, $k_{3,1}$ and $k_{4,1}$ being given by:

$$\begin{aligned}
k_{1,2}&=\frac12\sum_{i=1}^d\sum_{j=1}^d a_{ij}\mu_{ij},\\
k_{2,2}&=\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d a_i a_{jk}\mu_{ijk}+\frac12\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d\sum_{l=1}^d a_{ij}a_{kl}\mu_{ik}\mu_{jl}+\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d\sum_{l=1}^d a_i a_{jkl}\mu_{ij}\mu_{kl},\\
k_{3,1}&=\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d a_i a_j a_k\mu_{ijk}+3\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d\sum_{l=1}^d a_i a_j a_{kl}\mu_{ik}\mu_{jl},\\
k_{4,1}&=\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d\sum_{l=1}^d a_i a_j a_k a_l\big(\mu_{ijkl}-3\mu_{ij}\mu_{kl}\big)+12\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d\sum_{l=1}^d\sum_{m=1}^d a_i a_j a_k a_{lm}\mu_{il}\mu_{jkm}\\
&\quad+12\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d\sum_{l=1}^d\sum_{m=1}^d\sum_{o=1}^d a_i a_j a_{kl}a_{mo}\mu_{ik}\mu_{jm}\mu_{lo}+4\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d\sum_{l=1}^d\sum_{m=1}^d\sum_{o=1}^d a_i a_j a_k a_{lmo}\mu_{il}\mu_{jm}\mu_{ko}.
\end{aligned}$$

Using Hall (1992) and Withers (1983), it is easy to derive Edgeworth expansions of any higher order and complete our worksheet to have Maple compute them. One can also try to use the work of Finner and Dickhaus (2010).

###### Remark 2.4.

Edgeworth expansions can also be used to derive asymptotic expansions for cumulative distribution functions. A case in point is the $t$-distribution, see Lehmann and Romano in Formulas 11.73 and 11.74 of Theorem 11.4.2, (Lehmann and Romano, 2005, p. 460) for the second-order Edgeworth expansion of the $t$-distribution. An easy way to retrieve an exactly $t$-distributed statistic in an iid Gaussian framework is to consider $\sqrt{n}(\overline{X}-\mu)/s_c$ with $s_c$ the square root of the unbiased variance estimator (denominator $n-1$). Unfortunately, when applying Theorem 2.3 to the sample mean, we get the expansion of $\sqrt{n}(\overline{X}-\mu)/\hat\sigma$ with $\hat\sigma$ the square root of the raw variance estimator (denominator $n$). This subtle change in the normalizing constant leads to discrepancies between the two second-order terms of the Edgeworth expansion of the $t$-distribution and of the studentized mean. Using asymptotic expansions, we show in the provided Maple worksheet, and report in Section 3.4, how to switch from the expansion of the studentized mean to the one of the $t$-distribution.

### 2.3 Cornish-Fisher expansion

The Cornish-Fisher expansion is the expansion of the $\alpha$-level quantile $w_\alpha$ of an estimator from its Edgeworth expansion:

$$w_\alpha=\inf\big\{x: P[\sqrt{n}\,A(\overline{X})\leqslant x]\geqslant\alpha\big\}.$$

Denote by $z_\alpha$ the $\alpha$-level quantile of a standard Gaussian distribution. Equation 6 leads to an expansion of $w_\alpha$ as a polynomial in $z_\alpha$:

$$w_\alpha=z_\alpha+n^{-1/2}p_{11}(z_\alpha)+n^{-1}p_{21}(z_\alpha)+O(n^{-3/2})$$

where $p_{11}(x)=-p_1(x)$ and $p_{21}(x)=p_1(x)p_1'(x)-p_2(x)-\frac{x}{2}p_1(x)^2$, with $p_1$ and $p_2$ defined by Equations 2 and 3.

Even though the true values of the parameters of the underlying distribution, which may be required to compute the polynomials $p_{11}$ and $p_{21}$ in the Cornish-Fisher expansion when working with non-pivotal statistics, are unknown, Hall (1992) and Shao and Tu (1996) showed that this issue can be solved using bootstrap techniques to ensure the interval still features the second-order probability matching property.

The same remark as above applies: using Withers (1983), it is easy to derive Cornish-Fisher expansions of any higher order and complete our worksheet to have Maple compute them.
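The expansion above is straightforward to evaluate once $p_1$ and $p_2$ are known. The following Python sketch (our own illustration, not taken from the worksheet) does so for the standardized mean, where $k_{1,2}=k_{2,2}=0$, $k_{3,1}$ is the skewness and $k_{4,1}$ the excess kurtosis:

```python
from math import sqrt
from statistics import NormalDist

def cornish_fisher_quantile(alpha, n, gamma1, kappa1):
    """Second-order Cornish-Fisher quantile for the standardized mean,
    using p11 = -p1 and p21 = p1*p1' - p2 - (x/2)*p1^2 evaluated at z_alpha."""
    z = NormalDist().inv_cdf(alpha)
    p1 = -(gamma1 / 6) * (z ** 2 - 1)
    dp1 = -(gamma1 / 3) * z                       # derivative of p1 at z
    p2 = -z * (kappa1 / 24 * (z ** 2 - 3)
               + gamma1 ** 2 / 72 * (z ** 4 - 10 * z ** 2 + 15))
    p11 = -p1
    p21 = p1 * dp1 - p2 - z * p1 ** 2 / 2
    return z + p11 / sqrt(n) + p21 / n

print(cornish_fisher_quantile(0.95, 50, 2.0, 6.0))
```

For a symmetric mesokurtic population the correction terms vanish and the Gaussian quantile $z_\alpha$ is recovered; for a right-skewed population the 95% quantile is shifted upwards, as expected.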

### 2.4 BCA bootstrap

In a framework valid for second-order Edgeworth expansions, bootstrap bias-corrected and accelerated (BCA) intervals are second-order probability matching intervals (Hall, 1992; Shao and Tu, 1996). Yet, even though the simpler bootstrap intervals also feature second-order probability matching, BCA ones are often preferred (Hall, 1992, p. 93 and pp. 133-134) and their definition will now be briefly recalled. Note that the BCA intervals are transformation respecting and thus invariant under re-parametrization.

Let $X_1,\ldots,X_n$ be a random sample from the population studied and $\theta$ a parameter of interest, estimated by $\hat\theta$. Define $\hat H$ as the bootstrap distribution of the resampled estimator $\hat\theta^*$ and, for all $\alpha$ in $(0,1)$, $\hat y_\alpha=\hat H^{-1}(\alpha)$. In this notation, $P^*[\hat\theta^*\leqslant\hat y_\alpha]=\alpha$. The sequence of quantiles would be "centered" empirically if $\hat H(\hat\theta)$ were equal to $1/2$. In general, this identity does not hold, and we might wish to correct for the centering error. Let $\hat m=\Phi^{-1}\big(\hat H(\hat\theta)\big)$ be the estimate of the centering correction or bias correction. A one-sided BCA interval for $\theta$, with nominal coverage $1-\alpha$, is given by $\big(-\infty,\hat y_{\mathrm{BCA},1-\alpha}\big]$ where:

$$\hat y_{\mathrm{BCA},1-\alpha}=\hat H^{-1}\Big(\Phi\big[\hat m+(\hat m+z_{1-\alpha})\big(1-\hat a(\hat m+z_{1-\alpha})\big)^{-1}\big]\Big).$$

The quantity $\hat a$ amounts to a skewness correction. An explicit formula for the acceleration constant $a$, valid for non-parametric as well as parametric bootstrap, is worked out by Hall (Hall, 1992, p. 132):

$$a=\frac{A}{6\sigma^3\sqrt{n}},\quad\text{where } A=\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d a_i a_j a_k\,\mu_{ijk}.$$

An estimate $\hat a$ is also proposed by Hall (Hall, 1992, p. 133):

$$\hat a=\frac{\hat A}{6\hat\sigma^3\sqrt{n}},\quad\text{where }\hat A=\sum_{i=1}^d\sum_{j=1}^d\sum_{k=1}^d \hat a_i\hat a_j\hat a_k\,\hat\mu_{ijk}$$
$$\text{with}\quad \hat a_i=(\partial/\partial x^{(i)})A(X)\big|_{X=\mu}\quad\text{and}\quad \hat\mu_{ijk}=\frac1n\sum_{l=1}^n\big[(X_l-\mu)^{(i)}(X_l-\mu)^{(j)}(X_l-\mu)^{(k)}\big].$$

A two-sided BCA interval with nominal coverage $1-\alpha$ is given by $\big[\hat y_{\mathrm{BCA},\alpha/2},\hat y_{\mathrm{BCA},1-\alpha/2}\big]$.
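The definitions above can be prototyped in a few lines. The following Python sketch (our own illustration, not the paper's worksheet) computes a one-sided BCA bound for a population mean; since $d=1$ in that case, the acceleration reduces to the sample skewness divided by $6\sqrt{n}$:

```python
import random
from math import sqrt
from statistics import NormalDist

def bca_upper(data, alpha=0.05, B=2000, seed=0):
    """One-sided upper BCA bound for the population mean: a sketch using the
    bias correction m = Phi^{-1}(H(theta_hat)) and the acceleration
    a = skewness/(6*sqrt(n)) valid for the mean (d = 1)."""
    rng = random.Random(seed)
    nd = NormalDist()
    n = len(data)
    theta = sum(data) / n
    # acceleration from the sample third standardized central moment
    m2 = sum((x - theta) ** 2 for x in data) / n
    m3 = sum((x - theta) ** 3 for x in data) / n
    a = m3 / m2 ** 1.5 / (6 * sqrt(n))
    # bootstrap distribution H of the resampled mean
    boot = sorted(sum(rng.choices(data, k=n)) / n for _ in range(B))
    count = min(B - 1, max(1, sum(t < theta for t in boot)))
    m = nd.inv_cdf(count / B)                      # bias correction m_hat
    z = nd.inv_cdf(1 - alpha)
    adj = nd.cdf(m + (m + z) / (1 - a * (m + z)))  # corrected level
    return boot[min(B - 1, int(adj * B))]          # H^{-1}(adjusted level)
```

This is only a toy resampling implementation; in practice the paper relies on the R boot package and on the explicit moment formula for $\hat a$.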

## 3 Expanding the mean

Let $\Gamma_1$ be the skewness and $\kappa_1$ the excess kurtosis. All the explicit results in this section were derived from the theoretical ones stated in Section 2 using Maple. For this first application we will follow closely the steps detailed in the file SI_BertrandMaumy_Edgeworth_mean.mw.

### 3.1 Setting the framework

We begin with a restart statement to be sure to start with a fresh session.

> restart;


We first need to set within Maple the framework of Example 2.1 in Section 2.2. We want to highlight that the user only needs to define the function g, see Section 2.2; all the other quantities required for the computation, for both the non-studentized and studentized cases, are automatically derived from this very definition, as long as the function remains called g.

> g:=proc(x1) return x1; end proc;

 g := proc (x1) return x1 end proc

Automatically retrieve the value of the dimension $d$, see Section 2.2, for the non-studentized case (Dim:=1) and for the studentized one (Dims:=2). Enlarge the vector of variables to take into account the higher moments required for the computations related to the studentized statistic.

> Dim:=nops([op(1,op(1,g))]);Dims:=2*Dim;NtoL:=[evaln(x||(1 .. Dims))];

 Dim:=1
 Dims:=2
 NtoL:=[x1,x2]

Even the function h is automatically defined by this code chunk.

> h := proc (x::(seq(name))) option remember; local NNN, derg;
Ψfor NNN in seq(1..(_npassed/2)) do
ΨΨderg[NNN]:=diff(g(op([1..Dim],[_passed])),
ΨΨ[op(NNN,[_passed])]);
Ψend do;
return sqrt(sum(sum(derg[iii]*derg[jjj]*(op(iii+jjj,[_passed])-op(iii,
[_passed])*op(jjj,[_passed])),jjj=1..(_npassed/2)),iii=1..(_npassed/2)));
end proc;

 h := proc (x::seq(name)) local NNN, derg; option remember; for NNN in seq(1 .. (1/2)*nargs) do derg[NNN] := diff(g(op([1 .. Dim], [args])), [op(NNN, [args])]) end do; return sqrt(sum(sum(derg[iii]*derg[jjj]*(op(iii+jjj, [args]) - op(iii, [args])*op(jjj, [args])), jjj = 1 .. (1/2)*nargs), iii = 1 .. (1/2)*nargs)) end proc

We begin with the link between raw moments, from first to fourth order, and centered ones. We assume Gamma1 is the skewness, kappa1 the excess kurtosis and mu5, …, mu8 standardized central moments. These can be tuned for a specific distribution. The variable names Gamma1 and kappa1 had to be used to avoid errors due to confusion with the two built-in R functions Gamma and kappa if the result is exported to the R statistical language.

> EspX:=[]:for ii in [seq(1..nops(NtoL))]
do EspX:=[op(EspX),cat(x,ii)=expand(sum(binomial(ii, j)*’mu||j’*
sigma^j*mu^(ii-j), ’j’ = 0 .. ii))]: end do:
EspX;

 [x1=μ,x2=μ2+σ2]

We reflect some known facts about the raw moments so that Maple can use them when simplifying complex expressions.

> for AA in [seq(1..nops(NtoL))] do if ‘mod‘(AA,2)=1 then
assume(NtoL[AA],real) else assume(NtoL[AA]>0) end if end do;
NtoL;

 [x1,x2]

Several code chunks, not shown, now define various quantities of interest, either for the computation itself or for speeding it up, namely the function g and its partial derivatives, and the function h and its partial derivatives, see Section 2.2. The last preparation step is to compute all the moments that appear in Section 2.2's formulas. We only report the first and the last of the five code chunks required to carry out this work.

> for ii from 1 to 4*Dims do XX||ii:=t^ii-expand(subs([mu||4 = kappa1+3,
mu||3 = Gamma1, mu||2 = 1, mu||1 = 0, mu||0 = 1], sum(binomial(ii, j)*
’mu||j’*sigma^j*mu^(ii-j), ’j’ = 0 .. ii))): end do:

> for ii from 1 to Dims do for jj from 1 to Dims do for kk from 1 to Dims
do for ll from 1 to Dims do Mu||ii||"."||jj||"."||kk||"."||ll:=simplify(
subs([seq(t^(4*Dims-i)=t^(4*Dims-i)-XX||(4*Dims-i),i=0..(4*Dims-1))],
expand(XX||ii*XX||jj*XX||kk*XX||ll))); end do; end do; end do; end do;

> ExpGaussian:=[Gamma1=0,kappa1=0]: for ii from 5 to 4*Dims do ExpGaussian:=
[op(ExpGaussian),mu||ii=(‘mod‘(ii+1, 2))*doublefactorial(ii-1)]: end do:
ExpGaussian;

 [GAMMA1=0,kappa1=0,mu5=0,mu6=15,mu7=0,mu8=105]

### 3.2 Acceleration constant a

The purpose of the following code chunk is to compute the quantity $A$ involved in the derivation of the value of the acceleration constant $a$, see Section 2.4.

> A:=simplify(add(add(add(PartAs([NtoL[i]])*PartAs([NtoL[j]])*
PartAs([NtoL[k]])*Mu||i||"."||j||"."||k,i=1..Dims),j=1..Dims),k=1..Dims)):
A:=simplify(subs(EspX,A),assume=positive);

 A:=Γ1

As a consequence, we can state that the value of the acceleration constant is:

$$a=\frac{\Gamma_1}{6\sqrt{n}}.$$
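As a quick plug-in check of this closed form, the Python sketch below (our own illustration, not part of the worksheet) estimates $a$ by replacing $\Gamma_1$ with the sample standardized third central moment:

```python
from math import sqrt

def acceleration_mean(sample):
    """Plug-in estimate of a = Gamma1/(6*sqrt(n)) for the mean,
    with Gamma1 estimated by the sample standardized third central moment."""
    n = len(sample)
    xbar = sum(sample) / n
    m2 = sum((x - xbar) ** 2 for x in sample) / n
    m3 = sum((x - xbar) ** 3 for x in sample) / n
    return (m3 / m2 ** 1.5) / (6 * sqrt(n))

print(acceleration_mean([0, 0, 1]))
```

For a symmetric sample the estimate is exactly zero, consistent with the fact that the acceleration for the mean only reflects skewness.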

### 3.3 Non-studentized statistic

We now compute the values $k_{1,2}$, $k_{3,1}$, $k_{2,2}$ and $k_{4,1}$ defined in Theorem 2.3.

> k12:=simplify(1/2*(add(add(PartA0([NtoL[i],NtoL[j]])*Mu||i||"."||j,i=
1..Dim),j=1..Dim))):
> k12:=simplify(subs(EspX,k12));

 k12:=0
> k31:=simplify(add(add(add(PartA0([NtoL[i]])*PartA0([NtoL[j]])*PartA0(
[NtoL[k]])*Mu||i||"."||j||"."||k,i=1..Dim),j=1..Dim),k=1..Dim)+3*add(add(add(
add(PartA0([NtoL[i]])*PartA0([NtoL[j]])*PartA0([NtoL[k],NtoL[l]])*
Mu||i||"."||k*Mu||j||"."||l,i=1..Dim),j=1..Dim),k=1..Dim),l=1..Dim)):
> k31:=simplify(subs(EspX,k31));

 k31:=Γ1

It is easy to derive results for a given distribution by replacing moments with their actual values. For instance, in the Gaussian case $\Gamma_1=0$ and $\kappa_1=0$. In the following, we will report only the results for the general case.

> eval(k31,[Gamma1=0,kappa1=0]);

 0
> k22:=simplify(add(add(add(PartA0([NtoL[i]])*PartA0([NtoL[j],NtoL[k]])*
Mu||i||"."||j||"."||k,i=1..Dim),j=1..Dim),k=1..Dim)+1/2*add(add(add(add(
PartA0([NtoL[i],NtoL[j]])*PartA0([NtoL[k],NtoL[l]])*Mu||i||"."||k*
Mu||j||"."||l,i=1..Dim),j=1..Dim),k=1..Dim),l=1..Dim)+add(add(add(add(
PartA0([NtoL[i]])*PartA0([NtoL[j],NtoL[k],NtoL[l]])*Mu||i||"."||j*
Mu||k||"."||l,i=1..Dim),j=1..Dim),k=1..Dim),l=1..Dim)):
> k22:=subs(EspX,k22);

 k22:=0
> k41:=simplify(add(add(add(add(PartA0([NtoL[i]])*PartA0([NtoL[j]])*PartA0(
[NtoL[k]])*PartA0([NtoL[l]])*(Mu||i||"."||j||"."||k||"."||l-3*
Mu||i||"."||j*Mu||k||"."||l),i=1..Dim),j=1..Dim),k=1..Dim),l=1..Dim)+12*
add(add(add(add(add(PartA0([NtoL[i]])*PartA0([NtoL[j]])*PartA0([NtoL[k]])*
PartA0([NtoL[l],NtoL[m]])*Mu||i||"."||l*Mu||j||"."||k||"."||m,i=1..Dim),
j=1..Dim),k=1..Dim),l=1..Dim),m=1..Dim)+12*add(add(add(add(add(add(PartA0(
[NtoL[i]])*PartA0([NtoL[j]])*PartA0([NtoL[k],NtoL[l]])*PartA0([NtoL[m],
NtoL[o]])*(Mu||i||"."||k*Mu||j||"."||m*Mu||l||"."||o),i=1..Dim),j=1..Dim),
k=1..Dim),l=1..Dim),m=1..Dim),o=1..Dim)+4*add(add(add(add(add(add(PartA0(
[NtoL[i]])*PartA0([NtoL[j]])*PartA0([NtoL[k]])*PartA0([NtoL[l],NtoL[m],
NtoL[o]])*(Mu||i||"."||l*Mu||j||"."||m*Mu||k||"."||o),i=1..Dim),
j=1..Dim),k=1..Dim),l=1..Dim),m=1..Dim),o=1..Dim)):
> k41:=subs(EspX,k41);

 k41:=κ1

In order to compute $p_1$ and $p_2$, as well as the Cornish-Fisher polynomials $p_{11}$ and $p_{21}$, we now evaluate Equations 2 and 3.

> P1:=-(k12+1/6*k31*(x^2-1)):
> P1:=simplify(P1);

 P1 := -1/6*Gamma1*(x^2-1)
> P2:=-x*(1/2*(k22+k12^2)+1/24*(k41+4*k12*k31)*(x^2-3)+1/72*k31^2*
(x^4-10*x^2+15)):
> P2:=collect(collect(collect(P2,x),Gamma1),kappa1);

 P2 := (-1/24*x^3+1/8*x)*kappa1+(-1/72*x^5+5/36*x^3-5/24*x)*Gamma1^2
> P11:=-P1:
> P11:=collect(collect(collect(simplify(subs(EspX,P11)),x),Gamma1),kappa1);

 P11 := (1/6*x^2-1/6)*Gamma1
> P21:=P1*diff(P1,x)-P2-x*P1*P1/2:
> P21:=collect(collect(collect(simplify(subs(EspX,P21)),x),Gamma1),kappa1);

 P21 := (1/24*x^3-1/8*x)*kappa1+(-1/18*x^3+5/36*x)*Gamma1^2

### 3.4 Studentized statistic

For the studentized statistic, the same computations yield:

$$k_{1,2,s}=-\frac12\Gamma_1,\quad k_{3,1,s}=-2\Gamma_1,\quad k_{2,2,s}=3+\frac74\Gamma_1^2,\quad k_{4,1,s}=6-2\kappa_1+12\Gamma_1^2.$$
$$p_{1s}(x)=\frac16(2x^2+1)\Gamma_1,\qquad p_{2s}(x)=\Big(\frac1{12}x^3-\frac14x\Big)\kappa_1+\Big(-\frac1{18}x^5-\frac19x^3+\frac16x\Big)\Gamma_1^2-\frac14x^3-\frac34x.$$

These are the same values as those given by Shao and Tu (Shao and Tu, 1996, p. 145).

We now provide the expansion of the $t$-distribution, cf. Remark 2.4. See the worksheet on how we used the polynomials $p_{1s}$ and $p_{2s}$ together with an asymptotic expansion of the standard Gaussian CDF and density to get the polynomials $p_{1s,t\text{-dist}}$ and $p_{2s,t\text{-dist}}$ of the second-order Edgeworth expansion of the $t$-distribution.

$$p_{1s,t\text{-dist}}(x)=p_{1s}(x),\qquad p_{2s,t\text{-dist}}(x)=p_{2s}(x)+\frac12x.$$

These values are the ones given by Lehmann and Romano in Formulas 11.73 and 11.74 of Theorem 11.4.2, (Lehmann and Romano, 2005, p. 460) for the second-order Edgeworth expansion of the $t$-distribution.

$$p_{2s,t\text{-dist}}(x)=\Big(\frac1{12}x^3-\frac14x\Big)\kappa_1+\Big(-\frac1{18}x^5-\frac19x^3+\frac16x\Big)\Gamma_1^2-\frac14x^3-\frac14x.$$
$$p_{11s}(x)=-\frac16(2x^2+1)\Gamma_1,\qquad p_{21s}(x)=\Big(-\frac1{12}x^3+\frac14x\Big)\kappa_1+\Big(\frac5{18}x^3-\frac5{72}x\Big)\Gamma_1^2+\frac14x^3+\frac34x.$$

This corrects the value of the second-order Cornish-Fisher polynomial $p_{21s}$ given in Theorem 13.5 in (Das Gupta, 2008, p. 194). It is the same value as the one found by Shao and Tu in (Shao and Tu, 1996, p. 145) and by Finner and Dickhaus in Finner and Dickhaus (2010).
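These corrected polynomials are easy to evaluate numerically. The Python sketch below (our own illustration, not part of the worksheet) forms the resulting second-order Cornish-Fisher quantile of the studentized mean, with the skewness and excess kurtosis passed as `g1` and `k1`:

```python
from statistics import NormalDist

def cf_quantile_studentized(alpha, n, g1, k1):
    """w_alpha ~ z + n^{-1/2} p11s(z) + n^{-1} p21s(z), using the corrected
    polynomials of Section 3.4 (g1 = skewness, k1 = excess kurtosis)."""
    z = NormalDist().inv_cdf(alpha)
    p11s = -(1 / 6) * (2 * z ** 2 + 1) * g1
    p21s = ((-z ** 3 / 12 + z / 4) * k1
            + (5 * z ** 3 / 18 - 5 * z / 72) * g1 ** 2
            + z ** 3 / 4 + 3 * z / 4)
    return z + p11s / n ** 0.5 + p21s / n
```

In the Gaussian case (`g1 = k1 = 0`) the quantile exceeds $z_\alpha$ by $n^{-1}(z_\alpha^3/4+3z_\alpha/4)$, the heavier-tail correction induced by studentization with the raw variance estimator.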

### 3.5 Export results from Maple to R

> fd := fopen("A_mean.R", WRITE, TEXT);

 fd:=2
> cg1 := CodeGeneration[’Matlab’](A, resultname = ’A’, output = string);

 cg1:=‘‘A=Gamma1;''
> fprintf(fd, cg1);

 12
> fclose(fd);


## 4 Expanding the variance

All the explicit results in this section were derived from the theoretical ones stated in Section 2 using Maple. The steps and details of the computations are available in the file SI_BertrandMaumy_Edgeworth_variance.mw. To demonstrate how simple it is to switch from the mean expansion setting of Section 3 to another one, namely the variance expansion, the only code chunk that must be modified is the following one:

> g:=proc(x1,x2) return x2-x1^2; end proc;

 g := proc (x1, x2) return x2 - x1^2 end proc

### 4.1 Acceleration constant a

$$A=-\frac{-\mu_6+3\kappa_1+7}{(\kappa_1+2)^{3/2}}.$$

In the Gaussian case, we find the same value as Shao and Tu in (Shao and Tu, 1996, p. 138) by substituting the moments with the well-known values $\mu_6=15$ and $\kappa_1=0$.

$$a=\frac{1}{6\sqrt{n}}\cdot\frac{15-3\cdot0-7}{(0+2)^{3/2}}=\frac{\sqrt{2}}{3\sqrt{n}}.$$
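As a sanity check, this reduction can be reproduced numerically. The small Python sketch below (ours, not part of the worksheet) evaluates $a\sqrt{n}=(\mu_6-3\kappa_1-7)/\big(6(\kappa_1+2)^{3/2}\big)$ from the general formula:

```python
from math import sqrt

def acceleration_variance(mu6, kappa1):
    """a*sqrt(n) for the variance statistic:
    (mu6 - 3*kappa1 - 7)/(6*(kappa1 + 2)^(3/2))."""
    return (mu6 - 3 * kappa1 - 7) / (6 * (kappa1 + 2) ** 1.5)

# Gaussian case: mu6 = 15, kappa1 = 0 -> sqrt(2)/3
print(acceleration_variance(15, 0))
```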

### 4.2 Non-studentized statistic

$$k_{1,2}=-\frac{1}{\sqrt{\kappa_1+2}},\qquad k_{3,1}=-\frac{-\mu_6+3\kappa_1+7+6\Gamma_1^2}{(\kappa_1+2)^{3/2}}.$$

This is the same value as the one provided by Hall in (Hall, 1992, pp. 75-76).

$$k_{2,2}=-\frac{2(\kappa_1+1)}{\kappa_1+2},\qquad k_{4,1}=\frac{3-24\Gamma_1\mu_5-4\mu_6+\mu_8-3\kappa_1^2+96\Gamma_1^2-6\kappa_1}{(\kappa_1+2)^{2}}.$$

The values of $p_1$, $p_2$, $p_{11}$ and $p_{21}$ can be found in the dedicated Maple worksheet.

### 4.3 Studentized statistic

$$k_{1,2,s}=\frac12\,\frac{\kappa_1+3-\mu_6+4\Gamma_1^2}{(\kappa_1+2)^{3/2}},\qquad k_{3,1,s}=\frac{2(-\mu_6+3\kappa_1+3\Gamma_1^2+7)}{(\kappa_1+2)^{3/2}}.$$

Same value as the one found by Hall in (Hall, 1992, p. 76).

$$k_{2,2,s}=\frac14\big(20\kappa_1^3+163\kappa_1^2+56\Gamma_1^2\kappa_1+32\Gamma_1\kappa_1\mu_5-38\mu_6\kappa_1+450\kappa_1-90\mu_6+7\mu_6^2+415+112\Gamma_1^4+168\Gamma_1^2+64\Gamma_1\mu_5-56\Gamma_1^2\mu_6\big)/(\kappa_1+2)^3,$$
$$k_{4,1,s}=2\big(6\kappa_1^3+84\kappa_1^2+297\kappa_1+24\Gamma_1\kappa_1\mu_5-32\mu_6\kappa_1+54\Gamma_1^2\kappa_1-\kappa_1\mu_8-2\mu_8+312+72\Gamma_1^4-42\Gamma_1^2\mu_6+6\mu_6^2+48\Gamma_1\mu_5+150\Gamma_1^2-76\mu_6\big)/(\kappa_1+2)^3.$$

The values of $p_{1s}$, $p_{2s}$, $p_{11s}$ and $p_{21s}$ can be found in the dedicated Maple worksheet.

## 5 A more complex example

All the explicit results in this section were derived from the theoretical ones stated in Section 2 using Maple. The steps and details of the computations are available in the file SI_BertrandMaumy_Edgeworth_ML_normal_std_sym.mw.

### 5.1 Introduction

One burning issue that commonly arises in official quality laboratories or in industry across various sectors, namely chemistry, pharmacy, food processing, etc., is the estimation of the proportion of experimental results that lie within two limits. For instance, in official quality laboratories, $[L,U]$ is the range of the acceptable values and $\pi$ is the proportion of valid units.

Let $X_i$, where $1\leqslant i\leqslant n$, be a sequence of independent experimental results that follow a Gaussian distribution with mean $\mu$ and variance $\sigma^2$. We denote the sample mean and the sample variance respectively by:

$$\overline{X}=\frac1n\sum_{i=1}^n X_i,\qquad s_c^2=\frac1{n-1}\sum_{i=1}^n\big(X_i-\overline{X}\big)^2.$$

We introduce the proportion $\pi$ defined by:

$$\pi:=\pi(\mu,\sigma^2)=P[L\leqslant X\leqslant U]=\Phi\Big(\frac{U-\mu}{\sigma}\Big)-\Phi\Big(\frac{L-\mu}{\sigma}\Big),\tag{7}$$

where $\Phi$ stands for the cumulative distribution function of the standard Gaussian distribution, and $L$, $U$, with $L<U$, are the "limits of acceptance", whose values depend on the active regulatory norm.

Using Equation 7, one can put forward the maximum likelihood (ML) point estimator of $\pi$: