
# Estimator of Prediction Error Based on Approximate Message Passing for Penalized Linear Regression

We propose an estimator of prediction error using an approximate message passing (AMP) algorithm that can be applied to a broad range of sparse penalties. Following Stein's lemma, the estimator of the generalized degrees of freedom, which is a key quantity for the construction of the estimator of the prediction error, is calculated at the AMP fixed point. The resulting form of the AMP-based estimator does not depend on the penalty function, and its value can be further improved by considering the correlation between predictors. The proposed estimator is asymptotically unbiased when the components of the predictors and response variables are independently generated according to a Gaussian distribution. We examine the behaviour of the estimator for real data under nonconvex sparse penalties, where Akaike's information criterion does not correspond to an unbiased estimator of the prediction error. The model selected by the proposed estimator is close to that which minimizes the true prediction error.

07/02/2021


## 1 Introduction

In recent decades, variable selection using sparse penalties, referred to here as sparse estimation, has become an attractive estimation scheme [1, 2, 3]. Sparse estimation is mathematically formulated as the minimization of an estimating function associated with the sparse penalties. In this paper, we concentrate on the linear regression problem with an arbitrary sparse regularization. Let $y \in \mathbb{R}^M$ and $A \in \mathbb{R}^{M \times N}$ be a response vector and predictor matrix, respectively, where each column of $A$ corresponds to a predictor. We denote the regression coefficient to be estimated as $x \in \mathbb{R}^N$, and the problem considered here can be formulated as

    \min_{x} \frac{1}{2} \|y - Ax\|_2^2 + J(x; \eta),   (1)

where $J(x; \eta)$ is the sparse regularization that enhances the zero components in $x$, and $\eta$ is a set of regularization parameters. In the classical variable selection problem, the variables are estimated by a maximum likelihood method under a constraint on the support of the variables used in the model. In sparse estimation, the support is determined by the regularization parameter $\eta$; hence, the adjustment of $\eta$ is regarded as the selection of a model, and is crucial in sparse estimation.

In general, the regularization parameter is determined based on model selection criteria. One criterion is the prediction performance, which is measured by the prediction error defined for test data generated according to the true distribution. Following this criterion, the model with the regularization parameter that minimizes the prediction error provides an appropriate description of the data. However, the exact computation of the prediction error is impossible, because we do not know the true distribution. Therefore, model selection is generally implemented using estimators of the prediction error, rather than the prediction error itself. For maximum likelihood estimation, Akaike's information criterion (AIC) is an unbiased estimator of the prediction error, and is widely used for model selection when concentrating on the prediction of unknown data [4]. However, for sparse estimation, AIC is generally not an unbiased estimator of the prediction error. In such cases, a $C_p$-type criterion, which evaluates the prediction error using generalized degrees of freedom (GDF) [5, 6], is useful when the variance of the data is known. However, the exact computation of GDF also requires the true probability distribution of the data, which means that an estimator must be constructed. In sparse estimation, GDF and its unbiased estimator can be derived for certain regularizations. A well-known result, obtained using the least absolute shrinkage and selection operator (LASSO), is that AIC corresponds to the unbiased estimator of the prediction error [7], a nontrivial fact that has attracted the interest of statisticians. Recently, sparse estimation using nonconvex regularizations has been studied with the aim of achieving a more compact representation of the data than LASSO [8, 9]. Hence, model selection methods that can be applied to such regularizations are required.

In this paper, we propose a numerical method for the construction of estimators of GDF and the prediction error based on approximate message passing (AMP). The derived form of GDF does not depend on the regularization, although the property of the AMP fixed point is regularization-dependent. The estimator is asymptotically unbiased when the predictor matrix and response variables are independently and identically distributed (i.i.d.) Gaussian, which is indicated by showing the correspondence between AMP and the replica method [10]. In this case, we can confirm that AIC is the unbiased estimator for the $\ell_1$ penalty, which is consistent with an earlier study [7]. We apply our estimator to real data and show that it adequately emulates model selection according to the prediction error.

The remainder of this paper is organized as follows. In Sec. 2, we summarize the background material that supports the following discussion. The problem setting considered in this paper is explained in Sec. 3, and the AMP algorithm for the penalized linear regression problem is introduced in Sec. 4. The estimator of GDF using the AMP fixed point is derived in Sec. 5, and its asymptotic property for Gaussian random data and predictors is studied in Sec. 6 using the replica method. In Sec. 7, the proposed estimator is applied to real data, and the improvement it offers is quantified in Sec. 8. Finally, Sec. 9 presents the conclusions to this study and a discussion of the results.

## 2 Background

### 2.1 Prediction error and generalized degrees of freedom

We denote the estimated regression coefficient obtained by solving (1) and its fit to the response variable as $\hat{x}(y)$ and $\hat{y}(y) = A\hat{x}(y)$, respectively. The prediction performance of the model under the given response variable is measured by the prediction error

    \epsilon_{\rm pre}(y) = \frac{1}{M} \mathbb{E}_z\left[\|z - \hat{y}(y)\|_2^2\right],   (2)

where $z \in \mathbb{R}^M$ is test data whose statistical property is equivalent to that of $y$. The value of $\eta$ that minimizes the prediction error gives an appropriate description of the data $y$. We aim to approximate the prediction error by constructing its estimator. The simplest estimator is the training error, which is defined by

    \epsilon_{\rm train}(y) = \frac{1}{M} \|y - \hat{y}(y)\|_2^2,   (3)

but this underestimates the prediction error because of the correlation between $y$ and $\hat{y}(y)$. We consider the case in which the mean and covariance matrix of the response vector are given by

    \mathbb{E}_y[y] = m_y, \qquad \mathbb{E}_y\left[(y - m_y)(y - m_y)^{\rm T}\right] = \sigma_y^2 I_M,   (4)

where ${}^{\rm T}$ denotes the matrix transpose, and $I_M$ is the $M$-dimensional identity matrix. In this case, the prediction error and the training error satisfy the relationship

    \epsilon_{\rm pre}(y) = \epsilon_{\rm train}(y) + 2\sigma_y^2\, \mathrm{df}(y),   (5)

where $\mathrm{df}(y)$ is the generalized degrees of freedom (GDF) defined by [11]

    \mathrm{df} = \frac{1}{M\sigma_y^2} \sum_{\mu=1}^M \mathrm{cov}\left[y_\mu, \hat{y}_\mu(y)\right],   (6)

and $\mathrm{cov}[a, b] = \mathbb{E}[(a - \mathbb{E}[a])(b - \mathbb{E}[b])]$. The expression (5) is known as the $C_p$ criterion for model selection [5, 6].
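The relationship (5), understood in expectation over $y$, follows in a few lines from (2)–(4) and (6). Writing $y = m_y + \xi$ and $z = m_y + \zeta$ with independent, zero-mean $\xi$ and $\zeta$ of covariance $\sigma_y^2 I_M$:

```latex
\begin{aligned}
\mathbb{E}_y\left[\epsilon_{\rm pre}(y)\right]
 &= \frac{1}{M}\,\mathbb{E}_{y,z}\!\left[\|\zeta + m_y - \hat{y}(y)\|_2^2\right]
  = \sigma_y^2 + \frac{1}{M}\,\mathbb{E}_y\!\left[\|m_y-\hat{y}(y)\|_2^2\right],\\
\mathbb{E}_y\left[\epsilon_{\rm train}(y)\right]
 &= \frac{1}{M}\,\mathbb{E}_{y}\!\left[\|\xi + m_y - \hat{y}(y)\|_2^2\right]
  = \sigma_y^2 + \frac{1}{M}\,\mathbb{E}_y\!\left[\|m_y-\hat{y}(y)\|_2^2\right]
  - \frac{2}{M}\sum_{\mu=1}^M \mathrm{cov}\!\left[y_\mu,\hat{y}_\mu(y)\right].
\end{aligned}
```

Subtracting the two lines and using (6) gives $\mathbb{E}_y[\epsilon_{\rm pre}] - \mathbb{E}_y[\epsilon_{\rm train}] = 2\sigma_y^2\,\mathrm{df}$; the cross term survives only in the training error because $\zeta$ is independent of $\hat{y}(y)$ while $\xi$ is not.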

### 2.2 Stein’s lemma

The covariance in (6) is not observable, in the sense that the expectation with respect to the true distribution is required. Hence, estimators of df, denoted by $\widehat{\mathrm{df}}$, are used instead of df for approximating the prediction error. An idea for the construction of the estimator appears in Stein's unbiased risk estimate (SURE), known as Stein's lemma [12]. Following Stein's lemma, the component-wise covariance between the response variable and its fit is given by

    \mathrm{cov}\left[y_\mu, \hat{y}_\mu(y)\right] = \sigma_y^2\, \mathbb{E}_y\left[\frac{\partial \hat{y}_\mu(y)}{\partial y_\mu}\right].   (7)

Equation (7) means that an unbiased estimator of GDF is given by

    \widehat{\mathrm{df}}(y) = \frac{1}{M} \sum_{\mu=1}^M \frac{\partial \hat{y}_\mu(y)}{\partial y_\mu}.   (8)

Note that (7) is exact when the response variables are i.i.d. Gaussian random variables. Equation (8) is an observable quantity, and hence one of the estimators of the prediction error can be defined by

    \hat{\epsilon}_{\rm pre}(y) = \epsilon_{\rm train}(y) + \frac{2\sigma_y^2}{M} \sum_{\mu=1}^M \frac{\partial \hat{y}_\mu(y)}{\partial y_\mu}.   (9)

Equation (9) is an unbiased estimator of the prediction error when the response variables are i.i.d. Gaussian, but the unbiasedness is not generally guaranteed for other distributions of the response variables.

For the maximum likelihood method, the unbiased estimator of GDF is given by the number of parameters used in the model, which is known as Akaike's information criterion (AIC) [4]:

    \mathrm{AIC}(y) = \epsilon_{\rm train}(y) + \frac{2}{M} \sigma_y^2 \|\hat{x}(y)\|_0.   (10)

As defined here, AIC has been rescaled compared with its conventional definition, but this does not affect the following discussion. For penalized regression problems, when the fit of the response variable is represented as $\hat{y}(y) = Hy$ using a matrix $H$, it can be shown that GDF corresponds to $\mathrm{tr}(H)/M$ [13], which is widely used in ridge regression. Furthermore, many studies have derived SURE in problems where (8) is easily computed [14, 15, 16].
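As a concrete check of the linear-smoother case, the sketch below (hypothetical sizes and regularization strength; numpy assumed available) evaluates the GDF estimator (8) for ridge regression by finite differences and compares it with $\mathrm{tr}(H)/M$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, lam = 20, 10, 0.5          # assumed toy dimensions and ridge strength
A = rng.normal(size=(M, N)) / np.sqrt(M)
y = rng.normal(size=M)

# Ridge fit: y_hat = H y with H = A (A^T A + lam I)^{-1} A^T
H = A @ np.linalg.solve(A.T @ A + lam * np.eye(N), A.T)

def y_hat(y):
    return H @ y

# GDF estimator (8): (1/M) sum_mu d y_hat_mu / d y_mu, by central differences.
eps = 1e-6
df_hat = 0.0
for mu in range(M):
    e = np.zeros(M); e[mu] = eps
    df_hat += (y_hat(y + e)[mu] - y_hat(y - e)[mu]) / (2 * eps)
df_hat /= M

# For a linear smoother the derivative is exact, so df_hat = tr(H)/M.
print(df_hat, np.trace(H) / M)
```

Since $\hat{y}$ is linear in $y$ here, the finite-difference evaluation of (8) agrees with $\mathrm{tr}(H)/M$ up to floating-point error.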

### 2.3 GDF for sparse penalties

For sparse penalties, the derivation of GDF and the construction of its estimator are not straightforward, because the penalty term is not differentiable at the origin. For the $\ell_1$ penalty, GDF is equal to the expectation of the density of non-zero components [7]:

    \mathrm{df}(y) = \frac{1}{M} \mathbb{E}_y\left[\|\hat{x}(y)\|_0\right],   (11)

and hence $\|\hat{x}(y)\|_0 / M$ is an unbiased estimator of GDF. This means that AIC corresponds to an unbiased estimator of the prediction error. The extension of this result to general problem settings has been extensively discussed [17, 18]. However, such a simple relationship does not generally hold, and efficient computation methods for (8) are required. Computation techniques for df have been developed using cross-validation and the parametric bootstrap method, but these techniques are computationally expensive [6, 19].

In this paper, we propose a method to construct the estimator of the prediction error based on Stein's lemma using AMP. In principle, the method can be applied for any penalty $J(x; \eta)$, and the extra computational cost of obtaining the fixed points is less than that of other methods.

### 2.4 Sparse penalties considered in this paper

The penalty functions considered in this paper are the $\ell_1$ penalty, the smoothly clipped absolute deviation (SCAD) penalty, and the minimax concave penalty (MCP). Under these regularizations, we explain the properties of the estimator using the one-dimensional problem

    \hat{\theta}(w) = \mathop{\rm argmin}_{\theta} \left\{ \frac{1}{2\sigma_w^2} (\theta - w)^2 + J(\theta; \eta) \right\},   (12)

where $w$ is the training sample. The ordinary least squares (OLS) estimator, which minimizes the squared error between $\theta$ and $w$, is $\hat{\theta}_{\rm OLS}(w) = w$ for any $w$. The penalty term changes the estimator from the OLS estimator. The penalties considered here give analytical solutions of (12). As explained in Sec. 4, the estimates given by AMP reduce to the one-body problem (12) with effective $w$ and $\sigma_w^2$. We summarize the solution of (12) in the following subsections.

#### 2.4.1 ℓ1 penalty

The $\ell_1$ penalty is given by

    J(\theta; \lambda) = \lambda |\theta|,   (13)

where $\lambda$ is the regularization parameter determined by the model selection criterion. The solution of (12) under the $\ell_1$ penalty is given by

    \hat{\theta}(w) = V_{\ell_1}(\tilde{w}; \sigma_w^2, \lambda)\, S_{\ell_1}(\tilde{w}; \sigma_w^2, \lambda),   (14)

where $\tilde{w} = w / \sigma_w^2$ and

    S_{\ell_1}(w; \sigma^2, \lambda) = \begin{cases} w - \mathrm{sgn}(w)\lambda & \text{for } |w| > \lambda \\ 0 & \text{otherwise} \end{cases}   (15)

    V_{\ell_1}(w; \sigma^2, \lambda) = \begin{cases} \sigma^2 & \text{for } |w| > \lambda \\ 0 & \text{otherwise.} \end{cases}   (16)

The behaviour of the estimator is shown in Figure 1(a). Across the whole region of $w$, the estimator is shrunk from the OLS estimator, denoted by the dashed line.
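A minimal sketch of (14)–(16), with $\tilde{w} = w/\sigma_w^2$ and assumed toy parameter values (numpy assumed available), cross-checked against brute-force grid minimization of (12):

```python
import numpy as np

def S_l1(w, sigma2, lam):
    # (15): soft threshold applied to the rescaled input w~
    return w - np.sign(w) * lam if abs(w) > lam else 0.0

def V_l1(w, sigma2, lam):
    # (16): rescaled variance; sigma^2 on the support, 0 off it
    return sigma2 if abs(w) > lam else 0.0

def theta_hat(w, sigma2, lam):
    # (14): estimator as the product V * S, evaluated at w~ = w / sigma^2
    wt = w / sigma2
    return V_l1(wt, sigma2, lam) * S_l1(wt, sigma2, lam)

# Cross-check against direct minimization of (12) on a fine grid.
sigma2, lam = 0.8, 1.0   # assumed values for illustration
grid = np.linspace(-5, 5, 200001)
for w in [-3.0, -0.5, 0.2, 2.5]:
    brute = grid[np.argmin((grid - w) ** 2 / (2 * sigma2) + lam * np.abs(grid))]
    assert abs(theta_hat(w, sigma2, lam) - brute) < 1e-4
```

For $|w| > \sigma_w^2 \lambda$ this reproduces the familiar soft-threshold rule $\hat{\theta}(w) = w - \sigma_w^2 \lambda\, \mathrm{sgn}(w)$, and zero otherwise.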

#### 2.4.2 SCAD penalty

The SCAD penalty is defined by [8]

    J(\theta; \eta) = \begin{cases} \lambda|\theta| & (|\theta| \le \lambda) \\ -\dfrac{\theta^2 - 2a\lambda|\theta| + \lambda^2}{2(a-1)} & (\lambda < |\theta| \le a\lambda) \\ \dfrac{(a+1)\lambda^2}{2} & (|\theta| > a\lambda), \end{cases}   (17)

where the regularization parameter is $\eta = \{\lambda, a\}$. The solution of (12) under SCAD regularization is given by

    \hat{\theta}(w; \sigma_w^2, \eta) = V_{\rm SCAD}(\tilde{w}; \sigma_w^2, \eta)\, S_{\rm SCAD}(\tilde{w}; \sigma_w^2, \eta),   (18)

where

    S_{\rm SCAD}(w; \sigma^2, \eta) = \begin{cases} w - \mathrm{sgn}(w)\lambda & \text{for } \lambda(1+\sigma^{-2}) \ge |w| > \lambda \\ w - \mathrm{sgn}(w)\dfrac{a\lambda}{a-1} & \text{for } a\lambda\sigma^{-2} \ge |w| > \lambda(1+\sigma^{-2}) \\ w & \text{for } |w| > a\lambda\sigma^{-2} \\ 0 & \text{otherwise} \end{cases}   (19)

    V_{\rm SCAD}(w; \sigma^2, \eta) = \begin{cases} \sigma^2 & \text{for } \lambda(1+\sigma^{-2}) \ge |w| > \lambda \\ \left(\sigma^{-2} - \dfrac{1}{a-1}\right)^{-1} & \text{for } a\lambda\sigma^{-2} \ge |w| > \lambda(1+\sigma^{-2}) \\ \sigma^2 & \text{for } |w| > a\lambda\sigma^{-2} \\ 0 & \text{otherwise.} \end{cases}   (20)

The behaviour of the estimator under SCAD regularization is shown in Figure 1(b). The SCAD estimator behaves like the $\ell_1$ and OLS estimators when $\lambda < |\tilde{w}| \le \lambda(1+\sigma_w^{-2})$ and $|\tilde{w}| > a\lambda\sigma_w^{-2}$, respectively. In the region $\lambda(1+\sigma_w^{-2}) < |\tilde{w}| \le a\lambda\sigma_w^{-2}$, the estimator linearly transits between the $\ell_1$ and OLS estimators. We call this region the transient region of SCAD.

#### 2.4.3 MCP

The MCP is defined by [9]

    J(\theta; \eta) = \begin{cases} \lambda|\theta| - \dfrac{\theta^2}{2a} & (|\theta| \le a\lambda) \\ \dfrac{a\lambda^2}{2} & (|\theta| > a\lambda), \end{cases}   (21)

where $\eta = \{\lambda, a\}$. The estimator of (12) under MCP is given by

    \hat{\theta}(w; \sigma_w^2, \eta) = V_{\rm MCP}(\tilde{w}; \sigma_w^2, \eta)\, S_{\rm MCP}(\tilde{w}; \sigma_w^2, \eta),   (22)

where

    S_{\rm MCP}(w; \sigma^2, \eta) = \begin{cases} w - \mathrm{sgn}(w)\lambda & \text{for } a\lambda\sigma^{-2} \ge |w| > \lambda \\ w & \text{for } |w| > a\lambda\sigma^{-2} \\ 0 & \text{otherwise} \end{cases}   (23)

    V_{\rm MCP}(w; \sigma^2, \eta) = \begin{cases} \left(\sigma^{-2} - a^{-1}\right)^{-1} & \text{for } a\lambda\sigma^{-2} \ge |w| > \lambda \\ \sigma^2 & \text{for } |w| > a\lambda\sigma^{-2} \\ 0 & \text{otherwise.} \end{cases}   (24)

Figure 1(c) shows the behaviour of the MCP estimator. The MCP estimator behaves like the OLS estimator when $|\tilde{w}| > a\lambda\sigma_w^{-2}$, and is connected from zero to the OLS estimator in the region $\lambda < |\tilde{w}| \le a\lambda\sigma_w^{-2}$. We call this the transient region of MCP.
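The thresholding rules (18)–(20) and (22)–(24) can be sketched and sanity-checked against direct minimization of (12); a minimal version under assumed illustrative parameters $\lambda = 1$, $a = 3.7$, $\sigma_w^2 = 0.8$ (numpy assumed available):

```python
import numpy as np

def J_scad(t, lam, a):
    # SCAD penalty (17), vectorized over t
    t = np.abs(t)
    return np.where(t <= lam, lam * t,
           np.where(t <= a * lam, -(t**2 - 2*a*lam*t + lam**2) / (2*(a-1)),
                    (a + 1) * lam**2 / 2))

def J_mcp(t, lam, a):
    # MCP (21), vectorized over t
    t = np.abs(t)
    return np.where(t <= a * lam, lam*t - t**2 / (2*a), a * lam**2 / 2)

def theta_scad(w, s2, lam, a):
    # (18)-(20) with w~ = w / sigma_w^2
    wt = w / s2
    x = abs(wt)
    if x <= lam:
        return 0.0
    if x <= lam * (1 + 1/s2):
        return s2 * (wt - np.sign(wt) * lam)
    if x <= a * lam / s2:
        return (wt - np.sign(wt) * a*lam/(a-1)) / (1/s2 - 1/(a-1))
    return w

def theta_mcp(w, s2, lam, a):
    # (22)-(24) with w~ = w / sigma_w^2
    wt = w / s2
    x = abs(wt)
    if x <= lam:
        return 0.0
    if x <= a * lam / s2:
        return (wt - np.sign(wt) * lam) / (1/s2 - 1/a)
    return w

# Brute-force check of the analytical solutions on a fine grid.
lam, a, s2 = 1.0, 3.7, 0.8
grid = np.linspace(-6, 6, 120001)
for w in [-4.0, -1.5, 0.5, 2.0, 5.0]:
    for th, J in [(theta_scad, J_scad), (theta_mcp, J_mcp)]:
        obj = (grid - w)**2 / (2*s2) + J(grid, lam, a)
        assert abs(th(w, s2, lam, a) - grid[np.argmin(obj)]) < 1e-3
```

Both rules return the OLS value $w$ beyond the transient region, which is the qualitative difference from the uniformly shrunk $\ell_1$ estimator.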

## 3 Problem setting

Problem (1) discussed in this paper is equivalent to searching for the ground state, but here we extend the problem to finite temperatures by introducing the inverse temperature $\beta$. We construct the posterior distribution at finite $\beta$, and extract the ground state corresponding to the maximum a posteriori (MAP) estimator in the limit $\beta \to \infty$.

The likelihood we consider here is the Gaussian distribution given by

    P_l(y|x) \propto \exp\left\{ -\frac{\beta}{2} \|y - Ax\|_2^2 \right\}.   (25)

The distribution (25) depends on the parameter $x$ only through the form of $u = Ax$; hence, we hereafter denote (25) as $P_l(y|u)$, where $u = Ax$. The prior distribution of $x$ is given by the penalty function and the inverse temperature as

    P_r(x) \propto \exp\left( -\beta J(x) \right),   (26)

and hence the posterior distribution of $x$ is given by

    P_\beta(x|y) = \exp\left\{ -\frac{\beta}{2} \|y - Ax\|_2^2 - \beta J(x) - \ln Z_\beta(y) \right\}   (27)

following Bayes' formula, where $Z_\beta(y)$ is the normalization constant. The estimate for the solution of (1) is expressed as

    \hat{x}(y) = \lim_{\beta \to \infty} \langle x \rangle_\beta,   (28)

where $\langle \cdot \rangle_\beta$ denotes the expectation according to (27) at inverse temperature $\beta$. As $\beta \to \infty$, (27) becomes the uniform distribution over the minimizers of (1). Hence, when problem (1) has a unique minimizer, it corresponds to the estimate (28).

The exact computation of the expectation in (28) is intractable. Thus, we resort to the approximate message passing (AMP) algorithm to approximate (28), a technique that is widely used in signal processing and statistical physics [20, 21, 22, 23].

## 4 Approximate message passing

In this section, we briefly summarize AMP for penalized regression problems. The explanation here only covers the aspects required to reach the result shown in Sec. 5; a detailed derivation is given in [24]. In the derivation of AMP, we introduce the following assumptions for sufficiently large $M$ and $N$.

A1:

All components of the predictor matrix are $O(1/\sqrt{M})$.

A2:

The correlation between predictors is negligible: the off-diagonal components of $A^{\rm T}A$ are $O(1/\sqrt{M})$.

The message passing algorithm defined for the probability distribution (27) is expressed by messages that are propagated from factors to variables and variables to factors according to

    \hat{m}_{\mu \to i}(x_i) = \frac{1}{\hat{Z}_{\mu \to i}} \int \prod_{j \in V(\mu) \setminus i} dx_j\, P_l(y_\mu | u_\mu(A, x)) \prod_{j \in V(\mu) \setminus i} m_{j \to \mu}(x_j)   (29)

    m_{i \to \mu}(x_i) = \frac{1}{Z_{i \to \mu}} P_r(x_i) \prod_{\gamma \in F(i) \setminus \mu} \hat{m}_{\gamma \to i}(x_i).   (30)

Here, $V(\mu)$ and $F(i)$ represent the variable nodes connected to the $\mu$-th factor node and the factor nodes connected to the $i$-th variable node, respectively, and $\setminus i$ ($\setminus \mu$) denotes that $i$ ($\mu$) is not included. Using the messages, the marginal distribution is given by

    m_i(x_i) = \frac{1}{Z_i} P_r(x_i) \prod_{\gamma \in F(i)} \hat{m}_{\gamma \to i}(x_i).   (31)

Updating the messages in the form of probability distributions is intractable, so we represent the distributions using the mean and the rescaled variance, which is the variance multiplied by $\beta$, assuming that they can be defined:

    a_{i \to \mu} \equiv \int dx\, x\, m_{i \to \mu}(x)   (32)

    v_{i \to \mu} \equiv \beta \int dx\, (x - a_{i \to \mu})^2\, m_{i \to \mu}(x).   (33)

Further, we define the mean and rescaled variance with respect to the marginal distribution as

    a_i \equiv \int dx\, x\, m_i(x)   (34)

    v_i \equiv \beta \int dx\, (x - a_i)^2\, m_i(x),   (35)

where the $i$-th component of the estimate (28), denoted by $\hat{x}_i(y)$, corresponds to $a_i$ at $\beta \to \infty$. Following the calculation under assumptions A1 and A2, the marginal distribution is derived as [24]

    m_i(x) = \mathcal{M}(x; \Sigma_i^2, R_i),   (36)

where

    \Sigma_i^2 = \left( \sum_\mu A_{\mu i}^2 (g'_{\rm out})_\mu \right)^{-1},   (37)

    R_i = a_i + \left( \sum_\mu (g_{\rm out})_\mu A_{\mu i} \right) \Sigma_i^2,   (38)

    \mathcal{M}(x; \Sigma^2, R) = \frac{1}{\hat{Z}(\Sigma^2, R)} P_r(x) \frac{1}{\sqrt{2\pi\Sigma^2}} \exp\left\{ -\frac{(x - R)^2}{2\Sigma^2} \right\},   (39)

and $\hat{Z}(\Sigma^2, R)$ is the normalization constant required to satisfy $\int dx\, \mathcal{M}(x; \Sigma^2, R) = 1$. Equation (36) indicates that the product of the messages from the factors connected to $x_i$ corresponds to a Gaussian distribution with mean $R_i$ and variance $\Sigma_i^2$. The functions $(g_{\rm out})_\mu$ and $(g'_{\rm out})_\mu$ are given by

    (g_{\rm out})_\mu = \frac{\partial}{\partial \omega_\mu} \log \Xi(y_\mu | \omega_\mu, V_\mu)   (40)

    (g'_{\rm out})_\mu = -\frac{\partial^2}{\partial \omega_\mu^2} \log \Xi(y_\mu | \omega_\mu, V_\mu),   (41)

where we define

    \omega_\mu \equiv \sum_{i=1}^N A_{\mu i} a_{i \to \mu}   (42)

    V_\mu \equiv \sum_{i=1}^N A_{\mu i}^2 v_i,   (43)

and the function $\Xi$ contains the dependence on the likelihood as (in the $\beta \to \infty$ limit considered here)

    \Xi(y_\mu | \omega_\mu, V_\mu) = \max_{u_\mu} \left[ P_l(y_\mu | u_\mu) \right]^{1/\beta} \exp\left\{ -\frac{(u_\mu - \omega_\mu)^2}{2 V_\mu} \right\}.   (44)

Under assumption A1, the following relationships hold [24]:

    a_{i \to \mu} = a_i - v_i \left( (g_{\rm out})_\mu A_{\mu i} + (g'_{\rm out})_\mu A_{\mu i}^2 a_{i \to \mu} \right),   (45)

    \omega_\mu = \sum_i A_{\mu i} a_i - (g_{\rm out})_\mu V_\mu.   (46)

### 4.1 Specific form for penalized linear regression

Substituting the output function for linear regression given by

    P_l(y_\mu | u_\mu) = \exp\left\{ -\frac{\beta}{2} (y_\mu - u_\mu)^2 \right\}   (47)

into Eqs. (40) and (41), we obtain the specific forms of $(g_{\rm out})_\mu$ and $(g'_{\rm out})_\mu$ as

    (g_{\rm out})_\mu = \frac{y_\mu - \omega_\mu}{1 + V_\mu}   (48)

    (g'_{\rm out})_\mu = \frac{1}{1 + V_\mu}.   (49)

Using these equations, the mean and variance parameters are given by

    R_i = a_i + \left( \sum_\mu \frac{y_\mu - \omega_\mu}{1 + V_\mu} A_{\mu i} \right) \Sigma_i^2   (50)

    \Sigma_i^2 = \left( \sum_\mu \frac{A_{\mu i}^2}{1 + V_\mu} \right)^{-1}.   (51)

In summary, the AMP algorithm consists of (52)–(57):

    V_\mu^{(t)} = \sum_i A_{\mu i}^2 v_i^{(t-1)},   (52)

    \Sigma_i^{2(t)} = \left( \sum_\mu \frac{A_{\mu i}^2}{1 + V_\mu^{(t)}} \right)^{-1},   (53)

    \omega_\mu^{(t)} = \sum_{i=1}^N A_{\mu i} a_i^{(t-1)} - \frac{V_\mu^{(t)} \left( y_\mu - \omega_\mu^{(t-1)} \right)}{1 + V_\mu^{(t)}},   (54)

    R_i^{(t)} = a_i^{(t-1)} + \left( \sum_\mu \frac{y_\mu - \omega_\mu^{(t)}}{1 + V_\mu^{(t)}} A_{\mu i} \right) \Sigma_i^{2(t)},   (55)

    a_i^{(t)} = f_a(\Sigma_i^{2(t)}, R_i^{(t)}),   (56)

    v_i^{(t)} = f_c(\Sigma_i^{2(t)}, R_i^{(t)}),   (57)

where we define

    f_a(\Sigma^2, R) = \int dx\, x\, \mathcal{M}(x; \Sigma^2, R)   (58)

    f_c(\Sigma^2, R) = \int dx\, \left( x - f_a(\Sigma^2, R) \right)^2 \mathcal{M}(x; \Sigma^2, R),   (59)

and note that

    f_c(\Sigma_i^2, R_i) = \Sigma_i^2 \left. \frac{\partial f_a(\Sigma^2, R)}{\partial R} \right|_{\Sigma_i^2, R_i}.   (60)

At $\beta \to \infty$, the integral of (58) can be computed by the saddle point method as

    f_a(\Sigma^2, R) = \mathop{\rm argmin}_x \left\{ \frac{(x - R)^2}{2\Sigma^2} + J(x; \eta) \right\},   (61)

which corresponds to the one-dimensional problem (12) with the replacements $w \to R$ and $\sigma_w^2 \to \Sigma^2$. Under the penalties considered here, namely the $\ell_1$, SCAD, and MCP penalties, these functions are computed analytically, and have the form

    f_a(\Sigma^2, R) = V_r(\tilde{R}; \Sigma^2, \eta)\, S_r(\tilde{R}; \Sigma^2, \eta)   (62)

    f_c(\Sigma^2, R) = V_r(\tilde{R}; \Sigma^2, \eta),   (63)

where $\tilde{R} = R / \Sigma^2$ and the subscript $r$ denotes the dependence on the regularization: $r \in \{\ell_1, \mathrm{SCAD}, \mathrm{MCP}\}$.

## 5 AMP-based estimator

At the AMP fixed point, Stein's lemma implies that the covariance between the response variable and its fit can be expressed as

    \mathrm{cov}\left[ y_\mu, \hat{y}_\mu \right] = \sigma_y^2\, \mathbb{E}_y\left[ \sum_i A_{\mu i} \frac{\partial a_i}{\partial y_\mu} \right].   (64)

The dominant dependence of the estimate $a_i$ on $y_\nu$ is induced through the mean parameter $R_i$. Hence, using (60),

    \frac{d a_i}{d y_\nu} \simeq \frac{d R_i}{d y_\nu} \frac{\partial f_a(\Sigma_i^2, R_i)}{\partial R_i} = \frac{v_i}{\Sigma_i^2} \frac{d R_i}{d y_\nu}.   (65)

Substituting the definition of $\omega_\mu$, (42), into (55), we obtain

    R_i = \left( \sum_\mu \frac{y_\mu - \sum_{j \ne i} A_{\mu j} a_{j \to \mu}}{1 + V_\mu} A_{\mu i} \right) \Sigma_i^2 + \left\{ a_i - \sum_\mu \frac{A_{\mu i}^2 a_{i \to \mu}}{1 + V_\mu} \Sigma_i^2 \right\},   (66)

where the second term can be ignored under assumption A1:

    a_i - \sum_\mu \frac{A_{\mu i}^2 a_{i \to \mu}}{1 + V_\mu} \Sigma_i^2 = v_i \Sigma_i^2 \sum_\mu \frac{A_{\mu i}^2}{1 + V_\mu} \left( (g_{\rm out})_\mu A_{\mu i} + (g'_{\rm out})_\mu A_{\mu i}^2 a_{i \to \mu} \right) = O(M^{-1}).

Hence, we have

    \frac{d R_j}{d y_\nu} = \Sigma_j^2 \left\{ \frac{A_{\nu j}}{1 + V_\nu} - \sum_\kappa \sum_{k \ne j} \frac{A_{\kappa k} A_{\kappa j}}{1 + V_\kappa} \frac{\partial a_{k \to \kappa}}{\partial y_\nu} \right\}   (67)

    = \Sigma_j^2 \left\{ \frac{A_{\nu j}}{1 + V_\nu} - \sum_\kappa \sum_{k \ne j} \frac{A_{\kappa k} A_{\kappa j}}{1 + V_\kappa} \frac{\partial R_k}{\partial y_\nu} \frac{\partial f_a(\Sigma_k^2, R_k)}{\partial R_k} \right\},   (68)

where $a_{k \to \kappa}$ is approximated by $a_k$, and (65) is used to derive (68) from (67). Substituting (68) into (65), the second term of (68) can be ignored, and we obtain

    \sum_i A_{\nu i} \frac{d a_i}{d y_\nu} = \frac{1}{1 + V_\nu} \sum_i A_{\nu i}^2 v_i = \frac{V_\nu}{1 + V_\nu}.   (69)

This means that

    \widehat{\mathrm{df}}^{(1)}(y) \equiv \frac{1}{M} \sum_\nu \frac{V_\nu}{1 + V_\nu}   (70)

is the AMP expression of the unbiased estimator of GDF, and we define the corresponding estimator of the prediction error as

    \hat{\epsilon}_{\rm pre}^{(1)}(y) = \epsilon_{\rm train}(y) + 2\sigma_y^2\, \widehat{\mathrm{df}}^{(1)}(y).   (71)

These expressions do not depend on the form of the penalty function $J$. Note that assumptions A1 and A2 are required in deriving (70), and hence its unbiasedness is not generally guaranteed.
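At an AMP fixed point, (70) and (71) are read off directly from the iteration variables. The sketch below (an assumed small noisy $\ell_1$ instance with i.i.d. Gaussian predictors and noise variance $\sigma_y^2$; numpy assumed available) runs the recursion of Sec. 4 and evaluates $\widehat{\mathrm{df}}^{(1)}$ and $\hat{\epsilon}_{\rm pre}^{(1)}$; for the $\ell_1$ penalty the result reproduces $\|\hat{x}\|_0 / M$, consistent with (11):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, k, lam, sy2 = 250, 500, 25, 0.1, 0.01   # assumed toy instance
A = rng.normal(size=(M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = 1.0
y = A @ x0 + np.sqrt(sy2) * rng.normal(size=M)

# AMP iteration (52)-(57) with the l1 denoiser (soft thresholding).
A2 = A ** 2
a, v, w = np.zeros(N), np.ones(N), np.zeros(M)
for t in range(300):
    V = A2 @ v
    S2 = 1.0 / (A2.T @ (1.0 / (1.0 + V)))
    w = A @ a - V * (y - w) / (1.0 + V)
    R = a + S2 * (A.T @ ((y - w) / (1.0 + V)))
    a = np.sign(R) * np.maximum(np.abs(R) - S2 * lam, 0.0)
    v = np.where(np.abs(R) > S2 * lam, S2, 0.0)

df_hat = np.mean(V / (1.0 + V))             # (70)
eps_train = np.mean((y - A @ a) ** 2)       # (3)
eps_pre_hat = eps_train + 2 * sy2 * df_hat  # (71)

# For l1, df_hat should match the density of nonzero components, cf. (11).
print(float(df_hat), np.count_nonzero(a) / M)
```

At the fixed point, $v_i / \Sigma_i^2 = 1$ on the support and $0$ off it, so for $\ell_1$ the sum in (70) collapses exactly to $\|\hat{x}\|_0 / M$, while for SCAD and MCP the transient-region components contribute extra terms (Sec. 6).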

## 6 Asymptotic behaviour of the AMP-based estimator for Gaussian predictor matrix

When the components of the predictor matrix and data are i.i.d. according to a Gaussian distribution, the asymptotic property of the AMP fixed point can be described by the replica method, where $M \to \infty$ and $N \to \infty$ while $\alpha = M/N$ remains $O(1)$. Here, we concentrate on the case in which the components of $y$ are i.i.d. with mean $m_y$ and variance $\sigma_y^2$, but the discussion can be extended to the case in which the generative model of $y$ contains a "true" sparse expression such as $y = Ax^0 + \xi$, where $\xi$ is a noise vector.

### 6.1 Replica analysis

The basis of replica analysis is the derivation of the free energy density, which is defined by

    f \equiv -\lim_{\beta \to \infty} \frac{1}{M\beta} \mathbb{E}_{y,A}\left[ \ln Z_\beta(y, A) \right].   (72)

The free energy density and the physical quantities derived from it are averaged over the predictor matrix and response variable, a procedure that is less common in the context of statistics. This averaging is implemented with the purpose of describing the typical behaviour of the problem under consideration.

After the calculation under the replica symmetry assumption, the free energy density is derived as [24]

    f = \mathop{\rm extr}_{Q, \chi, \hat{Q}, \hat{\chi}} \left\{ \frac{Q + \sigma_y^2 + m_y^2}{2(1 + \chi)} - \frac{Q\hat{Q} - \chi\hat{\chi}}{2} + \frac{1}{2\alpha} \xi(\hat{Q}, \hat{\chi}) \right\},   (73)

where $\mathop{\rm extr}$ denotes extremization with respect to the variables $\{Q, \chi, \hat{Q}, \hat{\chi}\}$. The function $\xi(\hat{Q}, \hat{\chi})$ is given by

    \xi(\hat{Q}, \hat{\chi}) = 2 \int Dz \log f_\xi(\sqrt{\hat{\chi}} z, \hat{Q})   (74)

    f_\xi(\sqrt{\hat{\chi}} z, \hat{Q}) = \min_x \exp\left( \frac{\hat{Q}}{2} x^2 - \sqrt{\hat{\chi}} z x + J(x; \eta) \right),   (75)

where $z$ is the random field that effectively represents the randomness of the problem introduced by $y$ and $A$, and $Dz = dz\, e^{-z^2/2}/\sqrt{2\pi}$ is the standard Gaussian measure. Equation (75) is equivalent to the one-dimensional problem (12) with the correspondence $\tilde{w} \to \sqrt{\hat{\chi}} z$ and $\sigma_w^2 \to \hat{Q}^{-1}$. The solution of (75), denoted by $x^*(z; \hat{Q}, \hat{\chi})$, is given by

    x^*(z; \hat{Q}, \hat{\chi}) = V_r(\sqrt{\hat{\chi}} z; \hat{Q}^{-1}, \eta)\, S_r(\sqrt{\hat{\chi}} z; \hat{Q}^{-1}, \eta),   (76)

and is statistically equivalent to the solution of the original problem (1). Therefore, the expected density of nonzero components in the estimate is given by

    \hat{\rho} \equiv \frac{1}{N} \mathbb{E}_{y,A}\left[ \|\hat{x}\|_0 \right] = \int Dz\, \mathbb{I}\left( |x^*(z; \hat{Q}, \hat{\chi})| > 0 \right),   (77)

where $\mathbb{I}(\cdot)$ is the indicator function that takes the value 1 when its argument is satisfied, and 0 otherwise. At the extremum of (73), the variables are given by

    \chi = -\frac{1}{\alpha} \frac{\partial \xi(\hat{Q}, \hat{\chi})}{\partial \hat{\chi}}   (78)

    Q = \frac{1}{\alpha} \frac{\partial \xi(\hat{Q}, \hat{\chi})}{\partial \hat{Q}}   (79)

    \hat{\chi} = \frac{Q + \sigma_y^2 + m_y^2}{(1 + \chi)^2}   (80)

    \hat{Q} = \frac{1}{1 + \chi}.   (81)

Note that the functional forms of $\hat{\chi}$ and $\hat{Q}$ in (80) and (81) do not depend on the regularization, but the values of $\chi$ and $Q$ are regularization-dependent.

### 6.2 GDF for Gaussian random predictors

As discussed in [10], the expression of GDF for Gaussian random predictors obtained using the replica method is given by

    \mathrm{df} = \frac{\chi}{1 + \chi}   (82)

for any regularization. The specific form of this expression depends on the regularization. We summarize the form of GDF for the three regularizations below.

### 6.3 ℓ1 penalty

For the $\ell_1$ penalty, the saddle point equation for $\chi$ is given by

    \chi = \frac{\hat{\rho}}{\alpha \hat{Q}},   (83)

where $\hat{\rho}$ is given by

    \hat{\rho} = \mathrm{erfc}\left( \frac{\lambda}{\sqrt{2\hat{\chi}}} \right).   (84)

Substituting (83) and (81) into (82), the GDF for $\ell_1$ regularization is given by

    \mathrm{df} = \frac{\hat{\rho}}{\alpha}.   (85)

Equation (85) is the expected number of nonzero parameters divided by $M$, and coincides with the well-known result of GDF for LASSO [7], where AIC is the unbiased estimator of the prediction error.

### 6.4 SCAD penalty

The saddle point equation for $\chi$ under SCAD regularization is given by

    \chi = \frac{\hat{\rho}}{\alpha \hat{Q}} + \frac{\gamma_S}{\hat{Q} \left\{ \hat{Q}(a-1) - 1 \right\}},   (86)

where

    \hat{\rho} = \mathrm{erfc}\left( \frac{\lambda}{\sqrt{2\hat{\chi}}} \right)   (87)

    \gamma_S = \frac{1}{\alpha} \left( \mathrm{erfc}\left( \frac{\lambda(\hat{Q} + 1)}{\sqrt{2\hat{\chi}}} \right) - \mathrm{erfc}\left( \frac{a\lambda\hat{Q}}{\sqrt{2\hat{\chi}}} \right) \right).   (88)

Here, $\gamma_S$ corresponds to the fraction of components of the regression coefficients that are in the transient region of SCAD (Sec. 2.4.2). Equation (86) leads to the following expression for GDF:

    \mathrm{df} = \frac{\hat{\rho}}{\alpha} + \frac{\gamma_S}{\hat{Q}(a-1) - 1}.   (89)

The coefficient of $\gamma_S$ in the second term corresponds to the rescaled variance of the estimate in the transient region. From (89), the difference between the prediction error and AIC is given by

    \mathbb{E}_{y,A}\left[ \epsilon_{\rm pre}(y, A) \right] - \mathbb{E}_{y,A}\left[ \mathrm{AIC}(y, A) \right] = \frac{2\sigma_y^2 \gamma_S}{\hat{Q}(a-1) - 1}.   (90)

This result means that AIC underestimates the prediction error when SCAD regularization is used, and the transient region contributes to the increase of GDF.

### 6.5 MCP

The saddle point equation for $\chi$ under MCP is given by

    \chi = \frac{\hat{\rho}}{\alpha \hat{Q}} + \frac{\gamma_M}{\hat{Q} (\hat{Q}a - 1)},   (91)

where

    \hat{\rho} = \mathrm{erfc}\left( \frac{\lambda}{\sqrt{2\hat{\chi}}} \right),   (92)

    \gamma_M = \frac{1}{\alpha} \left( \mathrm{erfc}\left( \frac{\lambda}{\sqrt{2\hat{\chi}}} \right) - \mathrm{erfc}\left( \frac{a\lambda\hat{Q}}{\sqrt{2\hat{\chi}}} \right) \right),   (93)

and $\gamma_M$ is the fraction of the regression coefficients in the transient region of MCP (Sec. 2.4.3). Equation (91) leads to

    \mathrm{df} = \frac{\hat{\rho}}{\alpha} + \frac{\gamma_M}{\hat{Q}a - 1},   (94)

where the coefficient of $\gamma_M$ corresponds to the rescaled variance of the estimate in the transient region. Hence, the difference between the prediction error and AIC is given by

    \mathbb{E}_{y,A}\left[ \epsilon_{\rm pre}(y, A) \right] - \mathbb{E}_{y,A}\left[ \mathrm{AIC}(y, A) \right] = \frac{2\sigma_y^2 \gamma_M}{\hat{Q}a - 1}.   (95)