# Sparse covariance matrix estimation in high-dimensional deconvolution

We study the estimation of the covariance matrix Σ of a p-dimensional normal random vector based on n independent observations corrupted by additive noise. Only a general nonparametric assumption is imposed on the distribution of the noise, without any sparsity constraint on its covariance matrix. In this high-dimensional semiparametric deconvolution problem, we propose spectral thresholding estimators that are adaptive to the sparsity of Σ. We establish an oracle inequality for these estimators under model misspecification and derive non-asymptotic minimax convergence rates that are shown to be logarithmic in n/log p. We also discuss the estimation of low-rank matrices based on indirect observations as well as the generalization to elliptical distributions. The finite sample performance of the thresholding estimators is illustrated in a numerical example.

## 1 Introduction

One of the fundamental problems of multivariate data analysis is to estimate the covariance matrix $\Sigma$ of a random vector $X$ based on independent and identically distributed (i.i.d.) realizations of $X$. An important feature of data sets in modern applications is high dimensionality. Since it is well known that classical procedures fail if the dimension $p$ is large, various novel methods of high-dimensional matrix estimation have been developed in the last decade. However, an important question has not yet been settled: how can $\Sigma$ be estimated in a high-dimensional regime if the observations are corrupted by noise?

Let $X_1,\dots,X_n$ be i.i.d. random vectors with multivariate normal distribution $\mathcal N(0,\Sigma)$. The maximum likelihood estimator of $\Sigma$ is the sample covariance estimator

$$\Sigma^*_X := \frac{1}{n}\sum_{j=1}^n X_jX_j^\top.$$

The estimation error of $\Sigma^*_X$ explodes for large $p$. To overcome this problem, sparsity assumptions can be imposed on $\Sigma$, reducing the effective number of parameters. The first rigorous studies of this idea go back to Bickel and Levina [3, 4] and El Karoui [21], who have assumed that most entries of $\Sigma$ are zero or very small. This allows for the construction of banding, tapering and thresholding estimators based on $\Sigma^*_X$, for which the dimension $p$ can grow exponentially in $n$. Subsequently, a rich theory has been developed in this direction, including Lam and Fan [34], who proposed a penalized pseudo-likelihood approach, Cai et al. [11], who studied minimax optimal rates, Cai and Zhou [12], studying the $\ell_1$ loss, as well as Rothman et al. [46] and Cai and Liu [8] for more general threshold procedures and adaptation, to mention only the papers most related to the present contribution. For current reviews of the theory of large covariance estimation, we refer to [9, 24]. Heading in a similar direction as noisy observations, covariance estimation in the presence of missing data has recently been investigated by Lounici [37] as well as Cai and Zhang [10].

Almost all estimators in the aforementioned results build on the sample covariance estimator $\Sigma^*_X$. In this paper, we assume that only the noisy observations

$$Y_j = X_j + \varepsilon_j,\qquad j=1,\dots,n,$$

are available, where the errors $\varepsilon_j$ are i.i.d. random vectors in $\mathbb R^p$ independent of $(X_j)_{j=1,\dots,n}$. Then the sample covariance estimator is biased:

$$\mathbb E[\Sigma^*_Y] = \mathbb E\Big[\frac{1}{n}\sum_{j=1}^n Y_jY_j^\top\Big] = \Sigma + \Gamma,$$

where $\Gamma$ is the covariance matrix of the errors. Assuming $\Gamma$ known in order to correct the bias is not very realistic. Moreover, for heavy-tailed errors that do not have finite second moments, $\Gamma$ is not defined and the argument based on bias correction makes no sense. Several questions arising in this context will be addressed below:

1. How much information on the distribution of $\varepsilon_1$ do we need to consistently estimate $\Sigma$?

2. Do we need finite second moments of $\varepsilon_1$ and/or sparsity restrictions on $\Gamma$ to estimate $\Sigma$?

3. What is the minimax optimal rate of estimating $\Sigma$ based on the noisy observations $Y_1,\dots,Y_n$?

If the covariance matrix $\Gamma$ of the errors exists and is known, the problem does not differ from the direct observation case, since $\Gamma$ can simply be subtracted from $\Sigma^*_Y$. If $\Gamma$ can be estimated, for instance from a separate sample of the error distribution or from repeated measurements, we can proceed similarly. However, in the latter case, we need to assume that $\Gamma$ is sparse, since otherwise we cannot find a good estimator for large dimensions. Reducing our knowledge about the errors further, we may only assume that the distribution of $\varepsilon_1$ belongs to a given nonparametric class. This leads to a high-dimensional deconvolution model. The difference from standard deconvolution problems is that the density of the $X_j$'s is a parametric object known up to a high-dimensional matrix parameter $\Sigma$. A related model in the context of stochastic processes has recently been studied by Belomestny and Trabs [2]. Obviously, we need some assumption on the distribution of the errors, since otherwise $\Sigma$ is not identifiable as, for example, in the case of normally distributed $\varepsilon_j$. It turns out that we do not need a sparse covariance structure for the error distribution, and we can allow for heavy-tailed errors without any moments.
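The bias identity $\mathbb E[\Sigma^*_Y]=\Sigma+\Gamma$ and the known-$\Gamma$ correction discussed above can be illustrated numerically; the following is a minimal sketch with toy covariances chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 200_000
Sigma = np.diag(np.arange(1.0, p + 1.0))   # true covariance of X (toy choice)
Gamma = 0.5 * np.eye(p)                    # noise covariance, assumed known here

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
eps = rng.multivariate_normal(np.zeros(p), Gamma, size=n)
Y = X + eps

# sample covariance of the noisy data: concentrates around Sigma + Gamma,
# so when Gamma is known it can simply be subtracted to debias the estimator
S_Y = Y.T @ Y / n
```

The maximal entrywise deviation of `S_Y` from $\Sigma+\Gamma$ is small, while its deviation from $\Sigma$ itself stays of order $|\Gamma|_\infty$, which is exactly the bias described above.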

From the deconvolution point of view, it might seem surprising that $\Sigma$, and thus the distribution of $X_1$, can be estimated consistently without knowing or estimating the distribution of the errors $\varepsilon_j$, but, as we will show, it is possible. The price to pay for this lack of information is in the convergence rates, which turn out to be very slow: logarithmic in the sample size. In pioneering works on the one-dimensional case, Matias [40] and Butucea and Matias [5] have constructed a variance estimator in the deconvolution model with a logarithmic convergence rate and a corresponding lower bound. In this paper, we provide a general multidimensional analysis of the minimax rates on the class of sparse covariance matrices.

To replace the sample covariance matrix by a deconvolution counterpart, we use some ideas from the literature on density deconvolution. Starting with Carroll and Hall [13] and Fan [22], the deconvolution problem has been extensively studied. In particular, unknown (but inferable) error distributions have been analysed by Neumann [42], Delaigle et al. [18], Johannes [31] and Delaigle and Hall [17], among others. For adaptive estimation with unknown error distribution we refer to Comte and Lacour [14], Kappus and Mabon [32], Dattner et al. [16] and references therein. Almost all contributions to the deconvolution literature are restricted to a univariate model. Hence, our study contributes to the deconvolution theory by treating the multivariate case; in particular, our techniques for the lower bounds might be of interest. To our knowledge, only Masry [39], Eckle et al. [20], and Lepski and Willer [35, 36] have studied the setting of multivariate deconvolution. They deal with a different problem, namely that of nonparametric estimation of the density of $X_1$ or of its geometric features when the distribution of $\varepsilon_1$ is known.

Applying a spectral approach, we construct an estimator $\hat\Sigma$ of the covariance matrix assuming that the $X_j$ are normally distributed and that the characteristic function $\psi$ of the distribution of $\varepsilon_1$ decays slower than the Gaussian characteristic function. A similar idea in a one-dimensional deconvolution problem has been developed by Butucea et al. [6]. This assumption on the decay of $\psi$ implies identifiability of $\Sigma$ and allows us to construct an estimator $\hat\Sigma$, which is consistent in the maximal entry norm. Based on $\hat\Sigma$, we then construct hard and soft thresholding estimators $\hat\Sigma^H_\tau$ and $\hat\Sigma^S_\tau$, respectively, for sparse matrices $\Sigma$. The sparsity is described by an upper bound $S$ on the $\ell_q$-norm, $0\le q<2$, of the entries of $\Sigma$. We establish sparsity oracle inequalities for $\hat\Sigma^H_\tau$ and $\hat\Sigma^S_\tau$ when the estimation error is measured in the Frobenius norm. This choice of the norm is naturally related to the distance between two multivariate normal distributions. The oracle bounds reveal that the thresholding estimators adapt to the unknown sparsity $S$. For the soft thresholding estimator we present an oracle inequality, which shows that the estimator adapts also to approximate sparsity.

Assuming that the characteristic function of $\varepsilon_1$ satisfies $\big|\log|\psi(u)|\big|\le T(1+|u|_\beta^\beta)$ for some $\beta<2$ and $T>0$, we prove the following upper bound on the estimation error in the Frobenius norm:

$$\|\hat\Sigma^H_\tau-\Sigma\| \le C S^{1/2}\Big(\log\frac{n}{\log p}\Big)^{-(1-\beta/2)(1-q/2)} \qquad (1)$$

for some constant $C>0$ and with high probability. The dependence of this bound on the sparsity $S$ is the same as found by Bickel and Levina [3] for the case of direct observations; furthermore, the rate is driven by the quotient $n/\log p$ familiar from direct high-dimensional covariance estimation. However, the severely ill-posed nature of the inverse problem causes the logarithmic dependence of the rate on this quotient. We also see that the estimation problem gets harder as $\beta$ approaches 2, where it is more difficult to distinguish the signal from the noise. Furthermore, we establish a lower bound showing that the rate in (1) cannot be improved in a minimax sense for $q=0$. Let us emphasise that our observations $Y_j$ are by definition not normally distributed. Therefore, the proof of the lower bound differs considerably from the usual lower bounds in high-dimensional statistics, which rely on Gaussian models.

Covariance estimation is crucial in many applications where observation errors appear. For instance, many portfolio optimization approaches rely on the covariance matrix of a possibly high number of assets, where the financial data are typically perturbed due to bid-ask spreads, micro-structure noise etc. [23, 49]. While in a high-frequency regime the observation noise can be handled by local averages, in a low-frequency situation, such as daily closing prices, the denoising is more difficult and our deconvolution approach can be applied, cf. [2]. Note that the dimension dependence in [2] can be improved with our analysis for low-rank matrices. As another application, the spatial empirical covariance matrices of climate data and their eigenvectors, called empirical orthogonal functions, are important spatio-temporal statistics. Naturally, recordings of climate data, e.g. sea surface temperatures, may suffer from measurement errors [15], which should be taken into account. In particular, sparse covariance structures appear in the problem of spatio-temporal wind speed forecasting taking into account the time series data of a target station and the data of surrounding stations, see [47].

This paper is organized as follows. In Section 2 we construct and analyze the spectral covariance matrix estimator. In Section 3 the resulting thresholding procedures are defined and analyzed. In Section 4 we investigate upper and lower bounds on the estimation error. In Section 5 some extensions of our approach are discussed including the estimation of low-rank matrices based on indirect observations as well as the generalization to elliptical distributions. The numerical performance of the procedure is illustrated in Section 6. Longer and more technical proofs are postponed to Section 7 and to the appendix.

Notation: For any $x\in\mathbb R^p$ and $r\ge1$, the $\ell_r$-norm of $x$ is denoted by $|x|_r$, and we write for brevity $|x|=|x|_2$. For $x,y\in\mathbb R^p$ the Euclidean scalar product is written as $\langle x,y\rangle$. We denote by $I_p$ the $p\times p$ identity matrix, and by $1\{\cdot\}$ the indicator function. For two matrices $A,B\in\mathbb R^{p\times p}$ the Frobenius scalar product is given by $\langle A,B\rangle=\mathrm{tr}(A^\top B)$, inducing the Frobenius norm $\|A\|=\langle A,A\rangle^{1/2}$. The nuclear norm is denoted by $\|A\|_1$ and the spectral norm by $\|A\|_\infty=\lambda_{\max}(A^\top A)^{1/2}$, where $\lambda_{\max}$ stands for the maximal eigenvalue. For a matrix $A=(a_{i,j})$ we denote by $|A|_q$ the $\ell_q$-norm of the entries of the matrix if $q>0$ and the number of non-zero entries for $q=0$; we also write $|A|_\infty=\max_{i,j}|a_{i,j}|$. We write $A>0$ or $A\ge0$ if the matrix $A$ is positive definite or semi-definite, respectively. We denote by $P_{\Sigma,\psi}$ the joint distribution of $Y_1,\dots,Y_n$ when the covariance matrix of $X_1$ is $\Sigma$ and the characteristic function of the noise is $\psi$. We will write $P$ for brevity if there is no ambiguity.

## 2 Spectral covariance estimators

Let $\psi$ denote the characteristic function of the error distribution:

$$\psi(u) = \mathbb E\big[e^{i\langle u,\varepsilon_1\rangle}\big],\qquad u\in\mathbb R^p.$$

Then the characteristic function of $Y_j$ is given by

$$\varphi(u) := \mathbb E\big[e^{i\langle u,Y_j\rangle}\big] = \exp\Big(-\frac12\langle u,\Sigma u\rangle + \log\psi(u)\Big),\qquad u\in\mathbb R^p.$$

Here and throughout we assume that $\psi(u)\neq0$ for all $u\in\mathbb R^p$, and we use the distinguished logarithm, cf. [48, Lemma 7.6]. This assumption is standard in the literature on deconvolution. Allowing for some zeros of $\psi$ has been studied in [41, 19]. Note that our estimation procedure defined below does not rely on all $u$ in $\mathbb R^p$, but uses only frequencies $u$ with a certain radius $|u|=U$.

The canonical estimator for the characteristic function $\varphi$ is the empirical characteristic function

$$\varphi_n(u) := \frac{1}{n}\sum_{j=1}^n e^{i\langle u,Y_j\rangle},\qquad u\in\mathbb R^p.$$

Since $\varphi_n(u)$ concentrates around $\varphi(u)$ with rate $n^{-1/2}$, we have $\varphi_n(u)\neq0$ with overwhelming probability for all frequencies $u$ ensuring $|\varphi(u)|\ge\kappa n^{-1/2}$ for a sufficiently large constant $\kappa>0$ (see Lemma 13 and Corollary 14). In this case $\log\varphi_n(u)$ is well defined. On the unlikely event $\{\varphi_n(u)=0\}$, we may set $\log\varphi_n(u):=0$.
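The empirical characteristic function is straightforward to compute; a minimal sketch on toy Gaussian data without noise (so that the true characteristic function is known in closed form):

```python
import numpy as np

def ecf(Y, u):
    """Empirical characteristic function phi_n(u) = n^{-1} sum_j exp(i<u, Y_j>)."""
    return np.mean(np.exp(1j * (Y @ u)))

rng = np.random.default_rng(1)
p, n = 3, 100_000
Y = rng.standard_normal((n, p))        # toy data with Sigma = I_p, no noise
u = np.array([1.0, 0.0, 0.0])

# for N(0, I_p) the true characteristic function at u is exp(-|u|^2 / 2)
err = abs(ecf(Y, u) - np.exp(-0.5))
```

The deviation `err` is of order $n^{-1/2}$, illustrating the concentration used above.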

Arguing similarly to Belomestny and Trabs [2], we consider the identity

$$\frac{\log\varphi_n(u)}{|u|^2} = -\frac{\langle u,\Sigma u\rangle}{2|u|^2} + \frac{\log\psi(u)}{|u|^2} + \frac{\log\varphi_n(u)-\log\varphi(u)}{|u|^2},\qquad u\in\mathbb R^p\setminus\{0\}. \qquad (2)$$

Both sides are normalized by $|u|^2$, this being the order of the leading term $\langle u,\Sigma u\rangle$. While the left-hand side of (2) is a statistic based on the observations $Y_1,\dots,Y_n$, the first term on the right-hand side encodes the parameter of interest, namely the covariance matrix $\Sigma$. The second term is a deterministic error due to the unknown distribution of $\varepsilon_1$. If $|\log\psi(u)|$ grows slower than $|u|^2$, i.e., the error distribution is less smooth than the normal distribution, the deterministic error vanishes as $|u|\to\infty$. The third term in (2) is a stochastic error term. Using the first order approximation $\log(1+x)\approx x$ we get

$$\log\varphi_n(u)-\log\varphi(u) = \log\Big(\frac{\varphi_n(u)-\varphi(u)}{\varphi(u)}+1\Big) \approx \frac{\varphi_n(u)-\varphi(u)}{\varphi(u)}. \qquad (3)$$

The latter expression resembles the estimation error in classical deconvolution problems. However, there is a difference: here the denominator contains $\varphi$ rather than the characteristic function of the distribution of the errors. A similar structure was detected in the statistical analysis of low-frequently observed Lévy processes by Belomestny and Reiß [1]. Following [1], one can call this type of problems auto-deconvolution problems. Since $|\varphi(u)| = \exp(-\frac12\langle u,\Sigma u\rangle)|\psi(u)|$ and we assume that $|\log|\psi(u)||$ grows slower than $|u|^2$, the factor $1/|\varphi(u)|$ grows exponentially in $|u|^2$. Thus, the estimation problem is severely ill-posed even in the one-dimensional case.

These remarks lead us to the conclusion that $\Sigma$ can be estimated consistently without any particular knowledge of the error distribution as soon as $\log\psi$ grows slower than quadratically, and the spectral radius $|u|$ in (2) is chosen to achieve a trade-off between the stochastic and deterministic errors. To specify this condition more precisely, it is convenient to consider, for any $\beta\in(0,2)$ and $T>0$, the following nonparametric class of functions:

$$\mathcal H_\beta(T) := \Big\{\psi \text{ characteristic function on } \mathbb R^p:\ \big|\log|\psi(u)|\big| \le T\big(1+|u|_\beta^\beta\big),\ u\in\mathbb R^p\Big\}.$$

Note that $|\psi(u)|\le1$ since $\psi$ is a characteristic function. Therefore, the condition that determines the class $\mathcal H_\beta(T)$ can be written as the lower bound $|\psi(u)|\ge\exp(-T(1+|u|_\beta^\beta))$. If the characteristic function of $\varepsilon_1$ belongs to $\mathcal H_\beta(T)$, the decay of order $|u|^\beta$ with $\beta<2$ of the characteristic exponent allows for separating the normal distribution of $X_1$ from the error distribution for large $|u|$. The decay rate $\beta$ determines the ill-posedness of the estimation problem. Noteworthy, we require neither sparsity restrictions on the joint distribution of $\varepsilon_1$ nor moment conditions on these random variables.

A typical representative of the class $\mathcal H_\beta(T)$ is the characteristic function of a vector of independent $\beta$-stable random variables. In the case of identically distributed marginals, it has the form $\psi(u)=\exp(-T|u|_\beta^\beta)$ for some scale parameter $T>0$. A related example with correlated components is a $p$-dimensional stable distribution with characteristic function $\psi(u)=\exp(-T|u|^\beta)$ (note that $|u|\le|u|_\beta$ for $\beta<2$). Recalling that stable distributions can be characterized as limit distributions of normalized sums of independent random variables, and interpreting $\varepsilon_j$ as an accumulation of many small measurement errors, suggests that these examples are indeed quite natural.
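For instance, for i.i.d. standard Cauchy coordinates ($\beta=1$) the characteristic function is $\psi(u)=e^{-|u|_1}$, which can be checked empirically; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 400_000, 3
eps = rng.standard_cauchy((n, p))      # independent 1-stable (Cauchy) coordinates

u = np.array([0.5, -0.25, 1.0])
psi_hat = np.mean(np.exp(1j * (eps @ u)))   # empirical characteristic function

# theoretical value: exp(-|u|_1), despite the noise having no moments
dev = abs(psi_hat - np.exp(-np.abs(u).sum()))
```

The agreement holds although the Cauchy distribution has no finite moments, illustrating that membership in $\mathcal H_\beta(T)$ requires no moment conditions.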

If $\beta<2$, the deterministic error term in (2) is small for large values of $|u|$. We will choose $u$ in (2) in the form $u=Uu^{(i,j)}$, where $U>0$ is large, and the $u^{(i,j)}$ are $p$-dimensional unit vectors defined by

$$u^{(i,i)} := u^{(i)} := \big(1\{i=k\}\big)_{k=1,\dots,p} \qquad\text{and}\qquad u^{(i,j)} := \frac{1}{\sqrt2}\big(u^{(i)}+u^{(j)}\big)\ \text{ for } i\neq j. \qquad (4)$$

Using the symmetry of $\Sigma$, we obtain

$$\langle u^{(i)},\Sigma u^{(i)}\rangle = \sigma_{i,i} \qquad\text{and}\qquad \langle u^{(i,j)},\Sigma u^{(i,j)}\rangle = \sigma_{i,j} + \frac{\sigma_{i,i}+\sigma_{j,j}}{2}$$

for any $i,j$ with $i\neq j$. Motivated by (2) applied to $u=Uu^{(i,j)}$ for some spectral radius $U>0$, we introduce the spectral covariance estimator:

$$\hat\Sigma = (\hat\sigma_{i,j})_{i,j=1,\dots,p} \quad\text{with}\quad \hat\sigma_{i,j} := \begin{cases} -\dfrac{2}{U^2}\,\mathrm{Re}\big(\log\varphi_n(Uu^{(i)})\big), & \text{if } i=j,\\[6pt] -\dfrac{2}{U^2}\,\mathrm{Re}\big(\log\varphi_n(Uu^{(i,j)})\big) - \dfrac12\big(\hat\sigma_{i,i}+\hat\sigma_{j,j}\big), & \text{if } i\neq j. \end{cases} \qquad (5)$$

Equivalently, we can write $\hat\sigma_{i,j}=-\frac{2}{U^2}\mathrm{Re}\big(\log\varphi_n(Uu^{(i,j)})\big)-\frac12\big(\hat\sigma_{i,i}+\hat\sigma_{j,j}\big)1\{i\neq j\}$ for any $i,j$. Since $\varphi_n$ concentrates around $\varphi$, cf. Lemma 13, we have $\varphi_n(Uu^{(i,j)})\neq0$ with high probability if $|\varphi(Uu^{(i,j)})|\gg n^{-1/2}$.
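A direct implementation of the spectral covariance estimator is short; the sketch below uses the normalization $-\frac{2}{U^2}\operatorname{Re}\log\varphi_n$ consistent with $\varphi(u)=\exp(-\frac12\langle u,\Sigma u\rangle+\log\psi(u))$, together with $\operatorname{Re}\log\varphi_n=\log|\varphi_n|$; the data, noise scale and spectral radius are toy choices:

```python
import numpy as np

def abs_ecf(Y, u):
    # |phi_n(u)|, so that Re log phi_n(u) = log|phi_n(u)|
    return np.abs(np.mean(np.exp(1j * (Y @ u))))

def spectral_cov(Y, U):
    """Spectral covariance estimator: diagonal from U*e_i, off-diagonal
    from U*(e_i + e_j)/sqrt(2) with the diagonal correction."""
    n, p = Y.shape
    Sig = np.zeros((p, p))
    for i in range(p):
        e = np.zeros(p); e[i] = 1.0
        Sig[i, i] = -2.0 / U**2 * np.log(abs_ecf(Y, U * e))
    for i in range(p):
        for j in range(i + 1, p):
            u = np.zeros(p); u[i] = u[j] = 1.0 / np.sqrt(2.0)
            q = -2.0 / U**2 * np.log(abs_ecf(Y, U * u))
            Sig[i, j] = Sig[j, i] = q - 0.5 * (Sig[i, i] + Sig[j, j])
    return Sig

rng = np.random.default_rng(4)
n = 200_000
Sigma = np.array([[1.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 1.0]])
X = rng.multivariate_normal(np.zeros(3), Sigma, size=n)
eps = 0.01 * rng.standard_cauchy((n, 3))   # heavy-tailed noise, beta = 1
Sig_hat = spectral_cov(X + eps, U=1.0)
```

With this small noise scale the estimator recovers $\Sigma$ entrywise despite the noise having no finite moments; for larger noise the radius $U$ must be increased, at an exponential cost in the stochastic error.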

The spectral covariance estimator $\hat\Sigma$ can be viewed as a counterpart of the classical sample covariance matrix for the case of indirect observations. The entries of $\hat\Sigma$ enjoy the following concentration property.

###### Theorem 1.

Assume that $X_1\sim\mathcal N(0,\Sigma)$ with $|\Sigma|_\infty\le R$ and that $\psi\in\mathcal H_\beta(T)$ for some $\beta\in(0,2)$ and $T>0$. Let $U\ge1$ and $\gamma>0$ satisfy $6\gamma e^{RU^2+3TU^\beta}\sqrt{\log(ep)/n}\le1$. Set

$$\tau(U) = 6\gamma\,\frac{e^{RU^2+3TU^\beta}}{U^2}\Big(\frac{\log(ep)}{n}\Big)^{1/2} + 3TU^{-2+\beta}. \qquad (6)$$

Then, for any $i,j\in\{1,\dots,p\}$,

$$P_{\Sigma,\psi}\big(|\hat\sigma_{i,j}-\sigma_{i,j}|<\tau(U)\big)\ge 1-12(ep)^{-\gamma^2} \qquad\text{and}\qquad P_{\Sigma,\psi}\Big(\max_{i,j=1,\dots,p}|\hat\sigma_{i,j}-\sigma_{i,j}|<\tau(U)\Big)\ge 1-c_*p^{2-\gamma^2},$$

where $c_*>0$ is a numerical constant.

###### Proof.

Set $S(u):=2\big(\log\varphi_n(u)-\log\varphi(u)\big)$. Using (2) and the definition (5) we obtain, for all $i,j\in\{1,\dots,p\}$,

$$|\hat\sigma_{i,j}-\sigma_{i,j}| \le U^{-2}\big|S(Uu^{(i,j)})\big| + 2U^{-2}\big|\log|\psi(Uu^{(i,j)})|\big| \le U^{-2}\big|S(Uu^{(i,j)})\big| + 2U^{-2}\max_{i,j\in\{1,\dots,p\}}\big|\log|\psi(Uu^{(i,j)})|\big|.$$

For $\psi\in\mathcal H_\beta(T)$, the last summand in this display is bounded uniformly by $3TU^{\beta-2}$ on the class $\mathcal H_\beta(T)$. This remark and Corollary 14 in Section 7.1 imply that

$$P\bigg(|\hat\sigma_{i,j}-\sigma_{i,j}| \ge \frac{6\gamma\sqrt{\log(ep)}}{\sqrt{n}\,U^2\min_{i,j\in\{1,\dots,p\}}|\varphi(Uu^{(i,j)})|} + 3TU^{-2+\beta}\bigg) \le 12(ep)^{-\gamma^2}$$

if the condition $6\gamma\sqrt{\log(ep)/n}\le\min_{i,j}|\varphi(Uu^{(i,j)})|$ is satisfied for all $i,j$. Note that, for any $i,j$ and any $\psi\in\mathcal H_\beta(T)$,

$$|\varphi(Uu^{(i,j)})| = \exp\Big(-\frac{U^2\langle u^{(i,j)},\Sigma u^{(i,j)}\rangle}{2} + \mathrm{Re}\log\psi(Uu^{(i,j)})\Big) \ge \exp\big(-U^2\big(|\Sigma|_\infty+3TU^{\beta-2}\big)\big).$$

Therefore, for $U$ and $\gamma$ satisfying the conditions of the theorem,

$$P\bigg(|\hat\sigma_{i,j}-\sigma_{i,j}| \ge 6\gamma\,\frac{e^{RU^2+3TU^\beta}}{U^2}\Big(\frac{\log(ep)}{n}\Big)^{1/2} + 3TU^{-2+\beta}\bigg) \le 12(ep)^{-\gamma^2}.$$

A union bound concludes the proof. ∎

The first term in $\tau(U)$ is an upper bound for the stochastic error. We recover the familiar factor $\sqrt{\log(ep)/n}$, which is due to a sub-Gaussian bound on the maximum of the entries $|\hat\sigma_{i,j}-\sigma_{i,j}|$. The factor $e^{RU^2+3TU^\beta}$ is an upper bound for $1/\min_{i,j}|\varphi(Uu^{(i,j)})|$ appearing in the linearization (3). Note that for $U\ge1$ this bound can be written as $e^{cU^2}$ for $c=R+3T$. This suggests the choice of the spectral radius in the form $U=c'\sqrt{\log n}$ for some sufficiently small constant $c'>0$. The second term in (6) bounds the deterministic error and determines the resulting rate, cf. Theorem 5.
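The trade-off between the two terms of the threshold can be visualized numerically; the sketch below evaluates our reading of (6) with illustrative parameter values (all constants are assumptions for the purpose of the plot-free check):

```python
import numpy as np

def tau(U, n=1_000_000, p=100, R=1.0, T=1.0, beta=1.0, gamma=1.0):
    """Reconstructed threshold (6): stochastic term + deterministic term."""
    stoch = 6 * gamma * np.exp(R * U**2 + 3 * T * U**beta) / U**2 \
            * np.sqrt(np.log(np.e * p) / n)
    determ = 3 * T * U**(beta - 2.0)
    return stoch + determ

# small U inflates the deterministic bias term U^(beta-2),
# large U inflates the exponentially growing stochastic term
vals = {U: tau(U) for U in (0.2, 1.0, 3.0)}
```

Both very small and very large spectral radii inflate the threshold, which is exactly the trade-off governing the choice $U\asymp\sqrt{\log n}$.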

## 3 Thresholding

Based on the spectral covariance estimator, we can now propose estimators of high-dimensional sparse covariance matrices. We consider the following sparsity classes of matrices:

$$\mathcal G_0(S,R) := \big\{\Sigma>0:\ \Sigma=\Sigma^\top,\ |\Sigma|_0\le S,\ |\Sigma|_\infty\le R\big\} \qquad\text{and}\qquad (7)$$
$$\mathcal G_q(S,R) := \big\{\Sigma>0:\ \Sigma=\Sigma^\top,\ |\Sigma|_q^q\le S,\ |\Sigma|_\infty\le R\big\} \qquad\text{for } q\in(0,2),$$

where $S>0$ denotes the sparsity parameter and $R>0$ bounds the largest entry of $\Sigma$. We also consider the larger classes $\mathcal G^*_q(S,R)$ that differ from $\mathcal G_q(S,R)$ only in that the condition $\Sigma>0$ is relaxed to $\Sigma\ge0$. Note that $S\ge p$ for the classes $\mathcal G_0(S,R)$, since otherwise the condition $\Sigma>0$ does not hold. This restriction on $S$ does not apply to the classes $\mathcal G^*_0(S,R)$, for which the unknown effective dimension of $X_1$ can be smaller than $p$. However, for the classes $\mathcal G^*_q(S,R)$, the overall model remains, in general, $p$-dimensional since the distribution of the noise can be supported on the whole space $\mathbb R^p$.

The sparsity classes considered by Bickel and Levina [3] and in many subsequent papers are given by

$$\mathcal U_q(s,R) := \Big\{\Sigma>0:\ \Sigma=\Sigma^\top,\ \max_i\sum_{j=1}^p|\sigma_{i,j}|^q\le s,\ \max_i\sigma_{i,i}\le R\Big\}$$

for $q\in(0,2)$, and with the usual modification for $q=0$. We have $\mathcal U_q(s,R)\subseteq\mathcal G^*_q(ps,R)$, so that our results can be used to obtain upper bounds on the risk for the classes $\mathcal U_q(s,R)$.

Based on the spectral covariance estimator, we define the spectral hard thresholding estimator for $\Sigma$ as

$$\hat\Sigma^H_\tau := \big(\hat\sigma^H_{i,j}\big)_{i,j=1,\dots,p} \quad\text{with}\quad \hat\sigma^H_{i,j} := \hat\sigma_{i,j}1\big\{|\hat\sigma_{i,j}|>\tau\big\}, \qquad (8)$$

for some threshold value $\tau>0$. The following theorem gives an upper bound on the risk of this estimator in the Frobenius norm.

###### Theorem 2.

Let $q\in[0,2)$, $S>0$, $R>0$, $\beta\in(0,2)$ and $T>0$. Let $\tau(U)$ be defined in (6) with parameters $U\ge1$ and $\gamma>0$ satisfying the assumptions of Theorem 1. Then

$$\sup_{\Sigma\in\mathcal G^*_q(S,R),\ \psi\in\mathcal H_\beta(T)} P_{\Sigma,\psi}\Big(\|\hat\Sigma^H_\tau-\Sigma\|\ge 3S^{1/2}\tau^{1-q/2}\Big) \le c_*p^{2-\gamma^2}$$

provided that $\tau\ge\tau(U)$ for $q=0$, and $\tau\ge2\tau(U)$ for $q\in(0,2)$. Here, $c_*$ is the constant from Theorem 1.

###### Proof.

First, consider the case $q=0$ and $\Sigma\in\mathcal G^*_0(S,R)$. In view of Theorem 1, the event $\mathcal A:=\{\max_{i,j}|\hat\sigma_{i,j}-\sigma_{i,j}|<\tau(U)\}$ is of probability at least $1-c_*p^{2-\gamma^2}$. On $\mathcal A$ we have the inclusion $\{(i,j):\hat\sigma^H_{i,j}\neq0\}\subseteq\{(i,j):\sigma_{i,j}\neq0\}$, so that $|\hat\Sigma^H_\tau-\Sigma|_0\le2|\Sigma|_0$. Therefore, on the event $\mathcal A$,

$$\|\hat\Sigma^H_\tau-\Sigma\|^2 \le |\hat\Sigma^H_\tau-\Sigma|_0\,|\hat\Sigma^H_\tau-\Sigma|_\infty^2 \le 2|\Sigma|_0\,|\hat\Sigma^H_\tau-\Sigma|_\infty^2 \le 2S\,|\hat\Sigma^H_\tau-\Sigma|_\infty^2.$$

Note that, again on $\mathcal A$, we have $|\hat\Sigma^H_\tau-\Sigma|_\infty\le2\tau$. Combining this with the last display implies the assertion of the theorem for $q=0$.

Consider now the case $q\in(0,2)$ and $\Sigma\in\mathcal G^*_q(S,R)$. We use the following elementary fact: if $|y-x|\le\tau/2$ for some $x,y\in\mathbb R$ and $\tau>0$, then $|y1\{|y|>\tau\}-x|\le3\min\{|x|,\tau/2\}$ (cf. [51]). Taking $x=\sigma_{i,j}$, $y=\hat\sigma_{i,j}$, and using Theorem 1 together with $\tau\ge2\tau(U)$, we obtain that, on an event of probability at least $1-c_*p^{2-\gamma^2}$,

$$|\hat\sigma^H_{i,j}-\sigma_{i,j}|\le3\min\{|\sigma_{i,j}|,\tau/2\},\qquad i,j=1,\dots,p.$$

Thus, with probability at least $1-c_*p^{2-\gamma^2}$,

$$\|\hat\Sigma^H_\tau-\Sigma\|^2 = \sum_{i,j}\big(\hat\sigma^H_{i,j}-\sigma_{i,j}\big)^2 \le 9\sum_{i,j}\min\{\sigma_{i,j}^2,\tau^2/4\} \le 9(\tau/2)^{2-q}|\Sigma|_q^q \le 9\tau^{2-q}S.$$

Since all bounds hold uniformly in $\Sigma\in\mathcal G^*_q(S,R)$ and $\psi\in\mathcal H_\beta(T)$, the theorem is proven. ∎

In the direct observation case, where $\varepsilon_j=0$, we have $\psi(u)=1$ for all $u\in\mathbb R^p$, so that the deterministic error term in (6) disappears. In this case, $U$ can be fixed and the threshold can be chosen as a multiple of $(\log p/n)^{1/2}$, analogously to [3]. Together with the embedding $\mathcal U_q(s,R)\subseteq\mathcal G^*_q(ps,R)$, we recover Theorem 2 of Bickel and Levina [3]. In Section 4 we will discuss in detail the optimal choice of the spectral radius $U$ and of the threshold $\tau$ in the presence of noise.
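The hard thresholding rule (8) acts entrywise and is trivial to implement; a minimal, fully deterministic sketch (the matrix and perturbation are toy choices):

```python
import numpy as np

def hard_threshold(Sig_hat, tau):
    """Entrywise hard thresholding as in (8): keep entries with |entry| > tau."""
    return Sig_hat * (np.abs(Sig_hat) > tau)

# sparse target plus a small entrywise perturbation (deterministic for clarity)
Sigma = np.array([[1.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 1.0]])
Sig_hat = Sigma + 0.05                 # every entry perturbed by 0.05
S_H = hard_threshold(Sig_hat, tau=0.2)
```

Entries that are zero in $\Sigma$ (perturbed to $0.05\le\tau$) are cut back to zero, so the support of $\Sigma$ is recovered exactly, while the surviving entries keep their perturbation of size $0.05$.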

The spectral soft thresholding estimator is defined as $\hat\Sigma^S_\tau:=(\hat\sigma^S_{i,j})_{i,j=1,\dots,p}$ with $\hat\sigma^S_{i,j}:=\mathrm{sign}(\hat\sigma_{i,j})\big(|\hat\sigma_{i,j}|-\tau\big)_+$ for some threshold $\tau>0$. It is well known, cf., e.g., [51], that

$$\hat\Sigma^S_\tau = \mathop{\mathrm{argmin}}_{A\in\mathbb R^{p\times p}}\Big\{|A-\hat\Sigma|_2^2 + 2\tau|A|_1\Big\}. \qquad (9)$$
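The variational characterization (9) can be verified numerically: soft thresholding is the exact entrywise minimizer of the penalized criterion, so no perturbation can decrease the objective. A small sketch (random symmetric input chosen for illustration):

```python
import numpy as np

def soft_threshold(M, tau):
    # entrywise soft thresholding: sign(m) * max(|m| - tau, 0)
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def objective(A, M, tau):
    # penalized criterion of (9): |A - M|_2^2 + 2 tau |A|_1, entrywise norms
    return np.sum((A - M) ** 2) + 2 * tau * np.sum(np.abs(A))

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4)); M = (M + M.T) / 2
tau = 0.3
A_star = soft_threshold(M, tau)

# random perturbations of the minimizer should never improve the objective
worse = [objective(A_star + 0.1 * rng.standard_normal((4, 4)), M, tau)
         for _ in range(200)]
```

Since the criterion is convex and separates over entries, `A_star` is the global minimizer, which is exactly what the check confirms.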

Adapting the proof of Theorem 2 in Rigollet and Tsybakov [44], we obtain the following oracle inequality, which is sharp for $q=0$ and loses a constant factor in front of the approximation term otherwise.

###### Theorem 3.

Assume that $X_1\sim\mathcal N(0,\Sigma)$ with $|\Sigma|_\infty\le R$ and that $\psi\in\mathcal H_\beta(T)$ for some $\beta\in(0,2)$ and $T>0$. Let $\tau\ge\tau(U)$, where $\tau(U)$ is defined in (6) with parameters $U\ge1$ and $\gamma>0$ satisfying the assumptions of Theorem 1. Then,

$$\|\hat\Sigma^S_\tau-\Sigma\|^2 \le \min_{A\in\mathbb R^{p\times p}}\Big\{\|A-\Sigma\|^2 + (1+\sqrt2)^2\tau^2|A|_0\Big\} \qquad (10)$$

with probability at least $1-c_*p^{2-\gamma^2}$, where $c_*$ is the constant from Theorem 1. For any $q\in(0,2)$ we have, with probability at least $1-c_*p^{2-\gamma^2}$,

$$\|\hat\Sigma^S_\tau-\Sigma\|^2 \le \min_{A\in\mathbb R^{p\times p}}\Big\{2\|A-\Sigma\|^2 + c(q)\tau^{2-q}|A|_q^q\Big\} \qquad (11)$$

where $c(q)>0$ is a constant depending only on $q$.

###### Proof.

Starting from the characterization (9), we use Theorem 2 of Koltchinskii et al. [33]. To this end, we write $\hat\sigma_{i,j}=\sigma_{i,j}+\xi_{i,j}$, where the $\xi_{i,j}$ are random variables with exponential concentration around zero due to Theorem 1. Observing $(\hat\sigma_{i,j})_{i,j}$ is thus a sequence space model in dimension $p^2$ and a special case of the trace regression model considered in [33]. Namely, the design matrices are diagonalisations of the canonical basis of $\mathbb R^{p\times p}$, so that Assumption 1 in [33] is satisfied in the notation of that paper. Note also that the rank of a diagonal matrix is equal to the number of its non-zero entries. Consequently, Theorem 2 in [33] yields that

$$|\hat\Sigma^S_\tau-\Sigma|_2^2 \le \min_{A\in\mathbb R^{p\times p}}\Big\{|A-\Sigma|_2^2 + (1+\sqrt2)^2\tau^2|A|_0\Big\}$$

on the event $\{\max_{i,j}|\xi_{i,j}|\le\tau\}$. To estimate the probability of this event, we apply Theorem 1. Inequality (11) follows from (10) using the same argument as in Corollary 2 of [44]. ∎

This theorem shows that the soft thresholding estimator allows for estimating matrices that are not exactly sparse but can be well approximated by a sparse matrix. Choosing $A=\Sigma$ in the oracle inequalities (10) and (11), we obtain the following corollary, analogous to Theorem 2.

###### Corollary 4.

Let $q\in[0,2)$, $S>0$, $R>0$, $\beta\in(0,2)$ and $T>0$. Let $\tau\ge\tau(U)$, where $\tau(U)$ is defined in (6) with parameters $U\ge1$ and $\gamma>0$ satisfying the assumptions of Theorem 1. Then

$$\sup_{\Sigma\in\mathcal G^*_q(S,R),\ \psi\in\mathcal H_\beta(T)} P_{\Sigma,\psi}\Big(\|\hat\Sigma^S_\tau-\Sigma\|\ge CS^{1/2}\tau^{1-q/2}\Big) \le c_*p^{2-\gamma^2},$$

where $C=1+\sqrt2$ for $q=0$, and $C=c(q)^{1/2}$ for $q\in(0,2)$, with $c(q)$ from Theorem 3.

## 4 Minimax optimality

In this section, we study minimax optimal rates for the estimation of $\Sigma$ on the classes $\mathcal G_q(S,R)$. We first state an upper bound on the rate of convergence of the hard thresholding estimator in this high-dimensional semiparametric problem. It is an immediate consequence of Theorem 2. Due to Corollary 4, the result directly carries over to the soft thresholding estimator.

###### Theorem 5.

Let $q\in[0,2)$, $S>0$, $R>0$, $\beta\in(0,2)$ and $T>0$. For $\gamma>0$, set

$$U_* = \sqrt{\frac{1}{4R}\log\frac{n}{64\gamma^2\log(ep)}}. \qquad (12)$$

Let $n$ be large enough so that $U_*\ge1$, $TU_*^\beta\ge1$ and $3TU_*^\beta\le RU_*^2$. Then, for $\tau=2\tau(U_*)$, where $\tau(\cdot)$ is defined in (6), we have

$$\sup_{(\Sigma,\psi)\in\mathcal G_q(S,R)\times\mathcal H_\beta(T)} P_{\Sigma,\psi}\big(\|\hat\Sigma^H_\tau-\Sigma\|\ge\bar C_1\bar r_{n,p}\big) \le \bar C_0p^{2-\gamma^2} \quad\text{with}\quad \bar r_{n,p} := S^{1/2}\bigg(R^{1-\beta/2}\,T\,\Big(\log\frac{n}{\log(ep)}\Big)^{-1+\beta/2}\bigg)^{1-q/2} \qquad (13)$$

for some numerical constants $\bar C_0,\bar C_1>0$.

###### Proof.

It follows from the assumptions on $n$ that $e^{RU_*^2+3TU_*^\beta}\le e^{2RU_*^2}$. This and the definition (12) of $U_*$ imply that $6\gamma e^{RU_*^2+3TU_*^\beta}\sqrt{\log(ep)/n}\le\frac34$, so that the assumptions of Theorem 1 are satisfied. Therefore, we can apply Theorem 2, which yields the result since

$$\tau(U_*) \le 6\gamma\,e^{2RU_*^2}U_*^{-2}\Big(\frac{\log(ep)}{n}\Big)^{1/2} + 3TU_*^{-2+\beta} = \tfrac34U_*^{-2} + 3TU_*^{-2+\beta} \le 4TU_*^{-2+\beta}. \qquad\blacksquare$$

It is interesting to compare Theorem 5 with the result of Butucea and Matias [5], corresponding to $p=1$ and establishing a logarithmic rate for the estimation of the variance in a deconvolution model under exponential decay of the Fourier transform of the noise. Butucea and Matias [5] have shown that, if $\beta<2$, their estimator achieves asymptotically a mean squared error of the order $(\log n)^{-2+\beta}$. This coincides with the case $p=1$, $q=0$ and $S=1$ of the non-asymptotic bound in (13). A similar rate for $\beta=1$ has been obtained by Matias [40] under assumptions on the decay of the Laplace transform.

We now turn to the lower bound matching (13) for $q=0$. Intuitively, the slow rate comes from the fact that the error distribution can mimic the Gaussian distribution up to some frequency in the Fourier domain. A rigorous application of this observation to the construction of lower bounds goes back to Jacod and Reiß [30], though in quite a different setting. For the multidimensional case that we consider here, the issue becomes particularly challenging.

###### Theorem 6.

Let $n\ge2$ and $\beta\in(0,2)$. Assume that $p\le S\le c_0p$, $T\ge T_0$ and $R\ge R_0$ for some constants $c_0\ge1$ and $T_0,R_0>0$, and that $p\le n^{c_3}$ for some $c_3>0$. Then, there are constants $c_1,c_2>0$ such that

$$\inf_{\tilde\Sigma}\ \sup_{(\Sigma,\psi)\in\mathcal G_0(S,R)\times\mathcal H_\beta(T)} P_{\Sigma,\psi}\big(\|\tilde\Sigma-\Sigma\|\ge c_1\underline r_{n,p}\big) > c_2 \quad\text{with}\quad \underline r_{n,p} := S^{1/2}R^{1-\beta/2}\,T\,(\log n)^{-1+\beta/2},$$

where the infimum is taken over all estimators $\tilde\Sigma$.

The proof of this theorem is postponed to Section 8. We use the method of reduction to testing of many hypotheses, relying on a control of the $\chi^2$-divergence between the corresponding distributions, cf. Theorem 2.6 in [50]. The present high-dimensional setting introduces some additional difficulties. When the dimension of the sample space is growing, an increasing number of derivatives of the characteristic functions has to be taken into account for the $\chi^2$-bound. Achieving bounds of the correct order in $p$ causes difficulty when $p$ is arbitrarily large. We have circumvented this problem by introducing a block structure to define the hypotheses. The construction of the family of covariance matrices of $X_1$ used in the lower bounds relies on ideas from Rigollet and Tsybakov [44], while the error distributions are chosen as perturbed $\beta$-stable distributions. To bound the $\chi^2$-divergence, we need a lower bound on the probability density of $Y_1$. It is shown by Butucea and Tsybakov [7] that the tails of the density of a one-dimensional stable distribution are polynomially decreasing. We generalize this result to the multivariate case (cf. Lemma 15 below) using properties of infinitely divisible distributions.

We now give some comments on the lower bound of Theorem 6. Assuming $S$ of order $p$ means that we consider quite a sparse regime. Recall that we always have $S\ge p$, as the diagonal of the covariance matrix is included in the definition of $|\Sigma|_0$ for the class $\mathcal G_0(S,R)$. An alternative strategy pursued in the literature is to estimate a correlation matrix, i.e., to assume that all diagonal entries are known and equal to one. However, this seems not very natural in the present noisy observation scheme. On the other hand, Theorem 6 shows that even in the sparse regime the estimation error tends to $0$ as $n\to\infty$ for dimensions $p$ growing polynomially in $n$. The logarithmic in $n$ rate reflects the fact that the present semiparametric problem is severely ill-posed.

Comparing the lower bound with the upper bound from Theorem 5, we see that they coincide if the dimension satisfies $p\le n^{c_3}$ for some $c_3>0$, since then $\log(n/\log(ep))\asymp\log n$. Thus, we have established the minimax optimal rate under this condition. Note also that we only lose a logarithmic factor for very large $p$, for instance, if $\log p$ is of order $n/\log n$.