A Gaussian sequence approach for proving minimaxity: A Review

10/04/2018 ∙ by Yuzo Maruyama, et al. ∙ The University of Tokyo ∙ Rutgers University

This paper reviews minimax best equivariant estimation in the following invariant estimation problems: estimation of a location parameter, a scale parameter, and a (Wishart) covariance matrix. We briefly review the development of the best equivariant estimator as a generalized Bayes estimator relative to right invariant Haar measure in each case. Then we prove minimaxity of the best equivariant procedure by giving a least favorable prior sequence based on non-truncated Gaussian distributions. The results in this paper are all known, but we bring a fresh and somewhat unified approach by using, in contrast to most proofs in the literature, a smooth sequence of non-truncated priors. This approach leads to some simplifications in the minimaxity proofs.

1 Introduction

We review some results on minimaxity of best equivariant estimators from what we hope is a fresh and somewhat unified perspective. Our basic approach is to start with a general equivariant estimator and demonstrate that the best equivariant estimator is a generalized Bayes estimator with respect to an invariant prior. We then choose an appropriate sequence of Gaussian priors whose support is the entire parameter space and show that the Bayes risks converge to the constant risk of the best equivariant estimator, which implies that it is minimax. All results on best equivariance and minimaxity considered in this paper are known in the literature, but using a sequence of Gaussian priors as a least favorable sequence simplifies the proofs and gives a fresh and unified perspective.
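
The logic is the standard least favorable sequence argument (see, e.g., Lehmann and Casella (1998)); in our notation, writing $\delta_0$ for the best equivariant estimator, $\pi_n$ for the prior sequence and $r(\pi_n)$ for the corresponding Bayes risks, it reads
\[
\sup_{\theta} R(\theta, \delta) \;\ge\; \int R(\theta, \delta)\,\pi_n(d\theta) \;\ge\; r(\pi_n)
\quad \text{for every estimator } \delta,
\]
so that, letting $n \to \infty$,
\[
\inf_{\delta} \sup_{\theta} R(\theta, \delta) \;\ge\; \lim_{n \to \infty} r(\pi_n) \;=\; \sup_{\theta} R(\theta, \delta_0),
\]
which, together with the trivial bound $\inf_{\delta} \sup_{\theta} R(\theta, \delta) \le \sup_{\theta} R(\theta, \delta_0)$, shows that $\delta_0$ is minimax.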

In this paper, we consider the following three estimation problems.

Estimation of a location parameter:

Let the density function of the observation $X$ be given by

$f(x-\theta), \qquad \theta \in \mathbb{R}.$  (1.1)

Consider estimation of the location parameter $\theta$ under the location invariant loss

$L(\delta - \theta).$  (1.2)

We study equivariant estimators under the location group, given by

$\delta(x+a) = \delta(x) + a \quad \text{for all } a \in \mathbb{R}.$  (1.3)
Estimation of a scale parameter:

Let the density function of the observation $X$ be given by

$\dfrac{1}{\sigma}\, f\!\left(\dfrac{x}{\sigma}\right)$  (1.4)

with scale parameter $\sigma > 0$, where $x > 0$. Consider estimation of the scale $\sigma$ under the scale invariant loss

$L(\delta/\sigma).$  (1.5)

We study equivariant estimators under the scale group, given by

$\delta(cx) = c\,\delta(x) \quad \text{for all } c > 0.$  (1.6)
Estimation of covariance matrix:

We study estimation of $\Sigma$ based on a random matrix $X$ having a Wishart distribution $W_p(n, \Sigma)$, where the density is given in (2.3) below. An estimator $\hat\Sigma$ is evaluated by the invariant loss

(1.7)

We consider equivariant estimators under the lower triangular group, given by

$\hat\Sigma(AXA^{\top}) = A\,\hat\Sigma(X)\,A^{\top} \quad \text{for all } A \in \mathcal{L}_p^{+},$  (1.8)

where $\mathcal{L}_p^{+}$ is the set of $p \times p$ lower triangular matrices with positive diagonal entries.

For the first two cases, with the squared error loss $(\delta-\theta)^2$ and the entropy loss $\delta/\sigma - \log(\delta/\sigma) - 1$, respectively, the so-called Pitman (1939) estimators

$\hat\theta_{\mathrm{P}}(x) = \dfrac{\int_{-\infty}^{\infty} \theta\, f(x-\theta)\,d\theta}{\int_{-\infty}^{\infty} f(x-\theta)\,d\theta}$  (1.9)

$\hat\sigma_{\mathrm{P}}(x) = \dfrac{\int_{0}^{\infty} \sigma^{-2} f(x/\sigma)\,d\sigma}{\int_{0}^{\infty} \sigma^{-3} f(x/\sigma)\,d\sigma}$  (1.10)

are well known to be best equivariant and minimax. Clearly, they are generalized Bayes with respect to the priors $d\theta$ and $d\sigma/\sigma$, respectively. Girshick and Savage (1951) gave the original proof of minimaxity. Kubokawa (2004)

also gives a proof and further developments in the restricted parameter setting. Both use a sequence of uniform distributions on expanding intervals as least favorable priors.
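
As a purely illustrative numerical sketch (ours, not from the original development), the Pitman estimators (1.9) and (1.10) can be evaluated by quadrature for any given density f; the function names and test densities below are our own choices.

import numpy as np
from scipy import integrate, stats

def pitman_location(x, f):
    """Pitman location estimator (1.9): generalized Bayes under the flat prior d(theta)."""
    num, _ = integrate.quad(lambda t: t * f(x - t), -np.inf, np.inf)
    den, _ = integrate.quad(lambda t: f(x - t), -np.inf, np.inf)
    return num / den

def pitman_scale(x, f):
    """Pitman scale estimator (1.10): generalized Bayes under d(sigma)/sigma and entropy loss."""
    num, _ = integrate.quad(lambda s: s**(-2) * f(x / s), 1e-8, np.inf)
    den, _ = integrate.quad(lambda s: s**(-3) * f(x / s), 1e-8, np.inf)
    return num / den

# Sanity checks: for a standard normal f, (1.9) reduces to x itself;
# for f(t) = exp(-t) on (0, infinity), (1.10) also reduces to x.
print(pitman_location(1.3, stats.norm.pdf))      # approximately 1.3
print(pitman_scale(2.0, lambda t: np.exp(-t)))   # approximately 2.0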

For the last case, James and Stein (1961) show that the best equivariant estimator is given by

$\hat\Sigma_{\mathrm{JS}} = T \Delta T^{\top}, \qquad \Delta = \operatorname{diag}(d_1, \dots, d_p),$  (1.11)

where $T \in \mathcal{L}_p^{+}$ is from the Cholesky decomposition $X = TT^{\top}$ of $X$ and $d_i = 1/(n+p-2i+1)$ for $i = 1, \dots, p$. Note that the group of lower triangular matrices with positive diagonal entries is solvable, so the result of Kiefer (1957) implies the minimaxity of $\hat\Sigma_{\mathrm{JS}}$. Tsukuma and Kubokawa (2015) give, as a sequence of least favorable priors, the invariant prior truncated on a sequence of expanding sets.
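
A minimal numerical sketch of (1.11) follows (ours; the function name and toy simulation are our own):

import numpy as np

def james_stein_cov(X, n):
    """James-Stein estimator (1.11): T diag(d_1, ..., d_p) T', where X = T T'
    is the Cholesky decomposition and d_i = 1/(n + p - 2i + 1)."""
    p = X.shape[0]
    T = np.linalg.cholesky(X)                      # lower triangular, positive diagonal
    d = 1.0 / (n + p - 2 * np.arange(1, p + 1) + 1)
    return T @ np.diag(d) @ T.T

# Toy usage with a simulated Wishart matrix (illustration only):
rng = np.random.default_rng(0)
p, n = 3, 20
Z = rng.standard_normal((n, p))                    # rows are N(0, I_p)
X = Z.T @ Z                                        # X ~ Wishart_p(n, I_p)
print(james_stein_cov(X, n))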

In each case, the sequence of priors we employ is based on a Gaussian sequence of possibly transformed parameters. This is in contrast to most proofs in the literature which use truncated versions of the invariant prior. As a consequence, the resulting proofs are less complicated.

Section 2 is devoted to developing the best equivariant estimator as a generalized Bayes estimator with respect to a right invariant (Haar measure) prior in each case. The general approach is basically that of Hora and Buehler (1966). Section 3 provides minimaxity proofs of the best equivariant procedure by giving a least favorable prior sequence based on (possibly transformed) Gaussian priors in each case. We give some concluding remarks in Section 4.

2 Establishing best equivariant procedures

All results in this section are well known. Our proofs of best equivariance in the three cases follow Hora and Buehler (1966). The reader is referred to Hora and Buehler (1966) for further details on their general development of a best equivariant estimator as the generalized Bayes estimator relative to right invariant Haar measure.

2.1 Estimation of location parameter

Consider an equivariant estimator, that is, an estimator which satisfies (1.3). Then we have the following result.

Theorem 2.1.

Let $X$ have distribution (1.1) and let the loss be given by (1.2). The generalized Bayes estimator with respect to the invariant prior $d\theta$ (Lebesgue measure on $\mathbb{R}$) is best equivariant under the location group; that is, its risk is no larger than that of any location equivariant estimator.

Proof.

The risk of an equivariant estimator satisfying (1.3) is written as

(2.1)

Then the best equivariant estimator is

2.2 Estimation of scale

Consider an equivariant estimator, that is, an estimator which satisfies (1.6). Then we have the following result.

Theorem 2.2.

Let $X$ have distribution (1.4) and let the loss be given by (1.5). Then the generalized Bayes estimator with respect to the prior $d\sigma/\sigma$, $\sigma > 0$, is best equivariant under the scale group; that is, its risk is no larger than that of any scale equivariant estimator.

Proof.

The risk of the equivariant estimator is written as

(2.2)

Then the best equivariant estimator is

2.3 Estimation of covariance matrix

Let $X$ have a Wishart distribution $W_p(n, \Sigma)$. Let $\mathcal{L}_p^{+}$ be the set of $p \times p$ lower triangular matrices with positive diagonal entries. By the Cholesky decomposition, $X$ and $\Sigma$ can be written as

$X = TT^{\top}, \qquad \Sigma = \Theta\Theta^{\top},$

for $T \in \mathcal{L}_p^{+}$ and $\Theta \in \mathcal{L}_p^{+}$. As in Theorem 7.2.1 of Anderson (2003), the probability density function of $T$ is

(2.3)

where the normalizing constant is given by

(2.4)

and the left-invariant Haar measure on $\mathcal{L}_p^{+}$ is given by

(2.5)

An estimator $\hat\Sigma = \hat\Sigma(X)$ of $\Sigma$ is evaluated by the invariant loss function given by

(2.6)

Denote the risk function by $R(\Sigma, \hat\Sigma) = E_{\Sigma}\bigl[L\{\Sigma, \hat\Sigma(X)\}\bigr]$.

For all $A \in \mathcal{L}_p^{+}$, the group transformation on the random matrix $X$ and the parameter matrix $\Sigma$ is defined by $(X, \Sigma) \mapsto (AXA^{\top}, A\Sigma A^{\top})$. The group operating on the parameter space is transitive. Any equivariant estimator of $\Sigma$ under the lower triangular group is of the form given by

$\hat\Sigma(X) = T\,\hat\Sigma(I_p)\,T^{\top}, \qquad X = TT^{\top},\ T \in \mathcal{L}_p^{+}.$  (2.7)
Theorem 2.3.

Let $X \sim W_p(n, \Sigma)$ and let the loss be as in (2.6). Then the generalized Bayes estimator with respect to the prior

(2.8)

is best equivariant under the lower triangular group, that is,

(2.9)

Note that (2.8) is the “left” invariant measure, which seems to contradict the general theory of Hora and Buehler (1966). However, this seeming anomaly is due to our parameterization $\Sigma = \Theta\Theta^{\top}$, and

(2.10)

The general theory implies that

(2.11)

where the right invariant Haar measure on $\mathcal{L}_p^{+}$ is given by

(2.12)

In the proof below, in addition to the left invariance of the measure in (2.5) and the right invariance of the measure in (2.12), we use the fact that

(2.13)
Proof of Theorem 2.3.

By (2.3) and (2.6), the risk of an equivariant estimator can be expressed as

Then the best equivariant estimator with respect to the group can be written as

3 Minimaxity

In this section, we choose an appropriate sequence of priors whose support is the entirety of the parameter space and show that the Bayes risks converge to the constant risk of the best equivariant estimator. By a well-known standard result (see, e.g., Lehmann and Casella (1998)), this implies minimaxity of the best equivariant estimator. In order to deal with explicit expressions for minimax estimators, as well as for somewhat technical reasons, in this section we specify the loss functions to be standard choices in the literature. For the location and scale problems, the squared error loss $(\delta - \theta)^2$ and the entropy loss $\delta/\sigma - \log(\delta/\sigma) - 1$

are used, respectively. For estimation of the covariance matrix, the so-called Stein's (1956) loss function given by

$L(\Sigma, \hat\Sigma) = \operatorname{tr}(\Sigma^{-1}\hat\Sigma) - \log\det(\Sigma^{-1}\hat\Sigma) - p$  (3.1)

is used.
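
For concreteness, a small computational sketch of the loss (3.1) (ours; the helper name is our own):

import numpy as np

def stein_loss(sigma, sigma_hat):
    """Stein's loss (3.1): tr(Sigma^{-1} Sigma_hat) - log det(Sigma^{-1} Sigma_hat) - p."""
    p = sigma.shape[0]
    m = np.linalg.solve(sigma, sigma_hat)      # Sigma^{-1} Sigma_hat
    _, logdet = np.linalg.slogdet(m)
    return np.trace(m) - logdet - p

# The loss vanishes exactly when Sigma_hat = Sigma and is positive otherwise:
sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
print(stein_loss(sigma, sigma))            # 0.0 (up to rounding)
print(stein_loss(sigma, 1.5 * sigma))      # 2 * (1.5 - log 1.5 - 1) > 0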

3.1 Estimation of location

In this section, we show the minimaxity of the best location equivariant estimator under squared error loss. A point of departure from most proofs in the literature is that a smooth sequence of Gaussian densities simplifies the proof. The approach also applies easily to the multivariate location family (see Remark 3.1).

Recall that the Bayes estimator corresponding to a (generalized) prior $\pi(\theta)$, under squared error loss, is characterized implicitly by

$\int \{\delta_{\pi}(x) - \theta\}\, f(x-\theta)\,\pi(\theta)\,d\theta = 0,$  (3.2)

or explicitly by

$\delta_{\pi}(x) = \dfrac{\int \theta\, f(x-\theta)\,\pi(\theta)\,d\theta}{\int f(x-\theta)\,\pi(\theta)\,d\theta}.$  (3.3)

Hence, by Theorem 2.1, the best equivariant estimator is given by

$\hat\theta_{\mathrm{P}}(x) = \dfrac{\int \theta\, f(x-\theta)\,d\theta}{\int f(x-\theta)\,d\theta}.$  (3.4)
Theorem 3.1.

Let $X$ have distribution (1.1) and let the loss be given by $(\delta - \theta)^2$. Then the best equivariant estimator $\hat\theta_{\mathrm{P}}$, given by (3.4), is minimax, and the minimax constant risk is given by

Under the squared error loss, the Bayes estimator is explicitly written as (3.3). However, in the following proof, the implicit expression (3.2) is mainly used, to indicate a possible extension to more general loss functions. For the same reason, the generic loss notation $L(\delta - \theta)$ is used instead of $(\delta - \theta)^2$.

Proof of Theorem 3.1.

Let

The Bayes risk of under the prior is given by

Also the corresponding Bayes estimator is given by

Clearly

and therefore, to show , it suffices to prove

Making the transformation yields

where

Now, make the transformation . We then have

or equivalently

Hence, by change of variables, we have

Note also and

Since for any , the dominated convergence theorem implies

(3.5)

and hence

(3.6)

Hence by Fatou’s lemma, we obtain that

(3.7)
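
As a concrete illustration of Theorem 3.1 (this example is ours): when $f$ is the standard normal density, (3.4) gives $\hat\theta_{\mathrm{P}}(x) = x$ with constant risk $1$, while the Bayes estimator under the $N(0, k)$ prior and its Bayes risk are
\[
\delta_k(x) = \frac{k}{k+1}\,x,
\qquad
r_k = \frac{k}{k+1},
\]
so the Bayes risks $r_k$ increase to the constant risk $1$ of $\hat\theta_{\mathrm{P}}$ as $k \to \infty$, exactly as the proof requires.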

Remark 3.1.

In the multivariate case, suppose and

Let . Then the Pitman estimator of , the generalized Bayes estimator with respect to , is

(3.8)

Using a sequence of Gaussian priors as the least favorable sequence gives minimaxity of (3.8) under quadratic loss.

3.2 Estimation of scale

In this section, we show the minimaxity of the scale Pitman estimator under entropy loss given by

$L(\delta/\sigma) = \delta/\sigma - \log(\delta/\sigma) - 1.$  (3.9)

Recall that the Bayes estimator corresponding to a (generalized) prior $\pi(\sigma)$, under entropy loss (3.9), is characterized implicitly by

$\dfrac{1}{\delta_{\pi}(x)} = E[\sigma^{-1} \mid x],$  (3.10)

or explicitly by

$\delta_{\pi}(x) = \dfrac{\int_0^{\infty} \sigma^{-1} f(x/\sigma)\,\pi(\sigma)\,d\sigma}{\int_0^{\infty} \sigma^{-2} f(x/\sigma)\,\pi(\sigma)\,d\sigma}.$  (3.11)

Hence the generalized Bayes estimator under the prior $d\sigma/\sigma$, which is best equivariant as shown in Theorem 2.2, is given by

$\hat\sigma_{\mathrm{P}}(x) = \dfrac{\int_0^{\infty} \sigma^{-2} f(x/\sigma)\,d\sigma}{\int_0^{\infty} \sigma^{-3} f(x/\sigma)\,d\sigma}.$  (3.12)

We have the following minimaxity result.

Theorem 3.2.

Let $X$ have distribution (1.4) and let the loss be given by (3.9). Then the best equivariant estimator $\hat\sigma_{\mathrm{P}}$, given by (3.12), is minimax, and the minimax constant risk is given by

Proof.

Assume or equivalently

where is the pdf of . Then the Bayes estimator satisfies

and the Bayes risk is given by

Clearly

and therefore, to show , it suffices to prove

Making the transformation yields

where

Now, make the transformation . We then have

or equivalently

Hence

where and is explicitly given as (when the loss is (3.9))

(3.13)

Note

Since

for any , the dominated convergence theorem implies

(3.14)

Also the continuity of implies

(3.15)

Hence by Fatou’s lemma, we obtain that

(3.16)
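
As a concrete illustration of Theorem 3.2 (this example is ours): if $f(t) = e^{-t}$ for $t > 0$, so that $X/\sigma$ is standard exponential, then
\[
\int_0^{\infty} \sigma^{-2} e^{-x/\sigma}\,d\sigma = \frac{1}{x},
\qquad
\int_0^{\infty} \sigma^{-3} e^{-x/\sigma}\,d\sigma = \frac{1}{x^{2}},
\]
so (3.12) reduces to $\hat\sigma_{\mathrm{P}}(x) = x$, and the constant minimax risk under (3.9) is $E[X/\sigma - \log(X/\sigma) - 1] = \gamma$, where $\gamma$ is Euler's constant.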

Remark 3.2.

In the same way, we can consider the estimation of with and propose the corresponding result,

is minimax and best equivariant for estimating under entropy loss

3.3 Estimation of covariance matrix

As mentioned at the beginning of this section, we use the so-called Stein's (1956) loss function given by

$L(\Sigma, \hat\Sigma) = \operatorname{tr}(\Sigma^{-1}\hat\Sigma) - \log\det(\Sigma^{-1}\hat\Sigma) - p.$  (3.17)

James and Stein (1961), in their Section 5, show that the best equivariant estimator is given by

$\hat\Sigma_{\mathrm{JS}} = T \Delta T^{\top}, \qquad \Delta = \operatorname{diag}(d_1, \dots, d_p),$  (3.18)

where $T$ is from the Cholesky decomposition $X = TT^{\top}$ of $X$ and