1 Introduction.
Fisher information is a fundamental concept in statistics because it quantifies the efficiency of point estimators in finite samples and the asymptotic behavior of maximum likelihood estimators. The importance of Fisher information derives from two properties:

Monotonicity: The Fisher information in a statistic (a reduction of a set of data) is never greater than the information in the complete data set.

Additivity: The total Fisher information in a set of independent observations is the sum of the Fisher informations of each of its components.
In this article we apply Fisher information to develop analytic inequalities involving both scalars and matrices. The monotonicity and additivity of Fisher information are key tools in deriving or reproving analytic inequalities, as shown below. Our general approach is to formulate a probability model, specialize it to Gaussian distributions, and use information-theoretic properties of the model to derive inequalities based on statistical principles.
In Kagan and Smith (2001) we used Fisher information to create statistical proofs of the monotonicity and convexity of the matrix function $A^{-1}$ for Hermitian matrices. That is,
$$A \ge B > 0 \implies B^{-1} \ge A^{-1},$$
and, given weights $\pi_1, \dots, \pi_n$ such that $\pi_j \ge 0$ and $\pi_1 + \dots + \pi_n = 1$,
$$(\pi_1 A_1 + \dots + \pi_n A_n)^{-1} \le \pi_1 A_1^{-1} + \dots + \pi_n A_n^{-1}.$$
Here and throughout the paper, for any pair of Hermitian matrices $A, B$, $A \ge B$ means $A - B$ is nonnegative definite. Similarly the matrix function $A^2$ is shown to be convex using statistical methods.
The convexity result above was extended to a notion of matrix-weighted averages in Kagan and Smith (1999). The scalar weights $\pi_j$ in the average are replaced by matrix weights $W_1, \dots, W_n$ as follows:
$$\bar{A} = W_1^* A_1 W_1 + \dots + W_n^* A_n W_n,$$
where $W_1^* W_1 + \dots + W_n^* W_n = I$. It was shown that $A^{-1}$ and $A^2$ are hyperconvex functions, meaning that
$$(W_1^* A_1 W_1 + \dots + W_n^* A_n W_n)^{-1} \le W_1^* A_1^{-1} W_1 + \dots + W_n^* A_n^{-1} W_n$$
and
$$(W_1^* A_1 W_1 + \dots + W_n^* A_n W_n)^2 \le W_1^* A_1^2 W_1 + \dots + W_n^* A_n^2 W_n.$$
(The asterisk denotes the conjugate transpose.)
As before, these results were derived by making use of the properties of Fisher information.
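The two hyperconvexity inequalities are easy to probe numerically. The following sketch is ours, not from the cited papers; it assumes NumPy, uses real symmetric matrices in place of general Hermitian ones, and draws random positive definite $A_j$ together with matrix weights $W_j$ normalized so that $\sum_j W_j^T W_j = I$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_spd(m):
    """Random real symmetric positive definite matrix."""
    B = rng.standard_normal((m, m))
    return B @ B.T + m * np.eye(m)

def inv_sqrt(S):
    """Inverse square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

m, n = 4, 3
A = [rand_spd(m) for _ in range(n)]

# Matrix weights W_j with sum_j W_j^T W_j = I, built by normalizing
# arbitrary B_j with S^{-1/2}, where S = sum_j B_j^T B_j
B = [rng.standard_normal((m, m)) for _ in range(n)]
S = sum(b.T @ b for b in B)
W = [b @ inv_sqrt(S) for b in B]

avg = sum(w.T @ a @ w for w, a in zip(W, A))  # matrix-weighted average
gap_inv = sum(w.T @ np.linalg.inv(a) @ w for w, a in zip(W, A)) - np.linalg.inv(avg)
gap_sq = sum(w.T @ a @ a @ w for w, a in zip(W, A)) - avg @ avg

# Hyperconvexity of A^{-1} and A^2: both gaps should be nonnegative definite
print(np.linalg.eigvalsh(gap_inv).min() >= -1e-9)  # True
print(np.linalg.eigvalsh(gap_sq).min() >= -1e-9)   # True
```

Normalizing by $S^{-1/2}$ guarantees the constraint $\sum_j W_j^T W_j = I$ exactly, so the check exercises the inequalities for generic non-commuting arguments.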
Our work is similar to the use of properties of entropy and related informational quantities to derive and extend classical inequalities. See Dembo, Cover and Thomas (1991) for an exposition of that work.
2 Properties of Fisher Information.
Basic results concerning Fisher information are given in standard textbooks on mathematical statistics, for example Rao (1973) or Bickel and Doksum (2015). Let $X$ be a random vector with density $p(x;\theta)$ depending on a parameter $\theta$. We assume the score function $\partial \log p(x;\theta)/\partial\theta$ is well defined. Then $I(\theta)$, the Fisher information on $\theta$ contained in $X$, is defined as
$$I(\theta) = E\left[\left(\frac{\partial \log p(X;\theta)}{\partial\theta}\right)^2\right].$$
Under further regularity conditions,
$$I(\theta) = -E\left[\frac{\partial^2 \log p(X;\theta)}{\partial\theta^2}\right].$$
The fundamental information inequality (or Cramér–Rao inequality) states that if $\hat\theta = \hat\theta(X)$ is an unbiased estimator of $\theta$, then
$$\operatorname{Var}(\hat\theta) \ge \frac{1}{I(\theta)},$$
and in the multiparameter case $\operatorname{Cov}(\hat\theta) \ge (I(\theta))^{-1}$. (If $A$ and $B$ are Hermitian matrices, the notation $A \ge B$ means that $A - B$ is nonnegative definite.)
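As an illustration of the Cramér–Rao inequality, a short simulation (our sketch, assuming NumPy; the parameter values are illustrative) checks that the sample mean of $n$ Gaussian observations, an unbiased estimator of $\theta$, has variance matching the bound $\sigma^2/n = 1/(n I(\theta))$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n, reps = 2.0, 1.5, 10, 200_000

# For one N(theta, sigma^2) observation I(theta) = 1/sigma^2, so by
# additivity the Cramér-Rao bound for n observations is sigma^2 / n.
samples = rng.normal(theta, sigma, size=(reps, n))
est = samples.mean(axis=1)        # sample mean: unbiased for theta

var_est = est.var()
cr_bound = sigma**2 / n
print(abs(est.mean() - theta) < 0.01)             # unbiased (up to noise)
print(abs(var_est - cr_bound) / cr_bound < 0.05)  # bound attained
```

The Gaussian sample mean attains the bound exactly; for non-Gaussian samples the simulated variance would exceed it.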
When $\theta$ is a location parameter, $X$ has density $f(x - \theta)$. The Fisher information on a location parameter becomes
$$I(\theta) = \int \left(\frac{f'(x)}{f(x)}\right)^2 f(x)\,dx.$$
Plainly, $I(\theta)$ is constant in $\theta$; we write $I(X)$ for it. (The notation $I(X)$ by default denotes the information on a location parameter throughout this paper.)
If $X$ is distributed as $N(\theta, \sigma^2)$, the density of $X$ is $\frac{1}{\sigma}\varphi\big(\frac{x-\theta}{\sigma}\big)$, where $\varphi$ is the standard normal density, and plainly $I(X) = 1/\sigma^2$.
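The location-parameter formula can be verified by direct numerical integration (a sketch of ours, assuming NumPy; the grid and tolerance are illustrative):

```python
import numpy as np

sigma = 1.3
x = np.linspace(-12.0, 12.0, 200_001)
dx = x[1] - x[0]

# Gaussian density with location 0 and standard deviation sigma
f = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
df = np.gradient(f, x)

# Location-parameter Fisher information: I(X) = \int (f'(x))^2 / f(x) dx
I = np.sum(df**2 / f) * dx
print(abs(I - 1 / sigma**2) < 1e-4)   # matches 1/sigma^2
```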
Thus for a scalar Gaussian random variable $X \sim N(\theta, \sigma^2)$ one has $I(X) = 1/\sigma^2$, while for any $X$ with $\operatorname{Var}(X) = \sigma^2$ and finite Fisher information, $I(X) \ge 1/\sigma^2$. This is a consequence of the Cramér–Rao inequality.
3 Mixtures, Mean Functions and Inequalities.
Consider an experiment consisting of observing a pair $(N, X)$, where $N$ is a discrete random variable with
$$P(N = j) = \pi_j > 0, \quad j = 1, \dots, n, \qquad \pi_1 + \dots + \pi_n = 1,$$
and the conditional distribution of $X$ given $N = j$ is $N(\theta, \sigma_j^2)$. The marginal distribution of $X$ is a scale mixture of Gaussian distributions with location parameter $\theta$. Its density is
$$p(x;\theta) = \sum_{j=1}^n \frac{\pi_j}{\sigma_j}\,\varphi\Big(\frac{x - \theta}{\sigma_j}\Big). \qquad (1)$$
Here $\varphi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ is the density of the standard normal $N(0, 1)$. The variance $\sigma^2$ of $X$ with density (1) is
$$\sigma^2 = \pi_1 \sigma_1^2 + \dots + \pi_n \sigma_n^2. \qquad (2)$$
The Fisher information on $\theta$ contained in the pair $(N, X)$ is
$$I_{N,X}(\theta) = \frac{\pi_1}{\sigma_1^2} + \dots + \frac{\pi_n}{\sigma_n^2}. \qquad (3)$$
Monotonicity of the Fisher information (the information in the whole data set is never less than that in any part of it; in our case $X$ is a part of $(N, X)$) implies
$$I(X) \le \frac{\pi_1}{\sigma_1^2} + \dots + \frac{\pi_n}{\sigma_n^2}.$$
For any $X$ with $\operatorname{Var}(X) = \sigma^2$, $I(X) \ge 1/\sigma^2$. Hence one gets a two-sided inequality for $I(X)$ with density (1):
$$\frac{1}{\pi_1 \sigma_1^2 + \dots + \pi_n \sigma_n^2} \le I(X) \le \frac{\pi_1}{\sigma_1^2} + \dots + \frac{\pi_n}{\sigma_n^2}. \qquad (4)$$
Since $p(x;\theta)$ in (1) is completely determined by the weights $\pi_1, \dots, \pi_n$ and variances $\sigma_1^2, \dots, \sigma_n^2$, so is $I(X)$. On setting $a_j = \sigma_j^2$ and $M(a_1, \dots, a_n) = 1/I(X)$, the inequality (4) takes the form
$$\Big(\frac{\pi_1}{a_1} + \dots + \frac{\pi_n}{a_n}\Big)^{-1} \le M(a_1, \dots, a_n) \le \pi_1 a_1 + \dots + \pi_n a_n. \qquad (5)$$
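The two-sided bound can be checked numerically by computing $I(X)$ for the mixture density (1) on a grid (our sketch, assuming NumPy; the weights and variances are illustrative):

```python
import numpy as np

pi = np.array([0.3, 0.5, 0.2])   # mixing probabilities, sum to 1
a = np.array([0.5, 1.0, 4.0])    # component variances a_j = sigma_j^2

x = np.linspace(-40.0, 40.0, 400_001)
dx = x[1] - x[0]
# Scale mixture of centered Gaussians: density (1) with theta = 0
p = sum(w * np.exp(-x**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
        for w, v in zip(pi, a))
dp = np.gradient(p, x)
I = np.sum(dp**2 / p) * dx       # location Fisher information I(X)

M = 1 / I                        # informational mean M(a_1, ..., a_n)
harm = 1 / np.sum(pi / a)        # weighted harmonic mean
arith = np.sum(pi * a)           # weighted arithmetic mean
print(harm <= M <= arith)        # True: M lies between the two means
```

For a non-degenerate mixture the informational mean lies strictly between the harmonic and arithmetic means.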
Recall that a function $M(a_1, \dots, a_n)$ is called a mean function if for all $a_1 > 0, \dots, a_n > 0$:

(i) $\min(a_1, \dots, a_n) \le M(a_1, \dots, a_n) \le \max(a_1, \dots, a_n)$,

(ii) $M(\lambda a_1, \dots, \lambda a_n) = \lambda M(a_1, \dots, a_n)$ for any $\lambda > 0$.
Classical examples of mean functions are the arithmetic, geometric and harmonic means.
From (5), $M$ satisfies (i), since the weighted harmonic and arithmetic means both lie between $\min(a_1, \dots, a_n)$ and $\max(a_1, \dots, a_n)$. Furthermore, for any $\lambda > 0$, $1/M(\lambda a_1, \dots, \lambda a_n)$ is the Fisher information in $\sqrt{\lambda}\,X$, where $X$ has density (1) with $\sigma_j^2 = a_j$, and due to the well-known scaling property of the Fisher information, $I(\sqrt{\lambda}\,X) = I(X)/\lambda$, so that $M$ satisfies (ii). Thus, $M$ is a mean function. We suggest calling it the informational mean.
Inequalities (4) and (5) have a statistical interpretation. Their right-hand sides are the Fisher information on $\theta$ in the pair $(N, X)$ with
$$P(N = j) = \pi_j, \qquad X \mid N = j \sim N(\theta, a_j), \qquad j = 1, \dots, n. \qquad (6)$$
The left-hand sides are the Fisher information on $\theta$ in a Gaussian $N(\theta, \sigma^2)$ with $\sigma^2$ given by (2).
Turn now to the case when $\sigma_1^2, \dots, \sigma_n^2$ are replaced with Hermitian positive definite matrices $A_1, \dots, A_n$. As is well known, the inequality between the arithmetic and harmonic means still holds:
$$(\pi_1 A_1^{-1} + \dots + \pi_n A_n^{-1})^{-1} \le \pi_1 A_1 + \dots + \pi_n A_n. \qquad (7)$$
The matrices are not assumed to commute, so that their geometric mean is not defined.
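Inequality (7) is easy to test numerically for random non-commuting matrices (our sketch, assuming NumPy; real symmetric matrices stand in for Hermitian ones):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_spd(m):
    """Random symmetric positive definite matrix (non-commuting in general)."""
    B = rng.standard_normal((m, m))
    return B @ B.T + 0.1 * np.eye(m)

m, n = 5, 4
pi = rng.dirichlet(np.ones(n))    # weights pi_j > 0 summing to 1
A = [rand_spd(m) for _ in range(n)]

arith = sum(w * a for w, a in zip(pi, A))
harm = np.linalg.inv(sum(w * np.linalg.inv(a) for w, a in zip(pi, A)))

# (7): the arithmetic mean dominates the harmonic mean in the Loewner order
print(np.linalg.eigvalsh(arith - harm).min() >= -1e-9)   # True
```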
Suppose that $X$ is an $m$-dimensional random vector with distribution given by a density $p(x;\theta)$, where $\theta$ is an $m$-dimensional parameter, the vector score
$$J(x;\theta) = \frac{\partial \log p(x;\theta)}{\partial \theta}$$
is well defined and $E[J(X;\theta)] = 0$. Then the matrix
$$I(\theta) = E\big[J(X;\theta)\,J(X;\theta)^T\big]$$
is called the matrix of Fisher information on $\theta$ contained in $X$. (The superscript $T$ denotes transposition.)
For a Gaussian $X$ with mean vector $\theta$ and nondegenerate covariance matrix $\Sigma$, $I(\theta) = \Sigma^{-1}$. For any $X$ with covariance matrix $\Sigma$, $I(\theta) \ge \Sigma^{-1}$. When $\theta$ is a location parameter, the information matrix is evidently constant in $\theta$ and is denoted $I(X)$. (Here and throughout this paper, $A \ge 0$ means that the matrix $A$ is nonnegative definite.)
Let $(N, X)$ be a pair of random elements whose distribution is given by
$$P(N = j) = \pi_j, \qquad X \mid N = j \sim N_m(\theta, A_j), \qquad j = 1, \dots, n. \qquad (8)$$
The marginal density of $X$ is the mixture of the densities of $N_m(\theta, A_1), \dots, N_m(\theta, A_n)$ with mixing probabilities $\pi_1, \dots, \pi_n$ and location parameter $\theta$. Similarly to (2), the covariance matrix of $X$ is
$$\Sigma = \pi_1 A_1 + \dots + \pi_n A_n, \qquad (9)$$
and the matrix of Fisher information on $\theta$ in the pair $(N, X)$ is
$$I_{N,X}(\theta) = \pi_1 A_1^{-1} + \dots + \pi_n A_n^{-1}, \qquad (10)$$
which is constant in $\theta$.
As in the case of a scalar-valued $X$, when $X$ is vector-valued the matrix of Fisher information is monotone. In our case, $I(X) \le I_{N,X}(\theta)$.
When $\theta$ is a location parameter, $I(X)$ becomes a function of $A_1, \dots, A_n$ and the mixing probabilities $\pi_1, \dots, \pi_n$. Comparing it with $I_{N,X}(\theta)$ on one side and with the matrix of Fisher information in a Gaussian $N_m(\theta, \Sigma)$ on the other leads to
$$(\pi_1 A_1 + \dots + \pi_n A_n)^{-1} \le I(X) \le \pi_1 A_1^{-1} + \dots + \pi_n A_n^{-1}. \qquad (11)$$
We want to emphasize that the matrices $A_1, \dots, A_n$ are not assumed to commute.
As a function of $A_1, \dots, A_n$, $M(A_1, \dots, A_n) = (I(X))^{-1}$ satisfies the above condition (ii) and the following version of (i): if a matrix $\underline{A}$ and a positive definite matrix $\overline{A}$ are such that $\underline{A} \le A_j \le \overline{A}$, $j = 1, \dots, n$, then $\underline{A} \le M(A_1, \dots, A_n) \le \overline{A}$. The statistical interpretation of (11) is the same as that of (4) and (5).
4 An inequality for Fisher information in sums of random variables.
In the previous section, we considered the Fisher information in a scale mixture of Gaussian densities to obtain analytic inequalities for mean functions. In this section we follow a different approach, examining the Fisher information on weighted location parameters in an independent sample of observations. The model is as follows.
For independent $X_1, \dots, X_n$ with finite Fisher information $I(X_i)$, $i = 1, \dots, n$, and $\theta \in \mathbb{R}$, set
$$Y_i = X_i + \theta, \qquad i = 1, \dots, n. \qquad (12)$$
The information in $Y_i$ on $\theta$ equals $I(X_i)$. Observe that for any constant $c \ne 0$, the information in $cY_i$ equals that in $Y_i$.
Multiplying both sides of (12) by $c_i$ with $c_i \ne 0$ and $c_1 + \dots + c_n = 1$, and taking the sum of the results, gives
$$c_1 Y_1 + \dots + c_n Y_n = c_1 X_1 + \dots + c_n X_n + \theta,$$
whence
$$I_{c_1 Y_1 + \dots + c_n Y_n}(\theta) = I(c_1 X_1 + \dots + c_n X_n). \qquad (13)$$
The information about $\theta$ in the vector $(Y_1, \dots, Y_n)$ with independent components is the same as in the vector $(c_1 Y_1, \dots, c_n Y_n)$. Due to monotonicity and additivity of the Fisher information,
$$I_{c_1 Y_1 + \dots + c_n Y_n}(\theta) \le I_{(Y_1, \dots, Y_n)}(\theta) = I(X_1) + \dots + I(X_n), \qquad (14)$$
whence
$$I(c_1 X_1 + \dots + c_n X_n) \le I(X_1) + \dots + I(X_n) \qquad (15)$$
for any $c_1, \dots, c_n$ with $c_1 + \dots + c_n = 1$. For $n = 2$ this inequality is known (e.g., see Dembo, Cover & Thomas 1991, Theorem 13).
When the $X_i$ are independent Gaussian variables with variances $\sigma_1^2, \dots, \sigma_n^2$, the sum $c_1 X_1 + \dots + c_n X_n$ has a Gaussian distribution with variance $c_1^2 \sigma_1^2 + \dots + c_n^2 \sigma_n^2$ and (15) takes the form
$$\frac{1}{c_1^2 \sigma_1^2 + \dots + c_n^2 \sigma_n^2} \le \frac{1}{\sigma_1^2} + \dots + \frac{1}{\sigma_n^2} \qquad (16)$$
for $c_1, \dots, c_n$ subject to $c_1 + \dots + c_n = 1$.
Replacing $c_i$ with $\pi_i$ and $\sigma_i^2$ with $\sigma_i^2/\pi_i$, subject to $\pi_i > 0$ and $\pi_1 + \dots + \pi_n = 1$, gives a generalization, in a sense, of the classical inequality between the arithmetic and harmonic means:
$$\Big(\frac{\pi_1}{\sigma_1^2} + \dots + \frac{\pi_n}{\sigma_n^2}\Big)^{-1} \le \pi_1 \sigma_1^2 + \dots + \pi_n \sigma_n^2 \qquad (17)$$
for any $\pi_i > 0$ with $\pi_1 + \dots + \pi_n = 1$; the classical (equal-weights) inequality corresponds to $\pi_i = 1/n$.
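Both (16) and (17) are elementary to verify numerically (our sketch, assuming NumPy; the weights and variances are randomly drawn illustrations):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
sigma2 = rng.uniform(0.2, 5.0, n)   # variances sigma_i^2
c = rng.uniform(0.1, 1.0, n)
c = c / c.sum()                     # c_1 + ... + c_n = 1

# (16): 1 / sum(c_i^2 sigma_i^2) <= sum(1 / sigma_i^2)
print(1 / np.sum(c**2 * sigma2) <= np.sum(1 / sigma2))   # True

# (17): weighted harmonic mean <= weighted arithmetic mean
pi = rng.dirichlet(np.ones(n))      # pi_i > 0 summing to 1
print(1 / np.sum(pi / sigma2) <= np.sum(pi * sigma2))    # True
```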
5 General comments
The paper reveals the statistical meaning of classical mean functions (see in this connection Rao (2000), Kagan and Smith (2001), Kagan (2003), Kagan and Rao (2003)) and introduces a new one of purely statistical origin, called the informational mean. It leads to a new inequality, similar to the classical inequality between the arithmetic, geometric and harmonic means, that holds when the arguments of the mean functions are Hermitian positive definite matrices, not necessarily commuting, in which case the geometric mean cannot be defined.
The material of the paper can be used as part of a chapter on Fisher information in graduate courses in statistics.
REFERENCES

Bickel, P.J. and Doksum, K.A. (2015), Mathematical Statistics (Vol. 1, 2nd ed.), Boca Raton: CRC Press.

Dembo, A., Cover, T.M., and Thomas, J.A. (1991), “Information Theoretic Inequalities,” IEEE Trans. Information Theory, 37, 1501–1518.

Kagan, A. and Smith, P.J. (1999), “A Stronger Version of Matrix Convexity as Applied to Functions of Hermitian Matrices,” J. Inequal. & Appl., 3, 143–152.

Kagan, A. and Smith, P.J. (2001), “Multivariate Normal Distributions, Fisher Information and Matrix Inequalities,” Int. J. Math. Educ. Sci. Technol., 32, 91–96.
Kagan, A. (2003), “Statistical Approach to Some Mathematical Problems,” Austrian J. Statist., 32(1–2), 71–83.

Kagan, A. and Rao, C.R. (2003), “Some Properties and Applications of the Efficient Fisher Score,” J. Statist. Plann. Inference, 116, 343–352.

Rao, C.R. (2000), “Statistical Proofs of Some Matrix Inequalities,” Linear Algebra Appl., 321, 307–320.

Rao, C.R. (1973), Linear Statistical Inference and Its Applications, Hoboken, NJ: J. Wiley & Sons.