Fisher-Rao Geometry and Jeffreys Prior for Pareto Distribution

In this paper, we investigate the Fisher–Rao geometry of the two-parameter family of Pareto distributions. We prove that its geometric structure is isometric to the Poincaré upper half-plane model, and we then study the corresponding geometric features by presenting explicit expressions for the connection, curvature and geodesics. These results are applied to Bayesian inference via the Jeffreys prior determined by the volume form. In addition, the posterior distribution arising from this prior is computed, providing a systematic method for Bayesian inference on the Pareto distribution.

1 Introduction

A statistical model is often described by a family of distributions indexed by a set of parameters. These parameters form a space which can be endowed with differential-geometric structures reflecting the properties of the specified distributions. These geometric structures then provide geometric approaches to statistical problems. One of the most useful structures is the Fisher–Rao metric, which stems from the Fisher information matrix [11]. Rao [18] considered the Fisher information from a differential-geometric viewpoint, and the Fisher–Rao metric has gradually become one of the central concepts of information geometry, e.g. [1, 2, 20, 10], which combines ideas from differential geometry and information theory to study the geometric structure of statistical models.

In this paper, we focus on the two-parameter family of Pareto distributions, a family of statistical models with power-law probability densities that is often used to describe many scientific and social phenomena, e.g. [3, 22]. The Pareto distribution does not belong to the well-studied regular type of distributions, since the support of its probability density depends on one of the parameters. Consequently, the Fisher–Rao geometry of the Pareto distribution is not as regular as that of many other distributions. In particular, the Fisher–Rao metric is no longer equal to the negative Hessian form, as it would be in the regular case. Although both forms are symmetric, the negative Hessian form is not guaranteed to be positive definite on the whole parameter space. In the earlier work [15], however, the geometric structure of the Pareto distribution was actually calculated from the negative Hessian form. Thus, this paper is first devoted to presenting the proper metric structure for the Pareto distribution. Interestingly, this structure is readily identified: we shall prove that it is isometric to the Poincaré upper half-plane model, which is reminiscent of a similar result for the two-parameter family of one-dimensional normal distributions. Based on this observation, geometric characteristics of the Pareto distribution such as curvature and geodesics can be readily obtained. These results are included in Section 3.

To illustrate an application of the geometric structure to statistical inference for the Pareto distribution, we use the Jeffreys prior to develop a systematic approach to Bayesian inference. The Jeffreys prior (cf. [12]) is a non-informative prior distribution on the parameter space, that is, a prior distribution assuming no subjective information. Its basic idea is to define a prior such that the probability of finding the parameter in a specified region is proportional to the geometric volume of that region. Unlike the previous work [14], no change of variables is introduced in this paper, since a change of variables involving the parameters may greatly affect the statistical and geometric properties of the distribution; for instance, it is well known that any normal distribution can be transformed into a standard one. Our derived prior is an improper prior, which does not correspond to a proper probability distribution; nevertheless we can proceed to compute the posterior distribution from this prior and obtain a proper posterior probability distribution, as we show in Section 4. The posterior distribution can then be directly applied to Bayesian inference for Pareto distributions; as an illustration, we present simulation results in Section 5.

2 Preliminaries

For convenience, we review some necessary differential-geometrical concepts and results (cf. [17, 21]) which are used later in Section 3.

Definition 1.

A Riemannian metric g on a smooth manifold M is an assignment to each point p ∈ M of an inner product g_p on the tangent space T_pM. A Riemannian manifold is a pair (M, g) consisting of a manifold M together with a Riemannian metric g on M.

Definition 2.

A diffeomorphism F : (M, g) → (N, h) is called an isometry between two Riemannian manifolds (M, g) and (N, h) if F*h = g.

Theorem 3.

On a Riemannian manifold there always exists a unique Riemannian connection, namely, an affine connection that is torsion-free and compatible with the metric.

Definition 4.

Let ∇ be an affine connection on an m-dimensional manifold M and let {e₁, …, e_m} be a local frame on M. For a vector field X, the connection forms ω^i_j are defined by

 ∇_X e_j = ω^i_j(X) e_i,

and the matrix [ω^i_j] is called the connection matrix of the connection ∇ relative to the frame {e_i}. Similarly, the curvature forms Ω^i_j are defined by

 R(X, Y) e_j = Ω^i_j(X, Y) e_i,

where the curvature tensor R is given by

 R(X, Y) = ∇_X ∇_Y − ∇_Y ∇_X − ∇_{[X,Y]},

and the matrix [Ω^i_j] is called the curvature matrix of the connection ∇ relative to the frame {e_i}.

Note that we adopt the Einstein summation convention here and throughout the paper.

Proposition 5.

The curvature forms are related to the connection forms by the second structural equation:

 Ω^i_j = dω^i_j + ω^i_k ∧ ω^k_j. (1)
Proposition 6.

Let (M, g) be an m-dimensional Riemannian manifold, ∇ be the Riemannian connection, {e₁, …, e_m} be an orthonormal frame, and {θ¹, …, θ^m} be the dual frame. Then the connection matrix [ω^i_j] of ∇ relative to {e_i} is a skew-symmetric matrix such that the first structural equation holds,

 dθ^i + ω^i_j ∧ θ^j = 0, ∀ i = 1, …, m. (2)

Consequently, the curvature matrix is also skew symmetric.

Proposition 7.

Let (U; x¹, …, x^m) be a coordinate chart on a Riemannian manifold (M, g), and write g_ij = g(∂_i, ∂_j). Then the volume form of g on U is given by

 vol = √(det[g_ij]) dx¹ ∧ ⋯ ∧ dx^m. (3)
Definition 8.

Let ∇ be an affine connection on a manifold M and {∂₁, …, ∂_m} be a local coordinate frame on M. Then the Christoffel symbols Γ^k_ij of ∇ relative to {∂_i} are defined by

 ∇_{∂_i} ∂_j = Γ^k_ij ∂_k.
Proposition 9.

Let (U; x¹, …, x^m) be a coordinate chart on a manifold M, with Γ^k_ij the Christoffel symbols of a connection. Then the geodesic equations are given by

 ẍ^k + Γ^k_ij ẋ^i ẋ^j = 0, k = 1, …, m. (4)

3 Geometry of the two-parameter family of Pareto distribution

In this section, we study the geometrical structure related to the Fisher–Rao metric of the two-parameter family of Pareto distribution. The Fisher–Rao metric provides us with a Riemannian manifold structure and makes the differential-geometrical tools applicable on the parameter space of statistical models.

For a family of distributions with probability function p(x|θ), where the parameter θ ranges over an open subset Θ of ℝ^m, the Fisher–Rao metric is defined below.

Definition 10.

Let X denote a random variable with probability function p(x|θ), and write ∂_i = ∂/∂θ^i. The Fisher–Rao metric matrix [g_ij] about the frame {∂_i} is defined via the expectation as

 g_ij(θ) = E[∂_i l(X|θ) ∂_j l(X|θ)] = ∫ ∂_i l(x|θ) ∂_j l(x|θ) p(x|θ) dx, (5)

where l(x|θ) = log p(x|θ) is the log-likelihood function.

Suppose that the following regularity conditions hold, namely,

1. For each x, the mapping θ ↦ p(x|θ) is smooth.

2. The order of integration and differentiation can be freely interchanged. For instance,

 ∫ ∂_i p(x|θ) dx = ∂_i ∫ p(x|θ) dx = ∂_i 1 = 0. (6)

For discrete distributions, we simply replace the integration by summation.

3. Different parameters stand for different probability density functions, that is, θ ≠ θ′ implies that p(·|θ) and p(·|θ′) are different. Moreover, every parameter θ possesses a common support where p(x|θ) > 0.

Then we have

 E[∂_i l(X|θ)] = 0 (7)

and the Fisher–Rao metric in the negative Hessian form

 g_ij(θ) = −E[∂_i ∂_j l(X|θ)]. (8)

Now, we calculate the Fisher–Rao metric for the two-parameter family of Pareto distribution. Its probability density function is given by

 p(x|α,β) = βα^β / x^{β+1} · I[x ≥ α], α > 0, β > 0. (9)

Thus, the log-likelihood function is given by

 l(x|α,β) = log p(x|α,β) = log β + β log α − (β+1) log x.

Note that, with parameters θ = (α, β), the probability density function of the Pareto distribution does not satisfy the second regularity condition, since the support of p(x|α,β) depends on the parameter α, and hence the negative Hessian form (8) is not valid as the Fisher–Rao metric. If it is used regardless, one obtains a ‘fake’ metric matrix

 ( β/α²  −1/α
  −1/α   1/β² ),

which is not positive definite unless 0 < β < 1.

Set ∂₁ = ∂/∂α and ∂₂ = ∂/∂β. By Definition 10 and the observation that the random variable log(X/α) obeys the exponential distribution with mean value 1/β, the proper Fisher–Rao metric is calculated as

 g₁₁ = β²/α², g₁₂ = g₂₁ = 0, g₂₂ = 1/β². (10)

The above results about the Fisher–Rao metric of the Pareto distribution are summarized as follows.

Proposition 11.

The tensor expression of the Fisher–Rao metric for the two-parameter family of Pareto distribution is given by

 g = (β²/α²) dα ⊗ dα + (1/β²) dβ ⊗ dβ. (11)
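As a quick numerical illustration (a sketch, not part of the paper's derivation), the entries of (10) can be checked by Monte Carlo. On the support, the score ∂_α l = β/α is constant, while ∂_β l = 1/β − log(x/α) is centered because log(X/α) is exponential with mean 1/β; the parameter values below are arbitrary.

```python
import math, random

random.seed(0)
alpha, beta, n = 2.0, 1.5, 200_000

# Inverse-transform sampling: if U ~ Uniform(0,1), then α(1-U)^{-1/β} ~ Pareto(α,β)
xs = [alpha * (1.0 - random.random())**(-1.0/beta) for _ in range(n)]

# Scores of the log-likelihood l = log β + β log α - (β+1) log x
s_a = beta / alpha                                  # ∂_α l, constant on {x ≥ α}
s_b = [1.0/beta - math.log(x/alpha) for x in xs]    # ∂_β l

g11 = s_a * s_a                      # E[(∂_α l)²] = β²/α² exactly
g12 = s_a * sum(s_b) / n             # E[∂_α l · ∂_β l] ≈ 0
g22 = sum(v*v for v in s_b) / n      # E[(∂_β l)²] ≈ 1/β²
```

With 200 000 samples, g12 and g22 agree with (10) up to Monte Carlo error of order 10⁻³.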

In the rest of this paper, we shall denote the statistical manifold of the Pareto distribution by

 P = { p_{α,β} ∣ p_{α,β}(x) = p(x|α,β), α > 0, β > 0 }.

Together with the Fisher–Rao metric g given by (11), P becomes a Riemannian manifold in the sense of Definition 1.

3.1 An isometry between the Pareto manifold and the Poincaré upper half-plane model

In this subsection, we shall show that the Riemannian manifold (P, g) is isometric to the Poincaré upper half-plane model.

The Poincaré upper half-plane model (cf. [13, 19]) is the upper half-plane

 H={(x,y)∈R2∣y>0},

together with the Poincaré metric

 h = (dx ⊗ dx + dy ⊗ dy) / y².

Let F : P → H map (α, β) to (log α, 1/β). Then, we have

 F*h = (d log α ⊗ d log α + d(1/β) ⊗ d(1/β)) / (1/β)²
     = ((1/α)² dα ⊗ dα + (1/β)⁴ dβ ⊗ dβ) / (1/β)²
     = (β²/α²) dα ⊗ dα + (1/β²) dβ ⊗ dβ = g,

which yields the following Proposition 12.

Proposition 12.

The diffeomorphism F defined above is an isometry between (P, g) and (H, h).

Now we can make use of the geometry of the Poincaré upper half-plane model to study the statistical manifold of the Pareto distribution, since an isometry preserves the essential geometric structures.

3.2 Connection form, curvature form and Christoffel symbols

As a consequence of Proposition 12, (P, g) has constant Gaussian curvature K = −1. It hence contributes another member to the statistical manifolds of constant curvature, e.g. [16, 6].
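The constant curvature can also be checked symbolically. The sketch below applies the standard Gaussian-curvature formula for an orthogonal metric E dα² + G dβ² (textbook material, not taken from the paper) to the metric (11):

```python
import sympy as sp

a, b = sp.symbols('alpha beta', positive=True)
E = b**2 / a**2          # coefficient of dα⊗dα in (11)
G = 1 / b**2             # coefficient of dβ⊗dβ in (11)
W = sp.sqrt(E*G)         # = 1/α, the volume density

# Gaussian curvature of an orthogonal metric E dα² + G dβ²:
# K = -(1/2W) [∂_α(∂_α G / W) + ∂_β(∂_β E / W)]
K = -(sp.diff(sp.diff(G, a)/W, a) + sp.diff(sp.diff(E, b)/W, b)) / (2*W)
print(sp.simplify(K))    # -1
```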

Now we shall study the geometric structure of (P, g) in detail by using differential forms, which have been tools of great power and versatility in differential geometry since Élie Cartan pioneered their use in the 1920s [7]. In this subsection, we derive the connection and curvature for (P, g) in terms of differential forms.

With the metric g given by (11), we obtain an orthonormal frame as

 e₁ = (α/β) ∂₁, e₂ = β ∂₂. (12)

The dual frame with respect to (12) is given by

 θ¹ = (β/α) dα, θ² = (1/β) dβ. (13)

The volume form is given by

 vol = θ¹ ∧ θ² = (1/α) dα ∧ dβ, (14)

which can also be derived from (3).

Let ∇ be the unique Riemannian connection on (P, g). Let [ω^i_j] and [Ω^i_j] be the connection and curvature matrices of ∇ relative to {e₁, e₂}, respectively. By the skew-symmetry stated in Proposition 6, we only need to determine ω¹₂ and Ω¹₂.

By differentiating (13), we have

 dθ¹ = −(1/α) dα ∧ dβ, dθ² = 0. (15)

The first structural equation (2) reads

 dθ¹ = −ω¹₂ ∧ θ², dθ² = ω¹₂ ∧ θ¹. (16)

By comparing (15) and (16), we obtain

 ω¹₂ = (β/α) dα.

Using the second structural equation (1), we get

 Ω¹₂ = dω¹₂ = −(1/α) dα ∧ dβ = K vol.

Now we present another description of the connection by the Christoffel symbols defined in Definition 8.

For any smooth vector field on , we have

 ∇_X ∂₁ = ∇_X[(β/α) e₁] = X(β/α) e₁ + (β/α) ∇_X e₁
        = ((Xβ)/α − β(Xα)/α²)(α/β) ∂₁ − (β/α) ω¹₂(X) e₂
        = ((Xβ)/β − (Xα)/α) ∂₁ − (β³/α²)(Xα) ∂₂.

Similarly, we have

 ∇_X ∂₂ = ∇_X[(1/β) e₂] = X(1/β) e₂ + (1/β) ∇_X e₂
        = −((Xβ)/β²) β ∂₂ + (1/β) ω¹₂(X) e₁
        = ((Xα)/β) ∂₁ − ((Xβ)/β) ∂₂.

Hence, relative to the coordinate frame , we obtain

 ∇_{∂₁} ∂₁ = −(1/α) ∂₁ − (β³/α²) ∂₂, ∇_{∂₂} ∂₁ = (1/β) ∂₁,
 ∇_{∂₁} ∂₂ = (1/β) ∂₁, ∇_{∂₂} ∂₂ = −(1/β) ∂₂.

The corresponding Christoffel symbols are given in Table 1.

3.3 Geodesics and geodesic distances

By (4) and Table 1, we obtain the geodesic equations for (P, g) as

 α̈ − α̇²/α + 2 α̇β̇/β = 0, β̈ − β³α̇²/α² − β̇²/β = 0. (17)
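A numerical check of (17) (a sketch with arbitrarily chosen initial data): integrating the geodesic equations with a fourth-order Runge–Kutta scheme, the speed g(γ̇, γ̇) should be conserved along the solution.

```python
def rhs(s):
    # First-order form of (17): s = (α, β, α', β')
    a, b, da, db = s
    return (da, db,
            da*da/a - 2*da*db/b,             # α'' from (17)
            b**3*da*da/(a*a) + db*db/b)      # β'' from (17)

def rk4(s, h):
    def ax(s, k, c): return tuple(x + c*y for x, y in zip(s, k))
    k1 = rhs(s); k2 = rhs(ax(s, k1, h/2)); k3 = rhs(ax(s, k2, h/2)); k4 = rhs(ax(s, k3, h))
    return tuple(x + h*(p + 2*q + 2*r + w)/6 for x, p, q, r, w in zip(s, k1, k2, k3, k4))

def speed2(s):
    a, b, da, db = s
    return (b*b/(a*a))*da*da + db*db/(b*b)   # g(γ', γ') under the metric (11)

state = (2.0, 1.0, 0.5, 0.3)                 # hypothetical initial point and velocity
s0 = speed2(state)
for _ in range(1000):                        # integrate over t ∈ [0, 1]
    state = rk4(state, 1e-3)
```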

Thanks to the well-known results about H, we can derive explicit expressions for the geodesics on P by Proposition 12 instead of solving (17) directly. On H, the unit-speed geodesic starting from (x₀, y₀), with the initial velocity making an angle θ₀ with the positive x-axis, is given by

 x(t) = x₀ + y(t) sinh t cos θ₀,
 y(t) = y₀ (e^t sin²(π/4 − θ₀/2) + e^{−t} cos²(π/4 − θ₀/2)).

Hence, via Proposition 12, the corresponding geodesic starting from (α₀, β₀) on P is given by

 α(t) = α₀ exp(sinh t cos θ₀ / β(t)), (18)
 β(t) = β₀ (e^t sin²(π/4 − θ₀/2) + e^{−t} cos²(π/4 − θ₀/2)).

Fig. 1 illustrates the radial geodesics given by (18) for various initial angles θ₀, which outline the shape of the unit geodesic ball.

On H, the geodesic distance between two points (x₀, y₀) and (x₁, y₁) is given by

 d_H((x₀,y₀), (x₁,y₁)) = arcosh(1 + ((x₀−x₁)² + (y₀−y₁)²) / (2 y₀ y₁)).

Hence, the geodesic distance on P between the probability densities p_{α₀,β₀} and p_{α₁,β₁} is given by d(p_{α₀,β₀}, p_{α₁,β₁}) = d_H(F(α₀,β₀), F(α₁,β₁)), i.e.,

 d(p_{α₀,β₀}, p_{α₁,β₁}) = arcosh(1 + β₀β₁(log α₀ − log α₁)²/2 + (β₀−β₁)²/(2β₀β₁)). (19)
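Formula (19) is straightforward to implement; the sketch below (with arbitrary parameter values) also cross-checks it against the Poincaré distance through the isometry F(α, β) = (log α, 1/β).

```python
import math

def pareto_distance(a0, b0, a1, b1):
    """Fisher–Rao geodesic distance (19) between Pareto(α₀,β₀) and Pareto(α₁,β₁)."""
    return math.acosh(1 + b0*b1*(math.log(a0) - math.log(a1))**2/2
                        + (b0 - b1)**2/(2*b0*b1))

def poincare_distance(x0, y0, x1, y1):
    """Geodesic distance on the upper half-plane H."""
    return math.acosh(1 + ((x0 - x1)**2 + (y0 - y1)**2)/(2*y0*y1))

d1 = pareto_distance(2.0, 1.5, 3.0, 0.5)
d2 = poincare_distance(math.log(2.0), 1/1.5, math.log(3.0), 1/0.5)
```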

4 An application to Jeffreys prior

In statistics, the first step of Bayesian inference for a parametric model is to select an appropriate prior distribution for the parameters. As there already exist many useful results on Bayesian inference for the Pareto distribution, e.g. [4, 5], we shall focus on the Bayesian approach generated by the so-called Jeffreys prior. The Jeffreys prior is a non-informative prior directly related to the Fisher–Rao metric: it stipulates that the prior probability be proportional to the geometric volume in the parameter space. Although there exist many priors from which one can carefully choose for practical use, our choice of the Jeffreys prior is made here in order to observe the statistical features of the specified geometric structure.

By the volume form in (14), we obtain the Jeffreys prior for the two-parameter Pareto model as

 p(α, β) ∝ 1/α, α > 0, β > 0. (20)

The proportionality in (20) cannot give us a proper probability distribution, as the integration of 1/α over the parameter space yields infinity, but an improper prior can still be useful. Improper priors usually provide much less information than that available in the observed data, which is a desirable property, as most inference problems are mainly based on data analysis rather than on the specification of priors.

To deal with the inference problem after observing data x = (x₁, …, xₙ) from a Pareto distribution with unknown parameters, we first calculate the posterior distribution by using (20) as the improper prior. Bayes’ theorem gives the posterior probability density as

 p(α, β | x) = p(x | α, β) p(α, β) / ∫₀^∞ ∫₀^∞ p(x | α, β) p(α, β) dα dβ, (21)

where the joint probability density of the observations x = (x₁, …, xₙ) is given by

 p(x | α, β) = β^n α^{nβ} (∏_{i=1}^n x_i)^{−β−1} I[min_{1≤i≤n} x_i ≥ α]. (22)

By the factorization criterion, we note from (22) that the statistics

 q₁(x) = min_{1≤i≤n} x_i, q₂(x) = ∑_{i=1}^n log x_i

are jointly sufficient for the parameters (α, β). Actually, by classical results in statistics (cf. [8, 9]), (q₁, q₂) are minimal jointly sufficient statistics for (α, β), since they are equivalent to the maximum likelihood estimator (MLE) of (α, β):

 α̂(x) = q₁(x), β̂(x) = n / (q₂(x) − n log q₁(x)).
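A minimal implementation of these sufficient statistics and the MLE (a sketch; the sample size and parameter values are arbitrary, and sampling uses the inverse-transform method from Section 5):

```python
import math, random

def pareto_mle(xs):
    """MLE (α̂, β̂) computed from the jointly sufficient statistics (q₁, q₂)."""
    n = len(xs)
    q1 = min(xs)
    q2 = sum(math.log(x) for x in xs)
    return q1, n/(q2 - n*math.log(q1))

random.seed(1)
alpha, beta = 1.5, 2.0
xs = [alpha*(1.0 - random.random())**(-1.0/beta) for _ in range(50_000)]
a_hat, b_hat = pareto_mle(xs)
```

With 50 000 samples the estimates are typically within a fraction of a percent of (α, β).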

By direct computation, we have

 ∫₀^∞ ∫₀^∞ p(x|α,β) p(α,β) dα dβ = ∫₀^∞ β^n exp[−q₂(x)(β+1)] ∫₀^{q₁(x)} α^{nβ−1} dα dβ
 = (1/n) ∫₀^∞ β^{n−1} q₁^{nβ}(x) exp[−q₂(x)(β+1)] dβ
 = exp[−q₂(x)] Γ(n) / (n [q₂(x) − n log q₁(x)]^n).

Thus, by (21), we obtain the joint posterior probability density of (α, β) as

 p(α, β | x) = (n [q₂(x) − n log q₁(x)]^n / Γ(n)) β^n α^{nβ−1} exp[−q₂(x) β] I[0 < α ≤ q₁(x)]. (23)

Next, we calculate the marginal posteriors by integrating (23). The marginal posterior of α is given by

 p(α | x) = ∫₀^∞ p(α, β | x) dβ = n² [q₂(x) − n log q₁(x)]^n / (α [q₂(x) − n log α]^{n+1}) I[0 < α ≤ q₁(x)], (24)

and the cumulative distribution function of α is

 Pr(α ≤ t | x) = ((q₂(x) − n log q₁(x)) / (q₂(x) − n log t))^n, 0 < t ≤ q₁(x). (25)
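As a numerical consistency check between (24) and (25) (a sketch with hypothetical values of n, q₁, q₂): integrating the density over an interval should reproduce the increment of the closed-form distribution function.

```python
import math

n, q1 = 20, 1.2
q2 = n*math.log(q1) + 10.0          # hypothetical data summary; q₂ - n·log q₁ = 10 > 0
C = q2 - n*math.log(q1)

def density(a):                     # marginal posterior (24)
    return n*n * C**n / (a * (q2 - n*math.log(a))**(n + 1))

def cdf(t):                         # closed form (25)
    return (C / (q2 - n*math.log(t)))**n

# Trapezoidal integration of (24) over [0.5, q₁] vs the increment of (25)
lo, hi, m = 0.5, q1, 100_000
h = (hi - lo)/m
integral = h*(density(lo)/2 + density(hi)/2 + sum(density(lo + i*h) for i in range(1, m)))
```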

We notice from (24) that one anomalous behavior of the posterior of α is that p(α | x) → ∞ as α → 0⁺, and this phenomenon cannot be eliminated even with increasing sample size n. However, from (25), we observe that

 Pr(α ≤ t | x) = 1 / [1 + β̂(x) log(α̂(x)/t)]^n, (26)

which tends to 0 as n → ∞ for any t < α̂(x), since then log(α̂(x)/t) is positive. This means that the distribution (24) is concentrated in the vicinity of α̂(x) for large sample sizes, in spite of the unboundedness near 0. Similarly, the marginal posterior of β is obtained as

 p(β | x) = (β^{n−1} [q₂(x) − n log q₁(x)]^n / Γ(n)) exp[−(q₂(x) − n log q₁(x)) β], (27)

which is a probability density of a gamma distribution. Hence, the posterior features of β are readily identified, as it belongs to a well-studied distribution. Suppose the random variables Y₁, …, Yₙ are independently drawn from the exponential distribution with population mean β̂(x); then the sample mean Ȳ obeys the gamma distribution (27). As a consequence of the central limit theorem, for large sample size n, the distribution of β in (27) approximates the normal distribution with mean β̂(x) and variance β̂²(x)/n.

Furthermore, to deal with the situation when either α or β is known, we can derive the conditional posteriors from the joint posterior and the marginal posteriors. The conditional posterior of α with β known is given by

 p(α | x, β) = nβ α^{nβ−1} / q₁^{nβ}(x) · I[0 < α ≤ q₁(x)],

with cumulative distribution function

 Pr(α ≤ t | x, β) = [t / α̂(x)]^{nβ}, 0 < t ≤ α̂(x). (28)

The conditional posterior of β with α known is given by

 p(β | x, α) = (β^n [q₂(x) − n log α]^{n+1} / Γ(n+1)) exp[−(q₂(x) − n log α) β],

which is again a gamma distribution. Both of these distributions are easy to manipulate, and we shall not discuss them in depth.

Now we can derive some estimators and predictions for the parameters from the posterior distributions. By setting Pr(α ≤ t | x) = 1/2 in (26), the posterior median of α can be obtained as

 α̃(x) = α̂(x) exp((1 − 2^{1/n}) / β̂(x)).

Similarly, by using (28), the posterior median of α with β known can be obtained as

 α̃(x, β) = 2^{−1/(nβ)} α̂(x).

As the posterior and the conditional posterior of β both follow gamma distributions, their medians do not have a simple closed form.
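Both median formulas can be verified directly against the corresponding distribution functions (26) and (28) (a sketch with hypothetical values of n, α̂, β̂):

```python
import math

n, a_hat, b_hat = 30, 2.0, 1.5      # hypothetical sample summaries

def cdf_marginal(t):                # Pr(α ≤ t | x), form (26)
    return (1 + b_hat*math.log(a_hat/t))**(-n)

def cdf_conditional(t, beta):       # Pr(α ≤ t | x, β), form (28)
    return (t/a_hat)**(n*beta)

med = a_hat*math.exp((1 - 2**(1.0/n))/b_hat)   # posterior median of α
med_b = 2**(-1.0/(n*b_hat))*a_hat              # median of α with β = β̂ known
```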

The posterior mean of α can be computed as

 ᾱ(x) = E(α | x) = ∫₀^{α̂(x)} t dPr(α ≤ t | x) = α̂(x) − ∫₀^{α̂(x)} Pr(α ≤ t | x) dt. (29)

Although it does not have a simple closed form, we can still obtain the following bounds for ᾱ(x).

Proposition 13.

The following inequalities hold for the posterior mean ᾱ(x):

 (((n−1)β̂(x) − 1) / ((n−1)β̂(x))) α̂(x) ≤ ᾱ(x) ≤ (nβ̂(x) / (nβ̂(x) + 1)) α̂(x). (30)
Proof.

Note that

 1 + β̂(x) log(α̂(x)/t) ≤ exp[β̂(x) log(α̂(x)/t)] = [α̂(x)/t]^{β̂(x)}.

From this, we have

 ∫₀^{α̂(x)} Pr(α ≤ t | x) dt ≥ ∫₀^{α̂(x)} [t/α̂(x)]^{nβ̂(x)} dt = α̂(x) / (nβ̂(x) + 1).

By (29), we obtain the second inequality of (30).

By the change of variables u = log(α̂(x)/t), we have

 ∫₀^{α̂(x)} Pr(α ≤ t | x) dt = ∫₀^∞ α̂(x) exp(−u) / [1 + β̂(x) u]^n du
 ≤ ∫₀^∞ α̂(x) / [1 + β̂(x) u]^n du = α̂(x) / ((n−1)β̂(x)).

This yields the first inequality of (30). ∎
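The bounds (30) can be checked numerically by evaluating the integral in (29) with quadrature (a sketch with hypothetical values of n, α̂, β̂):

```python
import math

n, a_hat, b_hat = 25, 3.0, 2.0     # hypothetical sample summaries

def cdf(t):                        # Pr(α ≤ t | x), form (26); tends to 0 as t → 0⁺
    return (1 + b_hat*math.log(a_hat/t))**(-n)

# Posterior mean via (29): ᾱ = α̂ - ∫₀^{α̂} Pr(α ≤ t | x) dt (trapezoidal rule)
m = 200_000
h = a_hat/m
integral = h*(cdf(a_hat)/2 + sum(cdf(i*h) for i in range(1, m)))  # endpoint t = 0 contributes 0
mean = a_hat - integral

lower = ((n - 1)*b_hat - 1)/((n - 1)*b_hat)*a_hat   # left-hand side of (30)
upper = n*b_hat/(n*b_hat + 1)*a_hat                 # right-hand side of (30)
```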

The other posterior means can be readily computed from the corresponding distributions; we present the results without details. The posterior mean of α with β known is given by

 ᾱ(x, β) = E(α | x, β) = (nβ / (nβ + 1)) α̂(x).

The posterior mean of β with α unknown is given by

 β̄(x) = E(β | x) = β̂(x).

The posterior mean of β with α known is given by

 β̄(x, α) = E(β | x, α) = (n+1) / (q₂(x) − n log α).

We can also easily determine the posterior predictive distribution of a new observation x̃, independently drawn from the Pareto distribution with the same parameters as the previous observations x. The posterior predictive distribution with both α and β unknown is

 p(x̃ | x) = ∫₀^∞ ∫₀^∞ p(x̃ | α, β) p(α, β | x) dα dβ. (31)

The posterior predictive distribution with α known is

 p(x̃ | x, α) = ∫₀^∞ p(x̃ | α, β) p(β | x, α) dβ
 = (n+1) [q₂(x) − n log α]^{n+1} / (x̃ [q₂(x) + log x̃ − (n+1) log α]^{n+2}) · I[x̃ ≥ α].

The posterior predictive distribution with β known is

 p(x̃ | x, β) = ∫₀^∞ p(x̃ | α, β) p(α | x, β) dα
 = (n/(n+1)) β α̂^{−nβ}(x) x̃^{−β−1} min{x̃, α̂(x)}^{(n+1)β} I[x̃ > 0]
 = { (n/(n+1)) β α̂^{−nβ}(x) x̃^{nβ−1},  0 < x̃ < α̂(x),
     (n/(n+1)) β α̂^{β}(x) x̃^{−β−1},   x̃ ≥ α̂(x). }
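As a sanity check on the piecewise density above (hypothetical n, β, α̂): by direct integration its two pieces carry masses 1/(n+1) below α̂(x) and n/(n+1) above it, so it integrates to 1.

```python
import math

n, beta, a_hat = 10, 2.0, 1.5       # hypothetical values

def predictive(x):
    """Posterior predictive density p(x̃ | x, β), piecewise as above."""
    c = n/(n + 1)*beta
    if x < a_hat:
        return c * a_hat**(-n*beta) * x**(n*beta - 1)
    return c * a_hat**beta * x**(-beta - 1)

def trapz(f, lo, hi, m):
    h = (hi - lo)/m
    return h*(f(lo)/2 + f(hi)/2 + sum(f(lo + i*h) for i in range(1, m)))

mass_below = trapz(predictive, 0.0, a_hat, 100_000)      # should be ≈ 1/(n+1)
mass_above = trapz(predictive, a_hat, 1_000.0, 200_000)  # ≈ n/(n+1); tail beyond 1000 is ~2·10⁻⁶
```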

5 Simulations

We shall generate random samples to simulate the Pareto distribution with fixed underlying parameters. By the inverse transform sampling method, the desired samples from the Pareto distribution can be generated as x = α u^{−1/β}, where u is drawn from random numbers uniformly distributed in the unit interval (0, 1).
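A sketch of this sampler (parameter values arbitrary); the check uses the fact from Section 3 that log(X/α) is exponential with mean 1/β:

```python
import math, random

def sample_pareto(alpha, beta, size, rng=random):
    """Inverse transform: if U ~ Uniform(0,1), then α·(1-U)^{-1/β} ~ Pareto(α,β)."""
    return [alpha * (1.0 - rng.random())**(-1.0/beta) for _ in range(size)]

random.seed(42)
xs = sample_pareto(1.0, 3.0, 100_000)
mean_log = sum(math.log(x) for x in xs)/len(xs)   # ≈ 1/β = 1/3 since α = 1
```

Using 1 − rng.random() keeps the uniform draw in (0, 1], avoiding a zero raised to a negative power.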

We first randomly generate samples from the Pareto distribution. The posterior distributions and the corresponding estimators are obtained by the methods given in the previous section, and the resulting estimates are recorded in Table 2. To illustrate how well the estimators approximate the underlying parameters, we show in the last column the distances, computed by (19), between the underlying parameters and the estimators of (α, β). One interesting phenomenon is that knowing one of the parameters reduces the distance much more than knowing the other, which implies that the former conveys much more information.

We also provide graphical representations of the joint posterior and the marginal posteriors. The joint posterior probability density function is shown in Fig. 2. As we can see, the joint posterior probability is concentrated in a small area near the MLE (α̂(x), β̂(x)). We also note from (23) that the density is unbounded as α → 0⁺, but the negligible probability there makes this imperceptible in Fig. 2.

The marginal posterior probability density functions of α and β are shown in Fig. 3. From the marginal posterior of α in Fig. 3(a), we see that the probability density is extremely concentrated in the vicinity of α̂(x). In addition, as we have already discussed, the density p(α | x) is unbounded near α = 0. However, this fact cannot be directly seen from Fig. 3(a), since p(α | x) diverges so slowly that α needs to be far smaller than α̂(x) before the density reaches a noticeable magnitude. Furthermore, by using (26), one can deduce that the probability of α being far from α̂(x) is negligible; thus it is very unlikely that α is distant from α̂(x), let alone near 0. Fig. 3(b) illustrates the marginal posterior density p(β | x), which approximates the normal distribution with mean β̂(x), as we have mentioned earlier. This explains the concentration of the probability density of β near β̂(x), as shown in Fig. 3(b).

Fig. 4 illustrates the approximation of the posterior predictive distribution to the underlying Pareto distribution. The probability density function p(x̃ | α, β) of the underlying distribution is determined by (9); the probability density function p(x̃ | x) of the posterior predictive distribution is determined by (31). As we can see, the predictive density approximates the underlying density well, except that it is continuous for x̃ > 0, with a cusp at x̃ = α̂(x). We also note from (31) that p(x̃ | x) is unbounded near x̃ = 0, which cannot be directly seen from Fig. 4.

6 Conclusion

In this paper, we proved that the two-parameter family of Pareto distributions, equipped with the proper Fisher–Rao metric, is isometric to the Poincaré upper half-plane model. Geometric properties of the Pareto distribution, such as the connection, curvature and geodesics, could then be studied in the light of this isometry. One notable consequence is that it joins the few known statistical manifolds of constant curvature, which include the well-known normal distribution and the Weibull distribution [6]; a classification of exponential families with constant curvature is available in [16]. The Jeffreys prior, a non-informative prior closely related to the Fisher–Rao metric, was accordingly obtained to carry out Bayesian inference. We expect that the results of this paper will motivate further differential-geometric investigations of statistical manifolds violating the regularity conditions, as well as their applications.

Acknowledgements

H Sun is supported by the National Natural Science Foundation of China (Nos. 61179031, 10932002). L Peng is supported by the MEXT “Top Global University Project” and Waseda University Grant for Special Research Projects (Nos. 2019C-179, 2019E-036, 2019R-081).

References

• [1] S. Amari. Differential-Geometrical Methods in Statistics. Springer, New York, 1985.
• [2] S. Amari and H. Nagaoka. Methods of Information Geometry. American Mathematical Society, 2007.
• [3] B. C. Arnold. Pareto Distributions. International Cooperative Publishing House, Fairland, 1983.
• [4] B. C. Arnold and S. J. Press. Bayesian inference for Pareto populations. J. Econom., 21:287–306, 1983.
• [5] B. C. Arnold and S. J. Press. Bayesian estimation and prediction for Pareto data. J. Am. Stat. Assoc., 84:1079–1084, 1989.
• [6] L. Cao, H. Sun, and X. Wang. The geometric structures of the Weibull distribution manifold and the generalized exponential distribution manifold. Tamkang J. Math., 39:45–52, 2008.
• [7] E. Cartan. Oeuvres Complètes, 3 vols. Gauthier-Villars, Paris, 1952–1955.
• [8] G. Casella and R. L. Berger. Statistical Inference. Duxbury Press, second edition, 2001.
• [9] M. H. DeGroot and M. J. Schervish. Probability and Statistics. Pearson Education, Inc., fourth edition, 2012.
• [10] B. Efron. Defining the curvature of a statistical problem (with applications to second order efficiency) (with discussion). Ann. Statist., 3:1189–1242, 1975.
• [11] R. A. Fisher. On the mathematical foundations of theoretical statistics. Phil. Trans. R. Soc. Lond. A., 222:309–368, 1922.
• [12] H. Jeffreys. An invariant form for the prior probability in estimation problems. Proc. R. Soc. Lond. A Math. Phys. Sci., 186:453–461, 1946.
• [13] J. Jost. Compact Riemannian Surfaces: An Introduction to Contemporary Mathematics. Springer, third edition, 2006.
• [14] D. H. Kim, S. G. Kang, and W. D. Lee. Noninformative priors for Pareto distribution. Journal of the Korean Data & Information Science Society, 20:1213–1223, 2009.
• [15] L. Peng, H. Sun, and L. Jiu. The geometric structure of the Pareto distribution. Boletín de la Asociación Matemática Venezolana, XIV:5–13, 2007.
• [16] L. Peng and Z. Zhang. Statistical Einstein manifolds of exponential families with group-invariant potential functions. arXiv:1904.02389, 2019.
• [17] P. Petersen. Riemannian Geometry. Springer-Verlag, second edition, 2006.
• [18] C. R. Rao. Information and accuracy attainable in the estimation of statistical parameters. Bull. Calcutta. Math. Soc., 37:81–91, 1945.
• [19] S. Stahl. The Poincaré Half-plane: A Gateway to Modern Geometry. Jones & Bartlett Learning, 1993.
• [20] H. Sun, Z. Zhang, L. Peng, and X. Duan. An Elementary Introduction to Information Geometry. Science Press, Beijing, 2016.
• [21] L. W. Tu. Differential Geometry: Connections, Curvature, and Characteristic Classes. Springer, New York, 2017.
• [22] G. K. Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley, Cambridge, MA, 1949.