# Product Inequalities for Multivariate Gaussian, Gamma, and Positively Upper Orthant Dependent Distributions

The Gaussian product inequality is an important conjecture concerning the moments of Gaussian random vectors. While all attempts to prove the Gaussian product inequality in full generality have been unsuccessful to date, numerous partial results have been derived in recent decades, and we provide here further results on the problem. Most importantly, we establish a strong version of the Gaussian product inequality for multivariate gamma distributions in the case of nonnegative correlations, thereby extending a result recently derived by Genest and Ouimet [5]. Further, we show that the Gaussian product inequality holds with nonnegative exponents for all random vectors with positive components whenever the underlying vector is positively upper orthant dependent. Finally, we show that the Gaussian product inequality with negative exponents follows directly from the Gaussian correlation inequality.

## 1 Introduction

Let $d$ be a positive integer, and let $X = (X_1,\ldots,X_d)'$ be a random vector which has a multivariate Gaussian distribution with probability density function

$$(2\pi)^{-d/2}\,|\Sigma|^{-1/2}\exp\bigl(-\tfrac{1}{2}\,x'\Sigma^{-1}x\bigr),\qquad x\in\mathbb{R}^d,$$

with a nonsingular covariance matrix $\Sigma$. We refer to the random vector $X$ as having a centered Gaussian distribution because $E(X)=0$, and we write $X\sim N_d(0,\Sigma)$.

The Gaussian product inequality (GPI) conjecture states that, for any centered Gaussian random vector $X\sim N_d(0,\Sigma)$ and any nonnegative integer $n$, there holds the inequality

$$E\Bigl(\prod_{j=1}^{d}X_j^{2n}\Bigr)\;\ge\;\prod_{j=1}^{d}E\bigl(X_j^{2n}\bigr). \tag{1.1}$$

We refer readers to Frenkel [4] and Genest and Ouimet [5] for details of the history, motivation, and literature on this inequality.
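The conjectured inequality (1.1) is easy to probe numerically. The following sketch (a Monte Carlo check, assuming NumPy; the positive definite covariance matrix with mixed-sign correlations is an arbitrary illustrative choice) compares the two sides of (1.1) for $d=3$ and $n=1$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative positive definite covariance matrix with mixed-sign correlations.
Sigma = np.array([[ 1.0, -0.4,  0.3],
                  [-0.4,  1.0,  0.2],
                  [ 0.3,  0.2,  1.0]])

X = rng.multivariate_normal(np.zeros(3), Sigma, size=2_000_000)
n = 1  # exponent in (1.1)

lhs = np.prod(X**(2 * n), axis=1).mean()   # Monte Carlo estimate of E(prod_j X_j^{2n})
rhs = np.prod((X**(2 * n)).mean(axis=0))   # Monte Carlo estimate of prod_j E(X_j^{2n})
print(lhs >= rhs)                          # True: (1.1) holds for this Sigma
```

For this covariance matrix the left-hand side can also be computed in closed form via the Isserlis-Wick formula, so the Monte Carlo gap is large relative to the sampling error.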

Several generalizations of (1.1) have been studied recently. Li and Wei [13] considered, as an extension of (1.1), conditions such that

$$E\Bigl(\prod_{j=1}^{d}|X_j|^{n_j}\Bigr)\;\ge\;\prod_{j=1}^{d}E\bigl(|X_j|^{n_j}\bigr), \tag{1.2}$$

for arbitrary real exponents $n_1,\ldots,n_d$. Wei [20] derived hypotheses under which the inequality

$$E\Bigl(\prod_{j=1}^{d}|X_j|^{n_j}\Bigr)\;\ge\;E\Bigl(\prod_{j\in I}|X_j|^{n_j}\Bigr)\,E\Bigl(\prod_{j\in I^c}|X_j|^{n_j}\Bigr) \tag{1.3}$$

holds for all index sets $I\subseteq\{1,\ldots,d\}$, where $I^c$ denotes the complement of $I$. To distinguish between these inequalities, we call (1.2) the weak form and (1.3) the strong form of the GPI.

Russell and Sun [18] recently related the GPI to a class of combinatorial inequalities, and thereby established numerous cases of the GPI for $d=3$. One of the results obtained by Russell and Sun [18] is derived here by different methods, and we present that result in Corollary 2.2. The approach by way of combinatorial inequalities is noteworthy because it is also shown in [18] to lead to new inequalities for the bivariate Gaussian distribution.

The weak form of the GPI was established by Frenkel [4] for $n_1=\cdots=n_d=1$ and arbitrary $d$, and by Lan et al. [11] for $d=3$ and integer exponents $n_1$, $n_2$, and $n_3$ with equality between at least two exponents. Genest and Ouimet [5] recently developed a novel and far-reaching approach to the GPI, proving (1.2) for arbitrary $d$ with nonnegative even integer exponents when the covariance matrix $\Sigma$ is completely positive, i.e., $\Sigma = BB'$, where $B$ is a matrix with entries $b_{ij}\ge 0$ for all $i$ and $j$. Nevertheless, it is still unknown whether the weak form of the GPI (1.2) is valid for general $\Sigma$.

On the other hand, the strong form of the GPI (1.3) fails even for $d=3$ with $n_1=n_2=n_3=2$. Consider a Gaussian random vector $X=(X_1,X_2,X_3)'\sim N_3(0,\Sigma)$ with $\Sigma=(\sigma_{ij})$. By the Isserlis-Wick formula [7], or by using moment-generating functions, we obtain

$$E(X_1^2X_2^2X_3^2)-E(X_1^2X_2^2)\,E(X_3^2)=2\bigl(\sigma_{11}\sigma_{23}^2+4\sigma_{12}\sigma_{13}\sigma_{23}+\sigma_{13}^2\sigma_{22}\bigr).$$

If we set $\sigma_{11}=\sigma_{22}=\sigma_{33}=1$ and $\sigma_{13}=\sigma_{23}=\rho$ with $\rho\neq 0$, then with $-1<\sigma_{12}<-\tfrac12$ and $\rho^2<(1+\sigma_{12})/2$, the matrix $\Sigma$ is positive definite and yet

$$E(X_1^2X_2^2X_3^2)-E(X_1^2X_2^2)\,E(X_3^2)=4\rho^2(1+2\sigma_{12})<0;$$

concrete examples are easily constructed within this range.
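The failure can be checked exactly. The sketch below (NumPy assumed; the values of $\rho$ and $\sigma_{12}$ are an illustrative choice satisfying $-1<\sigma_{12}<-\tfrac12$ and $\rho^2<(1+\sigma_{12})/2$) evaluates the sixth-order moment via the Isserlis-Wick expansion and confirms that the gap is negative while $\Sigma$ remains positive definite:

```python
import numpy as np

def sixth_moment(S):
    # E(X1^2 X2^2 X3^2) for (X1, X2, X3)' ~ N(0, S), via the Isserlis-Wick formula.
    s11, s22, s33 = S[0, 0], S[1, 1], S[2, 2]
    s12, s13, s23 = S[0, 1], S[0, 2], S[1, 2]
    return (s11 * s22 * s33
            + 2 * (s11 * s23**2 + s22 * s13**2 + s33 * s12**2)
            + 8 * s12 * s13 * s23)

rho, s12 = 0.25, -0.75   # illustrative values within the admissible range
S = np.array([[1.0, s12, rho],
              [s12, 1.0, rho],
              [rho, rho, 1.0]])

assert np.all(np.linalg.eigvalsh(S) > 0)        # S is positive definite
gap = sixth_moment(S) - (1.0 + 2 * s12**2)      # subtract E(X1^2 X2^2) * E(X3^2)
print(gap, 4 * rho**2 * (1 + 2 * s12))          # -0.125 -0.125
```

Both expressions agree exactly, illustrating the identity $4\rho^2(1+2\sigma_{12})$ for this choice of parameters.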

Wei [20] showed, however, that the strong form of the GPI holds for the case in which all exponents are negative. It is also obvious that the strong form (1.3) holds if $|X|=(|X_1|,\ldots,|X_d|)'$, the vector of absolute values, is associated, i.e., if $\mathrm{Cov}\bigl(f(|X|),g(|X|)\bigr)\ge 0$ for all component-wise non-decreasing functions $f$ and $g$ [3]. Thus, for a centered Gaussian random vector $X$, if $|X|$ is associated then it follows immediately that the strong form of the GPI holds. In particular, if the vector $|X|$ is multivariate totally positive of order $2$, denoted MTP$_2$ (cf. [8]), then, as the MTP$_2$ property implies associatedness, it follows that the strong form of the GPI holds.

Moreover, for Gaussian vectors $X\sim N_d(0,\Sigma)$, the MTP$_2$ property of the vector of absolute values can be characterized explicitly in terms of the covariance matrix $\Sigma$. For this purpose (and in the sequel) we call a diagonal matrix $D=\mathrm{diag}(d_1,\ldots,d_d)$ a sign matrix if $d_j\in\{-1,1\}$ for all $j$. It was proved by Karlin and Rinott [9] that the vector of absolute values, $|X|$, is MTP$_2$ if and only if there exists a sign matrix $D$ such that all off-diagonal entries of $-D\Sigma^{-1}D$ are nonnegative; hence the strong form of the GPI holds for that class of covariance matrices.

In this article, we derive new and more general hypotheses under which the weak form of the GPI (1.2) and the strong form of the GPI (1.3) hold. We extend the results of Genest and Ouimet [5] in several directions, one of which is a proof of the strong form of the GPI (1.3) for nonnegative correlations, i.e., for any covariance matrix $\Sigma$ with $\sigma_{ij}\ge 0$ for all $i,j$. Additionally, we show that the weak form of the GPI and the strong form of the GPI follow from the properties of positive upper orthant dependence (PUOD) and strong positive upper orthant dependence (SPUOD), respectively. Finally, we apply the Gaussian correlation inequality (Royen [14]) to obtain in Section 4 an alternative and succinct proof of the strong form of the GPI for negative exponents, derived originally by Wei [20]; further, we show that this result extends to the multivariate gamma distributions.

## 2 The strong form of the GPI for nonnegative correlations

Genest and Ouimet [5] established the weak form of the GPI (1.2) for the multivariate normal distribution $N_d(0,\Sigma)$ with even integer exponents and completely positive covariance matrix $\Sigma$, i.e., $\Sigma=BB'$, where $B$ is a matrix with nonnegative entries $b_{ij}$. In Theorem 2.1, we extend this result in three directions. First and most importantly, we extend the result in [5] to the case of nonnegative correlations, where $\Sigma$ is such that $\sigma_{ij}\ge 0$ for all $i,j$. For $d\ge 5$, the assumption of nonnegative correlations is known to be less restrictive than complete positivity [6].

On the other hand, a famous counterexample of Šidák [19] established the existence of Gaussian random vectors with completely positive covariance matrices for which the vector of absolute values is not positively upper orthant dependent (PUOD) and hence not associated. Hence the result of Genest and Ouimet and the more general result presented here both extend the strong form of the GPI (1.3) beyond the straightforward case in which the vector of absolute values is associated.

The second direction in which Theorem 2.1 extends previously known results is that we obtain the strong form (1.3), and hence also the weak form (1.2).

Third, in considering the case of even exponents, the weak form of the GPI (1.2) and the strong form (1.3) each correspond to inequalities for special cases of the multivariate gamma distributions. Precisely, the $d$-dimensional gamma distribution (in the sense of Krishnamoorthy and Parthasarathy [10]) may be defined by means of its moment-generating function. Denote by $I_d$ the identity matrix of order $d$ and, for sufficiently small $t_1,\ldots,t_d$, define $T=\mathrm{diag}(t_1,\ldots,t_d)$. Then we say that $X=(X_1,\ldots,X_d)'$ has a multivariate gamma distribution with a not necessarily integer “degree-of-freedom parameter” $2\alpha>0$ and positive semidefinite matrix parameter $\Sigma$, written $X\sim\Gamma(\alpha,\Sigma)$, if the moment-generating function of $X$ is

$$E\exp\Bigl(\sum_{j=1}^{d}t_jX_j\Bigr)=\det(I_d-\Sigma T)^{-\alpha}. \tag{2.1}$$

This $d$-dimensional gamma distribution is also known as the Wishart-gamma distribution since it was derived originally as the distribution of one-half of the diagonal entries of a $W_d(2\alpha,\Sigma)$-Wishart distributed random matrix with integer degrees of freedom $2\alpha$. In this regard, it is remarkable that the $\Gamma(\alpha,\Sigma)$ distribution also exists for all real values $2\alpha\ge\lfloor(d-1)/2\rfloor$, where $\lfloor\cdot\rfloor$ denotes the integer part; see [15].

The $\Gamma(\alpha,\Sigma)$ distribution is infinitely divisible (i.e., the $\Gamma(\alpha,\Sigma)$ distribution exists for all $\alpha>0$) if and only if the distribution is multivariate totally positive of order $2$ (MTP$_2$); see Bapat [1]. For example, one can show that $\Gamma(\alpha,\Sigma)$ is infinitely divisible if $\Sigma$ is of “one-factorial” structure [17], i.e., $\sigma_{ij}=a_ia_j$ for all $i\neq j$, or if $\Sigma$ is of “tree-type”; see [16] for details.

If $X\sim N_d(0,\Sigma)$ then $(X_1^2/2,\ldots,X_d^2/2)'$ has the $\Gamma(1/2,\Sigma)$ distribution. Consequently, in the case of even exponents, as considered in [5], the weak form (1.2) and the strong form (1.3) of the GPI intrinsically are inequalities on the $\Gamma(1/2,\Sigma)$ distribution. Therefore it is natural to extend these inequalities to the more general multivariate gamma distributions having moment-generating function (2.1).
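The classical fact that the halved squares of a centered Gaussian vector follow the multivariate gamma law with $\alpha=1/2$ is easy to check numerically. The sketch below (NumPy assumed; the bivariate $\Sigma$ and the evaluation point $t$ are illustrative choices, with $t$ small enough for the moment-generating function to exist) compares a Monte Carlo estimate of the moment-generating function of $(X_1^2/2,X_2^2/2)$ with the closed form $\det(I_d-\Sigma T)^{-1/2}$ from (2.1):

```python
import numpy as np

rng = np.random.default_rng(1)

Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
t = np.array([0.1, 0.1])   # small enough that the moment-generating function exists

Z = rng.multivariate_normal(np.zeros(2), Sigma, size=2_000_000)
X = 0.5 * Z**2             # candidate Gamma(1/2, Sigma) vector

mc = np.exp(X @ t).mean()                                       # Monte Carlo MGF
exact = np.linalg.det(np.eye(2) - Sigma @ np.diag(t)) ** -0.5   # det(I - Sigma T)^{-1/2}
print(abs(mc - exact) < 0.01)                                   # True
```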

###### Theorem 2.1.

Let $X\sim\Gamma(\alpha,\Sigma)$, where $\Sigma$ is positive semidefinite. Suppose there exists a sign matrix $D$ such that all the elements in $D\Sigma D$ are nonnegative, i.e., $(D\Sigma D)_{ij}\ge 0$ for all $i,j$. Then for all subsets $I\subseteq\{1,\ldots,d\}$, and for all nonnegative integers $n_1,\ldots,n_d$, there holds the strong form of the GPI,

$$E\Bigl(\prod_{j=1}^{d}X_j^{n_j}\Bigr)\;\ge\;E\Bigl(\prod_{j\in I}X_j^{n_j}\Bigr)\,E\Bigl(\prod_{j\in I^c}X_j^{n_j}\Bigr). \tag{2.2}$$

Proof.  Since the moment-generating function (2.1) is invariant under the transformation $\Sigma\mapsto D\Sigma D$, we can, without loss of generality, assume that $\sigma_{ij}\ge 0$ for all $i,j$. Moreover, by permuting the coordinates of $X$, we may also assume that $I=\{1,\ldots,p\}$, where $p=|I|$.

With $t_1,\ldots,t_d\ge 0$ sufficiently small and $T=\mathrm{diag}(t_1,\ldots,t_d)$, the moment-generating function of the $\Gamma(\alpha,\Sigma)$ distribution is

$$E\exp\Bigl(\sum_{j=1}^{d}t_jX_j\Bigr)=\det(I_d-\Sigma T)^{-\alpha}=\det(I_d-T^{1/2}\Sigma T^{1/2})^{-\alpha}. \tag{2.3}$$

Denote by $\varepsilon_1,\ldots,\varepsilon_d$ the eigenvalues of the matrix $T^{1/2}\Sigma T^{1/2}$. For sufficiently small $t_1,\ldots,t_d$, we have $|\varepsilon_j|<1$ for all $j$. Then we have

$$\det(I_d-T^{1/2}\Sigma T^{1/2})^{-\alpha}=\exp\bigl(-\alpha\log\det(I_d-T^{1/2}\Sigma T^{1/2})\bigr)=\exp\Bigl(-\alpha\sum_{j=1}^{d}\log(1-\varepsilon_j)\Bigr).$$

Inserting into this sum the series expansions,

$$-\log(1-\varepsilon_j)=\sum_{n=1}^{\infty}\frac{\varepsilon_j^n}{n},\qquad j=1,\ldots,d,$$

and interchanging the order of summation, we obtain

$$\det(I_d-\Sigma T)^{-\alpha}=\exp\Bigl(\alpha\sum_{j=1}^{d}\sum_{n=1}^{\infty}\frac{\varepsilon_j^n}{n}\Bigr)=\exp\Bigl(\alpha\sum_{n=1}^{\infty}\frac{1}{n}\sum_{j=1}^{d}\varepsilon_j^n\Bigr)=\exp\Bigl(\alpha\sum_{n=1}^{\infty}\frac{1}{n}\,\mathrm{tr}\bigl[(\Sigma T)^n\bigr]\Bigr). \tag{2.4}$$
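The trace identity underlying (2.4) can be confirmed numerically. The sketch below (NumPy assumed; $\Sigma$ and $T$ are randomly generated, with $T$ scaled small enough for the logarithmic series to converge) compares $-\log\det(I_d-\Sigma T)$ with a truncation of $\sum_{n\ge1}\mathrm{tr}[(\Sigma T)^n]/n$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random positive semidefinite Sigma; T small so that the eigenvalues of
# T^{1/2} Sigma T^{1/2} lie in [0, 1) and the logarithmic series converges.
B = rng.random((4, 4))
Sigma = B @ B.T
T = np.diag(0.04 * rng.random(4))

lhs = -np.log(np.linalg.det(np.eye(4) - Sigma @ T))

M = Sigma @ T
term, rhs = np.eye(4), 0.0
for n in range(1, 200):        # truncation of sum_{n>=1} tr[(Sigma T)^n] / n
    term = term @ M
    rhs += np.trace(term) / n

print(abs(lhs - rhs) < 1e-10)  # True
```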

Next, we partition $\Sigma$ and $T$ into block matrices,

$$\Sigma=\begin{pmatrix}\Sigma_{11}&\Sigma_{12}\\ \Sigma_{21}&\Sigma_{22}\end{pmatrix},\qquad T=\begin{pmatrix}T_1&0\\ 0&T_2\end{pmatrix},$$

where $\Sigma_{11}$ and $T_1$ are $p\times p$, $\Sigma_{12}$ is $p\times(d-p)$, and $\Sigma_{22}$ and $T_2$ are $(d-p)\times(d-p)$. Then,

$$T^{1/2}\Sigma T^{1/2}=\begin{pmatrix}T_1^{1/2}\Sigma_{11}T_1^{1/2}&T_1^{1/2}\Sigma_{12}T_2^{1/2}\\ T_2^{1/2}\Sigma_{21}T_1^{1/2}&T_2^{1/2}\Sigma_{22}T_2^{1/2}\end{pmatrix}. \tag{2.5}$$

Let

$$A=\begin{pmatrix}A_{11}&A_{12}\\ A_{21}&A_{22}\end{pmatrix}$$

be a symmetric matrix which has been partitioned similarly to $T^{1/2}\Sigma T^{1/2}$. By induction on $n$, we find that

$$A^n=\begin{pmatrix}A_{11}^n+Q_{11,n}(A)&Q_{12,n}(A)\\ Q_{21,n}(A)&A_{22}^n+Q_{22,n}(A)\end{pmatrix}, \tag{2.6}$$

where each matrix $Q_{kl,n}(A)$ is a homogeneous polynomial in the blocks $A_{11},A_{12},A_{21},A_{22}$ with nonnegative coefficients; for instance,

$$Q_{11,2}(A)=A_{12}A_{21},\qquad Q_{12,2}(A)=[Q_{21,2}(A)]'=A_{11}A_{12}+A_{12}A_{22},\qquad Q_{22,2}(A)=A_{21}A_{12}.$$

Denote by $\mathbb{N}_0$ the set of nonnegative integers. Applying (2.6) to (2.5), and taking traces, we obtain

$$\frac{1}{n}\,\mathrm{tr}\bigl[(\Sigma T)^n\bigr]=\frac{1}{n}\,\mathrm{tr}\bigl[(T^{1/2}\Sigma T^{1/2})^n\bigr]=\frac{1}{n}\,\mathrm{tr}\bigl[(T_1^{1/2}\Sigma_{11}T_1^{1/2})^n\bigr]+\frac{1}{n}\,\mathrm{tr}\bigl[(T_2^{1/2}\Sigma_{22}T_2^{1/2})^n\bigr]+\frac{1}{n}\sum_{i=1}^{2}\mathrm{tr}\,Q_{ii,n}(T^{1/2}\Sigma T^{1/2})$$
$$=\frac{1}{n}\,\mathrm{tr}\bigl[(\Sigma_{11}T_1)^n\bigr]+\frac{1}{n}\,\mathrm{tr}\bigl[(\Sigma_{22}T_2)^n\bigr]+\sum_{\substack{\boldsymbol{n}\in\mathbb{N}_0^d\\ n_1+\cdots+n_d=n}}c_{\boldsymbol{n}}\,t_1^{n_1}\cdots t_d^{n_d}, \tag{2.7}$$

where each $c_{\boldsymbol{n}}$ is a polynomial in the entries of $\Sigma$. It is evident from (2.6) that the coefficients of each $Q_{ii,n}$ are nonnegative; therefore, since the entries of $\Sigma$ are nonnegative, we obtain $c_{\boldsymbol{n}}\ge 0$ for all $\boldsymbol{n}$.

Substituting (2.7) into (2.4), we obtain

$$\det(I_d-\Sigma T)^{-\alpha}=\det(I_p-\Sigma_{11}T_1)^{-\alpha}\det(I_{d-p}-\Sigma_{22}T_2)^{-\alpha}\cdot\exp\Bigl(\sum_{\boldsymbol{n}\in\mathbb{N}_0^d}\alpha c_{\boldsymbol{n}}\,t_1^{n_1}\cdots t_d^{n_d}\Bigr). \tag{2.8}$$

Next, the Maclaurin expansion of the exponential function leads to

$$\exp\Bigl(\sum_{\boldsymbol{n}\in\mathbb{N}_0^d}\alpha c_{\boldsymbol{n}}\,t_1^{n_1}\cdots t_d^{n_d}\Bigr)=\sum_{\boldsymbol{m}\in\mathbb{N}_0^d}b_{\boldsymbol{m}}\,t_1^{m_1}\cdots t_d^{m_d}. \tag{2.9}$$

Since $\alpha>0$ and $c_{\boldsymbol{n}}\ge 0$ for all $\boldsymbol{n}$, we have $b_{\boldsymbol{0}}=1$ and $b_{\boldsymbol{m}}\ge 0$ for all $\boldsymbol{m}$.

Applying (2.3) and (2.9) to (2.8), we obtain

$$\sum_{\boldsymbol{n}\in\mathbb{N}_0^d}\Bigl(E\prod_{j=1}^{d}\frac{X_j^{n_j}}{n_j!}\Bigr)t_1^{n_1}\cdots t_d^{n_d}=\Bigl[\sum_{n_1,\ldots,n_p\ge 0}\Bigl(E\prod_{j=1}^{p}\frac{X_j^{n_j}}{n_j!}\Bigr)t_1^{n_1}\cdots t_p^{n_p}\Bigr]\Bigl[\sum_{n_{p+1},\ldots,n_d\ge 0}\Bigl(E\prod_{j=p+1}^{d}\frac{X_j^{n_j}}{n_j!}\Bigr)t_{p+1}^{n_{p+1}}\cdots t_d^{n_d}\Bigr]\sum_{\boldsymbol{m}\in\mathbb{N}_0^d}b_{\boldsymbol{m}}\,t_1^{m_1}\cdots t_d^{m_d}.$$

Collecting terms on the right-hand side and comparing the coefficients of the monomial $t_1^{n_1}\cdots t_d^{n_d}$, we obtain

$$E\prod_{j=1}^{d}\frac{X_j^{n_j}}{n_j!}=\sum_{\boldsymbol{m}\le\boldsymbol{n}}b_{\boldsymbol{m}}\Bigl(E\prod_{j=1}^{p}\frac{X_j^{n_j-m_j}}{(n_j-m_j)!}\Bigr)\Bigl(E\prod_{j=p+1}^{d}\frac{X_j^{n_j-m_j}}{(n_j-m_j)!}\Bigr),$$

where $\boldsymbol{m}\le\boldsymbol{n}$ means $m_j\le n_j$ for all $j$. Next, we decompose this sum into the term corresponding to $\boldsymbol{m}=\boldsymbol{0}$ and the terms with $\boldsymbol{m}\neq\boldsymbol{0}$. Noting that $b_{\boldsymbol{0}}=1$, that $b_{\boldsymbol{m}}\ge 0$, and that all moments of the nonnegative random variables $X_1,\ldots,X_d$ are nonnegative, we obtain

$$E\prod_{j=1}^{d}\frac{X_j^{n_j}}{n_j!}\ \ge\ \Bigl(E\prod_{j=1}^{p}\frac{X_j^{n_j}}{n_j!}\Bigr)\Bigl(E\prod_{j=p+1}^{d}\frac{X_j^{n_j}}{n_j!}\Bigr),$$

which yields (2.2), the strong form of the GPI. $\square$

The following result was obtained by Russell and Sun [18] by different methods. In the context of Theorem 2.1, the corollary follows from the well-known result that if $X\sim N_d(0,\Sigma)$ then $(X_1^2/2,\ldots,X_d^2/2)'\sim\Gamma(1/2,\Sigma)$.

###### Corollary 2.2.

(Russell and Sun [18]) Let $X\sim N_d(0,\Sigma)$, and suppose that there exists a sign matrix $D$ such that all off-diagonal elements of the matrix $D\Sigma D$ are nonnegative. Then the strong form of the GPI (1.3) holds for all even integers $n_1,\ldots,n_d$.

###### Remark 2.3.

An alternative approach to establishing Corollary 2.2 is by means of the classical Isserlis-Wick formula [7]. To see this, we write

$$E\prod_{j=1}^{p}X_j^{2m_j}=E\bigl(\underbrace{X_1\cdots X_1}_{2m_1\ \text{terms}}\cdot\underbrace{X_2\cdots X_2}_{2m_2\ \text{terms}}\cdots\underbrace{X_p\cdots X_p}_{2m_p\ \text{terms}}\bigr), \tag{2.10}$$

$$E\prod_{j=p+1}^{d}X_j^{2m_j}=E\bigl(\underbrace{X_{p+1}\cdots X_{p+1}}_{2m_{p+1}\ \text{terms}}\cdot\underbrace{X_{p+2}\cdots X_{p+2}}_{2m_{p+2}\ \text{terms}}\cdots\underbrace{X_d\cdots X_d}_{2m_d\ \text{terms}}\bigr), \tag{2.11}$$

and

$$E\prod_{j=1}^{d}X_j^{2m_j}=E\bigl(\underbrace{X_1\cdots X_1}_{2m_1\ \text{terms}}\cdot\underbrace{X_2\cdots X_2}_{2m_2\ \text{terms}}\cdots\underbrace{X_d\cdots X_d}_{2m_d\ \text{terms}}\bigr). \tag{2.12}$$

By the Isserlis-Wick formula, the expectations (2.10), (2.11), and (2.12) can be written as sums of products of the elements of $\Sigma_{11}$, $\Sigma_{22}$, and $\Sigma$, respectively. Moreover, a simple inspection of the terms arising in the evaluation of (2.10) and (2.11) shows that the product of those two expectations yields a collection of terms that is a subset of the terms arising from the evaluation of (2.12). Since we assume that $\sigma_{ij}\ge 0$ for all $i$ and $j$, all those terms are nonnegative, and we obtain the strong form of the GPI.

###### Remark 2.4.

We note that Theorem 2.1 can be extended further to distributions more general than the multivariate gamma distributions. Consider mutually independent random vectors $X^{(1)},\ldots,X^{(p)}$ such that $X^{(i)}\sim\Gamma(\alpha_i,\Sigma_i)$ for all $i$, where all entries of each matrix $\Sigma_i$ are nonnegative. Denote by $X^{(i)}_j$ the $j$-th component of $X^{(i)}$, and define a random vector $Y=(Y_1,\ldots,Y_d)'$ by $Y_j=X^{(1)}_j+\cdots+X^{(p)}_j$, $j=1,\ldots,d$. Then it is straightforward to show that the moment-generating function of $Y$ is

$$\prod_{i=1}^{p}\det(I_d-\Sigma_iT)^{-\alpha_i}=\exp\Bigl(\sum_{i=1}^{p}\alpha_i\sum_{n=1}^{\infty}\frac{1}{n}\,\mathrm{tr}\bigl[(\Sigma_iT)^n\bigr]\Bigr).$$

Exploiting the additivity of the traces and using similar arguments as in the proof of Theorem 2.1 yields an analogous theorem for the vector $Y$.

## 3 Positive upper orthant dependence and the GPI

In this section, we investigate the validity of the inequalities (1.2) and (1.3) without making specific assumptions on the distribution of the marginals of $X$. As already pointed out in the introduction, (1.3) is valid if $|X|$, the vector of absolute values, is associated. It is also clear that (1.3) holds when $|X|$ is weakly associated [21].

We show here that (1.3) follows from the notion of strong positive upper orthant dependence (SPUOD), which has been shown to be strictly weaker than weak association [21]. Moreover, the weak form (1.2) follows from the notion of positive upper orthant dependence (PUOD).

Let us recall [2] that a random vector $V=(V_1,\ldots,V_d)'$ is said to be positively upper orthant dependent (PUOD) if

$$P(V_1\ge t_1,\ldots,V_d\ge t_d)\;\ge\;\prod_{j=1}^{d}P(V_j\ge t_j)$$

for all $t_1,\ldots,t_d\in\mathbb{R}$. We will also say that the vector $V$ is strongly positively upper orthant dependent (SPUOD) if

$$P(V_1\ge t_1,\ldots,V_d\ge t_d)\;\ge\;P\bigl(V_j\ge t_j,\ j\in I\bigr)\,P\bigl(V_j\ge t_j,\ j\in I^c\bigr)$$

for all $t_1,\ldots,t_d\in\mathbb{R}$ and all subsets $I\subseteq\{1,\ldots,d\}$.

We begin with a result which, in the one-dimensional case, is classical in the literature on the statistical analysis of survival data.

###### Lemma 3.1.

Let $Y=(Y_1,\ldots,Y_d)'$ be a random vector with nonnegative components and such that $E(Y_1\cdots Y_d)<\infty$. Then

$$E(Y_1\cdots Y_d)=\int_0^\infty\!\cdots\!\int_0^\infty P(Y_1\ge t_1,\ldots,Y_d\ge t_d)\,\mathrm{d}t_1\cdots\mathrm{d}t_d. \tag{3.1}$$

Proof.  For completeness, we provide a direct proof; cf. [12]. For $y\ge 0$, let $\chi_y$ be the indicator function of the interval $[0,y]$; i.e., $\chi_y(t)=1$ if $0\le t\le y$, and $\chi_y(t)=0$ if $t>y$. Then

$$\int_0^\infty\chi_y(t)\,\mathrm{d}t=y,$$

and it follows by an application of Fubini's theorem that

$$E(Y_1\cdots Y_d)=E\prod_{j=1}^{d}\int_0^\infty\chi_{Y_j}(t_j)\,\mathrm{d}t_j=\int_0^\infty\!\cdots\!\int_0^\infty E\prod_{j=1}^{d}\chi_{Y_j}(t_j)\,\mathrm{d}t_1\cdots\mathrm{d}t_d. \tag{3.2}$$

It is trivial that

$$\prod_{j=1}^{d}\chi_{Y_j}(t_j)=\begin{cases}1, & \text{if } Y_1\ge t_1,\ldots,Y_d\ge t_d,\\ 0, & \text{otherwise};\end{cases}$$

therefore

$$E\prod_{j=1}^{d}\chi_{Y_j}(t_j)=P(Y_1\ge t_1,\ldots,Y_d\ge t_d).$$

Substituting the latter result into (3.2), we obtain (3.1). $\square$
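As a quick numerical illustration of (3.1), the sketch below (NumPy assumed; the truncation of the integration domain at $30$ and the grid resolution are illustrative choices) takes $d=2$ with independent standard exponential components, for which $P(Y_1\ge t_1,Y_2\ge t_2)=e^{-t_1-t_2}$ and $E(Y_1Y_2)=1$, and evaluates the double integral of the survival function by the trapezoid rule:

```python
import numpy as np

# d = 2 with independent Exp(1) components:
#   P(Y1 >= t1, Y2 >= t2) = exp(-t1 - t2)  and  E(Y1 * Y2) = 1.
t = np.linspace(0.0, 30.0, 1501)
dt = t[1] - t[0]
w = np.full(t.size, dt)
w[0] = w[-1] = dt / 2                  # trapezoid-rule weights
T1, T2 = np.meshgrid(t, t, indexing="ij")
survival = np.exp(-T1 - T2)
integral = float(w @ survival @ w)     # double integral of the survival function
print(round(integral, 3))              # 1.0
```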

###### Theorem 3.2.

Let $X=(X_1,\ldots,X_d)'$ be a random vector such that $E\bigl(\prod_{j=1}^{d}|X_j|^{n_j}\bigr)<\infty$ for fixed exponents $n_1,\ldots,n_d>0$.

If $|X|$, the vector of absolute values of $X$, is PUOD then there holds the weak form of the GPI,

$$E\Bigl(\prod_{j=1}^{d}|X_j|^{n_j}\Bigr)\;\ge\;\prod_{j=1}^{d}E\bigl(|X_j|^{n_j}\bigr). \tag{3.3}$$

If $|X|$ is SPUOD then the strong form of the GPI holds, i.e., for any $I\subseteq\{1,\ldots,d\}$,

$$E\Bigl(\prod_{j=1}^{d}|X_j|^{n_j}\Bigr)\;\ge\;E\Bigl(\prod_{j\in I}|X_j|^{n_j}\Bigr)\,E\Bigl(\prod_{j\in I^c}|X_j|^{n_j}\Bigr). \tag{3.4}$$

Proof.  Suppose that $|X|$ is PUOD. Applying the definition of PUOD to $|X|$, replacing each $t_j$ by $t_j^{1/n_j}$, and simplifying the resulting inequalities on the $|X_j|$, $1\le j\le d$, we obtain

$$P\bigl(|X_1|^{n_1}\ge t_1,\ldots,|X_d|^{n_d}\ge t_d\bigr)\;\ge\;\prod_{j=1}^{d}P\bigl(|X_j|^{n_j}\ge t_j\bigr)$$

for all $t_1,\ldots,t_d\ge 0$. Integrating both sides of this inequality with respect to $(t_1,\ldots,t_d)$ over $[0,\infty)^d$ and applying Lemma 3.1, we obtain (3.3).

The strong form of the GPI (3.4) can be derived analogously, starting from the assumption that $|X|$ is SPUOD. $\square$

## 4 The strong form of the GPI for negative exponents

The strong form of the GPI (1.3) for the case in which all exponents are negative was proved by Wei [20]. We now derive this result succinctly by an application of the Gaussian correlation inequality [14] and the method of integrating the multivariate survival function, as applied earlier in Section 3.

###### Proposition 4.1.

Suppose that $X\sim N_d(0,\Sigma)$ and that $0<n_j<1$ for $j=1,\ldots,d$. Then

$$E\Bigl(\prod_{j=1}^{d}|X_j|^{-n_j}\Bigr)\;\ge\;E\Bigl(\prod_{j\in I}|X_j|^{-n_j}\Bigr)\,E\Bigl(\prod_{j\in I^c}|X_j|^{-n_j}\Bigr) \tag{4.1}$$

for all $I\subseteq\{1,\ldots,d\}$.

Proof.  Without loss of generality, we can assume that $I=\{1,\ldots,p\}$. We note that the conditions $n_j<1$ on the exponents are necessary to ensure that the moments in (4.1) are finite.

For $t_1,\ldots,t_d>0$, we apply the Gaussian correlation inequality [14] to obtain

$$P\bigl(|X_1|^{-n_1}\ge t_1,\ldots,|X_d|^{-n_d}\ge t_d\bigr)=P\bigl(|X_1|\le t_1^{-1/n_1},\ldots,|X_d|\le t_d^{-1/n_d}\bigr)$$
$$\ge\;P\bigl(|X_1|\le t_1^{-1/n_1},\ldots,|X_p|\le t_p^{-1/n_p}\bigr)\,P\bigl(|X_{p+1}|\le t_{p+1}^{-1/n_{p+1}},\ldots,|X_d|\le t_d^{-1/n_d}\bigr)$$
$$=\;P\bigl(|X_1|^{-n_1}\ge t_1,\ldots,|X_p|^{-n_p}\ge t_p\bigr)\,P\bigl(|X_{p+1}|^{-n_{p+1}}\ge t_{p+1},\ldots,|X_d|^{-n_d}\ge t_d\bigr).$$

Integrating the first and last terms of this inequality with respect to $(t_1,\ldots,t_d)$ over $[0,\infty)^d$ and applying Lemma 3.1, we obtain (4.1). $\square$

The argument used to prove Proposition 4.1 also establishes the novel finding that if any random vector $Y=(Y_1,\ldots,Y_d)'$ with (almost surely) positive components satisfies the Gaussian-type correlation inequality,

$$P(Y_1\le t_1,\ldots,Y_d\le t_d)\;\ge\;P(Y_1\le t_1,\ldots,Y_p\le t_p)\,P(Y_{p+1}\le t_{p+1},\ldots,Y_d\le t_d)$$

for all $t_1,\ldots,t_d>0$, then

$$E\Bigl(\prod_{j=1}^{d}Y_j^{-n_j}\Bigr)\;\ge\;E\Bigl(\prod_{j=1}^{p}Y_j^{-n_j}\Bigr)\,E\Bigl(\prod_{j=p+1}^{d}Y_j^{-n_j}\Bigr)$$

for all $n_1,\ldots,n_d>0$ such that the expectations exist, and for all $p\in\{1,\ldots,d-1\}$. In particular, the strong form of the GPI with negative exponents holds for the multivariate gamma distributions treated in [14].
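In two dimensions the correlation inequality used in this section reduces to Šidák's inequality for symmetric rectangles. As a numerical sanity check (NumPy assumed; the correlation $0.5$ and the unit square are illustrative choices), the sketch below integrates the bivariate normal density over a square by the trapezoid rule and compares the result with the product of the marginal probabilities:

```python
import numpy as np
from math import erf, sqrt

# Bivariate N(0, Sigma) with correlation rho = 0.5: compare
#   P(|X1| <= 1, |X2| <= 1)  with  P(|X1| <= 1) * P(|X2| <= 1).
rho = 0.5
x = np.linspace(-1.0, 1.0, 801)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
density = np.exp(-(X1**2 - 2 * rho * X1 * X2 + X2**2) / (2 * (1 - rho**2))) \
          / (2 * np.pi * sqrt(1 - rho**2))

w = np.full(x.size, dx)
w[0] = w[-1] = dx / 2          # trapezoid-rule weights
joint = float(w @ density @ w)  # P(|X1| <= 1, |X2| <= 1)
product = erf(1 / sqrt(2))**2   # P(|X1| <= 1) * P(|X2| <= 1) for N(0,1) marginals
print(joint > product)          # True: the rectangle inequality holds
```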

Acknowledgments. We are grateful to Frédéric Ouimet for drawing our attention to the article [5] which motivated us to take another look at the GPI.

## References

• [1] Bapat, R. B. (1989). Infinite divisibility of multivariate gamma distributions and M-matrices. Sankhyā Ser. A, 51, 73–78.
• [2] Dharmadhikari, S., and Joag-Dev, K. (1988). Unimodality, Convexity, and Applications. Academic Press, San Diego.
• [3] Esary, J. D., Proschan, F., and Walkup, D. W. (1967). Association of random variables, with applications. Ann. Math. Stat., 38, 1466–1474.
• [4] Frenkel, P. E. (2008). Pfaffians, Hafnians and products of real linear functionals. Math. Res. Lett., 15, 351–358.
• [5] Genest, C., and Ouimet, F. (2021). A combinatorial proof of the Gaussian product inequality conjecture beyond the MTP$_2$ case. arXiv:2112.12283v2.
• [6] Gray, L. J., and Wilson, D. G. (1980). Nonnegative factorization of positive semi-definite nonnegative matrices. Linear Algebra Appl., 31, 119–127.
• [7] Kan, R. (2008). From moments of sum to moments of product. J. Multivariate Anal., 99, 542–554.
• [8] Karlin, S., and Rinott, Y. (1980). Classes of orderings of measures and related correlation inequalities. I. Multivariate totally positive distributions. J. Multivariate Anal., 10, 467–498.
• [9] Karlin, S., and Rinott, Y. (1981). Total positivity properties of absolute value multinormal variables with applications to confidence interval estimates and related probabilistic inequalities. Ann. Statist., 9, 1035–1049.
• [10] Krishnamoorthy, A. S., and Parthasarathy, M. (1951). A multivariate gamma-type distribution. Ann. Math. Stat., 22, 549–557.
• [11] Lan, G., Hu, Z.-C., and Sun, W. (2020). The three-dimensional Gaussian product inequality. J. Math. Anal. Appl., 485, 123858.
• [12] Liu, Y. (2020). A general treatment of alternative expectation formulae. Statist. Probab. Lett., 166, 108863.
• [13] Li, W. V., and Wei, A. (2012). A Gaussian inequality for expected absolute products. J. Theor. Probab., 25, 92–99.
• [14] Royen, T. (2014). A simple proof of the Gaussian correlation conjecture extended to multivariate gamma distributions. Far East J. Theor. Statist., 48, 139–145.
• [15] Royen, T. (2016). A note on the existence of the multivariate gamma distribution. arXiv:1606.04747.
• [16] Royen, T. (1994). On some multivariate gamma distributions connected with spanning trees. Ann. Inst. Statist. Math., 46, 361–371.
• [17] Royen, T. (1991). Multivariate gamma distributions with one-factorial accompanying correlation matrices and applications to the distribution of the multivariate range. Metrika, 38, 299–315.
• [18] Russell, O., and Sun, W. (2022). Some new Gaussian product inequalities. Preprint, arXiv:2201.04242.
• [19] Šidák, Z. (1971). On multivariate normal probabilities of rectangles: Their dependence on correlations. Ann. Math. Statist., 42, 169–175.
• [20] Wei, A. (2014). Representations of the absolute value function and applications in Gaussian estimates. J. Theor. Probab., 27, 1059–1070.
• [21] Zheng, Y., and Cai, N. (2011). A note on the relations between some concepts of positive dependence. Commun. Statist. Theory & Methods, 40, 1335–1341.