Let us fix a nonnegative integer $n$ and let $[n] = \{0, 1, \ldots, n\}$. We say that a sequence $(a_i)$ of nonnegative real numbers is log-concave if its support $\{i : a_i > 0\}$ is a discrete interval and the inequality $a_i^2 \geq a_{i-1} a_{i+1}$ is satisfied for all $i$. A sequence $(a_i)_{i \in [n]}$ is called ultra-log-concave if $(a_i / \binom{n}{i})_{i \in [n]}$ is log-concave. Let us define $\mathrm{ULC}(n)$ to be the class of all random variables $X$ taking values in $[n]$
and such that their probability mass function $(\mathbb{P}(X = i))_{i \in [n]}$ is ultra-log-concave. We shall slightly abuse notation by using the same letter to denote both the law of $X$ and its probability mass function. Note that if $b_{n,p}(i) = \binom{n}{i} p^i (1-p)^{n-i}$,
$i \in [n]$, stands for the probability mass function of the binomial distribution $B_{n,p}$, then ultra-log-concavity of $(a_i)$ is equivalent to log-concavity of $(a_i / b_{n,p}(i))_{i \in [n]}$, since the two sequences differ by a log-affine factor, and thus equivalent to $(a_i)$ being log-concave with respect to the binomial distribution $B_{n,p}$.
Note that $n = \infty$ can also be considered. In this case random variables take values in the set of nonnegative integers and are called ultra-log-concave (we write $X \in \mathrm{ULC}(\infty)$) if $(a_k / \pi_\lambda(k))_{k \geq 0}$ is log-concave, where $\pi_\lambda(k) = e^{-\lambda} \lambda^k / k!$ stands for the probability mass function of the Poisson random variable $Z_\lambda$ with parameter $\lambda > 0$. Since $a_k / \pi_\lambda(k)$ equals $a_k \cdot k!$ multiplied by a log-affine factor, this condition does not depend on $\lambda$ and amounts to log-concavity of $(a_k \cdot k!)_{k \geq 0}$. Thus, ultra-log-concavity is in this case equivalent to saying that $(a_k)$ is log-concave with respect to $\pi_\lambda$.
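These definitions are straightforward to verify numerically; in the following sketch the parameters $n$, $p$ and $\lambda$ are illustrative, and the binomial case holds with equality since the relevant ratio is log-affine.

```python
from math import comb, exp, factorial

def is_log_concave(seq):
    # a_i^2 >= a_{i-1} a_{i+1}, up to a small relative tolerance
    return all(seq[i] ** 2 >= seq[i - 1] * seq[i + 1] * (1 - 1e-9)
               for i in range(1, len(seq) - 1))

n, p, lam = 10, 0.3, 2.0
binom_pmf = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
poisson_pmf = [exp(-lam) * lam ** k / factorial(k) for k in range(30)]

# ultra-log-concavity on {0,...,n}: divide by the binomial coefficients
assert is_log_concave([binom_pmf[i] / comb(n, i) for i in range(n + 1)])
# ultra-log-concavity on the nonnegative integers: multiply by k!
assert is_log_concave([poisson_pmf[k] * factorial(k) for k in range(30)])
```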
Ultra-log-concave random variables have attracted considerable attention over the last two decades. The definition itself is, to the best of our knowledge, due to Pemantle [17], who introduced it in the context of the theory of negative dependence of random variables. After reading the then-unpublished manuscript of Pemantle, Liggett wrote his article [13], where he proved that the convolution of a $\mathrm{ULC}(n)$ random variable and a $\mathrm{ULC}(m)$ random variable is a $\mathrm{ULC}(n+m)$ random variable. Note that the border case of this statement is the family of binomial random variables in the case of finite support (if $X \sim B_{n,p}$ and $Y \sim B_{m,p}$ are independent, then $X + Y \sim B_{n+m,p}$) and the family of Poisson distributions for random variables with infinite support (if independent $X$ and $Y$ are Poisson with parameters $\lambda_1$ and $\lambda_2$, then $X + Y$ is Poisson with parameter $\lambda_1 + \lambda_2$). A short proof of this fact, surprisingly connecting the statement to the famous Alexandrov–Fenchel inequality in convex geometry, was given by Gurvits in [10]. The statement for random variables with infinite support is actually much older and due to Walkup, see Theorem 1 in [18]. A direct and simpler proof of Walkup's theorem appeared also in [16], in the context of Khintchine inequalities; see also a recent proof from [15] using localization techniques. It is worth mentioning that the same statement holds true if one replaces log-concavity with log-convexity; this is due to Davenport and Pólya [5].
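Liggett's convolution theorem can be illustrated numerically; in the sketch below the $\mathrm{ULC}(2)$ distribution is an arbitrary illustrative choice, convolved with an independent copy of itself.

```python
from math import comb

def is_log_concave(seq):
    # a_i^2 >= a_{i-1} a_{i+1}, up to a small relative tolerance
    return all(seq[i] ** 2 >= seq[i - 1] * seq[i + 1] * (1 - 1e-9)
               for i in range(1, len(seq) - 1))

# X with pmf mu on {0,1,2}: (mu_i / binom(2,i)) = (0.3, 0.25, 0.2) is log-concave,
# so X lies in ULC(2)
mu = [0.3, 0.5, 0.2]
assert is_log_concave([mu[i] / comb(2, i) for i in range(3)])

# convolution of mu with itself, i.e. the law of X + X' for independent copies
conv = [sum(mu[j] * mu[k - j] for j in range(3) if 0 <= k - j < 3) for k in range(5)]
assert abs(sum(conv) - 1) < 1e-12

# Liggett's theorem predicts that X + X' belongs to ULC(4)
assert is_log_concave([conv[k] / comb(4, k) for k in range(5)])
```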
More recently, Johnson in [12] considered $\mathrm{ULC}(\infty)$ random variables in the context of the Shannon entropy $H(X) = -\sum_{k \geq 0} \mathbb{P}(X = k) \log \mathbb{P}(X = k)$, where we use the convention $0 \log 0 = 0$. He proved that the Poisson distribution maximizes entropy in the class of ultra-log-concave distributions under fixed mean. The author uses an interesting semigroup technique based on adding a Poisson random variable and the operation of thinning. In [1] Aravinda, Marsiglietti and Melbourne established concentration inequalities for the class $\mathrm{ULC}(\infty)$. The main ingredient of the proof (see Lemma 2.1 therein) was the inequality $\mathbb{E} e^{t X} \leq e^{\lambda (e^t - 1)}$ satisfied for any $t \in \mathbb{R}$, where $\lambda = \mathbb{E}X$; note that the right-hand side is the moment generating function of $Z_\lambda$. By considering the Taylor expansion around $t = 0$ the authors deduced the inequality $\mathrm{Var}(X) \leq \mathbb{E}X$. The main tool used to establish these results was a rather sophisticated localization technique developed in [8].
The first goal of the present paper is to generalize, and give simple proofs of, the above-mentioned results from [1] and [12]. In particular, we shall work with the class $\mathrm{ULC}(n)$ for arbitrary $n$, including the case $n = \infty$. We prove the following theorem.
Let $X \in \mathrm{ULC}(n)$ and let $p \in [0,1]$ be such that $\mathbb{E}X = np$. Then
(a) for any convex $f \colon \mathbb{R} \to \mathbb{R}$ one has $\mathbb{E} f(X) \leq \mathbb{E} f(B_{n,p})$,
(b) for $\theta \in \mathbb{R}$ we have $\mathbb{E} e^{\theta X} \leq \mathbb{E} e^{\theta B_{n,p}} = (1 - p + p e^{\theta})^n$, (c) for $q \geq 1$ we have $\mathbb{E} |X - np|^q \leq \mathbb{E} |B_{n,p} - np|^q$, and (d) for any convex $f \colon \mathbb{R} \to \mathbb{R}$ one has $\mathbb{E} f(X) \leq \mathbb{E} f(Z_{np})$.
Similarly, if $X \in \mathrm{ULC}(\infty)$ and $\lambda > 0$ is such that $\mathbb{E}X = \lambda$, then
(a') for any convex $f \colon \mathbb{R} \to \mathbb{R}$ one has $\mathbb{E} f(X) \leq \mathbb{E} f(Z_\lambda)$,
(b') for $\theta \in \mathbb{R}$ we have $\mathbb{E} e^{\theta X} \leq \mathbb{E} e^{\theta Z_\lambda} = e^{\lambda(e^\theta - 1)}$ and (c') for $q \geq 1$ we have $\mathbb{E} |X - \lambda|^q \leq \mathbb{E} |Z_\lambda - \lambda|^q$.
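The comparisons above are easy to test on examples; in the sketch below the $\mathrm{ULC}(2)$ law, the convex test functions and the values of $\theta$ are illustrative choices, and the final assertion checks the elementary binomial-versus-Poisson moment generating function bound.

```python
from math import comb, exp

# an illustrative ULC(2) random variable: pmf on {0,1,2}
mu = [0.3, 0.5, 0.2]
n = 2
p = sum(i * mu[i] for i in range(n + 1)) / n          # so that E X = np
b = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]

# part (a) for a few convex test functions
for f in (lambda x: exp(x), lambda x: (x - n * p) ** 2, lambda x: abs(x - n * p) ** 3):
    Ef_X = sum(mu[i] * f(i) for i in range(n + 1))
    Ef_B = sum(b[i] * f(i) for i in range(n + 1))
    assert Ef_X <= Ef_B + 1e-12

# parts (b) and (d): E e^{theta X} <= (1-p+pe^theta)^n <= e^{np(e^theta - 1)}
for theta in (-1.0, 0.5, 2.0):
    mgf_X = sum(mu[i] * exp(theta * i) for i in range(n + 1))
    mgf_B = (1 - p + p * exp(theta)) ** n
    assert mgf_X <= mgf_B + 1e-12
    assert mgf_B <= exp(n * p * (exp(theta) - 1)) + 1e-12
```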
We shall use the following generalization of a lemma due to Barthe and Naor, see Lemma 8 in [3].
Let $X, Y$ be real random variables with laws $\mu, \nu$ satisfying $\mathbb{E}X = \mathbb{E}Y$ and such that the Radon–Nikodym derivative $\frac{d\nu}{d\mu}$ exists and the function $\frac{d\nu}{d\mu} - 1$ changes sign at most two times. Then $\frac{d\nu}{d\mu} - 1$ changes sign precisely two times and if the sign pattern of $\frac{d\nu}{d\mu} - 1$ is $(+,-,+)$, then for any convex function $\Phi$ one has $\mathbb{E} \Phi(X) \leq \mathbb{E} \Phi(Y)$.
The inequality $\mathbb{E} \Phi(X) \leq \mathbb{E} \Phi(Y)$, valid for all convex functions $\Phi$,
defines the so-called Choquet order on the space of probability distributions. We give a simple proof of the above lemma using the technique of intersecting densities, used in the context of information theory in [2, 4, 14] and in convex geometry in [7]. The lemma was originally formulated and used for continuous random variables. However, here we shall demonstrate its relevance in the discrete setting. The following corollary is immediate.
Let $X, Y$ be random variables supported in $\mathbb{Z}$ with laws $\mu, \nu$ satisfying $\mathbb{E}X = \mathbb{E}Y$. Assume that $(\mu_i / \nu_i)$ is log-concave and the sequence $(\nu_i - \mu_i)$ changes sign at most two times. Then $(\nu_i - \mu_i)$ changes sign precisely two times and if the sign pattern of $(\nu_i - \mu_i)$ is $(+,-,+)$, then $\mathbb{E} \Phi(X) \leq \mathbb{E} \Phi(Y)$ for any convex function $\Phi$.
Let us also mention that it is clearly possible to formulate an analogue of this corollary in the continuous setting.
In the third section we go beyond the convex case discussed in Theorem 1. We develop a discrete analogue of the concept of degrees of freedom introduced in [6] and use it to prove the following theorem.
Let $X$ be an ultra-log-concave random variable with integral mean $\lambda = \mathbb{E}X \in \{1, 2, \ldots\}$. Then $H(X) \leq H(Z_\lambda)$.
We first prove our main lemma.
Proof of Lemma 2.
Since $\int \left( \frac{d\nu}{d\mu} - 1 \right) d\mu = 0$, we get that $\frac{d\nu}{d\mu} - 1$ has to change sign at least once. Suppose that it changes sign exactly once, at a point $x_0$. Since $\mathbb{E}X = \mathbb{E}Y$ implies $\int (x - x_0) \left( \frac{d\nu}{d\mu} - 1 \right) d\mu(x) = 0$, we get a contradiction, since the integrand has fixed sign (and is not identically zero). We have proved that $\frac{d\nu}{d\mu} - 1$ changes sign exactly two times.
Now, our goal is to show that $\int \Phi \, d\nu - \int \Phi \, d\mu \geq 0$, which is, for any constants $a, b$, equivalent to $\int \left( \Phi(x) - ax - b \right) \left( \frac{d\nu}{d\mu} - 1 \right) d\mu(x) \geq 0$. Suppose $\frac{d\nu}{d\mu} - 1$ changes sign at points $x_1 < x_2$. Let us choose $a, b$ in such a way that $\Phi(x_i) - a x_i - b = 0$ for $i = 1, 2$ (a simple system of two linear equations). By convexity we see that $\Phi(x) - ax - b$ has sign pattern $(+,-,+)$ and changes sign exactly at $x_1, x_2$. Since $\frac{d\nu}{d\mu} - 1$ has sign pattern $(+,-,+)$, the integrand is nonnegative and the assertion follows. ∎
The proof of Corollary 3 is immediate.
Proof of Corollary 3.
Since $(\log(\mu_i / \nu_i))$ is a concave sequence, the set $\{i : \mu_i > \nu_i\}$ is a discrete interval. Therefore, if the sequence $(\nu_i - \mu_i)$ changes sign exactly two times, its sign pattern must be $(+,-,+)$, and it remains to apply the discrete analogue of Lemma 2, whose proof carries over verbatim. ∎
Proof of Theorem 1.
(a) We can assume that $p \in (0,1)$, as otherwise $X$ is deterministic and there is nothing to prove. Take a convex function $f$. Let $\mu$ be the law of $X$ and $\nu$ the law of $B_{n,p}$. With the notation of Lemma 2 we have $\frac{d\nu}{d\mu}(i) = \nu_i / \mu_i$. Since the sequence $(p^i (1-p)^{n-i})_{i \in [n]}$ is log-affine, by the definition of the class $\mathrm{ULC}(n)$ we see that $(\mu_i / \nu_i)$ is log-concave. Since $\mu/\nu$ is supported on a discrete interval and on this interval $(\log(\mu_i/\nu_i))$ is a concave sequence, we get that the set $\{i : \mu_i > \nu_i\}$ is a discrete interval. Thus $(\nu_i - \mu_i)$ changes sign at most two times. Lemma 2 implies that $(\nu_i - \mu_i)$ changes sign precisely two times, and the concavity of $(\log(\mu_i/\nu_i))$ implies that the sign pattern of $(\nu_i - \mu_i)$ is $(+,-,+)$. The assertion follows from Lemma 2.
Points (b) and (c) follow immediately from (a). Note that the functions $x \mapsto e^{\theta x}$ and $x \mapsto |x - np|^q$ are convex.
(d) Again let $\mu$ be the law of $X$, and let now $\nu = \pi_{np}$ be the law of $Z_{np}$. According to Corollary 3 we have to verify the log-concavity of $(\mu_i / \pi_{np}(i))$, which reduces, after canceling log-affine factors and the log-concave factor $(\mu_i / \binom{n}{i})$, to the log-concavity of the sequence $r_i = \binom{n}{i} \, i! = \frac{n!}{(n-i)!}$, that is, to the inequality $r_i^2 \geq r_{i-1} r_{i+1}$, $1 \leq i \leq n-1$. This is equivalent with $n - i + 1 \geq n - i$, which is obvious.
The proofs of points (a'), (b') and (c') are very similar, but simpler. In this case the last step is to verify the log-concavity of $(\mu_k / \pi_\lambda(k))$, which, after canceling the log-affine factor $(e^{\lambda} \lambda^{-k})$, is equivalent to the log-concavity of $(\mu_k \cdot k!)$, that is, to the definition of the class $\mathrm{ULC}(\infty)$. ∎
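The elementary facts behind the comparison with the Poisson weights can be confirmed directly; the value $n = 12$ below is an illustrative choice.

```python
from math import comb, factorial

# log-concavity of r_i = binom(n,i) * i! = n!/(n-i)!, the step used in part (d)
n = 12
r = [comb(n, i) * factorial(i) for i in range(n + 1)]
assert all(r[i] ** 2 >= r[i - 1] * r[i + 1] for i in range(1, n))

# the same sequence compared with the Poisson weights: n!/(n-i)! <= n^i
assert all(comb(n, i) * factorial(i) <= n ** i for i in range(n + 1))
```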
3. Discrete degrees of freedom
Suppose $p = (p_i)$ is a log-concave sequence supported in some finite discrete interval, which without loss of generality can be assumed to be $\{0, 1, \ldots, n\}$. We say that $p$ has at least $m$ degrees of freedom if there exist linearly independent sequences $v^{(1)}, \ldots, v^{(m)}$ supported in $\{0, 1, \ldots, n\}$ and a number $\varepsilon_0 > 0$ such that for all $\varepsilon_1, \ldots, \varepsilon_m \in (-\varepsilon_0, \varepsilon_0)$ the sequence
$p_i + \sum_{j=1}^{m} \varepsilon_j v^{(j)}_i$
is log-concave in $i$.
We shall prove the following lemma describing sequences with a small number of degrees of freedom. The proof is a rather straightforward adaptation of the argument presented in [6].
Let $m \geq 2$. Suppose a positive log-concave sequence $p$ supported in $\{0, 1, \ldots, n\}$ has at most $m$ degrees of freedom. Then $p_i = e^{-\phi(i)}$ with $\phi = \max(A_1, \ldots, A_{m-1})$, where $A_1, \ldots, A_{m-1}$ are arithmetic progressions.
Since $p$ is strictly positive and log-concave, it can be written in the form $p_i = e^{-\phi(i)}$, where $(\phi(i))$ is a convex sequence. The sequence $s_i = \phi(i+1) - \phi(i)$ is called the slope sequence. Clearly the slope sequence is non-decreasing. We prove the lemma by contraposition. We shall assume that $\phi$ cannot be written as a maximum of $m-1$ arithmetic progressions. Our goal is then to prove that $p$ has at least $m+1$ degrees of freedom.
Define the sequence $(n_j)$ inductively by taking $n_1 = 0$ and $n_{j+1} = \min\{i : s_i > s_{n_j}\}$, as long as this set is non-empty. Thus we obtain indices $n_1 < n_2 < \cdots < n_N$ with $N \geq m$, as $\phi$ is not piecewise linear with at most $m-1$ pieces.
For $j = 1, \ldots, N$ let us define the sequence $v^{(j)}$ via the expression
$v^{(j)}_i = p_i \, (i - n_j)_+, \qquad$ where $(x)_+ = \max(x, 0)$,
and additionally set $v^{(0)}_i = p_i$. It is not hard to show that the corresponding perturbed potentials $\phi(i) - \sum_j \varepsilon_j (i - n_j)_+$ are convex for small $\varepsilon_j$, since at each point $n_j$ the slope of $\phi$ increases strictly. We shall assume that the slope sequence is not constant; if this is not the case, it suffices to reflect the picture and use the sequence $(p_{n-i})$ instead of $(p_i)$.
Claim 1. There exists $\delta > 0$ such that for all $\varepsilon_0, \varepsilon_1, \ldots, \varepsilon_N \in (-\delta, \delta)$ the sequence $p_i + \sum_{j=0}^{N} \varepsilon_j v^{(j)}_i$ is log-concave.
Proof of Claim 1.
On each of the intervals $\{n_l, \ldots, n_{l+1}\}$, $l = 1, \ldots, N$, where we take $n_{N+1} = n$, the above sequence is given by an expression of the form $e^{-A(i)} (1 + w(i))$, where $A$ is an arithmetic progression and $w$ is affine. We first check that for $i$ lying strictly inside one of these intervals the log-concavity inequality holds for sufficiently small $\varepsilon_j$. By continuity one can assume that $1 + w > 0$. We want to prove convexity of $A(i) - \log(1 + w(i))$.
The first term is affine. For small $\varepsilon_j$ the function $g(t) = -\log(1 - t)$ is increasing and convex on the relevant range. Since the sequence $(-w(i))$ is affine, and hence convex, it is enough to show that $(g(\psi(i)))$ is a convex sequence whenever $g$ is an increasing convex function and $(\psi(i))$ is convex. This is straightforward, since $g(\psi(i+1)) + g(\psi(i-1)) \geq 2 g\left( \frac{\psi(i+1) + \psi(i-1)}{2} \right) \geq 2 g(\psi(i))$.
Now we are left with checking our inequality at the points $n_l$. But since at these points the log-concavity inequality for $p$ is strict, the inequality follows by a simple continuity argument.
Claim 2. The sequences $v^{(0)}, v^{(1)}, \ldots, v^{(N)}$ are linearly independent.
Proof of Claim 2.
Let $u^{(j)}_i = v^{(j)}_i / p_i$, so that $u^{(0)} \equiv 1$ and $u^{(j)}_i = (i - n_j)_+$ for $j = 1, \ldots, N$. To prove that the $v^{(j)}$ are linearly independent, it suffices to show that the $u^{(j)}$ are linearly independent. Indeed, suppose that $\sum_j \alpha_j v^{(j)} = 0$. This means that
$p_i \sum_j \alpha_j u^{(j)}_i = 0$ for every $i$, and thus $\sum_j \alpha_j u^{(j)}_i = 0$, as $p_i > 0$.
If the $u^{(j)}$ are linearly independent, it follows that $\alpha_j = 0$ for all $j$, which easily leads to the assertion for the $v^{(j)}$.
Now the fact that the $u^{(j)}$ are linearly independent is easy, since $u^{(j)}$ for $j \geq 1$ is supported in $\{n_j + 1, \ldots, n\}$. These supports form a decreasing sequence of intervals, so in order to show that every vanishing linear combination in fact has zero coefficients it is enough to evaluate this combination first at the point $i = 0$ to conclude that $\alpha_0 = 0$ (note that the support of $u^{(j)}$ for $j \geq 1$ is contained in $\{n_j + 1, \ldots, n\}$) and then consecutively at the points $n_j + 1$ to conclude that $\alpha_1 = \cdots = \alpha_N = 0$. ∎
Combining Claim 1 and Claim 2 finishes the proof.
Let us now consider the space $\mathcal{L}_n$ of all log-concave sequences $(p_i)_{i=0}^n$ whose support is a discrete interval contained in $\{0, 1, \ldots, n\}$. We shall identify a sequence
$(p_i)_{i=0}^n$ with a vector in $\mathbb{R}^{n+1}$. Suppose we are given vectors $u_1, \ldots, u_k \in \mathbb{R}^{n+1}$ and real numbers $c_1, \ldots, c_k$. Let us introduce the set
$\mathcal{P} = \{ p \in \mathcal{L}_n : \langle p, u_l \rangle = c_l \ \text{for} \ l = 1, \ldots, k \}.$
We will be assuming that this set is bounded, which will be the case in our applications. Let us now assume that we are given a convex continuous functional $\Psi \colon \mathbb{R}^{n+1} \to \mathbb{R}$. The following lemma is well known and can be found in the continuous setting in [6].
The supremum of a convex continuous functional $\Psi$ on $\mathcal{P}$ is attained on some sequence having at most $k$ degrees of freedom.
Let $\mathcal{Q} = \mathrm{conv}(\mathcal{P})$. By compactness of $\mathcal{P}$ the supremum of $\Psi$ on $\mathcal{P}$ is attained. By convexity of $\Psi$ the maximum of $\Psi$ on $\mathcal{Q}$ is the same as the maximum on $\mathcal{P}$ and is attained at some point $z \in \mathcal{Q}$. Moreover, as we work in a finite-dimensional Euclidean space, $\mathcal{Q}$ is also compact. A baby version of the Krein–Milman theorem shows that $z$ is a convex combination of extreme points of $\mathcal{Q}$, that is $z = \theta_1 z_1 + \cdots + \theta_r z_r$, where the positive numbers $\theta_1, \ldots, \theta_r$ sum up to one. By convexity $\Psi$ attains its maximum on $\mathcal{Q}$ also at all the points $z_l$. Thus, the maximum of $\Psi$ on $\mathcal{P}$ is attained at some extreme point of $\mathcal{Q}$. Clearly extreme points of $\mathcal{Q}$ must belong to $\mathcal{P}$. It is therefore enough to show that if $p \in \mathcal{P}$ has more than $k$ degrees of freedom, then $p$ is not an extreme point of $\mathcal{Q}$.
Suppose $p \in \mathcal{P}$ has more than $k$ degrees of freedom: there exist linearly independent sequences $v^{(1)}, \ldots, v^{(m)}$ with $m > k$, supported in $\{0, 1, \ldots, n\}$, and $\varepsilon_0 > 0$ such that for all $\varepsilon_1, \ldots, \varepsilon_m \in (-\varepsilon_0, \varepsilon_0)$ the sequence
$q(\varepsilon) = p + \sum_{j=1}^m \varepsilon_j v^{(j)}$
is log-concave in $i$, and thus belongs to $\mathcal{L}_n$. Note that the set of parameters $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_m)$ for which $\langle \sum_j \varepsilon_j v^{(j)}, u_l \rangle = 0$ for $l = 1, \ldots, k$ forms a linear subspace of dimension at least $m - k$; for such $\varepsilon$ we have $q(\varepsilon) \in \mathcal{P}$. Since $m > k$, this subspace is non-trivial and contains two small antipodal points $\varepsilon$ and $-\varepsilon$. Note that $p = \frac{1}{2} q(\varepsilon) + \frac{1}{2} q(-\varepsilon)$, and thus $p$ is not an extreme point of $\mathrm{conv}(\mathcal{P})$, as both $q(\varepsilon)$ and $q(-\varepsilon)$ belong to $\mathcal{P}$. ∎
We are now ready to prove Theorem 4.
Proof of Theorem 4.
Step 1. Let $\lambda = \mathbb{E}X \in \{1, 2, \ldots\}$ and let $p = (p_i)_{i \geq 0}$ be the probability mass function of $X$. Our goal is to prove the inequality $H(X) \leq H(Z_\lambda)$. By an approximation argument one can assume that $p$ has its support contained in $\{0, 1, \ldots, n\}$. Note that
$H(X) \leq -\sum_i p_i \log \pi_\lambda(i) = \lambda - \lambda \log \lambda + \sum_i p_i \log i!,$
where we have used the nonnegativity of relative entropy, and that for $p = \pi_\lambda$ this upper bound equals $H(Z_\lambda)$. We would like to maximize the linear (and thus convex) functional $p \mapsto \sum_i p_i \log i!$ under the constraints given by the vectors $u_1 = (1, 1, \ldots, 1)$ (fixing $p$ to be a probability distribution) and $u_2 = (0, 1, \ldots, n)$, fixing the mean. Applying Lemma 6 to the log-concave sequence $q_i = p_i \, i!$ (with the constraint vectors rewritten accordingly), we see that the maximum is attained on sequences having at most two degrees of freedom, and therefore, by Lemma 5, on sequences $q$ of the form $q_i = e^{-A(i)}$ with $A$ an arithmetic progression. As a consequence, in order to prove the inequality $\sum_i p_i \log i! \leq \sum_{i \geq 0} \pi_\lambda(i) \log i!$ it is enough to consider only sequences of the form
$p_i = C \, \frac{x^i}{i!}, \qquad i \in \{a, a+1, \ldots, b\},$
with $C, x > 0$ and $0 \leq a \leq b \leq n$.
Step 2. One can assume that $p$ is non-degenerate, that is $a < b$, as otherwise $X$ is a point mass and $H(X) = 0$. Clearly $\sum_{i=a}^b p_i = 1$ and $\sum_{i=a}^b i \, p_i = \lambda$. Our goal is to prove the inequality
$\sum_{i=a}^b p_i \log i! \leq \sum_{i \geq 0} \pi_\lambda(i) \log i!.$
This simplifies after plugging in $p_i = C x^i / i!$: the normalization reads $C \, S(x) = 1$, where $S(x) = \sum_{i=a}^b \frac{x^i}{i!}$, which after taking the logarithm gives $\log C = -\log S(x)$. Recall that the mean constraint reads $\lambda = \frac{x S'(x)}{S(x)}$. Plugging this in gives an equivalent form of the desired inequality involving the single variable $x$.
It would therefore be enough to show that the resulting function of $x$
is nonnegative for all $x > 0$. Taking for $x$ the value determined by the mean constraint will then finish the proof.
Step 3. By a direct computation we have
$x \, m'(x) = \frac{\sum_{i=a}^b i^2 \frac{x^i}{i!}}{S(x)} - \left( \frac{\sum_{i=a}^b i \frac{x^i}{i!}}{S(x)} \right)^2, \qquad \text{where } m(x) = \frac{x S'(x)}{S(x)}, \quad S(x) = \sum_{i=a}^b \frac{x^i}{i!},$
that is, $x \, m'(x)$ is the variance of the probability distribution with weights proportional to $(x^i / i!)_{i=a}^b$.
Claim 1. For all $x > 0$ we have $m'(x) \geq 0$.
Proof of Claim 1.
By the Cauchy–Schwarz inequality
$\left( \sum_{i=a}^b i \, \frac{x^i}{i!} \right)^2 \leq \left( \sum_{i=a}^b \frac{x^i}{i!} \right) \left( \sum_{i=a}^b i^2 \, \frac{x^i}{i!} \right).$
The assertion follows by dividing both sides by $S(x)^2$. ∎
Claim 2. The function $g(x) = m(x) - \lambda$ has a unique zero $x_0 \in (0, \infty)$.
According to a theorem due to Gurvits [9] (see also [10, 11] for alternative proofs), a function of the form $x \mapsto \sum_i a_i \frac{x^i}{i!}$ is log-concave on $(0, \infty)$ if the sequence $(a_i)$ is log-concave. Thus $S$ is log-concave for $x > 0$. Equivalently, $S'(x)/S(x)$ is a decreasing function on $(0, \infty)$; combined with Claim 1 this shows that $m$ is increasing.
Since $a < b$, we have $\lim_{x \to 0^+} m(x) = a$ and $\lim_{x \to \infty} m(x) = b$, while $a \leq \lambda \leq b$. If $a < \lambda < b$, then by the intermediate value property $g$ has a unique zero in $(0, \infty)$. If $\lambda \in \{a, b\}$, then the desired inequality follows by a limiting argument. ∎
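The Gurvits log-concavity statement invoked above can be tested numerically on a grid; the coefficient sequence below (the indicator of $\{2, \ldots, 5\}$, a log-concave sequence) and the grid parameters are illustrative choices.

```python
from math import log, factorial

# Gurvits: if (a_i) is log-concave, then f(x) = sum_i a_i x^i / i! is
# log-concave on (0, infinity). Here a_i = 1 for i in {2,...,5}, else 0.
a_lo, a_hi = 2, 5

def f(x):
    return sum(x ** i / factorial(i) for i in range(a_lo, a_hi + 1))

# check that the discrete second derivative of log f is (approximately) <= 0
h = 1e-3
for k in range(1, 2001):
    x = 0.01 * k
    second = log(f(x - h)) - 2 * log(f(x)) + log(f(x + h))
    assert second <= 1e-9
```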
We can now easily finish the proof. By Claims 1 and 2 we see that the derivative of the function of $x$ from Step 2 is nonpositive on $(0, x_0)$ and nonnegative on $(x_0, \infty)$. Therefore this function attains its minimum at $x_0$, and it is enough to check the desired inequality at $x = x_0$. Clearly $m(x_0) = \lambda$ implies that $a \leq \lambda \leq b$. Thus, since $\lambda$ is an integer, $\lambda \in \{a, a+1, \ldots, b\}$. The inequality is therefore equivalent to $\sum_{i=a}^{b} \frac{\lambda^i}{i!} \leq e^{\lambda}$ and is obvious, as the left-hand side is a truncated sum of the series $e^\lambda = \sum_{i \geq 0} \frac{\lambda^i}{i!}$ defining the exponential function. ∎
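Theorem 4 can be sanity-checked numerically: binomial laws are ultra-log-concave, so $B_{4,1/2}$, whose mean $2$ is integral, should have entropy at most that of $Z_2$. The Poisson cutoff at $60$ below is an illustrative truncation of the negligible tail.

```python
from math import comb, exp, factorial, log

def entropy(pmf):
    # Shannon entropy in nats, with the convention 0 log 0 = 0
    return -sum(q * log(q) for q in pmf if q > 0)

n, p = 4, 0.5
lam = n * p                                            # integral mean lambda = 2
binom_pmf = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
poisson_pmf = [exp(-lam) * lam ** k / factorial(k) for k in range(60)]

assert entropy(binom_pmf) <= entropy(poisson_pmf)
```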
[1] H. Aravinda, A. Marsiglietti, J. Melbourne, Concentration inequalities for ultra log-concave distributions, preprint (2021), arXiv:2104.05054.
[2] M. Bartczak, P. Nayar, S. Zwara, Sharp variance-entropy comparison for nonnegative Gaussian quadratic forms, IEEE Trans. Inform. Theory 67 (2021), no. 12, 7740–7751.
[3] F. Barthe, A. Naor, Hyperplane projections of the unit ball of $\ell_p^n$, Discrete Comput. Geom. 27 (2002), no. 2, 215–226.
[4] M. Białobrzeski, P. Nayar, Rényi entropy and variance comparison for symmetric log-concave random variables, preprint (2021), arXiv:2108.10100.
[5] H. Davenport, G. Pólya, On the product of two power series, Canad. J. Math. 1 (1949), 1–5.
[6] Y. Eitan, The centered convex body whose marginals have the heaviest tails, preprint, arXiv:2110.14382.
[7] A. Eskenazis, P. Nayar, T. Tkocz, Sharp comparison of moments and the log-concave moment problem, Adv. Math. 334 (2018), 389–416.
[8] M. Fradelizi, O. Guédon, A generalized localization theorem and geometric inequalities for convex bodies, Adv. Math. 204 (2006), no. 2, 509–529.
[9] L. Gurvits, On multivariate Newton-like inequalities, in: Advances in Combinatorial Mathematics, Springer, Berlin (2009), 61–78.
[10] L. Gurvits, A short proof, based on mixed volumes, of Liggett's theorem on the convolution of ultra-logconcave sequences, Electron. J. Combin. 16 (2009), Note 5.
[11] A. Havrilla, P. Nayar, T. Tkocz, Khinchin-type inequalities via Hadamard's factorisation, Int. Math. Res. Not. IMRN (2021), rnab313, https://doi.org/10.1093/imrn/rnab313.
[12] O. Johnson, Log-concavity and the maximum entropy property of the Poisson distribution, Stochastic Process. Appl. 117 (2007), no. 6, 791–802.
[13] T. M. Liggett, Ultra logconcave sequences and negative dependence, J. Combin. Theory Ser. A 79 (1997), no. 2, 315–325.
[14] M. Madiman, P. Nayar, T. Tkocz, Sharp moment-entropy inequalities and capacity bounds for log-concave distributions, IEEE Trans. Inform. Theory 67 (2021), no. 1, 81–94.
[15] A. Marsiglietti, J. Melbourne, Geometric and functional inequalities for log-concave probability sequences, preprint, arXiv:2004.12005.
[16] P. Nayar, K. Oleszkiewicz, Khinchine type inequalities with optimal constants via ultra log-concavity, Positivity 16 (2012), 359–371.
[17] R. Pemantle, Towards a theory of negative dependence, J. Math. Phys. 41 (2000), no. 3, 1371–1390.
[18] D. W. Walkup, Pólya sequences, binomial convolution and the union of random sets, J. Appl. Probab. 13 (1976), 76–85.