## 1. Introduction

In his seminal paper [1], Keith Ball proved that the maximal $(n-1)$-dimensional volume of the section of the cube $Q_n = [-\frac12,\frac12]^n$ by a hyperplane is $\sqrt{2}$, thereby proving a conjecture by Hensley [10]. More precisely, for $a \in \mathbb{R}^n$ with $|a| = 1$, put $A(a,t) = \mathrm{vol}\left(Q_n \cap \{x \in \mathbb{R}^n : \langle a, x\rangle = t\}\right)$ for the volume of the intersection of the cube with the hyperplane $\{x : \langle a, x\rangle = t\}$, where $\langle \cdot,\cdot\rangle$ is the usual scalar product in $\mathbb{R}^n$ and $\mathrm{vol}$ stands for the ($(n-1)$-dimensional) volume.

###### Theorem 1 (Ball [1]).

For every unit vector $a \in \mathbb{R}^n$ and all $t \in \mathbb{R}$, the volume of the section of the cube by the hyperplane $\{x : \langle a, x\rangle = t\}$ is at most $\sqrt{2}$. Moreover, equality holds only if $t = 0$ and $a$ has only two non-zero coordinates, both of value $\frac{1}{\sqrt{2}}$.

Ball’s result means that the maximal volume of the sections of the cube by hyperplanes is achieved when the section is a product of an $(n-2)$-dimensional cube with the diagonal of a $2$-dimensional cube. The original proof is based on the Fourier transform and a series expansion. Alternative proofs can be found in [23] (based on distribution functions) and, very recently, in [22] (by means of a transport argument).
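As a quick numerical sanity check (an illustration, not part of the paper's argument), one can estimate the extremal section volume by Monte Carlo, approximating the hyperplane section of the cube $[-\frac12,\frac12]^3$ by a thin slab. The direction with two coordinates equal to $1/\sqrt{2}$ should give a volume close to $\sqrt{2}$; the sample size, slab width, and seed below are arbitrary choices.

```python
import numpy as np

# Monte Carlo sanity check (not from the paper): the central section of the
# cube [-1/2, 1/2]^3 orthogonal to a = (1, 1, 0)/sqrt(2) has area sqrt(2),
# the extremal value in Ball's theorem.  We approximate the section by a
# thin slab of width eps and divide the slab volume by its thickness.
rng = np.random.default_rng(0)
n, eps, N = 3, 0.01, 2_000_000
a = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)

x = rng.uniform(-0.5, 0.5, size=(N, n))   # uniform points in the cube
hits = np.abs(x @ a) < eps / 2            # thin slab around the hyperplane
section = hits.mean() / eps               # slab volume / thickness

print(section)   # ≈ sqrt(2) ≈ 1.414
```

Since $|a| = 1$, the slab $\{|\langle a, x\rangle| < \varepsilon/2\}$ has volume close to $\varepsilon$ times the section area for small $\varepsilon$, which is what the estimator exploits.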

Ball used Theorem 1 to give a negative answer to the famous Busemann-Petty problem in dimension 10 and higher [2]. His paper has inspired much research in convex geometry and is still very current. We refer to [11, 14, 16, 15, 6], to quote just a few of the most recent papers in the field, and to the references therein for a more detailed description of the literature.

Our first main result is the following quantitative version of Ball’s theorem.

###### Theorem 2.

Fix . Let with and be such that . Then, there exist two indices such that

Moreover, and in particular, for all , .

Ball’s slicing theorem, combined with a result of Rogozin [27], was used by Bobkov and Chistyakov [3] to derive an optimal inequality for the min-entropy power. Namely, they proved that

(1) $N_\infty(X_1 + \cdots + X_n) \ge \frac{1}{2} \sum_{i=1}^n N_\infty(X_i)$

for any independent random variables $X_1, \dots, X_n$, with $N_\infty$ the min-entropy power we now define. We may call the latter *Bobkov-Chistyakov’s min-entropy power inequality*.

For a (real-valued) random variable $X$, the *min-entropy power* is defined as

$N_\infty(X) = \|f\|_\infty^{-2}$

when $X$ is absolutely continuous with respect to the Lebesgue measure, with density $f$, and as $N_\infty(X) = 0$ otherwise. Here $\|f\|_\infty$ is the essential supremum of $f$ with respect to the Lebesgue measure.
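As a minimal numerical sketch (assuming the normalization $N_\infty(X) = \|f\|_\infty^{-2}$ just given), the following approximates the density of a sum of two i.i.d. uniform random variables by a discrete convolution and checks that inequality (1) then holds with equality; the grid step is an arbitrary choice.

```python
import numpy as np

# Illustration (assuming N_inf(X) = ||f||_inf^(-2)): for X1, X2 i.i.d.
# uniform on [0, 1], the sum has a triangular density with peak 1, so
# N_inf(X1 + X2) = 1 = (1/2) * (N_inf(X1) + N_inf(X2)), i.e. the
# min-entropy power inequality (1) holds with equality.
n = 10_000
dx = 1.0 / n
f = np.ones(n)                   # density of Uniform[0, 1] on a grid
f_sum = np.convolve(f, f) * dx   # density of X1 + X2 (triangular)

def n_inf(density):
    # min-entropy power computed from a density sampled on a grid
    return density.max() ** (-2.0)

lhs = n_inf(f_sum)
rhs = 0.5 * (n_inf(f) + n_inf(f))
print(lhs, rhs)   # both 1.0 (up to grid rounding)
```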

The nomenclature “min-entropy power” is information theoretic. In that field, the entropy power inequality refers to the fundamental inequality due to Shannon [28], which demonstrates that independent random variables $X_1, \dots, X_n$ with densities satisfy

$N(X_1 + \cdots + X_n) \ge \sum_{i=1}^n N(X_i),$

where $N(X) = \frac{1}{2\pi e} e^{2 h(X)}$ denotes the “entropy power”, with $h(X) = -\int f \log f$ the Shannon entropy of $X$. The Rényi entropy [25], defined as $h_\alpha(X) = \frac{1}{1-\alpha} \log \int f^\alpha$ for $\alpha \notin \{1, \infty\}$ and through continuous limits otherwise, gives a parameterized family of entropies that includes the usual Shannon entropy as a special case (by taking $\alpha \to 1$). It can easily be seen (through Jensen’s inequality) that, for a fixed variable $X$, the Rényi entropy $h_\alpha(X)$ is decreasing in $\alpha$. Thus, for a fixed variable, the parameter $\alpha = \infty$, for which $h_\infty(X) = -\log \|f\|_\infty$, furnishes the minimizer of the family, and $h_\infty$ is often referred to as the “min-entropy”. Hence the terminology and notation min-entropy power $N_\infty(X) = e^{2 h_\infty(X)} = \|f\|_\infty^{-2}$, used in analogy with the Shannon entropy power $N(X)$. Entropy power inequalities for the full class of Rényi entropies have been a topic of recent interest in information theory, see e.g. [4, 5, 17, 18, 21, 24, 26], and for more background we refer to [19] and references therein.
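The monotonicity in $\alpha$ can be checked numerically. The sketch below (an illustration, not from the paper) evaluates the Rényi entropies of a standard Gaussian on a grid and verifies that they decrease as $\alpha$ grows; the grid and the sample values of $\alpha$ are arbitrary choices.

```python
import numpy as np

# Numerical check that the Renyi entropy h_alpha is decreasing in alpha,
# here for a standard Gaussian density:
#   h_alpha = log(int f^alpha) / (1 - alpha)   for alpha != 1.
x = np.linspace(-10, 10, 200_001)
dx = x[1] - x[0]
f = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

def renyi(alpha):
    return np.log((f ** alpha).sum() * dx) / (1 - alpha)

vals = [renyi(a) for a in (0.5, 0.9, 1.1, 2.0, 5.0)]
print(vals)   # a decreasing sequence, tending to -log(f.max()) as alpha grows
```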

In [3] it was observed in a closing remark that the constant in (1) is sharp. Indeed, by taking $n = 2$ and $X_1$ and $X_2$ to be i.i.d. uniform on an interval, (1) is seen to hold with equality. In the following theorem, we demonstrate that this is (essentially) the only equality case. In fact, thanks to the quantitative form of Ball’s slicing theorem above, we can derive a quantitative form of Bobkov-Chistyakov’s min-entropy power inequality, see Corollary 6 below, which, in turn, allows us to characterize the equality cases in (1); this constitutes our second main theorem.

###### Theorem 3.

For independent random variables $X_1, \dots, X_n$,

(2) $N_\infty(X_1 + \cdots + X_n) \ge \frac{1}{2} \sum_{i=1}^n N_\infty(X_i),$

with equality if and only if there exist and and such that is uniform on a set , and

is a uniform distribution on

and for , is a point mass.

Note that this is distinct from the -dimensional case, see [20], where sharp constants can be approached asymptotically for i.i.d. and uniform on a -dimensional ball. More explicitly, for , if denotes all finite collections of independent -valued random variables

where are i.i.d. and uniform on a -dimensional Euclidean unit ball.

We end with a quantitative Khintchine inequality. Though our result is independent, we stress that, as is well known in the field and as was pointed out by Ball himself in [1, Additional remarks], the inequality of Theorem 1 is related to Khintchine’s inequalities.

Denote by $\varepsilon_1, \varepsilon_2, \dots$ symmetric $\pm 1$-Bernoulli variables. Khintchine’s inequalities assert that, for any $p > 0$ there exist constants $A_p, B_p > 0$ such that for all $n$ and all $a = (a_1, \dots, a_n) \in \mathbb{R}^n$ it holds

(3) $A_p \Big(\sum_{i=1}^n a_i^2\Big)^{1/2} \le \Big\| \sum_{i=1}^n a_i \varepsilon_i \Big\|_p \le B_p \Big(\sum_{i=1}^n a_i^2\Big)^{1/2}.$

Such inequalities were proved by Khintchine in a special case [13], and studied in a more systematic way by Littlewood, Paley and Zygmund.

The best constants in (3) are known. This is due to Haagerup [8], after partial results by Steckin [29], Young [31] and Szarek [30]. In particular, Szarek proved that $A_1 = \frac{1}{\sqrt{2}}$, which settled a long-standing conjecture of Littlewood, see [9].
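Because the $\varepsilon_i$ take finitely many values, the $L^1$ case can be checked exactly by enumerating sign patterns. The sketch below (with arbitrary test vectors) verifies Szarek's constant $1/\sqrt{2}$ and its equality case numerically.

```python
import itertools
import numpy as np

# Exact check of the L1 Khintchine inequality with Szarek's sharp constant
# A_1 = 1/sqrt(2): for unit vectors a, E|sum_i a_i eps_i| >= 1/sqrt(2),
# with equality when a has exactly two coordinates equal to 1/sqrt(2).
def l1_norm_of_sum(a):
    # average of |sum_i a_i eps_i| over all 2^n sign patterns
    return np.mean([abs(np.dot(a, s))
                    for s in itertools.product((-1, 1), repeat=len(a))])

extremal = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(l1_norm_of_sum(extremal))   # = 1/sqrt(2), the equality case

rng = np.random.default_rng(1)
a = rng.normal(size=6)
a /= np.linalg.norm(a)            # a random unit vector
print(l1_norm_of_sum(a))          # some value >= 1/sqrt(2)
```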

The connection between Theorem 1 and Khintchine’s inequalities goes as follows: as fully derived in [6], Ball’s theorem can be rephrased as

where are i.i.d. random vectors in uniform on the centered Euclidean unit sphere . As a result, Ball’s slicing of the cube can be seen as a sharp Khintchine-type inequality.

Our last main result is a quantitative version of (the lower bound in) Khintchine’s inequality for , which has the same flavour as Theorem 2 (though being independent of it).

###### Theorem 4.

Fix , an integer and such that , satisfying

Then, there exist two indices such that

Also, it holds and in particular, for any , .

The proofs of Theorem 2 and Theorem 4 are based on a careful analysis of Ball’s integral inequality

and, respectively, Haagerup’s integral inequality

in the special case . It is worth mentioning that Theorem 4 is restricted to because the latter integrals can be made explicit only in that case. In order to deal with general (at least , say, with implicitly defined through the Gamma function, see [8]), one would need to study very carefully the map and prove that it is increasing and then decreasing on , with careful control of its variations. The difficulty also comes from the fact that, at , . This in particular makes a quantitative version difficult to state properly. Indeed, for , the extremizers in the lower bound of (3) are those with two indices equal to and the others vanishing, while for , there are no extremizers for finite (the “extremizer” is

in the limit (by the central limit theorem)). At

the two “extremizers” coexist. Theorem 4 is therefore only a first attempt at understanding quantitative forms of Khintchine’s inequalities.

## 2. Quantitative slicing: Proof of Theorem 2

In this section, we give a proof of Theorem 2. We first need to recall part of the original proof by Ball, based on the Fourier and anti-Fourier transforms. We omit some details, which can be found in [1].

By symmetry, we can assume without loss of generality that for all . Reducing the dimension of the problem if necessary, we will further reduce to for all .

In [1] it is proved that for all (see also [23, step 1]). The argument is geometric. Put for the -th unit vector of the canonical basis. Then it is enough to observe that the volume of equals the volume of its projection onto the hyperplane (orthogonal to the -th direction) divided by the cosine of the angle between and , which is precisely , while the projection of onto has volume . Therefore for all , which proves one inequality of Theorem 2.

We follow the presentation of [23, step 2]. Let be the Fourier transform of . By definition, we have

Therefore, taking the anti-Fourier transform, Ball obtained the following explicit formula^{1}^{1}1An alternative explicit formula is given by Franck and Riede [7] (with a different normalization). The authors ask whether there could be an alternative proof of Ball’s theorem based on their formula.
for :

Applying Hölder’s inequality, since , one gets

(4) |

Ball’s theorem follows from the fact that with equality only if . Changing variables, this is equivalent to proving that

(5) $\frac{1}{\pi} \int_{-\infty}^{\infty} \left| \frac{\sin t}{t} \right|^p \, dt \le \sqrt{\frac{2}{p}}$

for every $p \ge 2$ (for $p = 2$ this is an identity). The latter is known as Ball’s integral inequality
and was proved in [1]^{2}^{2}2An asymptotic study of such integrals can be found in [12]. (see [23, 22] for alternative approaches).
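Ball's integral inequality can also be checked numerically. The sketch below (a Riemann-sum approximation with arbitrary truncation parameters, not a proof) evaluates $\frac{1}{\pi}\int_{-\infty}^{\infty} |\sin t / t|^p \, dt$ for a few values of $p$ and compares it with $\sqrt{2/p}$.

```python
import numpy as np

# Numerical check (not a proof) of Ball's integral inequality:
#   (1/pi) * int_{-inf}^{inf} |sin(t)/t|^p dt <= sqrt(2/p)  for p >= 2,
# with equality at p = 2.  Plain Riemann sum on [0, T]; the neglected tail
# is at most (2/pi) * T^(1-p) / (p-1).
def ball_integral(p, T=1000.0, m=2_000_000):
    t = np.linspace(1e-9, T, m)
    g = np.abs(np.sin(t) / t) ** p
    return 2.0 * g.sum() * (t[1] - t[0]) / np.pi   # even integrand

for p in (2.0, 3.0, 4.0):
    print(p, ball_integral(p), np.sqrt(2.0 / p))
```

At $p = 2$ the computed value is close to $1 = \sqrt{2/2}$ (up to truncation error), while for $p > 2$ it falls strictly below the bound.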

One key ingredient in the proof of Theorem 2 is a reverse form of Ball’s integral inequality given in Lemma 5 below.

Turning to our quantitative question, observe that if for all , , then (2) would imply that , a contradiction. Therefore, there must exist such that . The aim is now to prove that is close to . In fact, changing variables (), we observe that

Hence, is equivalent to saying that

Lemma 5 guarantees that, if , then . If then which amounts to . In any case

since for any .

Iterating the argument, assume that for all , . Since , (2) would imply that

where we used that and some algebra. This is a contradiction. Therefore, there exists a second index such that . Proceeding as for , we can conclude that necessarily

The expected result concerning , follows.

Since we can conclude that

Thus, for all . This ends the proof of the theorem.

###### Lemma 5.

Let be such that

for some small . Then, .

###### Proof.

Set . We use the technology developed in [1] where it is proved that

and

with

Therefore, the assumption

can be recast as

Note that, in [1], it is proved that so that the left hand side of the latter is positive and in fact an infinite sum of positive terms. Hence, the first term of the sum must not exceed the right hand side. Since and , it holds

Returning to the variable , it follows that , from which the expected result follows since . ∎

## 3. Min-entropy power inequality

In this section we extend the quantitative slicing results for the unit cube to a quantitative version (Corollary 6 below) of Bobkov and Chistyakov’s min-entropy power inequality (inequality (1)) for random variables in . Then we prove the full characterization of the extremizers of this min-entropy power inequality, i.e. we prove Theorem 3.

The quantitative version of Bobkov and Chistyakov’s min-entropy power inequality reads as follows.

###### Corollary 6.

For independent random variables and if

(6) |

then there exist indices and such that

while

Its proof relies on the following result by Rogozin.

###### Theorem 7 (Rogozin [27]).

For independent random variables, let be independent random variables uniform on an origin-symmetric interval chosen such that , with the interpretation that is deterministic, and equal to zero, in the case that . Then,
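Rogozin's comparison can be illustrated numerically. The sketch below (a hypothetical example, not from the paper) takes a triangular density and the uniform density with the same maximum, and checks on a grid that the uniform summands produce the larger density peak for the sum.

```python
import numpy as np

# Illustration of Rogozin's comparison: replacing each summand by a uniform
# variable with the same maximal density can only increase the maximal
# density of the sum.  Here both starting densities have maximum 1.
dx = 1e-3
x = np.arange(-1.0, 1.0, dx)
tri = np.maximum(1.0 - np.abs(x), 0.0)    # triangular density, max = 1
uni = (np.abs(x) <= 0.5).astype(float)    # Uniform[-1/2, 1/2], max = 1

max_tri = (np.convolve(tri, tri) * dx).max()   # peak of the triangular sum
max_uni = (np.convolve(uni, uni) * dx).max()   # peak of the uniform sum
print(max_tri, max_uni)   # ≈ 2/3 versus ≈ 1
```

The triangular sum peaks at $\int \mathrm{tri}^2 = 2/3$, strictly below the uniform sum's peak of $1$, as the theorem predicts.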

###### Proof of Corollary 6.

Suppose that, for

(7) |

then by Theorem 7,

Writing and we can re-write this inequality as

where we observe that is a unit vector and is the uniform distribution on the unit cube. Moreover since are log-concave and symmetric, is as well, and hence . Thus, we have
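The identification between densities of weighted sums of uniforms and hyperplane sections of the cube can be illustrated numerically. The sketch below (with illustrative coefficients, a standard fact rather than the paper's computation) recovers Ball's extremal value $\sqrt{2}$ as the peak density of $(U_1 + U_2)/\sqrt{2}$.

```python
import numpy as np

# Standard fact: for independent U1, U2 uniform on [-1/2, 1/2], the density
# of a1*U1 + a2*U2 at 0 equals the central hyperplane-section volume of the
# cube in direction a; for a = (1, 1)/sqrt(2) this is sqrt(2), Ball's
# extremal value.
a1 = a2 = 1.0 / np.sqrt(2.0)
dx = 1e-4
n = int(round(a1 / dx))       # number of grid cells in the support
f1 = np.full(n, 1.0 / a1)     # density of a1*U1: height 1/a1, width a1
f2 = np.full(n, 1.0 / a2)     # density of a2*U2

peak = (np.convolve(f1, f2) * dx).max()
print(peak)   # ≈ sqrt(2) ≈ 1.4142
```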

###### Proof of Theorem 3.

We distinguish between sufficiency and necessity, the former being simpler.

- Necessity:

Writing for convenience by Corollary 6 when , equality in (2) implies that

That is

and since symmetric rearrangement preserves the min-entropy and reduces the entropy of independent sums. Letting represent the densities of and respectively, this implies

which can only hold if . Reversing the roles of and , we must also have . Since obviously holds, we have the following chain of inclusions,

From this it follows that and are i.i.d. uniform distributions.

Thus, and are uniform distributions as well. Without loss of generality we may assume that and are uniform on sets of measure , and . Denote . Then
is uniformly continuous and with . Indeed, because continuous compactly supported functions are dense in , it follows^{3}^{3}3Given , there exists a continuous and compactly supported such that . Since is continuous and compactly supported, it is uniformly continuous, and hence for small enough , . Thus, for , . Further, , so that for sufficiently small, can be made arbitrarily small as well. Thus,

hence is indeed uniformly continuous.

Taking to be continuous, compactly supported functions approximating in , we have

Since the right-hand side goes to zero and is compactly supported, it must be true that tends to zero for large . Thus attains its maximum value at some point , and we can rewrite the equality of the min-entropies of , and , as . Thus, almost surely, .

Put . By the same argument, since , is uniform on a set . Thus, . Hence, for , and the are deterministic. Letting , the proof of necessity is complete.

- Sufficiency:

To prove sufficiency, assume that is uniform on a set , uniform on , and a point mass for . Then,

Observe that

Thus and it follows that . ∎

## 4. Quantitative Khintchine’s inequality

In this section we prove Theorem 4; its proof resembles that of Theorem 2. We first need to recall some results from [8].

Assume without loss of generality that for all . Put

From [8, Lemma 1.4 (and its proof)], we can extract that

is an increasing function of , with and . Haagerup also proved [8, Lemma 1.3] that

(8) |

with the convention that if (recall the definition of from (3)). For completeness, let us reproduce the argument using Nazarov and Podkorytov’s presentation [23]. From the identity

$|x| = \frac{2}{\pi} \int_0^{\infty} \frac{1 - \cos(tx)}{t^2} \, dt,$

applied to , we have
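The classical identity $|x| = \frac{2}{\pi}\int_0^\infty (1-\cos(tx)) t^{-2} \, dt$ and the resulting formula for the first absolute moment of a signed sum can be checked numerically. The sketch below (with an arbitrary coefficient vector) compares the exact sign-pattern average with a Riemann-sum evaluation of the integral.

```python
import itertools
import numpy as np

# Numerical check of the classical identity behind Haagerup's argument:
#   |x| = (2/pi) * int_0^inf (1 - cos(t*x)) / t^2 dt,
# which, averaged over the signs eps_i = +/-1, gives
#   E|sum_i a_i eps_i| = (2/pi) * int_0^inf (1 - prod_i cos(a_i*t)) / t^2 dt.
a = np.array([0.6, 0.8])   # illustrative coefficients with |a| = 1

# Left-hand side: exact average over all 2^n sign patterns.
lhs = np.mean([abs(np.dot(a, s))
               for s in itertools.product((-1, 1), repeat=len(a))])

# Right-hand side: Riemann sum; integrand is bounded near 0 and O(t^-2).
dt = 1e-3
t = np.arange(dt, 2000.0, dt)
integrand = (1.0 - np.cos(a[0] * t) * np.cos(a[1] * t)) / t ** 2
rhs = (2.0 / np.pi) * integrand.sum() * dt
print(lhs, rhs)   # both ≈ 0.8
```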