## 1 Introduction

The celebrated *Spencer's Theorem* in discrepancy theory [Spencer1985SixSD] shows that "six standard deviations suffice" for balancing vectors in the $\ell_\infty$-norm: for any $v_1, \dots, v_n \in [-1,1]^n$, there exist signs $x \in \{-1,1\}^n$ such that $\|\sum_{i=1}^n x_i v_i\|_\infty \le 6\sqrt{n}$. More generally, Spencer showed that for $m \ge n$ vectors in $[-1,1]^n$ one can achieve a bound of $O(\sqrt{n \log(2m/n)})$. While his proof used a nonconstructive form of the *partial coloring lemma* based on the pigeonhole principle, in the past decade several approaches starting with the breakthrough work of Bansal [DiscrepancyMinimization-Bansal-FOCS2010] have succeeded in computing such signs in polynomial time [DiscrepancyMinimization-LovettMekaFOCS12, ConstructiveDiscrepancy-Rothvoss-FOCS2014, DBLP:journals/corr/LevyRR16, DBLP:journals/rsa/EldanS18].

As for balancing vectors of bounded $\ell_2$-norm, the situation has been more delicate. In the same paper, Spencer [Spencer1985SixSD] showed a nonconstructive bound for the discrepancy of such vectors and also stated a conjecture of Komlós that this may be improved to $O(1)$. This was improved to $O(\sqrt{\log n})$ by Banaszczyk [BalancingVectors-Banaszczyk98], who showed that in fact for any set of vectors of $\ell_2$-norm at most 1 and any convex body $K$ of Gaussian measure at least $1/2$, some $\pm 1$ combination of such vectors lies in $5K$. For the more general setting of discrepancy, the work of Barthe, Guédon, Mendelson and Naor [GeometryOfLpBall-BartheGuedonMendelsonNaor2005] shows that, for , a scaling of -dimensional slices of the ball in does have Gaussian measure at least $1/2$, thus implying a corresponding upper bound for balancing vectors from to . For , this matches the to bound of . Banaszczyk's proof was nonconstructive and the first polynomial-time algorithm in the general convex body setting was found only recently by Bansal, Dadush, Garg and Lovett [GramSchmidtWalk-BansalDGL-STOC18], while the Komlós conjecture remains an open problem. The work of [GramSchmidtWalk-BansalDGL-STOC18] actually shows that for any vectors there exists an efficiently computable distribution over signs so that the sum is -subgaussian and will be in $K$ with good probability. Interestingly, this means their algorithm is *oblivious* to the body $K$, which is a striking difference to the regime of , where any algorithm needs to depend on . The connection between Banaszczyk's theorem and subgaussianity is due to Dadush et al. [DBLP:conf/approx/DadushGLN16].

For the general setting of balancing vectors from to norms, not much was known beyond Spencer's theorem () or what can be deduced from Banaszczyk's theorem as above: any vector in also belongs to , thus implying a discrepancy bound of . Even in the square case , it has been an open problem to remove the dependency on [8555088]. The goal of this paper is to provide a unified approach for balancing from to via optimal constructive fractional partial colorings, which yield optimal bounds for most of the range . We obtain such fractional partial colorings by proving a new measure lower bound on the relevant linear preimages of balls (Section 3) and an improved algorithm which works for sets of Gaussian measure for any (Section 4), as opposed to previous work ([ConstructiveDiscrepancy-Rothvoss-FOCS2014, DBLP:journals/rsa/EldanS18]) which required measure for *sufficiently small* .

As an application of our results, we show a slight improvement to the bounds for the well-known Beck-Fiala conjecture [BECK19811], a discrete version of Komlós. It asks for a bound of $O(\sqrt{t})$ on the discrepancy of any vectors in $\{0,1\}^n$, each with at most $t$ ones. We establish the conjecture for and show slightly improved bounds when is close to (Corollary 4).

Notation. Let $B_p^n = \{x \in \mathbb{R}^n : \|x\|_p \le 1\}$ denote the unit ball in the $\ell_p$-norm. The *Gaussian measure* of a measurable set $K \subseteq \mathbb{R}^n$ is given by $\gamma_n(K) = \Pr_{X \sim N(0, I_n)}[X \in K]$. We denote the *mean width* of a convex set as . The Euclidean distance to a set is denoted by . If is a matrix, we denote its rows by and its columns by . Naturally, a matrix can also be interpreted as a (not necessarily invertible) linear map. Then for any set , we use the notation .

### 1.1 Our contribution

Our main contribution is a tight bound on partial colorings for balancing from to :

###### Theorem 1.

Let and . Then for any , there exists a polynomial-time computable partial coloring with so that

for some universal constant .

We would like to mention that, as noted by Banaszczyk [Banaszczyk1993], the condition does not weaken the theorem: in fact, for the upper bound can only be larger than that of by a factor of two. On the other hand, the condition is natural, for otherwise we would need a polynomial dependence on the dimension , even for . By iteratively applying Theorem 1 we can obtain a full coloring at the expense of another factor of , with the caveat that whenever :

###### Theorem 2.

Let and with . Then for any , there exist polynomial-time computable signs so that

for some universal constant .

This significantly improves upon the general bound from Banaszczyk's theorem in [8555088] when for (not too small) and .

When and , we get the following corollary which matches, up to a constant, the lower bound of [Banaszczyk1993] known to hold for any norm:

###### Corollary 3 ( version of Spencer's theorem).

Let and . Then for any , there exist polynomial-time computable signs so that

for some universal constant .

The following corollary shows the Beck-Fiala conjecture holds for and slightly improves upon the best known bound of [BalancingVectors-Banaszczyk98] when is close to :

###### Corollary 4 (Bound for Beck-Fiala).

Let and , each with at most ones. Then there exist polynomial-time computable signs so that

for some universal constant .

Finally, we show the partial coloring bound in TheoremΒ 1 is tight at least when :

###### Theorem 5.

Let . There exist infinitely many positive integers for which we can find such that for any with one has

for some universal constant .

As we mentioned earlier, the result of Gluskin [RootNDisc-Gluskin89] and Giannopoulos [Giannopoulos1997] shows that for a *small enough* constant, a symmetric convex body with contains a partial coloring with a linear number of entries in . We can prove that for fractional colorings *any* constant suffices. Our argument even works for intersections with a large enough subspace.

###### Theorem 6.

For all , there is a constant so that the following holds: there is a randomized polynomial-time algorithm which, for a symmetric convex set with , a shift and a subspace with , finds an with and .

## 2 Preliminaries

We will use two elementary inequalities dealing with $\ell_p$-norms. The first one estimates the ratio between different norms:

###### Lemma 7.

For any $1 \le p \le q \le \infty$ and $x \in \mathbb{R}^n$, we have $\|x\|_q \le \|x\|_p \le n^{1/p - 1/q}\, \|x\|_q$.

It is instructive to note that this bound implies . If one has an upper bound on the largest entry of a vector, then one can strengthen the first inequality to . More generally:

###### Lemma 8.

For any and , we have .
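Since the norm comparison of Lemma 7 is used repeatedly below, the following small sketch (not from the paper; the helper `lp_norm` is ours) numerically checks the standard form $\|x\|_q \le \|x\|_p \le n^{1/p - 1/q}\,\|x\|_q$ for $1 \le p \le q \le \infty$:

```python
# Numerical sanity check of the standard comparison between l_p norms:
#   ||x||_q <= ||x||_p <= n^(1/p - 1/q) * ||x||_q   for 1 <= p <= q <= inf.
import numpy as np

def lp_norm(x, p):
    """l_p norm of a vector; p = np.inf gives the max-norm."""
    if p == np.inf:
        return float(np.max(np.abs(x)))
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

rng = np.random.default_rng(0)
n = 50
for _ in range(100):
    x = rng.standard_normal(n)
    for p, q in [(1, 2), (2, 4), (1.5, np.inf)]:
        lo, hi = lp_norm(x, q), lp_norm(x, p)
        factor = n ** (1.0 / p - (0.0 if q == np.inf else 1.0 / q))
        assert lo <= hi + 1e-9            # ||x||_q <= ||x||_p
        assert hi <= factor * lo + 1e-9   # ||x||_p <= n^(1/p-1/q) * ||x||_q
```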

We will also need the following version of *Khintchine's inequality*; see e.g. the excellent textbook of Artstein-Avidan, Giannopoulos and Milman [AsymptoticGeometricAnalysisBook2005].

###### Lemma 9 (Khintchine's inequality).

Given , and , we have

where is a universal constant.

This fact can be derived from a standard Chernoff bound, which guarantees that for a vector with one has ; one can then check that the regime of dominates the contribution to . We use it to show the following:
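As an illustration, one can check the moment bound $(\mathbb{E}_\varepsilon|\langle a, \varepsilon\rangle|^q)^{1/q} \le C\sqrt{q}\,\|a\|_2$ for Rademacher signs $\varepsilon$ by Monte Carlo (a sketch, not from the paper; the constant $C = 2$ is our assumption and comfortably covers the standard estimate):

```python
# Monte Carlo check of Khintchine's inequality for Rademacher sums:
#   (E |<a, eps>|^q)^(1/q) <= C * sqrt(q) * ||a||_2,   eps uniform in {-1,1}^n.
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(30)
a /= np.linalg.norm(a)                    # normalize so ||a||_2 = 1
eps = rng.choice([-1.0, 1.0], size=(200_000, a.size))
sums = eps @ a                            # samples of <a, eps>

for q in [1, 2, 4, 6]:
    moment = float(np.mean(np.abs(sums) ** q)) ** (1.0 / q)
    assert moment <= 2.0 * np.sqrt(q)     # empirical q-th moment bound
```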

###### Lemma 10.

Given and and , we have

###### Proof.

By convexity of , Jensen's inequality in and Khintchine's inequality in (Lemma 9) we have

If , write as . Then by Lemma 7,

Now suppose that . Define to be the vector with th coordinate . Since is a norm, we can use the triangle inequality to get

Either way, we conclude that , as desired. ∎

A well-known correlation inequality for Gaussian measure is the following:

###### Lemma 11 (Šidák [SidaksLemma67] and Khatri [KhatriCorrelationInequality67]).

For any symmetric convex set and strip , one has .

It is worth noting that a recent result of Royen [ProofOfGCI-Royen-Arxiv2014] extends this to arbitrary symmetric convex sets, though its full power will not be needed. We refer to the exposition of Latała and Matlak [RoyensProofOfGCI-LatalaMatlak-Arxiv2017]. We also need a one-dimensional estimate:

###### Lemma 12.

For a strip , one has
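For concreteness, one standard estimate of this kind is $\gamma_1([-t, t]) \ge 1 - e^{-t^2/2}$ for all $t \ge 0$; this exact form is our assumption, and the sketch below verifies it using the identity $\gamma_1([-t, t]) = \operatorname{erf}(t/\sqrt{2})$:

```python
# Check the one-dimensional Gaussian strip estimate
#   gamma_1([-t, t]) >= 1 - exp(-t^2 / 2)
# using gamma_1([-t, t]) = erf(t / sqrt(2)) for a standard Gaussian.
import math

def gaussian_strip_measure(t):
    """Gaussian measure of the strip [-t, t] in one dimension."""
    return math.erf(t / math.sqrt(2.0))

for t in [k / 10.0 for k in range(1, 51)]:
    assert gaussian_strip_measure(t) >= 1.0 - math.exp(-t * t / 2.0)
```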

We use the following scaling lemma to deal with constant factors:

###### Lemma 13.

Let be a measurable set and be a closed Euclidean ball such that . Then for all . In particular, if for some constant then also .

For Section 4 we also need two helpful results. For the first one, see [Handel2014ProbabilityIH].

###### Theorem 14.

If is -Lipschitz, then for one has

The classical *Urysohn Inequality* states that among all convex bodies of identical volume,
the Euclidean ball minimizes the width. We will need a variant that is phrased in terms of the Gaussian
measure rather than volume. For a proof, see Eldan and Singh [DBLP:journals/rsa/EldanS18].

###### Theorem 15 (Gaussian Variant of Urysohnβs Inequality).

Let be a convex body and let be so that . Then .

## 3 Main technical result

In this section we show our measure lower bound for balancing vectors from to :

###### Theorem 16.

Let and . Then for any ,

In order to show Theorem 16, roughly speaking it will suffice to show the corresponding bounds for the two special cases of , which can be bootstrapped into a general bound. First we address the simpler case , which at heart is based on Khintchine's inequality:

###### Lemma 17.

Let and . Then for any ,

###### Proof.

Next, we deal with the crucial case :

###### Lemma 18.

Let and . Then for any with columns and rows , the body satisfies

###### Proof.

The main idea in the proof is that we can convert the bound on the -norm of the columns into information about the -norm of the rows . Namely,

(1)
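Assuming $A \in \mathbb{R}^{n \times m}$ has rows $u_i$ and columns $v_j$ with $\|v_j\|_p \le 1$ (our reading of the setup; the displayed identity is a reconstruction, not taken from the source), the conversion is the elementary exchange of summation:

```latex
\sum_{i=1}^{n} \|u_i\|_p^p
  \;=\; \sum_{i=1}^{n} \sum_{j=1}^{m} |A_{ij}|^p
  \;=\; \sum_{j=1}^{m} \|v_j\|_p^p
  \;\le\; m .
```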

We rescale the row vectors to and abbreviate , so that Eq. (1) simplifies to . We may then apply Šidák's Lemma 11 and bound the one-dimensional measure:

Here we have used an estimate that remains to be proven:

Claim I. *For any and one has where is a universal constant.*

Proof of Claim I.
It will suffice to show for any :

To see this, let and note that it suffices to show

For we can use the inequality to see that the left side is at most 1. For we use instead to get

where in the last step we use the Stirling bound for . ∎

###### Remark 1.

This argument is largely motivated by the result of Ball and Pajor [ConvexBodiesWithFewFaces-PajorBall-PAMS-1990] which bounds volume instead of Gaussian measure. More specifically, [ConvexBodiesWithFewFaces-PajorBall-PAMS-1990] prove that for and any matrix , the set

satisfies . In contrast, our Lemma 18 provides a simpler proof of a stronger result (up to a constant scaling), since the volume of a convex body is always at least its Gaussian measure.

We are now ready to show TheoremΒ 16:

###### Proof of Theorem 16.

Let and let denote the matrix with columns . By Lemma 8 we know that for any with and one has . Phrased in geometric terms, this means . We would like to point out that this is crucial for obtaining a dependence solely on rather than on the larger parameter . Next, note that for any sets and , which we use together with the inequality of Šidák and Khatri (Lemma 11) to obtain the estimate

where we have used the measure lower bounds from Lemmas 17 and 18. This shows the claimed bound whenever , where the hidden constant can be removed by scaling the corresponding convex body; see Lemma 13.

It remains to prove that we can bootstrap the existing bound for the regime of large . So let us assume that . Let be a parameter to be determined and remark that Lemma 7 gives . Applying the above measure lower bound for implies

We can rewrite the above upper bound on the -norm as

Taking gives the desired result, as then and Lemma 13 can again deal with such constant scaling. ∎

Now our main result on existence of partial colorings easily follows:

###### Proof of Theorem 1.

Next, we show how to obtain a full coloring by iteratively finding partial colorings.

###### Proof of Theorem 2.

The intuition behind the extra factor for obtaining a full coloring is as follows: abbreviate the exponent as . Then it takes iterations until the term decreases by a factor of 1/2, which dominates the minuscule growth of the logarithmic term. Then indeed the overall discrepancy is dominated by the discrepancy from the first iterations.

We can now demonstrate how a nontrivial choice of -norms can be beneficial in classical discrepancy settings:

###### Proof of CorollaryΒ 4.

Consider column vectors with at most nonzero entries per .
First let us study the case . Since for each column , Theorem 2 provides a coloring with . (In fact, a more careful choice of gives a better discrepancy bound of , even though the Beck-Fiala conjecture asks only for .)

Now if , we take with . Then and Theorem 2 gives with

We conclude this section by showing that the term in our bounds is necessary:

###### Proof of TheoremΒ 5.

Consider the case . Take a *Hadamard matrix*, i.e., a matrix with entries in $\{-1,1\}$ whose rows and columns are pairwise orthogonal. Such matrices are known to exist at least whenever is a power of 2. The columns satisfy and for any with we know that and , so that by Lemma 7 we have
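The two facts about $H$ used here can be checked concretely. The sketch below (not from the paper) builds a Sylvester-type Hadamard matrix and verifies that orthogonality forces $\|Hx\|_2 = \sqrt{n}\,\|x\|_2 = n$ for every $x \in \{-1,1\}^n$, whence $\|Hx\|_\infty \ge \|Hx\|_2/\sqrt{n} = \sqrt{n}$:

```python
# Sylvester's doubling construction of a Hadamard matrix, plus a check of the
# lower-bound argument: for signs x, ||Hx||_2 = sqrt(n) * ||x||_2 = n, so the
# largest coordinate of Hx is at least ||Hx||_2 / sqrt(n) = sqrt(n).
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of order 2^k with entries in {-1, 1}."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

n = 2 ** 5
H = sylvester_hadamard(5)
assert np.allclose(H @ H.T, n * np.eye(n))             # rows are orthogonal

rng = np.random.default_rng(3)
for _ in range(50):
    x = rng.choice([-1.0, 1.0], size=n)
    assert np.isclose(np.linalg.norm(H @ x), n)        # ||Hx||_2 = n
    assert np.max(np.abs(H @ x)) >= np.sqrt(n) - 1e-9  # ||Hx||_inf >= sqrt(n)
```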

For , take an identity matrix . For every with we have , and the columns of