
Convex Influences

We introduce a new notion of influence for symmetric convex sets over Gaussian space, which we term "convex influence". We show that this new notion of influence shares many of the familiar properties of influences of variables for monotone Boolean functions f: {±1}^n → {±1}. Our main results for convex influences give Gaussian space analogues of many important results on influences for monotone Boolean functions. These include (robust) characterizations of extremal functions, the Poincaré inequality, the Kahn-Kalai-Linial theorem, a sharp threshold theorem of Kalai, a stability version of the Kruskal-Katona theorem due to O'Donnell and Wimmer, and some partial results towards a Gaussian space analogue of Friedgut's junta theorem. The proofs of our results for convex influences use very different techniques than the analogous proofs for Boolean influences over {±1}^n. Taken as a whole, our results extend the emerging analogy between symmetric convex sets in Gaussian space and monotone Boolean functions from {±1}^n to {±1}.


1 Introduction

Background: An intriguing analogy. This paper is motivated by an intriguing, but at this point only partially understood, analogy between monotone Boolean functions over the hypercube and symmetric convex sets in Gaussian space. Perhaps the simplest manifestation of this analogy is the following pair of easy observations: since a Boolean function f: {±1}^n → {±1} is monotone if f(x) ≤ f(y) whenever x ≤ y coordinate-wise, it is clear that "moving an input up towards (1,…,1)" by flipping bits from −1 to 1 can never decrease the value of f. Similarly, we may view a symmetric (a set K is symmetric if −x ∈ K whenever x ∈ K) convex set K ⊆ R^n as a 0/1-valued function, and it is clear from symmetry and convexity that "moving an input in R^n towards the origin" can never decrease the value of the function: for t ∈ [0,1], the point tx = ((1+t)/2)·x + ((1−t)/2)·(−x) is a convex combination of x and −x, and so lies in K whenever x does.
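This one-line argument can be sanity-checked mechanically; the sketch below (ours, using an arbitrary ellipsoid as the symmetric convex set) confirms that scaling a member of such a set towards the origin keeps it in the set:

```python
# Tiny check of the "easy observation": if K is symmetric and convex, then
# x ∈ K implies t·x ∈ K for every t ∈ [0, 1], since t·x is the convex
# combination ((1+t)/2)·x + ((1-t)/2)·(-x). We test an example ellipsoid.
import random

a = [1.0, 4.0, 0.25]  # axis weights of the example ellipsoid {x : Σ a_i x_i^2 <= 1}
in_K = lambda x: sum(ai * xi * xi for ai, xi in zip(a, x)) <= 1.0

rng = random.Random(0)
for _ in range(10_000):
    x = [rng.uniform(-2, 2) for _ in range(3)]
    if in_K(x):
        t = rng.uniform(0, 1)
        assert in_K([t * xi for xi in x])  # moving towards the origin stays in K
print("moving towards the origin never leaves the symmetric convex set")
```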

The analogy extends far beyond these easy observations to involve many analytic and algorithmic aspects of monotone Boolean functions over {±1}^n under the uniform distribution and symmetric convex subsets of R^n under the Gaussian measure. Below we survey some known points of correspondence (several of which were only recently established) between the two settings:

1. Density increments. The well-known Kruskal-Katona theorem [Kruskal:63, katona1968theorem] gives quantitative information about how rapidly a monotone f: {±1}^n → {±1} increases on average as the input to f is "moved up towards (1,…,1)." For a monotone f, let μ_k(f) be the fraction of the C(n,k) many weight-k inputs for which f outputs 1; the Kruskal-Katona theorem implies (see e.g. [lovasz2007combinatorial]) that if μ_k(f) is bounded away from 0 and 1 and k is close to n/2, then μ_{k+1}(f) exceeds μ_k(f) by a quantitatively non-negligible increment. Analogous "density increment" results for symmetric convex sets are known to hold in various forms, where the analogue of moving an input in {±1}^n up towards (1,…,1) is now moving an input in R^n in towards the origin, and the analogue of μ_k(f) is now α_K(r), which is defined to be the fraction of the origin-centered radius-r sphere that lies in K. For example, Theorem 2 of the recent work [DS21-weak-learning] shows that if K is a symmetric convex set (which we view as a function K: R^n → {0,1}) whose shell density α_K(r) is bounded away from 0 and 1, then α_K(r′) exceeds α_K(r) by a quantitatively non-negligible amount for suitably smaller r′ < r.

2. Weak learning from random examples. Building on the above-described density increment for symmetric convex sets, [DS21-weak-learning] showed that any symmetric convex set can be weakly learned, to accuracy noticeably better than 1/2, in poly(n) time given poly(n) many random examples drawn from N(0,1)^n. [DS21-weak-learning] also shows that no poly(n)-time weak learning algorithm (even one allowed to make membership queries) can achieve substantially better accuracy. These results are closely analogous to the known (matching) upper and lower bounds for poly(n)-time weak learning of monotone functions with respect to the uniform distribution over {±1}^n: Blum et al. [BBL:98] showed that accuracy 1/2 + O(log n/√n) is the best possible for a poly(n)-time weak learner (even if membership queries are allowed), and O'Donnell and Wimmer [OWimmer:09] gave a poly(n)-time weak learner that achieves accuracy 1/2 + Ω(log n/√n) using random examples only.

3. Analytic structure and strong learning from random examples. [BshoutyTamon:96] showed that the Fourier spectrum of any n-variable monotone Boolean function over {±1}^n is concentrated in the first O(√n) levels. Analogously, [KOS:08] showed that the same concentration holds for the first O(√n) levels of the Hermite spectrum (the Hermite polynomials form an orthonormal basis for the space of square-integrable real-valued functions over Gaussian space; the Hermite spectrum of a function over Gaussian space is analogous to the familiar Fourier spectrum of a function over the Boolean hypercube — see Section 2 for details) of the indicator function of any convex set. In both cases this concentration gives rise to a learning algorithm, using random examples only, running in time n^{O(√n)} and learning the relevant class (either monotone Boolean functions over the n-dimensional hypercube or convex subsets of R^n under the Gaussian distribution) to any constant accuracy.

4. Qualitative correlation inequalities. The well-known Harris-Kleitman theorem [harris60, kleitman66] states that monotone Boolean functions are non-negatively correlated: any monotone f, g: {±1}^n → {0,1} must satisfy E[fg] ≥ E[f]·E[g]. The Gaussian Correlation Inequality [roy14] gives an exactly analogous statement for symmetric convex sets in Gaussian space: if K, L ⊆ R^n are any two symmetric convex sets, then E[KL] ≥ E[K]·E[L], where now the expectations are with respect to N(0,1)^n.

5. Quantitative correlation inequalities. Talagrand [Talagrand:96] proved the following quantitative version of the Harris-Kleitman inequality: for monotone f, g: {±1}^n → {0,1},

 E[fg] − E[f]·E[g] ≥ (1/C)·Ψ( Σ_{i=1}^n Inf_i[f]·Inf_i[g] ).  (1)

Here Ψ(x) := x/log(e/x), C > 0 is an absolute constant, Inf_i[f] is the influence of coordinate i on f (see Section 2), and the expectations are with respect to the uniform distribution over {±1}^n. A recent work [DNS20] proved a closely analogous quantitative version of the Gaussian Correlation Inequality: for symmetric convex subsets K, L of R^n,

 E[KL] − E[K]·E[L] ≥ (1/C)·Υ( Σ_{i=1}^n K̃(2e_i)·L̃(2e_i) ),  (2)

where Υ is an explicit function playing the role of Ψ, C > 0 is a universal constant, K̃(2e_i) denotes the degree-2 Hermite coefficient of K in direction e_i (see Section 2), and the expectations are with respect to N(0,1)^n.

We remark that in many of the above cases the proofs of the two analogous results (Boolean versus Gaussian) are very different from each other even though the statements are quite similar. For example, the Harris-Kleitman theorem has a simple one-paragraph proof by induction on n, whereas the Gaussian Correlation Inequality was a famous conjecture for four decades before Thomas Royen proved it in 2014.

Motivation. We feel that the examples presented above motivate a deeper understanding of this "Boolean/Gaussian analogy." This analogy may be useful in a number of ways; in particular, via this connection, known results in one setting may suggest new questions and results for the other setting. (Indeed, the recent Gaussian density increment and weak learning results of [DS21-weak-learning] were inspired by the Kruskal-Katona theorem and the weak learning algorithms and lower bounds of [BBL:98] for monotone Boolean functions. Similarly, the recent quantitative version of the Gaussian Correlation Inequality established in [DNS20] was motivated by the existence of Talagrand's quantitative correlation inequality for monotone Boolean functions.) Thus the overarching goal of this paper is to strengthen the analogy between monotone Boolean functions over {±1}^n and symmetric convex sets in Gaussian space. We do this through the study of a new notion of influence for symmetric convex sets in Gaussian space.

1.1 This Work: A New Notion of Influence for Symmetric Convex Sets

Before presenting our new notion of influence for symmetric convex sets in Gaussian space, we first briefly recall the usual notion for Boolean functions. For f: {±1}^n → {±1}, the influence of coordinate i on f, denoted Inf_i[f], is Pr_x[f(x) ≠ f(x^{⊕i})], where x is uniform random over {±1}^n and x^{⊕i} denotes x with its i-th coordinate flipped. It is a well-known fact (see e.g. Proposition 2.21 of [ODbook]) that for monotone Boolean functions f, we have Inf_i[f] = f̂(i), the degree-1 Fourier coefficient corresponding to coordinate i.
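As a quick numerical illustration (our own sketch, not from the paper), the identity Inf_i[f] = f̂(i) for monotone f can be checked by exhaustive enumeration; here we use the 3-variable majority function:

```python
# Brute-force check that for a monotone Boolean function f: {±1}^n -> {±1},
# the influence Inf_i[f] = Pr[f(x) != f(x with bit i flipped)] equals the
# degree-1 Fourier coefficient \hat{f}(i) = E[f(x) * x_i].
from itertools import product

def maj3(x):
    return 1 if sum(x) > 0 else -1

def influence(f, i, n):
    flips = 0
    for x in product([-1, 1], repeat=n):
        y = list(x)
        y[i] = -y[i]
        flips += f(x) != f(tuple(y))
    return flips / 2 ** n

def fourier_deg1(f, i, n):
    return sum(f(x) * x[i] for x in product([-1, 1], repeat=n)) / 2 ** n

n = 3
influences = [influence(maj3, i, n) for i in range(n)]
coefficients = [fourier_deg1(maj3, i, n) for i in range(n)]
assert influences == coefficients == [0.5, 0.5, 0.5]
print(influences)  # [0.5, 0.5, 0.5]
```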

Inspired by the relation Inf_i[f] = f̂(i) for influences of monotone Boolean functions, and by the close resemblance between Equation 1 and Equation 2, [DNS20] proposed to define the influence of a direction v on a symmetric convex set K, for a unit vector v ∈ R^n, to be

 Inf_v[K] := −K̃(2v),

the (negated) degree-2 Hermite coefficient of K in direction v (see Section 3.1 for a detailed definition). (We observe that if K is a symmetric set then, since its indicator function is even, the degree-1 Hermite coefficient K̃(v) must be 0 for any direction v.) [DNS20] proved that this quantity is non-negative for any direction v and any symmetric convex K (see Section 3.1). They also defined the total influence of K to be

 TInf[K] := Σ_{i=1}^n Inf_{e_i}[K]  (3)

and observed that this definition is invariant under the choice of orthonormal basis, i.e. any orthonormal basis could be used in place of e_1, …, e_n, but did not explore these definitions further.
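To make the definition concrete, here is a small Monte Carlo sketch (ours; the set, dimension, and sample size are arbitrary choices) estimating a convex influence via the expansion Inf_v[K] = E_{x∼N(0,1)^n}[K(x)·(1−(v·x)²)]/√2, which follows from h₂(t) = (t²−1)/√2:

```python
# Monte Carlo estimate of the convex influence Inf_v[K] for K a centered ball
# in R^n, using Inf_v[K] = E[K(x) * (1 - (v.x)^2)] / sqrt(2) over x ~ N(0, I_n).
import math, random

def estimate_influence(indicator, v, n, samples=100_000, seed=0):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        x = [rng.gauss(0, 1) for _ in range(n)]
        if indicator(x):
            dot = sum(vi * xi for vi, xi in zip(v, x))
            acc += (1 - dot * dot) / math.sqrt(2)
    return acc / samples

n, r = 4, 2.0
ball = lambda x: sum(t * t for t in x) <= r * r
inf_e1 = estimate_influence(ball, [1.0, 0.0, 0.0, 0.0], n)
print(inf_e1)  # positive, as guaranteed for symmetric convex sets
```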

The main contribution of the present work is to carry out an in-depth study of this new notion of influence for symmetric convex sets. For conciseness, and to differentiate it from other influence notions (which we discuss later), we will sometimes refer to this new notion as “convex influence.”

Inspired by well-known results about influences of monotone Boolean functions, we establish a number of results about convex influence which show that this notion shares many properties with the familiar Boolean influence notion. Intriguingly, and in keeping with the Boolean/Gaussian analogy elements discussed earlier, while the statements we prove about convex influences are quite closely analogous to known results about Boolean influences, the proofs and tools that we use (Gaussian isoperimetry, Brascamp-Lieb type inequalities, theorems from the geometry of Gaussian space such as the S-inequality [s-inequality], etc.) are very different from the ingredients that underlie the corresponding results about Boolean influence.

1.2 Results and Organization

We give an overview of our main results below.

Basics, examples, Margulis-Russo, and extremal functions.

We begin in Section 3.1 by working through some basic properties of our new influence notion. After analyzing some simple examples in Section 3.2, we next show in Section 3.3 that the total convex influence of a symmetric convex set is equal to (a scaled version of) the rate of change of the Gaussian volume of the set as the variance of the underlying Gaussian is changed. This gives an alternate characterization of total convex influence, and may be viewed as an analogue of the Margulis-Russo formula for our new influence notion. We continue in Section 3.4 by giving some straightforward characterizations of extremal symmetric convex sets vis-à-vis our influence notion, namely the ones that have the largest individual influence in a single direction and the largest total influence. As one would expect, these extremal sets are the Gaussian space analogues of the Boolean dictator and majority functions respectively. Next, we compare our new influence notion with some other previously studied notions of influence over Gaussian space (Section 3.5). These include the "geometric influences" that were studied by [keller2012geometric], as well as the standard notion (from the analysis of functions over product probability domains, see e.g. Chapter 8 of [ODbook]) of the expected variance of the function along one coordinate when all other coordinates are held fixed.

Total influence lower bounds.

In Section 4 we give two lower bounds on the total convex influence (Equation 3) for symmetric convex sets, which are closely analogous to the classical Poincaré and KKL theorems. Our KKL analogue is quadratically weaker than the KKL theorem for Boolean functions; we conjecture that a stronger bound in fact holds, which would quantitatively align with the Boolean version (see Item 1 of Section 1.4). Our proofs, which are based on the S-inequality of Latała and Oleszkiewicz [s-inequality] and on the Gaussian isoperimetric inequality, are quite different from the proofs of the analogous statements for Boolean functions.

(A consequence of) Friedgut’s junta theorem.

In Section 5 we establish a convex influence analogue of a consequence of Friedgut's junta theorem. Friedgut's junta theorem states that any Boolean function f: {±1}^n → {±1} with small total influence must be close to a junta (a function depending on few coordinates). This implies that for any monotone f with small total influence, "averaging out" f over a small well-chosen set of input variables (the variables on which the approximating junta depends) results in a low-variance function. We prove a closely analogous statement for symmetric convex sets with small total convex influence, thus capturing a convex influence analogue of this consequence of Friedgut's junta theorem. (We conjecture that a convex influence analogue holds for Friedgut's original junta theorem; see Item 2 of Section 1.4.)

Sharp thresholds for functions with all small influences.

In Section 6 we establish a "sharp threshold" result for symmetric convex sets in Gaussian space, which is analogous to a sharp threshold result for monotone Boolean functions due to Kalai [Kalai:04]. Building on earlier work of Friedgut and Kalai [FriedgutKalai:96], Kalai [Kalai:04] showed that if f is a monotone Boolean function and p is such that (i) all the p-biased influences of f are small and (ii) the expectation of f under the p-biased measure is bounded away from 0 and 1, then f must have a "sharp threshold" in the following sense: the expectation of f under the p′-biased measure rises from near 0 to near 1 as p′ increases across a short interval around p. For our sharp threshold result, we prove an analogous statement for symmetric convex sets, where now N(0,σ²)^n takes the place of the p-biased distribution over {±1}^n and the σ-biased convex influences (see Section 3.3) take the place of the p-biased influences. Interestingly, the sharpness of our threshold is quantitatively better than the known analogous result [Kalai:04] for monotone Boolean functions; see Section 6 for an elaboration of this point.

A stable density increment result.

Finally, in Section 7, we use our new influence notion to give a Gaussian space analogue of a "stability" version of the Kruskal-Katona theorem due to O'Donnell and Wimmer [OWimmer:09]. In [OWimmer:09] it is shown that the density increment of the Kruskal-Katona theorem (see Item 1 at the beginning of this introduction) can be quantitatively strengthened as long as a "low individual influences"-type condition holds. We analogously show that a similar strengthening of the Gaussian space density increment result mentioned in Item 1 earlier can be achieved under the condition that the convex influence in every direction is low.

1.3 Techniques

We give a high-level overview here of the techniques for just one of our results, namely our analogue of the KKL theorem, Section 4.2. Several of our other results either employ similar tools (for example, our robust density increment result, Section 7, and our main sharp threshold result, Section 6) or else build off of Section 4.2 (for example, our analogue of a consequence of Friedgut’s junta theorem, Section 5).

The KKL theorem states that if every coordinate influence of f: {±1}^n → {±1} is small, specifically Inf_i[f] ≤ δ for all i, then the total influence of f must be large compared to f's variance, specifically it must hold that TInf[f] ≥ Ω(log(1/δ))·Var[f]. This is a dramatic strengthening of the Poincaré inequality (which only states that TInf[f] ≥ Var[f]) and is a signature result in the analysis of Boolean functions with many applications. The classical proof of the KKL theorem is based on hypercontractivity [Bon70, Bec75], and only recently [EldanGross:20, KKKMS:21] have proofs been given which avoid the use of hypercontractivity.

Our convex influence analogue of the KKL theorem states that if K is a symmetric convex set whose convex influence in every direction is at most δ, and whose Gaussian volume is bounded away from 0 and 1, then the total convex influence of K must be at least a quantity growing like the square root of log(1/δ) (this is the sense in which our bound is quadratically weaker than the Boolean KKL bound). Our proof does not employ hypercontractivity but instead uses tools from convex geometry. It proceeds in two main conceptual steps:

1. First, we use a Brascamp-Lieb-type inequality due to Vempala [Vempalafocs10] to argue that the maximum convex influence of K in any direction can be lower bounded in terms of the Gaussian volume of K and its "width" (equivalently, the radius of the largest origin-centered ball contained in K, which is called the in-radius of K and is denoted r_in(K)). This lets us lower bound max_v Inf_v[K] in terms of Vol(K) and r_in(K) (see Equation 19).

2. Next, we argue that the in-radius of K cannot be too large (see Equation 18), which together with the lower bound on max_v Inf_v[K] gives the result. This is shown using our Margulis-Russo analogue, the Gaussian Isoperimetric Theorem, and concavity of the Gaussian isoperimetric function.

1.4 Discussion and Future Work

We believe that much more remains to be discovered about this new notion of influences for symmetric convex sets. We list some natural concrete (and not so concrete) questions for future work:

1. A stronger KKL-type theorem for convex influences? We conjecture that the quantitative bound in our KKL analogue, Section 4.2, can be strengthened to remove the quadratic loss. As witnessed by the examples in Section 3.2, this would be essentially the strongest possible quantitative result, and would align closely with the original KKL theorem [KKL:88].

2. An analogue of Friedgut’s theorem for convex influences? As described earlier, our Section 5 establishes a Gaussian space analogue of a consequence of Friedgut’s Junta Theorem [Friedgut:98] for Boolean functions over . The following would give a full-fledged Gaussian space analogue of Friedgut’s Junta Theorem:

[Friedgut’s Junta Theorem for convex influences] Let K ⊆ R^n be a convex symmetric set with small total convex influence. Then there are a small number of orthonormal directions v_1, …, v_k and a symmetric convex set K′, such that

1. K′ depends only on the values of v_1·x, …, v_k·x, and

2. K′ is close to K (say, in Gaussian volume of symmetric difference).

3. Are low-influence directions (almost) irrelevant? Related to the previous question, we note that it seems to be surprisingly difficult to show that low-influence directions "don't matter much" for convex sets. For example, it is an open question to establish the following, which would give a dimension-free robust version of the last assertion of Section 3.1:

Let K ⊆ R^n be symmetric and convex, and suppose that the unit vector v is such that Inf_v[K] is small. Then there is a symmetric convex set K′ such that

1. K′ depends only on the projection of x onto the (n−1)-dimensional subspace orthogonal to v, and

2. K′ is close to K, up to error g(Inf_v[K]) for some function g depending only on Inf_v[K] (in particular, independent of n) and going to 0 as Inf_v[K] → 0.

While the corresponding Boolean statement is very easy to establish, natural approaches to Item 3 lead to open (and seemingly challenging) questions regarding dimension-free stable versions of the Ehrhard-Borell inequality [Figalli:20, Zvavitch:20].

4. Algorithmic results? Finally, a broader goal is to further explore the similarities and differences between the theory of convex symmetric sets in Gaussian space and the theory of monotone Boolean functions over {±1}^n. One topic where the gap in our understanding is particularly wide is the algorithmic problem of property testing. The problem of testing monotonicity of functions from {±1}^n to {±1} is rather well understood, with nearly-matching sublinear upper and lower bounds on query complexity [KMS15, CWX17]. In contrast, the problem of testing whether an unknown region in R^n is convex (with respect to the standard normal distribution) is essentially wide open, with the best known upper bound being the sample-based tester of [chen2017sample] and no nontrivial lower bounds known.

2 Preliminaries

In this section we give preliminaries, setting notation and recalling useful background on convex geometry, log-concave functions, and Hermite analysis over Gaussian space.

2.1 Convex Geometry and Log-Concavity

Below we briefly recall some notation, terminology and background from convex geometry and log-concavity. Some of our main results employ relatively sophisticated results from these areas; we will recall these as necessary in the relevant sections and here record only basic facts. For a general and extensive resource we refer the interested reader to [aga-book].

We identify sets K ⊆ R^n with their indicator functions K: R^n → {0,1}, and we say that K is symmetric if K = −K (i.e. x ∈ K if and only if −x ∈ K). We write B(r) to denote the origin-centered ball of radius r in R^n. If K is a nonempty symmetric convex set then we let r_in(K) denote sup{r ≥ 0 : B(r) ⊆ K}, and we refer to this as the in-radius of K.

Recall that a function f: R^n → R_{≥0} is log-concave if its domain is a convex set and it satisfies f(λx + (1−λ)y) ≥ f(x)^λ·f(y)^{1−λ} for all x, y and all λ ∈ [0,1]. In particular, the 0/1-valued indicator functions of convex sets are log-concave.

Recall that the marginal of f: R^n → R_{≥0} on the set of variables x_{i_1}, …, x_{i_k} is obtained by integrating out the other variables, i.e. it is the function

 g(x_{i_1}, …, x_{i_k}) = ∫_{R^{n−k}} f(x_1, …, x_n) dx_{j_1} … dx_{j_{n−k}},

where {j_1, …, j_{n−k}} = [n] \ {i_1, …, i_k}. We recall the following fact [Dinghas, Leindler, Prekopa, Prekopa2] (see Theorem 5.1 of [LV07]): all marginals of a log-concave function are log-concave. The next fact follows easily from the definition of log-concavity [Ibragimov:56] (see e.g. [An:95]): a one-dimensional log-concave function is unimodal.

2.2 Gaussian Random Variables

We write g ∼ N(0,1) to mean that g is a standard Gaussian random variable, and will use the notation

 φ(z) := (1/√(2π))·e^{−z²/2}  and  Φ(z) := ∫_{−∞}^{z} φ(t) dt

to denote the pdf and the cdf of this random variable.

Recall that a non-negative random variable S is distributed according to the chi-squared distribution χ²_n if S = g_1² + ⋯ + g_n², where g_1, …, g_n ∼ N(0,1) are independent, and that a draw from the chi distribution χ_n is obtained by making a draw from χ²_n and then taking the square root.

We define the shell-density function α_K(r) for K ⊆ R^n, r ≥ 0, to be

 α_K(r) := Pr_{x ∼ rS^{n−1}}[x ∈ K],  (4)

where the probability is with respect to the normalized Haar measure over the sphere rS^{n−1}; so α_K(r) equals the fraction of the origin-centered radius-r sphere which lies in K. We observe that if K is convex and symmetric then α_K is a nonincreasing function of r. A view which will sometimes be useful later is that α_K(r) is the probability that a random Gaussian-distributed point x ∼ N(0,1)^n lies in K, conditioned on ‖x‖ = r.
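As a concrete illustration (our own sketch), α_K(r) can be estimated by sampling uniformly from the radius-r sphere (by normalizing Gaussian vectors); for the cube [−1,1]³ the estimates equal 1 for r ≤ 1 and decrease to 0 beyond r = √3, exhibiting the nonincreasing behavior:

```python
# Estimate the shell density α_K(r) -- the fraction of the radius-r sphere
# lying in K -- by sampling uniform points on the sphere (normalized Gaussians).
# For K = [-1, 1]^3, α_K is 1 up to r = 1 and reaches 0 at r = sqrt(3).
import math, random

def shell_density(inside, n, r, samples=50_000, seed=3):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        g = [rng.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(t * t for t in g))
        hits += inside([r * t / norm for t in g])
    return hits / samples

cube = lambda x: all(abs(t) <= 1 for t in x)
vals = [shell_density(cube, 3, r) for r in (0.5, 1.0, 1.3, 1.6, 2.0)]
assert vals[0] == 1.0 and vals[-1] == 0.0
assert vals == sorted(vals, reverse=True)  # nonincreasing, as expected
print(vals)
```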

2.3 Hermite Analysis over N(0,σ²)^n

Our notation and terminology here follow Chapter 11 of [ODbook]. An n-dimensional multi-index is a tuple α = (α_1, …, α_n) ∈ N^n, and we define

 |α| := Σ_{i=1}^n α_i.  (5)

We write N(0,σ²)^n to denote the n-dimensional Gaussian distribution with mean 0 and variance σ² in each coordinate, and denote the corresponding measure by γ_{n,σ}; when σ = 1 we write simply γ_n. For σ > 0 and n ∈ N, we write L²(N(0,σ²)^n) to denote the space of functions f: R^n → R that have finite second moment under the Gaussian measure N(0,σ²)^n, that is:

 ‖f‖_2 := E_{z∼N(0,σ²)^n}[f(z)²]^{1/2} < ∞.

We view L²(N(0,σ²)^n) as an inner product space with ⟨f,g⟩ := E_{z∼N(0,σ²)^n}[f(z)g(z)] for f, g ∈ L²(N(0,σ²)^n). We define "biased Hermite polynomials," which yield an orthonormal basis for this space:

[Hermite basis] For j ∈ N and σ > 0, the σ-biased Hermite polynomials are the univariate polynomials defined as

 h_{j,σ}(x) := h_j(x/σ),  where  h_j(x) := ((−1)^j/√(j!))·exp(x²/2)·(d^j/dx^j) exp(−x²/2).

[Easy extension of Proposition 11.33 of [ODbook]] For σ > 0 and n ∈ N, the collection of n-variate σ-biased Hermite polynomials given by {h_{α,σ}}_{α∈N^n}, where

 h_{α,σ}(x) := ∏_{i=1}^n h_{α_i,σ}(x_i),

forms a complete, orthonormal basis for L²(N(0,σ²)^n).

Given a function f ∈ L²(N(0,σ²)^n) and α ∈ N^n, we define its (σ-biased) Hermite coefficient on α to be f̃_σ(α) := ⟨f, h_{α,σ}⟩. It follows that f is uniquely expressible as f = Σ_{α∈N^n} f̃_σ(α)·h_{α,σ}, with the equality holding in L²; we will refer to this expansion as the (σ-biased) Hermite expansion of f. When σ = 1, we will simply write f̃ instead of f̃_σ and h_α instead of h_{α,σ}. Parseval's and Plancherel's identities hold in this setting:

For f, g ∈ L²(N(0,σ²)^n), we have:

 ⟨f,g⟩ = E_{z∼N(0,σ²)^n}[f(z)g(z)] = Σ_{α∈N^n} f̃_σ(α)·g̃_σ(α),  (Plancherel)

 ⟨f,f⟩ = E_{z∼N(0,σ²)^n}[f(z)²] = Σ_{α∈N^n} f̃_σ(α)².  (Parseval)
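For concreteness, the first few (σ = 1) Hermite polynomials are h_0(x) = 1, h_1(x) = x, and h_2(x) = (x²−1)/√2; the following sketch (ours) verifies their orthonormality under N(0,1) by simple quadrature:

```python
# Verify E[h_j(g) h_k(g)] = 1 if j == k else 0 for g ~ N(0,1), for the first
# three Hermite polynomials, via midpoint-rule quadrature on [-10, 10]
# (the Gaussian tail beyond |x| = 10 is negligible).
import math

h = [lambda x: 1.0, lambda x: x, lambda x: (x * x - 1) / math.sqrt(2)]

def gauss_inner(f, g, steps=100_000, lim=10.0):
    w = 2 * lim / steps
    total = 0.0
    for k in range(steps):
        x = -lim + (k + 0.5) * w
        total += f(x) * g(x) * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return total * w

for j in range(3):
    for k in range(3):
        target = 1.0 if j == k else 0.0
        assert abs(gauss_inner(h[j], h[k]) - target) < 1e-4
print("h_0, h_1, h_2 are orthonormal under N(0,1)")
```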

The following notation will sometimes come in handy.

Let f ∈ L²(N(0,σ²)^n), let k ∈ N, and let v ∈ R^n be a unit vector. We define f's σ-biased Hermite coefficient of degree k along v, written f̃_σ(kv), to be

 f̃_σ(kv) := E_{x∼N(0,σ²)^n}[f(x)·h_{k,σ}(v·x)]

(as usual omitting the subscript σ when σ = 1).

We will write e_i to denote the i-th standard basis vector in R^n. In this notation, for example, f̃(2e_i) = E_{x∼N(0,1)^n}[f(x)·h_2(x_i)]. Finally, for a measurable set K ⊆ R^n, it will be convenient for us to write Vol(K) to denote γ_n(K) = Pr_{x∼N(0,1)^n}[x ∈ K], the (standard) Gaussian volume of K.

3 Influences for Symmetric Convex Sets

In this section, we first introduce our new notion of influence for symmetric convex sets over Gaussian space and establish some basic properties. In Section 3.2 we analyze the influences of several natural symmetric convex sets, and in Section 3.3 we give an analogue of the Margulis-Russo formula (characterizing the influences of monotone Boolean functions) which provides an alternative equivalent view of our new notion of influence for symmetric convex sets in terms of the behavior of the sets under dilations. We characterize the symmetric convex sets which have extremal max influence and total influence in Section 3.4. Finally, in Section 3.5, we compare our new notion of influence with some previously studied influence notions over Gaussian space.

3.1 Definitions and Basic Properties

[Influence for symmetric log-concave functions] Let f: R^n → [0,1] be a symmetric (i.e. f(x) = f(−x)) log-concave function. Given a unit vector v ∈ R^n, we define the influence of direction v on f as being

 Inf_v[f] := −f̃(2v) = E_{x∼N(0,1)^n}[−f(x)·h_2(v·x)] = E_{x∼N(0,1)^n}[f(x)·(1−(v·x)²)/√2],

the negated "degree-2 Hermite coefficient in the direction v." Furthermore, we define the total influence of f as

 TInf[f] := Σ_{i=1}^n Inf_{e_i}[f].

Note that the indicator of a symmetric convex set is a symmetric log-concave function, and this is the setting in which we will chiefly be interested. The following proposition (which first appeared in [DNS20], and a proof of which can be found in Appendix A) shows that these new influences are indeed "influence-like." An arguably simpler argument for the non-negativity of influences is presented in Section 3.3.

[Influences are non-negative] If K ⊆ R^n is a centrally symmetric convex set, then Inf_v[K] ≥ 0 for all unit vectors v. Furthermore, equality holds if and only if, almost surely, x ∈ K whenever y ∈ K and the projection of x orthogonal to v coincides with that of y.

We note that the total influence of a symmetric convex set K is independent of the choice of basis; indeed, we have

 TInf[K] = E_{x∼N(0,1)^n}[K(x)·(n−‖x‖²)/√2],  (6)

which is invariant under orthogonal transformations. Hence any orthonormal basis could have been used in place of e_1, …, e_n in defining TInf[K].

We note that (as is shown in the proof of Section 3.1) the influence of a fixed coordinate is not changed by averaging over the set of other coordinates:

Let K ⊆ R^n be a symmetric convex set, and define the log-concave function K_{e_i}: R → [0,1] as

 K_{e_i}(x) := E_{(x_1,…,x_{i−1},x_{i+1},…,x_n)∼N(0,1)^{n−1}}[K(x_1, …, x_{i−1}, x, x_{i+1}, …, x_n)].  (7)

Then we have

 Inf_{e_i}[K] = Inf_{e_1}[K_{e_i}] = TInf[K_{e_i}].  (8)

We conclude with the following useful relationship between the in-radius of a symmetric convex set and its max influence along any direction; it is proved in Appendix A.

Let K ⊆ R^n be a centrally symmetric convex set with Vol(K) = Δ, and let r_in = r_in(K) be the in-radius of K. Then there is some direction v such that

 Inf_v[K] ≥ Δ·e^{−r_in²}/(2^{3/2}·π).

3.2 Influences of Specific Symmetric Convex Sets

In this subsection we consider some concrete examples by analyzing the influences of a few specific symmetric convex sets, namely “slabs”, balls, and cubes. As we will see, these are closely analogous to well-studied monotone Boolean functions (dictator, Majority, and Tribes, respectively).

[Analogue of Boolean dictator: a "slab"] Given a nonzero vector w ∈ R^n, define Dict_w := {x ∈ R^n : |w·x| ≤ 1}. As suggested by the notation, this is the analogue of a single Boolean variable x_i, i.e. a "dictatorship." For simplicity, suppose w = (1/c)·e_1 for some c > 0, i.e. Dict_w = {x ∈ R^n : |x_1| ≤ c}. We then have

 Inf_{e_i}[Dict_w] = Θ(c·exp(−c²/2)) for i = 1,  and  Inf_{e_i}[Dict_w] = 0 for i ≠ 1.

Note that while in the setting of the Boolean hypercube there is only one "dictatorship" for each coordinate, in our setting, given a particular direction, we can have "dictatorships" of varying widths and volumes.
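In fact the i = 1 influence here admits a simple closed form: a short calculation of ours (consistent with, but more precise than, the Θ(c·e^{−c²/2}) bound above) gives Inf_{e_1}[Dict_w] = E[(1−x_1²)·1_{|x_1|≤c}]/√2 = √2·c·φ(c). The sketch below checks this against direct quadrature:

```python
# Check the closed form Inf_{e1}[{|x_1| <= c}] = sqrt(2) * c * phi(c) against
# midpoint-rule quadrature of (1/sqrt(2)) * ∫_{-c}^{c} (1 - t^2) phi(t) dt.
import math

def phi(t):
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def slab_influence_quadrature(c, steps=100_000):
    w = 2 * c / steps
    total = sum((1 - t * t) * phi(t)
                for t in (-c + (k + 0.5) * w for k in range(steps)))
    return total * w / math.sqrt(2)

for c in (0.5, 1.0, 2.0):
    closed_form = math.sqrt(2) * c * phi(c)
    assert abs(slab_influence_quadrature(c) - closed_form) < 1e-6
print("quadrature matches sqrt(2) * c * phi(c)")
```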

[Analogue of Boolean Majority: a ball] Let B := B(√n) denote the ball of radius √n in R^n. Analogous to the Boolean majority function, we argue that for B we have Inf_{e_i}[B] = Θ(1/√n) for all i ∈ [n].

Recall from Equation 6 that

 TInf[B] = (1/√2)·E_{x∼N(0,1)^n}[B(x)·(n−‖x‖²)].

By the Berry-Esseen Central Limit Theorem (see [berry, esseen] or, for example, Section 11.5 of [ODbook]), we have that for all t ∈ R,

 | Pr_{x∼N(0,1)^n}[ (‖x‖²−n)/√(2n) ≤ t ] − Pr_{y∼N(0,1)}[ y ≤ t ] | ≤ c/√n

for some absolute constant c > 0. In particular, this implies that

 Pr_{x∼N(0,1)^n}[ ‖x‖² ≤ n−√(2n) ] ≥ Pr_{y∼N(0,1)}[ y ≤ −1 ] − c/√n ≥ 0.15

for n sufficiently large. Since n−‖x‖² ≥ √(2n) on this event, and B(x)·(n−‖x‖²) is never negative, it follows that

 E_{x∼N(0,1)^n}[B(x)·(n−‖x‖²)] = Ω(√n),

from which symmetry implies that Inf_{e_i}[B] = Ω(1/√n) for all i ∈ [n]. The matching upper bound O(1/√n) follows from Parseval's identity.
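The Θ(1/√n) behavior can also be seen numerically; the following rough Monte Carlo sketch (ours; sample sizes arbitrary) estimates TInf[B]/√n for increasing n and finds it roughly constant:

```python
# Rough check that TInf[B] = E[1_B(x)(n - ||x||^2)] / sqrt(2) grows like
# sqrt(n) for the radius-sqrt(n) ball B, so each coordinate influence is
# Theta(1/sqrt(n)). Only ||x||^2 matters, so we sample chi-squared values.
import math, random

def total_influence_ball(n, samples=10_000, seed=1):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        s = sum(rng.gauss(0, 1) ** 2 for _ in range(n))
        if s <= n:  # x lies in the ball of radius sqrt(n)
            acc += (n - s) / math.sqrt(2)
    return acc / samples

ratios = [total_influence_ball(n) / math.sqrt(n) for n in (25, 100, 400)]
assert all(0.3 < ratio < 0.5 for ratio in ratios)  # roughly constant
print(ratios)
```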

Our last example is analogous to the "Tribes" CNF function introduced by Ben-Or and Linial [BenOrLinial:85short] (alternatively, see Definition 2.7 of [ODbook]):

[Analogue of Boolean Tribes: a cube] Let C_r := [−r, r]^n denote the axis-aligned cube of side-length 2r, where r = r(n) is chosen so that Vol(C_r) = 1/2, i.e. r is the unique value such that

 Pr_{g∼N(0,1)}[ |g| ≤ r ] = (1/2)^{1/n} = 1 − Θ(1)/n.  (9)

By standard tail bounds on the Gaussian distribution, we have that r = Θ(√(log n)). Because of the symmetry of C_r, we have Inf_{e_i}[C_r] = Inf_{e_1}[C_r] for all i ∈ [n]. Note, moreover, that we can write

 C_r(x) = ∏_{i=1}^n Dict_{1/r}(x_i),

where Dict_{1/r} is as defined in Section 3.2 (so that Dict_{1/r}, viewed in one dimension, is the interval [−r, r]). By considering the Hermite representation of C_r it is easy to see that

 Inf_{e_i}[C_r] = E_{g∼N(0,1)}[Dict_{1/r}(g)]^{n−1} · TInf[Dict_{1/r}].

By our choice of r above, we have E_{g∼N(0,1)}[Dict_{1/r}(g)] = (1/2)^{1/n}, and so

 E_{g∼N(0,1)}[Dict_{1/r}(g)]^{n−1} = Θ(1).

From Section 3.2, we know TInf[Dict_{1/r}] = Θ(r·e^{−r²/2}), and so we have

 Inf_{e_i}[C_r] = Θ(r·e^{−r²/2}).  (10)

We now recall the following tail bound on the normal distribution (see Theorem 1.2.6 of [durrett_2019] or Equation 2.58 of [TAILBOUND]):

 φ(r)·(1/r − 1/r³) ≤ Pr_{g∼N(0,1)}[g ≥ r] ≤ φ(r)·(1/r − 1/r³ + 3/r⁵),  (11)

where φ is the density function of N(0,1). Combining Equation 9, Equation 10 and Equation 11, we get that Inf_{e_i}[C_r] = Θ(log n / n), which corresponds to the Θ(log n / n) influence of each individual variable on the Boolean "tribes" function.
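The Θ(log n/n) scaling can be checked numerically. In the sketch below (ours), we solve Equation 9 for r by bisection, using the Gaussian cdf via erf, and observe that n·r·e^{−r²/2}/log n stays roughly constant as n grows:

```python
# Solve Pr[|g| <= r] = erf(r / sqrt(2)) = (1/2)^(1/n) for r by bisection,
# then check that the per-coordinate influence scale r * exp(-r^2/2) behaves
# like log(n)/n, matching the Theta(log n / n) claim.
import math

def cube_halfwidth(n):
    target = 0.5 ** (1.0 / n)
    lo, hi = 0.0, 20.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.erf(mid / math.sqrt(2)) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ratios = []
for n in (10 ** 2, 10 ** 4, 10 ** 6):
    r = cube_halfwidth(n)
    ratios.append(n * r * math.exp(-r * r / 2) / math.log(n))
assert all(1.2 < ratio < 2.0 for ratio in ratios)  # roughly constant
print(ratios)
```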

3.3 Margulis-Russo for Convex Influences: An Alternative Characterization of Influences via Dilations

In this subsection we give an alternative view of the notion of influence defined above, in terms of the behavior of the Gaussian measure of the set as the variance of the underlying Gaussian is changed. (Since Pr_{x∼N(0,σ²)^n}[x ∈ K] = Pr_{x∼N(0,1)^n}[x ∈ K/σ], decreasing (respectively, increasing) the variance of the underlying Gaussian measure is equivalent to dilating (respectively, shrinking) the set.) This is closely analogous to the Margulis-Russo formula for monotone Boolean functions on {±1}^n (see [Russo:81, Margulis:74] or Equation (8.9) in [ODbook]), which relates the derivative with respect to p of the p-biased measure of a monotone function f to the p-biased total influence of f.

We start by defining σ-biased convex influences, which are analogous to the p-biased influences from Boolean function analysis (see Section 8.4 of [ODbook]).

[σ-biased influence] Given a centrally symmetric convex set K ⊆ R^n and a unit vector v ∈ R^n, we define the σ-biased influence of direction v on K as being

 Inf^{(σ)}_v[K] := −K̃_σ(2v) = E_{x∼N(0,σ²)^n}[−K(x)·h_{2,σ}(v·x)],

the negated degree-2 σ-biased Hermite coefficient in the direction v. We further define the σ-biased total influence of K as

 TInf^{(σ)}[K] := Σ_{i=1}^n Inf^{(σ)}_{e_i}[K].

The proof of the following proposition, which asserts that the rate of change of the Gaussian measure of a symmetric convex set K with respect to σ² is (up to scaling) equal to the σ-biased total influence of K, is deferred to Appendix A. We note that this relation was essentially known to experts (see e.g. [s-inequality]), though we are not aware of a specific place where it appears explicitly in the literature.

[Margulis-Russo for symmetric convex sets] Let K ⊆ ℝ^n be a centrally symmetric convex set. Then for any σ > 0 we have

 \frac{d}{d\sigma^2}\,\E_{\bx\sim\calN(0,\sigma^2)^n}\sbra{K(\bx)} = -\frac{\TInf^{(\sigma)}[K]}{\sigma^2\sqrt{2}} = -\frac{1}{\sigma^2\sqrt{2}}\sum_{i=1}^{n}\Inf^{(\sigma)}_{e_i}[K].

Note that decreasing (respectively increasing) the variance of the background Gaussian measure is equivalent to dilating (respectively shrinking) the symmetric convex set while keeping the background measure fixed; this lets us write

 \TInf[K] = \frac{1}{\sqrt{2}}\lim_{\delta\to 0}\frac{\gamma_n(K)-\gamma_n\pbra{(1-\delta)K}}{\delta} (12)

for a symmetric convex K ⊆ ℝ^n. We also note that the proposition above easily extends to the following coordinate-by-coordinate version (which also admits a similar description in terms of dilations):

[Coordinate-wise Margulis-Russo] Let K ⊆ ℝ^n be a centrally symmetric convex set. Then for any i ∈ [n] and σ > 0, we have

 \frac{d}{d\sigma_i^2}\,\E_{\bx_i\sim\calN(0,\sigma_i^2),\ \bx_j\sim\calN(0,\sigma^2)\text{ for }j\neq i}\sbra{K(\bx)}\,\Big|_{\sigma_i^2=\sigma^2} = -\frac{1}{\sigma^2\sqrt{2}}\,\Inf^{(\sigma)}_{e_i}[K].

In particular, we have

 \Inf_{e_i}[K] = -\sqrt{2}\,\frac{d}{d\sigma^2}\,\E_{\bx_i\sim\calN(0,\sigma^2),\ \bx_j\sim\calN(0,1)\text{ for }j\neq i}\sbra{K(\bx)}\,\Big|_{\sigma^2=1}.

Note that decreasing the variance of the underlying Gaussian measure along a coordinate direction cannot cause the Gaussian volume of a symmetric convex set to decrease. It follows that \Inf_{e_i}[K] ≥ 0 for all i ∈ [n].
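Equation 12 is easy to sanity-check numerically for a simple set. In the sketch below (our own illustration, not code from the paper) we take K to be the slab {x : |x_1| ≤ w}; since (1 − x^2)φ(x) is the derivative of xφ(x), the slab's only nonzero influence is \Inf_{e_1}[K] = √2·w·φ(w), and a small-δ dilation quotient reproduces this value:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def slab_volume(w):
    """Gaussian measure of the slab {x : |x_1| <= w} (only x_1 matters)."""
    return math.erf(w / math.sqrt(2))

w = 1.0
# Closed form for the slab's only nonzero influence: Inf_{e1} = sqrt(2) * w * phi(w),
# obtained by integrating (1 - x^2) * phi(x) = d/dx (x * phi(x)) over [-w, w].
inf_closed = math.sqrt(2) * w * phi(w)

# Dilation quotient from Equation 12 with a small delta.
delta = 1e-6
quotient = (slab_volume(w) - slab_volume((1 - delta) * w)) / delta
inf_dilation = quotient / math.sqrt(2)

assert abs(inf_closed - inf_dilation) < 1e-5
```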

3.4 Extremal Symmetric Convex Sets

The unique maximizer of the influence of the first coordinate across all monotone Boolean functions f: {±1}^n → {±1} is the dictator function f(x) = x_1. The next proposition gives an analogous statement for the “dictatorship” set from Section 3.2, for every possible Gaussian volume:

Let K ⊆ ℝ^n be a symmetric convex set and let v ∈ S^{n−1}. Let w > 0 be chosen so that the Gaussian volume of the slab D_w := {x ∈ ℝ^n : |x·v| ≤ w} equals that of K, i.e. \gamma_n(D_w) = \gamma_n(K). Then \Inf_v[K] ≤ \Inf_v[D_w].

Proof.

Without loss of generality (for ease of notation) we take v = e_1. Let g_K : ℝ → [0,1] be the function obtained by marginalizing out variables x_2, …, x_n, so

 g_K(x_1) = \E_{(\bx_2,\ldots,\bx_n)\sim N(0,1)^{n-1}}\sbra{K(x_1,\bx_2,\ldots,\bx_n)}.

As noted following Section 3.1, we have \Inf_{e_1}[K] = \Inf_{e_1}[g_K]. We observe that by definition we have

 \Inf_{e_1}[g_K] = \frac{1}{\sqrt{2}}\cdot\E_{\bx_1\sim N(0,1)}\sbra{(1-\bx_1^2)\,g_K(\bx_1)}.

Since 1 − x_1^2 is a decreasing function of |x_1|, it is easy to see that the symmetric [0,1]-valued function g that maximizes \E_{\bx_1\sim N(0,1)}[(1-\bx_1^2)\,g(\bx_1)] subject to having \E_{\bx_1\sim N(0,1)}[g(\bx_1)] = \gamma_n(K) is the function with g(x_1) = 1 for |x_1| ≤ w and g(x_1) = 0 for |x_1| > w. This corresponds precisely to taking K to be the slab {x : |x_1| ≤ w}; so this choice in fact maximizes \Inf_{e_1} over all measurable subsets of ℝ^n of Gaussian volume \gamma_n(K) (not just over all symmetric convex sets of that volume). ∎
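The extremal step of this proof can be illustrated numerically. The sketch below (our own illustration; the competitor set is an arbitrary choice) uses the antiderivative xφ(x) of (1 − x^2)φ(x) to compare the objective E[(1 − x^2)g(x)] for an interval indicator against a symmetric two-interval set of the same Gaussian mass:

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def objective(intervals):
    """E[(1 - x^2) * 1_S(x)] for S a union of intervals,
    using the antiderivative x*phi(x) of (1 - x^2)*phi(x)."""
    return sum(d * phi(d) - c * phi(c) for (c, d) in intervals)

def mass(intervals):
    """Gaussian measure of a union of intervals."""
    return sum(Phi(d) - Phi(c) for (c, d) in intervals)

# Competitor: the symmetric set {a <= |x| <= b}, with b found by bisection
# so that its Gaussian mass matches that of the interval [-w, w].
w, a = 0.5, 0.5
target = mass([(-w, w)])
lo_b, hi_b = a, 10.0
for _ in range(100):
    b = (lo_b + hi_b) / 2
    if mass([(-b, -a), (a, b)]) < target:
        lo_b = b
    else:
        hi_b = b

interval_obj = objective([(-w, w)])            # = 2 * w * phi(w)
competitor_obj = objective([(-b, -a), (a, b)])
assert competitor_obj < interval_obj
```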

We note that a slight extension of this argument can be used to give a robust version of the proposition above, showing that for any direction v and any volume, a symmetric convex set K (in fact any measurable set K) whose influence \Inf_v[K] is close to that of the volume-matching slab must in fact be close to that slab. This is analogous to the easy fact that any monotone Boolean function f with \Inf_1[f] close to 1 must be close to the dictator function f(x) = x_1.

Next we give a similar result for total convex influence rather than influence in a single direction, analogous to the well-known fact that the Majority function maximizes total influence across all n-variable monotone Boolean functions f: {±1}^n → {±1}:

Let K ⊆ ℝ^n be a symmetric convex set, and let r > 0 be chosen so that the Gaussian volume of the origin-centered ball B_r := {x ∈ ℝ^n : ‖x‖_2 ≤ r} equals that of K, i.e. \gamma_n(B_r) = \gamma_n(K). Then \TInf[K] ≤ \TInf[B_r].

Proof.

The argument is similar to that of the previous proof. We have

 \TInf[K] = \E_{\bx\sim\calN(0,1)^n}\sbra{K(\bx)\,\frac{n-\|\bx\|_2^2}{\sqrt{2}}} = \frac{1}{\sqrt{2}}\cdot\E_{\br\sim\chi(n)}\sbra{(n-\br^2)\,\alpha_K(\br)}

(recall Equation 4), where χ(n) is the χ-distribution with n degrees of freedom. We observe that taking K to be an origin-centered ball B_r = {x : ‖x‖_2 ≤ r} results in α_K(s) = 1 for s ≤ r and α_K(s) = 0 for s > r, that the range of α_K is contained in [0,1] for any K, and that n − s^2 is a decreasing function of s for s ≥ 0. Combining these observations, it is easily seen that taking K = B_r in fact maximizes the expression on the RHS over all measurable subsets of ℝ^n of Gaussian volume \gamma_n(K) (not just over all symmetric convex sets of that volume). ∎
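To see this extremality concretely (a numerical sketch of our own in dimension n = 2; the sets and the volume 1/2 are arbitrary choices): using the χ(2) density r·e^{−r²/2}, the total influence of a ball can be integrated in closed form, and it exceeds the total influence of a slab of the same Gaussian volume, whose only nonzero influence is √2·w·φ(w):

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Work in dimension n = 2 with Gaussian volume 1/2.
# Ball of radius R: gamma_2(B_R) = 1 - exp(-R^2/2), so R^2/2 = ln 2.
T = math.log(2.0)                      # T = R^2 / 2
# TInf[B_R] = (1/sqrt(2)) * integral_0^R (2 - r^2) r e^{-r^2/2} dr
#           = (1/sqrt(2)) * 2 * T * e^{-T}   (substituting t = r^2/2)
tinf_ball = 2 * T * math.exp(-T) / math.sqrt(2)

# Slab {|x_1| <= w} of the same volume: solve erf(w/sqrt(2)) = 1/2 by bisection.
lo, hi = 0.0, 5.0
for _ in range(100):
    w = (lo + hi) / 2
    if math.erf(w / math.sqrt(2)) < 0.5:
        lo = w
    else:
        hi = w
tinf_slab = math.sqrt(2) * w * phi(w)  # only the e_1 direction contributes

assert tinf_ball > tinf_slab
```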

As before, the argument above can be used to establish a robust version of this result, showing that any symmetric convex set (in fact any measurable set) whose total influence is close to that of the volume-matching ball must in fact be close to that ball.

3.5 Other Notions of Influence

Here we compare the notion of influence for symmetric convex sets proposed in Section 3.1 with two previous notions of influence, namely (i) the geometric influence introduced in [keller2012geometric], and (ii) the expected variance along a fiber, which coincides with the usual notion of influence for Boolean functions on the hypercube.

3.5.1 Geometric Influences

In [keller2012geometric], Keller, Mossel, and Sen introduced the notion of geometric influence for functions over Gaussian space, and proved analogues of seminal results from the analysis of Boolean functions—including the KKL theorem, the Margulis–Russo lemma, and an analogue of Talagrand’s correlation inequality—for this notion of influence. Informally, the geometric influence captures the expected lower Minkowski content along each one-dimensional fiber of a set.

[Geometric influences] Given a Borel measurable set K ⊆ ℝ, its lower Minkowski content (with respect to the standard Gaussian measure), denoted \gamma^+(K), is defined as

 \gamma^+(K) := \liminf_{r\to 0^+}\frac{\gamma\pbra{K+[-r,r]}-\gamma(K)}{r}.