Approximate degree, secret sharing, and concentration phenomena

06/02/2019 ∙ by Andrej Bogdanov, et al.

The ϵ-approximate degree deg_ϵ(f) of a Boolean function f is the least degree of a real-valued polynomial that approximates f pointwise to error ϵ. The approximate degree of f is at least k iff there exists a pair of probability distributions, also known as a dual polynomial, that are perfectly k-wise indistinguishable but are distinguishable by f with advantage 1 − ϵ. Our contributions are: We give a simple new construction of a dual polynomial for the AND function, certifying that deg_ϵ(AND_n) ≥ Ω(√(n log(1/ϵ))). This construction is the first to extend to the notion of weighted degree, and yields the first explicit certificate that the 1/3-approximate degree of any read-once DNF is Ω(√n). We show that any pair of symmetric distributions on n-bit strings that are perfectly k-wise indistinguishable are also statistically K-wise indistinguishable with error at most K^{3/2}·exp(−Ω(k²/K)) for all k < K < n/64. This implies that any symmetric function f is a reconstruction function with constant advantage for a ramp secret sharing scheme that is secure against size-K coalitions with statistical error K^{3/2}·exp(−Ω(deg_{1/3}(f)²/K)) for all values of K up to n/64 simultaneously. Previous secret sharing schemes required that K be determined in advance, and only worked for f = AND. Our analyses draw new connections between approximate degree and concentration phenomena. As a corollary, we show that for any d < n/64, any degree-d polynomial approximating a symmetric function f to error 1/3 must have ℓ₁-norm at least K^{−3/2}·exp(Ω(deg_{1/3}(f)²/d)), which we also show to be tight for any d > deg_{1/3}(f). These upper and lower bounds were also previously only known in the case f = AND.


1 Introduction

The ϵ-approximate degree of a function f : {−1,1}^n → {−1,1}, denoted deg_ϵ(f), is the least degree of a multivariate real-valued polynomial p such that |p(x) − f(x)| ≤ ϵ for all inputs x. (Footnote 1: In this work, for convenience we also consider functions mapping {0,1}^n to {0,1}.) Such a p is said to be an approximating polynomial for f. This is a central object of study in computational complexity, owing to its polynomial equivalence to many other complexity measures including sensitivity, exact degree, deterministic and randomized query complexity [21], and quantum query complexity [6].
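
To make the definition concrete, the following brute-force sketch (our own illustration, not from the paper) computes the best pointwise approximation error achievable by a degree-d polynomial for a small Boolean function by solving a linear program; deg_ϵ(f) is then the least d for which this error is at most ϵ. The function names and the {−1,1} sign convention used for AND are our assumptions.

    # Brute-force sketch (ours): best pointwise error of a degree-d polynomial
    # approximation to a Boolean function on {-1,1}^n, found by linear programming.
    # deg_eps(f) is the least d for which best_error(f, n, d) <= eps.
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def best_error(f, n, d):
        monomials = [S for r in range(d + 1) for S in itertools.combinations(range(n), r)]
        points = list(itertools.product([-1, 1], repeat=n))
        A_ub, b_ub = [], []
        for x in points:
            row = [float(np.prod([x[i] for i in S])) for S in monomials]  # monomial values at x
            fx = float(f(x))
            A_ub.append(row + [-1.0]); b_ub.append(fx)                 #  p(x) - eps <= f(x)
            A_ub.append([-v for v in row] + [-1.0]); b_ub.append(-fx)  # -p(x) - eps <= -f(x)
        c = [0.0] * len(monomials) + [1.0]                             # minimize eps
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * len(c))
        return res.fun

    AND = lambda x: -1 if all(v == -1 for v in x) else 1   # one possible sign convention
    print(best_error(AND, n=4, d=1))                       # error of the best degree-1 approximation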

By linear programming duality,

f has ϵ-approximate degree more than k if and only if there exists a pair of probability distributions μ and ν over the domain of f such that μ and ν are perfectly k-wise indistinguishable (i.e., all k-wise projections of μ and ν are identical), but are ϵ-distinguishable by f, namely f distinguishes a sample of μ from a sample of ν with advantage greater than ϵ. Said equivalently, a sound and complete certificate for the ϵ-approximate degree being more than k is a dual polynomial: a function ψ that contains no monomials of degree k or less, and such that ‖ψ‖₁ = 1 and ⟨ψ, f⟩ > ϵ.

Dual polynomials have immediate applications to cryptographic secret sharing: a dual polynomial for f is a description of a cryptographic scheme for sharing a 1-bit secret amongst n parties, where the secret can be reconstructed by applying f to the shares, and the scheme is secure against coalitions of size at most k (see [4] for details).
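
The following minimal sketch (our own, using the standard folklore correspondence rather than the paper's specific scheme of Equation (1)) shows how a dual witness ψ with ℓ₁-norm 1, mean zero, and pure high degree splits into two share distributions, and how the reconstruction advantage of f is computed. All names are ours.

    # Sketch (ours) of the generic correspondence between a dual witness psi
    # (l1-norm 1, zero mean, pure high degree) and a pair of share distributions.
    def shares_from_dual(psi):
        """psi: dict mapping an n-bit share vector (tuple of +/-1) to a real value."""
        mu_pos = {x: 2 * v for x, v in psi.items() if v > 0}   # positive part, rescaled to a distribution
        mu_neg = {x: -2 * v for x, v in psi.items() if v < 0}  # negative part, rescaled to a distribution
        return mu_pos, mu_neg  # which part encodes which secret bit is a matter of convention

    def reconstruction_advantage(f, mu0, mu1):
        """Advantage of reconstructing the secret by applying f to all shares."""
        p1 = sum(p for x, p in mu1.items() if f(x) == 1)
        p0 = sum(p for x, p in mu0.items() if f(x) == 1)
        return p1 - p0
    # Perfect k-wise indistinguishability of (mu_pos, mu_neg) follows from psi's pure high degree.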

Motivation for explicit constructions of dual polynomials. Recent years have seen significant progress in proving new approximate degree lower bounds by explicitly constructing dual polynomials exhibiting the lower bound [8, 25, 10, 26, 11, 7, 12, 28]. These new lower bounds have in turn resolved significant open questions in quantum query complexity and communication complexity. At the technical core of these results are techniques for constructing a dual polynomial for a composed function f ∘ g, given dual polynomials for f and g individually.

Often, an explicitly constructed dual polynomial showing that deg_ϵ(f) ≥ d exhibits additional metric properties, beyond what is required simply to witness the degree lower bound. Much of the major recent progress in proving approximate degree lower bounds has exploited these additional metric properties [11, 7, 12, 28]. Accordingly, even in cases where an approximate degree lower bound for a function is known, it can often be useful to construct an explicit dual polynomial witnessing the lower bound. Hence, we are optimistic that the new constructions of dual polynomials given in this work will find future applications.

Explicit constructions of dual polynomials are also necessary to implement the corresponding secret-sharing scheme, and to analyze the complexity of the algorithm that samples the shares of the secret.

Our results in a nutshell. Our results fall into two categories. In the first category, we reprove several known approximate degree lower bounds by giving the first explicit constructions of dual polynomials witnessing the lower bounds. Specifically, our dual polynomial certifies that the ϵ-approximate degree of the n-bit function AND_n is Ω(√(n log(1/ϵ))). This construction is the first to extend to the notion of weighted degree, and yields the first explicit certificate that the 1/3-approximate degree of any (possibly unbalanced) read-once DNF is Ω(√n). Interestingly, our dual polynomial construction draws a novel and clean connection between the approximate degree of AND_n and anti-concentration of the binomial distribution.

In the second category, we prove new and tight results about the size of the coefficients of polynomials that approximate symmetric functions. Specifically, we show that for any d < n/64, any degree-d polynomial approximating f to error 1/3 must have coefficients of weight (ℓ₁-norm) at least exp(Ω(deg_{1/3}(f)²/d)), up to a factor polynomial in d. We show this bound is tight (up to logarithmic factors in the exponent) for any d > deg_{1/3}(f). These bounds were previously only known in the case f = AND [24, 5]. Our analysis actually establishes a considerably more general result, and as a consequence we obtain new cryptographic secret sharing schemes with symmetric reconstruction procedures (see Section 1.2 for details).

1.1 A New Dual Polynomial for AND_n

To describe our dual polynomial for AND_n, it will be convenient to consider the function to have domain {−1,1}^n and range {−1,1}, with AND_n(x) = −1 if and only if x = (−1, …, −1). In their seminal work, Nisan and Szegedy [21] proved that the 1/3-approximate degree of the AND_n function on n inputs is Θ(√n). More generally, it is now well-known that the ϵ-approximate degree of AND_n is Θ(√(n log(1/ϵ))) [16, 6]. These works do not construct explicit dual polynomials witnessing the lower bounds; this was achieved later in works of Špalek [29] and Bun and Thaler [8].

Our first contribution is the construction of a new dual polynomial ψ for AND_n, which is simple enough to describe in a single equation:

(1)

Here, S is a random subset of {1, …, n} whose size is bounded by a parameter that determines the degree of the polynomials against which the exhibited lower bound holds, and C is an (explicit) normalization constant.

In the language of secret sharing, to share a secret b, the dealer samples shares with probability proportional to the magnitude of ψ, conditioned on the parity of the shares being equal to b.

In Corollary 2.2 we show that ψ certifies that every degree-k polynomial must differ from the AND_n function at some input by at least a certain binomial tail probability. In other words, the approximation error of any degree-k polynomial is lower bounded by the probability that a sum of n unbiased independent bits deviates from its mean by roughly k.
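
As a numerical illustration of the quantity in question (our own, with arbitrary parameter choices), the exact binomial deviation probability can be compared with the Chernoff-style shape exp(−2t²/n):

    # Numerical illustration (ours, arbitrary parameters): the probability that a sum
    # of n unbiased bits deviates from its mean by t, versus the Chernoff-style shape.
    import math

    def binom_tail(n, t):
        """Pr[|X - n/2| >= t] for X ~ Bin(n, 1/2), computed exactly."""
        return sum(math.comb(n, j) for j in range(n + 1) if abs(j - n / 2) >= t) / 2 ** n

    n = 400
    for t in (5, 10, 20, 40):
        print(t, binom_tail(n, t), math.exp(-2 * t * t / n))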

Our function ψ given in (1), unlike previous dual polynomials [16, 29, 10, 27], also certifies a lower bound on the weighted ϵ-approximate degree of AND_n with respect to an arbitrary weight vector w (see Corollary 2.3). (Footnote 2: For a polynomial p, a weight vector w = (w_1, …, w_n) assigns weight w_i to variable x_i. The weighted degree of p is the maximum weight over all monomials appearing in p, where the weight of a monomial is the sum of the weights of the variables appearing within it. The weighted ϵ-approximate degree of f is the least weighted degree of any polynomial that approximates f pointwise to error ϵ.) This lower bound is tight for all weight vectors, matching an upper bound of Ambainis [1]. The only difference in our dual polynomial construction for the weighted case is in the distribution over the sets S, and the lower bound in the weighted case is derived from anti-concentration of weighted sums of Bernoulli random variables.

Both statements are corollaries of the following theorem.

Theorem 1.1.

Define AND_n : {−1,1}^n → {−1,1} by AND_n(x) = −1 if and only if x = (−1, …, −1). The function ψ defined in Equation (1) is a dual witness for the weighted approximate degree lower bound for AND_n.

By combining, in a black-box manner, the dual polynomial for the weighted approximate degree of AND with prior work (e.g., [17, Proof of Theorem 7]), one obtains, for any read-once DNF F, an explicit dual polynomial witnessing the fact that deg_{1/3}(F) = Ω(√n). Very recent work of Ben-David et al. [2] established this result for the first time, shaving logarithmic factors off of prior work [10, 17]. In fact, Ben-David et al. [2] prove more generally that any depth-d read-once AND–OR formula has approximate degree Ω(√n). Their method, however, does not appear to yield an explicit dual polynomial, even in the case d = 2.

Discussion. It has been well known that the 1/3-approximate degree of the AND function on n variables is Θ(√n) [21, 6], a fact which has many applications in theoretical computer science. This is superficially reminiscent of Chernoff bounds, which state that the middle layers of the Hamming cube contain all but an exponentially small fraction of all inputs (i.e., “most” n-bit strings have Hamming weight close to n/2). However, these two phenomena have not previously been connected, and it is not a priori clear why approximate degree should be related to concentration of measure. An approximating polynomial for AND_n must approximate AND_n at all inputs in {−1,1}^n. Why should it matter that most (but very far from all) inputs have Hamming weight close to n/2?

The new dual witness for AND_n constructed in Equation (1) above provides a surprising answer to this question. The connection between (anti-)concentration and the approximate degree of AND_n arises not because of the number of inputs to AND_n that have Hamming weight close to n/2, but because of the number of parity functions on n bits that have degree close to n/2. This connection appears to be rather deep, as evidenced by our construction’s ability to yield a tight lower bound in the case of weighted approximate degree.

1.2 Indistinguishability for Symmetric Distributions

In this section, for convenience we consider functions mapping {0,1}^n to {0,1}. Two distributions μ and ν over {0,1}^n are (statistically) k-wise indistinguishable with error δ if for all subsets S ⊆ {1, …, n} of size k, the induced marginal distributions μ|_S and ν|_S are within statistical distance δ. When δ = 0, we say they are (perfectly) k-wise indistinguishable.
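
The following small helper (our own, for concreteness) checks this definition by brute force for distributions over {0,1}^n given as dictionaries; it returns the worst statistical distance over all projections onto k coordinates.

    # Brute-force checker (ours): worst statistical distance over all k-coordinate
    # projections of two distributions on {0,1}^n (each given as a dict of probabilities).
    import itertools

    def kwise_distance(mu, nu, n, k):
        worst = 0.0
        for S in itertools.combinations(range(n), k):
            proj_mu, proj_nu = {}, {}
            for x, p in mu.items():
                key = tuple(x[i] for i in S)
                proj_mu[key] = proj_mu.get(key, 0.0) + p
            for x, p in nu.items():
                key = tuple(x[i] for i in S)
                proj_nu[key] = proj_nu.get(key, 0.0) + p
            keys = set(proj_mu) | set(proj_nu)
            dist = 0.5 * sum(abs(proj_mu.get(z, 0.0) - proj_nu.get(z, 0.0)) for z in keys)
            worst = max(worst, dist)
        return worst   # equals 0 exactly when the distributions are perfectly k-wise indistinguishable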

For general pairs of distributions, perfect k-wise indistinguishability does not imply any sort of security against distinguishers that observe even slightly more than k of the bits. Any binary linear error-correcting code of dual distance k + 1 and block length n induces a pair of distributions (the uniform distribution over codewords and one of its affine shifts) that are perfectly k-wise indistinguishable, yet perfectly (k + 1)-wise distinguishable.

In contrast, we prove that perfect k-wise indistinguishability for symmetric distributions implies strong statistical security against larger adversaries:

Theorem 1.2.

If μ and ν are symmetric distributions over {0,1}^n that are perfectly k-wise indistinguishable, then they are statistically K-wise indistinguishable with error at most K^{3/2}·exp(−Ω(k²/K)) for all k < K < n/64.

Theorem 1.2 has the following direct consequence for secret sharing schemes over n bits with symmetric reconstruction. We say that μ and ν are σ-reconstructible by f if Pr_{x∼μ}[f(x) = 1] − Pr_{x∼ν}[f(x) = 1] ≥ σ.

Corollary 1.3.

Let f be a symmetric Boolean function on n bits. There exists a pair of distributions μ and ν that are statistically K-wise indistinguishable with error K^{3/2}·exp(−Ω(deg_{1/3}(f)²/K)) simultaneously for all K < n/64, but are reconstructible by f with constant advantage.

Corollary 1.3 is an immediate consequence of our Theorem 1.2 and the fact that any symmetric function f has an optimal dual polynomial that is itself symmetric. In the special case f = AND (or, equivalently, f = OR), Corollary 1.3 implies the existence of a visual secret sharing scheme (see, for example, [20]) that is statistically secure against all coalitions of size K, simultaneously for all K up to n/64. This property, where security guarantees are in place for many coalition sizes at the same time, is in contrast to an earlier result of Bogdanov and Williamson [5], who proved that for any fixed coalition size K there is a visual secret sharing scheme that is statistically secure with a comparable error bound. In their construction, the distributions of shares depend on the value of K.

We remark that the bound of Corollary 1.3 cannot hold in general for all K up to n, since there exist distributions that are perfectly k-wise indistinguishable but are reconstructible by the majority function on all n inputs. We do not, however, know whether a bound of this form is tight in this context.

Tight weight-degree tradeoffs for polynomials approximating symmetric functions.

Let f : {0,1}^n → {0,1} be any function. For any integer d, consider the minimum weight of any degree-d polynomial that approximates f pointwise to error ϵ. By the weight of a polynomial, we mean the ℓ₁-norm of its coefficients over the parity (Fourier) basis. (Footnote 3: In fact, our main weight lower bound (Corollary 1.4) holds over any set of basis functions (not just parities) that each depend on at most d variables.) In Section 4, we observe that Corollary 1.3 implies weight-degree trade-off lower bounds for symmetric functions.
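
For concreteness, a small helper (our own) that computes this weight, i.e. the ℓ₁-norm of the Fourier coefficients, directly from a function's table of values on {−1,1}^n:

    # Helper (ours): the l1-norm of a function's coefficients over the parity basis,
    # computed from its table of values on {-1,1}^n (feasible for small n).
    import itertools
    import numpy as np

    def fourier_l1(values, n):
        """values[j] = p(x_j), with x_j the j-th point of itertools.product([-1, 1], repeat=n)."""
        points = list(itertools.product([-1, 1], repeat=n))
        total = 0.0
        for r in range(n + 1):
            for S in itertools.combinations(range(n), r):
                chi = np.array([np.prod([x[i] for i in S]) for x in points])
                total += abs(np.dot(values, chi)) / 2 ** n   # |p_hat(S)|
        return total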

Corollary 1.4.

For any symmetric function f, any constant ϵ < 1/2, and any integer d such that d < n/64, any degree-d polynomial that approximates f pointwise to error ϵ must have weight at least exp(Ω(deg_{1/3}(f)²/d)), up to a factor polynomial in d.

The following theorem shows that the lower bound obtained in Corollary 1.4 is tight (up to polylogarithmic factors in the exponent) for all symmetric functions.

Theorem 1.5.

For any symmetric function f, any constant ϵ, and any d ≥ deg_{1/3}(f), there is a degree-d polynomial approximating f to error ϵ whose weight matches the lower bound of Corollary 1.4 up to polylogarithmic factors in the exponent. (Footnote 4: Here and throughout, tilde notation hides polylogarithmic factors in n.)

Theorem 1.5 also implies that Corollary 1.3 is tight (up to polylogarithmic factors in the exponent) for all symmetric f and for all K in the stated range. This is because any improvement to Corollary 1.3 would yield an improvement to Corollary 1.4, contradicting Theorem 1.5.

Essentially Optimal Ramp Visual Secret Sharing Schemes. The following result shows that in the case f = AND, Corollary 1.3 is essentially tight for all K, and Theorem 1.2 is tight as a reduction from perfect to approximate indistinguishability for symmetric distributions. It does so by constructing essentially optimal ramp visual secret sharing schemes. (Footnote 5: A visual secret sharing scheme is a scheme where the reconstruction function is the AND of some subset of the shares. A ramp scheme is one where there is not necessarily a sharp threshold between the perfect secrecy and reconstruction thresholds; in particular, we allow the reconstruction threshold K to be much larger than the secrecy threshold k.)

Theorem 1.6.

For all k ≤ K ≤ n there exist symmetric, perfectly k-wise indistinguishable distributions μ and ν over n-bit strings that are σ-reconstructible by AND_K for an explicit advantage σ (given in Section 5), where AND_K(x) denotes the AND of the first K bits of x.

Discussion of Theorem 1.6. This theorem gives the existence of a ramp visual secret sharing scheme that is perfectly secure against coalitions of k parties, but in which K parties can reconstruct the secret with the above advantage. This generalizes the schemes in [5], where only reconstruction by all n parties was considered.

Let us express the reconstruction advantage appearing in Theorem 1.6 in a manner more easily comparable to other results in this manuscript. Standard results on anti-concentration of the binomial distribution (see, e.g., [18]), combined with the Cauchy–Schwarz inequality, imply that the reconstruction advantage appearing in Theorem 1.6 is at least exp(−O(k²/K)), up to polynomial factors. (Footnote 6: Theorem 1.6 is closely related to Theorem 1.1, in that Theorem 1.6 gives another anti-concentration-based proof of the approximate degree lower bound for AND. However, the two results are incomparable. Theorem 1.6 does not yield an explicit dual polynomial for AND, and the approximate degree lower bound implied by Theorem 1.6 is loose by the polynomial factor appearing in the expression for the advantage. On the other hand, Theorem 1.1 only yields a visual secret sharing scheme with reconstruction by all n parties, while Theorem 1.6 yields a ramp scheme with non-trivial reconstruction advantage by the AND of the first K (out of n) parties.)

Hence, the visual secret sharing schemes given in Theorem 1.6 are nearly optimal; if the reconstruction advantage could be improved by more than the leading polynomial factor (or the constant factor in the exponent), then this would contradict Theorem 1.2, which upper bounds the distinguishing advantage of any statistical test on K bits against symmetric, perfectly k-wise indistinguishable distributions. Theorem 1.6 also shows that the indistinguishability parameter in Theorem 1.2 cannot be significantly improved, even in the restricted case where the only statistical test is AND.

In Section 6 we describe another application of Theorem 1.2 to security against share consolidation and “downward self-reducibility” of visual secret shares.

1.3 Related Works

Prior Work. Servedio, Tan, and Thaler [24] established Corollary 1.4 and Theorem 1.5 in the special case f = AND, showing that degree-d polynomials that approximate the AND function require weight exponential in Θ̃(n/d). (Footnote 7: These bounds for AND were implicit in [24], but not explicitly highlighted. The upper bound was explicitly stated in [13, Lemma 4.1], which gave applications to differential privacy, and the lower bound in [9, Lemma 32], which used it to establish tight weight-degree tradeoffs for polynomial threshold functions computing read-once DNFs.) They used this result to establish tight weight-degree tradeoffs for polynomial threshold functions computing decision lists. As previously mentioned, Bogdanov and Williamson [5] generalized the weight-vs-degree lower bound from [24] beyond polynomials, thereby obtaining, for any fixed coalition size K, a visual secret-sharing scheme that is statistically secure with error exponentially small in n/K.

Elkies [14] and Sachdeva and Vishnoi [23] exploit concentration of measure to prove a tight upper bound on the degree of univariate polynomials that approximate the function over the domain . Their techniques inspired our (much more technical) proof of Theorem 1.2.

Other Related Work. This work subsumes Bogdanov’s manuscript [3], which shows a slightly weaker lower bound on the weighted approximate degree of AND and does not derive an explicit dual polynomial. In independent work, Huang and Viola [15] prove a weaker form of our Corollary 1.3: their distributions depend on the value of K. They also prove (a slightly tighter version of) Theorem 1.5, thereby establishing that the statistical distance bound in Corollary 1.3 is tight.

1.4 Techniques and Organization

The proof of Theorem 1.1 (Section 2) is an elementary verification that the function ψ given in (1) is a dual polynomial. The only property that is not immediate is its correlation with AND_n. Verifying this property amounts to upper bounding the normalization constant C, which follows from orthogonality of the Fourier characters.

In the proof of Theorem 1.2 (Section 3), a K-bit statistical distinguisher for symmetric distributions is first decomposed into a sum of at most K + 1 tests E_0, …, E_K, where E_w evaluates to 1 only when the input has Hamming weight exactly w. Lemma 3.3 shows that the univariate symmetrizations of these distinguishers can be pointwise approximated by a degree-k polynomial with error exponentially small in k²/K.

To construct the desired approximation, we derive an identity relating the moment generating function of the squared Chebyshev coefficients of the symmetrized distinguisher (interpreted as relative probabilities) to the average magnitude of a related polynomial on the complex unit circle (Claims 3.6 and 3.7). We bound these magnitudes analytically (Claim 3.8) and derive tail inequalities for the Chebyshev coefficients from the bounds on the moment generating function, as in standard proofs of Chernoff–Hoeffding bounds.

In the special case when the secrecy parameters k and K are fixed and the number of parties n approaches infinity, the relevant polynomial turns out to equal (1 + x)^a (1 − x)^b, where the exponents a and b are quantities independent of n. In this case, the Chebyshev coefficients of interest are the regular coefficients of the polynomial (1 + x)^a (1 − x)^b. (Footnote 8: The m-th coefficient of (1 + x)^a (1 − x)^b is the value of the m-th Kravchuk polynomial with parameter a + b evaluated at b.) When a = 0, b = 0, or a = b, the coefficients of this polynomial are exponentially concentrated around the middle, as their magnitudes follow the binomial distribution. We prove that this exponential decay in magnitudes happens for all values of a and b, which requires understanding complicated cancellations in the algebraic expansion of (1 + x)^a (1 − x)^b. We generalize this analysis to the finitary setting of finite n.
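
A quick numerical illustration (our own; the exponents 30 and 50 are arbitrary) of the claimed decay: the coefficients of (1 + x)^a (1 − x)^b have magnitude 1 at the extreme degrees and are exponentially larger near the middle.

    # Numerical illustration (ours; a = 30, b = 50 are arbitrary): coefficient magnitudes
    # of (1 + x)^a (1 - x)^b, computed by convolving the two binomial coefficient sequences.
    import math
    import numpy as np

    a, b = 30, 50
    pa = np.array([math.comb(a, i) for i in range(a + 1)], dtype=float)              # (1 + x)^a
    pb = np.array([(-1) ** j * math.comb(b, j) for j in range(b + 1)], dtype=float)  # (1 - x)^b
    coeffs = np.convolve(pa, pb)
    for m in range(0, a + b + 1, 10):
        print(m, abs(coeffs[m]))
    # Magnitudes at the extreme degrees are 1, while those near the middle are
    # exponentially larger -- the decay phenomenon analyzed in Section 3.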

We prove Theorem 1.5 (Section 4) by writing any symmetric function f as a sum of a small number of conjunctions, and approximating each conjunction to low enough error that the sum of all the approximations is an approximation for f itself. Theorem 1.5 then follows by constructing low-weight, low-degree polynomial approximations for each conjunction in the sum.

Theorem 1.6 (Section 5) is proved by lower bounding the error of low-degree polynomial approximations to the symmetrization of the AND_K function. By duality, a lower bound on the approximation error translates into a secret sharing scheme with the same reconstruction advantage. To lower bound the error, we estimate the values of the coefficients in the Chebyshev expansion of the symmetrized function with indices larger than the degree bound. Owing to orthogonality, the largest of these coefficients lower bounds the approximation error of any polynomial of that degree.

In Section 6 we formulate security of secret sharing against share consolidation and downward self-reducibility of visual schemes, and derive these properties from our main results.

2 Dual Polynomial For the Weighted Approximate Degree of AND

In this section we prove Theorem 1.1 and derive its two corollaries about the unweighted and weighted approximate degree of AND.

Notation and Definitions. Let [n] = {1, …, n}. Given a vector w = (w_1, …, w_n) of nonnegative weights, define the weight of a monomial ∏_{i∈S} x_i to equal Σ_{i∈S} w_i. Define the w-weighted degree of a polynomial to be the maximum weight of a monomial in it. That is, if p(x) = Σ_S c_S ∏_{i∈S} x_i, then its w-weighted degree is the maximum of Σ_{i∈S} w_i over all S with c_S ≠ 0.

Define the w-weighted ϵ-approximate degree of f to be the minimum w-weighted degree of a polynomial p that satisfies |p(x) − f(x)| ≤ ϵ for all x in the domain of f. Given two real-valued functions ψ and f over a domain X, define ⟨ψ, f⟩ = Σ_{x∈X} ψ(x) f(x).
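
A small helper (our own, for concreteness) implementing the w-weighted degree of a multilinear polynomial represented as a map from monomials to coefficients:

    # Helper (ours): w-weighted degree of a multilinear polynomial represented as a
    # dict mapping monomials (frozensets of variable indices) to nonzero coefficients.
    def weighted_degree(poly, w):
        return max((sum(w[i] for i in mono) for mono, c in poly.items() if c != 0), default=0)

    # Example: 3*x0*x2 - x1 with weights w = (2, 5, 1) has w-weighted degree 5.
    print(weighted_degree({frozenset({0, 2}): 3.0, frozenset({1}): -1.0}, (2, 5, 1)))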

Lemma 2.1.

For any finite set X ⊆ ℝ^n, any function f : X → {−1,1}, any weight vector w, and any D ≥ 0, the w-weighted ϵ-approximate degree of f is greater than D if and only if there exists a function ψ : X → ℝ satisfying the following conditions.

  • Pure high degree: ⟨ψ, p⟩ = 0 for any real polynomial p whose w-weighted degree is at most D.

  • Normalization: Σ_{x∈X} |ψ(x)| = 1,

  • Correlation: ⟨ψ, f⟩ > ϵ.

We call ψ a dual witness for the w-weighted ϵ-approximate degree of f being greater than D. The lemma follows by linear programming duality and is a straightforward generalization of previous results (see e.g. [29, 10]). We prove the “if” direction, which is sufficient for our purposes.

Proof.

For any polynomial p of w-weighted degree at most D, the pure high degree and correlation conditions give ⟨ψ, f − p⟩ = ⟨ψ, f⟩ − ⟨ψ, p⟩ = ⟨ψ, f⟩ > ϵ, while the normalization condition gives ⟨ψ, f − p⟩ ≤ ‖ψ‖₁ · max_x |f(x) − p(x)| = max_x |f(x) − p(x)|. Hence no polynomial of w-weighted degree at most D approximates f pointwise to error ϵ. ∎

The dual polynomial of interest is

where , is the uniform distribution over the sets , and is the normalization constant

Proof of Theorem 1.1.

We prove the theorem by showing that satisfies the three conditions of Lemma 2.1. The expression can be written as a sum of products of pairs of monomials of weight at most , so its weighted degree is at most . Thus every monomial that occurs in the expansion of must have weighted degree at least , and so has pure high weighted degree at least as desired.

The scaling by in the definition of ensures that has norm 1. The correlation of and is given by Finally, the normalization constant evaluates to

since the inner summation over evaluates to when , and zero otherwise.

It remains to show that equals the desired expression for . For a set , let be the string that assigns values and to elements inside and outside , respectively. Then , so

Corollary 2.2 (Approximate degree of AND).

Recall that denotes the function satisfying if and only if . If has degree at most , then for some , where is a random variable.

The expression on the right is lower bounded by the larger of and . In the large regime (), this bound is tight  [16, 6]

Proof.

Apply Theorem 1.1 to the all-ones weight vector w = (1, …, 1). ∎

Earlier constructions of dual polynomials for AND [16, 29, 10, 27] are quite different from our Corollary 2.2, and are based on real-valued polynomial interpolation. Specifically, for a carefully chosen set T of Hamming weights, the prior constructions consider a univariate polynomial ω, and they define the dual polynomial at x in terms of ω(|x|), where |x| denotes the Hamming weight of x. Clearly the resulting function has degree at most |T|. A fairly complicated calculation is required to show that, for an appropriate choice of T, defining the dual polynomial in this way ensures that AND_n captures a sufficiently large fraction of its ℓ₁-mass.

Corollary 2.3 (Weighted approximate degree of AND).

.

The proof uses the Paley-Zygmund inequality:

Lemma 2.4 (Paley-Zygmund inequality).

Let Z be any nonnegative random variable with finite variance. Then, for any 0 ≤ θ ≤ 1,

Pr[Z > θ·E[Z]] ≥ (1 − θ)² · E[Z]² / E[Z²].

Proof of Corollary 2.3.

We apply the Paley-Zygmund inequality to . First, and . Then

where the first equality follows from the sign-symmetry of . Applying Theorem 1.1 with yields the claim. ∎

3 Approximate Indistinguishability from Perfect Indistinguishability

In this section, we prove Theorem 1.2, which states that any pair of symmetric and perfectly k-wise indistinguishable distributions over {0,1}^n are also approximately indistinguishable against statistical tests that observe K of the n bits. We may and will assume without loss of generality that the statistical test is a symmetric function, meaning that it depends only on the Hamming weight of the observed bits of its input.

Let μ and ν denote an arbitrary pair of symmetric k-wise indistinguishable distributions over {0,1}^n. We will be interested in obtaining an upper bound on the statistical distance of their projections to any K indices of [n], namely on the advantage with which a symmetric function T distinguishes the projections of μ and ν onto any set S of size K. We can decompose T into a sum of tests E_0, …, E_K, where E_w outputs 1 if and only if the Hamming weight of its input is exactly w. Specifically, we decompose T as

(2)

where each coefficient is either zero or one. We will bound the distinguishing advantage of each E_w in the sum individually. This advantage is captured by a univariate function that expresses E_w in terms of the Hamming weight of its input, after shifting and scaling the Hamming weight to reside in the interval [−1, 1].

Fact 3.1.

Let be any set of size . There exists a univariate polynomial of degree at most such that the following holds. For all , where is a random string of Hamming weight .

Proof.

This statement is a simple extension of Minsky and Papert’s classic symmetrization technique [19]. Specifically, Minsky and Papert showed that for any multivariate polynomial p, there exists a univariate polynomial q of degree at most the total degree of p such that, for every Hamming weight w, q(w) equals the average of p over inputs of Hamming weight w. Apply this result to the test in question and rescale the argument as above. The fact then follows from the observation that the total degree of the test, viewed as a polynomial, is at most K, since this function is a K-junta. ∎
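
A numerical illustration (our own) of the symmetrization step: averaging a multivariate polynomial over each Hamming-weight level produces values that are exactly interpolated by a univariate polynomial of the same degree. Counting −1 entries as the Hamming weight is our convention here.

    # Numerical illustration (ours) of symmetrization: average a multivariate polynomial
    # over each Hamming-weight level (counting -1 entries) and check that a univariate
    # polynomial of the same degree interpolates the averages exactly.
    import itertools
    import numpy as np

    def symmetrize(p, K):
        levels = [[] for _ in range(K + 1)]
        for x in itertools.product([-1, 1], repeat=K):
            levels[sum(1 for v in x if v == -1)].append(p(x))
        return [float(np.mean(vals)) for vals in levels]

    K = 6
    avgs = symmetrize(lambda x: x[0] * x[1], K)              # a degree-2 polynomial
    fit = np.polyfit(range(K + 1), avgs, deg=2)              # degree-2 univariate fit
    print(np.allclose(np.polyval(fit, range(K + 1)), avgs))  # True: the fit is exact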

In particular, the value is a probability for every . Moreover, this probability must equal zero when the Hamming weight of is less than or greater than . Therefore has distinct zeros at the points , where

(3)

and so must have the form

(4)

for some factor that does not depend on the argument. (Footnote 9: These quantities also depend on the remaining parameters, but we omit those arguments from the notation as they will be fixed in the proof.) As the expression is a probability when evaluated at the scaled Hamming weights, the function is 1-bounded at those inputs. In fact, it is uniformly bounded on the whole interval:

Claim 3.2.

Assuming , for all .

The proof is in Section 3.4. Formula (4) and Claim 3.2 will be applied to show that has a good uniform polynomial approximation on the interval .

Lemma 3.3.

Assuming , there exists a degree- polynomial such that for all .

Lemma 3.3 is the main technical result of this section. It is proved in Section 3.1.

Proof of Theorem 1.2.

Now let T be a general distinguisher on K inputs. By Facts A.1 and A.2 (see the Appendix), T can be assumed to be a symmetric Boolean-valued function. We bound the distinguishing advantage as follows. Recalling that μ and ν are perfectly k-wise indistinguishable symmetric distributions over {0,1}^n, for any set S of size K we have:

Therefore, μ and ν are statistically K-wise indistinguishable with the error claimed in Theorem 1.2. ∎

3.1 Proof of Lemma 3.3

We will prove Lemma 3.3 by studying the Chebyshev expansion of . To this end we take a brief detour into Chebyshev polynomials and an even briefer one into Fourier analysis.

Chebyshev polynomials.

The Chebyshev polynomials are a family of real polynomials T_0, T_1, T_2, …, each 1-bounded on [−1, 1], with T_m having degree m. We extend the definition to negative indices by setting T_{−m} = T_m. The Chebyshev polynomials are orthogonal with respect to the measure dx/√(1 − x²) supported on [−1, 1]. Therefore every degree-d polynomial has a unique (symmetrized) Chebyshev expansion

where are the Chebyshev coefficients of .

The Chebyshev polynomials satisfy the following identity, which plays an important role in our analysis:

Fact 3.4.

T_a(x) · T_b(x) = (T_{a+b}(x) + T_{a−b}(x)) / 2.

This formula, together with the “base cases” T_0(x) = 1 and T_1(x) = x, specifies all Chebyshev polynomials.

We will also need the following form of Parseval’s identity for univariate polynomials.

Claim 3.5 (Parseval’s identity).

For every complex polynomial p, the sum of the squares of the magnitudes of the coefficients of p equals E[|p(ω)|²], where ω is a uniformly random complex number of magnitude 1.
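
A quick numerical check (our own) of this identity: averaging |p(ω)|² over sufficiently many roots of unity already reproduces the sum of squared coefficient magnitudes.

    # Numerical check (ours) of Claim 3.5: for a random degree-7 complex polynomial,
    # the average of |p(omega)|^2 over 16 roots of unity equals the sum of squared
    # coefficient magnitudes (any number of roots exceeding the degree suffices).
    import numpy as np

    rng = np.random.default_rng(0)
    coeffs = rng.normal(size=8) + 1j * rng.normal(size=8)             # c_0, ..., c_7
    omegas = np.exp(2j * np.pi * np.arange(16) / 16)
    values = np.polyval(coeffs[::-1], omegas)                         # p(omega) at each root of unity
    print(np.sum(np.abs(coeffs) ** 2), np.mean(np.abs(values) ** 2))  # the two agree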

Proof outline.

We will argue that the Chebyshev expansion of the function of interest has small weight on its high-order coefficients. Zeroing out those coefficients then yields a good low-degree approximation, as desired.

The upper bound on the Chebyshev coefficients is derived in two steps. The first step, which is of an algebraic nature, expresses the Chebyshev coefficients as the regular coefficients of a related polynomial. (Footnote 10: We suppress one parameter from the notation, as it remains constant throughout the proof.) We are interested in the coefficients of the derived polynomial, which represent the Chebyshev coefficients amplified by an exponential scaling factor.

The second step, which is analytic, upper bounds the magnitudes of the coefficients of the derived polynomial. The main tool is Parseval’s identity, which equates the sum of the squares of these coefficients with the average squared magnitude of the polynomial over the complex unit circle. We bound the maximum magnitude by explicitly analyzing this function on the circle. This step comprises the bulk of our proof.

The third step translates the bound on the squared 2-norm of the amplified coefficients into a tail bound on the Chebyshev coefficients by optimizing over a suitable value of the amplification parameter. This is analogous to the standard derivation of Chernoff–Hoeffding bounds by analysis of the moment generating function of the relevant random variable.
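
In generic form (our notation, not the paper's exact constants), this step reads:

\[
  \text{if}\quad \sum_{m} \hat c_m^{\,2}\, e^{2tm} \le B(t) \ \text{ for every } t > 0,
  \qquad\text{then}\qquad
  \sum_{m > d} \hat c_m^{\,2} \;\le\; \min_{t > 0}\, e^{-2td}\, B(t),
\]

where the ĉ_m denote the Chebyshev coefficients; choosing t near the minimizer yields an exp(−Ω(k²/K))-type tail bound, exactly as one optimizes the free parameter in a standard Chernoff–Hoeffding argument.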

We now sketch how this outline is executed for the special case where n tends to infinity while k and K remain fixed. Although this setting is technically much easier, it allows us to highlight the main conceptual points of our argument. The analysis for finite n can be viewed as an approximation of this proof strategy.

Sketch of the limiting case n → ∞.

By the expansion (4) of , as tends to infinity converges uniformly to the function

as this corresponds to Fact 3.1 when the bits of the string are independent and -biased. As is a probability for every , Claim 3.2 follows immediately.

Step 1. Our algebraic treatment of the Chebyshev transform yields that the Chebyshev coefficient of is the -th regular coefficient of the polynomial

(5)

Step 2. The evaluation of the polynomial at satisfies the identity

(6)

where . This happens to equal

(7)

and is in particular uniformly bounded by for all . This similarity between and is the crux of our analysis.

Step 3. By Parseval’s identity, after suitable shifting and cancellation, the amplified sum of Chebyshev coefficients is upper bounded by . Therefore the tail can have value at most . This upper bound holds for all , and plugging in the approximate minimizer yields a bound of the desired form .

Outline of the general case.

We now give the outline of our full proof for the general case, along with the relevant technical statements that we use to prove our main upper bound. Identity (5) generalizes to the following statement:

Claim 3.6.

The Chebyshev coefficient of is the -th regular coefficient of the polynomial

where is as in Equation (4).

The general form of identity (6) is:

Claim 3.7.

For , , and ,

where .

Owing to the second term in , there is no identity analogous to (7) when is finite and has zeros inside . Nevertheless, can be uniformly bounded either by a sufficiently small multiple of , or a fixed quantity that is constant in the parameter range of interest.

Claim 3.8.

Assume and . Then

We now prove Lemma 3.3. Claim 3.6 is proved in Section 3.2. Claim 3.7 is proved in Section 3.3. Claims 3.2 and 3.8 are proved in Section 3.4 as the proofs share the same structure.

Fact 3.9.

.

Proof.

By Fact 3.1, both sides are polynomials of degree at most K that agree on more than K points, so they are identical. ∎

Proof of Lemma 3.3.

By Fact 3.9 we may and will assume that . Let