
We study the relationship between various one-way communication complexity measures of a composed function with the analogous decision tree complexity of the outer function. We consider two gadgets: the AND function on 2 inputs, and the Inner Product on a constant number of inputs. Let IP denote Inner Product on 2b bits. 1) If f is a total Boolean function that depends on all of its inputs, the bounded-error one-way quantum communication complexity of f ∘ IP equals Ω(n(b-1)). 2) If f is a partial Boolean function, the deterministic one-way communication complexity of f ∘ IP is at least Ω(b · D_dt^→(f)), where D_dt^→(f) denotes the non-adaptive decision tree complexity of f. For our quantum lower bound, we show a lower bound on the VC-dimension of f ∘ IP, and then appeal to a result of Klauck [STOC'00]. Our deterministic lower bound relies on a combinatorial result due to Frankl and Tokushige [Comb.'99]. It is known due to a result of Montanaro and Osborne [arXiv'09] that the deterministic one-way communication complexity of f ∘ XOR_2 equals the non-adaptive parity decision tree complexity of f. In contrast, we show the following with the gadget AND_2. 1) There exists a function for which even the randomized non-adaptive AND decision tree complexity of f is exponentially large in the deterministic one-way communication complexity of f ∘ AND_2. 2) For symmetric functions f, the non-adaptive AND decision tree complexity of f is at most quadratic in the (even two-way) communication complexity of f ∘ AND_2. In view of the first point, a lower bound on non-adaptive AND decision tree complexity of f does not lift to a lower bound on one-way communication complexity of f ∘ AND_2. The proof of the first point above uses the well-studied Odd-Max-Bit function.


1 Introduction

Composed functions are important objects of study in the analysis of Boolean functions and computational complexity. For Boolean functions f : {0,1}^n → {0,1} and g : {0,1}^m → {0,1}, their composition f ∘ g : {0,1}^{nm} → {0,1} is defined as follows: (f ∘ g)(x^1, …, x^n) = f(g(x^1), …, g(x^n)). In other words, f ∘ g is the function obtained by first computing g on n disjoint inputs of m bits each, and then computing f on the n resultant bits. Composed functions have been extensively studied in the complexity theory literature, with respect to various complexity measures [BdW01, HLS07, Rei11, She12, She13, BT15, Tal13, Mon14, BK16, GJ16, AGJ17, GLSS19, BB20].

Of particular interest to us is the case when g is a communication problem (also referred to as a “gadget”). More precisely, let f : {0,1}^n → {0,1} and g : {0,1}^j × {0,1}^k → {0,1} be Boolean functions. Consider the following communication problem: Alice has input x = (x^1, …, x^n) and Bob has input y = (y^1, …, y^n), where (x^i, y^i) ∈ {0,1}^j × {0,1}^k for all i ∈ [n]. Their goal is to compute (f ∘ g)(x, y) := f(g(x^1, y^1), …, g(x^n, y^n)) using as little communication as possible. A natural protocol is the following: Alice and Bob jointly simulate an efficient query algorithm for f, using an optimal communication protocol for g to answer each query. Lifting theorems are statements that say this naive protocol is essentially optimal. Such theorems enable us to prove lower bounds in the rich model of communication complexity by proving typically easier-to-prove lower bounds in the query complexity (decision tree) model.

Various lifting theorems have been proved in the literature: lifting a query complexity measure to various one-sided zero communication complexity measures [GLM16], lifting parallel decision tree complexity to round-constrained communication complexity [dRNV16], lifting deterministic query complexity to deterministic communication complexity [RM99, GPW18, CKLM19, WYY17], lifting DAG-like query complexity to DAG-like communication complexity [GGKS20], lifting randomized query complexity to randomized communication complexity [GPW20], lifting parity decision tree complexity to deterministic communication complexity using the XOR gadget [HHL18], lifting AND-decision tree complexity to deterministic communication complexity using the AND gadget [KLMY20], a deterministic lifting theorem for the Equality gadget [LM19], deterministic and randomized lifting theorems for gadgets with small discrepancy [CFK21], etc.

In this work we are interested in the one-way communication complexity of composed functions. In this setting, a natural protocol is for Alice and Bob to simulate a non-adaptive decision tree for the outer function f, using an optimal one-way communication protocol for the inner function g to answer each query. Thus, the one-way communication complexity of f ∘ g is at most the non-adaptive decision tree complexity of f times the one-way communication complexity of g.
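The naive simulation above can be sketched in code. The following is a toy illustration only: it uses a hypothetical 3-bit majority outer function and the 2-bit AND gadget (for which Alice's one-way message is simply her bit), neither of which is prescribed by the text.

```python
# Toy sketch of the naive one-way protocol for f∘g:
# Alice and Bob fix a non-adaptive query set for f in advance; Alice sends
# one gadget message per queried copy of g, and Bob finishes the computation.

def maj3(bits):  # outer function f: majority on 3 bits (illustrative choice)
    return int(sum(bits) >= 2)

def and_gadget_message(x_i):  # Alice's one-way message for one AND_2 gadget
    return x_i                # for AND_2, Alice just sends her bit

def simulate_one_way(x, y, query_set):
    # Alice: one message per non-adaptively queried inner instance.
    messages = {i: and_gadget_message(x[i]) for i in query_set}
    # Bob: evaluates each queried gadget, then the outer function.
    inner = [messages[i] & y[i] for i in query_set]
    return maj3(inner)

x, y = [1, 0, 1], [1, 1, 1]
# maj3 depends on all of its inputs, so the query set is all three indices.
assert simulate_one_way(x, y, [0, 1, 2]) == maj3([a & b for a, b in zip(x, y)])
```

The total cost is (size of the query set) × (one-way cost of the gadget), matching the upper bound stated above.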

Lifting theorems in the one-way model are less studied than in the two-way model. Montanaro and Osborne [MO09] observed that the deterministic one-way communication complexity of f ∘ XOR equals the non-adaptive parity decision tree complexity of f. Thus, non-adaptive parity decision tree complexity lifts “perfectly” to deterministic one-way communication complexity with the XOR gadget. Kannan et al. [KMSY18] showed that under uniformly distributed inputs, bounded-error non-adaptive parity decision tree complexity lifts to one-way bounded-error distributional communication complexity with the XOR gadget. Hosseini, Lovett and Yaroslavtsev [HLY19] showed that randomized non-adaptive parity decision tree complexity lifts to randomized communication complexity with the XOR gadget in the multi-player one-way broadcasting model.

We explore the tightness of the naive communication upper bound for two different choices of the gadget g: the Inner Product function, and the two-input AND function. For each choice of g, we compare the one-way communication complexity of f ∘ g with an appropriate type of non-adaptive decision tree complexity of f. Below, we motivate and state our results for each choice of the gadget.

Let Q→cc,ε, R→cc,ε and D→cc denote quantum ε-error, randomized ε-error and deterministic one-way communication complexity, respectively. When we allow the parties to share an arbitrary input-independent entangled state in the beginning of the protocol, we denote the one-way quantum ε-error communication complexity by Q∗,→cc,ε (see Section 2.4 for formal definitions). Let D→dt denote deterministic non-adaptive decision tree complexity (see Section 2.3 for a formal definition). For an integer b ≥ 2, let IP denote the Inner Product Modulo 2 function, which outputs the parity of the bitwise AND of two b-bit input strings (see Definition 2.1). Let f : {0,1}^n → {0,1} denote the outer function, and let f ∘ IP denote the function obtained by composing f with n independent copies of IP. Our first result shows that if f is a total function that depends on all of its input bits, the quantum (and hence also randomized) bounded-error one-way communication complexity of f ∘ IP is Ω(n(b−1)). Let Hbin denote the binary entropy function.

Theorem 1.1.

Let f : {0,1}^n → {0,1} be a total Boolean function that depends on all its inputs (i.e., it is not a junta on a strict subset of its inputs), and let ε ∈ [0, 1/2). Let IP denote the Inner Product function on 2b input bits for b ≥ 2. Then,

 Q→cc,ε(f ∘ IP) ≥ (1−Hbin(ε))·n(b−1), Q∗,→cc,ε(f ∘ IP) ≥ (1−Hbin(ε))·n(b−1)/2.
Remark 1.2.

In an earlier manuscript [San17], the second author proved a lower bound of Ω(n(b−1)) on a weaker complexity measure via information-theoretic tools. Kundu [Kun17] subsequently observed that a quantum lower bound can also be obtained by additionally using Holevo’s theorem. They also suggested to the second author via private communication that one might be able to recover these bounds using a result of Klauck [Kla00]. This is indeed the approach we take, and we thank them for suggesting this and pointing out the reference.

In order to prove Theorem 1.1, we appeal to a result of Klauck [Kla00, Theorem 3], who showed that the one-way ε-error quantum communication complexity of a function F is at least (1−Hbin(ε))·VC(F), where VC(F) denotes the VC-dimension of F (see Definition 2.12). In the case when the parties can share an arbitrary entangled state in the beginning of a protocol, Klauck showed a lower bound of (1−Hbin(ε))·VC(F)/2. We exhibit a set of inputs that witnesses the fact that VC(f ∘ IP) ≥ n(b−1).

Note that Theorem 1.1 is useful only when b ≥ 2. Indeed, no non-trivial lifting statement is true when b = 1, in which case IP is the AND function on 2 bits: taking f = AND_n, the composed function f ∘ IP equals AND_{2n}, whose one-way communication complexity is 1.

Our second result with the Inner Product gadget relates the deterministic one-way communication complexity of f ∘ IP to the deterministic non-adaptive decision tree complexity of f, where f is an arbitrary partial Boolean function.

Theorem 1.3.

Let b be an arbitrary positive integer, and let f be a partial Boolean function on {0,1}^n. Let IP be on 2b input bits. Then,

 D→cc(f ∘ IP) = Ω(b · D→dt(f)).

Given a protocol Π for f ∘ IP, our proof extracts a set of variables, of cardinality O(D→cc(f ∘ IP)/b), whose values always determine the value of f. The following claim, which follows directly from the work of Frankl and Tokushige [FT99], is a crucial ingredient in our proof.

Theorem 1.4.

Let . Let be such that for all , , . Then, .

We give the details of the derivation of Theorem 1.4 from the result of Frankl and Tokushige in Appendix C. Theorem 1.4 admits simple proofs when the alphabet size is large compared to the agreement parameter. See [GMWW17] for such a proof in the case of prime-power alphabet size; their proof is based on polynomials over finite fields. We give a different proof in Appendix D. (The proof there can be extended to a wider range of parameters; however, such statements would only enable us to prove a lifting theorem for a gadget of super-constant size. To prove Theorem 1.3 for constant-sized gadgets, we need the statement for a constant alphabet size.)

Remark 1.5.

An analogous lifting theorem for deterministic one-way protocols for total outer functions follows as a special case of both Theorem 1.1 and Theorem 1.3. However, the statement admits a simple and direct proof based on a fooling set argument.

Theorem 1.1 and Theorem 1.3 give lower bounds even when the gadget is a constant-sized Inner Product function. It is worth mentioning here that prior works that consider lifting theorems with the Inner Product gadget [CKLM19, WYY17, CFK21], albeit in the two-way model of communication complexity, require a super-constant gadget size.

Interactive communication complexity of functions of the form f ∘ AND has gained recent interest [KLMY20, Wu21]. In order to state and motivate our results for the case when the inner gadget is the 2-bit AND function, we first discuss some known results for the case when the inner gadget is the 2-bit XOR function.

Consider non-adaptive decision trees that are allowed to query arbitrary parities of the input variables. Denote the best cost of such a tree computing a Boolean function f by NAPDT(f). An efficient non-adaptive parity decision tree for f can easily be simulated to obtain an efficient deterministic one-way communication protocol for f ∘ XOR. Thus, D→cc(f ∘ XOR) ≤ NAPDT(f). Montanaro and Osborne [MO09] observed that this inequality is, in fact, tight for all Boolean functions. More precisely,

Claim 1.6 ([MO09]).

For all Boolean functions f,

 D→cc(f∘XOR)=NAPDT(f).

If the inner gadget were AND instead of XOR, then the natural analogous decision tree model to consider would be non-adaptive decision trees that have query access to arbitrary ANDs of subsets of input variables. Denote the best cost of such a tree computing a Boolean function f by NAADT(f). Clearly, the one-way communication complexity of f ∘ AND is bounded from above by NAADT(f), since a non-adaptive AND decision tree can be easily simulated to give a one-way communication protocol for f ∘ AND of the same cost. Thus, D→cc(f ∘ AND) ≤ NAADT(f). On the other hand, one can show that NAADT(f) ≤ 2^{D→cc(f ∘ AND)} (see Claim 2.16). Thus we have

 log NAADT(f) ≤ D→cc(f ∘ AND) ≤ NAADT(f). (1)

We explore whether an analogous statement to Claim 1.6 holds true when the inner function is AND instead of XOR. That is, is the second inequality in Equation (1) always tight?

We give a negative answer in a very strong sense, and exhibit a function for which the first inequality in Equation (1) is tight (up to an additive constant). We show that there is an exponential separation between these measures even if one allows randomization in the decision trees. We use RNAADT(f) to denote the randomized non-adaptive AND decision tree complexity of f.

Theorem 1.7.

There exists a function f : {0,1}^n → {0,1} such that RNAADT(f) is exponentially large in D→cc(f ∘ AND).

The function we use to witness the bound in Theorem 1.7 is a modification of the well-studied Odd-Max-Bit function, which we denote OMBn. This function outputs 1 if and only if the maximum index of the input string that contains a 0 is odd (see Definition 2.2). An O(log n)-cost one-way communication protocol for OMBn ∘ AND is easy to show, since Alice can simply send Bob the maximum index where her input is 0 (if it exists), and Bob can use this along with his input to conclude the parity of the maximum index where the bitwise AND of their inputs is 0. For the lower bound on RNAADT(OMBn), we exhibit a hard distribution under which no low-cost deterministic non-adaptive AND decision tree can compute OMBn with high accuracy.
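The easy protocol described above can be sketched as follows. This is only a minimal illustration of the idea (the function names and the 0-based encoding of "no zero" are our own choices, not from the text): Alice sends the maximum index holding a 0 in her input, and Bob combines it with his own.

```python
# Sketch of the O(log n)-bit one-way protocol for OMB_n ∘ AND.
# Indices are 1-based; message 0 encodes "no zero in my input".

def omb(z):  # OMB_n: 1 iff the maximum index holding a 0 is odd
    zeros = [i for i, bit in enumerate(z, start=1) if bit == 0]
    return int(bool(zeros) and max(zeros) % 2 == 1)

def alice_message(x):  # a value in {0, 1, ..., n}: ceil(log(n+1)) bits
    zeros = [i for i, bit in enumerate(x, start=1) if bit == 0]
    return max(zeros) if zeros else 0

def bob_output(message, y):
    # The max zero-index of the bitwise AND is the max over both players.
    zeros = [i for i, bit in enumerate(y, start=1) if bit == 0]
    m = max(message, max(zeros) if zeros else 0)
    return int(m % 2 == 1)  # m == 0 means the AND contains no zero at all

x, y = [1, 0, 1, 1], [1, 1, 1, 0]
z = [xi & yi for xi, yi in zip(x, y)]  # bitwise AND of the two inputs
assert bob_output(alice_message(x), y) == omb(z)
```

Correctness rests on the observation that a coordinate of the bitwise AND is 0 exactly when at least one player has a 0 there, so the maximum zero-index of the AND is the maximum of the two players' maxima.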

Theorem 1.7 implies that, in contrast to the lifting theorem with the XOR gadget (Claim 1.6), a lower bound on the non-adaptive AND decision tree complexity of f does not lift to a one-way communication lower bound for f ∘ AND. However, we show that a statement analogous to Claim 1.6 does hold true for symmetric functions f, albeit with a quadratic factor, even when the communication measure is two-way communication complexity, denoted Dcc.

Theorem 1.8.

Let f : {0,1}^n → {0,1} be a symmetric function. Then,

 NAADT(f) = O((Dcc(f ∘ AND))^2).

In fact we prove a stronger bound, in which Dcc(f ∘ AND) above is replaced by log rank(M_{f∘AND}), where M_{f∘AND} denotes the communication matrix of f ∘ AND (see Section 2.4). That is, we show for symmetric functions f that

 NAADT(f) = O(log^2 rank(M_{f∘AND})). (2)

Since it is well known (see Equation (7)) that the communication complexity of a function is at least as large as the logarithm of the rank of its communication matrix, this implies Theorem 1.8. Among other things, Buhrman and de Wolf [BdW01] observed that the log-rank conjecture holds for symmetric functions composed with AND. In particular, they showed that if f is symmetric, then Dcc(f ∘ AND) = O(log rank(M_{f∘AND})). This was also observed recently by Wu [Wu21], who also showed other results regarding the communication complexity of f ∘ AND in connection with the log-rank conjecture. While we have a quadratically worse dependence in the RHS of Equation (2), our upper bound is on a complexity measure that can be exponentially larger than communication complexity in general, as demonstrated by Theorem 1.7.

Buhrman and de Wolf showed a lower bound on rank(M_{f∘AND}) for symmetric functions f. An upper bound on NAADT(f) implicitly follows from prior work on group testing [DR83], but we provide a self-contained probabilistic proof for completeness. Combining these two results yields Equation (2), and hence Theorem 1.8.

Suitable analogues of Theorem 1.7 and Theorem 1.8 can be easily seen to hold when the inner gadget is OR instead of AND. In this case, the relevant decision tree model is that of non-adaptive OR decision trees. Interestingly, these decision trees are studied in the seemingly different context of non-adaptive group testing algorithms. Non-adaptive group testing is an active area of research (see, for example, [CH08] and the references therein), and has additionally gained significant interest of late in view of the ongoing pandemic (see, for example, [ŽLG21]).

1.3 Organization

We introduce the necessary preliminaries in Section 2. In Section 3 we prove our results regarding the Inner Product gadget (Theorem 1.1 and Theorem 1.3). In Section 4 we prove our results regarding the AND gadget (Theorem 1.7 and Theorem 1.8). In Appendix A we show some results regarding the Addressing function, and we provide missing proofs from the main text in the remaining appendices.

2 Preliminaries

2.1 Notation

All logarithms in this paper are taken base 2. We use the notation [n] to denote the set {1, …, n}. We often identify subsets of [n] with their corresponding characteristic vectors in {0,1}^n; the view we take will be clear from context.

We now introduce function composition. Let f : {0,1}^n → {0,1} be a Boolean function and let g : {0,1}^j × {0,1}^k → {0,1} be a communication problem. Then f ∘ g denotes the function corresponding to the communication problem in which Alice is given input x = (x^1, …, x^n), Bob is given input y = (y^1, …, y^n), and their goal is to compute (f ∘ g)(x, y) := f(g(x^1, y^1), …, g(x^n, y^n)). We first define the Inner Product Modulo 2 function on 2b input bits, denoted IP (we drop the dependence of IP on b for convenience; the value of b will be clear from context).

Definition 2.1 (Inner Product Modulo 2).

For an integer b ≥ 1, define the Inner Product Modulo 2 function IP : {0,1}^b × {0,1}^b → {0,1} by

 IP(x1,…,xb,y1,…,yb)=⊕i∈[b](AND(xi,yi)).

Define f ∘ IP accordingly, with one copy of IP per input bit of f. If f is a partial function, so is f ∘ IP.
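Definition 2.1 and the composed function f ∘ IP can be written out directly. In the sketch below, the outer function is a hypothetical n-bit XOR chosen only for illustration:

```python
# Inner Product Modulo 2 (Definition 2.1) and the composition f ∘ IP.
from functools import reduce

def ip(x, y):
    # Parity of the bitwise AND of two b-bit strings.
    return reduce(lambda a, c: a ^ c, (xi & yi for xi, yi in zip(x, y)), 0)

def compose_ip(f, x_blocks, y_blocks):
    # f ∘ IP: apply IP to each of the n disjoint (x^i, y^i) block pairs,
    # then apply the outer function f to the n resulting bits.
    return f([ip(xb, yb) for xb, yb in zip(x_blocks, y_blocks)])

xor_n = lambda bits: reduce(lambda a, c: a ^ c, bits, 0)
# n = 2 blocks, b = 2 bits per block:
assert compose_ip(xor_n, [[1, 1], [0, 1]], [[1, 0], [1, 1]]) == 0
```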

Definition 2.2 (Odd-Max-Bit).

Define the Odd-Max-Bit function OMBn : {0,1}^n → {0,1} by

 OMBn(x) = {1 if max{i ∈ [n] : xi = 0} is odd; 0 otherwise. (3)

Define OMBn ∘ AND accordingly.

Remark 2.3.

In the literature, OMBn is typically defined with a 1 in the max of Equation (3) instead of 0. That function behaves very differently from our OMBn. For example, it is known that even the weakly unbounded-error communication complexity of OMB ∘ AND (under the standard definition of OMB) is polynomially large in n [BVW07]. In contrast, it is easy to show that even the deterministic one-way communication complexity of OMBn ∘ AND is only O(log n) with our definition (see Theorem 4.4).

Definition 2.4 (Binary entropy).

For p ∈ [0,1], the binary entropy of p, denoted Hbin(p), is defined to be the Shannon entropy of a random variable taking two distinct values with probabilities p and 1−p:

 Hbin(p):=plog1p+(1−p)log11−p.

In particular, Hbin(1/2) = 1, and we use the convention Hbin(0) = Hbin(1) = 0. Let S ⊆ {0,1}^n be an arbitrary subset of the Boolean hypercube, and let f : S → {0,1} be a partial Boolean function. If S = {0,1}^n, then f is said to be a total Boolean function. When not explicitly mentioned otherwise, we assume Boolean functions to be total.
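The binary entropy function of Definition 2.4 is straightforward to implement; the endpoint convention 0·log(1/0) = 0 (our reading of the standard convention) is handled explicitly:

```python
# Binary entropy H_bin(p) = p·log2(1/p) + (1−p)·log2(1/(1−p)).
import math

def h_bin(p):
    if p in (0.0, 1.0):   # convention: 0·log(1/0) = 0 at the endpoints
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

assert h_bin(0.5) == 1.0  # a fair coin carries exactly one bit of entropy
```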

2.2 Möbius expansion of Boolean functions

Every Boolean function f : {0,1}^n → {0,1} has a unique expansion as

 f=∑S⊆[n]˜f(S)ANDS, (4)

where ANDS denotes the AND of the input variables in S, and each f̃(S) is a real number. We refer to the functions ANDS as monomials, the expansion in Equation (4) as the Möbius expansion of f, and the real coefficients f̃(S) as the Möbius coefficients of f. It is known [Bei93] that the Möbius coefficients can be expressed as

 ˜f(S)=∑X⊆S(−1)|S∖X|f(X). (5)

Define the Möbius sparsity of f, denoted spar(f), to be the number of Möbius coefficients of f that are non-zero. That is,

 spar(f) = |{S ⊆ [n] : f̃(S) ≠ 0}|. (6)
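The inclusion-exclusion formula of Equation (5), and the sparsity count above, can be computed by brute force. The sketch below uses the toy function OR on 2 bits, whose Möbius expansion is x1 + x2 − x1·x2 (so its sparsity is 3):

```python
# Möbius coefficients via Equation (5): f~(S) = Σ_{X⊆S} (−1)^{|S\X|} f(X).
from itertools import chain, combinations

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def mobius_coefficients(f, n):
    # f maps the set of 1-coordinates of an input to {0, 1}.
    coeffs = {}
    for S in subsets(range(n)):
        coeffs[S] = sum((-1) ** (len(S) - len(X)) * f(set(X)) for X in subsets(S))
    return coeffs

def spar(f, n):
    return sum(1 for c in mobius_coefficients(f, n).values() if c != 0)

or2 = lambda ones: int(len(ones) > 0)  # OR on 2 bits
assert mobius_coefficients(or2, 2) == {(): 0, (0,): 1, (1,): 1, (0, 1): -1}
assert spar(or2, 2) == 3
```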

We require the following definition, which captures the number of realizable 0/1-patterns of the monomials in the Möbius support of f.

Definition 2.5 (Möbius pattern complexity).

Let f be a Boolean function, let f = Σ_{S∈S} f̃(S)·ANDS be its Möbius expansion, and let S denote its Möbius support (the family of sets with non-zero Möbius coefficients). For an input x ∈ {0,1}^n, define the pattern of x to be (ANDS(x))_{S∈S}. Define the Möbius pattern complexity of f, denoted PatM(f), by

 PatM(f) = |{P ∈ {0,1}^S : P = (ANDS(x))_{S∈S} for some x ∈ {0,1}^n}|.

When clear from context, we refer to the Möbius pattern complexity of f simply as the pattern complexity of f.
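Pattern complexity can be computed by enumerating all inputs. The sketch below reuses the toy OR function on 2 bits, whose Möbius support is {{1}, {2}, {1,2}}:

```python
# Brute-force Möbius pattern complexity (Definition 2.5): the number of
# distinct 0/1-patterns the support monomials take over all inputs.
from itertools import product

def and_S(S, x):
    return int(all(x[i] for i in S))

def pattern_complexity(support, n):
    patterns = {tuple(and_S(S, x) for S in support)
                for x in product((0, 1), repeat=n)}
    return len(patterns)

support_or2 = [(0,), (1,), (0, 1)]
# The four inputs yield patterns (0,0,0), (1,0,0), (0,1,0), (1,1,1).
assert pattern_complexity(support_or2, 2) == 4
```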

2.3 Decision trees and their variants

For a partial Boolean function f : S → {0,1} with S ⊆ {0,1}^n, the deterministic non-adaptive query complexity (alternatively, the non-adaptive decision tree complexity) D→dt(f) is the minimum integer k such that the following is true: there exist indices i1, …, ik ∈ [n] such that for every Boolean assignment to the input variables xi1, …, xik, the function f is constant on the inputs in S consistent with that assignment. It is easy to see that if f is a total function that depends on all input variables, then D→dt(f) = n.
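For small total functions this quantity can be found by exhaustive search over query sets, as in the following sketch (the example functions are our own illustrative choices):

```python
# Brute-force non-adaptive query complexity: the smallest k for which some
# k variables pin down f under every assignment to those variables.
from itertools import combinations, product

def nonadaptive_dt(f, n):
    for k in range(n + 1):
        for Q in combinations(range(n), k):
            ok = True
            for fixing in product((0, 1), repeat=k):
                vals = {f(x) for x in product((0, 1), repeat=n)
                        if all(x[i] == v for i, v in zip(Q, fixing))}
                if len(vals) > 1:  # f is not constant once Q is fixed
                    ok = False
                    break
            if ok:
                return k
    return n

xor2 = lambda x: x[0] ^ x[1]
dictator = lambda x: x[0]
assert nonadaptive_dt(xor2, 2) == 2   # depends on all its inputs
assert nonadaptive_dt(dictator, 2) == 1
```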

Definition 2.6 (Non-adaptive parity decision tree complexity).

Define the non-adaptive parity decision tree complexity of f : {0,1}^n → {0,1}, denoted by NAPDT(f), to be the minimum number of parities such that f can be expressed as a function of these parities. In other words, NAPDT(f) equals the minimal number k for which there exist sets S1, …, Sk ⊆ [n] such that the function value f(x) is determined by the values ⊕_{i∈Sj} xi for all j ∈ [k].

Definition 2.7 (Non-adaptive AND decision tree complexity).

Define the non-adaptive AND decision tree complexity of f : {0,1}^n → {0,1}, denoted by NAADT(f), to be the minimum number of monomials such that f can be expressed as a function of these monomials. In other words, NAADT(f) equals the minimal number k for which there exist sets S1, …, Sk ⊆ [n] such that the function value f(x) is determined by the values ANDSj(x) for all j ∈ [k]. We refer to such a set {S1, …, Sk} as a NAADT basis for f.

We also require a randomized variant of non-adaptive AND decision trees.

Definition 2.8 (Randomized non-adaptive AND decision tree complexity).

A randomized non-adaptive AND decision tree T computing f is a distribution over deterministic non-adaptive AND decision trees, with the property that a tree drawn from T outputs f(x) with probability at least 2/3, for all x. The cost of T is the maximum depth of a non-adaptive AND decision tree in its support. Define the randomized non-adaptive AND decision tree complexity of f, denoted by RNAADT(f), to be the minimum cost of a randomized non-adaptive AND decision tree that computes f.

We first note some simple observations about the non-adaptive AND decision tree complexity of Boolean functions.

Claim 2.9.

Let f be a Boolean function and let {S1, …, Sk} be a NAADT basis for f. Then, every monomial in the Möbius support of f equals ∏_{i∈T} ANDSi for some T ⊆ [k].

Proof.

Since {S1, …, Sk} is a NAADT basis for f, the values of ANDS1(x), …, ANDSk(x) determine the value of f(x). That is, we can express f as

 f=∑T⊆[k]bT∏i∈TANDSi∏j∉T(1−ANDSj),

for some real values bT. If a particular pattern of (ANDS1, …, ANDSk) is not attainable, we set bT = 0 for the corresponding subset T. Expanding this expression only yields monomials that are products of ANDSi’s from {S1, …, Sk}. The claim now follows since the Möbius expansion of a Boolean function is unique. ∎

Claim 2.10.

Let f be a Boolean function with Möbius sparsity spar(f). Then,

 log spar(f) ≤ NAADT(f) ≤ spar(f).

Proof.

The upper bound follows from the fact that knowing the values of all ANDs in the Möbius support of f immediately yields the value of f, by plugging these values into the Möbius expansion of f. That is, the Möbius support of f acts as a NAADT basis for f.

For the lower bound, let k := NAADT(f), and let {S1, …, Sk} be a NAADT basis for f. Claim 2.9 implies that every monomial in the Möbius expansion of f is a product of some of these ANDSi’s. Thus, the Möbius sparsity of f is at most 2^k, yielding the required lower bound. ∎

Every Boolean function f : {0,1}^n → {0,1} can be uniquely written as f = Σ_{S⊆[n]} f̂(S)·χS, where χS(x) := (−1)^{Σ_{i∈S} xi}. This representation is called the Fourier expansion of f, and the real values f̂(S) are called the Fourier coefficients of f. The Fourier sparsity of f is defined to be the number of non-zero Fourier coefficients of f.

Sanyal [San19] showed the following relationship between the non-adaptive parity decision tree complexity of a Boolean function and its Fourier sparsity.

Theorem 2.11 ([San19]).

Let f be a Boolean function with Fourier sparsity r. Then,

 NAPDT(f)=O(√rlogr).

This theorem is tight up to the logarithmic factor, witnessed by the Addressing function.

2.4 Communication complexity

The standard model of two-party communication complexity was introduced by Yao [Yao79]. In this model, there are two parties, say Alice and Bob, with inputs x ∈ X and y ∈ Y, respectively. They wish to jointly compute a function F(x, y) of their inputs, for some function F : S → {0,1} that is known to them, where S is a subset of X × Y. They use a communication protocol agreed upon in advance. The cost of the protocol is the number of bits exchanged in the worst case (over all inputs). Alice and Bob are required to output the correct answer for all inputs (x, y) ∈ S. The communication complexity of F is the best cost of a protocol that computes F, and we denote it by Dcc(F). See, for example, [KN97] for an introduction to communication complexity.

In a deterministic one-way communication protocol, Alice sends a single message M(x) to Bob. Then Bob outputs a bit depending on M(x) and y. The complexity of the protocol is the maximum number of bits a message contains, over all inputs to Alice. In a randomized one-way protocol, the parties share some common random bits R. Alice’s message is a function of x and R. Bob’s output is a function of y, Alice’s message, and R. The protocol is said to compute F with error ε if for every (x, y) ∈ S, the probability over R of the event that Bob’s output equals F(x, y) is at least 1−ε. The cost of the protocol is the maximum number of bits contained in Alice’s message, over all x and R. In the one-way quantum model, Alice sends Bob a quantum message, after which Bob performs a projective measurement and outputs the measurement outcome. Depending on the model of interest, Alice and Bob may or may not share an arbitrary input-independent entangled state for free. As in the randomized setting, a protocol computes F with error ε if Bob’s output equals F(x, y) with probability at least 1−ε for all (x, y) ∈ S.

The deterministic (ε-error randomized, ε-error quantum, ε-error quantum with entanglement, respectively) one-way communication complexity of F, denoted by D→cc(F) (R→cc,ε(F), Q→cc,ε(F), Q∗,→cc,ε(F), respectively), is the minimum cost of any deterministic (ε-error randomized, ε-error quantum, ε-error quantum with entanglement, respectively) one-way communication protocol for F.

Total functions F whose domain is X × Y induce a communication matrix M_F, whose rows are indexed by strings in X, whose columns are indexed by strings in Y, and whose (x, y)’th entry equals F(x, y). It is well known (see, for instance, [KN97]) that

 logrank(MF)≤Dcc(F)≤rank(MF), (7)

where denotes real rank. One of the most famous conjectures in communication complexity is the log-rank conjecture, due to Lovász and Saks [LS88], that proposes that the communication complexity of any Boolean function is polylogarithmic in its rank, i.e. the first inequality in Equation (7) is always tight up to a polynomial dependence.

Buhrman and de Wolf [BdW01] observed that the Möbius sparsity of a Boolean function f equals the rank of the communication matrix of f ∘ AND. In view of the first inequality in Equation (7), this yields

 Dcc(f∘AND)≥log(spar(f)). (8)
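The Buhrman–de Wolf observation can be checked numerically on a toy example. The sketch below (assuming numpy is available) uses OR on 2 bits, whose Möbius expansion x1 + x2 − x1·x2 has sparsity 3:

```python
# Numerical sanity check: rank(M_{f∘AND}) equals the Möbius sparsity of f.
from itertools import product
import numpy as np

or2 = lambda z: int(any(z))  # toy outer function: OR on 2 bits

def comm_matrix_f_and(f, n):
    # Rows indexed by Alice's inputs x, columns by Bob's inputs y;
    # entry (x, y) is f applied to the bitwise AND of x and y.
    inputs = list(product((0, 1), repeat=n))
    return np.array([[f([xi & yi for xi, yi in zip(x, y)]) for y in inputs]
                     for x in inputs])

M = comm_matrix_f_and(or2, 2)
assert np.linalg.matrix_rank(M) == 3  # = spar(OR_2)
```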

We require the definition of the Vapnik-Chervonenkis (VC) dimension [VC71].

Definition 2.12 (VC-dimension).

Consider a function F : X × Y → {0,1} with communication matrix M_F. A subset C of columns of M_F is said to be shattered if each of the 2^{|C|} patterns of 0’s and 1’s is attained by some row of M_F when restricted to the columns in C. The VC-dimension of F, denoted VC(F), is the maximum size of a shattered subset of columns of M_F.
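Definition 2.12 can be turned directly into a brute-force procedure for small matrices, as in this sketch:

```python
# Brute-force VC-dimension of a 0/1 matrix: the largest set of columns on
# which the rows realize all 2^k patterns of 0's and 1's.
from itertools import combinations

def vc_dimension(M):
    n_cols = len(M[0])
    for k in range(n_cols, 0, -1):  # try large column sets first
        for cols in combinations(range(n_cols), k):
            patterns = {tuple(row[c] for c in cols) for row in M}
            if len(patterns) == 2 ** k:  # all patterns attained: shattered
                return k
    return 0

# The 4x2 matrix whose rows enumerate {0,1}^2 shatters both columns:
M = [[0, 0], [0, 1], [1, 0], [1, 1]]
assert vc_dimension(M) == 2
```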

Klauck [Kla00] showed that the one-way quantum communication complexity of a function F is bounded from below by the VC-dimension of F.

Theorem 2.13 ([Kla00, Theorem 3]).

Let F : X × Y → {0,1} be a Boolean function. Then,

 Q→cc,ε(F) ≥ (1−Hbin(ε))·VC(F), Q∗,→cc,ε(F) ≥ (1−Hbin(ε))·VC(F)/2.

2.5 Pattern complexity and one-way communication complexity

In this section we observe that the logarithm of the pattern complexity, PatM(f), of a Boolean function f equals the deterministic one-way communication complexity of f ∘ AND. We also give bounds on PatM(f) in terms of NAADT(f). As a consequence we also show that log NAADT(f) ≤ D→cc(f ∘ AND) ≤ NAADT(f).

Claim 2.14.

Let f be a Boolean function. Then,

 D→cc(f∘AND)=⌈log(PatM(f))⌉.

It is well known that the one-way communication complexity of a function equals the logarithm of the number of distinct rows in its communication matrix. It is also not hard to show that the number of distinct rows in the communication matrix of f ∘ AND equals the pattern complexity of f. Together these prove Claim 2.14. For completeness we provide a self-contained proof in Appendix E.
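The distinct-rows fact underlying Claim 2.14 is easy to check on a toy example. The sketch below uses XOR on 2 bits; its Möbius support is {{1}, {2}, {1,2}} (from the expansion x1 + x2 − 2·x1·x2), whose attainable patterns are (0,0,0), (1,0,0), (0,1,0), (1,1,1):

```python
# Count the distinct rows of M_{f∘AND} for f = XOR_2 and compare with
# ceil(log PatM(f)), per Claim 2.14.
from itertools import product
from math import ceil, log2

xor2 = lambda z: z[0] ^ z[1]

inputs = list(product((0, 1), repeat=2))
rows = {tuple(xor2([xi & yi for xi, yi in zip(x, y)]) for y in inputs)
        for x in inputs}

assert len(rows) == 4               # = Pat_M(XOR_2)
assert ceil(log2(len(rows))) == 2   # = D_cc^→(XOR_2 ∘ AND)
```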

Next we show that the pattern complexity of f is bounded from below by the non-adaptive AND decision tree complexity of f.

Claim 2.15.

Let f be a Boolean function. Then,

 PatM(f) ≥ NAADT(f).

Proof.

Let k := NAADT(f), and let {S1, …, Sk} be a NAADT basis for f with Si ≠ Sj for all i ≠ j. For each set Si we define the following string Xi ∈ {0,1}^n:

 Xi(ℓ) = {1 if ℓ ∈ Si; 0 otherwise.

Consider two indices i ≠ j ∈ [k]. By definition, ANDSi(Xi) = 1 and ANDSj(Xj) = 1. Since Si ≠ Sj, either Si ⊄ Sj or Sj ⊄ Si. If Si ⊄ Sj, then ANDSi(Xj) = 0. Similarly, if Sj ⊄ Si, then ANDSj(Xi) = 0. Thus, we have

 (ANDSi(Xi),ANDSj(Xi))≠(ANDSi(Xj),ANDSj(Xj)).

Hence each of the strings X1, …, Xk induces a different pattern for f, concluding the proof since the number of strings chosen equals NAADT(f). ∎

Combining Claim 2.14 and Claim 2.15, we have the following claim.

Claim 2.16.

Let f be a Boolean function. Then,

 log NAADT(f) ≤ D→cc(f ∘ AND) ≤ NAADT(f).

Proof.

For the upper bound on D→cc(f ∘ AND), let {S1, …, Sk} be a NAADT basis for f, where k = NAADT(f). By Claim 2.9, every monomial in the Möbius support of f is a product of some of these ANDSi’s. Since there are at most 2^k possible values for (ANDS1(x), …, ANDSk(x)), and since these completely determine the pattern of any given input x, we have

 PatM(f) ≤ 2^{NAADT(f)},

which proves the required upper bound in view of Claim 2.14.

The lower bound follows from Claim 2.14 and Claim 2.15, since we have

 D→cc(f ∘ AND) = ⌈log PatM(f)⌉ ≥ log NAADT(f). ∎

3 Composition with Inner Product

In this section we prove Theorem 1.1 and Theorem 1.3, which are our results regarding the quantum and deterministic one-way communication complexities of functions composed with a small Inner Product gadget.

3.1 Quantum complexity

In this section, we prove Theorem 1.1, which gives a lower bound on the quantum one-way communication complexity of f ∘ IP for total functions f that depend on all of their inputs.

Proof of Theorem 1.1.

By Theorem 2.13, it suffices to show that VC(f ∘ IP) ≥ n(b−1). Since f is a function that depends on all its input variables, the following holds. For each index i ∈ [n], there exist inputs

 z(i,0) =z(i)1,…,z(i)i−1,0,z(i)i+1,…,z(i)n, z(i,1) =z(i)1,…,z(i)i−1,1,z(i)i+1,…,z(i)n

such that f(z(i,0)) ≠ f(z(i,1)). That is, z(i,0) and z(i,1) have different function values, but differ only on the i’th bit.

For each i ∈ [n] and j ∈ {2, …, b}, define a string y(i,j) ∈ {0,1}^{nb} as follows. For all k ∈ [n] and ℓ ∈ [b],

 y(i,j)k,ℓ=⎧⎪⎨⎪⎩z(i)kif k≠i and ℓ=11if k=i and ℓ=j0otherwise.

That is, for k ≠ i, the k’th block of y(i,j) is (z(i)k, 0, …, 0), and the i’th block of y(i,j) is the indicator vector of position j. Consider the set of n(b−1)-many columns of M_{f∘IP} indexed by the strings y(i,j), one for each pair (i,j). We now show that this set of columns is shattered. Consider an arbitrary string

 c=c1,2,…,c1,b,…,cn,2,…,cn,b∈{0,1}n(b−1).

We show below the existence of a row that yields this string on restriction to the columns described above. For each i ∈ [n], let bi ∈ {0,1} be such that f(z(i,a)) = a ⊕ bi for each a ∈ {0,1}; such a bi exists since f(z(i,0)) ≠ f(z(i,1)). Define a string x ∈ {0,1}^{nb} as follows. For all i ∈ [n] and j ∈ {2, …, b},

 xi,1 = 1, xi,j = {ci,j if bi = 0; 1−ci,j if bi = 1.

That is, the first element of each block of x is 1, and the remaining part of the i’th block equals either the string (ci,2, …, ci,b) or its bitwise negation, depending on the value of bi.

To complete the proof, we claim that the row of M_{f∘IP} corresponding to the string x equals the string c when restricted to the columns indexed by the y(i,j)’s. To see this, fix i ∈ [n] and j ∈ {2, …, b}, and consider (f ∘ IP)(x, y(i,j)). For each k with k ≠ i, the inner product of the k’th block of x with the k’th block of y(i,j) equals z(i)k, since xk,1 = 1, the first element of the k’th block of y(i,j) equals z(i)k, and all other elements of that block are 0 by definition. In the i’th block of y(i,j), only the j’th element is non-zero, and equals 1 by definition. Moreover, xi,j = ci,j if bi = 0, and equals 1−ci,j otherwise. Hence, the inner product of the i’th blocks of x and y(i,j) equals ci,j if bi = 0, and equals 1−ci,j otherwise. Thus, the string obtained on taking the block-wise inner product of x and y(i,j) equals

 z(i)1,…,z(i)i−1,ci,j,z(i)i+1,…,z(i)n if bi=0 z(i)1,…,z(i)i−1,1−ci,j,z(i)i+1,…,z(i)n if bi=1.

By our definitions of z(i,0), z(i,1) and bi for each i ∈ [n], it follows that the value of f when applied to either of these inputs equals ci,j. This concludes the proof. ∎

3.2 Deterministic complexity

In this section we prove Theorem 1.3, which gives a lower bound on the deterministic one-way communication complexity of f ∘ IP even for partial functions f. A crucial ingredient of our proof is Theorem 1.4, which we derive from a result of Frankl and Tokushige [FT99] in Appendix C. We now proceed to the proof of Theorem 1.3.

Proof of Theorem 1.3.

Let F := f ∘ IP, and let Π be an optimal one-way deterministic protocol for F of complexity c := D→cc(F). Π induces a partition of Alice’s input set {0,1}^{nb} into at most 2^c parts; each part corresponds to a distinct message. There are (2^b − 1)^n inputs to Alice in which every block is a non-zero b-bit string. Let A be the set of those inputs. Identify A with [2^b − 1]^n. By the pigeon-hole principle, there exists one part in the partition induced by Π that contains at least (2^b − 1)^n/2^c strings in A. Theorem 1.4 (which is applicable since this quantity exceeds the bound in Theorem 1.4 for a suitable agreement parameter t = O(c/b)) implies that there are two strings x = (x^1, …, x^n) and x' = (x'^1, …, x'^n) in this part whose blocks agree on only O(c/b) many indices. Let V := {i ∈ [n] : x^i = x'^i}. Let z denote a generic input to f. We claim that for each Boolean assignment to the variables in V, f is constant on the inputs in its domain consistent with that assignment. This will prove the theorem, since querying the variables in V then determines f; thus D→dt(f) ≤ |V| = O(D→cc(F)/b). Towards a contradiction, assume that there exist inputs z and z' in the domain of f that agree on V and satisfy f(z) ≠ f(z'). We will construct a string y = (y^1, …, y^n) ∈ {0,1}^{nb} in the following way:

For i ∈ V: choose y^i such that IP(x^i, y^i) = zi (note that zi = z'i, since z and z' agree on V).

For i ∉ V: choose y^i such that IP(x^i, y^i) = zi and IP(x'^i, y^i) = z'i.

Note that we can always choose a y^i as above since for each i ∈ V, x^i = x'^i ≠ 0^b, and for each i ∉ V,