1 Introduction
1.1 Number On the Forehead and Simultaneous models
The Number On the Forehead (NOF) model is a multiparty communication model introduced by Chandra, Furst and Lipton [CFL83] that generalizes the two-player communication game of Yao [Yao79]. In this model, $k$ players are given inputs $x_1, \dots, x_k$ on which they want to compute some function $F(x_1, \dots, x_k)$. Each player $i$ sees all of the input, except $x_i$. The situation is as if input $x_i$ is written on the forehead of player $i$.
In order to collaboratively evaluate $F$, the players communicate by broadcasting bits according to a predetermined protocol. This protocol specifies whose turn it is to speak, and which bit is to be sent given the information exchanged so far and the input seen by the speaking player. It also determines when communication stops. At the end, all the players must be able to recover $F(x_1, \dots, x_k)$ from the input they see and the transcript of the exchange. The cost of the protocol on a given input is the number of exchanged bits, and the total cost is the worst-case cost over all inputs. The $k$-party deterministic communication complexity of $F$, denoted $D(F)$, is the cost of the most efficient protocol computing $F$.
In most settings, the $x_i$'s are $n$ bits long (for some parameter $n$) and $F : (\{0,1\}^n)^k \to Y$ for some finite range $Y$. In this case, the naive protocol is to broadcast first the entire input $x_1$ (this can be done by player 2), after which player 1 computes $F$ and sends the result to the other players. This protocol has cost $n + c$ (where $c$ is the number of bits required for sending $F(x_1, \dots, x_k)$), which proves $D(F) \leq n + c$. Consequently, a protocol will be said to be efficient if it has cost $\log^{O(1)} n$ (i.e. we seek an exponential speedup over the naive protocol).
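As a toy illustration of the naive protocol, the following sketch (our code; the function names are hypothetical, not from the paper) simulates the two broadcasts for a boolean-valued $F$, so the answer costs one extra bit on top of the $n$ broadcast bits:

```python
# Toy simulation of the naive NOF protocol: player 2 broadcasts x_1 (n bits),
# then player 1 (who now knows the whole input) announces F (1 bit).
from typing import Callable, List


def naive_protocol(inputs: List[int], n: int, f: Callable[..., int]):
    """Return (value of f, number of bits exchanged)."""
    # Player 2 sees every input except her own x_2; in particular she sees x_1.
    broadcast_x1 = inputs[0]      # n bits on the channel
    # Player 1 sees x_2, ..., x_k and learns x_1 from the broadcast.
    result = f(*inputs)           # 1 more bit to announce the boolean result
    cost = n + 1
    return result, cost


# Example: k = 3 players, n = 4 bits each, F = parity of all 12 input bits.
xs = [0b1010, 0b0110, 0b0001]
value, cost = naive_protocol(xs, n=4, f=lambda a, b, c: bin(a ^ b ^ c).count("1") % 2)
# value == 1 and cost == 5 here, matching the D(F) <= n + 1 bound.
```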
Among the many variants of the previous framework (randomized, quantum, etc.), we will be interested in the simultaneous (or Simultaneous Message Passing, SMP) model [Yao79, NW93, BKL95, PRS97], in which the players cannot speak to each other but instead send information to a referee. The referee does not know the players' inputs, and cannot give any information back. At the end, the referee must be able to recover $F(x_1, \dots, x_k)$ from what she obtained from the players. The simultaneous deterministic communication complexity is denoted $D^{\|}(F)$, and it always satisfies $D(F) \leq D^{\|}(F)$. It has often been easier to reason first in this weaker model for proving lower bounds [BGKL04, PRS97, BPSW05, BYJKS02]. It is also more suitable and fruitful for studying certain functions, such as Equality in the two-party setting [Yao79, Amb96, NS96, BK97, BCWdW01, GRd08, BGK15]. We will show in the next section that the simultaneous deterministic communication model is also closely connected to lower bound results for the complexity class $\mathsf{ACC}^0$.
1.2 The $\log n$ barrier problem and $\mathsf{ACC}^0$ lower bounds
The NOF model has proved to be of value in the study of many areas of computer science, such as branching programs [CFL83], Ramsey theory [CFL83], circuit complexity [HG91, BT94], quasirandom graphs [CT93], proof complexity [BPS07], etc. One of the most interesting connections, pointed out by Håstad and Goldmann [HG91] and refined in [BGKL04], is a way to derive lower bounds for the complexity class $\mathsf{ACC}^0$ (the class of functions computable by constant-depth polynomial-size circuits with unbounded fan-in And, Or, Not and $\mathsf{Mod}_m$ gates, where $\mathsf{Mod}_m$ outputs $1$ iff the sum of its inputs is divisible by $m$) from lower bounds in the simultaneous NOF model. More precisely, according to a result from Yao, Beigel and Tarui [Yao90, BT94], any function $F \in \mathsf{ACC}^0$ can be expressed as a depth-2 circuit whose top gate is a symmetric gate of quasipolynomial fan-in $2^{\log^c n}$, and each bottom gate is an And gate of fan-in $\log^d n$ (for some constants $c, d$). Consequently, for any partition of the input of $F$ between $k > \log^d n$ players in the simultaneous NOF model, there exists a partition of the And gates between the players such that each of them sees all the input bits she needs to evaluate the gates she received. The players can then send to the referee the number of their gates that evaluate to 1, which enables the referee to compute $F$. The total cost of this protocol is $O(k \log^c n)$. Conversely, any superpolylogarithmic lower bound in the simultaneous NOF model for a function $F$ and a partition of its input between $k = \log^{O(1)} n$ players would imply $F \notin \mathsf{ACC}^0$.
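The depth-2 simulation above can be sketched on a toy circuit. In the code below (our illustration; the gate sizes are invented and the symmetric top gate is taken to be parity), every And gate reads at most $k-1$ distinct rows, so it can be assigned to a player whose forehead row it does not touch; each player then reports only how many of her gates fired:

```python
# Toy simultaneous evaluation of a depth-2 circuit (symmetric gate of
# small-fan-in And gates) in the NOF model. Illustrative sketch only.
import random

random.seed(0)
k, n = 4, 6                                  # players / bits per row
rows = [[random.randint(0, 1) for _ in range(n)] for _ in range(k)]

# Each gate = list of (row, column) variables, touching at most k - 1 rows.
gates = []
for _ in range(20):
    touched = random.sample(range(k), k - 1)
    gates.append([(i, random.randrange(n)) for i in touched])

top = lambda count: count % 2                # a symmetric top gate (parity)

# Assign every gate to a player whose forehead row it does not read;
# that player can fully evaluate the gate and counts how many are true.
counts = [0] * k
for g in gates:
    player = min(set(range(k)) - {i for i, _ in g})
    if all(rows[i][j] == 1 for i, j in g):
        counts[player] += 1

# Each player sends her count (O(log #gates) bits); the referee sums the
# counts and applies the symmetric top gate.
referee_output = top(sum(counts))
direct_output = top(sum(all(rows[i][j] == 1 for i, j in g) for g in gates))
assert referee_output == direct_output
```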
Separating $\mathsf{ACC}^0$ from other complexity classes is a central question in complexity theory. It is conjectured that $\mathsf{ACC}^0$ does not contain the majority function Maj, but the only result known so far is $\mathsf{NEXP} \not\subseteq \mathsf{ACC}^0$ [Wil14]. The aforementioned connection with communication complexity has motivated the search for a function which is hard to compute for $k \geq \log n$ players in the simultaneous NOF model. This problem is called the $\log n$ barrier.
Obtaining lower bounds in the NOF model is a challenging task, as the current methods become very weak when $k \geq \log n$. The only general lower bound technique known so far is the discrepancy method and its variants [BNS92, CT93, Raz00, She11]. One of its early applications was an $\Omega(n/4^k)$ lower bound on the randomized complexity of the Generalized Inner Product (Gip) function [BNS92]. A long series of generalizations and improvements of the discrepancy method subsequently led to an $\Omega(\sqrt{n}/(2^k k))$ (resp. $\Omega(n/4^k)$) lower bound on the randomized (resp. deterministic) complexity of the Disjointness (Disj) function [Tes03, BPSW06, CA08, LS09, BH12, She16, She14, RY15]. It might seem like other lower bound arguments could prove that Gip and Disj remain hard for $k \geq \log n$ players. However, surprising non-simultaneous [Gro94, ACFN15] and simultaneous [BGKL04, ACFN15] protocols proved that the aforementioned lower bounds are nearly optimal, and that these two functions cannot break the $\log n$ barrier. Very recently, Podolskii and Sherstov [PS17] determined the randomized complexity of Gip and Disj exactly (up to constant factors) when $k \geq \log n$, and built a function having complexity $\Theta(\log n)$ independently of $k$. Although these last results do not break the $\log n$ barrier, they are the first superconstant lower bounds proved for explicit functions when $k \geq \log n$.
1.3 Composed Functions
An input to $k$ players in the NOF model can be visualized as a boolean matrix $M$ of size $k \times n$ where row $i$ is the number on the forehead of player $i$. The protocols known so far for Gip and Disj strongly rely on the particular way these functions act on matrix $M$. They both consist in applying the And function on each of the $n$ columns of $M$, followed by the $\mathsf{Mod}_2$ (for Gip) or Nor (for Disj) function on the resulting $n$ bits. Since Gip and Disj do not break the $\log n$ barrier, a natural move has been to try other inner and outer functions, and to increase the number of columns on which each inner function applies. These are called the composed functions, formally defined below and depicted in Figure 1.
Definition 1 (Boolean input version).
Fix a block-width parameter $t \geq 1$, and consider functions $f : \{0,1\}^n \to \{0,1\}$ and $g_1, \dots, g_n$ where $g_j : \{0,1\}^{k \times t} \to \{0,1\}$. Given $M \in \{0,1\}^{k \times tn}$, the composed function $f \circ (g_1, \dots, g_n)$ for $k$ players outputs $f(g_1(M_1), \dots, g_n(M_n))$ where $M_j$ is the $j$-th block of width $t$ in the matrix representation $M$ of the input. When $g_1 = \dots = g_n = g$, we denote $f \circ (g_1, \dots, g_n)$ by $f \circ g$.
Both $\mathrm{Gip} = \mathsf{Mod}_2 \circ \mathrm{And}$ and $\mathrm{Disj} = \mathrm{Nor} \circ \mathrm{And}$ are composed functions for $t = 1$, with the additional property that $\mathsf{Mod}_2$, Nor and And are symmetric functions (i.e. invariant under any permutation of their input). Since the majority function Maj is conjectured to be outside of $\mathsf{ACC}^0$, Babai et al. [BKL95, BGKL04] suggested to look at compositions such as $\mathrm{Maj} \circ \mathrm{Maj}$ for breaking the $\log n$ barrier (where Maj outputs 1 iff at least half of the bits of the input block are set to 1).
Another way to look at composed functions of block-width $t$ is to interpret each subrow of each block as a number in $\Sigma = \{0, \dots, s-1\}$, where $s = 2^t$. This representation of the input as a matrix over some set $\Sigma$ is sometimes more convenient to use. Below, we reformulate Definition 1 using this point of view.
Definition 2 (Integer input version).
Fix an integer $s \geq 2$ and consider functions $f : \{0,1\}^n \to \{0,1\}$ and $g_1, \dots, g_n$ where $g_j : \Sigma^k \to \{0,1\}$ for $\Sigma = \{0, \dots, s-1\}$. Given $M \in \Sigma^{k \times n}$, the composed function $f \circ (g_1, \dots, g_n)$ for $k$ players outputs $f(g_1(M_1), \dots, g_n(M_n))$ where $M_j$ is the $j$-th column in the matrix representation $M$ of the input. When $g_1 = \dots = g_n = g$, we denote $f \circ (g_1, \dots, g_n)$ by $f \circ g$.
The set of all composed functions $f \circ g$ (resp. $f \circ (g_1, \dots, g_n)$) over $\Sigma$ is denoted $\mathsf{Any} \circ \mathsf{Any}_s$ (resp. $\mathsf{Any} \circ \overrightarrow{\mathsf{Any}}_s$). Similarly, $\mathsf{Sym} \circ \mathsf{Sym}_s$ is the set of $f \circ g$ for symmetric $f$ and symmetric $g$, $\mathsf{Sym} \circ \mathsf{Any}_s$ is the set of $f \circ g$ for symmetric $f$ and any $g$, etc. If $s = 2$ (which corresponds to block-width $t = 1$), we will drop the subscript and write $\mathsf{Sym} \circ \mathsf{Sym}$, $\mathsf{Sym} \circ \mathsf{Any}$, etc. We have for instance $\mathrm{Gip} \in \mathsf{Sym} \circ \mathsf{Sym}$ and $\mathrm{Disj} \in \mathsf{Sym} \circ \mathsf{Sym}$.
The first efficient protocol for composed functions with $\log n$ or more players was given by Grolmusz [Gro94]. It is a non-simultaneous protocol of polylogarithmic cost for any composed function in $\mathsf{Sym} \circ \mathrm{And}$ (the inner function is fixed to be And) when $k \geq \log n$. The study of composed functions with symmetric outer function was subsequently continued, as it captures many other interesting cases in communication complexity. Babai et al. [BKL95] first proposed $\mathrm{Maj} \circ \mathrm{Maj}$ as a candidate to break the $\log n$ barrier. However, in a subsequent work [BGKL04], they found a simultaneous protocol that applies to the compositions of a symmetric outer function with a compressible symmetric inner function, a subset of $\mathsf{Sym} \circ \mathsf{Sym}$ that contains $\mathrm{Maj} \circ \mathrm{Maj}$ and $\mathsf{Sym} \circ \mathrm{And}$ (roughly, a symmetric function is compressible if a player can send a short message from which its value can be recovered for any completion of the row she is missing; the Maj and And functions are compressible [BGKL04]). It has polylogarithmic cost when $k > \log n$. Later, Ada et al. [ACFN15] generalized this result to $\mathsf{Sym} \circ \mathsf{Any}$, with a simultaneous protocol of polylogarithmic cost, again for slightly more than $\log n$ players. The only protocol known so far for block-width $t > 1$ has been discovered by Chattopadhyay and Saks [CS14]. It has polylogarithmic cost for $\mathsf{Sym} \circ \mathsf{Any}_s$ when the number of players is large enough (which is efficient for small block-width $t$). However, it is not simultaneous in the deterministic setting (the authors showed how to make it simultaneous using shared randomness between the players). Thus, none of these previous results prevents composed functions of block-width as small as $t = 2$ from breaking the $\log n$ barrier in the SMP model. The goal of this paper is to rule out this possibility for all symmetric composed functions of constant block-width $t$.
1.4 Summary of Results and Comparison to Previous Protocols
Below, we describe our main results and summarize in Table 2 the complexity of all the known protocols for composed functions. Then, we review the main ideas used in the previous literature, and we explain how our approach differs from them.
Our results
In this paper, we describe the first deterministic simultaneous protocols for symmetric composed functions of block-width $t > 1$. Our result is divided into two parts. We first give (Section 3.1) a protocol of cost $O(k^s \log n)$ for $\mathsf{Sym} \circ \mathsf{Sym}_s$ when the number of players is large enough. We then build upon this result (Section 3.2) to give a simultaneous protocol for $\mathsf{Sym} \circ \mathsf{Any}_s$. Unlike the first protocol, this last result also handles different inner functions and it remains efficient even if $k$ is superpolylogarithmic.
Table 2: The known protocols for composed functions.

Protocol                          Simultaneous
Grolmusz [Gro94]                  No
Babai et al. [BGKL04]             Yes
Ada et al. [ACFN15]               Yes
Chattopadhyay and Saks [CS14]     No
This work                         Yes
Adjacent vertices of the hypercube. For block-width $t = 1$ and an input matrix $M$, denote by $N(v)$ the number of times column $v \in \{0,1\}^k$ occurs in $M$. Grolmusz [Gro94] noticed that if $v_1, \dots, v_m$ is a sequence of adjacent vertices of the hypercube (i.e. $v_{j+1}$ differs from $v_j$ by exactly one coordinate) then $N(v_m) = \sum_{j=1}^{m-1} (-1)^{m-1-j}\big(N(v_j) + N(v_{j+1})\big) + (-1)^{m-1} N(v_1)$. Moreover, if position $i_j$ is the coordinate at which $v_j$ and $v_{j+1}$ differ, then the quantity $N(v_j) + N(v_{j+1})$ is known by player $i_j$ (who sees every row but $i_j$, and thus cannot distinguish $v_j$ from $v_{j+1}$). This leads to a straightforward simultaneous protocol of cost $O(k \log n)$ for computing $N(v_m)$, provided that $N(v_1)$ is known by the referee. In his initial work, Grolmusz [Gro94] gave a non-simultaneous way to find some initial $v_1$ with known $N(v_1)$. Ada et al. [ACFN15] noticed later that this step can be made simultaneous using the protocol of Babai et al. [BGKL04], and that the idea of Grolmusz (initially designed for $\mathsf{Sym} \circ \mathrm{And}$) easily adapts to $\mathsf{Sym} \circ \mathsf{Any}$. Unfortunately, this "hypercube view" does not generalize to block-width $t > 1$: for each $v \in \Sigma^k$ and each position $i$, the number of vertices that differ from $v$ only at position $i$ is now $2^t - 1$. It is easy to see that writing a similar telescoping sum as above, in which each term would be known by a player, is no longer possible.
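The alternating telescoping identity behind this protocol can be sanity-checked numerically. The following sketch (our code, written under the stated convention that consecutive path vertices differ in exactly one coordinate) verifies it on a random matrix and a random hypercube path:

```python
# Numerical check of the telescoping identity used by Grolmusz's protocol.
import random

random.seed(1)
k, n = 5, 40
# Columns of the input matrix, each a vertex of the k-dimensional hypercube.
M = [tuple(random.randint(0, 1) for _ in range(k)) for _ in range(n)]


def N(v):
    """Number of occurrences of column v in M."""
    return M.count(tuple(v))


# Build a path v_1, ..., v_m of adjacent hypercube vertices.
path = [[0] * k]
for _ in range(7):
    w = list(path[-1])
    w[random.randrange(k)] ^= 1          # flip one coordinate
    path.append(w)

# Each sum S_j = N(v_j) + N(v_{j+1}) is known by the player on whose forehead
# the flipped coordinate lies (she cannot tell v_j and v_{j+1} apart).
m = len(path)
S = [N(path[j]) + N(path[j + 1]) for j in range(m - 1)]
recovered = (sum((-1) ** (m - 2 - j) * S[j] for j in range(m - 1))
             + (-1) ** (m - 1) * N(path[0]))
assert recovered == N(path[-1])          # the alternating sum telescopes
```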
Counting up to symmetry. Given a matrix $M$ over $\Sigma = \{0, \dots, s-1\}$, for all $d = (d_1, \dots, d_{s-1})$ denote by $N_d$ the number of columns of $M$ with exactly $d_a$ occurrences of each symbol $a \in \{1, \dots, s-1\}$ (we do not include $d_0$ since it is always equal to $k - (d_1 + \dots + d_{s-1})$). These numbers provide less information than the $N(v)$'s defined above, but they still enable us to compute $f \circ g(M)$ for all $f \circ g \in \mathsf{Sym} \circ \mathsf{Sym}_s$. If $M$ is distributed between $k$ players in the NOF model (player $i$ does not see row $i$), a naive simultaneous protocol is to have each player $i$ send the number of columns which contain, from her point of view, exactly $d_a$ occurrences of each element $a$ (for all $d$). Babai et al. [BGKL04] analyzed this protocol in the case $s = 2$, and showed that it gives the referee enough information to recover the $N_d$'s, provided that $k$ is large enough. In Section 3.1, we extend this analysis to any $s$. The core of the proof, as in [BGKL04], is to define a specific equation (using the players' counts) whose only integral solution is the $N_d$'s.
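A minimal illustration (our code) of these occurrence-count statistics: it computes the $N_d$'s of a random matrix together with each player's view-based counts, which are exactly the messages of the naive protocol described above:

```python
# Counting columns "up to symmetry": the referee only needs, for each
# occurrence-count vector d, how many columns of M realize it.
import random
from collections import Counter

random.seed(2)
k, n, s = 5, 30, 3
M = [[random.randrange(s) for _ in range(n)] for _ in range(k)]  # k x n matrix


def signature(col):
    """Occurrence counts of 1, ..., s-1 in a column (the 0-count is implied)."""
    c = Counter(col)
    return tuple(c[a] for a in range(1, s))


columns = [tuple(M[i][j] for i in range(k)) for j in range(n)]
N = Counter(signature(c) for c in columns)              # the N_d's

# Player i's message: the same counts, computed while ignoring row i.
def player_message(i):
    return Counter(signature(c[:i] + c[i + 1:]) for c in columns)


messages = [player_message(i) for i in range(k)]
```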
The shifted basis technique. The only protocol [CS14] known prior to this work for block-width $t > 1$ is based on the following observation: given polynomial representations of the inner functions $g_j$ (over variables $x_1, \dots, x_k$, where $x_i$ corresponds to row $i$), each term involving strictly fewer than $k$ variables can be evaluated on input matrix $M$ by at least one player (in fact, by all the players that have one of the missing variables on their foreheads). The key idea of [CS14] is to get rid of the remaining terms by expressing the polynomials in a shifted basis in which all terms of degree $k$ will evaluate to $0$ on $M$ (shifting for instance the monomial $x_1 \cdots x_k$ by $b = (b_1, \dots, b_k)$ means replacing it with $(x_1 - b_1) \cdots (x_k - b_k)$). To this end, it suffices to find some $b$ that shares at least one coordinate in common with each column of $M$. Provided that $k$ is large enough, [CS14] showed that a randomly picked $b$ has this property with high probability. This gives rise to a simultaneous protocol for $\mathsf{Sym} \circ \mathsf{Any}_s$ if the players have access to a shared random string. In the deterministic setting (no shared randomness), it is not known how to make this protocol simultaneous.

Different inner functions, and reducing the number of players. The communication complexity is expected to decrease as $k$ grows (since the overlap of information among the players increases). However, this fact is not reflected in the cost of our first protocol (Section 3.1). This issue is closely related to that of having different inner functions $g_1, \dots, g_n$. Indeed, the problem of computing $f \circ g$ with $k$ players on a matrix $M$ can be changed into computing $f \circ (g'_1, \dots, g'_n)$ with the first $k' < k$ players on the submatrix $M'$ (first $k'$ rows of $M$), where $g'_j$ is obtained from $g$ by fixing the values occurring from row $k'+1$ to row $k$ in the $j$-th column of $M$ (note that the new functions $g'_j$ are still symmetric, but unknown to the referee). Our first protocol cannot handle different inner functions, but this issue will be solved in Section 3.2, where we describe a protocol for $\mathsf{Sym} \circ \mathsf{Any}_s$ based on a new use of the polynomial representations (different from [CS14]). We will show that each inner function can be represented in a (small) basis of symmetric functions (Section 2), which will allow us to split the problem of computing $f \circ (g_1, \dots, g_n)$ on $M$ into computing each basis element on a well-chosen matrix. This last step can be done with the initial protocol of Section 3.1.
2 Polynomial Representations for Symmetric Functions
Throughout this paper, $[n]$ will denote the set of integers $\{1, \dots, n\}$ and $\mathbb{F}_p$ is the finite field with $p$ elements. Furthermore, a function $g$ is said to be symmetric if it is invariant under any permutation of its input variables (i.e. for any input $x$ and permutation $\sigma$, we have $g(x_1, \dots, x_k) = g(x_{\sigma(1)}, \dots, x_{\sigma(k)})$).
The protocol designed in Section 3.2 for composed functions requires a concise polynomial representation of the inner functions $g_1, \dots, g_n$. Informally, we look for a field $\mathbb{F}$ and polynomials $P_1, \dots, P_n$ with variables $x_1, \dots, x_k$, such that:
(a) $P_j(x) = g_j(x)$ for all $x \in \Sigma^k$ and $j \in [n]$,

(b) the order of $\mathbb{F}$ is at least $n + 1$ (so that the set of values taken by $g_1 + \dots + g_n$ can be embedded into $\mathbb{F}$),

(c) the polynomials can be represented in a basis of size polynomial in $k$ when $s$ is constant,

(d) the values of the coefficients of the polynomials in this basis are less than $n^c$, for some absolute constant $c$ independent of $n$ and $k$.
The first step towards this end is to look at the usual multilinear representation (also called Fourier expansion [O'D14]) of a function $g : (\{0,1\}^t)^k \to \mathbb{R}$. For each $a \in (\{0,1\}^t)^k$ we define the indicator polynomial $\mathbb{1}_a(x) = \prod_{i,j \,:\, a_{i,j} = 1} x_{i,j} \prod_{i,j \,:\, a_{i,j} = 0} (1 - x_{i,j})$. It is easy to see that it takes value $1$ when $x = a$ and value $0$ when $x \neq a$. Consequently, we have $g = \sum_a g(a) \cdot \mathbb{1}_a$. If we let $x^a$ be the monomial $\prod_{i,j \,:\, a_{i,j} = 1} x_{i,j}$, then there exist real coefficients $c_a$ such that $g$ can be rewritten as the following multilinear polynomial
$g(x) = \sum_{a \in (\{0,1\}^t)^k} c_a \, x^a$   (1)
Moreover, the coefficients $c_a$ are given by the Möbius inversion formula
$c_a = \sum_{b \preceq a} (-1)^{|a| - |b|} \, g(b)$   (2)
where $|a|$ is the number of $1$s in $a$, and $b \preceq a$ means $b_{i,j} \leq a_{i,j}$ for all $i, j$.
Polynomial (1) is called the multilinear representation of function $g$. It satisfies requirements (a) and (b) above, but not requirement (c). Indeed, these polynomials are expressed in the basis of monomials $\{x^a\}$, which has size $2^{kt}$.
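The Möbius inversion formula (2) is easy to check computationally. The sketch below (our code, shown for plain boolean variables, i.e. $t = 1$, with majority on 3 bits as the example function) computes the coefficients and verifies the reconstruction (1):

```python
# Multilinear coefficients by Mobius inversion, and reconstruction check.
from itertools import product

k = 3
f = lambda x: int(sum(x) >= 2)           # majority on 3 bits, as an example


def subsets_leq(a):
    """All b with b_i <= a_i coordinatewise."""
    return product(*[(0, 1) if bit else (0,) for bit in a])


# c_a = sum over b <= a of (-1)^{|a| - |b|} f(b)
coeff = {}
for a in product((0, 1), repeat=k):
    coeff[a] = sum((-1) ** (sum(a) - sum(b)) * f(b) for b in subsets_leq(a))


def eval_poly(x):
    """f(x) = sum over a of c_a * prod_{i : a_i = 1} x_i."""
    total = 0
    for a, c in coeff.items():
        term = c
        for ai, xi in zip(a, x):
            if ai:
                term *= xi
        total += term
    return total


# Majority on 3 bits is x1*x2 + x1*x3 + x2*x3 - 2*x1*x2*x3.
assert all(eval_poly(x) == f(x) for x in product((0, 1), repeat=k))
```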
In order to reduce the size of the basis, we restrict ourselves to the symmetric functions (as will be the case in Section 3.2). This condition leads to the following equalities between coefficients.
Lemma 1.
For any $a \in (\{0,1\}^t)^k$ and any permutation $\sigma \in S_k$, if $g$ is a symmetric function then the coefficients $c_a$ and $c_{\sigma(a)}$ in the multilinear representation of $g$ are equal (where $\sigma(a) = (a_{\sigma(1)}, \dots, a_{\sigma(k)})$).
Proof.
The proof is direct from Equation (2). ∎
This lemma motivates the definition of the following polynomials, which will be used to obtain a basis for the symmetric functions over $(\{0,1\}^t)^k$.
Definition 3.
Given $a \in (\{0,1\}^t)^k$, the monomial symmetric polynomial $m_a$ over variables $x_1, \dots, x_k$ is defined to be the sum of all the distinct monomials $x^{\sigma(a)}$ where $\sigma \in S_k$ ranges over all the permutations.
Example 1.
If $k = 3$, $t = 1$ and $a = (0, 1, 1)$ then $m_a = x_2 x_3 + x_1 x_3 + x_1 x_2$.
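Definition 3 can be illustrated mechanically (our code, with scalar exponents, i.e. block-width $t = 1$): we collect the distinct permuted exponent tuples and sum the corresponding monomials:

```python
# Generating and evaluating a monomial symmetric polynomial m_a.
from itertools import permutations


def monomial_symmetric(a):
    """Distinct exponent tuples obtained by permuting the entries of a."""
    return sorted(set(permutations(a)))


def evaluate(a, x):
    """m_a(x): sum, over distinct permutations of a, of the product of powers."""
    total = 0
    for e in monomial_symmetric(a):
        term = 1
        for ei, xi in zip(e, x):
            term *= xi ** ei
        total += term
    return total


# With a = (0, 1, 1): three distinct monomials x2*x3, x1*x3 and x1*x2.
assert len(monomial_symmetric((0, 1, 1))) == 3
assert evaluate((0, 1, 1), (2, 3, 5)) == 3*5 + 2*5 + 2*3   # = 31
```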
According to Lemma 1, any symmetric function can be expressed as a linear combination of monomial symmetric polynomials. From this observation, we can derive a basis for the symmetric functions by taking all the distinct monomial symmetric polynomials. We specify a subset of $(\{0,1\}^t)^k$ that corresponds to this basis.
Definition 4.
We define a tuple $a = (a_1, \dots, a_k) \in (\{0,1\}^t)^k$ to be sorted if $|a_i| \leq |a_{i+1}|$ for all $i$, and $a_i \preceq_{\mathrm{lex}} a_{i+1}$ whenever $|a_i| = |a_{i+1}|$ (where $|a_i|$ is the Hamming weight of $a_i$, and $\preceq_{\mathrm{lex}}$ is the lexicographic order over $\{0,1\}^t$). The set of all the sorted tuples over $(\{0,1\}^t)^k$ is denoted $\mathcal{S}$.
Lemma 2.
The set $\{m_a : a \in \mathcal{S}\}$ is a basis for the symmetric functions over $(\{0,1\}^t)^k$. Moreover, it has size $\binom{k + 2^t - 1}{2^t - 1}$.
Proof.
It is straightforward to see that all the possible monomial symmetric polynomials belong to $\{m_a : a \in \mathcal{S}\}$, and that no two elements in this set have a monomial in common. Thus, it is a basis for the symmetric functions.
Consider the total order $\leq$ over $\{0,1\}^t$ defined as $u \leq v$ if and only if $|u| < |v|$, or $|u| = |v|$ and $u \preceq_{\mathrm{lex}} v$. Each $a \in \mathcal{S}$ can be seen as a (distinct) nondecreasing sequence of length $k$ from the totally ordered set $\{0,1\}^t$ of size $2^t$. The total number of such sequences is known to be $\binom{k + 2^t - 1}{2^t - 1}$. ∎
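The counting step at the end of the proof is a standard stars-and-bars fact, which the following check (our code) confirms for small parameters: nondecreasing sequences of length $k$ over an alphabet of size $s$ number exactly $\binom{k+s-1}{s-1}$:

```python
# Sanity check of the basis-size count by direct enumeration.
from itertools import combinations_with_replacement
from math import comb


def count_sorted_tuples(k, s):
    """Enumerate nondecreasing length-k sequences over {0, ..., s-1}."""
    return sum(1 for _ in combinations_with_replacement(range(s), k))


for k in range(1, 7):
    for s in range(1, 6):
        assert count_sorted_tuples(k, s) == comb(k + s - 1, s - 1)
```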
Finally, we want the coefficients of the symmetric functions in the chosen basis to be less than $n^c$ for some absolute constant $c$ independent of $n$ and $k$ (requirement (d)). To this end, it suffices to reformulate the previous results over a field $\mathbb{F}_p$, for a suitably chosen prime $p$. We obtain the following polynomial representation for symmetric functions:
Proposition 3.
Any symmetric function $g : (\{0,1\}^t)^k \to \mathbb{F}_p$ can be written as

$g = \sum_{a \in \mathcal{S}} c_a \, m_a$ with coefficients $c_a \in \mathbb{F}_p$,

where $p$ is prime, and $m_a$ is the monomial symmetric polynomial corresponding to the sorted tuple $a \in \mathcal{S}$. Moreover, $\mathcal{S}$ has size $\binom{k + 2^t - 1}{2^t - 1}$.
3 Simultaneous Protocol for Symmetric Composed Functions
We now describe in detail our simultaneous protocol for symmetric composed functions. The result is divided into two parts. We first give in Section 3.1 a protocol of cost $O(k^s \log n)$ for $\mathsf{Sym} \circ \mathsf{Sym}_s$ when the number of players is large enough. This is a generalization of the idea of [BGKL04], which was based on solving a particular equation. We build upon this result in Section 3.2 to give an efficient protocol for $\mathsf{Sym} \circ \mathsf{Any}_s$ when $s$ is constant. This last result uses the protocol of Theorem 4 as a subroutine, together with the polynomial representations described in Section 2.
3.1 The Equation Solving part
We extend the protocol for $\mathsf{Sym} \circ \mathsf{Sym}$ from [BGKL04] to any $s \geq 2$. It applies to all functions in $\mathsf{Sym} \circ \mathsf{Sym}_s$ as long as the number of players is large enough, but it is not efficient if $s$ is non-constant or if the number of players is superpolylogarithmic (we will remove this last condition in the next section). For convenience in the proof, we state the result over $\Sigma = \{0, \dots, s-1\}$ instead of $\{0,1\}^t$:
Theorem 4.
Let $M$ be a $k \times n$ matrix over $\Sigma = \{0, \dots, s-1\}$. For $d = (d_1, \dots, d_{s-1})$, denote by $N_d$ the number of columns of $M$ with exactly $d_a$ occurrences of each $a \in \{1, \dots, s-1\}$. For each $i \in [k]$, let player $i$ see all of $M$ except row $i$. If $k$ is large enough with respect to $s$ and $\log n$, then there exists a deterministic simultaneous NOF protocol of cost $O(k^s \log n)$, at the end of which the referee knows all the $N_d$'s.
Proof.
The communication part of the protocol is pretty simple: each player $i$ sends to the referee the number $N^i_d$ of columns which contain, from her point of view (i.e. without taking row $i$ into account), exactly $d_a$ occurrences of each element $a \in \{1, \dots, s-1\}$ (for all $d$).
The referee then computes the sums $\sum_{i=1}^k N^i_d$ (for all $d$). The important thing to note is that these numbers must satisfy the following equalities:
$\sum_{i=1}^{k} N^i_d = (k - |d|)\, N_d + \sum_{a=1}^{s-1} (d_a + 1)\, N_{d + e_a}$   (3)

where $|d| = d_1 + \dots + d_{s-1}$ and $d + e_a$ is obtained from $d$ by incrementing coordinate $a$.
To see why this is true, consider a column $C$ of $M$ that contributes to a given sum $\sum_i N^i_d$. Either $C$ contains exactly $d_a$ occurrences of each element $a \in \{1, \dots, s-1\}$, or there is one $a$ that occurs $d_a + 1$ times in $C$ (the other elements having exactly the prescribed number of occurrences in $C$). In the first case, $C$ contributes to $N_d$ and to the quantity $N^i_d$ of each player $i$ having a $0$ entry of $C$ on her forehead (there are $k - |d|$ such players). In the second case, $C$ contributes to $N_{d + e_a}$ and to the quantity $N^i_d$ of each player having an $a$ entry of $C$ on her forehead (there are $d_a + 1$ such players). Thus, the total contribution to $\sum_i N^i_d$ is $(k - |d|) N_d + \sum_{a} (d_a + 1) N_{d + e_a}$.
Equalities (3) can be seen as a system of equations whose unknowns are the $N_d$'s. Since the referee is not computationally restricted she can enumerate all the integral solutions, but she does not know a priori which one corresponds to matrix $M$. The key lemma is to show that Equations (3), under the mild constraints
$N_d \geq 0$ for all $d$, and $\sum_d N_d = n$,   (4)
have at most one integral solution when $k$ is large enough. We prove it by induction on $s$ (the base case $s = 2$ corresponds to the work of [BGKL04]; the induction step is more involved and is given in Appendix A). Consequently, the referee is able to identify unambiguously the correct $N_d$'s that correspond to $M$.
This protocol is clearly simultaneous since the players do not need to talk to each other. Each of the $k$ players sends at most $(k+1)^{s-1}$ numbers $N^i_d \leq n$. Thus the total communication cost is at most $O(k^s \log n)$. ∎
Corollary 5.
Let $\Sigma = \{0, \dots, s-1\}$, and suppose $k$ is large enough as in Theorem 4. There is a deterministic simultaneous NOF protocol of cost $O(k^s \log n)$, at the end of which the referee can compute all composed functions $f \circ g \in \mathsf{Sym} \circ \mathsf{Sym}_s$ of her choice. (Indeed, since $g$ is symmetric, its value on a column only depends on the occurrence-count vector $d$ of that column; since $f$ is symmetric, $f \circ g(M)$ only depends on $\sum_{d \,:\, g(d) = 1} N_d$.)
This result can also be adapted to the case of fewer players by splitting the initial matrix into sufficiently many parts. Previously, Ada et al. [ACFN15] also generalized their work to any number of players, by giving a protocol for $\mathsf{Sym} \circ \mathsf{Any}$. However, it was not simultaneous and it does not apply to block-width $t > 1$.
Proposition 6.
Let $M$ be a $k \times n$ matrix over $\Sigma = \{0, \dots, s-1\}$, where $n$ is arbitrary. For $d = (d_1, \dots, d_{s-1})$, denote by $N_d$ the number of columns of $M$ with exactly $d_a$ occurrences of each $a \in \{1, \dots, s-1\}$. For each $i \in [k]$, let player $i$ see all of $M$ except row $i$. Then there exists a deterministic simultaneous NOF protocol of cost at most $\lceil n/n_0 \rceil$ times the cost of Theorem 4 (where $n_0$ is the largest number of columns allowed there), at the end of which the referee knows all the $N_d$'s.
Proof.
We split $M$ into $\lceil n/n_0 \rceil$ matrices, each of size $k \times n_0$ (except possibly one matrix that can have fewer columns), where $n_0$ is the largest number of columns allowed by Theorem 4. These matrices have few enough columns to apply (separately) the protocol of Theorem 4 on them. The $N_d$'s for the original matrix are computed by summing the obtained results. The total cost is $\lceil n/n_0 \rceil$ times the cost of Theorem 4. ∎
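The recombination step relies on the column counts being additive under any split of the columns; a quick check (our code):

```python
# Column counts N_d are additive across a column split of the matrix.
import random
from collections import Counter

random.seed(3)
k, n, s, width = 4, 23, 3, 5
cols = [tuple(random.randrange(s) for _ in range(k)) for _ in range(n)]


def counts(columns):
    """N_d for a list of columns: occurrence counts of each nonzero symbol."""
    sig = lambda c: tuple(c.count(a) for a in range(1, s))
    return Counter(sig(c) for c in columns)


# Split into blocks of `width` columns (the last one may be smaller) and
# recombine the per-block counts by summation.
blocks = [cols[j:j + width] for j in range(0, n, width)]
recombined = Counter()
for b in blocks:
    recombined += counts(b)
assert recombined == counts(cols)
```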
3.2 The Polynomial Representation part
Using the polynomial representation of Proposition 3, we give a protocol that improves upon Corollary 5 in two ways: it is still efficient when $k$ is superpolylogarithmic, and the inner functions can be different (i.e. it applies to $\mathsf{Sym} \circ \mathsf{Any}_s$ instead of $\mathsf{Sym} \circ \mathsf{Sym}_s$).
Theorem 7.
Let $\Sigma = \{0, \dots, s-1\}$, and suppose $k$ is large enough. For any composed function $f \circ (g_1, \dots, g_n) \in \mathsf{Sym} \circ \mathsf{Any}_s$ there exists a deterministic simultaneous NOF protocol that computes it with cost polynomial in $k^s$ and $\log n$.
Proof.
Let $f \circ (g_1, \dots, g_n) \in \mathsf{Sym} \circ \mathsf{Any}_s$. In order to use the polynomial representation of Section 2, we change the range of the functions as