1 Introduction
Communication complexity concerns itself with characterizing the minimum number of bits that distributed parties need to exchange in order to accomplish a given task, such as computing a function $F$. Over the years, it has established striking connections with various areas of complexity theory and information theory, providing tools for solving central problems in those domains. Since it is in general hard to pin down precisely the communication cost of a task, various lower bound methods have been developed over the years. One such method is the logarithm of the rank of the matrix that encodes the values the function takes on various inputs. More precisely, for $F : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}$, this matrix $M_F$ is defined as $M_F[x,y] := F(x,y)$. The following well known conjecture posits that this lower bound is polynomially tight for the deterministic communication complexity of $F$.
Conjecture 1 (Log-Rank Conjecture, [LS88]).
There exists a universal constant $c > 0$ such that the deterministic communication complexity of every total Boolean function $F$ is at most $\left(\log \mathrm{rank}(M_F)\right)^{c}$.
See Ref. [CMS18] and references therein for more details about this and the other conjectures discussed in this work. A natural randomized analogue of Conjecture 1 is the following, comparing randomized communication complexity to the logarithm of the approximate rank rather than the actual rank of $M_F$. (See Section 2.1 for definitions.)
Conjecture 2 (Log-Approximate-Rank Conjecture, [LS09]).
There exists a universal constant $c > 0$ such that the randomized communication complexity (with error $1/3$) of every total Boolean function $F$ is at most $\left(\log \mathrm{rank}_{1/3}(M_F)\right)^{c}$.
In a recent breakthrough work [CMS18], Chattopadhyay, Mande and Sherif establish that Conjecture 2 is false by exhibiting a function with an exponential separation between the randomized communication complexity (with constant error) and the logarithm of the approximate rank. Their function is a composition of the two-bit Xor function and a function that they call Sink. The work [CMS18] asked if their function had implications for the following quantum version of Conjecture 2.
Conjecture 3 (Quantum Log-Approximate-Rank Conjecture, [LS09]).
There exists a universal constant $c > 0$ such that the quantum communication complexity of every total Boolean function $F$ is at most $\left(\log \mathrm{rank}_{1/3}(M_F)\right)^{c}$.
Here we prove that Conjecture 3 is false as well. Before proceeding to the statement of our main result, we define the Sink function.
Definition 4 (Sink, [CMS18]).
The Sink function is defined on a complete directed graph of $m$ vertices, using $\binom{m}{2}$ variables $z_{i,j}$ for $i < j$, in the following way. Let $z_{i,j} = 1$ if there is a directed edge from vertex $i$ to $j$ and $z_{i,j} = 0$ if there is a directed edge from vertex $j$ to $i$. The function Sink computes whether or not there is a sink in the graph. In other words, $\mathrm{Sink}(z) = 1$ iff there exists a vertex $v$ such that all edges adjacent to $v$ are incoming.
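To make the definition concrete, here is a small Python sketch of Sink; the dictionary-based edge encoding and the function name are ours, for illustration only.

```python
def sink(z, m):
    """Evaluate the Sink function on a complete directed graph (tournament)
    over m vertices. For i < j, z[(i, j)] = 1 means the edge is directed
    from i to j, and z[(i, j)] = 0 means it is directed from j to i.
    Returns 1 iff some vertex has all of its edges incoming."""
    for v in range(m):
        # v is a sink iff every edge touching v points into v
        if all((z[(i, v)] == 1 if i < v else z[(v, i)] == 0)
               for i in range(m) if i != v):
            return 1
    return 0

# A tournament on 3 vertices where vertex 2 is a sink:
print(sink({(0, 1): 1, (0, 2): 1, (1, 2): 1}, 3))  # 1
# The directed 3-cycle 0 -> 1 -> 2 -> 0 has no sink:
print(sink({(0, 1): 1, (1, 2): 1, (0, 2): 0}, 3))  # 0
```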
The function of interest for communication complexity is $\mathrm{Sink} \circ \mathrm{XOR}$, where each Xor takes as input one bit from Alice and one from Bob. For simplicity of notation, we will denote this function as $F$. Our main theorem, which lower bounds the quantum information complexity ($\mathrm{QIC}$) of $F$, is as follows.
Theorem 5.
Any $t$-round entanglement assisted protocol $\Pi$ for $F$ achieving error $1/3$ satisfies $\mathrm{QIC}_{\mu}(\Pi) = \Omega(m/t^2)$, with $\mu$ being the uniform distribution on $\binom{m}{2} + \binom{m}{2}$ bits. (A random variable on $\binom{m}{2} + \binom{m}{2}$ bits takes values over $\binom{m}{2}$ bits on Alice's side and $\binom{m}{2}$ bits on Bob's side.) The desired lower bound on the entanglement assisted quantum communication complexity ($Q^{*}$) of $F$ follows by optimizing over the number of rounds $t$.
Corollary 6.
It holds that $Q^{*}(F) = \Omega(m^{1/3})$.
Hence, combining with the following upper bound on the log-approximate-rank due to Ref. [CMS18], the function $F$ witnesses an exponential separation between the log-approximate-rank and quantum communication complexity, and refutes the quantum log-approximate-rank conjecture of Lee and Shraibman [LS09].
Theorem 7 ([CMS18]).
It holds that $\log \mathrm{rank}_{1/3}(M_F) = O(\log m)$.
In a subsequent version of [CMS18], Chattopadhyay et al. improved the upper bound on $\mathrm{rank}_{1/3}(M_F)$ further.
1.1 Independent work
Sinha and de Wolf [SdW18], in independent and simultaneous work, used the fooling distribution method to obtain the same lower bound on the quantum communication complexity of $F$. Their techniques differ from ours, which we describe below.
1.2 Proof overview
At a high level, our argument follows the well-established information complexity approach [KNTZ07, CSWY01, BJKS04, JRS03a, BBCR10]. We view a given function $F$ as some composition of many instances of a simpler component function $g$, and argue through a direct sum property a reduction from $F$ to $g$. This is achieved by embedding inputs to $g$ into inputs to $F$, where the remaining inputs to $F$ are sampled from some suitable distribution in order to achieve the desired direct sum property. Following this, we show a lower bound on the information complexity of $g$.
In the present context, $F$ is a composition of many instances of the Equality function, in such a way that the input bits are shared across the instances. In Ref. [CMS18], the authors use Shearer's lemma to handle this overlap between the inputs across the instances and derive a corruption lower bound. For the reduction from $F$ to Equality, we also wish to use a Shearer-type inequality. We further argue that a lower bound on the information complexity of Equality (for protocols that make small error in the worst case) under the uniform distribution implies a lower bound on the information complexity of $F$. But it is not clear, a priori, that Equality should have high information cost under that distribution, as this function has trivial communication complexity under the uniform distribution. It turns out that the cut-and-paste argument of Anshu, Belovs, Ben-David, Göös, Jain, Kothari, Lee and Santha [ABB16] yields a constant lower bound on the information complexity of good protocols for Equality, even under the uniform distribution.
Broadly, our quantum lower bound proceeds along similar lines. The quantum cut-and-paste argument of Anshu, Ben-David, Garg, Jain, Kothari and Lee [ABDG17] yields a round-dependent lower bound on the quantum information complexity (QIC) [KNTZ07, JRS03b, JN14, Tou15, KLLGR16] of good protocols for Equality, even under the uniform distribution. But the quantum version of the embedding argument requires new methods. In the classical setting, using the classical information cost, as soon as we have Alice and Bob privately sample the remaining inputs, the Shearer-type embedding follows almost directly from a Shearer-like inequality for information [GKR15]. In the quantum setting, we would similarly like to use a Shearer-type inequality for quantum information [ATYY17]. However, it is not immediately clear how to make the protocol embedding work for the quantum information cost $\mathrm{QIC}$. We instead settle on an alternate notion of quantum information cost (variants of which have appeared before [JRS05, JN14, LT17, ATYY17]) that works well only for product input distributions; it is equivalent to $\mathrm{QIC}$ up to a round-dependent factor. The argument then goes through by carefully using this notion. What we get is a Shearer-type embedding protocol for product input distributions that allows some specific preprocessing of the inputs. We provide such a general version in Section 4.1 in the quantum setting, while we give a more direct proof in the classical setting.
Hence, overall we get a round-dependent lower bound on the quantum information complexity of $F$, and the round-independent lower bound on the quantum communication complexity follows by optimizing over the number of rounds in any good protocol.
2 Preliminaries and notation
For an integer $n \ge 1$, let $[n]$ represent the set $\{1, 2, \ldots, n\}$. Let $\mathcal{X}$ and $\mathcal{Y}$ be finite sets and $k$ be a natural number. Let $\mathcal{X}^{k}$ be the set $\mathcal{X} \times \cdots \times \mathcal{X}$, the cross product of $\mathcal{X}$, $k$ times. Let $\mu$ be a probability distribution on $\mathcal{X}$. Let $\mu(x)$ represent the probability of $x \in \mathcal{X}$ according to $\mu$. We write $X \sim \mu$ to denote that the random variable $X$ is distributed according to $\mu$. We use the same symbol to represent a random variable and its distribution whenever it is clear from the context. The expectation value of a function $f$ on $\mathcal{X}$ is defined as $\mathbb{E}_{x \leftarrow X}[f(x)] := \sum_{x} \Pr(X = x)\, f(x)$, where $x \leftarrow X$ means that $x$ is drawn according to the distribution of $X$. We say $X$ and $Y$ are independent iff $\Pr(X = x, Y = y) = \Pr(X = x)\Pr(Y = y)$ for each $(x, y)$. For joint random variables $XY$, $\mu_X$ will denote the marginal distribution of $X$. We now introduce some quantum information theoretic notation. We assume the reader is familiar with standard concepts in quantum computing [NC00, Wil12, Wat18].
Let $\mathcal{H}$ be a finite-dimensional complex Euclidean space, i.e., $\mathcal{H} = \mathbb{C}^{d}$ for some positive integer $d$, with the usual complex inner product $\langle \cdot, \cdot \rangle$, which is defined as $\langle u, v \rangle := \sum_{i} \bar{u}_i v_i$. We will also refer to $\mathcal{H}$ as a Hilbert space. We will usually denote vectors in $\mathcal{H}$ using bra-ket notation, e.g., $|\psi\rangle$. The $\ell_1$ norm (also called the trace norm) of an operator $A$ on $\mathcal{H}$ is $\|A\|_1 := \mathrm{Tr}\sqrt{A^{\dagger}A}$, which is also equal to the (vector) $\ell_1$ norm of the vector of singular values of $A$. A quantum state (or a density matrix or simply a state) is a positive semidefinite matrix $\rho$ on $\mathcal{H}$ with $\mathrm{Tr}(\rho) = 1$. The state $\rho$ is said to be a pure state if its rank is $1$, or equivalently if $\rho^2 = \rho$, and otherwise it is called a mixed state. Let $|\psi\rangle$ be a unit vector on $\mathcal{H}$, that is $\langle \psi | \psi \rangle = 1$. With some abuse of notation, we use $\psi$ to represent the vector $|\psi\rangle$ and also the density matrix $|\psi\rangle\langle\psi|$ associated with $|\psi\rangle$. Given a quantum state $\rho$ on $\mathcal{H}$, the support of $\rho$, denoted $\mathrm{supp}(\rho)$, is the subspace of $\mathcal{H}$ spanned by all eigenvectors of $\rho$ with nonzero eigenvalues.
A quantum register $A$ is associated with some Hilbert space $\mathcal{H}_A$. Define $|A| := \dim(\mathcal{H}_A)$. Let $\mathcal{L}(\mathcal{H}_A)$ represent the set of all linear operators on $\mathcal{H}_A$. We denote by $\mathcal{D}(\mathcal{H}_A)$ the set of density matrices on the Hilbert space $\mathcal{H}_A$. We use subscripts (or superscripts, according to whichever is convenient) to denote the space to which a state belongs, e.g., $\rho$ with subscript $A$ indicates $\rho_A \in \mathcal{D}(\mathcal{H}_A)$. If two registers $A$ and $B$ are associated with the same Hilbert space, we represent this relation by $A \equiv B$. For two registers $A$ and $B$, we denote the combined register as $AB$, which is associated with the Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B$. For two quantum states $\rho$ and $\sigma$, $\rho \otimes \sigma$ represents the tensor product (or Kronecker product) of $\rho$ and $\sigma$. The identity operator on $\mathcal{H}_A$ is denoted $\mathbb{1}_A$. Let $\rho_{AB} \in \mathcal{D}(\mathcal{H}_A \otimes \mathcal{H}_B)$. We define the partial trace with respect to $A$ of $\rho_{AB}$ as $\rho_B := \mathrm{Tr}_A(\rho_{AB}) = \sum_{i} (\langle i| \otimes \mathbb{1}_B)\, \rho_{AB}\, (|i\rangle \otimes \mathbb{1}_B)$, where $\{|i\rangle\}_i$ is an orthonormal basis for the Hilbert space $\mathcal{H}_A$. The state $\rho_B$ is referred to as a reduced density matrix or a marginal state. Unless otherwise stated, a missing register from the subscript of a state will represent a partial trace over that register. Given $\rho_A \in \mathcal{D}(\mathcal{H}_A)$, a purification of $\rho_A$ is a pure state $|\psi\rangle_{AB}$ such that $\mathrm{Tr}_B(|\psi\rangle\langle\psi|_{AB}) = \rho_A$. Any quantum state has a purification using a register $B$ with $|B| = |A|$. The purification of a state, even for a fixed $B$, is not unique, as any unitary applied on register $B$ alone does not change $\rho_A$.
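As an illustration, the partial trace can be computed numerically by reshaping the density matrix; the following numpy sketch (our own helper, not from the paper) traces out the first register and checks that the Bell state is a purification of the maximally mixed single-qubit state.

```python
import numpy as np

def partial_trace_A(rho_AB, dim_A, dim_B):
    """Trace out register A from a state on A x B, returning the
    reduced density matrix (marginal state) on B."""
    r = rho_AB.reshape(dim_A, dim_B, dim_A, dim_B)
    return np.einsum('ibic->bc', r)  # sum_i <i|_A rho |i>_A

# The Bell state (|00> + |11>)/sqrt(2) purifies the maximally mixed state.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_AB = np.outer(bell, bell)
rho_B = partial_trace_A(rho_AB, 2, 2)
print(np.allclose(rho_B, np.eye(2) / 2))  # True
```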
An important class of states that we will consider are the classical-quantum states. They are of the form $\rho_{XA} = \sum_x p(x)\, |x\rangle\langle x|_X \otimes \rho^{x}_A$, where $p$ is a probability distribution. In this case, the register $X$ can be viewed as a classical random variable with distribution $p$, and we shall continue to use the notation that we have introduced for probability distributions, for example, $\mathbb{E}_{x \leftarrow p}[\rho^{x}_A]$ to denote the average $\sum_x p(x)\, \rho^{x}_A$.
A quantum superoperator (or a quantum channel, or a quantum operation) $\mathcal{E}$ is a completely positive and trace preserving (CPTP) linear map, mapping states in $\mathcal{D}(\mathcal{H}_A)$ to states in $\mathcal{D}(\mathcal{H}_B)$. The identity superoperator on the Hilbert space $\mathcal{H}_A$ (and associated register $A$) is denoted $\mathbb{I}_A$. A unitary operator $U$ is such that $U^{\dagger}U = UU^{\dagger} = \mathbb{1}$. The set of all unitary operations on register $A$ is denoted by $\mathcal{U}(\mathcal{H}_A)$.
A $2$-outcome quantum measurement is defined by a collection $\{M, \mathbb{1} - M\}$, where $0 \preceq M \preceq \mathbb{1}$ is a positive semidefinite operator (here $M_1 \preceq M_2$ means $M_2 - M_1$ is positive semidefinite). Given a quantum state $\rho$, the probability of getting the outcome corresponding to $M$ is $\mathrm{Tr}(M\rho)$ and of getting the outcome corresponding to $\mathbb{1} - M$ is $\mathrm{Tr}((\mathbb{1} - M)\rho)$.
2.0.1 Distance measures for quantum states
We now define the distance measures we use and some of their properties. Before defining the distance measures, we introduce the fidelity between two states, which is not a distance measure but a similarity measure. Note that all the notions introduced below also apply to classical random variables, when these are viewed as diagonal quantum states in some basis.
Definition 8 (Fidelity).
Let $\rho, \sigma$ be quantum states. The fidelity between $\rho$ and $\sigma$ is defined as $\mathrm{F}(\rho, \sigma) := \|\sqrt{\rho}\sqrt{\sigma}\|_1$.
For two pure states $|\psi\rangle$ and $|\phi\rangle$, we have $\mathrm{F}(\psi, \phi) = |\langle \psi | \phi \rangle|$. We now introduce the two distance measures we use.
Definition 9 (Distance measures).
Let $\rho, \sigma$ be quantum states. We define the following distance measures between these states.
Trace distance: $\Delta(\rho, \sigma) := \frac{1}{2}\|\rho - \sigma\|_1$.
Bures metric: $h(\rho, \sigma) := \sqrt{1 - \mathrm{F}(\rho, \sigma)}$.
Note that for any two quantum states $\rho$ and $\sigma$, these distance measures lie in $[0,1]$. The distance measures are $0$ if and only if the states are equal, and they are $1$ if and only if the states have orthogonal support, i.e., iff $\mathrm{F}(\rho, \sigma) = 0$.
Conveniently, these measures are closely related.
Fact 10.
For all quantum states $\rho, \sigma$, we have $h^2(\rho, \sigma) \le \Delta(\rho, \sigma) \le \sqrt{2}\, h(\rho, \sigma)$.
Proof.
This follows from the Fuchs–van de Graaf inequalities $1 - \mathrm{F}(\rho, \sigma) \le \Delta(\rho, \sigma) \le \sqrt{1 - \mathrm{F}(\rho, \sigma)^2}$, together with $\sqrt{1 - \mathrm{F}^2} = \sqrt{(1 - \mathrm{F})(1 + \mathrm{F})} \le \sqrt{2}\, h(\rho, \sigma)$.
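The following numpy sketch (our own code, with our own helper names) computes the quantities from Definitions 8 and 9 and checks the relation of Fact 10 on random mixed states.

```python
import numpy as np

def psd_sqrt(rho):
    """Square root of a positive semidefinite matrix via its spectrum."""
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = ||sqrt(rho) sqrt(sigma)||_1, the sum of singular values."""
    s = np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False)
    return float(np.sum(s))

def trace_distance(rho, sigma):
    return 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))

def bures(rho, sigma):
    return float(np.sqrt(max(0.0, 1.0 - fidelity(rho, sigma))))

def random_state(d, rng):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
for _ in range(10):
    rho, sigma = random_state(3, rng), random_state(3, rng)
    h, d = bures(rho, sigma), trace_distance(rho, sigma)
    # h^2 <= Delta <= sqrt(2) h, as in Fact 10
    assert h**2 <= d + 1e-9 and d <= np.sqrt(2) * h + 1e-9
```

For the pure states $|0\rangle$ and $|+\rangle$, the fidelity evaluates to $|\langle 0|+\rangle| = 1/\sqrt{2}$, matching the remark after Definition 8.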
We now review some properties of the Bures metric.
Fact 11 (Facts about the Bures metric).
Fact 11.A (Triangle inequality [Bur69]).
The following triangle inequality and weak triangle inequality hold for the Bures metric and the square of the Bures metric: for quantum states $\rho, \sigma, \tau$,
$h(\rho, \sigma) \le h(\rho, \tau) + h(\tau, \sigma)$ and $h^2(\rho, \sigma) \le 2\left(h^2(\rho, \tau) + h^2(\tau, \sigma)\right)$.
Fact 11.B (Averaging over classical registers).
For classical-quantum states $\rho_{XA} = \sum_x p(x)\, |x\rangle\langle x| \otimes \rho^{x}_A$ and $\sigma_{XA} = \sum_x p(x)\, |x\rangle\langle x| \otimes \sigma^{x}_A$ with the same distribution $p$ on $X$, we have $h^2(\rho_{XA}, \sigma_{XA}) = \mathbb{E}_{x \leftarrow p}\, h^2(\rho^{x}_A, \sigma^{x}_A)$.
Finally, an important property of both these distance measures is monotonicity under quantum operations [Lin75, BCF96].
Fact 12 (Monotonicity under quantum operations).
For quantum states $\rho$, $\sigma$, and a quantum operation $\mathcal{E}$, it holds that
$\Delta(\mathcal{E}(\rho), \mathcal{E}(\sigma)) \le \Delta(\rho, \sigma)$ and $h(\mathcal{E}(\rho), \mathcal{E}(\sigma)) \le h(\rho, \sigma)$,
with equality if $\mathcal{E}$ is unitary. In particular, for bipartite states $\rho_{AB}, \sigma_{AB}$, it holds that
$\Delta(\rho_A, \sigma_A) \le \Delta(\rho_{AB}, \sigma_{AB})$ and $h(\rho_A, \sigma_A) \le h(\rho_{AB}, \sigma_{AB})$.
2.0.2 Mutual information
We start with the following fundamental information theoretic quantities. We refer the reader to the excellent sources for quantum information theory [Wil12, Wat18] for further study.
Definition 13.
Let $\rho_A$ be a quantum state. We then define the following.
von Neumann entropy: $S(A)_\rho := -\mathrm{Tr}(\rho_A \log \rho_A)$.
We now define mutual information and conditional mutual information.
Definition 14 (Mutual information).
Let $\rho_{ABC}$ be a quantum state. We define the following measures.
Mutual information: $I(A:B)_\rho := S(A)_\rho + S(B)_\rho - S(AB)_\rho$.
Conditional mutual information: $I(A:B\,|\,C)_\rho := I(A:BC)_\rho - I(A:C)_\rho$.
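As a numerical illustration (our own code), the following computes $S$ and $I(A:B)$ for a Bell state, for which $S(A) = S(B) = 1$, $S(AB) = 0$, and hence $I(A:B) = 2$.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy S(rho) = -Tr(rho log2 rho), in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def mutual_information(rho_AB, dA, dB):
    """I(A:B) = S(A) + S(B) - S(AB)."""
    r = rho_AB.reshape(dA, dB, dA, dB)
    rho_A = np.einsum('aibi->ab', r)  # trace out B
    rho_B = np.einsum('iaib->ab', r)  # trace out A
    return entropy(rho_A) + entropy(rho_B) - entropy(rho_AB)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(round(mutual_information(np.outer(bell, bell), 2, 2), 6))  # 2.0
```

For a product state the same computation returns $0$, in accordance with Fact 15.A below.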
We will need the following basic properties.
Fact 15 (Properties of mutual information).
Let $\rho_{ABC}$ be a quantum state. We have the following.
Fact 15.A (Non-negativity).
$I(A:B\,|\,C)_\rho \ge 0$. If $\rho_{AB}$ is a product state, then $I(A:B)_\rho = 0$.
Fact 15.B (Chain rule).
$I(AB:C)_\rho = I(A:C)_\rho + I(B:C\,|\,A)_\rho$.
Fact 15.C (Monotonicity).
For a quantum operation $\mathcal{E}$ acting on register $B$, $I(A:B)_{\mathcal{E}(\rho)} \le I(A:B)_\rho$, with equality when $\mathcal{E}$ is unitary. In particular, $I(A:B)_\rho \le I(A:BC)_\rho$.
Fact 15.D (Averaging over the conditioning register).
For a classical-quantum state $\rho_{ABC}$ (register $C$ is classical with distribution $p$): $I(A:B\,|\,C)_\rho = \mathbb{E}_{c \leftarrow p}\, I(A:B)_{\rho^c}$.
The following lemma, known as the Average Encoding Theorem [KNTZ07], formalizes the intuition that if a classical and a quantum register are weakly correlated, then they are nearly independent.
Lemma 16.
For any classical-quantum state $\rho_{XB}$ with a classical system $X$ distributed as $p$, and conditional states $\rho^{x}_B$,
(1) $\mathbb{E}_{x \leftarrow p}\, h^2(\rho^{x}_B, \rho_B) \le \frac{\ln 2}{2}\, I(X:B)_\rho.$
The following Shearer-type inequality for quantum information was shown in Ref. [ATYY17]. Classical variants appeared in [GKR15, RS15].
Lemma 17.
Consider registers $A_1, A_2, \ldots, A_k$ and define $A := A_1 A_2 \cdots A_k$. Consider a quantum state $\rho_{AB}$ such that $\rho_A = \rho_{A_1} \otimes \rho_{A_2} \otimes \cdots \otimes \rho_{A_k}$. Let $S \subseteq [k]$ be a random set picked independently of $\rho_{AB}$ satisfying $\Pr[i \in S] \le \delta$ for all $i \in [k]$ and some $\delta \in [0,1]$. Then it holds that $\mathbb{E}_{S}\, I(A_S : B)_\rho \le \delta \cdot I(A:B)_\rho$, where $A_S$ denotes the collection of registers $\{A_i : i \in S\}$.
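For intuition, here is a toy classical instance of such a Shearer-type bound (our own example, not the quantum statement): $A_1, A_2$ are independent uniform bits, $B = A_1 \oplus A_2$, and $S$ is a uniformly random singleton, so $\Pr[i \in S] = 1/2$. The check confirms $\mathbb{E}_S\, I(A_S : B) \le \frac{1}{2}\, I(A:B)$.

```python
import numpy as np
from itertools import product

def mutual_info(joint):
    """I(U:V) in bits, from a joint pmf given as a 2-D array."""
    pu = joint.sum(axis=1, keepdims=True)
    pv = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pu @ pv)[nz])))

def joint_AS_B(S):
    """Joint pmf of (A_S, B), where A1, A2 are uniform bits and B = A1 ^ A2."""
    rows = {}
    for a1, a2 in product([0, 1], repeat=2):
        key = tuple([a1, a2][i] for i in S)
        rows.setdefault(key, [0.0, 0.0])[a1 ^ a2] += 0.25
    return np.array([rows[k] for k in sorted(rows)])

full = mutual_info(joint_AS_B([0, 1]))      # I(A1 A2 : B) = 1 bit
avg = 0.5 * (mutual_info(joint_AS_B([0])) +
             mutual_info(joint_AS_B([1])))  # E_S I(A_S : B) = 0 bits
assert avg <= 0.5 * full + 1e-9
print(full, avg)  # 1.0 0.0
```

The example is tight in spirit: each single register alone carries no information about the parity, while together they determine it.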
2.1 Classical communication complexity
Let $f : \mathcal{X} \times \mathcal{Y} \to \{0,1\}$ be a total function (that is, its value is defined on every input) and $\epsilon \in (0, 1/2)$. In a two-party communication task, Alice is given an input $x \in \mathcal{X}$, Bob is given $y \in \mathcal{Y}$, and the task is to compute $f(x,y)$ by exchanging as few bits as possible. The parties are allowed to possess preshared randomness ($R$) and private randomness ($R_A$, $R_B$). Without loss of generality, we can assume that Alice communicates first and also gives the final output. The communication cost of a protocol $\Pi$, denoted by $\mathrm{CC}(\Pi)$, is the maximum number of bits the parties have to communicate over all possible inputs and values of the shared and private randomness. Let $R_\epsilon(f)$ represent the two-party randomized communication complexity of $f$ with worst case error $\epsilon$, i.e., the communication of the best two-party randomized protocol for $f$ with error at most $\epsilon$ on any input. The worst-case error of a protocol $\Pi$ over the inputs is denoted by $\mathrm{err}(\Pi)$.
Definition 18 (XOR function).
A function $F : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}$ is called an XOR function if there exists a function $f : \{0,1\}^n \to \{0,1\}$ such that $F(x,y) = f(x \oplus y)$ for all $x, y$. We denote $F = f \circ \mathrm{XOR}$.
Definition 19 (Rank).
The rank of a matrix $M$, denoted by $\mathrm{rank}(M)$, is the minimum integer $r$ for which there exist rank-$1$ matrices $M_1, \ldots, M_r$ such that $M = \sum_{i=1}^{r} M_i$.
Definition 20 (Non-negative rank).
The non-negative rank of a matrix $M$ with non-negative entries, denoted by $\mathrm{rank}^{+}(M)$, is the minimum integer $r$ for which there exist rank-$1$ matrices $M_1, \ldots, M_r$ with non-negative entries such that $M = \sum_{i=1}^{r} M_i$.
Definition 21 (Approximate rank).
Let $\epsilon \ge 0$ and let $M$ be a matrix with entries in $\{0,1\}$. The approximate rank of $M$ is defined as $\mathrm{rank}_\epsilon(M) := \min\left\{\mathrm{rank}(M') : |M'[x,y] - M[x,y]| \le \epsilon \text{ for all } (x,y)\right\}$.
Definition 22 (Approximate non-negative rank).
Let $\epsilon \ge 0$ and let $M$ be a matrix with entries in $\{0,1\}$. The approximate non-negative rank of $M$ is defined as $\mathrm{rank}^{+}_\epsilon(M) := \min\left\{\mathrm{rank}^{+}(M') : |M'[x,y] - M[x,y]| \le \epsilon \text{ for all } (x,y)\right\}$.
Definition 23 (Distributional information complexity).
The distributional information complexity of a randomized protocol $\Pi$ with respect to a distribution $\mu$ on $\mathcal{X} \times \mathcal{Y}$ is defined as $\mathrm{IC}_\mu(\Pi) := I(X : \Pi \,|\, Y) + I(Y : \Pi \,|\, X)$, where $XY \sim \mu$ and $\Pi$ also denotes the transcript of the protocol.
Definition 24 (Max-distributional information complexity).
The max-distributional information complexity of a randomized protocol $\Pi$ is defined as $\mathrm{IC}(\Pi) := \max_{\mu} \mathrm{IC}_\mu(\Pi)$.
Definition 25 (Information complexity of a function).
The information complexity of a function $f$ is defined as $\mathrm{IC}_\epsilon(f) := \inf_{\Pi : \mathrm{err}(\Pi) \le \epsilon} \mathrm{IC}(\Pi)$.
Note that since one bit of communication can hold at most one bit of information, for any protocol $\Pi$ and distribution $\mu$ we have $\mathrm{IC}_\mu(\Pi) \le \mathrm{CC}(\Pi)$. This implies that the information complexity of a function is a lower bound on its randomized communication complexity.
Lemma 26 (Cut-and-paste lemma (Lemma 6.3 in [BJKS04])).
Let $(x,y)$ and $(x',y')$ be two inputs to a randomized protocol $\Pi$, and let $\Pi_{x,y}$ denote the distribution of the transcript on input $(x,y)$. Then $h(\Pi_{x,y}, \Pi_{x',y'}) = h(\Pi_{x,y'}, \Pi_{x',y})$.
Fact 27 (Pythagorean property (Lemma 6.4 in [BJKS04])).
Let $(x,y)$ and $(x',y')$ be two inputs to a randomized protocol $\Pi$. Then $h^2(\Pi_{x,y}, \Pi_{x',y}) + h^2(\Pi_{x,y'}, \Pi_{x',y'}) \le 2\, h^2(\Pi_{x,y}, \Pi_{x',y'})$.
2.2 Quantum communication complexity
In quantum communication complexity, two players wish to compute a classical function $f : \mathcal{X} \times \mathcal{Y} \to \{0,1\}$ for some finite sets $\mathcal{X}$ and $\mathcal{Y}$. The inputs $x$ and $y$ are given to the two players Alice and Bob, and the goal is to minimize the quantum communication between them required to compute the function.
While the players have classical inputs, they are allowed to exchange quantum messages. Depending on whether or not we allow the players arbitrary shared entanglement, we get $Q_\epsilon(f)$, the bounded-error quantum communication complexity without shared entanglement, and $Q^{*}_\epsilon(f)$, the same measure with shared entanglement. Obviously $Q^{*}_\epsilon(f) \le Q_\epsilon(f)$. In this paper we will only work with $Q^{*}_\epsilon(f)$, which makes our results stronger, since we prove lower bounds in this work.
An entanglement assisted quantum communication protocol $\Pi$ for a function $f$ is as follows. Alice and Bob start with preshared entanglement $|\theta\rangle$. Upon receiving inputs $(x,y)$, where Alice gets $x$ and Bob gets $y$, they exchange quantum messages. At the end of the protocol, Alice applies a two-outcome measurement on her qubits and correspondingly outputs $0$ or $1$. Let $\Pi(x,y)$ be the random variable corresponding to the output produced by Alice in $\Pi$, given input $(x,y)$. Let $\mu$ be a distribution over $\mathcal{X} \times \mathcal{Y}$. Let the inputs to Alice and Bob be given in registers $X$ and $Y$ in the state
(2) $\mu_{XY} = \sum_{x,y} \mu(x,y)\, |x\rangle\langle x|_X \otimes |y\rangle\langle y|_Y.$
Let these registers be purified by $R_X$ and $R_Y$ respectively, which are not accessible to either player. Denote
(3) $|\mu\rangle_{X R_X Y R_Y} = \sum_{x,y} \sqrt{\mu(x,y)}\, |x\rangle_X |x\rangle_{R_X} |y\rangle_Y |y\rangle_{R_Y}.$
Let Alice and Bob initially hold registers $E_A$ and $E_B$ with shared entanglement $|\theta\rangle_{E_A E_B}$. Then the initial state is
(4) $|\psi_0\rangle = |\mu\rangle_{X R_X Y R_Y} \otimes |\theta\rangle_{E_A E_B}.$
Alice applies a unitary $U_1$ such that the unitary acts on $E_A$ conditioned on $X$. She sends the message register $M_1$ to Bob. Let $B_1$ be a relabeling of Bob's register $E_B$. He applies $U_2$ such that the unitary acts on $M_1 B_1$ conditioned on $Y$. He sends the message register $M_2$ to Alice. The players proceed in this fashion for $t$ messages, for $t$ even, until the end of the protocol. At any round $r$, let the registers be $X R_X Y R_Y A_r M_r B_r$, where $M_r$ is the message register, $A_r$ is Alice's register and $B_r$ is Bob's register. If $r$ is odd, then $M_r$ is sent from Alice to Bob, and if $r$ is even, then $M_r$ is sent from Bob to Alice. On input $(x,y)$, let the joint state in the registers $A_r M_r B_r$ be $\rho^{x,y}_r$. Then the global state at round $r$ is
(5) $\rho_r = \sum_{x,y} \mu(x,y)\, |xy\rangle\langle xy|_{XY} \otimes \rho^{x,y}_r.$
We define the following quantities.
Worst-case error: $\mathrm{err}(\Pi) := \max_{x,y} \Pr[\Pi(x,y) \ne f(x,y)]$.
Quantum CC of a protocol: $\mathrm{QCC}(\Pi) :=$ the total number of qubits exchanged in $\Pi$.
Quantum CC of $f$: $Q^{*}_\epsilon(f) := \min_{\Pi : \mathrm{err}(\Pi) \le \epsilon} \mathrm{QCC}(\Pi)$.
Our first fact links the error of a protocol with the distance between a pair of final states corresponding to inputs with different outputs.
Fact 28 (Error vs. distance).
Consider a non-constant function $f$, and let $(x,y)$ and $(x',y')$ be inputs such that $f(x,y) \ne f(x',y')$. For any protocol $\Pi$ with $t$ rounds and worst-case error $\epsilon$, it holds that $\Delta\!\left(\rho^{x,y}_t, \rho^{x',y'}_t\right) \ge 1 - 2\epsilon$.
Below, let $A'_r$ and $B'_r$ represent Alice's and Bob's registers after reception of the message at round $r$. That is, at even round $r$, $A'_r = A_r M_r$ and $B'_r = B_r$, and at odd $r$, $A'_r = A_r$ and $B'_r = M_r B_r$. We will need the following version of the quantum cut-and-paste lemma from [NT17] (also see [JRS03b, JN14] for similar arguments). This is a special case of [NT17, Lemma 7], and we have rephrased it using our notation.
Lemma 29 (Quantum cut-and-paste).
Let $\Pi$ be a quantum protocol with classical inputs, and consider distinct inputs $x, x'$ for Alice and $y, y'$ for Bob. Let $|\theta\rangle$ be the initial shared state between Alice and Bob. Also let $\rho^{u,v}_r$ be the shared state after round $r$ of the protocol when the inputs to Alice and Bob are $(u,v)$ respectively. For odd $r$, let $h_r := h\!\left(\rho^{x,y}_r, \rho^{x',y}_r\right)$, and for even $r$, let $h_r := h\!\left(\rho^{x,y}_r, \rho^{x,y'}_r\right)$. Then the Bures distance between the final states on the crossed inputs is bounded by the per-round distances:
$h\!\left(\rho^{x,y'}_t, \rho^{x',y}_t\right) \le 2\sum_{r=1}^{t} h_r.$
As discussed in the introduction, approximate rank lower bounds the bounded-error quantum communication complexity with shared entanglement [LS08]:
Fact 30.
For any two-party function $f$ and $\epsilon \in (0, 1/2)$, we have $Q^{*}_\epsilon(f) = \Omega\!\left(\log \mathrm{rank}_\epsilon(M_f)\right)$.
2.3 Quantum information complexity
Definition 31.
Given a quantum protocol $\Pi$ with classical inputs distributed as $XY \sim \mu$, the quantum information cost is defined as
(6) $\mathrm{QIC}_\mu(\Pi) := \sum_{r \text{ odd}} I(R_X R_Y : M_r \,|\, Y B_r) + \sum_{r \text{ even}} I(R_X R_Y : M_r \,|\, X A_r),$
where the information quantities are evaluated on the global state at round $r$.
Definition 32.
Given a quantum protocol $\Pi$ with classical inputs distributed as $XY \sim \mu$, the cumulative Holevo information cost is defined analogously to (6), as the sum over rounds of the Holevo information that the receiving player's registers carry about the sender's classical input.
Definition 33.
Given a quantum protocol $\Pi$ and a product distribution $\mu$ over the classical inputs, the cumulative superposed-Holevo information cost is defined as the same cumulative quantity, evaluated on the superposed version of the protocol in which the product inputs are provided in superposition.
Note that for product input distributions and for each round $r$, the Holevo information cost of that round is upper bounded by the corresponding superposed-Holevo cost. Combining with other results in Ref. [LT17], we get, for any $t$-round protocol $\Pi$ and any product distribution $\mu$, that the three information costs above agree up to multiplicative factors depending only on $t$; in particular, a lower bound on any one of them implies a lower bound on $\mathrm{QIC}_\mu(\Pi)$ up to a round-dependent factor.
3 Lower bound on the information complexity of $F$
3.1 Reducing Equality to $F$
We define the Equality function on $n$-bit strings as $\mathrm{EQ}(a,b) := 1$ if $a = b$, and $\mathrm{EQ}(a,b) := 0$ otherwise.
Recall the Sink function from Definition 4. Following [CMS18], we use projections of the inputs in our proof to analyze the input of the Sink function. Let $z \in \{0,1\}^{\binom{m}{2}}$. Let $T_v$ be the set of input coordinates that correspond to the edges incident to vertex $v$. We use the notation $z|_{T_v}$ to denote the input $z$ projected to the coordinates in $T_v$. Note that $z|_{T_v}$ decides whether or not $v$ is a sink. By $b_v$, we refer to the bit string such that $v$ is a sink if and only if $z|_{T_v} = b_v$. Sink can be written as
$\mathrm{Sink}(z) = \sum_{v} \mathrm{EQ}(z|_{T_v}, b_v),$
since only one vertex can be a sink in the complete directed graph. Our communication function is $F(x,y) = \mathrm{Sink}(x \oplus y)$. Similar to Sink, $F$ can be represented as
$F(x,y) = \sum_{v} \mathrm{EQ}\!\left((x \oplus y)|_{T_v}, b_v\right).$
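The decomposition can be checked exhaustively for a small graph; the following Python sketch (encoding conventions ours) verifies that at most one of the equality tests can fire on any input, so that the sum of the $\mathrm{EQ}$ values is $0$ or $1$ and coincides with the OR defining Sink.

```python
from itertools import combinations, product

m = 4
edges = list(combinations(range(m), 2))  # coordinate order: pairs (i, j), i < j

def proj_and_target(v):
    """The coordinates T_v (edges incident to v) and the string b_v that
    makes v a sink, using the convention z[(i, j)] = 1 iff i -> j."""
    T = [e for e in edges if v in e]
    b = [1 if e[1] == v else 0 for e in T]  # i -> v needs 1; v -> j needs 0
    return T, b

for bits in product([0, 1], repeat=len(edges)):
    z = dict(zip(edges, bits))
    eqs = [int(all(z[e] == bit for e, bit in zip(*proj_and_target(v))))
           for v in range(m)]
    # At most one vertex of a tournament can be a sink, so the sum of
    # the equality tests equals their OR, i.e., Sink(z).
    assert sum(eqs) <= 1
    assert sum(eqs) == int(any(eqs))
```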
Our first result is as follows.
Theorem 34.
Suppose $\epsilon \in (0, 1/2)$. Let $\Pi$ be a protocol for $F$ which makes a worst case error of at most $\epsilon$. Then there exists a protocol $\Pi'$ for EQ on $(m-1)$-bit strings that makes a worst case error of at most $\epsilon + m \cdot 2^{-(m-2)}$. Furthermore, it holds that
$\mathrm{IC}_{\mu_{\mathrm{EQ}}}(\Pi') \le \frac{2}{m} \cdot \mathrm{IC}_{\mu}(\Pi),$
where $\mu_{\mathrm{EQ}}$ is the uniform distribution over inputs to EQ and $\mu$ is uniform over the inputs to $F$.
Proof.
We have
$\mathrm{IC}_\mu(\Pi) = I(X : \Pi \,|\, Y) + I(Y : \Pi \,|\, X),$
where the information quantities are evaluated on $\Pi$ and the associated $\mu$. Let $V$ be a random variable which takes values in $[m]$ with uniform probability. Let $X_V$ (similarly $Y_V$) be the restriction of $X$ (similarly $Y$) to the coordinates in $T_V$. Since each coordinate appears in exactly two sets in $\{T_v\}_{v \in [m]}$, we have $\Pr[i \in T_V] = 2/m$ for every coordinate $i$. Thus, from Lemma 17, we have
(13) $\mathbb{E}_V\, I(X_V : \Pi \,|\, Y) \le \frac{2}{m} \cdot I(X : \Pi \,|\, Y),$
(14) $\mathbb{E}_V\, I(Y_V : \Pi \,|\, X) \le \frac{2}{m} \cdot I(Y : \Pi \,|\, X).$
The protocol $\Pi'$ for EQ is now as follows, for inputs $(a,b)$ (we use $(a,b)$ as inputs here to avoid confusion with $(x,y)$ for $\Pi$).

1. Alice and Bob take a sample $v$ from $V$ using shared randomness. Let $T_v$ and $b_v$ be the corresponding set of coordinates and target string.

2. They set $x|_{T_v} = a$ and $y|_{T_v} = b \oplus b_v$. Alice samples $x|_{T^{c}_v}$ uniformly at random from private randomness and Bob samples $y|_{T^{c}_v}$ uniformly at random from private randomness. Here $T^{c}_v$ is the complement of $T_v$. This specifies the input $(x,y)$ for $\Pi$.

3. They run the protocol $\Pi$ on $(x,y)$ and output accordingly.
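The planting step can be sanity-checked in code. The sketch below (an illustration with our own naming, using the convention $y|_{T_v} = b \oplus b_v$, so that $(x \oplus y)|_{T_v} = b_v$ iff $a = b$) verifies that the chosen vertex $v$ is a sink of $x \oplus y$ exactly when the EQ inputs agree.

```python
import random
from itertools import combinations

def embed_eq(a, b, m, rng):
    """Embed an EQ instance (a, b), each an (m-1)-bit tuple, into an input
    (x, y) for F = Sink o XOR: pick a vertex v via (shared) randomness,
    plant a and b XOR b_v on the edges at v, and fill the remaining
    coordinates uniformly at random (private randomness)."""
    edges = list(combinations(range(m), 2))
    v = rng.randrange(m)
    T = [e for e in edges if v in e]
    b_v = {e: (1 if e[1] == v else 0) for e in T}  # pattern making v a sink
    x, y = {}, {}
    for e, ai, bi in zip(T, a, b):
        x[e], y[e] = ai, bi ^ b_v[e]
    for e in edges:
        if e not in x:
            x[e], y[e] = rng.randrange(2), rng.randrange(2)
    return x, y, v

def is_sink(z, m, v):
    return all(z[e] == (1 if e[1] == v else 0)
               for e in combinations(range(m), 2) if v in e)

m, rng = 5, random.Random(7)
for _ in range(100):
    a = tuple(rng.randrange(2) for _ in range(m - 1))
    b = tuple(rng.randrange(2) for _ in range(m - 1))
    x, y, v = embed_eq(a, b, m, rng)
    z = {e: x[e] ^ y[e] for e in x}
    assert is_sink(z, m, v) == (a == b)
```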
Observe that $x$ and $y$ are distributed uniformly if $a$ and $b$ are. Thus,
$\mathrm{IC}_{\mu_{\mathrm{EQ}}}(\Pi') \le \mathbb{E}_V\left[\, I(X_V : \Pi \,|\, Y) + I(Y_V : \Pi \,|\, X) \,\right],$
where the information quantities are evaluated on $\Pi$ and the associated $\mu$, and the desired information bound follows by (13) and (14).
To bound the worst case error of $\Pi'$, we argue as follows. Fix some input $(a,b)$ to $\Pi'$. If $a = b$, then $(x \oplus y)|_{T_v} = b_v$, which implies that the error of $\Pi'$ on this input is the same as the error of $\Pi$ on the corresponding $(x,y)$, hence at most $\epsilon$. Now consider the case where $a \ne b$. The function $F$ evaluates to $1$ only if $(x \oplus y)|_{T_w} = b_w$ for some vertex $w$. Since $a \ne b$, we conclude that such a $w$ (if it exists) cannot be equal to $v$. Moreover, the edge adjacent to both $v$ and $w$ is already fixed by $(a,b)$, and if it is not consistent with the corresponding value in $b_w$, then $w$ cannot be a sink.