# Quantum Log-Approximate-Rank Conjecture is also False

In a recent breakthrough result, Chattopadhyay, Mande and Sherif [ECCC TR18-17] showed an exponential separation between the log approximate rank and randomized communication complexity of a total function f, hence refuting the log approximate rank conjecture of Lee and Shraibman [2009]. We provide an alternate proof of their randomized communication complexity lower bound using the information complexity approach. Using the intuition developed there, we derive a polynomially-related quantum communication complexity lower bound using the quantum information complexity approach, thus providing an exponential separation between the log approximate rank and quantum communication complexity of f. Previously, the best known separation between these two measures was (almost) quadratic, due to Anshu, Ben-David, Garg, Jain, Kothari and Lee [CCC, 2017]. This settles one of the main questions left open by Chattopadhyay, Mande and Sherif, and refutes the quantum log approximate rank conjecture of Lee and Shraibman [2009]. Along the way, we develop a Shearer-type protocol embedding for product input distributions that might be of independent interest.


## 1 Introduction

Communication complexity concerns itself with characterizing the minimum number of bits that distributed parties need to exchange in order to accomplish a given task (such as computing a function $f : \mathcal{X} \times \mathcal{Y} \to \{0,1\}$). Over the years, it has established striking connections with various areas of complexity theory and information theory, providing tools for solving central problems in such domains. Since it is in general hard to pin down precisely the communication cost of a task, various lower bound methods have been developed over the years. One such method is the logarithm of the rank of the matrix $M_f$ that encodes the values the function takes on various inputs. More precisely, this matrix is defined as $M_f[x, y] := f(x, y)$. The following well-known conjecture posits that this lower bound is polynomially tight for the deterministic communication complexity of $f$.

###### Conjecture 1 (Log-Rank Conjecture, [LS88]).

There exists a universal constant $c$ such that the deterministic communication complexity of every total Boolean function $f$ is $O(\log^c \mathrm{rk}(M_f))$.

See Ref. [CMS18] and references therein for more details about this and the other conjectures discussed in this work. A natural randomized analogue of Conjecture 1 is the following, comparing randomized communication complexity to the logarithm of the approximate rank, rather than the actual rank, of $M_f$. (See Section 2.1 for definitions.)

###### Conjecture 2 (Log-Approximate-Rank Conjecture, [LS09]).

There exists a universal constant $c$ such that the randomized communication complexity (with error $1/3$) of every total Boolean function $f$ is $O(\log^c \mathrm{rk}_{1/3}(M_f))$.

In a recent breakthrough work [CMS18], Chattopadhyay, Mande and Sherif establish that Conjecture 2 is false by exhibiting a function with an exponential separation between the randomized communication complexity (with constant error) and the logarithm of the approximate rank. Their function is a composition of the two-bit Xor function and a function that they call Sink. The work [CMS18] asked if their function had implications for the following quantum version of Conjecture 2.

###### Conjecture 3 (Quantum Log-Approximate-Rank Conjecture, [LS09]).

There exists a universal constant $c$ such that the quantum communication complexity of every total Boolean function $f$ is $O(\log^c \mathrm{rk}_{1/3}(M_f))$.

Here we prove that Conjecture 3 is false as well. Before proceeding to the statement of our main result, we define the Sink function.

###### Definition 4 (Sink, [CMS18]).

The Sink function is defined on a complete directed graph on $m$ vertices, using $\binom{m}{2}$ variables $z_{i,j}$ for $1 \le i < j \le m$, in the following way. Let $z_{i,j} = 1$ if there is a directed edge from vertex $i$ to vertex $j$, and $z_{i,j} = 0$ if there is a directed edge from vertex $j$ to vertex $i$. The function Sink computes whether or not there is a sink in the graph. In other words, $\mathrm{Sink}(z) = 1$ iff there exists a vertex $v$ such that all edges adjacent to $v$ are incoming.

The function of interest for communication complexity is $\mathrm{Sink} \circ \mathrm{Xor}$, where each Xor takes as input one bit from Alice and one from Bob. Our main theorem, which lower bounds the quantum information complexity ($\mathrm{QIC}$) of $\mathrm{Sink} \circ \mathrm{Xor}$, is as follows.
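To make Definition 4 and the composed function concrete, here is a minimal Python sketch (ours, purely illustrative; the encoding of a tournament as a dict keyed by ordered vertex pairs is our own choice):

```python
import itertools

def sink(z, m):
    """Sink(z) = 1 iff some vertex of the tournament on m vertices is a sink.

    z maps each pair (i, j) with i < j to a bit: z[(i, j)] = 1 means the
    edge points i -> j, and z[(i, j)] = 0 means it points j -> i.
    Vertex v is a sink iff every edge adjacent to v points toward v.
    """
    for v in range(m):
        is_sink = True
        for u in range(m):
            if u == v:
                continue
            i, j = min(u, v), max(u, v)
            # the edge points toward v iff v is the head under the encoding
            points_to_v = (z[(i, j)] == 1) if v == j else (z[(i, j)] == 0)
            if not points_to_v:
                is_sink = False
                break
        if is_sink:
            return 1
    return 0

def sink_xor(x, y, m):
    """Sink composed with the two-bit Xor: each edge bit is x_e XOR y_e."""
    z = {e: x[e] ^ y[e] for e in x}
    return sink(z, m)

# For m = 3, exactly 6 of the 8 tournaments have a sink
# (the two cyclic orientations do not).
edges = [(0, 1), (0, 2), (1, 2)]
count = sum(sink(dict(zip(edges, bits)), 3)
            for bits in itertools.product([0, 1], repeat=3))
```

Note that a tournament has at most one sink, which is the structural fact used later to write Sink as a disjunction of Equality instances.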

###### Theorem 5.

Any $t$-round entanglement-assisted protocol $\Pi$ for $\mathrm{Sink} \circ \mathrm{Xor}$ achieving error $\varepsilon$ satisfies $\mathrm{QIC}(\Pi, \mu) \ge \cdots$, with $\mu$ being the uniform distribution on $\binom{m}{2} + \binom{m}{2}$ bits (that is, $\mu$ takes values over $\binom{m}{2}$ bits on Alice's side and $\binom{m}{2}$ bits on Bob's side).

The desired lower bound on the entanglement-assisted quantum communication complexity of $\mathrm{Sink} \circ \mathrm{Xor}$ follows by optimizing over the number of rounds $t$.

###### Corollary 6.

It holds that .

Hence, combining with the following upper bound on the log-approximate-rank due to Ref. [CMS18], the function $\mathrm{Sink} \circ \mathrm{Xor}$ witnesses an exponential separation between log-approximate-rank and quantum communication, and refutes the quantum log-approximate-rank conjecture of Lee and Shraibman [LS09].

###### Theorem 7 ([CMS18]).

It holds that

1. .

In a subsequent version of [CMS18], Chattopadhyay et al. improved their upper bound on the approximate rank of $\mathrm{Sink} \circ \mathrm{Xor}$ further.

### 1.1 Independent work

Sinha and de Wolf [SdW18] used the fooling distribution method, in independent and simultaneous work, to obtain the same lower bound on the quantum communication complexity of $\mathrm{Sink} \circ \mathrm{Xor}$. This differs from our techniques, which we describe below.

### 1.2 Proof overview

At a high level, our argument follows the well-established information complexity approach [KNTZ07, CSWY01, BJKS04, JRS03a, BBCR10]. We view the given function as a composition of many instances of a simpler component function, and argue, through a direct sum property, a reduction from the composed function to the component function. This is achieved by embedding the inputs of the component function into the inputs of the composed function, where the remaining inputs are sampled from some suitable distribution in order to achieve the desired direct sum property. Following this, we show a lower bound on the information complexity of the component function.

In the present context, $\mathrm{Sink} \circ \mathrm{Xor}$ is a composition of many instances of the Equality function, in such a way that the input bits are shared across the instances. In Ref. [CMS18], the authors use Shearer's lemma to handle this overlap between the inputs across the instances and derive a corruption lower bound. For the reduction from $\mathrm{Sink} \circ \mathrm{Xor}$ to Equality, we also wish to use a Shearer-type inequality. We further argue that a lower bound on the information complexity of Equality (for protocols that make small error in the worst case) under the uniform distribution implies a lower bound on the information complexity of $\mathrm{Sink} \circ \mathrm{Xor}$. But it is not clear, a priori, that Equality should have high information cost under that distribution, as this function has trivial communication complexity under the uniform distribution. It turns out that the cut-and-paste argument of Anshu, Belovs, Ben-David, Göös, Jain, Kothari, Lee and Santha [ABB16] yields a constant lower bound on the information complexity of good protocols for Equality, even under the uniform distribution.

Broadly, our quantum lower bound proceeds along similar lines. The quantum cut-and-paste argument of Anshu, Ben-David, Garg, Jain, Kothari and Lee [ABDG17] yields a round-dependent lower bound on the quantum information complexity (QIC) [KNTZ07, JRS03b, JN14, Tou15, KLLGR16] of good protocols for Equality, even under the uniform distribution. But the quantum version of the embedding argument requires new methods. In the classical setting, using the classical information cost IC, as soon as we have Alice and Bob privately sample the remaining inputs, the Shearer-type embedding follows almost directly from a Shearer-like inequality for information [GKR15]. In the quantum setting, we would similarly like to use a Shearer-type inequality for quantum information [ATYY17]. However, it is not immediately clear how to make the protocol embedding work for the quantum information cost QIC. We instead settle on an alternate notion of quantum information cost (variants of which have appeared before [JRS05, JN14, LT17, ATYY17]) that works well for product input distributions and is equivalent to QIC up to a round-dependent factor. The argument then goes through by carefully using this notion. What we get is a Shearer-type embedding protocol for product input distributions that allows some specific pre-processing of the inputs. We provide such a general version in Section 4.1 in the quantum setting, while we give a more direct proof in the classical setting.

Hence, overall we get a round-dependent lower bound on the quantum information complexity of $\mathrm{Sink} \circ \mathrm{Xor}$, and the round-independent lower bound on quantum communication complexity follows by optimizing over the number of rounds in any good protocol.

## 2 Preliminaries and notation

For an integer $n \ge 1$, let $[n]$ represent the set $\{1, 2, \ldots, n\}$. Let $\mathcal{X}$ and $\mathcal{Y}$ be finite sets and let $k$ be a natural number. Let $\mathcal{X}^k$ be the set $\mathcal{X} \times \cdots \times \mathcal{X}$, the cross product of $\mathcal{X}$ with itself $k$ times. Let $\mu$ be a probability distribution on $\mathcal{X}$, and let $\mu(x)$ represent the probability of $x \in \mathcal{X}$ according to $\mu$. We write $X \sim \mu$ to denote that the random variable $X$ is distributed according to $\mu$. We use the same symbol to represent a random variable and its distribution whenever it is clear from the context. The expectation value of a function $f$ on $\mathcal{X}$ is defined as $\mathbb{E}_{x \leftarrow X}[f(x)] := \sum_x \Pr[X = x]\, f(x)$, where $x \leftarrow X$ means that $x$ is drawn according to the distribution of $X$. We say $X$ and $Y$ are independent iff $\Pr[X = x, Y = y] = \Pr[X = x] \cdot \Pr[Y = y]$ for each $(x, y)$. For joint random variables $XY$, $X_y$ will denote the distribution of $X$ conditioned on $Y = y$.

We now introduce some quantum information theoretic notation. We assume the reader is familiar with standard concepts in quantum computing [NC00, Wil12, Wat18].

Let $\mathcal{H}$ be a finite-dimensional complex Euclidean space, i.e., $\mathcal{H} = \mathbb{C}^d$ for some positive integer $d$, with the usual complex inner product $\langle u, v \rangle$, defined as $\langle u, v \rangle := \sum_i \overline{u}_i v_i$. We will also refer to $\mathcal{H}$ as a Hilbert space. We will usually denote vectors in $\mathcal{H}$ using bra-ket notation, e.g., $|\psi\rangle$.

The $\ell_1$ norm (also called the trace norm) of an operator $A$ on $\mathcal{H}$ is $\|A\|_1 := \mathrm{Tr}\sqrt{A^\dagger A}$, which is also equal to the (vector) $\ell_1$ norm of the vector of singular values of $A$. A quantum state (or a density matrix or simply a state) is a positive semidefinite matrix $\rho$ on $\mathcal{H}$ with $\mathrm{Tr}(\rho) = 1$. The state $\rho$ is said to be a pure state if its rank is $1$, or equivalently if $\mathrm{Tr}(\rho^2) = 1$, and otherwise it is called a mixed state. Let $|\psi\rangle$ be a unit vector on $\mathcal{H}$, that is $\langle \psi | \psi \rangle = 1$. With some abuse of notation, we use $\psi$ to represent the vector $|\psi\rangle$ and also the density matrix $|\psi\rangle\langle\psi|$ associated with $|\psi\rangle$. Given a quantum state $\rho$ on $\mathcal{H}$, the support of $\rho$, denoted $\mathrm{supp}(\rho)$, is the subspace of $\mathcal{H}$ spanned by all eigenvectors of $\rho$ with nonzero eigenvalues.

A quantum register $A$ is associated with some Hilbert space $\mathcal{H}_A$. Define $|A|$ to be the number of qubits in register $A$, so that $\dim(\mathcal{H}_A) = 2^{|A|}$. Let $\mathcal{L}(A)$ represent the set of all linear operators on $\mathcal{H}_A$. We denote by $\mathcal{D}(A)$ the set of density matrices on the Hilbert space $\mathcal{H}_A$. We use subscripts (or superscripts, according to whichever is convenient) to denote the space to which a state belongs, e.g., $\rho$ with subscript $A$ indicates $\rho_A \in \mathcal{D}(A)$. If two registers $A$ and $B$ are associated with the same Hilbert space, we represent this relation by $A \equiv B$. For two registers $A$ and $B$, we denote the combined register as $AB$, which is associated with the Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B$. For two quantum states $\rho$ and $\sigma$, $\rho \otimes \sigma$ represents the tensor product (or Kronecker product) of $\rho$ and $\sigma$. The identity operator on $\mathcal{H}_A$ is denoted $\mathbb{1}_A$.

Let $\rho_{AB} \in \mathcal{D}(AB)$. We define the partial trace with respect to $A$ of $\rho_{AB}$ as

$$\rho_B := \mathrm{Tr}_A(\rho_{AB}) := \sum_i \left(\langle i| \otimes \mathbb{1}_B\right) \rho_{AB} \left(|i\rangle \otimes \mathbb{1}_B\right),$$

where $\{|i\rangle\}_i$ is an orthonormal basis for the Hilbert space $\mathcal{H}_A$. The state $\rho_B$ is referred to as a reduced density matrix or a marginal state. Unless otherwise stated, a register missing from the subscript of a state denotes a partial trace over that register. Given $\rho_A \in \mathcal{D}(A)$, a purification of $\rho_A$ is a pure state $|\psi\rangle_{AB}$ such that $\mathrm{Tr}_B(|\psi\rangle\langle\psi|_{AB}) = \rho_A$. Any quantum state has a purification using a register $B$ with $\dim(\mathcal{H}_B) \ge \mathrm{rank}(\rho_A)$. The purification of a state, even for a fixed register $B$, is not unique, as any unitary applied on register $B$ alone does not change $\rho_A$.
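Both operations are easy to verify numerically. The following NumPy sketch (ours, not part of the paper) implements the partial trace and the standard eigendecomposition purification $|\psi\rangle = \sum_k \sqrt{\lambda_k}\, |e_k\rangle_A |k\rangle_R$, and checks that tracing out the purifying register recovers the original state:

```python
import numpy as np

def partial_trace(rho_AB, dA, dB, keep='B'):
    """Partial trace of a state on C^dA (x) C^dB.
    keep='B' returns Tr_A(rho_AB); keep='A' returns Tr_B(rho_AB)."""
    r = rho_AB.reshape(dA, dB, dA, dB)
    if keep == 'B':
        return np.einsum('ijik->jk', r)   # sum over the A indices
    return np.einsum('ijkj->ik', r)       # sum over the B indices

def purify(rho):
    """Purification |psi> in H_A (x) H_R (dim R = dim A) of rho on H_A,
    via the eigendecomposition rho = sum_k lambda_k |e_k><e_k|."""
    vals, vecs = np.linalg.eigh(rho)
    d = rho.shape[0]
    return sum(np.sqrt(max(vals[k], 0.0)) * np.kron(vecs[:, k], np.eye(d)[k])
               for k in range(d))

# check on a random mixed state: Tr_R |psi><psi| recovers rho
rng = np.random.default_rng(0)
G = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = G @ G.conj().T
rho /= np.trace(rho).real
psi = purify(rho)
rho_rec = partial_trace(np.outer(psi, psi.conj()), 3, 3, keep='A')
```

The non-uniqueness mentioned above corresponds here to the freedom of applying any unitary to the second (`R`) factor of `psi`, which leaves `rho_rec` unchanged.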

An important class of states that we will consider are the classical-quantum states. They are of the form $\rho_{XA} = \sum_x p(x)\, |x\rangle\langle x|_X \otimes \rho^x_A$, where $p$ is a probability distribution. In this case, the register $X$ can be viewed as a classical random variable, and we shall continue to use the notations that we have introduced for probability distributions, for example, $\mathbb{E}_{x \leftarrow X}$ to denote the average over $x$ drawn according to $p$.

A quantum super-operator (or a quantum channel or a quantum operation) is a completely positive and trace-preserving (CPTP) linear map $\mathcal{E} : \mathcal{L}(A) \to \mathcal{L}(B)$ (mapping states in $\mathcal{D}(A)$ to states in $\mathcal{D}(B)$). The identity super-operator on register $A$ is denoted $\mathbb{I}_A$. A unitary operator $U_A \in \mathcal{L}(A)$ is such that $U_A^\dagger U_A = U_A U_A^\dagger = \mathbb{1}_A$. The set of all unitary operators on register $A$ is denoted by $\mathcal{U}(A)$.

A two-outcome quantum measurement is defined by a collection $\{M, \mathbb{1} - M\}$, where $0 \preceq M \preceq \mathbb{1}$ (here $A \preceq B$ means that $B - A$ is positive semidefinite). Given a quantum state $\rho$, the probability of getting the outcome corresponding to $M$ is $\mathrm{Tr}(M\rho)$, and of getting the outcome corresponding to $\mathbb{1} - M$ is $1 - \mathrm{Tr}(M\rho)$.

#### 2.0.1 Distance measures for quantum states

We now define the distance measures we use and some properties of these measures. Before defining the distance measures, we introduce the concept of fidelity between two states, which is not a distance measure but a similarity measure. Note that all the notions introduced below also apply to classical random variables, when viewed as diagonal quantum states in some basis.

###### Definition 8 (Fidelity).

Let $\rho_A, \sigma_A \in \mathcal{D}(A)$ be quantum states. The fidelity between $\rho_A$ and $\sigma_A$ is defined as

$$F(\rho_A, \sigma_A) := \left\|\sqrt{\rho_A}\sqrt{\sigma_A}\right\|_1.$$

For two pure states $|\psi\rangle$ and $|\phi\rangle$, we have $F(\psi, \phi) = |\langle \psi | \phi \rangle|$. We now introduce the two distance measures we use.

###### Definition 9 (Distance measures).

Let $\rho_A, \sigma_A \in \mathcal{D}(A)$ be quantum states. We define the following distance measures between these states.

$$\text{Trace distance: } \Delta(\rho_A, \sigma_A) := \tfrac{1}{2}\left\|\rho_A - \sigma_A\right\|_1.$$
$$\text{Bures metric: } B(\rho_A, \sigma_A) := \sqrt{1 - F(\rho_A, \sigma_A)}.$$

Note that for any two quantum states $\rho_A$ and $\sigma_A$, these distance measures lie in $[0, 1]$. The distance measures are $0$ if and only if the states are equal, and the distance measures are $1$ if and only if the states have orthogonal support, i.e., if $F(\rho_A, \sigma_A) = 0$.

Conveniently, these measures are closely related.

###### Fact 10.

For all quantum states $\rho_A, \sigma_A \in \mathcal{D}(A)$, we have

$$B^2(\rho_A, \sigma_A) \le \Delta(\rho_A, \sigma_A) \le \sqrt{2} \cdot B(\rho_A, \sigma_A).$$

###### Proof.

The Fuchs–van de Graaf inequalities [FvdG99, Wat18] state that

$$1 - F(\rho_A, \sigma_A) \le \Delta(\rho_A, \sigma_A) \le \sqrt{1 - F^2(\rho_A, \sigma_A)}.$$

Our fact follows from this and the relation $B^2(\rho_A, \sigma_A) = 1 - F(\rho_A, \sigma_A)$, since $\sqrt{1 - F^2} = B\sqrt{1 + F} \le \sqrt{2}\, B$. ∎
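Both Fact 10 and the Fuchs–van de Graaf inequalities are easy to exercise numerically. The following NumPy sketch (ours, purely illustrative) computes fidelity, trace distance, and Bures metric for random mixed states and checks both chains of inequalities:

```python
import numpy as np

def rand_state(d, rng):
    """A random mixed state on C^d (normalized Wishart matrix)."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(M):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def fidelity(rho, sigma):
    """F(rho, sigma) = || sqrt(rho) sqrt(sigma) ||_1 (sum of singular values)."""
    s = np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False)
    return float(np.sum(s))

def trace_distance(rho, sigma):
    """Delta(rho, sigma) = (1/2) || rho - sigma ||_1."""
    return 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))

def bures(rho, sigma):
    """B(rho, sigma) = sqrt(1 - F(rho, sigma))."""
    return np.sqrt(max(1.0 - fidelity(rho, sigma), 0.0))

rng = np.random.default_rng(1)
for _ in range(200):
    rho, sigma = rand_state(3, rng), rand_state(3, rng)
    F, D, B = fidelity(rho, sigma), trace_distance(rho, sigma), bures(rho, sigma)
    # Fuchs-van de Graaf:  1 - F <= Delta <= sqrt(1 - F^2)
    assert 1 - F <= D + 1e-9 and D <= np.sqrt(max(1 - F**2, 0.0)) + 1e-9
    # Fact 10:  B^2 <= Delta <= sqrt(2) * B
    assert B**2 <= D + 1e-9 and D <= np.sqrt(2) * B + 1e-9
```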

We now review some properties of the Bures metric.

###### Fact 11.A (Triangle inequality [Bur69]).

The following triangle inequality and weak triangle inequality hold for the Bures metric and the square of the Bures metric: for all $\rho_A, \sigma_A, \tau_A \in \mathcal{D}(A)$,

$$B(\rho_A, \tau_A) \le B(\rho_A, \sigma_A) + B(\sigma_A, \tau_A) \quad \text{and} \quad B^2(\rho_A, \tau_A) \le 2\left(B^2(\rho_A, \sigma_A) + B^2(\sigma_A, \tau_A)\right).$$

###### Fact 11.B (Averaging over classical registers).

For classical-quantum states $\theta_{XB} = \sum_x p(x)\, |x\rangle\langle x| \otimes \theta^x_B$ and $\theta'_{XB} = \sum_x p(x)\, |x\rangle\langle x| \otimes \theta'^x_B$ with the same distribution on $X$, we have

$$B^2(\theta_{XB}, \theta'_{XB}) = \mathbb{E}_{x \leftarrow X}\left[B^2(\theta^x_B, \theta'^x_B)\right].$$

Finally, an important property of both these distance measures is monotonicity under quantum operations [Lin75, BCF96].

###### Fact 12 (Monotonicity under quantum operations).

For quantum states $\rho_A, \sigma_A \in \mathcal{D}(A)$ and a quantum operation $\mathcal{E}$, it holds that

$$\Delta(\mathcal{E}(\rho_A), \mathcal{E}(\sigma_A)) \le \Delta(\rho_A, \sigma_A) \quad \text{and} \quad B(\mathcal{E}(\rho_A), \mathcal{E}(\sigma_A)) \le B(\rho_A, \sigma_A),$$

with equality if $\mathcal{E}$ is unitary. In particular, for bipartite states $\rho_{AB}, \sigma_{AB} \in \mathcal{D}(AB)$, it holds that

$$\Delta(\rho_{AB}, \sigma_{AB}) \ge \Delta(\rho_A, \sigma_A) \quad \text{and} \quad B(\rho_{AB}, \sigma_{AB}) \ge B(\rho_A, \sigma_A).$$

#### 2.0.2 Mutual information

We start with the following fundamental information theoretic quantities. We refer the reader to the excellent references [Wil12, Wat18] on quantum information theory for further study.

###### Definition 13.

Let $\rho_A \in \mathcal{D}(A)$ be a quantum state. We then define the following.

$$\text{von Neumann entropy: } H(\rho_A) := -\mathrm{Tr}(\rho_A \log \rho_A).$$

We now define mutual information and conditional mutual information.

###### Definition 14 (Mutual information).

Let $\rho_{ABC} \in \mathcal{D}(ABC)$ be a quantum state. We define the following measures.

$$\text{Mutual information: } I(A : B)_\rho := H(\rho_A) + H(\rho_B) - H(\rho_{AB}).$$
$$\text{Conditional mutual information: } I(A : B \mid C)_\rho := I(A : BC)_\rho - I(A : C)_\rho.$$
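These quantities are easy to compute numerically from a density matrix. The sketch below (ours, not from the paper; logarithms are base $2$) checks two standard sanity points: a product state has zero mutual information, while a maximally entangled qubit pair has $I(A : B) = 2$:

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr(rho log rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # 0 log 0 = 0 convention
    return float(-np.sum(evals * np.log2(evals)))

def mutual_information(rho_AB, dA, dB):
    """I(A:B) = H(A) + H(B) - H(AB) for a state on C^dA (x) C^dB."""
    r = rho_AB.reshape(dA, dB, dA, dB)
    rho_A = np.einsum('ijkj->ik', r)       # Tr_B
    rho_B = np.einsum('ijik->jk', r)       # Tr_A
    return (von_neumann_entropy(rho_A) + von_neumann_entropy(rho_B)
            - von_neumann_entropy(rho_AB))

# product state: I(A:B) = 0
rho_A = np.diag([0.5, 0.5])
rho_B = np.diag([0.25, 0.75])
i_prod = mutual_information(np.kron(rho_A, rho_B), 2, 2)

# maximally entangled pair (|00> + |11>)/sqrt(2): I(A:B) = 2
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
i_bell = mutual_information(np.outer(phi, phi), 2, 2)
```

The value $2$ for the entangled pair (rather than the classical maximum $1$) is exactly the factor that makes quantum information cost measures subtler than their classical counterparts.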

We will need the following basic properties.

###### Fact 15 (Properties of I).

Let $\rho_{ABC} \in \mathcal{D}(ABC)$ be a quantum state. We have the following.

###### Fact 15.A (Nonnegativity).
$$I(A : B)_\rho \ge 0 \quad \text{and} \quad I(A : B \mid C)_\rho \ge 0.$$

###### Fact 15.B (Product states).

If $\rho_{AB} = \rho_A \otimes \rho_B$ is a product state, then

$$I(A : B)_\rho = 0.$$

###### Fact 15.C (Monotonicity).

For a quantum operation $\mathcal{E}$ acting on register $B$, $I(A : B)_\rho \ge I(A : \mathcal{E}(B))$, with equality when $\mathcal{E}$ is unitary. In particular, $I(A : BC)_\rho \ge I(A : B)_\rho$.

###### Fact 15.D (Averaging over conditioning register).

For a classical-quantum state $\rho_{XAB}$ (register $X$ is classical):

$$I(A : B \mid X)_\rho = \mathbb{E}_{x \leftarrow X}\, I(A : B)_{\rho^x}.$$

The following lemma, known as the Average Encoding Theorem [KNTZ07], formalizes the intuition that if a classical register and a quantum register are weakly correlated, then they are nearly independent.

###### Lemma 16.

For any classical-quantum state $\rho_{XA}$ with classical system $X$ and conditional states $\rho^x_A$,

$$\sum_x p_X(x) \cdot B^2(\rho^x_A, \rho_A) \le I(X : A)_\rho. \tag{1}$$
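For classical-quantum states, $I(X : A)$ equals the Holevo quantity $H(\rho_A) - \sum_x p(x) H(\rho^x_A)$, so Lemma 16 can be checked directly. The following sketch (ours, illustrative only) does so for a random ensemble of qubit states:

```python
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def psd_sqrt(M):
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def fidelity(rho, sigma):
    s = np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False)
    return float(np.sum(s))

rng = np.random.default_rng(2)
def rand_state(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

# classical-quantum state: X uniform on 4 values, each tagged with a qubit state
states = [rand_state(2) for _ in range(4)]
p = np.full(4, 0.25)
rho_avg = sum(px * rx for px, rx in zip(p, states))

# I(X:A) as the Holevo quantity H(rho_avg) - sum_x p(x) H(rho_x)
I_XA = entropy(rho_avg) - sum(px * entropy(rx) for px, rx in zip(p, states))

# left-hand side of (1): average squared Bures distance to the mean state
avg_B2 = sum(px * (1.0 - fidelity(rx, rho_avg)) for px, rx in zip(p, states))
```

On every run, `avg_B2` comes out below `I_XA`, as Lemma 16 demands; the gap reflects that the inequality is not tight for generic ensembles.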

The following Shearer-type inequality for quantum information was shown in Ref. [ATYY17]. Classical variants appeared in [GKR15, RS15].

###### Lemma 17.

Consider registers $U_1, \ldots, U_n$ and $V$, and define $U := U_1 U_2 \cdots U_n$. Consider a quantum state $\Psi_{UV}$ such that $\Psi_U = \Psi_{U_1} \otimes \cdots \otimes \Psi_{U_n}$. Let $S \subseteq [n]$ be a random set picked independently of $\Psi$ satisfying $\Pr[i \in S] \le 1/k$ for all $i \in [n]$ and some $k > 0$. Then it holds that

$$I(U_S : V \mid S)_\Psi \le \frac{I(U : V)_\Psi}{k}.$$

### 2.1 Classical communication complexity

Let $f : \mathcal{X} \times \mathcal{Y} \to \{0, 1\}$ be a total function (that is, its value is defined on every input) and $\varepsilon \in (0, 1/2)$. In a two-party communication task, Alice is given an input $x \in \mathcal{X}$, Bob is given $y \in \mathcal{Y}$, and the task is to compute $f(x, y)$ by exchanging as few bits as possible. The parties are allowed to possess pre-shared randomness ($R$) and private randomness ($R_A$, $R_B$). Without loss of generality, we can assume that Alice communicates first and also gives the final output. The communication cost of a protocol $\Pi$, denoted by $\mathrm{CC}(\Pi)$, is the maximum number of bits the parties have to communicate over all possible inputs and values of the shared and private randomness. Let $R_\varepsilon(f)$ represent the two-party randomized communication complexity of $f$ with worst case error $\varepsilon$, i.e., the communication of the best two-party randomized protocol for $f$ with error at most $\varepsilon$ over any input. The worst-case error of the protocol $\Pi$ over the inputs is denoted by $\mathrm{err}(\Pi)$.

###### Definition 18 (XOR function).

A function $F : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}$ is called an XOR function if there exists a function $f : \{0,1\}^n \to \{0,1\}$ such that $F(x, y) = f(x \oplus y)$ for all $x, y$. We denote $F = f \circ \mathrm{Xor}$.

###### Definition 19 (Rank).

The rank of a matrix $M$, denoted by $\mathrm{rk}(M)$, is the minimum integer $r$ for which there exist rank-$1$ matrices $M_1, \ldots, M_r$ such that $M = \sum_{i=1}^r M_i$.

###### Definition 20 (Non-negative Rank).

The non-negative rank of a matrix $M$, denoted by $\mathrm{rk}^+(M)$, is the minimum integer $r$ for which there exist rank-$1$ matrices $M_1, \ldots, M_r$ with non-negative entries such that $M = \sum_{i=1}^r M_i$.

###### Definition 21 (Approximate rank).

Let $\varepsilon > 0$ and let $M$ be an $|\mathcal{X}| \times |\mathcal{Y}|$ matrix. The $\varepsilon$-approximate rank of $M$ is defined as

$$\mathrm{rk}_\varepsilon(M) = \min_{\widetilde{M}} \left\{ \mathrm{rk}(\widetilde{M}) : \forall x \in \mathcal{X}, y \in \mathcal{Y},\ |\widetilde{M}(x, y) - M(x, y)| \le \varepsilon \right\}.$$
###### Definition 22 (Approximate non-negative rank).

Let $\varepsilon > 0$ and let $M$ be an $|\mathcal{X}| \times |\mathcal{Y}|$ matrix. The $\varepsilon$-approximate non-negative rank of $M$ is defined as

$$\mathrm{rk}^+_\varepsilon(M) = \min_{\widetilde{M}} \left\{ \mathrm{rk}^+(\widetilde{M}) : \forall x \in \mathcal{X}, y \in \mathcal{Y},\ |\widetilde{M}(x, y) - M(x, y)| \le \varepsilon \right\}.$$
###### Definition 23 (Distributional Information Complexity).

The distributional information complexity of a randomized protocol $\Pi$ with respect to a distribution $\mu$ on the inputs $(X, Y)$ is defined as

$$\mathrm{IC}(\Pi, \mu) = I(X : \Pi \mid Y R R_B) + I(Y : \Pi \mid X R R_A).$$
###### Definition 24 (Max Distributional Information Complexity).

The max-distributional information complexity of a randomized protocol $\Pi$ is defined as

$$\mathrm{IC}(\Pi) = \max_\mu \mathrm{IC}(\Pi, \mu).$$
###### Definition 25 (Information Complexity of a function).

The information complexity of a function $f$, with worst case error $\varepsilon$, is defined as

$$\mathrm{IC}(f) = \inf_{\Pi : \mathrm{err}(\Pi) \le \varepsilon} \mathrm{IC}(\Pi).$$

Note that, since one bit of communication can hold at most one bit of information, for any protocol $\Pi$ and distribution $\mu$ we have $\mathrm{IC}(\Pi, \mu) \le \mathrm{CC}(\Pi)$. This implies that the information complexity of a function is a lower bound on its randomized communication complexity.

###### Lemma 26 (Cut-and-paste lemma (Lemma 6.3 in [BJKS04])).

Let $(x, y)$ and $(x', y')$ be two inputs to a randomized protocol $\Pi$, and let $\Pi(x, y)$ denote the distribution of the transcript on input $(x, y)$. Then

$$B(\Pi(x, y), \Pi(x', y')) = B(\Pi(x, y'), \Pi(x', y)).$$
###### Fact 27 (Pythagorean property (Lemma 6.4 in [BJKS04])).

Let $(x, y)$ and $(x', y')$ be two inputs to a randomized protocol $\Pi$. Then

$$B^2(\Pi(x, y'), \Pi(x', y')) + B^2(\Pi(x, y), \Pi(x', y)) \le 2\, B^2(\Pi(x', y'), \Pi(x, y)).$$

### 2.2 Quantum communication complexity

In quantum communication complexity, two players wish to compute a classical function $F : \mathcal{X} \times \mathcal{Y} \to \{0, 1\}$ for some finite sets $\mathcal{X}$ and $\mathcal{Y}$. The inputs $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ are given to the two players Alice and Bob, and the goal is to minimize the quantum communication between them required to compute the function.

While the players have classical inputs, they are allowed to exchange quantum messages. Depending on whether or not we allow the players arbitrary shared entanglement, we get $Q_\varepsilon(F)$, the bounded-error quantum communication complexity without shared entanglement, and $Q^*_\varepsilon(F)$, the same measure with shared entanglement. Obviously $Q^*_\varepsilon(F) \le Q_\varepsilon(F)$. In this paper we will only work with $Q^*_\varepsilon(F)$, which makes our results stronger, since we prove lower bounds in this work.

An entanglement-assisted quantum communication protocol $\Pi$ for a function $F$ is as follows. Alice and Bob start with pre-shared entanglement. Upon receiving inputs $(x, y)$, where Alice gets $x$ and Bob gets $y$, they exchange quantum messages. At the end of the protocol, Alice applies a two-outcome measurement on her qubits and correspondingly outputs $0$ or $1$. Let $O(x, y)$ be the random variable corresponding to the output produced by Alice in $\Pi$, given input $(x, y)$.

Let $\mu$ be a distribution over $\mathcal{X} \times \mathcal{Y}$. Let the inputs to Alice and Bob be given in registers $X$ and $Y$ in the state

$$\rho_\mu := \sum_{x, y} \mu(x, y)\, |x\rangle\langle x|_X \otimes |y\rangle\langle y|_Y. \tag{2}$$

Let these registers be purified by $R_X$ and $R_Y$ respectively, which are not accessible to either player. Denote

$$|\mu\rangle_{X R_X Y R_Y} := \sum_{x, y} \sqrt{\mu(x, y)}\, |x x y y\rangle_{X R_X Y R_Y}. \tag{3}$$

Let Alice and Bob initially hold registers $A_0$ and $B_0$ with shared entanglement $|\Theta_0\rangle_{A_0 B_0}$. Then the initial state is

$$|\Psi_0\rangle_{X Y R_X R_Y A_0 B_0} := |\mu\rangle_{X Y R_X R_Y} |\Theta_0\rangle_{A_0 B_0}. \tag{4}$$

Alice applies a unitary $U_1$ mapping $A_0$ to $A_1 C_1$, such that the unitary acts conditioned on the register $X$. She sends $C_1$ to Bob. Let $B_1$ be a relabeling of Bob's register $B_0$. He applies a unitary $U_2$ mapping $B_1 C_1$ to $B_2 C_2$, such that the unitary acts conditioned on the register $Y$. He sends $C_2$ to Alice. The players proceed in this fashion for $t$ messages, for $t$ even, until the end of the protocol. At any round $r$, let the registers be $A_r C_r B_r$, where $C_r$ is the message register, $A_r$ is Alice's register and $B_r$ is Bob's register. If $r$ is odd, then $B_r \equiv B_{r-1}$, and if $r$ is even, then $A_r \equiv A_{r-1}$. On input $(x, y)$, let the joint state in registers $A_r C_r B_r$ be $|\Theta^{x,y}_r\rangle$. Then the global state at round $r$ is

$$|\Psi_r\rangle_{X Y R_X R_Y A_r C_r B_r} := \sum_{x, y} \sqrt{\mu(x, y)}\, |x x y y\rangle_{X R_X Y R_Y} |\Theta^{x,y}_r\rangle_{A_r C_r B_r}. \tag{5}$$

We define the following quantities.

$$\text{Worst-case error: } \mathrm{err}(\Pi) := \max_{(x, y)} \Pr[O(x, y) \ne F(x, y)].$$
$$\text{Quantum CC of a protocol: } \mathrm{QCC}(\Pi) := \sum_i |C_i|.$$
$$\text{Quantum CC of } F: \ Q^*_\varepsilon(F) := \min_{\Pi : \mathrm{err}(\Pi) \le \varepsilon} \mathrm{QCC}(\Pi).$$

Our first fact links $\mathrm{err}(\Pi)$ with the distance between a pair of final states corresponding to inputs with different outputs.

###### Fact 28 (Error vs. distance).

Consider a non-constant function $F$, and let $(x, y)$ and $(x, y')$ be inputs such that $F(x, y) \ne F(x, y')$. For any protocol $\Pi$ with $t$ rounds, it holds that

$$\Delta\!\left(\Theta^{x,y}_{t, A_t C_t},\, \Theta^{x,y'}_{t, A_t C_t}\right) \ge 1 - 2\,\mathrm{err}(\Pi).$$

In what follows, let $A'_k$ and $B'_k$ represent Alice's and Bob's registers after reception of the message at round $k$. That is, at even round $k$, $A'_k = A_k C_k$, and at odd round $k$, $B'_k = B_k C_k$. We will need the following version of the quantum cut-and-paste lemma from [NT17] (also see [JRS03b, JN14] for similar arguments). This is a special case of [NT17, Lemma 7], and we have rephrased it using our notation.

###### Lemma 29 (Quantum cut-and-paste).

Let $\Pi$ be a quantum protocol with classical inputs, and consider distinct inputs $u, u'$ for Alice and $v, v'$ for Bob. Let $\Psi_0$ be the initial shared state between Alice and Bob. Also let $\Psi^{u,v}_k$ be the shared state after round $k$ of the protocol when the inputs to Alice and Bob are $(u, v)$ respectively. For odd $k$, let

$$h_k = B\!\left(\Psi^{u,v}_{k, B'_k},\, \Psi^{u',v}_{k, B'_k}\right)$$

and for even $k$, let

$$h_k = B\!\left(\Psi^{u,v}_{k, A'_k},\, \Psi^{u,v'}_{k, A'_k}\right).$$

Then

$$B\!\left(\Psi^{u',v}_{r, A'_r},\, \Psi^{u',v'}_{r, A'_r}\right) \le 2 \sum_{k=1}^{r} h_k.$$

As discussed in the introduction, the approximate rank lower bounds bounded-error quantum communication complexity with shared entanglement [LS08]:

###### Fact 30.

For any two-party function $F$ and $\varepsilon \in (0, 1/2)$, we have $Q^*_\varepsilon(F) = \Omega(\log \mathrm{rk}_\varepsilon(M_F))$.

### 2.3 Quantum information complexity

###### Definition 31.

Given a quantum protocol $\Pi$ with classical inputs distributed as $\mu$, the quantum information cost is defined as

$$\mathrm{QIC}(\Pi, \mu) = \sum_{i \text{ odd}} I(R_X R_Y : C_i \mid Y B_i) + \sum_{i \text{ even}} I(R_X R_Y : C_i \mid X A_i). \tag{6}$$
###### Definition 32.

Given a quantum protocol $\Pi$ with classical inputs distributed as $\mu$, the cumulative Holevo information cost is defined as

$$\mathrm{HQIC}(\Pi, \mu) = \sum_{i \text{ odd}} I(X : B_i C_i \mid Y) + \sum_{i \text{ even}} I(Y : A_i C_i \mid X).$$
###### Definition 33.

Given a quantum protocol $\Pi$ and a product distribution $\mu$ over the classical inputs, the cumulative superposed-Holevo information cost is defined as

$$\mathrm{SQIC}(\Pi, \mu) := \sum_{i \text{ odd}} I(X : Y R_Y B_i C_i)_{\rho_i} + \sum_{i \text{ even}} I(Y : X R_X A_i C_i)_{\rho_i}.$$

Note that for product input distributions $\mu$ on $XY$ and for each $i$,

$$I(X : B_i C_i \mid Y)_{\rho_i} = I(X : Y B_i C_i)_{\rho_i} \le I(X : Y R_Y B_i C_i)_{\rho_i}, \tag{7}$$
$$I(Y : A_i C_i \mid X)_{\rho_i} = I(Y : X A_i C_i)_{\rho_i} \le I(Y : X R_X A_i C_i)_{\rho_i}. \tag{8}$$

Combining with other results in Ref. [LT17], we get the following for any $t$-round protocol $\Pi$ and any product distribution $\mu$:

$$2\,\mathrm{QCC}(\Pi) \ge \mathrm{QIC}(\Pi, \mu) \tag{9}$$
$$\ge \frac{1}{t}\,\mathrm{SQIC}(\Pi, \mu) \tag{10}$$
$$\ge \frac{1}{t}\,\mathrm{HQIC}(\Pi, \mu) \tag{11}$$
$$\ge \frac{1}{2t}\,\mathrm{QIC}(\Pi, \mu). \tag{12}$$

## 3 Lower bound on the information complexity of Sink∘Xor

### 3.1 Reducing Equality to Sink∘Xor

We define the Equality function as

$$\textsc{EQ}(x, y) = \begin{cases} 1 & \text{if } x = y, \\ 0 & \text{otherwise.} \end{cases}$$

Recall the Sink function from Definition 4. Following [CMS18], we use projections of the inputs in our proof to analyze the input of the Sink function. Let $v_1, \ldots, v_m$ denote the vertices of the graph. Let $S_{v_i}$ be the set of input coordinates that correspond to the edges incident to $v_i$. We use the notation $w_{v_i}$ to denote the input $w$ projected to the coordinates in $S_{v_i}$. Note that $w_{v_i}$ decides whether or not $v_i$ is a sink. By $z_{v_i}$, we refer to the bit string such that $v_i$ is a sink if and only if $w_{v_i} = z_{v_i}$. Sink can be written as

$$\mathrm{Sink}(w) = \vee_{i=1}^m \textsc{EQ}(w_{v_i}, z_{v_i}),$$

since at most one vertex can be a sink in the complete directed graph. Our communication function is $\mathrm{Sink} \circ \mathrm{Xor}$. Similar to Sink, it can be represented as

$$\mathrm{Sink} \circ \mathrm{Xor}(x, y) = \vee_{i=1}^m \textsc{EQ}(x_{v_i},\, y_{v_i} \oplus z_{v_i}).$$
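The decomposition above can be verified exhaustively for small $m$. The following Python sketch (ours, purely illustrative; `proj` and `z_v` play the roles of the projection $w_{v}$ and the string $z_{v}$) checks the identity for $m = 4$:

```python
import itertools

def edges(m):
    return [(i, j) for i in range(m) for j in range(i + 1, m)]

def sink(z, m):
    """1 iff some vertex has all adjacent edges incoming; z[(i,j)] = 1 means i -> j."""
    return int(any(all(z[(min(u, v), max(u, v))] == (1 if v > u else 0)
                       for u in range(m) if u != v)
                   for v in range(m)))

def proj(w, v, m):
    """w restricted to the m-1 edge coordinates adjacent to v (the set S_v)."""
    return tuple(w[(min(u, v), max(u, v))] for u in range(m) if u != v)

def z_v(v, m):
    """The unique pattern on S_v that makes v a sink."""
    return tuple(1 if v > u else 0 for u in range(m) if u != v)

m = 4
E = edges(m)
for xb in itertools.product([0, 1], repeat=len(E)):
    for yb in itertools.product([0, 1], repeat=len(E)):
        x, y = dict(zip(E, xb)), dict(zip(E, yb))
        w = {e: x[e] ^ y[e] for e in E}
        # right-hand side: OR over vertices of EQ(x_v, y_v XOR z_v)
        rhs = int(any(proj(x, v, m) ==
                      tuple(b ^ zb for b, zb in zip(proj(y, v, m), z_v(v, m)))
                      for v in range(m)))
        assert sink(w, m) == rhs
```

The check passes because $\textsc{EQ}(x_v, y_v \oplus z_v) = 1$ iff $x_v \oplus y_v = z_v$, i.e., iff $v$ is a sink of the graph determined by $w = x \oplus y$.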

Our first result is as follows.

###### Theorem 34.

Suppose . Let $\Pi$ be a protocol for $\mathrm{Sink} \circ \mathrm{Xor}$ which makes a worst case error of at most $\varepsilon$. There exists a protocol $\Pi'$ for EQ that makes a worst case error of at most $2\varepsilon$. Furthermore, it holds that

$$\mathrm{IC}(\Pi', \nu) \le \frac{2}{m}\,\mathrm{IC}(\Pi, \mu),$$

where $\nu$ is the uniform distribution over the inputs to EQ and $\mu$ is uniform over the inputs to $\mathrm{Sink} \circ \mathrm{Xor}$.

###### Proof.

We have

$$\mathrm{IC}(\Pi, \mu) = I(X : \Pi \mid Y R R_B) + I(Y : \Pi \mid X R R_A) = I(X : \Pi Y R R_B) + I(Y : \Pi X R R_A),$$

where the information quantities are evaluated on $\Pi$ and the associated inputs $(X, Y) \sim \mu$; the second equality holds because $X$ is independent of $Y R R_B$ (and $Y$ of $X R R_A$) under the uniform distribution. Let $S$ be a random variable which takes values in $\{S_{v_1}, \ldots, S_{v_m}\}$ with uniform probability. Let $X_S$ (similarly $Y_S$) be the restriction of $X$ (similarly $Y$) to the coordinates in $S$. Since each coordinate appears in exactly two of the sets $S_{v_1}, \ldots, S_{v_m}$, we have $\Pr[i \in S] = 2/m$ for every coordinate $i$. Thus, from Lemma 17, we have

$$\frac{2}{m}\,\mathrm{IC}(\Pi, \mu) \ge \mathbb{E}_s\left[I(X_S : Y \Pi R R_B \mid S = s) + I(Y_S : X \Pi R R_A \mid S = s)\right] \tag{13}$$
$$= \mathbb{E}_s\left[I(X_S : \Pi \mid Y R R_B, S = s) + I(Y_S : \Pi \mid X R R_A, S = s)\right]. \tag{14}$$

The protocol $\Pi'$ for EQ is now as follows, for inputs $(a, b)$ (we use $(a, b)$ as inputs here to avoid confusion with $(x, y)$ for $\mathrm{Sink} \circ \mathrm{Xor}$).

• Alice and Bob take a sample $s$ from $S$ using shared randomness. Let $v$ be the vertex such that $s = S_v$.

• They set $x_s = a$ and $y_s = b \oplus z_v$. Alice samples $x_{\bar{s}}$ uniformly at random from her private randomness and Bob samples $y_{\bar{s}}$ uniformly at random from his private randomness. Here $\bar{s}$ is the complement of $s$. This specifies the input $(x, y)$ for $\Pi$.

• They run the protocol $\Pi$ on $(x, y)$ and output accordingly.

Observe that $X$ and $Y$ are distributed uniformly if $A$ and $B$ are. Thus,

$$\mathrm{IC}(\Pi', \nu) = \mathbb{E}_s\left[I(X_S : \Pi \mid Y_S R Y_{\bar{S}} R_B, S = s) + I(Y_S : \Pi \mid X_S R X_{\bar{S}} R_A, S = s)\right]$$
$$= \mathbb{E}_s\left[I(X_S : \Pi \mid Y R R_B, S = s) + I(Y_S : \Pi \mid X R R_A, S = s)\right],$$

where the information quantities are evaluated on $\Pi'$ and the associated inputs, and the desired information bound follows by (13) and (14).

To bound the worst case error of $\Pi'$, we argue as follows. Fix some input $(a, b)$ to $\Pi'$. If $\textsc{EQ}(a, b) = 1$, then $v$ is a sink in the graph determined by $x \oplus y$, which implies that the error of $\Pi'$ on this input is the same as the error of $\Pi$ on the corresponding $(x, y)$, hence at most $\varepsilon$. Now consider the case where $\textsc{EQ}(a, b) = 0$. The function $\mathrm{Sink} \circ \mathrm{Xor}$ evaluates to 1 only if $x_{v_i} = y_{v_i} \oplus z_{v_i}$ for some $i$. Since $\textsc{EQ}(a, b) = 0$, we conclude that the sink (if it exists) cannot be equal to $v$. Moreover, the edge adjacent to $v$ is already fixed by $(a, b)$, and if it is not consistent with the corresponding value in $z_{v_i}$, then