1 Introduction
One of the possible origins of quantum computers' power is the exponential size of the Hilbert space: an $n$-qubit quantum state is a unit vector in a $2^n$-dimensional complex vector space. On the other hand, one of the fundamental results in quantum information theory – Holevo's theorem [29] – states that no more than $n$ bits of classical information can be transmitted by $n$ qubits without entanglement. Nonetheless, interesting scenarios arise when allowing a small chance of transmitting the wrong message and/or obtaining partial information at the expense of losing information about the rest of the system. One of these scenarios is the concept of quantum random access codes (QRACs), where a number of bits is encoded into a smaller number of qubits such that any one of the initial bits can be recovered with some probability of success. A QRAC is normally denoted by $(n,m,p)$, meaning that $n$ bits are encoded into $m$ qubits such that any initial bit can be recovered with probability at least $p$ (greater than $1/2$, since $1/2$ can be achieved by pure guessing), and a classical version, called simply random access code (RAC), is similarly defined, with the encoding message being $m$ bits. The idea of QRACs first appeared in a paper by Stephen Wiesner [64] in 1983 under the name of conjugate coding, and was later rediscovered by Ambainis et al. in 1999 [4]. Quantum random access codes have found application in many different contexts, e.g. quantum finite automata [4, 43], network coding [26, 27], quantum communication complexity [10, 22, 35], locally decodable codes [8, 33, 34, 62], nonlocal games [42, 60], cryptography [47], quantum state learning [1], device-independent dimension witnessing [2, 3, 63], self-testing measurements [20, 19], randomness expansion [38], studies of no-signaling resources [23], and the characterization of quantum correlations from information theory [49]. The $(2,1,0.85)$ and $(3,1,0.79)$ QRACs were first experimentally demonstrated in [56]. See [21, 25, 42, 59, 61] for subsequent demonstrations.
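To make the $(2,1,0.85)$ QRAC concrete, the following sketch (our illustration of the standard construction, not code from the paper) encodes two bits into one qubit whose Bloch vector is $((-1)^{x_0}, (-1)^{x_1}, 0)/\sqrt{2}$, and recovers $x_0$ with a Pauli-$X$ measurement and $x_1$ with a Pauli-$Y$ measurement:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def encode(x0, x1):
    """One-qubit state with Bloch vector ((-1)**x0, (-1)**x1, 0)/sqrt(2)."""
    return 0.5 * (I2 + ((-1) ** x0 * X + (-1) ** x1 * Y) / np.sqrt(2))

def p_success(rho, pauli, bit):
    """Probability that measuring `pauli` yields outcome (-1)**bit."""
    proj = 0.5 * (I2 + (-1) ** bit * pauli)
    return np.trace(proj @ rho).real

# worst-case success probability over all inputs and queried bits
p = min(
    min(p_success(encode(x0, x1), X, x0), p_success(encode(x0, x1), Y, x1))
    for x0 in (0, 1) for x1 in (0, 1)
)
print(round(p, 4))  # 0.8536
```

Every input/query pair succeeds with probability exactly $\frac{1}{2} + \frac{1}{2\sqrt{2}} \approx 0.85$, so the worst case equals the average case for this code.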
In this paper we further generalize the idea of (quantum) random access codes to recovering not just an initial bit, but the value of a fixed Boolean function $f$ on any subset of the initial bits of fixed size. We call them $f$-random access codes. The case of the Parity function was already considered in [8], and here we generalize to arbitrary Boolean functions $f$.
1.1 Related Work
An $(n,m,p)$-(Q)RAC is an encoding of $n$ bits into $m$ (qu)bits such that any initial bit can be recovered with probability at least $p$. This probability is the worst-case success probability over all possible pairs of input string $x$ and queried bit index $i$. Many different resources can be used during the encoding and decoding, e.g. private randomness (PR), shared randomness (SR), shared entanglement, and even superquantum correlations like Popescu-Rohrlich boxes [50].
Regarding the classical RAC, Ambainis et al. [4] proved that there is no $(2,1,p)$-RAC (and no $(n,1,p)$-RAC by extension) with PR and worst-case success probability $p > 1/2$. On the other hand, Ambainis et al. [5] showed that $(n,1,p)$-RACs with SR can achieve success probability $p > 1/2$.
Theorem 1 ([5, Equation (25)]).
The optimal $(n,1,p)$-RAC with SR has success probability $p = \frac{1}{2} + \frac{1}{2^n}\binom{n-1}{\lfloor (n-1)/2 \rfloor}$.
For a general number of encoded bits, Ambainis et al. [4] developed an $(n,m,p)$-RAC with PR using a specific code from [14] which matches their classical lower bound $m \ge (1-h(p))n$ up to an additive logarithmic term, where $h$ is the binary entropy function.
Theorem 2 ([4, Theorem 2.2]).
There is an $(n, (1-h(p))n + O(\log n), p)$-RAC with PR for any $\frac{1}{2} < p < 1$.
As for QRACs, Ambainis et al. [4] showed the existence of a $(2,1,0.85)$-QRAC with PR (usually private randomness is already assumed in QRACs, since the encoding is onto density matrices) and $p = \frac{1}{2} + \frac{1}{2\sqrt{2}}$, and the existence of a $(3,1,0.79)$-QRAC with PR and $p = \frac{1}{2} + \frac{1}{2\sqrt{3}}$ (the second attributed to Chuang). Later Hayashi et al. [26] showed the impossibility of a $(4,1,p)$-QRAC (and of an $(n,1,p)$-QRAC for $n \ge 4$ by extension) with PR and success probability $p > 1/2$. Similarly to the classical case, Ambainis et al. [5] showed that QRACs can also benefit from SR.
Theorem 3 ([5, Theorem 6]).
There is an $(n,1,p)$-QRAC with SR and success probability $p = \frac{1}{2} + \Theta\!\left(\frac{1}{\sqrt{n}}\right)$.
The specific case of encoding into $m$ qubits was explored in [26, 31, 39]. For the general case, Iwama et al. [32] constructed an $(n,m,p)$-QRAC with PR (such a construction also works more generally). On the other hand, Ambainis et al. [4] proved that if an $(n,m,p)$-QRAC with PR and $p > 1/2$ exists, then $m = \Omega(n/\log n)$, which was later improved to $m \ge (1-h(p))n$ by Nayak [43], thus matching the same classical lower bound from [4].
The idea of decoding a function of the initial bits instead of a single bit was already considered by Ben-Aroya, Regev and de Wolf [8] (who also considered recovering multiple bits rather than just one). More specifically, they defined a QRAC for parity, where $n$ bits are encoded into $m$ qubits such that the parity of any $k$ of the initial bits can be recovered with success probability at least $p$. (In their definition the success probability is the average over random subsets and random inputs, which, in our context, is equivalent to using SR.) Using their hypercontractive inequality for matrix-valued functions, they proved the following upper bound on the success probability.
Theorem 4 ([8, Theorem 7]).
For any $\eta > 2\ln 2$ there is a constant $C_\eta$ such that, for any such QRAC with SR,
$$p \le \frac{1}{2} + C_\eta \left(\frac{\eta m}{n}\right)^{k/2}. \qquad (1)$$
They conjectured that the factor $\eta$ can be dropped from the above bound, and thus the bound extended to a wider range of parameters, although this might require a strengthening of their hypercontractive inequality.
The use of shared entanglement in random access codes was first considered by Klauck [35, 36]. Here the encoding and decoding parties are allowed to use an arbitrary amount of shared entangled states (note that shared entanglement can be used to obtain both private and shared randomness). The figure of merit in this generalization is the relation between $n$, $m$ and $p$, while the amount of shared entanglement is not taken into account. Klauck [35, 36] considered an $(n,m,p)$-QRAC with shared entanglement and, by its equivalence to the quantum one-way communication complexity of the index function, proved a lower bound similar to Nayak's. Later Pawłowski and Żukowski [48] coined the term entanglement-assisted random access code (EARAC), which is a RAC with shared entanglement, and studied the case $m = 1$, giving protocols with better decoding probabilities compared to the usual QRAC with SR. Recently Tănăsescu et al. [58] expanded the idea of EARACs to recovering an initial bit under a specific request distribution.
Theorem 5 ([48] and [58, Corollary 2 and Theorem 5]).
The optimal $(n,1,p)$-EARAC with SR has success probability $p = \frac{1}{2} + \frac{1}{2\sqrt{n}}$.
The idea of (Q)RACs was generalized in other ways, e.g. parity-oblivious [7, 13, 56] and multiparty [53] versions, encoding into $d$-valued systems (qudits) [6, 12, 19, 39, 59], a wider range of information retrieval tasks [18], and a connection to Popescu-Rohrlich boxes. It was shown [66] that a Popescu-Rohrlich box can simulate a $(2,1,1)$-RAC by means of just one bit of communication, while in [23] the converse was proven. An object called a rac-box [23] was defined, which is a box that implements a RAC when supported with one bit of communication, and it was shown that a non-signaling rac-box is equivalent to a Popescu-Rohrlich box. A quantum version of a rac-box was later proposed in [24]. Finally, we mention that RACs were also studied within “theories” that violate the uncertainty relation for anticommuting observables and present stronger-than-quantum correlations [57].
1.2 Our Results
This paper focuses on generalizing the classical, quantum and entanglement-assisted random access codes. Instead of recovering a single bit from the initial string $x \in \{0,1\}^n$, we are interested in evaluating a Boolean function $f$ on any sequence of $k$ bits from $x$. We generically call them $f$-random access codes. Let $S_{n,k}$ be the set of sequences of $k$ different elements from $[n]$ and let $x_S$ denote the substring of $x$ specified by $S \in S_{n,k}$. Alice gets $x$ and she needs to encode her data and send it to Bob, so that he can decode $f(x_S)$ for any $S$ with probability $p$. Such a problem was already considered by Sherstov in a two-way communication complexity setting [54] and later used in his pattern matrix method [55] in order to prove other communication complexity lower bounds. Even though our results are expressed in the language of random access codes, they can also be seen as set in a one-way communication complexity setting. If two-way communication is allowed, Bob can send the identity of his sequence to Alice with $O(k \log n)$ bits of communication, whereas (as we will see) significantly more communication may be required in the one-way scenario.
In the following, $\Lambda$ will refer to a sample space with some probability distribution. As before, PR and SR stand for private and shared randomness, respectively. Moreover, since we require the success probability $p$ to always be greater than $1/2$, given that one can always guess the correct result with probability $1/2$, from now on it will be convenient to use the bias of the prediction, defined as $\epsilon = 2p - 1$, instead of its success probability $p$. We start with the $(n,m,\epsilon)$ $f$-RAC, the classical random access code on $m$ bits with bias $\epsilon$.
Definition 6.
An $(n,m,\epsilon)$ $f$-RAC with PR is a (possibly randomized) encoding map $E : \{0,1\}^n \to \{0,1\}^m$ satisfying the following: for every $S \in S_{n,k}$ there is a (possibly randomized) decoding map $D_S : \{0,1\}^m \to \{0,1\}$ such that $\Pr[D_S(E(x)) = f(x_S)] \ge \frac{1+\epsilon}{2}$ for all $x \in \{0,1\}^n$.
Definition 7.
An $(n,m,\epsilon)$ $f$-RAC with SR is an encoding map $E : \{0,1\}^n \times \Lambda \to \{0,1\}^m$ satisfying the following: for every $S \in S_{n,k}$ there is a decoding map $D_S : \{0,1\}^m \times \Lambda \to \{0,1\}$ such that $\Pr_{\lambda \in \Lambda}[D_S(E(x,\lambda),\lambda) = f(x_S)] \ge \frac{1+\epsilon}{2}$ for all $x \in \{0,1\}^n$.
We define the $(n,m,\epsilon)$ $f$-QRAC, the quantum random access code on $m$ qubits with bias $\epsilon$.
Definition 8.
An $(n,m,\epsilon)$ $f$-QRAC with PR is an encoding map that assigns an $m$-qubit density matrix $\rho_x$ to every $x \in \{0,1\}^n$ and satisfies the following: for every $S \in S_{n,k}$ there is a POVM $\{M_S^0, M_S^1\}$ such that $\mathrm{Tr}\big[M_S^{f(x_S)} \rho_x\big] \ge \frac{1+\epsilon}{2}$ for all $x \in \{0,1\}^n$.
Definition 9.
An $(n,m,\epsilon)$ $f$-QRAC with SR is an encoding map that assigns an $m$-qubit pure state $|\psi_x^\lambda\rangle$ to every $x \in \{0,1\}^n$ and $\lambda \in \Lambda$ and satisfies the following: for every $S \in S_{n,k}$ there is a set of POVMs $\{M_{S,\lambda}^0, M_{S,\lambda}^1\}$, with $\lambda \in \Lambda$, such that $\mathbb{E}_{\lambda \in \Lambda}\big[\langle \psi_x^\lambda | M_{S,\lambda}^{f(x_S)} | \psi_x^\lambda \rangle\big] \ge \frac{1+\epsilon}{2}$ for all $x \in \{0,1\}^n$.
Similarly, we define the $(n,m,\epsilon)$ $f$-EARAC, the entanglement-assisted random access code on $m$ bits with bias $\epsilon$.
Definition 10.
An $(n,m,\epsilon)$ $f$-EARAC is an $(n,m,\epsilon)$ $f$-RAC with SR where the encoding and decoding parties share an unlimited amount of entangled quantum states.
Since shared entanglement is a source of SR, we include SR in EARACs from the outset. We note that [48] focused on EARACs without SR.
Finally, we define the $(n,m,\epsilon)$ $f$-PRRAC, the Popescu-Rohrlich random access code on $m$ bits with bias $\epsilon$. A Popescu-Rohrlich box [50] is a bipartite system shared by two parties with two inputs $x, y \in \{0,1\}$ and two outputs $a, b \in \{0,1\}$ and is defined by the joint probability distribution
$$\Pr[a, b \mid x, y] = \begin{cases} 1/2 & \text{if } a \oplus b = xy, \\ 0 & \text{otherwise.} \end{cases}$$
Definition 11.
An $(n,m,\epsilon)$ $f$-PRRAC is an $(n,m,\epsilon)$ $f$-RAC with SR where the encoding and decoding parties share an unlimited amount of Popescu-Rohrlich boxes.
In Section 3 we devise encoding-decoding strategies for all the random access codes just defined, thus deriving lower bounds on their biases given the encoding/decoding parameters. These random access codes are built on previous ideas from [5, 4, 48]. The Boolean function $f$ that needs to be evaluated directly influences the final bias, and in our results this influence is captured by a single quantity called noise stability [9, 45]. Informally, it is a measure of how resilient to noise a Boolean function is. Given a uniformly random input $x$, one might imagine a process that flips each bit of $x$ independently with some probability $\frac{1-\rho}{2}$, where $\rho \in [0,1]$, which leads to some final string $y$. The noise stability of $f$ with parameter $\rho$ is the correlation between $f(x)$ and $f(y)$ (see Section 2 for a formal definition).
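This flipping process is easy to evaluate exactly for small functions. The sketch below (illustrative only; the formal definitions are in Section 2) computes the noise stability of Parity on three bits directly from the definition as a correlation over noisy pairs, confirming that it equals $\rho^3$:

```python
import itertools

def noise_stability(f, n, rho):
    """Exact Stab_rho[f] = E[f(x) f(y)] over a rho-correlated pair (x, y):
    x is uniform on {-1,1}^n and each y_i independently equals x_i with
    probability (1 + rho)/2."""
    total = 0.0
    for x in itertools.product((-1, 1), repeat=n):
        for y in itertools.product((-1, 1), repeat=n):
            w = 1.0
            for xi, yi in zip(x, y):
                w *= (1 + rho) / 2 if xi == yi else (1 - rho) / 2
            total += w * f(x) * f(y)
    return total / 2 ** n

parity3 = lambda x: x[0] * x[1] * x[2]
# Parity on k bits has noise stability rho**k
print(round(noise_stability(parity3, 3, 0.6), 6))  # 0.216
```

The double sum over $x$ and $y$ is exponential in $n$, so this is a sanity check rather than an algorithm; [52] gives an efficient randomized approximation for monotone functions.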
Our positive results can be summarized by the following theorem.
Theorem 12.
Let $f$ be a Boolean function and $\mathrm{Stab}_\rho[f]$ its noise stability with parameter $\rho$.
(a) Let . If and , there is an $f$-RAC with PR and bias with .
(b) If , there is an $f$-RAC with SR and with .
(c) If , there is an $f$-QRAC with SR and with .
(d) If , there is an $f$-EARAC with and .
(e) For any , there is an $(n,1,1)$ $f$-PRRAC.
Results (a), (b), (c) and (d) use an encoding scheme reminiscent of the concatenation idea from [49, 48, 58] (suggested to us by Ronald de Wolf). The underlying idea is to randomly break the initial string $x$ into different ‘blocks’ and encode them via a standard RAC/QRAC/EARAC. Result (a) breaks $x$ into blocks and employs the RAC from Theorem 2 on every block, while in results (b)/(c)/(d) we employ the RAC/QRAC/EARAC from Theorems 1/3/5 in order to encode the blocks into a single (qu)bit each, resulting in $m$ encoded (qu)bits. With high probability all the bits from the needed substring $x_S$ will be encoded into different blocks and can therefore be decoded and evaluated. The decoded string can be viewed as a ‘noisy’ $x_S$, to which the noise stability framework can be applied. The bias of the base RAC/QRAC/EARAC thus becomes the parameter $\rho$ in the noise stability of the corresponding random access code. As a quick remark, since we opted to lower-bound the parameters in Theorem 12, the parameter $\rho$ in result (b) does not exactly equal the bias from Theorem 1, although the exact expression could be written down.
Result (a) is our strongest bound, since it also applies to all the other random access codes. Moreover, there is some freedom in setting the number of blocks, since the number of encoded bits in Theorem 2 is not fixed to a single number (as opposed to Theorems 1, 3 and 5). The result is a trade-off between the number of bits $k$ of the Boolean function and the number of encoded bits $m$. However, the number of encoded bits in result (a) is limited from below, a characteristic inherited from the RAC in Theorem 2. It is possible to go below this limit by using SR, as demonstrated by results (b), (c) and (d).
The above results show that quantum resources offer a modest advantage over the classical random access code. On the other hand, result (e) demonstrates that stronger-than-quantum resources like Popescu-Rohrlich boxes lead to extremely powerful random access codes. This is a consequence of violating Information Causality [49], since transferring one bit allows access to any bit in a database via Popescu-Rohrlich boxes. From $x$ one can construct a long bitstring recording the values $f(x_S)$ for all $S \in S_{n,k}$. All bits from this bitstring are readable with the aid of Popescu-Rohrlich boxes, with non-signaling constraining the readout to just one bit. The protocol for $f$-PRRACs is taken from [49]: it uses a pyramid of Popescu-Rohrlich boxes and nests van Dam’s protocol [16].
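The power of Popescu-Rohrlich boxes is already visible in the smallest case: one ideal PR box plus a single bit of communication yields a perfect $2 \to 1$ RAC, the building block of van Dam's protocol. A minimal sketch (the function and variable names are ours, not the paper's):

```python
import itertools
import random

def pr_box(x, y):
    """Ideal PR box: outputs are individually uniform but satisfy a XOR b = x AND y."""
    a = random.randint(0, 1)
    b = a ^ (x & y)
    return a, b

def rac_2to1_with_pr_box(x0, x1, i):
    """Perfect 2->1 RAC: Alice inputs x0 XOR x1 into the box, Bob inputs the
    queried index i; Alice then sends the single bit m = x0 XOR a, and Bob
    outputs m XOR b = x0 XOR (x0 XOR x1)*i = x_i."""
    a, b = pr_box(x0 ^ x1, i)
    m = x0 ^ a  # the one bit of communication
    return m ^ b

# succeeds deterministically, despite the box's local outputs being random
assert all(rac_2to1_with_pr_box(x0, x1, i) == (x0, x1)[i]
           for x0, x1, i in itertools.product((0, 1), repeat=3))
```

Each party's view is locally uniform, so the box itself is non-signaling; it is only the one transmitted bit that unlocks either database entry, perfectly.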
In Section 4 we prove an upper bound on the bias of any $f$-QRAC with SR (and thus of any $f$-RAC) using the same method of the hypercontractive inequality for matrix-valued functions from [8].
Theorem 13.
Let $f$ be a Boolean function. For any $(n,m,\epsilon)$ $f$-QRAC with SR the following holds: for any $\eta > 2\ln 2$ there is a constant $C_\eta$ such that
(2)
where is the norm of the $\ell$-th level of the Fourier transform of $f$. One can see that the above result is a generalization of Theorem 4. Indeed, for Parity on $k$ bits, $\hat{f}(S) \neq 0$ iff $S = [k]$, and so Eq. (1) is recovered. The following corollary of Theorem 13 helps to compare the bias upper bound to the bias lower bounds from Theorem 12.
Corollary 14.
Let $f$ be a Boolean function. For any $(n,m,\epsilon)$ $f$-QRAC with SR the following holds: for any $\eta > 2\ln 2$ there is a constant $C_\eta$ such that
where $\deg f$ is the degree of $f$ and .
Taking $\deg f$ to be upper-bounded by a constant (for example, if ), our bias upper bound matches our bias lower bounds for the $f$-RAC/QRAC with SR up to a global multiplicative constant and a multiplicative constant in the parameter. We conjecture that this parameter can be improved, which might require a stronger version of the hypercontractive inequality. Other corollaries of Theorem 13 are derived in Section 4 and compared to our bias lower bounds.
Upper bound (2) does not apply to EARACs. Previously, it was known that for the special case of standard EARACs ($k = 1$), the bias is upper-bounded by $1/\sqrt{n}$ (Theorem 5). This upper bound can be generalised to $f$-EARACs assuming an independence condition (Section 3.4). The resulting bound is . We view this as evidence that the bias lower bound for the general case of $f$-EARACs given in Theorem 12 should actually be tight.
Regarding the noise stability itself, it can be nicely related to the Fourier coefficients of $f$ (see Theorem 17 in the next section). We briefly mention the noise stability of a few functions. For Parity on $k$ bits, $\mathrm{Stab}_\rho[\mathrm{Parity}_k] = \rho^k$ and, more generally, an analogous statement holds for any function whose Fourier weight is concentrated on a single degree. As for the Majority function, one can show that [46, Theorem 5.18] $\lim_{k \to \infty} \mathrm{Stab}_\rho[\mathrm{Maj}_k] = \frac{2}{\pi}\arcsin\rho$. Other examples can be found in [41]. Moreover, a randomized algorithm for approximating the noise stability of monotone Boolean functions up to a small relative error was proposed in [52].
2 Preliminaries
We shall briefly review some results from the analysis of Boolean functions that are going to be useful. For an introduction to the analysis of Boolean functions, see O’Donnell’s book [46] or de Wolf’s paper [65]. In the following, we write $[n] = \{1, \dots, n\}$, and $\mathfrak{S}_n$ is the set of all permutations of $[n]$. As before, let $S_{n,k}$ be the set of sequences of $k$ different elements from $[n]$ and let $x_S$ denote the substring of $x$ specified by $S \in S_{n,k}$.
The inner product on the vector space of all functions $f, g : \{-1,1\}^n \to \mathbb{R}$ is defined by
$$\langle f, g \rangle = \frac{1}{2^n} \sum_{x \in \{-1,1\}^n} f(x) g(x).$$
Every function $f : \{-1,1\}^n \to \mathbb{R}$ can be uniquely expressed as a multilinear polynomial, its Fourier expansion,
$$f(x) = \sum_{S \subseteq [n]} \hat{f}(S) \chi_S(x),$$
where, for $S \subseteq [n]$, the character $\chi_S$ is defined by $\chi_S(x) = \prod_{i \in S} x_i$.
The real number $\hat{f}(S)$ is called the Fourier coefficient of $f$ on $S$ and is given by
$$\hat{f}(S) = \langle f, \chi_S \rangle = \frac{1}{2^n} \sum_{x \in \{-1,1\}^n} f(x) \chi_S(x).$$
An important and useful concept for Boolean functions is noise stability. As previously mentioned, it is a measure of how resilient to noise a Boolean function is, and is defined from the concept of correlated pairs of random strings given below.
Definition 15 ([46, Definitions 2.40 and 2.41]).
Let $\rho \in [-1, 1]$. For fixed $x \in \{-1,1\}^n$ we write $y \sim N_\rho(x)$ to denote that the random string $y$ is drawn as follows: for each $i \in [n]$ independently, $y_i = x_i$ with probability $\frac{1+\rho}{2}$ and $y_i = -x_i$ with probability $\frac{1-\rho}{2}$.
We say that $y$ is $\rho$-correlated to $x$. If $x$ is drawn uniformly at random and then $y \sim N_\rho(x)$, we say that $(x, y)$ is a $\rho$-correlated pair of random strings.
Given these definitions, we can formally define the concept of noise stability, which measures the correlation between $f(x)$ and $f(y)$ when $(x, y)$ is a $\rho$-correlated pair.
Definition 16 ([46, Definition 2.42]).
For $f : \{-1,1\}^n \to \mathbb{R}$ and $\rho \in [-1,1]$, the noise stability of $f$ at $\rho$ is
$$\mathrm{Stab}_\rho[f] = \mathbb{E}_{(x,y)\ \rho\text{-correlated}}[f(x) f(y)].$$
The noise stability of $f$ is nicely related to $f$’s Fourier coefficients, as stated in the following theorem.
Theorem 17 ([46, Theorem 2.49]).
For $f : \{-1,1\}^n \to \mathbb{R}$ and $\rho \in [-1,1]$,
$$\mathrm{Stab}_\rho[f] = \sum_{k=0}^{n} \rho^k W_k[f],$$
where $W_k[f] = \sum_{S \subseteq [n] : |S| = k} \hat{f}(S)^2$ is the Fourier weight of $f$ at degree $k$.
The above result makes it clear that $\mathrm{Stab}_\rho[f]$ is an increasing function of $\rho$ for $\rho \in [0,1]$.
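Theorem 17 can be checked by brute force on small examples. The sketch below (our illustration, not code from the paper) computes the Fourier weights of Majority on three bits directly from the definition of the Fourier coefficients; it recovers $W_1 = 3/4$ and $W_3 = 1/4$, so that $\mathrm{Stab}_\rho[\mathrm{Maj}_3] = \frac{3}{4}\rho + \frac{1}{4}\rho^3$:

```python
import itertools
from math import prod

def fourier_weights(f, n):
    """W_k[f] = sum over |S| = k of f_hat(S)^2, where
    f_hat(S) = E_x[f(x) * prod_{i in S} x_i], computed by brute force."""
    xs = list(itertools.product((-1, 1), repeat=n))
    weights = [0.0] * (n + 1)
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            fhat = sum(f(x) * prod(x[i] for i in S) for x in xs) / 2 ** n
            weights[k] += fhat ** 2
    return weights

maj3 = lambda x: 1 if sum(x) > 0 else -1
W = fourier_weights(maj3, 3)
print([round(w, 4) for w in W])  # [0.0, 0.75, 0.0, 0.25]
```

Since the weights sum to $\mathbb{E}[f^2] = 1$ for Boolean $f$, the list also doubles as a quick check of Parseval's identity.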
Theorem 17 is obtained from one of the most important operators in the analysis of Boolean functions: the noise operator $T_\rho$.
Definition 18 ([46, Definition 2.46]).
For $\rho \in [-1,1]$, the noise operator with parameter $\rho$ is the linear operator $T_\rho$ on functions $f : \{-1,1\}^n \to \mathbb{R}$ defined by
$$T_\rho f(x) = \mathbb{E}_{y \sim N_\rho(x)}[f(y)].$$
Proposition 19 ([46, Proposition 2.47]).
For $f : \{-1,1\}^n \to \mathbb{R}$, the Fourier expansion of $T_\rho f$ is
$$T_\rho f = \sum_{S \subseteq [n]} \rho^{|S|} \hat{f}(S) \chi_S.$$
It is not hard to prove from the above results that $\mathrm{Stab}_\rho[f] = \langle f, T_\rho f \rangle$.
Some of the above concepts can be generalized to matrix-valued functions. The Fourier transform of a matrix-valued function $f : \{0,1\}^n \to \mathbb{C}^{d \times d}$ is defined similarly as for scalar functions: it is the function $\hat{f}$ defined by
$$\hat{f}(S) = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} f(x) (-1)^{\sum_{i \in S} x_i}.$$
Here the Fourier coefficients $\hat{f}(S)$ are also complex $d \times d$ matrices. Moreover, given $A \in \mathbb{C}^{d \times d}$ with singular values $\sigma_1, \dots, \sigma_d$, its trace norm is defined as $\|A\|_{\mathrm{tr}} = \sum_{i=1}^{d} \sigma_i$. We shall make use of the following result from Ben-Aroya, Regev and de Wolf [8], which stems from their hypercontractive inequality for matrix-valued functions.
Theorem 20 ([8, Lemma 6]).
For every and ,
Finally, $h : [0,1] \to [0,1]$ given by $h(x) = -x \log_2 x - (1-x) \log_2(1-x)$ is the binary entropy function. The following bounds hold.
Theorem 21 ([11, Theorem 2.2]).
, .
3 Bias Lower Bounds
3.1 RAC with PR
We start by studying the $f$-RAC with PR. The following result is based on Ambainis et al. [4] and uses a procedure reminiscent of the concatenation idea from [49, 48, 58]: the initial string is broken into blocks, which in turn are encoded using the code from [14]. First we state a slightly modified version of Newman’s theorem [44] (see also [37, Theorem 3.14] and [51, Theorem 3.5]) which is going to be useful to us.
Theorem 22 ([44]).
Let be an event depending on and such that
for all , with . Let . Then there is with size at most such that
holds for all .
Theorem 23.
Let , , and . Let be a Boolean function. For sufficiently large and , there is an RAC with PR and bias with
Proof.
Consider a code such that, for every , there is a codeword within Hamming distance , with (the extra term will be used to counterbalance the decrease in probability from Newman’s theorem). It is known [14] that there is such a code of size
Let denote the closest codeword to . Hence at least out of bits of are the same as , and the probability over a uniformly random that is at least .
Let such that divides . Our protocol involves breaking up into parts and encoding each part with the above code . Define the map
that applies to the first bits of , then to the next bits of , and so on. Hence the probability that over a uniformly random is at least . In order to consider this probability for every bit instead of just an average over all bits, we employ the following randomization process. Let and , both taken uniformly at random. Given , denote . We define the encoding , where denotes the bitwise product of and . Let be the event that all indices in are encoded in different codes , i.e., in different blocks from . There are blocks, each with elements. The probability that specific elements fall into different blocks is
(3) 
where inequality (a) can easily be proven by induction or the union bound.
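The probability above can be sanity-checked exhaustively for small parameters. In the sketch below (our own illustration; the closed-form product is a straightforward derivation and need not match the paper's intermediate expressions), a uniform permutation sends $k$ fixed indices to a uniformly random set of $k$ positions, and we compare the exact distinct-block probability against the product $\prod_{j=1}^{k-1} \frac{n - jn/b}{n - j}$:

```python
import itertools
from fractions import Fraction

def prob_distinct_blocks(n, b, k):
    """Exact probability, under a uniform permutation of n positions, that k
    fixed indices land in k different blocks (b blocks of size n // b)."""
    block = lambda pos: pos // (n // b)
    hits = total = 0
    for positions in itertools.combinations(range(n), k):
        total += 1
        hits += len({block(p) for p in positions}) == k
    return Fraction(hits, total)

def product_formula(n, b, k):
    """prod_{j=1}^{k-1} (n - j*n/b) / (n - j), placing the indices one by one."""
    p = Fraction(1)
    for j in range(1, k):
        p *= Fraction(n - j * (n // b), n - j)
    return p

# n = 6 positions, b = 3 blocks of size 2, k = 3 indices: both give 2/5
assert prob_distinct_blocks(6, 3, 3) == product_formula(6, 3, 3) == Fraction(2, 5)
```

Exact rational arithmetic avoids any floating-point ambiguity in the comparison, and the exhaustive count is feasible since only $\binom{n}{k}$ position sets need to be examined.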
We shall first present a protocol using shared randomness, and at the end we shall transform it into a protocol with private randomness by using Newman’s theorem. The protocol is the following. Select and uniformly at random. Encode as . To decode , first check whether all the indices of were encoded into different blocks. If not, the value for is drawn uniformly at random. If so, just consider and evaluate . Conditioned on the event happening, the probability that is at least independently for all , meaning that
where , and the inequality follows from monotonicity of the noise stability of . With these considerations, the success probability of the protocol is
where we used that .
We now transform the shared randomness into private randomness. By Newman’s theorem (Theorem 22) there is a small set of permutation-string pairs (note that Alice’s input is of size bits and Bob’s input is of size at most bits) with size
(we have used that ) such that continues to hold with probability at least if are chosen uniformly at random from this set. Hence the randomization can be encoded together with at the cost of a small overhead. The final protocol chooses uniformly at random, encodes as and then proceeds like the protocol with shared randomness. Fix . The result follows by using the first inequality from Theorem 21 to observe that
for sufficiently large , where we used again. ∎
Remark.
The parameter in Theorem 23 controls the number of encoding blocks in the protocol. By tweaking it, we can adjust the range of and , e.g. if , then and . If , then and .
In the protocol from Theorem 23 we broke the initial string into different blocks and used different copies of the code. This was done in order to guarantee the independence of the decoded bits and hence analyse the influence of the code on the function $f$. Interestingly enough, for the special case of the Parity function this is not required and a single copy of the code can be used.
Theorem 24.
Let $f$ be the Parity function and let . There is an $f$-RAC with PR and bias
(4) 
where $K$ denotes the Krawtchouk polynomial.
Proof.
Consider the encoding , where is the code described in Theorem 23. Let be the Hamming distance between and , with by the properties of . Then
where we used on the second equality and on the final inequality, which can be obtained via the recurrence relation and (see e.g. [15]).
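For concreteness, the Krawtchouk polynomial and its recurrence can be checked numerically. The sketch below assumes the standard binary convention $K_k(x; n) = \sum_j (-1)^j \binom{x}{j}\binom{n-x}{k-j}$, which satisfies the three-term recurrence $(k+1)K_{k+1}(x) = (n-2x)K_k(x) - (n-k+1)K_{k-1}(x)$; our normalization may differ from the one in [15]:

```python
from math import comb

def krawtchouk(k, x, n):
    """Binary Krawtchouk polynomial K_k(x; n) via its explicit sum.
    math.comb(a, b) returns 0 when b > a, so the sum truncates itself."""
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j) for j in range(k + 1))

# verify the three-term recurrence exactly over all integer points
n = 8
for x in range(n + 1):
    for k in range(1, n):
        lhs = (k + 1) * krawtchouk(k + 1, x, n)
        rhs = (n - 2 * x) * krawtchouk(k, x, n) - (n - k + 1) * krawtchouk(k - 1, x, n)
        assert lhs == rhs
```

Since all quantities are integers, the check is exact; the base cases $K_0(x) = 1$ and $K_1(x) = n - 2x$ follow directly from the sum.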
By Newman’s theorem (Theorem 22) there is a small set of permutation-string pairs with size such that continues to hold with bias at least for any and if are chosen uniformly at random from this set.
Our protocol is the following. Select uniformly at random. Encode as . To decode , just consider and evaluate . Now fix . The result follows by using the first inequality from Theorem 21 to observe that