Resource-Efficient Common Randomness and Secret-Key Schemes

07/25/2017 · by Badih Ghazi, et al. (MIT, IBM)

We study common randomness where two parties have access to i.i.d. samples from a known random source, and wish to generate a shared random key using limited (or no) communication with the largest possible probability of agreement. This problem is at the core of secret key generation in cryptography, with connections to communication under uncertainty and locality sensitive hashing. We take the approach of treating correlated sources as a critical resource, and ask whether common randomness can be generated resource-efficiently. We consider two notable sources in this setup arising from correlated bits and correlated Gaussians. We design the first explicit schemes that use only a polynomial number of samples (in the key length) so that the players can generate shared keys that agree with constant probability using optimal communication. The best previously known schemes were both non-constructive and used an exponential number of samples. In the amortized setting, we characterize the largest achievable ratio of key length to communication in terms of the external and internal information costs, two well-studied quantities in theoretical computer science. In the relaxed setting where the two parties merely wish to improve the correlation between the generated keys of length k, we show that there are no interactive protocols using o(k) bits of communication having agreement probability even as small as 2^-o(k). For the related communication problem where the players wish to compute a joint function f of their inputs using i.i.d. samples from a known source, we give a zero-communication protocol using 2^O(c) bits where c is the interactive randomized public-coin communication complexity of f. This matches the lower bound shown previously while the best previously known upper bound was doubly exponential in c.


1 Introduction

Common randomness plays a fundamental role in various problems of cryptography and information theory. We study this problem in the basic two-party communication setting in which Alice and Bob wish to agree on a (random) key by drawing i.i.d. samples from a known source such as correlated bits or correlated Gaussians. If we further require that an eavesdropper, upon seeing the communication only, gains no information about the shared key, then this defines a secret-key scheme. This information-theoretic approach to security was introduced in the seminal works of Maurer [Mau93] and Ahlswede and Csiszár [AC93]. Both common randomness and secret-key generation have been extensively studied in information theory [AC98, CN00, GK73, Wyn75, CN04, ZC11, Tya13, LCV15, LCV16]. Common randomness has applications to identification capacity [AD89] and to hardware-based procedures for extracting a unique random ID from process variations [LLG05, SHO08, YLH09] that can be used in authentication [LLG05, SD07].

Randomness is also a powerful tool in the algorithm designer's arsenal. Shared keys (a.k.a. public randomness) are used crucially in the design of efficient communication protocols, with immediate applications to diverse problems in streaming, sketching, data structures, and property testing. Common randomness is thus a natural model for studying how shared keys can be generated in settings where they are not available directly [MO04, MOR06, BM11, CMN14, CGMS14, GR16]. In this paper, we take the approach of treating correlated sources as a critical algorithmic resource, and ask whether common randomness can be generated efficiently. (Notably, the schemes that we design can also be easily transformed into secret-key schemes, as shown later.)

For ρ ∈ (0,1), we say that (X, Y) ∼ DSBS(ρ) (doubly symmetric binary source) if X and Y are both uniform over {-1, +1} and their correlation (and covariance) is E[XY] = ρ (i.e., a binary symmetric channel with uniform input). We say that (X, Y) ∼ BGS(ρ) (bivariate Gaussian source) if X, Y ∼ N(0,1), the standard normal distribution, and their correlation is again E[XY] = ρ.
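For concreteness, both sources are easy to sample; the following is a minimal sketch (the function names and the use of NumPy are ours, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_dsbs(rho, n):
        """n i.i.d. pairs from DSBS(rho): X uniform on {-1,+1}; Y equals X
        with probability (1+rho)/2 and -X otherwise, so that E[XY] = rho."""
        x = rng.choice([-1, 1], size=n)
        flip = rng.random(n) < (1 - rho) / 2
        y = np.where(flip, -x, x)
        return x, y

    def sample_bgs(rho, n):
        """n i.i.d. pairs from BGS(rho): standard normals with E[XY] = rho."""
        x = rng.standard_normal(n)
        z = rng.standard_normal(n)  # fresh independent noise
        y = rho * x + np.sqrt(1 - rho**2) * z
        return x, y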

Bogdanov and Mossel [BM11] gave a common randomness scheme for DSBS(ρ) with zero communication to generate k-bit keys that agree with probability 2^{-k(1-ρ)/(1+ρ)}, up to lower-order inverse poly(k) factors (which we suppress henceforth). Using the hypercontractive properties of the noise operator [Bon70, Bec75], they also proved the "converse" that the 2^{-k(1-ρ)/(1+ρ)} bound for the agreement (probability) is essentially the best possible. In followup work, Guruswami and Radhakrishnan [GR16] recently gave a one-way scheme that achieves an optimal tradeoff between communication and agreement. (They also use hypercontractivity to prove the converse, which extends to other sources including BGS(ρ).) Note that a simple scheme in which Alice just sends her input requires k bits of communication for constant agreement. In contrast, their scheme can guarantee the same agreement using only (1-ρ²)k bits of communication. This is a nontrivial amortized bound since, for ρ > 0, the ratio of entropy to communication (= 1/(1-ρ²)) is strictly bounded away from 1 as k → ∞. On the other hand, the above schemes are non-explicit (i.e., proved using the probabilistic method) and use a number of samples exponential in k. Bogdanov and Mossel [BM11] asked whether an explicit and efficient scheme can be designed, motivating the definition below.

We say that a common randomness scheme to generate k-bit keys (with i.i.d. samples from the source as input) is resource-efficient if it (i) is explicitly defined, (ii) uses poly(k) samples, (iii) has constant agreement probability, and (iv) achieves an amortized ratio of entropy to communication bounded away from 1. We give the first resource-efficient schemes for correlated bits and Gaussians, answering the question of [BM11].

Theorem 1.

There exist resource-efficient one-way common randomness schemes for DSBS(ρ) and BGS(ρ) using (1-ρ²)k bits of communication. For zero communication, there exist explicit schemes for DSBS(ρ) and BGS(ρ) using poly(k) samples with agreement probability 2^{-k(1-ρ)/(1+ρ)}, up to polynomial factors.

More generally, we obtain one-way schemes with an optimal tradeoff between communication and agreement, matching [GR16], while using only poly(k) samples. Below is the formal statement.

Theorem 2.

Let ρ ∈ (0,1) and γ ∈ (0, (1-ρ)/(1+ρ)] be arbitrary. Set C := 1-ρ². Then there exist explicit one-way common randomness schemes for DSBS(ρ) and BGS(ρ) using poly(k) samples such that:

  1. the entropy of the key is at least k (we follow [GR16], who actually consider the min-entropy of Alice's output, which is justifiable on technical grounds);

  2. the agreement probability is at least 2^{-γk}, up to polynomial factors; and

  3. the communication is (C(1-γ) - 2√(C(1-C)γ))·k bits.
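A quick numeric illustration of this tradeoff; the helper below is a sketch (ours), assuming the closed form c(γ) = (C(1-γ) - 2√(C(1-C)γ))·k with C = 1-ρ² as stated in Theorem 2:

    import numpy as np

    def one_way_communication(rho, gamma, k):
        """Communication of the one-way tradeoff for DSBS(rho), assuming
        c(gamma) = (C(1-gamma) - 2*sqrt(C(1-C)*gamma))*k with C = 1 - rho^2."""
        C = 1 - rho**2
        return (C * (1 - gamma) - 2 * np.sqrt(C * (1 - C) * gamma)) * k

    rho, k = 0.5, 1000
    print(one_way_communication(rho, 0.0, k))  # constant agreement: (1 - rho^2)k = 750 bits
    g0 = (1 - rho) / (1 + rho)                 # zero-communication exponent of [BM11]: 1/3
    print(one_way_communication(rho, g0, k))   # ~0 bits, recovering the [BM11] regime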

We point out that our schemes are resource-efficient but computationally inefficient. One representative challenge that arises here is in decoding dual-BCH codes, an explicit algebraic family of error-correcting codes, from a very large number of errors.

The above schemes follow a template that generalizes the approach taken by [BM11, GR16]. It relies on a carefully constructed codebook C of roughly 2^k points in ℝ^n, where n is the number of samples. Alice outputs the codeword in C with the largest projection onto her samples, while Bob does the same on a subcode of C determined by Alice's message. The analysis of the template reduces to the problem of obtaining good tail bounds on the joint distribution induced by these projections. For BGS(ρ), we use a codebook consisting of an explicitly defined large family of nearly-orthogonal vectors in ℝ^n due to Tao [Tao13], who showed their near-orthogonality property using the Weil bound for curves. The novel part of the analysis involves getting precise conditional probability tail bounds on trivariate Gaussians induced by the projections, whose covariance matrix has a special structure. Standard methods give only asymptotic bounds on such tails, which is inadequate in the low-communication regime, where the best possible agreement is exponentially small in k. Our analysis determines the exact constant in the exponent by carefully evaluating the underlying triple integrals.

The resource-efficient scheme for DSBS(ρ) is based on dual-BCH codes, which can be seen as an 𝔽₂-analogue of Tao's construction. The Weil bound for curves implies that dual-BCH codes are "unbiased", in the sense that any two distinct codewords are at distance n/2 ± O(√n), with n being the block length. (For more on unbiased codes, we refer the reader to the work of Kopparty and Saraf [KS13].) Analogous to the Gaussian case, the analysis involves getting precise bounds on the (conditional) tail probabilities of various correlated binomial sums. Since n = poly(k), we cannot handle these binomial sums using the (two-dimensional) Berry-Esseen theorem: the incurred additive error of Ω(1/√n) = 1/poly(k) would overwhelm the target agreement probability of 2^{-Θ(k)}. Moreover, crude concentration and anti-concentration bounds cannot be used since they do not determine the exact constant in the exponent. We directly handle these correlated binomial sums, which turns out to involve some tedious calculations related to the binary entropy function.
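The correlated binomial sums in question can be explored numerically; the Monte Carlo sketch below (ours, for illustration only) estimates a joint tail of the two sums for DSBS(ρ) samples. Note that sampling cannot resolve the 2^{-Θ(k)} tails the analysis needs, which is precisely why sharp analytic bounds are required:

    import numpy as np

    rng = np.random.default_rng(1)

    def joint_tail_mc(rho, n, t, trials=50_000):
        """Monte Carlo estimate of Pr[sum(X) >= t, sum(Y) >= t] where
        (X, Y) are n-fold DSBS(rho) samples (correlated binomial sums)."""
        x = rng.choice([-1, 1], size=(trials, n))
        flip = rng.random((trials, n)) < (1 - rho) / 2
        y = np.where(flip, -x, x)
        sx, sy = x.sum(axis=1), y.sum(axis=1)
        return np.mean((sx >= t) & (sy >= t))

    print(joint_tail_mc(rho=0.5, n=200, t=30))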

Interactive Common Randomness and Information Complexity.

Ahlswede and Csiszár [AC93, AC98] studied common randomness in their seminal work using an amortized communication model. They defined the common randomness rate as the maximum achievable ratio k/c such that, for every large enough number of samples n, Alice and Bob can agree on a key of k bits using c bits of communication, where the agreement probability tends to 1 (as n tends to infinity). This more stringent linear relationship between the quantities is not obeyed by our explicit schemes. For one-way communication, they characterized this ratio in terms of the strong data processing constant of the source, which is intimately related to its hypercontractive properties [AG76, AGKN13]. More recently, Liu, Cuff and Verdú [LCV15, LCV16, Liu16] extended this beyond one-way communication. In particular, [Liu16] derives the "rate region" for r-round amortized common randomness.

In this work, we show that r-round amortized common randomness can be alternatively characterized in terms of two well-studied notions in theoretical computer science: the internal and external information costs of communication protocols. Recall that the internal information cost [BJKS04, BBCR13] of a two-party randomized protocol is the total amount of information that each of the two players learns about the other player's input, whereas its external information cost [CSWY01] is the amount of information that an external observer learns about both inputs (see Section 5 for formal definitions). These measures have been extensively studied within the context of communication complexity. While being interesting measures in their own right, they have also been a central tool in tackling direct-sum problems, with numerous applications, e.g., in data streams and distributed computation.
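In the standard notation (a reminder of the usual definitions, not the paper's formal setup; Π denotes the protocol transcript and (X, Y) ∼ μ), these two costs are

    IC^{int}_μ(π) = I(Π; X | Y) + I(Π; Y | X),        IC^{ext}_μ(π) = I(Π; X, Y),

and IC^{ext}_μ(π) ≥ IC^{int}_μ(π) holds for every two-party protocol π, so the ratio appearing in Theorem 3 below is always at least 1.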

Theorem 3 (Informal Statement).

Given an arbitrary distribution μ, let M_r(μ) denote the supremum, over all r-round randomized protocols π, of the ratio of the external information cost to the internal information cost of π with respect to μ. Then, for r-round amortized common randomness, M_r(μ) equals the largest achievable ratio k/c such that, using μ as the source, for every large enough number of samples n, Alice and Bob can agree on a key of k bits with probability 1 - o(1) using r rounds and c bits of communication.

For the proof, we use a direct-sum approach, a classical staple of information complexity arguments. Our setup is slightly different from the known direct-sum results because we need to simultaneously lower bound the internal information cost of the n-fold protocol and upper bound its external information cost (which is non-standard). The essential ingredients are the same: embed the input in a judiciously chosen coordinate; the argument works on a round-by-round basis so as to keep the mutual information expressions intact. To prove the other direction, we use the rate region of [LCV16, Liu16] to get a lower bound on the achievable ratio.

Finally, we outline various settings where common randomness plays an important role.


Secret Key Generation:

While secret-key generation requires common randomness, in the amortized setting the two are known to imply each other [LCV16, Liu16]: the rate pair (k, c), in the notation of Theorem 3, is achievable for common randomness if and only if (k - c, c) is achievable for secret-key generation. In particular, using the strong data processing constant for DSBS(ρ), the rate ratio 1/(1-ρ²) is achievable for common randomness and the rate ratio ρ²/(1-ρ²) for secret-key generation, but using non-explicit schemes. Our resource-efficient but non-amortized schemes given in Theorem 1 can easily be transformed into secret-key schemes; see Remark 8.

General Sources:

Theorem 2 also implies an explicit scheme for an arbitrary source μ in terms of its maximal correlation ρ(μ) [Hir35, Geb41, Rén59]. For (X, Y) ∼ μ, recall that ρ(μ) = sup E[f(X) g(Y)], where the supremum is over all real-valued functions f and g with E[f(X)] = E[g(Y)] = 0 and Var[f(X)] = Var[g(Y)] = 1. This uses the idea (implicit in [Wit75]) that, given i.i.d. samples from any source of maximal correlation ρ, there is an explicit strategy, via the central limit theorem, that allows Alice and Bob to use these samples to generate standard ρ-correlated Gaussians. The resulting scheme, however, is not resource-efficient.
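A minimal sketch of this CLT-based reduction for DSBS(ρ), where the identity maps already witness the maximal correlation, so normalized block sums of the ±1 samples become approximately ρ-correlated standard Gaussians (the block size and names are ours):

    import numpy as np

    rng = np.random.default_rng(2)

    def dsbs_to_gaussians(rho, k, block=1000):
        """Convert k*block DSBS(rho) samples into k approximately
        rho-correlated standard Gaussian pairs via the CLT."""
        x = rng.choice([-1, 1], size=(k, block))
        flip = rng.random((k, block)) < (1 - rho) / 2
        y = np.where(flip, -x, x)
        g_a = x.sum(axis=1) / np.sqrt(block)  # Alice's side
        g_b = y.sum(axis=1) / np.sqrt(block)  # Bob's side
        return g_a, g_b

    ga, gb = dsbs_to_gaussians(0.6, k=5000)
    print(np.corrcoef(ga, gb)[0, 1])  # close to 0.6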

Correlated Randomness Generation:

In this relaxation, proposed by [CGMS14], Alice and Bob are given access to DSBS(ρ) and wish to generate k bits that are jointly distributed i.i.d. according to DSBS(ρ') for some ρ' > ρ. Note that ρ' = 1 corresponds to the common randomness setup studied above. We partially answer a question of [CGMS14] by showing that even a modest improvement in the correlation requires substantial communication. Let ρ < ρ' < 1 be fixed. We show that for Alice and Bob to produce k samples distributed according to DSBS(ρ') using DSBS(ρ) as the source, Ω(k) bits of communication are required (even for interactive protocols, and even when the success probability is as small as 2^{-o(k)}). See Appendix C for a detailed description.

Communication with Imperfect Shared Randomness:

In this framework [BGI14, CGMS14] (see also [GKS16]), Alice and Bob wish to compute a joint function of their inputs and have access to i.i.d. samples from a known source. For example, with DSBS(ρ), this setup interpolates between the well-studied public randomness (ρ = 1) and private randomness (ρ = 0) models. Communication complexity lower bounds for imperfect shared randomness give one approach to ruling out low-communication common randomness schemes. In particular, [BGI14] exhibit a (partial) function whose zero-communication complexity using DSBS(ρ), for all ρ < 1, is exponentially larger than the one using public randomness. We prove that this separation is tight. In fact, we show the stronger result that every function having an interactive protocol with c bits of communication using public randomness has a zero-communication protocol with 2^{O(c)} bits using DSBS(ρ), for every ρ > 0. This answers a question of Sudan [Sud14]. See Appendix D for a detailed description.

Locality Sensitive Hashing (LSH):

A surprising "universality" feature of our zero-communication schemes (as well as the previous ones) for DSBS(ρ) and BGS(ρ) is that their definition is oblivious to ρ; only the analysis, for each fixed ρ, shows that they have near-optimal agreement. This closely resembles schemes used in LSH. Indeed, we show that our common randomness scheme leads to an improvement in the "ρ-parameter" [IM98] that governs one aspect of the performance of an LSH scheme. While this is mathematically interesting, we caution the reader that it does not lead to better nearest-neighbor data structures, since the improvement is only qualitative and our scheme is computationally inefficient. See Appendix E.

Organization.

Section 2 describes the template used for the one-way schemes and sets up the structure of the analysis. Sections 3 and 4 describe the schemes for BGS(ρ) and DSBS(ρ), respectively, together with their analysis. In Section 5, we show the connection between amortized common randomness and information complexity. In Section 6, we conclude with some intriguing open questions.

1.1 Preliminaries

Notation.

For a tuple x = (x_1, x_2, …, x_n), let x_{≤i} := (x_1, …, x_i) when i ≥ 1, and the empty tuple otherwise; we may drop the subscript when i = n. For a distribution μ, let μ^{⊗n} be the distribution obtained by taking n i.i.d. samples from μ. Abusing notation, we also write (X, Y) ∼ μ^{⊗n} for the pair of sample sequences X = (X_1, …, X_n) and Y = (Y_1, …, Y_n). Let ⟨·,·⟩ denote the standard inner product and ‖·‖ the Euclidean norm over ℝ^n. For any positive integer m, let [m] := {1, 2, …, m}. Let f ≲ g denote f ≤ C·g for some positive global constant C.

Bivariate Gaussians. Let (X, Y) ∼ BGS(ρ). Let Φ̄(t) := Pr[X ≥ t] denote the Gaussian tail probability, and let Φ̄_ρ(t_1, t_2) := Pr[X ≥ t_1, Y ≥ t_2] denote the (asymmetric) orthant probability. In Appendix A, we prove the following, which also uses some seemingly new properties of Φ̄_ρ.

Proposition 4.

Let (X, Y) ∼ BGS(ρ). Set q(t) := Φ̄(t) and p(t) := Φ̄_ρ(t, t). Then, as t → ∞:

q(t) = (1 ± o(1)) · exp(-t²/2) / (√(2π)·t),        p(t) = (1 ± o(1)) · ((1+ρ)² / (2π·√(1-ρ²)·t²)) · exp(-t²/(1+ρ)).

Proposition 5 (Elliptical symmetry).

For u, v ∈ ℝ^n with unit norms and G ∼ N(0, I_n), the pair (⟨u, G⟩, ⟨v, G⟩) is distributed as BGS(⟨u, v⟩).
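The bivariate tail asymptotic in Proposition 4 can be sanity-checked numerically; the sketch below (ours, for illustration) compares a Monte Carlo estimate of the orthant probability against the leading-order term:

    import numpy as np

    rng = np.random.default_rng(3)

    def orthant_mc(rho, t, trials=2_000_000):
        """Monte Carlo estimate of Pr[X >= t, Y >= t] for (X, Y) ~ BGS(rho)."""
        x = rng.standard_normal(trials)
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(trials)
        return np.mean((x >= t) & (y >= t))

    def orthant_leading(rho, t):
        """Leading-order term (1+rho)^2 / (2*pi*sqrt(1-rho^2)*t^2) * exp(-t^2/(1+rho))."""
        return (1 + rho)**2 / (2 * np.pi * np.sqrt(1 - rho**2) * t**2) \
               * np.exp(-t**2 / (1 + rho))

    rho, t = 0.5, 3.0
    print(orthant_mc(rho, t), orthant_leading(rho, t))  # same order of magnitude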

2 Template One-Way Scheme and its Analysis

The one-way schemes (including zero communication as a special case) have the following template. Let μ denote the source of pairs (X, Y). Alice and Bob generate n i.i.d. samples from μ and use them to output k-bit keys. This is achieved by the players using a special codebook C of points in ℝ^n, where each codeword has a k-bit encoding. For communication c > 0, the players also agree on a coloring of C using 2^c - 1 colors such that each color class has size at most ⌈|C| / (2^c - 1)⌉. In addition, let ⊥ denote an auxiliary color; thus, each color can be specified using c bits. For the special case of zero communication, we assume wlog that all codewords have a single color and we set c = 0.

Let a and b be threshold parameters that govern the achievable min-entropy and agreement probability. Let enc_A and enc_B be mappings from C to k-bit strings such that enc_A(W) and enc_B(W) are each uniformly distributed over {0,1}^k when W is drawn uniformly from C.

1: procedure CR(X; Y)            ▷ Generate a k-bit common random key using source μ.
2:     Let (X, Y) ∼ μ^{⊗n}, with X = (X_1, …, X_n) and Y = (Y_1, …, Y_n). Alice gets X and Bob gets Y.
3:     if ∃ a unique codeword w ∈ C such that ⟨w, X⟩ ≥ a then Alice outputs enc_A(w) and sends the color of w.
4:     else Alice outputs ⊥ and sends ⊥.
5:     Bob receives the color χ.
6:     if ∃ a unique codeword w′ ∈ C such that w′ has color χ and ⟨w′, Y⟩ ≥ b then Bob outputs enc_B(w′).
7:     else Bob outputs ⊥.
Algorithm 1: One-way scheme for source μ

The pseudocode is given in Algorithm 1. For the analysis, fix a codeword w ∈ C and define the following quantities:

1. Univariate tail: q := Pr[⟨w, X⟩ ≥ a];        2. Bivariate tail: p := Pr[⟨w, X⟩ ≥ a, ⟨w, Y⟩ ≥ b].
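To make the template concrete, here is a toy end-to-end instantiation for BGS(ρ). It is entirely ours: a random codebook of unit vectors stands in for the explicit constructions of Sections 3 and 4, and argmax projections replace the uniqueness tests of Algorithm 1.

    import numpy as np

    rng = np.random.default_rng(4)

    def toy_one_way_cr(rho=0.9, n=200, k=8, c=3, trials=2000):
        """Toy version of the template: 2^k random unit codewords in R^n,
        a balanced 2^c-coloring, Alice sends her codeword's color, and both
        players output the codeword with the largest projection."""
        K = 2**k
        W = rng.standard_normal((K, n))
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # codebook of unit vectors
        colors = np.arange(K) % 2**c                   # balanced coloring (c bits)

        X = rng.standard_normal((trials, n))
        Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal((trials, n))

        wa = (X @ W.T).argmax(axis=1)                  # Alice's codeword
        msg = colors[wa]                               # her c-bit message

        proj_b = Y @ W.T                               # Bob's projections
        proj_b[colors[None, :] != msg[:, None]] = -np.inf  # restrict to the received color
        wb = proj_b.argmax(axis=1)

        return np.mean(wa == wb)                       # empirical agreement

    print(toy_one_way_cr())  # far above the 2^{-(k-c)} baseline of guessing within a color class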