Secret Key Generation from Vector Gaussian Sources with Public and Private Communications

04/09/2020, by Yinfei Xu et al., National University of Singapore

In this paper, we consider the problem of secret key generation with one-way communication through both a rate-limited public channel and a rate-limited secure channel, where the public channel is from Alice to Bob and Eve, and the secure channel is from Alice to Bob only. In this model, we do not impose any constraints on the sources; i.e., no degradedness or less-noisy ordering between Bob and Eve is assumed. We obtain the optimal secret key rate for this problem, both for discrete memoryless sources and for vector Gaussian sources. The vector Gaussian characterization is derived by suitably applying the enhancement argument and proving a new extremal inequality. This extremal inequality can be seen as a coupling of two extremal inequalities, related to the degraded compound MIMO Gaussian broadcast channel and to the vector generalization of Costa's entropy power inequality, respectively.


I Introduction

The problem of secret key generation was introduced by Ahlswede and Csiszár [2], and by Maurer [3], where two separate terminals, named Alice and Bob, separately observe the outcomes of a pair of correlated sources and want to generate a common secret key, concealed from an eavesdropper Eve, given that the terminals can communicate through a noiseless public channel to which the eavesdropper has complete access. In [2], the secret key capacity of correlated sources was characterized when Alice and Bob are allowed to communicate once over a channel with unlimited capacity. The secret key capacity was characterized by Csiszár and Narayan [4] when there is a rate constraint on the public channel.

Since then, the problem of secret key generation from correlated sources has attracted considerable attention, both in the rate-limited setting [5, 6, 7, 8, 9] and in the rate-unlimited setting [10, 11, 12, 13, 14, 15]. However, for many models of practical interest, the key capacity problem still remains unsolved. In [16], the secret key generation problem through a rate-limited noiseless public channel was extended to sources with continuous alphabets, and the fundamental limits for vector Gaussian sources, which are natural models of multiple-input multiple-output (MIMO) systems, were characterized. In [17], a water-filling solution was further derived for product vector Gaussian sources.

In this paper, we consider the problem of secret key generation with one-way communication, where in addition to the rate-limited public channel, which can be observed by both Bob and Eve, we add a secure channel that connects Alice to Bob only. One motivation for this problem comes from wireless sensor networks with fading channels, where the nodes want to share a secret key to encrypt their communication. In this scenario, the frequency selectivity of the fading channels creates both public and secure channels. More specifically, in some frequency bands, the links from Alice to both Bob and Eve are of good quality; these bands constitute the public channel. In some other frequency bands, the link from Alice to Bob is of good quality, but the link from Alice to Eve is essentially broken; these bands can be viewed as a secure channel. Another motivating example comes from [18], where Alice and Bob are nodes equipped with multiple antennas and wish to share a secret key with the help of multiple single-antenna relays employing the amplify-and-forward strategy. We assume that some relays are "nice but curious" [19] and can be viewed as Eve, while the other relays are simply nice. Therefore, the links through the curious relays are public, while the links through the nice relays are secure.

Our problem can be viewed as a special case of the problem of secret key generation from correlated sources over a broadcast channel, introduced in [20], where Alice, Bob, and Eve are connected by a one-way broadcast channel and separately observe the outcomes of correlated sources that are independent of the channel. This problem is very difficult in general, because it is hard to identify the optimal strategy for combining the two resources, the channel and the sources, to generate a secret key between Alice and Bob. Consequently, the secret key capacity of this problem remains unknown in its general form. Achievability schemes and converses have been proposed in [21, 22]; however, these achievable rates and converse bounds do not match in general.

For the vector Gaussian sources problem, one of the difficulties in establishing the fundamental tradeoff between the key capacity and the communication constraints is that vector Gaussian sources are not in general degraded. This difficulty frequently appears in several vector Gaussian multi-terminal problems [23, 24, 25, 26, 27]. In [16], Watanabe and Oohama circumvented this difficulty by suitably applying the enhancement argument. Further invoking the so-called Costa-type extremal inequality [26], they showed that one Gaussian auxiliary random variable suffices to characterize the rate region. However, with both private and public communication available in our setting, it will be seen that a single auxiliary random variable fails to characterize the tradeoff between the key capacity and the communication rates. As a consequence, applying the Costa-type extremal inequality alone is not sufficient when the public communication constraint is considered. The desired converse result is eventually obtained by a suitable integration of the classical enhancement argument. With the enhanced source model, the corresponding extremal inequality can be decoupled into two enhanced extremal inequalities, one related to the degraded compound MIMO Gaussian broadcast channel in [28, 29], and the other to the vector generalization of Costa's entropy power inequality in [26, 16].

The rest of this paper is organized as follows. The problem setup is given in Section II, where we first present the optimal achievable rate region for the case of discrete memoryless sources, and then the rate region characterization for the vector Gaussian sources considered in this paper. The achievability and converse proofs for discrete memoryless sources are given in Section III. Section IV is devoted to the proof of our new extremal inequality, where we show that Gaussian auxiliary random variables suffice to achieve the optimal rate region for vector Gaussian sources. In Section V, we conclude with a summary of our results and a remark on future research.

II Problem Statement and Main Result

II-A Discrete Memoryless Sources

Consider a network with three nodes: a transmitter Alice, a receiver Bob, and an eavesdropper Eve. We assume three discrete memoryless sources, represented by random variables defined on their respective alphabets. Alice and Bob observe the corresponding -length source sequences, and Eve observes her own -length source sequence. In order to generate a secret key that is shared by Alice and Bob and concealed from Eve, Alice can send two messages: one through a noiseless public channel, which can be observed by Eve, and one through a noiseless secure channel, to which Eve has no access.

A code consists of

  • a public encoding function that assigns a codeword to each -length source sequence and sends it to both Bob and Eve,

  • a private encoding function that assigns a codeword to each -length source sequence and sends it to Bob only,

  • a key generation function at Alice that assigns a key, via a (possibly random) mapping, to Alice's -length source sequence,

  • a key generation function at Bob that assigns a key, via a (possibly random) mapping, to Bob's -length source sequence and the received indices.

The secret keys are then generated by Alice and Bob from these two key generation functions, respectively; the keys should agree with high probability and be concealed from Eve. The probability of error for the key generation code is defined as

(1)

The key leakage rate at Eve is defined as

(2)
Definition 1.

A secret key rate with a constrained communication rate pair is achievable if there exists a sequence of codes such that

(3)
(4)
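As a sanity check on these operational definitions, the following toy Python sketch (entirely our illustration; the function names and the degenerate scheme are not from the paper) instantiates the four mappings of the code for the trivial strategy in which the secure channel carries fresh uniform key bits and the public channel is unused. This achieves zero error probability and zero leakage, with a key rate equal to the secure-channel rate, and extracts no key from the sources themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

n, rate_s = 100, 0.5          # block length and secure-channel rate (illustrative values)
key_len = int(n * rate_s)

# Alice's local randomness: the key bits she will also place on the secure channel.
alice_key_bits = rng.integers(0, 2, size=key_len)

def public_encoder(x_seq):
    """f_p: X^n -> M_p, observed by Bob and Eve (unused in this degenerate scheme)."""
    return None

def private_encoder(x_seq):
    """f_s: X^n -> M_s, observed by Bob only; here it simply carries the key bits."""
    return alice_key_bits

def key_gen_alice(x_seq):
    """g_A: a (randomized) mapping of Alice's sequence to her key."""
    return alice_key_bits

def key_gen_bob(y_seq, m_p, m_s):
    """g_B: Bob's key from Y^n and the received messages; here he reads M_s."""
    return m_s

x = rng.integers(0, 2, size=n)                  # Alice's source sequence (unused here)
y = np.where(rng.random(n) < 0.1, 1 - x, x)     # Bob's correlated observation (unused here)
k_a = key_gen_alice(x)
k_b = key_gen_bob(y, public_encoder(x), private_encoder(x))
assert np.array_equal(k_a, k_b)                 # Pr{K_A != K_B} = 0 for this trivial scheme
# Eve sees only M_p (= None) and her own sequence, both independent of the key,
# so the leakage rate in (2) is zero.
```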

For the discrete memoryless source setting, we have the following single-letter characterization of the largest achievable secret key rate under the public and private communication rate constraints.

Theorem 1.

For the discrete memoryless source secret key generation problem with public and private communication constraints, the rate tuple is achievable if and only if

(5)
(6)
(7)

where the random variables satisfy the following Markov chain

(8)
Proof.

See Section III. ∎

II-B Vector Gaussian Sources

Fig. 1: Secret key generation with rate-constrained public and private communications

Now we study the same communication-constrained secret key generation problem for the vector Gaussian sources setting (see Fig. 1). Let the sources be i.i.d. vector-valued discrete-time Gaussian sources, where across the time index each tuple is drawn from the same jointly Gaussian vector distribution. The encoder, the legitimate decoder, and the eavesdropper decoder observe their respective source sequences. The vector Gaussian source can be written as

(9)
(10)

where each term in (9) and (10) is a Gaussian random vector with mean zero and its corresponding covariance matrix. We point out that, by expressions (9) and (10), each noise term is independent of the source; however, no additional independence relationship is imposed between the two noise terms.
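To make the source model concrete, the following NumPy sketch (our illustration; the covariance values and the additive reading of (9)-(10) as Y = X + N1, Z = X + N2 are assumptions) samples the vector Gaussian sources and evaluates the Gaussian mutual informations I(X;Y) and I(X;Z) via the log-determinant formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative covariances (not taken from the paper).
sigma_x = np.array([[2.0, 0.5], [0.5, 1.0]])    # source covariance
sigma_1 = np.array([[0.4, 0.0], [0.0, 0.8]])    # covariance of Bob's noise N1
sigma_2 = np.array([[1.0, 0.3], [0.3, 1.5]])    # covariance of Eve's noise N2
cross_12 = np.array([[0.2, 0.0], [0.0, 0.2]])   # N1 and N2 need not be independent

n = 10_000
x = rng.multivariate_normal(np.zeros(2), sigma_x, size=n)
# Draw (N1, N2) jointly: each is independent of X, but they may be correlated.
sigma_noise = np.block([[sigma_1, cross_12], [cross_12.T, sigma_2]])
noise = rng.multivariate_normal(np.zeros(4), sigma_noise, size=n)
y = x + noise[:, :2]        # Bob's observation, cf. (9)
z = x + noise[:, 2:]        # Eve's observation, cf. (10)

def gaussian_mi(sigma_source, sigma_noise_term):
    """I(X; X+N) in nats, for N ~ N(0, sigma_noise_term) independent of X."""
    _, logdet_sum = np.linalg.slogdet(sigma_source + sigma_noise_term)
    _, logdet_noise = np.linalg.slogdet(sigma_noise_term)
    return 0.5 * (logdet_sum - logdet_noise)

print("I(X;Y) =", gaussian_mi(sigma_x, sigma_1), "nats")
print("I(X;Z) =", gaussian_mi(sigma_x, sigma_2), "nats")
```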

In [16], the authors showed that a single-layer code suffices and characterized the optimal trade-off for the vector Gaussian sources setting. Their converse method is motivated by the enhancement argument [23], [24] for the vector Gaussian wiretap channel in [25]. In this paper, we show that a similar enhancement argument can be applied to the two-layer superposition coding scheme to establish the converse for our optimal trade-off problem.

Building on Theorem 1 for discrete memoryless sources, a single-letter description of the optimal trade-off for the vector Gaussian secret key generation problem is given as follows.

Theorem 2.

For the vector Gaussian secret key generation problem with public and private communication constraints, the rate tuple is achievable if and only if

(11)
(12)
(13)

for some positive semi-definite matrices , .

Proof.

The achievability part of Theorem 2 is based on constructing Gaussian test channels that maintain the required Markov chain, and the details can be found in Appendix A. The converse part of Theorem 2 relies on a careful combination of the channel enhancement argument and extremal inequalities, and the details can be found in Section IV. ∎
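As a hint at the test-channel construction (our illustration; the auxiliary labels and the exact chain in (8) are not reproduced in the text above, so this is an assumed but standard two-layer choice), one can take jointly Gaussian auxiliaries
\[
U = X + W_1, \qquad V = X + W_1 + W_2, \qquad W_1 \sim \mathcal{N}(0, Q_1), \quad W_2 \sim \mathcal{N}(0, Q_2),
\]
where $W_1$ and $W_2$ are mutually independent, independent of the sources, and $Q_1, Q_2$ are positive semi-definite. Then $V$ is a noisy version of $U$ and $U$ is a noisy version of $X$, so the chain $V - U - X - (Y, Z)$ holds by construction, which is the type of Markov structure required by Theorem 1.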

Remark 1.

It should be pointed out that the jointly vector Gaussian sources given by (9) and (10) are not in the most general form. In the general case, we have

(14)
(15)

for some matrices , . The extension of Theorem 2 can be obtained by following the lines of [16, Sec. V], in a similar manner.

III Proof of Theorem 1

III-A Principles of the Achievability

The achievability scheme that we propose in this paper can be viewed as a combination of the scheme for secret key generation from correlated sources [4], built on a codebook with a superposition structure, and direct key distribution through the secure channel. Moreover, this scheme can be viewed as a special case of the separation-based achievability scheme in [22, Th. 2].

The rates of the public channel and the secure channel are used for the transmission of the following:

  1. the inner code .

  2. the outer code .

  3. key distribution.

Our proposed scheme follows the principles below:

  • The public channel is used to transmit the inner code . Since the rate of the inner code is less than the rate of the public channel, the leftover rate of the public channel is used to transmit the outer code .

  • The secure channel is used to transmit the outer code . Since there is still rate left over on the secure channel, the leftover rate of the secure channel can be used for key distribution.

  • The public channel cannot be used for key distribution, and the secure channel cannot be used to transmit the inner code .

We show that the proposed scheme achieves the rate region in Theorem 1 and is thus optimal; a sketch of the rate bookkeeping is given below, and for completeness the details of the proof can be found in Appendix B.
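The following Python sketch (purely illustrative; the rate values are placeholders for the single-letter quantities of Theorem 1, not values from the paper) records the bookkeeping implied by the three principles above.

```python
def allocate_rates(R_p, R_s, R_inner, R_outer, R_source_key):
    """Rate bookkeeping for the principles above (illustrative only).

    - The inner code must fit on the public channel.
    - The outer code uses the leftover public rate first, then the secure channel.
    - Whatever secure rate remains is spent on direct key distribution,
      which adds to the key rate extracted from the sources."""
    if R_inner > R_p:
        raise ValueError("inner code does not fit on the public channel")
    leftover_public = R_p - R_inner
    outer_on_secure = max(R_outer - leftover_public, 0.0)
    if outer_on_secure > R_s:
        raise ValueError("outer code does not fit on public + secure channels")
    leftover_secure = R_s - outer_on_secure
    return {
        "outer_on_public": min(R_outer, leftover_public),
        "outer_on_secure": outer_on_secure,
        "key_distribution": leftover_secure,
        "total_key_rate": R_source_key + leftover_secure,
    }

# Example with made-up numbers: R_p = 1.0, R_s = 0.7, inner 0.6, outer 0.5, source key 0.3.
print(allocate_rates(1.0, 0.7, 0.6, 0.5, 0.3))
```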

III-B The Converse

We begin the proof of the converse with

(16)
(17)
(18)
(19)
(20)
(21)
(22)
(23)
(24)
(25)
(26)

where

  1. follows from Fano’s inequality;

  2. follows from the secrecy constraint in (4);

  3. follows by the non-negativity of discrete entropy and mutual information.

By applying the key identity [30, Lemma 17.12], it follows that

(27)

where

(28)
(29)

Here, the time-sharing random variable is uniformly distributed and independent of the sources. Since the transmitted messages are functions of Alice's source sequence, the required Markov chain is satisfied. Because the sources are i.i.d., the time-indexed variables can be replaced by generic single-letter variables. Then (5) is proved.
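For reference, the identity invoked above is (assuming [30, Lemma 17.12] refers to the usual Csiszár sum identity) the following, which holds for arbitrary random variables $W$, $Y^n$, $Z^n$:
\[
I(W; Y^n) - I(W; Z^n) \;=\; \sum_{i=1}^{n}\Big[\, I\big(W; Y_i \,\big|\, Y^{i-1}, Z_{i+1}^{n}\big) - I\big(W; Z_i \,\big|\, Y^{i-1}, Z_{i+1}^{n}\big)\Big],
\]
which is equivalent to $\sum_{i=1}^{n} I\big(Z_{i+1}^{n}; Y_i \mid W, Y^{i-1}\big) = \sum_{i=1}^{n} I\big(Y^{i-1}; Z_i \mid W, Z_{i+1}^{n}\big)$ and naturally gives rise to single-letter auxiliary random variables of the form defined in (28) and (29).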

Next, we consider the sum rate

(30)
(31)
(32)
(33)
(34)
(35)
(36)
(37)
(38)

where

  1. follows from Fano’s inequality;

  2. follows from the key identity [30, Lemma 17.12];

  3. follows because are i.i.d.;

  4. follows from the Markov Chain ;

  5. follows from the Markov Chain ;

  6. follows from the Markov Chain .

We thereby have

(39)
(40)
(41)

where the auxiliary random variable is as defined in (29) and the time-sharing variable is uniformly distributed, as before.

Finally, for the public rate , we have

(42)
(43)
(44)
(45)

By a similar argument to the sum rate derivation, we obtain

(46)
(47)
(48)
(49)

where the auxiliary random variable is as defined in (28) and the time-sharing variable is uniformly distributed, which concludes the converse proof of Theorem 1.

IV The Converse of Theorem 2

IV-A The Extremal Inequality

As in [16], the achievable rate region for vector Gaussian sources is defined as

(50)

Due to the convexity of the rate region, to characterize the optimal trade-off for the vector Gaussian model we can alternatively consider the following weighted-sum problem,

(51)
(52)

for any admissible weights. To prove the converse part of Theorem 2, it is equivalent to show that the following inequality holds for any such weights,

(53)

where the quantity appearing in (53) is defined by the Gaussian optimization problem

(54)
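For orientation, this reduction is the standard supporting-hyperplane argument for a closed convex region: every boundary point of the region in (50), which we write as $\mathcal{R}$ here, is attained by a weighted-sum optimization. Under one illustrative weighting convention (the exact weights and signs used in (51)-(54) are not reproduced above, so this display is only schematic), the problem reads
\[
\min_{(R_K,\,R_p,\,R_s)\,\in\,\mathcal{R}} \; \big(\mu_p R_p + \mu_s R_s - R_K\big), \qquad \mu_p,\ \mu_s \ge 0,
\]
and showing that Gaussian auxiliaries attain the minimum for every admissible weight pair implies the converse for the entire region.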

Let us fix one (possibly non-unique) minimizer of this optimization problem. The necessary Karush-Kuhn-Tucker (KKT) conditions are given in the following lemma.

Lemma 1.

The minimizer needs to satisfy

(55)
(56)

for some positive semi-definite matrices such that

(57)
(58)
Proof.

See Appendix C. ∎
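To make the structure of such conditions concrete, here is a generic template (a schematic only; the actual objective and constraint set of the Gaussian optimization problem in (54) are not reproduced above, so the constraint $0 \preceq Q \preceq S$ below is our assumption). For a differentiable objective $f$ minimized over matrices $Q$ with $0 \preceq Q \preceq S$, the KKT conditions take the form
\[
\nabla f(Q^{\star}) = M_1 - M_2, \qquad M_1 Q^{\star} = 0, \qquad M_2\,(S - Q^{\star}) = 0, \qquad M_1 \succeq 0,\ M_2 \succeq 0,
\]
where $M_1$ and $M_2$ are the positive semi-definite Lagrange multipliers associated with the two matrix constraints; the positive semi-definite matrices appearing in (57)-(58) presumably play this multiplier role.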

Now, starting from the single-letter expressions in Theorem 1, the weighted-sum objective for any rate tuple in the achievable region can be lower bounded as follows:

(59)
(60)
(61)
(62)
(63)
(64)

where

  1. follows from the Markov chain,

  2. follows from the Markov chain.

By comparing (64) with the Gaussian optimization problem in Section IV-A, it can be seen that, to prove Theorem 2, it is sufficient to prove the following extremal inequality.

Theorem 3.

Let the two positive semi-definite matrices satisfying the KKT conditions (55)-(58) in Lemma 1 be associated with a minimizer of the optimization problem. Then, for some real numbers, we have

(65)

for any such that forms a Markov chain.

The proof of (65) depends on the enhancement argument introduced in [23], [24], and can be divided into two steps. In the first step, we enhance the source such that the required Markov chain holds. In the second step, we decouple the extremal inequality (65) into two new ones, associated with the enhanced sources, respectively. The proof of the extremal inequality (65) is then completed by invoking the two enhanced extremal inequalities, which are related to the degraded compound MIMO Gaussian broadcast channel in [28, 29] and the vector generalization of Costa's entropy power inequality in [26, 16], respectively.

IV-B Some Lemmas

In order to reduce the non-degraded sources to the degraded case, we introduce a new covariance matrix such that

(66)

This new covariance matrix has useful properties, listed in the following lemma.

Lemma 2.

has the following properties:

Proof.

The proof of Lemma 2 is given in Appendix D. ∎

Moreover, to decouple our extremal inequality (65), we need the vector generalization of Costa's entropy power inequality [26] and the generalized extremal inequality related to the degraded compound MIMO Gaussian broadcast channel in [28, 29] as two auxiliary lemmas.

Lemma 3 ( [26, Corollary 2]).

Let , and be Gaussian random vectors with positive definite covariance matrices , and , respectively. Furthermore, , satisfy . If there exists a positive semi-definite covariance such that

(67)

where , then

(68)

for any independent of .
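For context (the full statement of Lemma 3 relies on notation not reproduced above), recall Costa's original entropy power inequality, of which Lemma 3 is the vector (matrix-covariance) generalization from [26]: for any random vector $X$ in $\mathbb{R}^n$ with finite differential entropy, independent of a standard Gaussian vector $Z$, the entropy power
\[
N\big(X + \sqrt{t}\,Z\big) \;=\; \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X + \sqrt{t}\,Z)}
\]
is a concave function of $t \ge 0$.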

Lemma 4 ( [28, Corollary 4]).

Let and be real Gaussian random vectors with positive definite covariance matrices and , respectively. We assume that there exists a covariance matrix such that

(69)

Furthermore, let be a positive semi-definite covariance matrix such that

(70)

where , , and

(71)

Then, for any distribution independent of and , such that , we have

(72)

IV-C Proof of Theorem 3

We now proceed to prove the extremal inequality (65). To begin, we rewrite the l.h.s. of (65) by introducing the enhanced source.

(73a)
(73b)