I Introduction
The problem of secret key generation was introduced by Ahlswede and Csiszár [2], and by Maurer [3]. In this setting, two separate terminals, named Alice and Bob, observe the outcomes of a pair of correlated sources separately and want to generate a common secret key, concealed from an eavesdropper Eve, given that the terminals can communicate through a noiseless public channel to which the eavesdropper has complete access. In [2], the secret key capacity of correlated sources was characterized when Alice and Bob are allowed to communicate once over a channel with unlimited capacity. The secret key capacity under a rate constraint on the public channel was found by Csiszár and Narayan [4].
Since then, the problem of secret key generation from correlated sources has attracted considerable attention, both in the rate-limited setting [5, 6, 7, 8, 9] and in the rate-unlimited setting [10, 11, 12, 13, 14, 15]. However, for many models of interest in practice, the key capacity problem remains unsolved. In [16], the secret key generation problem over a rate-limited noiseless public channel was extended to sources with continuous alphabets, and the fundamental limits for vector Gaussian sources, which are natural models of multiple-input multiple-output (MIMO) systems, were characterized. In [17], a water-filling solution was further derived for product vector Gaussian sources.
In this paper, we consider the problem of secret key generation with one-way communication, where in addition to the rate-limited public channel, which can be observed by both Bob and Eve, we add a secure channel, which connects Alice and Bob only. One motivation for this problem comes from wireless sensor networks with fading channels, where the nodes want to share a secret key to encrypt their communication. In this scenario, the frequency selectivity of the fading channels creates both public and secure channels. More specifically, in some frequency bands, the links from Alice to both Bob and Eve are of good quality; these bands constitute the public channel. In other frequency bands, the link from Alice to Bob is of good quality, but the link from Alice to Eve is essentially broken; these bands can be viewed as a secure channel. Another motivating example comes from [18], where Alice and Bob are nodes equipped with multiple antennas and they want to communicate to share a secret key with the help of multiple single-antenna relays employing the amplify-and-forward strategy. We assume that some relays are “nice but curious” [19], and can be viewed as Eve, while the other relays are simply nice. Therefore, the links through the curious relays are public, while the links through the nice relays are secure.
Our problem can be viewed as a special case of the problem of secret key generation from correlated sources with a broadcast channel, introduced in [20], where Alice, Bob and Eve are connected by a one-way broadcast channel and separately observe the outcomes of correlated sources that are independent of the channel. This problem is in general very difficult, because it is hard to identify the optimal strategies for combining the two resources, the channel and the sources, to generate a secret key between Alice and Bob. Therefore, the secret key capacity of this problem remains unknown in its general form. Achievability schemes and converses have been proposed in [21, 22]; however, these achievability and converse bounds do not match in general.
For the vector Gaussian sources problem, one of the difficulties in showing the fundamental tradeoff between the key capacity and the communication constraints is that vector Gaussian sources are not in general degraded. This difficulty frequently appears in several vector Gaussian multiterminal problems [23, 24, 25, 26, 27]. In [16], Watanabe and Oohama circumvented this difficulty by suitably applying the enhancement argument. Further invoking the so-called Costa-type extremal inequality [26], they showed that one Gaussian auxiliary random variable suffices to characterize the rate region. However, with both private and public communication available in our setting, it will be seen that a single auxiliary random variable fails to characterize the tradeoff between the key capacity and the communication rates. As a consequence, applying the Costa-type extremal inequality alone is not sufficient when the public communication constraint is considered. The desired converse can eventually be obtained by a suitable integration of the classical enhancement argument. With the enhanced source model, the corresponding extremal inequality is decoupled into two enhanced extremal inequalities, one related to the degraded compound MIMO Gaussian broadcast channel in [28, 29], and the other to the vector generalization of Costa’s entropy power inequality in [26, 16].

The rest of this paper is organized as follows. The problem setup is given in Section II, where we first derive the optimal achievable rate region for the case of discrete memoryless sources, and then present the rate region characterization for the vector Gaussian sources considered in this paper. The achievability and converse proofs for discrete memoryless sources are given in Section III. Section IV is devoted to proving our new extremal inequality, where we show that Gaussian auxiliary random variables suffice to achieve the optimal rate region for vector Gaussian sources. In Section V, we conclude with a summary of our results and a remark on future research.
II Problem Statement and Main Result
II-A Discrete Memoryless Sources
Consider a network with three nodes: a transmitter Alice, a receiver Bob and an eavesdropper Eve. We assume three discrete memoryless sources indicated by random variables , defined on the alphabets , respectively. Alice and Bob observe the length source sequences and , respectively, and Eve observes the length source sequence . In order to generate a secret key , shared by Alice and Bob and concealed from Eve, Alice can send two messages and , where is sent through a noiseless public channel, which can be observed by Eve, and is sent through a noiseless secure channel, to which Eve has no access.
A code consists of

a public encoding function that assigns a codeword to each length source sequence and sends it to both Bob and Eve,

a private encoding function that assigns a codeword to each length source sequence and sends it to Bob only,

a key generation function that assigns a random mapping given Alice’s length source sequence ,

a key generation function that assigns a random mapping given Bob’s length source sequence and all received indices and .
Then the secret key is generated by Alice and Bob from the functions and , respectively, which should agree with probability and be concealed from Eve. The probability of error for the key generation code is defined as (1)
The key leakage rate at Eve is defined as
(2) 
Definition 1.
A secret key rate with constrained communication rate pair is achievable if there exists a sequence of codes such that
(3)  
(4) 
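The error criterion in (1) and the achievability requirement in Definition 1 can be made concrete with a small Monte Carlo sketch. The source and scheme below are purely hypothetical toys of our own (a binary source pair with crossover probability `p`, and a trivial one-bit “key” that uses no communication at all); they only illustrate how the agreement probability Pr(K_A ≠ K_B) is estimated, and are not the codes considered in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, p = 100_000, 0.1  # number of trials and crossover probability (toy values)

# Toy correlated binary source: Bob's observation Y is Alice's bit X
# passed through a binary symmetric channel with crossover probability p.
x = rng.integers(0, 2, n_trials)
flip = (rng.random(n_trials) < p).astype(int)
y = x ^ flip

# Naive "scheme": each terminal uses its own observation as a one-bit key,
# with no public or secure communication.  K_A = X, K_B = Y.
p_err = np.mean(x != y)  # empirical Pr(K_A != K_B)
print(p_err)  # close to p = 0.1
```

With communication, a real scheme would drive this disagreement probability to zero while keeping the key hidden from Eve; the toy above only fixes the object being measured.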
For the discrete memoryless source setting, we have the following single-letter expression for the largest achievable secret key rate with public and private communication constraints and .
Theorem 1.
For the discrete memoryless source secret key generation problem with public and private communication constraints, the rate tuple is achievable if and only if
(5)  
(6)  
(7) 
where random variables
satisfy the following Markov chain
(8) 
Proof.
See Section III. ∎
II-B Vector Gaussian Sources
Now we study the same communication-constrained secret key generation problem for the vector Gaussian sources setting (see Fig. 1). Let be i.i.d. vector-valued discrete-time Gaussian sources, where across the time index , each tuple is drawn from the same jointly vector Gaussian distribution. The encoder, the legitimate decoder and the eavesdropper decoder observe , and , respectively. The vector Gaussian source can be written as (9)
(10) 
where each is a dimensional Gaussian random vector with mean zero and covariance , each is a dimensional Gaussian random vector with mean zero and covariance , and is a dimensional Gaussian random vector with mean zero and covariance . We point out that and are independent by expressions (9) and (10). However, no additional independence relationship is imposed between and .
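As a sanity check of the source model, the following sketch draws samples from a jointly Gaussian triple under an assumed additive-noise instantiation Y = X + N1, Z = X + N2 with hypothetical covariances chosen by us. The paper's exact expressions (9)-(10) are not reproduced here, and taking N1 independent of N2 is an extra simplifying assumption made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 200_000  # source dimension and number of i.i.d. samples (illustrative)

# Hypothetical covariances for the assumed model Y = X + N1, Z = X + N2,
# with N1, N2 independent of X (and, here only, of each other).
Sigma_X = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_N1 = 0.5 * np.eye(d)
Sigma_N2 = 1.5 * np.eye(d)

X = rng.multivariate_normal(np.zeros(d), Sigma_X, n)
Y = X + rng.multivariate_normal(np.zeros(d), Sigma_N1, n)
Z = X + rng.multivariate_normal(np.zeros(d), Sigma_N2, n)

# Sample covariances should match Sigma_X + Sigma_N1 and Sigma_X + Sigma_N2.
print(np.cov(Y.T))
print(np.cov(Z.T))
```

The point of the check is only that the additive structure makes the marginal covariances of Y and Z sums of the source and noise covariances, which is the structure the enhancement arguments later manipulate.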
In [16], the authors showed that a single-layer code suffices, and characterized the optimal tradeoff between and for the vector Gaussian sources setting. Their converse method is motivated by the enhancement argument [23], [24] for the vector Gaussian wiretap channel in [25]. In this paper, we show that a similar enhancement argument applied to two-layer superposition codes can establish the converse for the optimal tradeoff problem.
According to Theorem 1 for discrete memoryless sources, a single-letter description of the optimal tradeoff for the vector Gaussian secret key generation problem can be given as follows.
Theorem 2.
For the vector Gaussian secret key generation problem with public and private communication constraints, the rate tuple is achievable if and only if
(11)  
(12)  
(13) 
for some positive semidefinite matrices , .
Proof.
The achievability part of Theorem 2 is based on constructing Gaussian test channels that maintain the Markov chain , and the details can be found in Appendix A. The converse part of Theorem 2 relies on a careful combination of the channel enhancement argument and extremal inequalities, and the details can be found in Section IV. ∎
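The role of Gaussian test channels in the achievability can be illustrated in the scalar case: build V = X + W1 and U = V + W2 with noises independent of X (the variances below are hypothetical), and verify the jointly Gaussian Markov condition Cov(U, X) = Cov(U, V) Var(V)^{-1} Cov(V, X) in closed form. This is only a minimal sketch of why additive test channels produce a chain of the required form, not the construction of Appendix A.

```python
# Hypothetical scalar test channels: V = X + W1, U = V + W2, with W1, W2
# zero-mean and independent of X.  By construction U -- V -- X is Markov.
var_X, var_W1, var_W2 = 2.0, 0.5, 1.0
var_V = var_X + var_W1
var_U = var_V + var_W2

# For jointly Gaussian variables, U -- V -- X holds iff
# Cov(U, X) = Cov(U, V) * Var(V)^{-1} * Cov(V, X).
cov_UX = var_X          # E[U X] = E[X^2], since W1, W2 are independent of X
cov_UV = var_V          # E[U V] = E[V^2], since W2 is independent of V
cov_VX = var_X          # E[V X] = E[X^2]
lhs = cov_UX
rhs = cov_UV / var_V * cov_VX
print(lhs, rhs)  # equal: the Gaussian Markov condition holds exactly
```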
III Proof of Theorem 1
III-A Principles of the Achievability
The achievability scheme that we propose in this paper can be viewed as a combination of the scheme for secret key generation from correlated sources [4], consisting of a codebook with a superposition structure, and secret key distribution through the secure channel. Moreover, this scheme can be viewed as a special case of the separation-based achievability scheme in [22, Th. 2].
The rates of the public channel and the secure channel are used for the transmission of the following:

the inner code .

the outer code .

key distribution.
Our proposed scheme follows the principles below:

The public channel is used to transmit the inner code . Since the rate of the inner code is less than the rate of the public channel, the leftover rate of the public channel is used to transmit the outer code .

The secure channel is used to transmit the outer code . Since there is still rate left over on the secure channel, the leftover rate can be used for key distribution.

The public channel cannot be used for key distribution, and the secure channel cannot be used to transmit the inner code .
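The three principles above amount to simple rate bookkeeping, sketched below with hypothetical rate values; the function name and numbers are ours, introduced only to make the allocation explicit.

```python
def rate_split(Rp, Rs, R_inner, R_outer):
    """Bookkeeping for the rate-splitting principles above (illustrative only).

    The inner code rides on the public channel; leftover public rate carries
    part of the outer code; the remainder of the outer code and direct key
    distribution use the secure channel.  Returns the secure rate left for
    key distribution, or raises ValueError if the allocation is infeasible.
    """
    if R_inner > Rp:
        raise ValueError("inner code does not fit on the public channel")
    leftover_public = Rp - R_inner
    outer_on_secure = max(R_outer - leftover_public, 0.0)
    if outer_on_secure > Rs:
        raise ValueError("outer code does not fit on the secure channel")
    return Rs - outer_on_secure  # secure rate left over for key distribution

# Hypothetical rates: public 1.0, secure 0.8, inner code 0.6, outer code 0.7.
# 0.4 of the outer code fits on the public channel, 0.3 spills to the secure
# channel, leaving 0.5 of secure rate for key distribution.
print(rate_split(Rp=1.0, Rs=0.8, R_inner=0.6, R_outer=0.7))  # 0.5
```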
III-B The Converse
We begin the proof of the converse with
(16)  
(17)  
(18)  
(19)  
(20)  
(21)  
(22)  
(23)  
(24)  
(25)  
(26) 
where

follows from Fano’s inequality;

follows from the secrecy constraint in (4);

follows by the nonnegativity of discrete entropy and mutual information.
By applying the key identity [30, Lemma 17.12], it follows that
(27) 
where
(28)  
(29) 
Here,
is uniformly distributed on
and independent of . Since , and is a function of , the Markov chain is satisfied. Since are i.i.d., we can replace and by and . Then (5) is proved.

Next, we consider the sum rate
(30)  
(31)  
(32)  
(33)  
(34)  
(35)  
(36)  
(37)  
(38) 
where

follows from Fano’s inequality;

follows from the key identity [30, Lemma 17.12];

follows because are i.i.d.;

follows from the Markov Chain ;

follows from the Markov Chain ;

follows from the Markov Chain .
We thereby have
(39)  
(40)  
(41) 
with as defined in (29) and uniform on .
IV The Converse of Theorem 2
IV-A The Extremal Inequality
As in [16], the achievable rate region of for vector Gaussian sources is defined as
(50) 
Due to the convexity of , to characterize the optimal tradeoff of for the vector Gaussian model, we can alternatively consider the following sum problem,
(51)  
(52) 
for any . To prove the converse part of Theorem 2, it suffices to show that the following inequality holds for any ,
(53) 
where is a Gaussian optimization problem given as follows
(54) 
Let be a (not necessarily unique) minimizer of the optimization problem . The necessary Karush-Kuhn-Tucker (KKT) conditions are given in the following lemma.
Lemma 1.
The minimizer of needs to satisfy
(55)  
(56) 
for some positive semidefinite matrices such that
(57)  
(58) 
Proof.
See Appendix C. ∎
Now, starting from the single-letter expressions in Theorem 1, the sum problem for any rate tuple in can be lower bounded by
(59)  
(60)  
(61)  
(62)  
(63)  
(64) 
where

follows from the Markov chain ,

follows from the Markov chain .
By comparing (64) with the optimization problem in (IV-A), it can be shown that to prove Theorem 2, it suffices to prove the following extremal inequality.
Theorem 3.
The proof of (65) depends on the enhancement argument introduced in [23], [24], and can be divided into two steps. In the first step, we enhance the source to such that the Markov chain holds. In the second step, we decouple the extremal inequality (64) into two new ones, associated with the enhanced sources , respectively. The proof of extremal inequality (64) is then completed by invoking the two enhanced extremal inequalities, which are related to the degraded compound MIMO Gaussian broadcast channel in [28, 29] and the vector generalization of Costa’s entropy power inequality in [26, 16], respectively.
IV-B Some Lemmas
In order to reduce the non-degraded sources to the degraded case, we introduce a new covariance matrix such that
(66) 
Then has useful properties listed in the following lemma.
Lemma 2.
has the following properties:
Moreover, to decouple our extremal inequality (65), we need two auxiliary lemmas: the vector generalization of Costa’s entropy power inequality [26], and the generalized extremal inequality related to the degraded compound MIMO Gaussian broadcast channel in [28, 29].
Lemma 3 ([26, Corollary 2]).
Let , and be Gaussian random vectors with positive definite covariance matrices , and , respectively. Furthermore, and satisfy . If there exists a positive semidefinite covariance matrix such that
(67) 
where , then
(68) 
for any independent of .
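A scalar sanity check of the entropy power inequality underlying Lemma 3: for independent Gaussian X and Z, the entropy power of X + Z equals the sum of the individual entropy powers (variances add), so the EPI holds with equality in the Gaussian case. The variances below are illustrative values of our own.

```python
import numpy as np

# Differential entropy of a scalar Gaussian, in nats: h = (1/2) ln(2*pi*e*var).
def diff_entropy_gauss(var):
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Entropy power of a scalar random variable with differential entropy h:
# N = exp(2h) / (2*pi*e); for a Gaussian this recovers its variance.
def entropy_power(h):
    return np.exp(2 * h) / (2 * np.pi * np.e)

var_X, var_Z = 1.5, 0.7  # illustrative variances of independent Gaussians
h_sum = diff_entropy_gauss(var_X + var_Z)   # X + Z is Gaussian here
N_sum = entropy_power(h_sum)
N_parts = entropy_power(diff_entropy_gauss(var_X)) \
        + entropy_power(diff_entropy_gauss(var_Z))
print(N_sum, N_parts)  # equal: the EPI is tight for Gaussians
```

Lemma 3 is a far-reaching vector and conditional generalization of this equality case; the sketch only fixes the basic quantity (entropy power) that the inequality constrains.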
Lemma 4 ([28, Corollary 4]).
Let and be real Gaussian random vectors with positive definite covariance matrices and , respectively. We assume that there exists a covariance matrix such that
(69) 
Furthermore, let be a positive semidefinite covariance matrix such that
(70) 
where , , and
(71) 
Then for any distribution independent of and , such that , we have
(72) 