Message transmission over classical quantum channels with a Jammer with side information; correlation as resource and common randomness generating

02/05/2019 ∙ by Holger Boche, et al. ∙ Technische Universität München

In this paper we analyze the capacity of a general model for arbitrarily varying classical-quantum channels (AVCQCs) when the sender and the receiver use a weak resource. In this model a jammer has side information about the channel input. We determine the correlation assisted capacity of AVCQCs with a jammer knowing the channel input and deliver a single-letter formula for it. This formula, as a function of the channel parameters, is Turing computable. The single-letter characterization is surprising: on the one hand, correlation is the weakest resource in the hierarchy of resources; on the other hand, the deterministic capacity formula for arbitrarily varying channels with an informed jammer is still an open problem, even for classical arbitrarily varying channels, where the well-known zero-error capacity of Shannon is contained as a special case of this scenario. As an application, we determine the correlation assisted common randomness capacity. We also analyze both of these capacities when only a small amount of correlation is available. For the correlation assisted common randomness capacity we show a further interesting aspect: for a sufficient amount of "public communication", the common randomness capacity is Turing computable; however, without this constraint on public communication, the correlation assisted common randomness capacity is in general not Banach-Mazur computable and thus not Turing computable.


I Introduction

Quantum information theory is a young field that allows us to exploit new possibilities while at the same time imposing fundamental limitations. We consider the capacity of arbitrarily varying classical-quantum channels (AVCQCs). The capacity of classical-quantum channels has been determined in [23], [26], and [27].

An arbitrarily varying channel (AVC) describes communication including a jammer who tries to disturb the legal parties' communication by changing his input in every channel use. This model completely captures all possible jamming attacks depending on the knowledge of the jammer. The arbitrarily varying channel was introduced in [12]. In the model of message transmission over arbitrarily varying channels it is understood that the sender and the receiver have to select their coding scheme first. In the conventional model it is assumed that this coding scheme is known by the jammer, and he may choose the most advantageous jamming strategy depending on his knowledge, but the jammer has knowledge of neither the transmitted codeword nor the message.

Sharing resources is a well-studied form of assistance for the transmitters. For example, in wireless communication, the communication service may send some signals via satellite to its users. In 1978 Ahlswede demonstrated in [2] the importance of resources (here, shared randomness) in a very clear form by showing the surprising result that the deterministic capacity of an arbitrarily varying channel is either zero or equal to its randomness assisted capacity (Ahlswede dichotomy). After this discovery, it remained an open question exactly when the deterministic capacity is nonzero. In 1985 a necessary condition was given in [22], and in 1988 [20] proved that this condition is also sufficient. In [16] it was shown that the resource must be known only to the legal channel users, since otherwise it is completely useless.

In [18] a classification of various resources is given. A distinction is made between two extremal cases: randomness and correlation. Randomness is the strongest resource: it requires a perfect copy of the outcome of a random experiment, and thus we should assume an additional perfect channel to generate this kind of resource. On the other hand, correlation is the weakest resource. The work [18] showed that common randomness is a stronger resource than correlation in the following sense: a sufficiently large amount of common randomness allows the sender and receiver to asymptotically simulate the statistics of any correlation. Conversely, an example is given in which not even a finite amount of common randomness can be extracted from a given correlation without further communication.

In all the above-mentioned works it is assumed that the jammer knows the coding scheme, but has no side information about the codeword which the legal transmitters send. In many applications, especially for secure communications, it is too optimistic to assume this. In [13] it has been shown that the jammer can benefit from his knowledge of the transmitted codeword, i.e., he may find a better jamming strategy. Thus in our previous paper [13] we considered the scenario when the jammer knows both the coding scheme and the input codeword.

This work is an extension of our previous paper [13], where we determined the randomness assisted capacity of AVCQCs with a jammer knowing the channel input. However, as [18] showed, common randomness is a very "costly" resource. A promising result of this work is that the much "cheaper" resource, correlation, is equally powerful. Furthermore, a correlation does not have to be "very good" to be helpful in achieving a positive secrecy capacity, since it is a helpful resource even if its mutual information is only slightly larger than zero. We also show that the same capacity can be achieved using a smaller amount (as compared to the number of channel uses) of correlation.

As an application of our results, we turn to the question of how much common randomness can be generated over an AVCQC with an informed jammer when correlation is used as a resource. Capacities of common randomness generation over classical perfect channels and over classical noisy channels have been determined in [8]. In this work, we determine the common randomness generation capacity with an informed jammer using correlation as a resource. We also analyze the case when only a smaller amount (as compared to the number of channel uses) of correlation is used.

In [19] the concept of a Turing machine has been applied to capacity questions. The authors have considered secret key capacities and secure authentication capacities over several classical channel network models and have determined whether they are computable, i.e., whether they can be solved algorithmically with the help of Turing machines. As an application of our results, we extend the objectives of [19] to some capacity formulas of quantum networks and determine whether they are Turing computable.

II Definitions and Communication Models

II-A Basic Notations

Throughout the paper random variables will be denoted by capital letters, e.g., $X$, $Y$, and their realizations (or values) and domains (or alphabets) will be denoted by corresponding lower case letters, e.g., $x$, $y$, and script letters, e.g., $\mathcal{X}$, $\mathcal{Y}$, respectively. Random sequences will be denoted by capital bold-face letters, whose lengths are understood by the context, e.g., $\mathbf{X}$ and $\mathbf{Y}$, and deterministic sequences are written as lower case bold-face letters, e.g., $\mathbf{x}$ and $\mathbf{y}$.

$P_X$ is the distribution of a random variable $X$. Joint distributions and conditional distributions of random variables $X$ and $Y$ will be written as $P_{XY}$, etc., and $P_{X|Y}$, etc., respectively, and $P_X \times P_Y$ and $P_X^n$ are product distributions, i.e., $(P_X \times P_Y)(x, y) = P_X(x) P_Y(y)$ and $P_X^n(x^n) = \prod_{i=1}^{n} P_X(x_i)$. Moreover, $\mathcal{T}^n_X$, $\mathcal{T}^n_{XY}$, and $\mathcal{T}^n_{X|Y}(\mathbf{y})$ are sets of (strongly) typical sequences of the type $P_X$, joint type $P_{XY}$, and conditional type $P_{X|Y}$, respectively. The cardinality of a set $\mathcal{A}$ will be denoted by $|\mathcal{A}|$. For a positive integer $n$, $[n] := \{1, \ldots, n\}$. "$W$ is a classical channel, or a conditional probability distribution, from set $\mathcal{X}$ to set $\mathcal{Y}$" is abbreviated to "$W: \mathcal{X} \to \mathcal{Y}$". "Random variables $X$, $Y$, and $Z$ form a Markov chain" is abbreviated to "$X \leftrightarrow Y \leftrightarrow Z$". $\mathbb{E}$ will stand for the operator of mathematical expectation.

Throughout the paper dimensions of all Hilbert spaces are finite. For a finite-dimensional complex Hilbert space $H$, we denote the (convex) set of density operators on $H$ by

$$\mathcal{S}(H) := \{\rho \in \mathcal{L}(H) : \rho \text{ is Hermitian},\ \rho \geq 0_H,\ \mathrm{tr}(\rho) = 1\},$$

where $\mathcal{L}(H)$ is the set of linear operators on $H$, and $0_H$ is the null matrix on $H$. Note that any operator in $\mathcal{S}(H)$ is bounded.

Throughout the paper the logarithm base is 2. For a discrete random variable $X$ on a finite set $\mathcal{X}$ and a discrete random variable $Y$ on a finite set $\mathcal{Y}$, we denote the Shannon entropy of $X$ by $H(X) = -\sum_{x} P_X(x) \log P_X(x)$ and the mutual information between $X$ and $Y$ by $I(X;Y) = \sum_{x}\sum_{y} P_{XY}(x,y) \log \frac{P_{XY}(x,y)}{P_X(x) P_Y(y)}$. Here $P_{XY}$ is the joint probability distribution function of $X$ and $Y$, and $P_X$ and $P_Y$ are the marginal probability distribution functions of $X$ and $Y$ respectively.

Let $\Phi$ and $\Psi$ be quantum systems. We denote the Hilbert space of $\Phi$ and $\Psi$ by $H^{\Phi}$ and $H^{\Psi}$, respectively. Let $\rho^{\Phi\Psi}$ be a bipartite quantum state in $\mathcal{S}(H^{\Phi\Psi})$. We present the partial trace over $H^{\Phi}$ by

$$\mathrm{tr}_{\Phi}\big(\rho^{\Phi\Psi}\big) := \sum_{l} \langle l|_{\Phi}\, \rho^{\Phi\Psi}\, |l\rangle_{\Phi},$$

where $\{|l\rangle_{\Phi}\}_l$ is an orthonormal basis of $H^{\Phi}$. We present the conditional entropy by

$$S(\Phi \mid \Psi)_{\rho} := S\big(\rho^{\Phi\Psi}\big) - S\big(\rho^{\Psi}\big).$$

Here $\rho^{\Psi} = \mathrm{tr}_{\Phi}(\rho^{\Phi\Psi})$.
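The following minimal numerical sketch (our own illustration, not code from the paper) evaluates the partial trace and the conditional entropy for a two-qubit example; the function names and the test state are assumptions made purely for the illustration.

```python
# Minimal sketch (assumed example, not from the paper): partial trace and
# conditional entropy S(Phi|Psi) = S(rho^{Phi Psi}) - S(rho^{Psi}).
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits, ignoring numerically zero eigenvalues."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]
    return float(-np.sum(eigvals * np.log2(eigvals)))

def partial_trace_first(rho, dim_a, dim_b):
    """Trace out the first subsystem of a state on a (dim_a * dim_b)-dimensional space."""
    rho = rho.reshape(dim_a, dim_b, dim_a, dim_b)
    return np.einsum('ijik->jk', rho)

# Maximally entangled two-qubit state |phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
rho_ab = np.outer(phi, phi.conj())

rho_b = partial_trace_first(rho_ab, 2, 2)
cond_entropy = von_neumann_entropy(rho_ab) - von_neumann_entropy(rho_b)
print(cond_entropy)   # ~ -1 for the maximally entangled state
```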

If the sender wants to transmit a classical message from a finite message set to the receiver using a quantum channel, his encoding procedure will include a classical-to-quantum encoder to prepare a quantum state suitable as an input for the channel. In view of this, we have the following definition.

Definition 1

Let $H$ be a finite-dimensional complex Hilbert space. A classical-quantum channel is a mapping $W: A \to \mathcal{S}(H)$, specified by a set of quantum states $\{W(a) : a \in A\} \subset \mathcal{S}(H)$, indexed by "input letters" $a$ in a finite set $A$. $A$ and $H$ are called input alphabet and output space respectively. We define the $n$-th extension of the classical-quantum channel $W$ as follows. The channel outputs a quantum state

$$W^{\otimes n}(a^n) := W(a_1) \otimes \cdots \otimes W(a_n)$$

in the $n$-th tensor power $H^{\otimes n}$ of the output space $H$, when an input codeword $a^n = (a_1, \ldots, a_n)$ of length $n$ is input into the channel.

Let $W: A \to \mathcal{S}(H)$ be a classical-quantum channel. For $P \in P(A)$, the conditional entropy of the channel $W$ with input distribution $P$ is presented by

$$S(W \mid P) := \sum_{a \in A} P(a)\, S\big(W(a)\big).$$

Let $W: A \to \mathcal{S}(H)$ be a classical-quantum channel, i.e., a set of quantum states labeled by elements of $A$. For a probability distribution $P$ on $A$, the Holevo quantity is defined as

$$\chi(P; W) := S\Big(\sum_{a \in A} P(a) W(a)\Big) - \sum_{a \in A} P(a)\, S\big(W(a)\big).$$
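As a small illustration of this definition, the following sketch (ours, with illustrative states and names) evaluates the Holevo quantity for a binary classical-quantum channel with two non-orthogonal pure output states.

```python
# Sketch (assumed example): Holevo quantity
#   chi(P;W) = S(sum_x P(x) W(x)) - sum_x P(x) S(W(x))
# for a binary cq channel with non-orthogonal pure outputs.
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def holevo_quantity(P, states):
    avg = sum(p * rho for p, rho in zip(P, states))
    return entropy(avg) - sum(p * entropy(rho) for p, rho in zip(P, states))

# W(0) = |0><0|,  W(1) = |+><+|,  uniform input distribution
ket0 = np.array([1.0, 0.0]); ketp = np.array([1.0, 1.0]) / np.sqrt(2)
W = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
print(holevo_quantity([0.5, 0.5], W))  # ~0.60, strictly below 1 bit
```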

For a probability distribution $P$ on a finite set $A$ and a positive constant $\delta$, we present the set of typical sequences by

$$\mathcal{T}^n_{P,\delta} := \Big\{ a^n \in A^n : \Big|\tfrac{1}{n} N(a' \mid a^n) - P(a')\Big| \leq \delta \ \ \forall a' \in A \Big\},$$

where $N(a' \mid a^n)$ is the number of occurrences of the symbol $a'$ in the sequence $a^n$.

Let $H$ be a finite-dimensional complex Hilbert space. Let $\rho \in \mathcal{S}(H)$ and $\alpha > 0$. We suppose $\rho$ has the spectral decomposition $\rho = \sum_{x} P(x) |x\rangle\langle x|$; its $\alpha$-typical subspace is the subspace spanned by $\{|x^n\rangle : x^n \in \mathcal{T}^n_{P,\alpha}\}$, where $|x^n\rangle := \bigotimes_{i=1}^{n} |x_i\rangle$. The orthogonal subspace projector which projects onto this typical subspace is

$$\Pi_{\rho,\alpha} := \sum_{x^n \in \mathcal{T}^n_{P,\alpha}} |x^n\rangle\langle x^n|.$$

Similarly, let $A$ be a finite set, and $H$ be a finite-dimensional complex Hilbert space. Let $V: A \to \mathcal{S}(H)$ be a classical-quantum channel. For $a \in A$, suppose $V(a)$ has the spectral decomposition

$$V(a) = \sum_{j} V(j \mid a)\, |j\rangle\langle j|$$

for a stochastic matrix $V(\cdot \mid \cdot)$. The $\alpha$-conditional typical subspace of $V$ for a typical sequence $a^n$ is the subspace spanned by $\big\{\bigotimes_{a \in A} |j^{I_a}\rangle : j^{I_a} \in \mathcal{T}^{I_a}_{V(\cdot|a),\alpha}\big\}$. Here $I_a := \{i \in \{1,\ldots,n\} : a_i = a\}$ is an indicator set that selects the indices $i$ in the sequence $a^n = (a_1, \ldots, a_n)$ for which the $i$-th symbol $a_i$ is equal to $a \in A$. The subspace is often referred to as the $\alpha$-conditional typical subspace of the state $V^{\otimes n}(a^n)$. The orthogonal subspace projector which projects onto it is defined as

$$\Pi_{V,\alpha}(a^n) := \bigotimes_{a \in A} \ \sum_{j^{I_a} \in \mathcal{T}^{I_a}_{V(\cdot|a),\alpha}} |j^{I_a}\rangle\langle j^{I_a}|.$$

The typical subspace has the following properties:

For $\rho \in \mathcal{S}(H)$ and $\alpha > 0$ there are positive constants $\beta(\alpha)$, $\gamma(\alpha)$, and $\delta(\alpha)$, depending on $\alpha$ and tending to zero when $\alpha \to 0$, such that

$$\mathrm{tr}\big(\rho^{\otimes n}\, \Pi_{\rho,\alpha}\big) > 1 - 2^{-n\beta(\alpha)}, \qquad (1)$$
$$2^{n(S(\rho)-\delta(\alpha))} \leq \mathrm{tr}\big(\Pi_{\rho,\alpha}\big) \leq 2^{n(S(\rho)+\delta(\alpha))}, \qquad (2)$$
$$2^{-n(S(\rho)+\gamma(\alpha))}\, \Pi_{\rho,\alpha} \leq \Pi_{\rho,\alpha}\, \rho^{\otimes n}\, \Pi_{\rho,\alpha} \leq 2^{-n(S(\rho)-\gamma(\alpha))}\, \Pi_{\rho,\alpha}. \qquad (3)$$

For $a^n \in \mathcal{T}^n_{P,\alpha}$ there are positive constants $\beta(\alpha)'$, $\gamma(\alpha)'$, and $\delta(\alpha)'$, depending on $\alpha$ and tending to zero when $\alpha \to 0$, such that

$$\mathrm{tr}\big(V^{\otimes n}(a^n)\, \Pi_{V,\alpha}(a^n)\big) > 1 - 2^{-n\beta(\alpha)'}, \qquad (4)$$
$$2^{n(S(V|P)-\delta(\alpha)')} \leq \mathrm{tr}\big(\Pi_{V,\alpha}(a^n)\big) \leq 2^{n(S(V|P)+\delta(\alpha)')}, \qquad (5)$$
$$2^{-n(S(V|P)+\gamma(\alpha)')}\, \Pi_{V,\alpha}(a^n) \leq \Pi_{V,\alpha}(a^n)\, V^{\otimes n}(a^n)\, \Pi_{V,\alpha}(a^n) \leq 2^{-n(S(V|P)-\gamma(\alpha)')}\, \Pi_{V,\alpha}(a^n). \qquad (6)$$

For the classical-quantum channel $V: A \to \mathcal{S}(H)$ and a probability distribution $P$ on $A$ we define a quantum state $PV := \sum_{a} P(a) V(a)$ on $H$. For $\alpha > 0$ we define an orthogonal subspace projector $\Pi_{PV,\alpha}$ fulfilling (1), (2), and (3). Let $a^n \in \mathcal{T}^n_{P,\alpha}$. For $\alpha > 0$ there is a positive constant $\beta(\alpha)''$ such that the following inequality holds:

$$\mathrm{tr}\big(V^{\otimes n}(a^n)\, \Pi_{PV,\alpha\sqrt{|A|}}\big) \geq 1 - 2^{-n\beta(\alpha)''}. \qquad (7)$$

(1) holds because the $\alpha$-typical set carries almost all of the probability mass of $P^n$. (2) holds because the cardinality of the $\alpha$-typical set is bounded from above and below by $2^{n(S(\rho) \pm \delta(\alpha))}$. (3) holds because $2^{-n(S(\rho)+\gamma(\alpha))} \leq P^n(x^n) \leq 2^{-n(S(\rho)-\gamma(\alpha))}$ for $x^n \in \mathcal{T}^n_{P,\alpha}$ and a positive $\gamma(\alpha)$. (4), (5), and (6) can be obtained in a similar way. (7) follows from the permutation invariance of $\Pi_{PV,\alpha\sqrt{|A|}}$.
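As a rough numerical illustration of property (1) as reconstructed above (the sketch and its parameter choices are ours, not the paper's): for a state that is diagonal in the computational basis, the typical-subspace projector is diagonal as well, so the trace in (1) reduces to a binomial probability.

```python
# Rough numerical check (assumed parameters) of typicality property (1).
# For rho = diag(0.8, 0.2), the projector onto the alpha-typical subspace is
# diagonal in the eigenbasis, so tr(rho^{tensor n} Pi_{rho,alpha}) equals the
# probability that the eigenvalue-label sequence is alpha-typical.
import math

spectrum = (0.8, 0.2)      # eigenvalues of rho (illustrative values)
alpha = 0.1

def typical_weight(n):
    """Probability mass of rho^{tensor n} inside the alpha-typical subspace."""
    total = 0.0
    for k in range(n + 1):                      # k = number of '0' labels
        if abs(k / n - spectrum[0]) <= alpha:   # typicality condition
            total += math.comb(n, k) * spectrum[0] ** k * spectrum[1] ** (n - k)
    return total

for n in (20, 50, 100, 200):
    print(n, round(typical_weight(n), 4))       # approaches 1 as n grows
```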

II-B Code Concepts and Resources

Fig. 1: Conventional model: AVCQC when the jammer has no further knowledge about the channel input and the legal channel users have no access to any resource. In this scenario the jammer's inputs do not depend on the channel input.
Definition 2

An arbitrarily varying classical-quantum channel (AVCQC) $\mathfrak{W}$ is specified by a set $\{W_s : s \in S\}$ of classical-quantum channels with a common input alphabet $A$ and output space $H$, which are indexed by elements $s$ in a finite set $S$. Elements $s \in S$ usually are called the states of the channel. The channel outputs a quantum state

$$W^n_{s^n}(a^n) := W_{s_1}(a_1) \otimes \cdots \otimes W_{s_n}(a_n) \qquad (8)$$

if an input codeword $a^n = (a_1, \ldots, a_n)$ is input into the channel, and the channel is governed by a state sequence $s^n = (s_1, \ldots, s_n)$, while the state varies from symbol to symbol in an arbitrary manner.
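The following toy sketch (our own, with an arbitrarily chosen two-state AVCQC) illustrates Definition 2: the output of the channel is a tensor product whose factors are selected symbol-wise by the jammer's state sequence.

```python
# Toy sketch (assumed example, not the paper's model) of Definition 2:
# an AVCQC given by two cq channels W_0, W_1, and the output state
# W^n_{s^n}(x^n) as a tensor product chosen symbol-wise by the state sequence.
import numpy as np
from functools import reduce

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2)

# W[s][x]: channel state s selects which pure state encodes input letter x
W = {
    0: {0: np.outer(ket0, ket0), 1: np.outer(ket1, ket1)},   # "good" state
    1: {0: np.outer(ketp, ketp), 1: np.outer(ketp, ketp)},   # "jammed" state
}

def avcqc_output(x_seq, s_seq):
    """W^n_{s^n}(x^n) = W_{s_1}(x_1) (x) ... (x) W_{s_n}(x_n)."""
    factors = [W[s][x] for x, s in zip(x_seq, s_seq)]
    return reduce(np.kron, factors)

rho_out = avcqc_output(x_seq=(0, 1, 0), s_seq=(0, 1, 0))
print(rho_out.shape)   # (8, 8): a state on the 3-fold tensor power of the output space
```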

We assume that the channel state is under the control of the jammer. Without loss of generality we also assume that the jammer always chooses the most advantageous attacking strategy according to his knowledge. This is important for the applications of our result to other channel models, e.g., compound channels.

Definition 3

An $(n, J_n)$ code $\mathcal{C}$ consists of an encoder $u: \{1, \ldots, J_n\} \to A^n$ and a collection of positive-semidefinite operators $\{D_j : j \in \{1, \ldots, J_n\}\}$ on $H^{\otimes n}$ which fulfills $\sum_{j=1}^{J_n} D_j = \mathrm{id}_{H^{\otimes n}}$.

Definition 4

A non-negative number $R$ is an achievable rate for a classical-quantum channel $W$ if for every $\epsilon > 0$, $\delta > 0$, and sufficiently large $n$ there exists an $(n, J_n)$ code $\big(u, \{D_j\}\big)$ such that $\frac{\log J_n}{n} > R - \delta$, and

$$\frac{1}{J_n} \sum_{j=1}^{J_n} \mathrm{tr}\Big( W^{\otimes n}\big(u(j)\big)\, \big(\mathrm{id}_{H^{\otimes n}} - D_j\big) \Big) < \epsilon.$$

The supremum of achievable deterministic rates of $W$ is called the capacity of $W$.
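As a one-shot illustration of Definitions 3 and 4 (our own toy example, using the standard Helstrom two-state measurement rather than anything specific to the paper), the following sketch builds a two-message code for a binary classical-quantum channel and evaluates its average error probability.

```python
# Toy sketch (assumed example): a one-shot code with two messages over a cq
# channel, decoded with the Helstrom (optimal two-state) measurement, and its
# average error  (1/J) sum_j tr( W(u(j)) (id - D_j) ).
import numpy as np

ket0 = np.array([1.0, 0.0]); ketp = np.array([1.0, 1.0]) / np.sqrt(2)
W = {0: np.outer(ket0, ket0), 1: np.outer(ketp, ketp)}   # cq channel on letters {0, 1}

u = {1: 0, 2: 1}             # encoder: message j -> input letter u(j)

# Helstrom measurement for the two equiprobable output states
delta = W[u[1]] - W[u[2]]
vals, vecs = np.linalg.eigh(delta)
D1 = sum(np.outer(vecs[:, k], vecs[:, k]) for k in range(2) if vals[k] > 0)
D2 = np.eye(2) - D1          # the D_j sum to the identity, as Definition 3 requires

avg_error = 0.5 * (np.trace(W[u[1]] @ (np.eye(2) - D1))
                   + np.trace(W[u[2]] @ (np.eye(2) - D2)))
print(avg_error.real)        # (1 - 1/sqrt(2))/2 ~ 0.146
```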

Fig. 2: AVCQC with randomness as coordination resource when the jammer has no further knowledge about the channel input

In the context of arbitrarily varying channels, randomness can be an important resource for reliable communication. Ahlswede showed in [2] (cf. also [3] and [4]) the surprising result that either the deterministic capacity of an arbitrarily varying channel is zero, or it equals its randomness assisted capacity (Ahlswede dichotomy). [14] shows that there are indeed arbitrarily varying classical-quantum channels which have zero deterministic capacity and positive random capacity. Therefore randomness is indeed a very helpful resource for message transmission (and secure message transmission) through an arbitrarily varying classical-quantum channel. Having a resource is particularly essential for the scenario we consider in this work (see the discussion below).

Fig. 3: AVCQC with randomness as coordination resource when the jammer knows the codeword: the sender and the receiver share the outcome of a random experiment, i.e., they share common randomness

Most of the previous works on AVCQCs consider the case when the jammer knows the coding scheme, but has no side information about the codeword of the transmitters. However, for secure communications this assumption may be too optimistic, since [13] shows that the jammer can indeed achieve a better jamming strategy when he knows the codeword. Thus we concentrate on message transmission over classical-quantum channels with a jammer with additional side information about the codeword. We assume that the jammer chooses the most advantageous attacking strategy according to his side information.

Definition 5

Let $\mathfrak{W}$ be an AVCQC. A non-negative number $R$ is an achievable deterministic rate with informed jammer under the average error criterion for $\mathfrak{W}$ if for every $\epsilon > 0$, $\delta > 0$, and every sufficiently large $n$ there exists a code $\big(u, \{D_j\}\big)$ such that $\frac{\log J_n}{n} > R - \delta$, and $P_e(\mathcal{C}, n) < \epsilon$,

where $P_e(\mathcal{C}, n)$ is defined as

$$P_e(\mathcal{C}, n) := \max_{g} \ \frac{1}{J_n} \sum_{j=1}^{J_n} \mathrm{tr}\Big( W^n_{g(u(j))}\big(u(j)\big)\, \big(\mathrm{id}_{H^{\otimes n}} - D_j\big) \Big).$$

Here the maximum is taken over all functions $g: A^n \to S^n$.

The supremum of achievable deterministic rates of $\mathfrak{W}$ with informed jammer under the average error criterion is called the deterministic capacity of $\mathfrak{W}$ with informed jammer.

Our scenario (when the jammer knows the input codeword) is already a challenging topic for classical arbitrarily varying channels. This has been analyzed by Sarwate in [25], where only the randomness assisted capacity has been determined. The deterministic capacity formula, i.e., without an additional resource, is an open problem even in the classical case. It has been shown by Ahlswede in [1] that the classical capacity under the maximal error criterion in this scenario contains the zero-error capacity of related discrete memoryless channels as a special case. A deterministic capacity formula for this is still unknown. In particular, [13] shows a violation of the Ahlswede dichotomy in our scenario.

Coding for AVCQCs is even harder. Due to the non-commutativity of quantum operators, many techniques, concepts and methods of classical information theory, for instance non-standard decoders and list decoding (which have been used in the proof of [25]), may not extend to quantum information theory. In [13] we determined the random assisted capacities of AVCQCs when the jammer has access to the channel input.

Definition 6

A random assisted $(n, J_n)$ code for an AVCQC $\mathfrak{W}$ is a uniformly distributed random variable $\Lambda$ taking values in a set of $(n, J_n)$ codes

$$\big\{ \big(u^{\lambda}, \{D_j^{\lambda}\}\big) : \lambda \in \Lambda_n \big\}$$

with a common message set $\{1, \ldots, J_n\}$, where $u^{\lambda}$ and $\{D_j^{\lambda}\}$ are the code book and decoding measurement of the $\lambda$-th code in the set, respectively. $|\Lambda_n|$ is here a function of $n$, the length of the codes in this set, i.e., for a fixed $n$, $|\Lambda_n|$ is finite.

Definition 7

By assuming that the random message is uniformly distributed on $\{1, \ldots, J_n\}$, we define the average probability of error by

$$P_e(\Lambda, n) := \max_{g: A^n \to S^n} \ \frac{1}{|\Lambda_n|} \sum_{\lambda \in \Lambda_n} \frac{1}{J_n} \sum_{j=1}^{J_n} \mathrm{tr}\Big( W^n_{g(u^{\lambda}(j))}\big(u^{\lambda}(j)\big)\, \big(\mathrm{id}_{H^{\otimes n}} - D_j^{\lambda}\big) \Big). \qquad (9)$$

This can be also rewritten as

$$P_e(\Lambda, n) = \max_{g: A^n \to S^n} \ \mathbb{E}\,\Big[ 1 - \frac{1}{J_n} \sum_{j=1}^{J_n} \mathrm{tr}\Big( W^n_{g(u^{\Lambda}(j))}\big(u^{\Lambda}(j)\big)\, D_j^{\Lambda} \Big) \Big], \qquad (10)$$

where the expectation is taken over the uniformly distributed $\Lambda$.

A non-negative number $R$ is an achievable rate for the arbitrarily varying classical-quantum channel $\mathfrak{W}$ under random assisted coding with informed jammer using the average error criterion if for every $\delta > 0$ and $\epsilon > 0$ and every sufficiently large $n$, there is a random assisted $(n, J_n)$ code of length $n$ such that $\frac{\log J_n}{n} > R - \delta$ and $P_e(\Lambda, n) < \epsilon$.

The supremum of achievable rates under random assisted coding of $\mathfrak{W}$ with informed jammer using the average error criterion is called the random assisted capacity of $\mathfrak{W}$ with informed jammer using the average error criterion.

Fig. 4: AVCQC when the jammer knows the coding scheme and the legal channel users have merely correlation as resource

A correlated source is a discrete memoryless source (DMS), observed by the sender and receiver, modeled by independent copies of a pair of random variables $(X, Y)$ with values in some finite set $\mathcal{X} \times \mathcal{Y}$. The sender has access to the random variable $X$, and the receiver to $Y$. We call $(X, Y)$ a correlated source, or a correlation. Since the source is memoryless we also write $(X, Y)$ instead of $(X^n, Y^n)_{n \in \mathbb{N}}$. Without loss of generality we assume that $(X, Y)$ is binary (since one can easily reduce a non-binary $(X, Y)$ with $I(X;Y) > 0$ to a binary one with positive mutual information). The only exception is Section IV, where $(X, Y)$ may be non-binary. It has been shown in [6] that this is a helpful resource for information transmission through an arbitrarily varying classical channel: the use of mere correlation already allows one to transmit messages at any rate that is achievable using the optimal form of shared randomness. The capacity of an arbitrarily varying quantum channel assisted by correlated shared randomness as resource has been discussed in [18], where equivalent results were found.
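For concreteness, here is a small sketch (ours) of such a correlated source: X is a uniform bit and Y is X observed through a binary symmetric channel, so the pair has strictly positive mutual information and hence qualifies as a useful correlation in the sense above; the crossover value is an arbitrary choice.

```python
# Sketch (assumed example): a binary correlated source (X, Y) where Y is X sent
# through a binary symmetric channel with crossover 0.1, so I(X;Y) > 0.
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

crossover = 0.1
# X uniform on {0, 1}; Y = X flipped with probability `crossover`
mutual_information = h2(0.5) - h2(crossover)      # I(X;Y) = H(Y) - H(Y|X)
print(mutual_information)                          # about 0.53 bits per use

# sampling n i.i.d. copies, as the sender and receiver would observe them
rng = np.random.default_rng(0)
n = 10
x = rng.integers(0, 2, size=n)
flips = (rng.random(n) < crossover).astype(int)
y = x ^ flips
print(x)
print(y)
```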

Our previous work [13] determined the randomness assisted capacity with informed jammer, where we used randomness as a resource. However, as [18] showed, common randomness is a very "costly" resource: we have to require that the sender and the receiver each obtain a perfect copy of the output of a random experiment. Thus in this work we consider correlation as a resource, which is much "cheaper" in the sense that we can simulate any correlation by common randomness asymptotically, but there exists a class of sequences of bipartite distributions which cannot model common randomness (cf. [18]).

Now we consider the correlation assisted code.

Definition 8

We assume that the transmitters have access to an arbitrary correlated source $(X, Y)$ with alphabets $\mathcal{X} \times \mathcal{Y}$. An $(X, Y)$-correlation assisted $(n, J_n)$ code for the arbitrarily varying classical-quantum channel $\mathfrak{W}$ consists of a set of encoders $\{u^{x^n}: \{1, \ldots, J_n\} \to A^n : x^n \in \mathcal{X}^n\}$, and a set of collections of positive-semidefinite operators $\big\{\{D_j^{y^n} : j \in \{1, \ldots, J_n\}\} : y^n \in \mathcal{Y}^n\big\}$ on $H^{\otimes n}$ which fulfills $\sum_{j=1}^{J_n} D_j^{y^n} = \mathrm{id}_{H^{\otimes n}}$ for every $y^n \in \mathcal{Y}^n$.

Definition 9

Let $(X, Y)$ with alphabets $\mathcal{X} \times \mathcal{Y}$ be an arbitrary correlated source. A non-negative number $R$ is an achievable $(X, Y)$-correlation assisted rate with informed jammer under the average error criterion for the AVCQC $\mathfrak{W}$ if for every $\epsilon > 0$, $\delta > 0$, and sufficiently large $n$ there exists an $(X, Y)$-correlation assisted $(n, J_n)$ code such that $\frac{\log J_n}{n} > R - \delta$, and $P_e(\mathcal{C}, n) < \epsilon$,

where $P_e(\mathcal{C}, n)$ is defined as

$$P_e(\mathcal{C}, n) := \max_{g: A^n \to S^n} \ \sum_{x^n, y^n} P^n_{XY}(x^n, y^n)\, \frac{1}{J_n} \sum_{j=1}^{J_n} \mathrm{tr}\Big( W^n_{g(u^{x^n}(j))}\big(u^{x^n}(j)\big)\, \big(\mathrm{id}_{H^{\otimes n}} - D_j^{y^n}\big) \Big).$$

For a given correlated source $(X, Y)$, the supremum of achievable $(X, Y)$-correlation assisted rates of $\mathfrak{W}$ with informed jammer under the average error criterion is called the $(X, Y)$-correlation assisted capacity with informed jammer. Notice that by definition, this capacity is a function of $(X, Y)$.

Definition 10

Let $(X, Y)$ with alphabets $\mathcal{X} \times \mathcal{Y}$ be an arbitrary correlated source. For a sequence of natural numbers $(m_n)_{n \in \mathbb{N}}$, an $(X^{m_n}, Y^{m_n})$-correlation assisted $(n, J_n)$ code for the arbitrarily varying classical-quantum channel $\mathfrak{W}$ consists of a set of encoders $\{u^{x^{m_n}}: \{1, \ldots, J_n\} \to A^n : x^{m_n} \in \mathcal{X}^{m_n}\}$, and a set of collections of positive-semidefinite operators $\big\{\{D_j^{y^{m_n}} : j \in \{1, \ldots, J_n\}\} : y^{m_n} \in \mathcal{Y}^{m_n}\big\}$ on $H^{\otimes n}$ which fulfills $\sum_{j=1}^{J_n} D_j^{y^{m_n}} = \mathrm{id}_{H^{\otimes n}}$ for every $y^{m_n} \in \mathcal{Y}^{m_n}$.

Definition 11

Let $(X, Y)$ with alphabets $\mathcal{X} \times \mathcal{Y}$ be an arbitrary correlated source. A non-negative number $R$ is an achievable $(X^{m_n}, Y^{m_n})$-correlation assisted rate with informed jammer under the average error criterion for the AVCQC $\mathfrak{W}$ if for every $\epsilon > 0$, $\delta > 0$, and sufficiently large $n$ there exists an $(X^{m_n}, Y^{m_n})$-correlation assisted $(n, J_n)$ code such that $\frac{\log J_n}{n} > R - \delta$, and $P_e(\mathcal{C}, n) < \epsilon$,

where $P_e(\mathcal{C}, n)$ is defined as in Definition 9, with the encoders and decoding operators indexed by $x^{m_n}$ and $y^{m_n}$.

The supremum of achievable $(X^{m_n}, Y^{m_n})$-correlation assisted rates of $\mathfrak{W}$ with informed jammer under the average error criterion is called the $(X^{m_n}, Y^{m_n})$-correlation assisted capacity with informed jammer.

III Main Results and Proofs

III-A Quantum Version of Kiefer and Wolfowitz's Results for Classical Channels

[6] showed for classical AVCs the equality of correlation assisted capacity and random assisted capacity (under the average error criterion) for any correlated source $(X, Y)$ with $I(X;Y) > 0$ when the jammer has no side information. The idea of the proof was first to show that the correlation assisted capacity satisfies the positivity conditions of [20]. Then the channel users can create a sufficient amount of common randomness using a negligible amount of bits. For this proof it is essential that the randomness is uniformly distributed.

However, when the jammer has side information about the channel input, the results of [20] cannot be applied since there is no Ahlswede dichotomy (cf. [13]). For classical AVCs with informed jammer, the equality of correlation assisted capacity and random assisted capacity can be proved in the following way: at first we show that the classical correlation assisted capacity for any correlated source satisfies the positivity condition of [24], i.e., there is a hyperplane separating the classical channel outputs into two parts in their vector space. Then, similar to the proof of [6], the channel users can create common randomness using a negligible amount of bits. Kiefer and Wolfowitz showed in [24] the positivity, if their condition is fulfilled, by constructing a classical binary point-to-point channel.

With this approach we can show that the $(X, Y)$-correlation assisted capacity of a classical AVC with informed jammer is equal to

$$\max_{P \in P(\mathcal{X})} \ \min_{U: \mathcal{X} \to P(S)} \ I(P; W_U) \qquad (11)$$

when $I(X;Y) > 0$. Here $W_U(y \mid x) := \sum_{s \in S} U(s \mid x)\, W(y \mid x, s)$. In this paper we skip the proof of the coding theorem (11) for the arbitrarily varying classical channel and directly prove the corresponding result for the arbitrarily varying classical-quantum channel, because the former is a special case contained in the latter, although we have a proof for the former.

One of the main difficulties is that we cannot apply the classical results of Kiefer and Wolfowitz for correlations directly to the set of quantum states, since the quantum states do not form a real vector space. Thus we have to find a new approach to show a quantum version of the classical results of [24], given by Lemma 2 below. Furthermore, we show that the correlation assisted capacity of an AVCQC for any correlated source $(X, Y)$ with $I(X;Y) > 0$ satisfies this positivity condition, by Lemma 1 below (cf. also Remark 1 for an alternative proof). The last step is creating a sufficient amount of common randomness using a negligible amount of bits, similar to the technique in [11] and [15]. In our previous work [13] we delivered the random assisted capacity when the jammer has side information about the channel input. For that proof only a negligible amount of randomness was needed. This, together with our last step, demonstrates the equality of correlation assisted capacity and random assisted capacity for AVCQCs. We show the last step in Theorem 1 below. For our proof it is essential that the randomness is uniformly distributed.

Let $(X, Y)$ with alphabets $\mathcal{X} \times \mathcal{Y}$ be an arbitrary correlated source and $\mathfrak{W} = \{W_s : s \in S\}$ be an AVCQC with input alphabet $A$ and output space $H$. For a mapping $h$ and a conditional probability distribution $U \in \mathcal{U}$, where $\mathcal{U}$ is the set of conditional probability distributions from $A$ to the state set $S$ of $\mathfrak{W}$, we define

(12)
(13)

and

(14)

For a given AVCQC $\mathfrak{W}$ with state set $S$, let

(15)
Lemma 1

If and then we can find and such that

(16)

Proof: Let , , and . We label the input letters in as . Our proof is based on constructions of two functions and (for a properly defined in the next paragraph), satisfying (16) and

(17)

for all . Notice that we will need property (17) for the proof of Theorem 1.

Let $t$ be the smallest integer such that . We shall construct and satisfying (16). To this end we shall group the sequences in $\{0,1\}^t$. But first we need to label them in the following way.

For $w \in \{0, 1, \ldots, t\}$, divide the sequences in $\{0,1\}^t$ with Hamming weight $w$ into two parts of equal size and label them as and , respectively. When the number of such sequences is odd, we denote the remaining sequence by .

Order the labels , in lexicographic order, as , and rewrite and to and , respectively, if is the -th label in the order. That is, for and ( and ), if and only if or and .
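The labelling just described can be made concrete with the following sketch (our own illustration; the names a_labels, b_labels, and the leftover list are assumptions, not the paper's notation): binary sequences of length t are sorted lexicographically, split by Hamming weight, and each weight class is halved, with one leftover sequence whenever the class size is odd.

```python
# Sketch (assumed names) of the labelling step: group binary sequences of
# length t by Hamming weight and split each class into two equal halves,
# keeping the single leftover sequence of every odd-sized class separately.
import itertools

def label_by_weight(t):
    a_labels, b_labels, leftovers = {}, {}, []
    seqs = sorted(itertools.product((0, 1), repeat=t))        # lexicographic order
    for w in range(t + 1):
        cls = [s for s in seqs if sum(s) == w]                 # weight-w class
        half = len(cls) // 2
        a_labels[w] = cls[:half]
        b_labels[w] = cls[half:2 * half]
        if len(cls) % 2 == 1:
            leftovers.append(cls[-1])
    return a_labels, b_labels, leftovers

a, b, rest = label_by_weight(3)
# each a-sequence is paired with a b-sequence of the same Hamming weight
print(a[1], b[1], rest)
```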

Next we assign values of and to the sequences in according to three groups:

Group 1: For , we let , and , respectively. Notice that, as and have the same Hamming weight for all , for every we have

(18)

for (i.e., for all ), in the assignment to the members of group 1.

Group 2: For all , we arbitrarily choose and let and . Again, because and have the same Hamming weight, (18) also holds for the assignment to the members of group 2.

Group 3: Finally, for each , we arbitrarily choose a letter in the alphabet , say , and let