# On the Common Randomness Capacity of a Special Class of Two-way Channels


## I Abstract

In this paper, we study the common randomness (CR) capacity of intertwined two-way channels, namely those whose marginal channel transition probabilities depend also on the signal the receiving terminal itself transmits. We present a few special settings and provide constructive schemes with which the two nodes can agree upon common randomness. We then provide an outer bound on the CR capacity of intertwined receiver-decomposable (RD) two-way channels and a bound on the cardinality of the auxiliary random variables involved. We also show that this outer bound is bounded above by Venkatesan-Anantharam's CR capacity, which makes it tight for the decoupling two-way setting.

## II Introduction

The concept of CR capacity for different settings was introduced and studied by Ahlswede and Csiszár [1]. The CR capacity of decoupling two-way settings, namely those whose general transition function factors as $p(y_1,y_2|x_1,x_2)=p(y_1|x_2)\,p(y_2|x_1)$, has been solved by Venkatesan and Anantharam [2]. Recently, two-way channel transmission capacities have been studied in [3], [4], [5]. We would like to study the CR capacity of intertwined two-way settings, namely those in which the marginal channel toward each terminal depends also on that terminal's own transmitted signal. To get a better understanding of what one means by the maximum rate of generating common randomness in this setting, and also to observe the difference between the CR capacity of a setting and its transmission capacity, we look at an example; at this point, it is unknown how complicated finding its CR capacity can be. Consider two terminals, e.g. Alice and Bob, communicating over a special intertwined two-way setting in the absence of any external random sources. In other words, each symbol transmitted through the channel is only a function of the past symbols received. The setting is as follows. Assume Alice's transmitted and received symbols are $X_1$ and $Y_1$ and Bob's transmitted and received symbols are $X_2$ and $Y_2$, respectively. Assuming all alphabets are binary, the transition functions are defined as follows,

$$Y_1|X_1=0,X_2\sim \mathrm{BSC}(p_1)\qquad Y_1|X_1=1,X_2\sim \mathrm{BSC}(p_2)\qquad(1)$$
$$Y_2|X_1,X_2=0\sim \mathrm{BSC}(q_1)\qquad Y_2|X_1,X_2=1\sim \mathrm{BSC}(q_2)\qquad(2)$$

That is, $Y_1$ is $X_2$ passed through a BSC whose crossover probability is selected by Alice's own transmitted symbol $X_1$, and similarly $Y_2$ is $X_1$ passed through a BSC whose crossover probability is selected by $X_2$.

We now examine several cases:

• $p_1=p_2=0$ and $q_1=0$, $q_2=\frac{1}{2}$: In this case, at the end of each block, Bob can receive some randomness, depending on whether he has sent $1$ or not at the beginning of that block, while Alice receives no randomness. The channel from Bob to Alice is always a $\mathrm{BSC}(0)$, so set, say, $X_1=0$ at all blocks. Also, to start, set $X_2=1$ in the first block, and in blocks $i\geq 2$ Bob sends whatever he receives back to Alice using the always-noiseless backward channel. With this coding scheme, it is clear that the first random bit $0$ received by Bob stops the random communication from then on. It can be seen that the average number of random bits per step generated this way is

$$\frac{\sum_{i=1}^{n} i\left(\frac{1}{2}\right)^{i}}{n}\qquad(3)$$

which vanishes as $n\to\infty$. However, we can use an even more adaptive coding scheme. In this scheme, Alice always sends $X_1=0$ and Bob starts the first block by sending $X_2=1$. If Bob receives $1$, he sends it back in the next block to agree on that bit with Alice and can still receive a random bit at the end of the second block. This scheme continues until Bob receives $0$. When he receives $0$ in block $N_1$, he sends it back to Alice in block $N_1+1$ and starts a new round of communication by sending $1$. Notice that $N_1$, i.e. the first time Bob receives $0$, is distributed geometrically, and $\frac{N_1}{N_1+1}$ common random bits per step are generated in the first $N_1+1$ blocks. This scheme continues, and in the next set of $N_2+1$ blocks, $N_2$ random bits per that span are generated and agreed upon. Call each such $N_i+1$ blocks of communication a stage. Communication takes place in $n$ stages, and the number of common random bits per step generated in $n$ stages is

$$\frac{Z_1+Z_2+\cdots+Z_n}{n}\qquad(4)$$

where $Z_i\triangleq \frac{N_i}{N_i+1}$. Since the $Z_i$'s are clearly i.i.d., if the number of stages is large, the law of large numbers states that $E(Z_1)$ common random bits per step are generated, and it is easy to see that,

$$E(Z_i)=E\left(\frac{N_i}{N_i+1}\right)\overset{(a)}{=}\lim_{n\to\infty}\sum_{j=1}^{n}\frac{j}{j+1}\left(\frac{1}{2}\right)^{j}=\sum_{j=1}^{\infty}\frac{j}{j+1}\left(\frac{1}{2}\right)^{j}=2-2\ln 2\approx 0.613706.\qquad(5)$$

where $(a)$ follows from the fact that each $N_i$ is distributed geometrically with parameter $\frac{1}{2}$. Notice that the sum-rate transmission capacity of this setting is at least $1$ bit per channel use, since the channel from Bob to Alice is noiseless.
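As a numerical sanity check on (5) (a sketch, under our reading above that the stage lengths $N_i$ are geometric with parameter $\frac{1}{2}$; the helper names are ours), the series can be summed and compared against the closed form $2-2\ln 2$, and the stages can also be simulated directly:

```python
import math
import random

# Partial sum of the series in (5): sum_{j>=1} (j/(j+1)) * (1/2)^j
s = sum(j / (j + 1) * 0.5 ** j for j in range(1, 200))

# Closed form: 1 - sum_{j>=1} (1/2)^j / (j+1) = 1 - (2*ln2 - 1) = 2 - 2*ln2
closed_form = 2 - 2 * math.log(2)
print(round(s, 6))                    # 0.613706
assert abs(s - closed_form) < 1e-12

# Monte Carlo over stages: Z_i = N_i/(N_i+1), with N_i ~ Geometric(1/2)
# (N_i is the block index at which the first 0 arrives).
random.seed(0)

def stage_rate():
    n = 1
    while random.random() < 0.5:      # another 1 arrives w.p. 1/2
        n += 1
    return n / (n + 1)

trials = 200_000
estimate = sum(stage_rate() for _ in range(trials)) / trials
print(abs(estimate - closed_form) < 0.01)   # True
```

The closed form follows from $\sum_{j\ge1} x^j/(j+1) = (-\ln(1-x)-x)/x$ evaluated at $x=\frac12$.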

• $p_1=p_2=\frac{1}{2}$ and $q_1=q_2=0$: In this case, set $X_{1,i+1}=Y_{1,i}$; as a result, after each block of communication $Y_1$ is a random bit distributed according to $\mathrm{Bernoulli}(\frac{1}{2})$, and terminal 1, Alice, can send it back noiselessly to terminal 2, Bob. Therefore, after $n+1$ symbol transmissions, Alice and Bob can agree upon $n$ uniformly distributed common bits, and thus $1$ bit per step of communication is achievable. Clearly, $1$ bit per step of communication is the maximum amount of CR that Alice and Bob can agree upon, and thus $C_{\mathrm{CR}}=1$ bit/step.

• $p_1=0$, $p_2=\frac{1}{2}$ and $q_1=0$, $q_2=\frac{1}{2}$: In this case, Alice and Bob communicate in blocks of two. In the first block, T1 sends $X_1=1$ to set the channel toward her as a $\mathrm{BSC}(\frac{1}{2})$, so that she receives a random bit. In the next block, Alice, who has received the randomly generated bit, retransmits that bit back to Bob, and Bob sends $X_2=0$ to open up a noiseless medium through which he receives the random bit generated in the first block. What T2 sends in the first block is irrelevant. With this scheme, after $2n$ transmissions, Alice and Bob can generate and agree upon $n$ bits of randomness, which achieves a CR rate of $\frac{1}{2}$ bit per step.

A clear defect in this scheme is Bob's inactiveness in the first block of each transmission cycle. Another approach could be this: in the first block, Alice and Bob each send symbol $1$ and each receive a (possibly different) random bit. To make the generated random bits common, in the next two blocks Alice and Bob take turns sending $0$ to open up the lane for their partner to deliver the random bit noiselessly. With this scheme, $2$ bits are generated and agreed upon in $3$ steps of communication, and thus the achievable CR rate is improved to $\frac{2}{3}$ bit per step. Notice that an outer bound on the CR capacity of this setting is $0.75$ bit per step.
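The improved three-block scheme can be sketched in a small simulation. The parameter reading used below (transmitting $1$ randomizes the channel toward the transmitting terminal, transmitting $0$ makes it noiseless) is one consistent interpretation of this case, and the helper names are ours:

```python
import random

random.seed(1)

def bsc(bit, p):
    """Pass `bit` through a BSC with crossover probability p."""
    return bit ^ (random.random() < p)

# Assumed intertwined channel: a terminal's received symbol passes through
# BSC(1/2) when that terminal transmitted 1, and through BSC(0) when it sent 0.
def channel(x1, x2):
    y1 = bsc(x2, 0.5 if x1 == 1 else 0.0)  # Alice's received symbol
    y2 = bsc(x1, 0.5 if x2 == 1 else 0.0)  # Bob's received symbol
    return y1, y2

alice_cr, bob_cr, steps = [], [], 0
for _ in range(1000):           # 1000 three-block stages
    # Block 1: both send 1, so each receives a fresh random bit
    a_bit, b_bit = channel(1, 1)
    # Block 2: Bob sends 0 (noiseless toward him), Alice forwards her bit
    _, b_rx = channel(a_bit, 0)
    # Block 3: Alice sends 0 (noiseless toward her), Bob forwards his bit
    a_rx, _ = channel(0, b_bit)
    alice_cr += [a_bit, a_rx]   # Alice's view: her bit, then Bob's bit
    bob_cr += [b_rx, b_bit]     # Bob's view: Alice's bit, then his bit
    steps += 3

assert alice_cr == bob_cr       # the generated randomness is common
print(len(alice_cr) / steps)    # 2/3 bit per step
```

Each stage yields two agreed bits in three channel uses, matching the $\frac{2}{3}$ rate claimed above.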

• $p_1=\frac{1}{2}$, $p_2=0$ and $q_1=\frac{1}{2}$, $q_2=0$: In this case, feed the outputs back as inputs at all times; that is, in every block set $X_{1,i}=Y_{1,i-1}$ and $X_{2,i}=Y_{2,i-1}$. To begin the show, set $X_{1,1}=X_{2,1}=0$. As a result, $Y_{1,1}$ is $0$ w.p. $\frac{1}{2}$ and $1$ w.p. $\frac{1}{2}$; hence, in the second block, the channel switches to a $\mathrm{BSC}(0)$ w.p. $\frac{1}{2}$ and stays at $\mathrm{BSC}(\frac{1}{2})$ w.p. $\frac{1}{2}$. Similarly, the channel transition function wavers whenever it is a $\mathrm{BSC}(\frac{1}{2})$. The whole setting is depicted in Fig. 1. So random bits are generated in this case at the entropy rate of the output process and can be agreed upon in the other direction. But are they uniformly distributed? Constructively, it is not clear how we can achieve CR in this setting. It turns out that the entropy rate of the output might be the CR of this setting.

Now let us define the problem formally.

## III Definition

We begin with the definition of a special class of intertwined two-way channels,

###### Definition 1

A special class of two-way channels, depicted in Fig. 2, which we call the receiver-decomposable (RD) two-way channel, is the class in which the channel transition matrix factors as follows,

$$p(y_1,y_2|x_1,x_2)=p(y_1|x_1,x_2)\,p(y_2|x_1,x_2)$$
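To make the RD property concrete, here is a small sketch (the array layout `P[x1][x2][y1][y2]` and the helper names are our own, not notation from the paper) that tests whether a given transition law is RD, i.e., whether $Y_1$ and $Y_2$ are conditionally independent given $(X_1,X_2)$:

```python
import itertools

def is_rd(P, tol=1e-9):
    """P[x1][x2][y1][y2] = p(y1, y2 | x1, x2).
    Returns True iff the law factors as p(y1|x1,x2) * p(y2|x1,x2)."""
    for x1, x2 in itertools.product(range(len(P)), range(len(P[0]))):
        Q = P[x1][x2]
        # Marginals p(y1|x1,x2) and p(y2|x1,x2)
        py1 = [sum(row) for row in Q]
        py2 = [sum(Q[y1][y2] for y1 in range(len(Q))) for y2 in range(len(Q[0]))]
        for y1, y2 in itertools.product(range(len(Q)), range(len(Q[0]))):
            if abs(Q[y1][y2] - py1[y1] * py2[y2]) > tol:
                return False
    return True

# Example: a binary RD channel built as a product of two BSC-style marginals
def bsc_row(x, p):               # p(y | effective input x) for a BSC(p)
    return [1 - p if y == x else p for y in (0, 1)]

P = [[[[bsc_row(x2, 0.1)[y1] * bsc_row(x1, 0.3)[y2]
        for y2 in (0, 1)] for y1 in (0, 1)]
      for x2 in (0, 1)] for x1 in (0, 1)]
print(is_rd(P))                  # True: built as a product, so it factors
```

Replacing any inner matrix by a correlated pair such as `[[0.5, 0.0], [0.0, 0.5]]` (outputs forced equal) makes `is_rd` return `False`, since that law does not factor.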

We desire to find the maximum amount of common randomness the two terminals can agree upon after a block of $n$ uses of the channel. In other words, let the terminals communicate through the channel in blocks; after gathering the corresponding outputs, each of them can compute a common random output. This random output takes its values from a set, say $[1:K]$. The supremum of the logarithm of the cardinality of this set divided by the number of communication steps, i.e. $\frac{\log K}{n}$, is the common randomness capacity of the RD two-way channel. To make it more precise, let $X_1$, $Y_1$, $X_2$, and $Y_2$ take values from finite sets $\mathcal{X}_1$, $\mathcal{Y}_1$, $\mathcal{X}_2$, and $\mathcal{Y}_2$, respectively. Also let $(f,g)$ be a strategy for transmission by the terminals defined as follows,

$$f=(f_1,f_2,\ldots,f_n)$$

and

$$g=(g_1,g_2,\ldots,g_n)$$

The terminals then communicate over a certain number of steps, say $n$. In the first step, terminal 1 (T1) sends $X_{1,1}=f_1$ and T2 sends $X_{2,1}=g_1$. In step $i$, T1 constructs, from the symbols received in the previous steps, i.e. $Y_1^{i-1}$, the symbol $X_{1,i}=f_i(Y_1^{i-1})$. Similarly, T2 sends $X_{2,i}=g_i(Y_2^{i-1})$. Come the end of the communication steps, the terminals have the sequences of their received signals, namely $Y_1^n$ and $Y_2^n$. T1 and T2 map $Y_1^n$ and $Y_2^n$ via $\Phi$ and $\Psi$ to $[1:K]$ for some $K$, with $\{\Phi(Y_1^n)\neq\Psi(Y_2^n)\}$ being the error event in the common randomness generation. We would like to design the strategy such that,

$$\frac{1-\lambda}{K}\leq \Pr\{\Phi(Y_1^n)=\Psi(Y_2^n)=l\}\leq\frac{1+\lambda}{K}\quad\text{for each } l=1,2,\ldots,K.\qquad(6)$$

This is called an $(n,K,\lambda)$ strategy to generate common randomness. We say that a non-negative number $R$ is an achievable common randomness rate if there exists a sequence of $(n,K_n,\lambda_n)$ strategies such that,

$$\liminf_{n\to\infty}\frac{\log K_n}{n}\geq R\quad\text{and}\quad\lim_{n\to\infty}\lambda_n=0.\qquad(7)$$

We also say that a non-negative number $R$ is an outer bound on the common randomness capacity if for every sequence of $(n,K_n,\lambda)$ strategies we have,

$$\lim_{\lambda\to 0}\limsup_{n\to\infty}\frac{\log K_n}{n}\leq R.\qquad(8)$$

The rate $R$ is said to be a strong outer bound on the common randomness capacity of a setting if for every $(n,K_n,\lambda)$ strategy satisfying (6) we have

$$\limsup_{n\to\infty}\frac{\log K_n}{n}\leq R\quad\text{for every }\lambda\geq 0.\qquad(9)$$

The strong converse, from the definition, only assures a stronger form of convergence between the inner bound and the outer bound; it by no means necessarily gives us a better outer bound.

Notice that in an $(n,K,\lambda)$ protocol, the decoding functions $\Phi$ and $\Psi$ could be taken as the following mappings,

$$\Phi:\ \mathcal{X}_1^n\times\mathcal{Y}_1^n\to[1:K]\cup\{e\}\qquad\text{and}\qquad\Psi:\ \mathcal{X}_2^n\times\mathcal{Y}_2^n\to[1:K]\cup\{e\}\qquad(10)$$

However, it is clear that since each $X_{j,i}$, $j=1,2$, is a function of $Y_j^{i-1}$, mapping only from the sequence of received outputs is sufficient.

Now to find an outer bound on the CR capacity of the RD two-way setting introduced in Definition 1, let us first bound some entropies. Starting with the joint entropy of the outputs, we have,

$$\begin{aligned} H(Y_1^n,Y_2^n) &= \sum_{i=1}^n H(Y_{1,i},Y_{2,i}\mid Y_1^{i-1},Y_2^{i-1})\\ &\overset{(a)}{=} \sum_{i=1}^n H(Y_{1,i},Y_{2,i}\mid Y_1^{i-1},Y_2^{i-1},X_{1,i},X_{2,i})\\ &\overset{(b)}{=} \sum_{i=1}^n H(Y_{1,i},Y_{2,i}\mid X_{1,i},X_{2,i})\\ &\overset{(c)}{=} \sum_{i=1}^n H(Y_{1,i}\mid X_{1,i},X_{2,i})+H(Y_{2,i}\mid X_{1,i},X_{2,i})\\ &= n\big(H(Y_1|X_1,X_2)+H(Y_2|X_1,X_2)\big) \end{aligned}\qquad(11)$$

where $(a)$ follows since $X_{j,i}$, $j=1,2$, is a function of $Y_j^{i-1}$, $(b)$ follows from the memorylessness of the setting, and $(c)$ follows from the receiver-decomposability (RD) property of the channel. We now bound the marginal entropy of each output,

$$\begin{aligned} H(Y_1^n) &= \sum_{i=1}^n H(Y_{1,i}\mid Y_1^{i-1})\\ &\overset{(a)}{=} \sum_{i=1}^n H(Y_{1,i}\mid Y_1^{i-1},X_{1,i})\\ &= \sum_{i=1}^n H(Y_{1,i}\mid Y_1^{i-1},X_{1,i})+H(Y_{1,i}\mid X_{1,i},X_{2,i})-H(Y_{1,i}\mid X_{1,i},X_{2,i})\\ &= \sum_{i=1}^n H(Y_{1,i}\mid X_{1,i},X_{2,i})+I(X_{2,i};Y_{1,i}\mid X_{1,i},Y_1^{i-1})\\ &= \sum_{i=1}^n H(Y_{1,i}\mid X_{1,i},X_{2,i})+I(X_{2,i};Y_{1,i}\mid X_{1,i},U_i)\\ &= n\big(H(Y_1|X_1,X_2)+I(X_2;Y_1|X_1,U)\big) \end{aligned}\qquad(12)$$

where $(a)$ follows since $X_{1,i}$ is a function of $Y_1^{i-1}$, and in the penultimate equality we have defined $U_i\triangleq Y_1^{i-1}$. Similarly,

$$\begin{aligned} H(Y_2^n) &= \sum_{i=1}^n H(Y_{2,i}\mid Y_2^{i-1})\\ &= \sum_{i=1}^n H(Y_{2,i}\mid Y_2^{i-1},X_{2,i})\\ &= \sum_{i=1}^n H(Y_{2,i}\mid X_{1,i},X_{2,i})+I(X_{1,i};Y_{2,i}\mid X_{2,i},Y_2^{i-1})\\ &= \sum_{i=1}^n H(Y_{2,i}\mid X_{1,i},X_{2,i})+I(X_{1,i};Y_{2,i}\mid X_{2,i},V_i)\\ &= n\big(H(Y_2|X_1,X_2)+I(X_1;Y_2|X_2,V)\big) \end{aligned}\qquad(13)$$

where $V_i\triangleq Y_2^{i-1}$. Notice that this selection of the auxiliary RVs induces a distribution of the form $p(u,v)\,p(x_1|u)\,p(x_2|v)$ and thus the Markov chains $X_1-U-(X_2,V)$ and $X_2-V-(X_1,U)$.

We also have,

$$I(Y_1^n;Y_2^n) = H(Y_1^n)+H(Y_2^n)-H(Y_1^n,Y_2^n) = n\big(I(X_2;Y_1|X_1,U)+I(X_1;Y_2|X_2,V)\big)\qquad(14)$$

To bound the cardinalities of $\mathcal{U}$ and $\mathcal{V}$, fix the conditionals $p(x_1|u)$ and $p(x_2|v)$. Take the following continuous functions of $p(v|u)$,

$$g_j(p(v|u))=\begin{cases} p(x_1,x_2|u)=\sum_{v}p(v|u)\,p(x_1|u)\,p(x_2|v), & j=1,2,\ldots,|\mathcal{X}_1||\mathcal{X}_2|-1\\[2pt] H(Y_1|X_1,U=u), & j=|\mathcal{X}_1||\mathcal{X}_2|\\[2pt] p(v,x_2,y_2|u)=\sum_{x_1}p(v|u)\,p(x_1|u)\,p(x_2|v)\,p(y_2|x_1,x_2), & j=|\mathcal{X}_1||\mathcal{X}_2|+1 \end{cases}$$

The first set of functions preserves $p(x_1,x_2|u)$ which, for the fixed channel transition, preserves $H(Y_1|X_1,X_2)$ and $H(Y_2|X_1,X_2)$. The second function preserves $H(Y_1|X_1,U)$, and the last function preserves $p(v,x_2,y_2|u)$ and thus $I(X_1;Y_2|X_2,V)$. Therefore it suffices to take $|\mathcal{U}'|\leq|\mathcal{X}_1||\mathcal{X}_2|+1$, where $U'$ denotes the RV corresponding to $U$ after this replacement. Now, for fixed $p(u')$, take the following functions of $p(x_1,x_2|u',v')$,

$$h_j(p(x_1,x_2|u',v'))=\begin{cases} p(x_1,x_2|u',v'), & j=1,2,\ldots,|\mathcal{X}_1||\mathcal{X}_2|-1\\[2pt] H(Y_2|X_2,V'=v'), & j=|\mathcal{X}_1||\mathcal{X}_2| \end{cases}$$

Similarly, it suffices to take $|\mathcal{V}''|\leq|\mathcal{X}_1||\mathcal{X}_2|$ to satisfy $I(X_1;Y_2|X_2,V'')=I(X_1;Y_2|X_2,V')$. However, the aforementioned Markov chains are still not necessarily satisfied by these RVs. An option for constructing new RVs that satisfy the aforementioned Markov chains, which may not be necessary, is to define,

$$V'''\triangleq(V'',X_2)\qquad U''\triangleq(U',X_1)$$

so that the Markov chains are satisfied as well; moreover,

$$\begin{aligned} I(X_2;Y_1|X_1,U'') &= I(X_2;Y_1|X_1,U')=I(X_2;Y_1|X_1,U)\\ I(X_1;Y_2|X_2,V''') &= I(X_1;Y_2|X_2,V'')=I(X_1;Y_2|X_2,V')=I(X_1;Y_2|X_2,V) \end{aligned}$$

Thus we can take

$$|\mathcal{U}|\leq|\mathcal{X}_1|\big(|\mathcal{X}_1||\mathcal{X}_2|+1\big)\quad\text{and}\quad|\mathcal{V}|\leq|\mathcal{X}_1||\mathcal{X}_2|^2\qquad(15)$$

Now notice that from (6) we have,

$$\Pr\{\Phi=\Psi\} = \Pr\Big\{\bigcup_{l\in[1:K]}\{\Phi=\Psi=l\}\Big\} = \sum_{l=1}^{K}\Pr\{\Phi=\Psi=l\} \geq K\cdot\frac{1-\lambda}{K} = 1-\lambda\qquad(16)$$

Therefore,

 P(Φ≠Ψ)≤λ (17)

Now from Fano’s inequality we have,

 max{H(Ψ|Φ),H(Φ|Ψ)}≤1+λlog(K) (18)

We also have,

$$\begin{aligned} H(\Phi,\Psi) &\overset{(a)}{\geq} -\sum_{l=1}^{K}\Pr\{\Phi=\Psi=l\}\log\big(\Pr\{\Phi=\Psi=l\}\big)\\ &\overset{(b)}{\geq} \sum_{l=1}^{K}\frac{1-\lambda}{K}\log\Big(\frac{K}{1+\lambda}\Big)\\ &\geq (1-\lambda)\log(K)-1 \end{aligned}\qquad(19)$$

where $(a)$ follows by keeping only the terms of the joint entropy corresponding to the event $\{\Phi=\Psi\}$ and $(b)$ follows from (6). From (18) and (19) we have

 min{H(Ψ),H(Φ)}≥(1−2λ)log(K)−2 (20)

Thus, from (18) and (20), we have for every $(n,K,\lambda)$ strategy,

$$-3+(1-3\lambda)\log(K) \leq H(\Phi,\Psi)\leq H(Y_1^n,Y_2^n)\qquad(21)$$
$$-2+(1-2\lambda)\log(K) \leq \min\{H(\Phi),H(\Psi)\}\leq\min\{H(Y_1^n),H(Y_2^n)\}\qquad(22)$$
$$-1+(1-\lambda)\log(K) \leq I(Y_1^n;Y_2^n)\qquad(23)$$
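The entropy bounds (18)-(20) behind these inequalities can be sanity-checked numerically. Below is a sketch with a hand-built joint pmf for $(\Phi,\Psi)$ satisfying (6); the construction (zero off-diagonal mass, half the diagonal atoms at each extreme of (6)) is ours, chosen only for illustration:

```python
import math

K, lam = 8, 0.1
# A joint pmf for (Phi, Psi) satisfying (6): half the diagonal atoms get
# (1-lam)/K, half get (1+lam)/K; all off-diagonal mass is zero.
diag = [(1 - lam) / K if l % 2 == 0 else (1 + lam) / K for l in range(K)]
assert abs(sum(diag) - 1.0) < 1e-12                            # valid pmf
assert all((1 - lam) / K <= p <= (1 + lam) / K for p in diag)  # condition (6)

H = lambda ps: -sum(p * math.log2(p) for p in ps if p > 0)
H_joint = H(diag)            # H(Phi, Psi); here Phi = Psi always
H_cond = 0.0                 # H(Psi|Phi) = H(Phi|Psi) = 0 for this pmf

logK = math.log2(K)
assert H_cond <= 1 + lam * logK              # Fano bound (18)
assert H_joint >= (1 - lam) * logK - 1       # bound (19)
assert H_joint >= (1 - 2 * lam) * logK - 2   # bound (20); H(Phi) = H_joint here
print(round(H_joint, 3))                     # close to log2(8) = 3 bits
```

All three bounds hold with room to spare for this pmf, as they must for any distribution satisfying (6).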

and thus we have the following theorem. For instance, combining (21) with (11), for every $(n,K,\lambda)$ strategy we have,

$$-3+(1-3\lambda)\log(K) \leq H(Y_1^n,Y_2^n)=n\big(H(Y_1|X_1,X_2)+H(Y_2|X_1,X_2)\big)\qquad(24)$$

and thus,

$$-\frac{3}{n}+(1-3\lambda)\frac{\log(K)}{n} \leq H(Y_1|X_1,X_2)+H(Y_2|X_1,X_2)\qquad(25)$$
###### Theorem 2

The CR capacity of the intertwined two-way setting as described in Definition 1 is dominated by,

 R≤min{A,B,C,D}

where

$$\begin{aligned} A &\triangleq H(Y_1|X_1,X_2)+H(Y_2|X_1,X_2)\\ B &\triangleq H(Y_1|X_1,X_2)+I(X_2;Y_1|X_1,U)\\ C &\triangleq H(Y_2|X_1,X_2)+I(X_1;Y_2|X_2,V)\\ D &\triangleq I(X_2;Y_1|X_1,U)+I(X_1;Y_2|X_2,V) \end{aligned}$$

for some distribution of the form $p(u,v)\,p(x_1|u)\,p(x_2|v)\,p(y_1|x_1,x_2)\,p(y_2|x_1,x_2)$ with $|\mathcal{U}|\leq|\mathcal{X}_1|(|\mathcal{X}_1||\mathcal{X}_2|+1)$ and $|\mathcal{V}|\leq|\mathcal{X}_1||\mathcal{X}_2|^2$.

Notice that a time-sharing RV is used in achievable rate regions to convexify (and possibly enlarge) the region. Here, however, we do not need to make the outer bound convex or to enlarge it; hence, there is no point in using a time-sharing RV in the outer bound. Now let us compare this outer bound with Venkatesan-Anantharam's CR capacity,

$$\begin{aligned} A &= H(Y_1|X_1,X_2)+H(Y_2|X_1,X_2)=H(Y_1|X_2)+H(Y_2|X_1) &(26)\\ B &= H(Y_1|X_1,X_2)+I(X_2;Y_1|X_1,U)\leq H(Y_1|X_2)+I(X_2;Y_1) &(27)\\ C &= H(Y_2|X_1,X_2)+I(X_1;Y_2|X_2,V)\leq H(Y_2|X_1)+I(X_1;Y_2) &(28)\\ D &= I(X_2;Y_1|X_1,U)+I(X_1;Y_2|X_2,V)\leq I(X_2;Y_1)+I(X_1;Y_2) &(29) \end{aligned}$$

Therefore, for the case of $p(y_1,y_2|x_1,x_2)=p(y_1|x_2)\,p(y_2|x_1)$, our outer bound is bounded above as follows. From (26) and (27) we have

 R ≤ H(Y1|X2)+min{H(Y2|X1),I(X2;Y1)} (30)

and from (28) and (29) we have,

 R ≤ I(X1;Y2)+min{H(Y2|X1),I(X2;Y1)} (31)

Therefore, from (30) and (31) we have,

 R ≤ min{H(Y1|X2),I(X1;Y2)}+min{H(Y2|X1),I(X2;Y1)} (32)

which is the CR capacity of the decoupling two-way setting of Venkatesan and Anantharam [2]. Therefore, our outer bound is tight in that case, and the CR capacity of an RD two-way setting that decouples is no more than the CR capacity of the corresponding decoupling setting.
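As a numerical illustration of (32) (our own sketch, not from the paper: both one-way channels are taken to be $\mathrm{BSC}(0.1)$ with independent Bernoulli inputs), a grid search over input distributions evaluates the right-hand side of (32):

```python
import math

def h(p):                     # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, p):               # crossover of Bern(a) input passed through BSC(p)
    return a * (1 - p) + (1 - a) * p

p = 0.1                       # both one-way channels are BSC(0.1) (assumption)
best = 0.0
grid = [i / 100 for i in range(101)]
for a in grid:                # P(X1 = 1)
    for b in grid:            # P(X2 = 1)
        term1 = min(h(p), h(conv(a, p)) - h(p))   # min{H(Y1|X2), I(X1;Y2)}
        term2 = min(h(p), h(conv(b, p)) - h(p))   # min{H(Y2|X1), I(X2;Y1)}
        best = max(best, term1 + term2)
print(round(best, 4))         # 0.938, i.e. 2*h(0.1), attained at uniform inputs
```

For this symmetric example each term is capped by the noise entropy $h(p)$, so the bound is maximized at uniform inputs, giving $2h(0.1)\approx 0.938$ bit per step.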

## References

• [1] R. Ahlswede and I. Csiszár, "Common randomness in information theory and cryptography. Part II: CR capacity," IEEE Trans. Inf. Theory, vol. 44, no. 1, Jan. 1998.
• [2] S. Venkatesan and V. Anantharam, "The common randomness capacity of a pair of independent discrete memoryless channels," IEEE Trans. Inf. Theory, vol. 44, no. 1, Jan. 1998.
• [3] J.-J. Weng, L. Song, F. Alajaji, and T. Linder, "Capacity of two-way channels with symmetry properties," IEEE Trans. Inf. Theory, to appear.
• [4] A. Naghizadeh, S. Berenjian, B. Razeghi, S. Shahanggar, and N. R. Pour, "Preserving receiver's anonymity for circular structured P2P networks," in Proc. Annual IEEE Consumer Communications and Networking Conference (CCNC), July 2015.
• [5] S. Hajizadeh and N. Devroye, "Dependence balance outer bounds for the discrete memoryless two-way multiple access broadcast channel," in Proc. Allerton Conf. Commun., Control and Comp., Sep.-Oct. 2014.