I Introduction
The relay channel was introduced by van der Meulen [1] and it represents the simplest network model: a source wants to communicate with a destination with the help of a relay. As schematized in Figure 1, the source sends to the relay and to the destination, the relay receives and sends to the destination, and the destination receives from the source and from the relay. Hence, the relay channel has a broadcast component that goes from the source to the relay and to the destination, and a multiple access component that goes from the source and from the relay to the destination.
In their seminal work [2], Cover and El Gamal introduced two basic achievability lower bounds, namely, decode-and-forward and compress-and-forward, and a general upper bound, namely, the cut-set bound. Since then, many other lower bounds have been discovered, namely, amplify-and-forward, quantize-map-and-forward, compute-and-forward, noisy network coding, hybrid coding, etc. [3, 4, 5, 6, 7]. Furthermore, in most of the cases where the capacity is known, the converse is given by the cut-set bound [2, 8, 9, 10]. However, the cut-set bound has been shown not to be tight for some specific examples of relay channels [11, 12]. New upper bounds tighter than the cut-set bound were recently developed in [13, 14, 15, 16]. For a literature review on the relay channel, the interested reader is referred to [17, Chapter 16] and [18, Chapter 9].
Recent works have aimed at providing practical schemes using the paradigm of polar coding, introduced by Arıkan in the seminal paper [19]. In particular, polar coding schemes for decode-and-forward are proposed in [20, 21, 22, 23] for degraded relay channels, where the channel outputs form a Markov chain. Furthermore, a polar coding scheme for compress-and-forward is presented in [22] for relay channels with orthogonal receiver components. Finally, polar coding schemes for decode-and-forward and compress-and-forward are described in [24] for general relay channels. Soft decode-and-forward relaying strategies using LDPC codes are developed and analyzed in [25].
In this paper, we focus on relay channels with orthogonal receiver components, also known as primitive relay channels. As schematized in Figure 2, the destination receives separately from the source and from the relay. In other words, the multiple access component that for the general relay channel goes from the source and from the relay to the destination is replaced by two parallel channels. Consequently, without loss of generality, we can assume that the link between relay and destination is a noiseless channel of capacity $C_0$. Indeed, the relay can always use an optimal point-to-point scheme to communicate with the destination. We also assume that the relay is full-duplex, in the sense that it can listen and transmit simultaneously. Despite this simplification, the capacity of the primitive relay channel is not known in general. For a review of coding techniques for the primitive relay channel, the interested reader is referred to [26].
Our main contribution consists in presenting a new coding scheme that, by combining compress-and-forward and decode-and-forward, strictly improves on both these lower bounds. To do so, we code over pairs of blocks and employ a chaining construction: in the first block, we perform a variant of compress-and-forward in which the relay is not required to send the whole compressed sequence to the destination; in the second block, we perform decode-and-forward and the relay sends to the destination the new information bits plus the remaining part of the compressed sequence from the previous block. The idea of chaining was originally introduced in [27] to construct universal codes and in [28] to achieve strong security guarantees on degraded wiretap channels. Since then, it has been exploited in several non-standard scenarios (e.g., broadcast channels [29, 30], asymmetric channels [31, 32], wiretap channels [33]). Our coding paradigm can be implemented with any codes that are suitable for compress-and-forward and decode-and-forward. As such, polar codes represent an appealing choice [24]: their encoding/decoding complexity is $O(n \log n)$ and their error probability scales roughly as $2^{-\sqrt{n}}$, where $n$ is the block length.
The remainder of this paper is organized as follows. In Section II, we review the existing upper bounds (cut-set and its improvements) and lower bounds (direct transmission, decode-and-forward, and compress-and-forward) for the primitive relay channel and we specialize them to the case of the erasure relay channel. In Section III, we state our new lower bound and we show how to achieve it. In Section IV, we compare the rates achievable by our new strategy with the existing upper and lower bounds for the special case of the erasure relay channel. In Section V, we provide some concluding remarks.
II Existing Upper and Lower Bounds
We assume that all channels are binary memoryless and symmetric (BMS). We denote by $h_2(\cdot)$ the binary entropy function. The source input is denoted by $X$, the relay observation by $Y_1$, and the destination observation by $Y$; we denote by $\mathcal{X}$, $\mathcal{Y}_1$, and $\mathcal{Y}$ the alphabets associated with $X$, $Y_1$, and $Y$, respectively.
Throughout the paper, we will use as a running example the special case of the erasure relay channel. As schematized in Figure 3, in the erasure relay channel the links between source and destination and between source and relay are binary erasure channels (BECs) with erasure probabilities $\varepsilon_1$ and $\varepsilon_2$, respectively.
II-A Cut-Set Upper Bound
For the general relay channel with relay input $X_1$, the cut-set upper bound on the achievable rate is given by [17, Theorem 16.1]
(1) $C \le \max_{p(x, x_1)} \min\{ I(X, X_1; Y),\ I(X; Y, Y_1 \mid X_1) \}.$
For the case of the primitive relay channel, the cut-set bound specializes to [26, Proposition 1]
(2) $C \le \max_{p(x)} \min\{ I(X; Y) + C_0,\ I(X; Y, Y_1) \}.$
For the special case of the erasure relay channel, the cut-set bound can be rewritten as
(3) $C \le \min\{ 1 - \varepsilon_1 + C_0,\ 1 - \varepsilon_1 \varepsilon_2 \},$
where both mutual information terms are maximized by a uniform input.
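To make the two cuts concrete, the bound can be evaluated numerically. The sketch below assumes uniform inputs and uses our own variable names (not the paper's): eps1 for the source-destination erasure probability, eps2 for the source-relay one, and c0 for the capacity of the relay-destination link.

```python
def cutset_bound(eps1, eps2, c0):
    """Cut-set upper bound for the binary erasure relay channel.

    The multiple-access cut gives 1 - eps1 + c0 (direct link plus relay
    link), while the broadcast cut gives 1 - eps1 * eps2 (the input is
    lost only if both BECs erase it).
    """
    return min(1 - eps1 + c0, 1 - eps1 * eps2)
```

For instance, `cutset_bound(0.5, 0.5, 1.0)` evaluates to 0.75, where the broadcast cut is the binding one.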
II-B Improvements on the Cut-Set Upper Bound
For the case of the primitive relay channel, an upper bound demonstrating an explicit gap to the cut-set bound was presented in [13]. Furthermore, two new upper bounds that are generally tighter than the cut-set bound are proposed in [14] for the symmetric primitive relay channel, in which $Y$ and $Y_1$ are conditionally identically distributed given $X$. The results of [14] are extended to the non-symmetric case and to the Gaussian case in [15] and [16], respectively.
Let us now state the result in [15, Theorem 3.1], which provides an extension of the first bound of [14]. If a rate is achievable, then there exists some and such that
(4) 
for any random variable
with the same conditional distribution as given . The evaluation of the term that gives the tightest bound is simple in the following special cases:
Symmetric ($Y$ and $Y_1$ conditionally identically distributed given $X$).

Degraded ($Y$ a stochastically degraded version of $Y_1$).

Reversely degraded ($Y_1$ a stochastically degraded version of $Y$).
For the special case of the erasure relay channel, the bound can be rewritten as
(5) 
In order to present the second bound of [14], we need some preliminary definitions. Given a channel transition probability , for any and , we define as
(6) 
subject to the condition
(7) 
where is the conditional relative entropy defined as
(8) 
is the conditional entropy defined with respect to the joint distribution
, i.e.,(9) 
and is the conditional entropy similarly defined with respect to . At this point, we can state the result in [14, Theorem 4.2]. If a rate is achievable, then there exists some and such that
(10) 
As pointed out at the end of Section IV.C of [14], for the special case of the symmetric erasure relay channel, (10) reduces to the cut-set bound (3).
II-C Direct Transmission Lower Bound
In direct transmission, the source communicates with the destination by using an optimal point-to-point code. The relay transmission is fixed at the most favorable symbol for the channel from the source to the destination.
For the general relay channel, direct transmission achieves the following rate [17, Section 16.3]:
(11) $C \ge \max_{x_1,\, p(x)} I(X; Y \mid X_1 = x_1).$
For the case of the primitive relay channel, the direct transmission lower bound specializes to
(12) $C \ge \max_{p(x)} I(X; Y),$
which for the erasure relay channel evaluates to
(13) $R_{\mathrm{DT}} = 1 - \varepsilon_1.$
Note that the direct transmission lower bound (12) meets the cut-set upper bound (2), and it equals the capacity of the primitive relay channel, when either of the following two conditions holds:

the primitive relay channel is reversely degraded, which implies that $I(X; Y, Y_1) = I(X; Y)$;

$C_0 = 0$.
II-D Decode-and-Forward Lower Bound
In decode-and-forward, the relay completely decodes the received sequence and cooperates with the source to communicate the message to the destination.
For the general relay channel, decode-and-forward achieves the following rate [17, Theorem 16.2]:
(14) $C \ge \max_{p(x, x_1)} \min\{ I(X; Y_1 \mid X_1),\ I(X, X_1; Y) \}.$
For the case of the primitive relay channel, the decode-and-forward lower bound specializes to [26, Proposition 2]
(15) $C \ge \max_{p(x)} \min\{ I(X; Y_1),\ I(X; Y) + C_0 \},$
which for the erasure relay channel evaluates to
(16) $R_{\mathrm{DF}} = \min\{ 1 - \varepsilon_2,\ 1 - \varepsilon_1 + C_0 \}.$
Note that the decode-and-forward lower bound (15) meets the cut-set upper bound (2), and it equals the capacity of the primitive relay channel, when either of the following two conditions holds:

the primitive relay channel is degraded, which implies that $I(X; Y, Y_1) = I(X; Y_1)$;

$I(X; Y) + C_0 \le I(X; Y_1)$.
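As with the cut-set bound, the erasure-channel evaluation of decode-and-forward is easy to compute. The helper below is an illustrative sketch with our own variable names (eps1 and eps2 for the source-destination and source-relay erasure probabilities, c0 for the relay link capacity).

```python
def df_bound(eps1, eps2, c0):
    """Decode-and-forward lower bound for the erasure relay channel.

    The relay must fully decode, which caps the rate at 1 - eps2; the
    destination then decodes from the direct link plus the c0 relay
    bits per symbol, which caps the rate at 1 - eps1 + c0.
    """
    return min(1 - eps2, 1 - eps1 + c0)
```

For example, `df_bound(0.5, 0.1, 0.2)` returns 0.7: the destination-side constraint is the binding one.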
II-E Compress-and-Forward Lower Bound
In compress-and-forward, the relay does not attempt to decode the received sequence, but it sends a (possibly compressed) description of it, denoted by $\hat{Y}_1$, to the destination. Since this description is correlated with the sequence received by the destination from the source, Wyner-Ziv coding is used to reduce the rate needed to communicate it to the destination.
For the general relay channel, compress-and-forward achieves the following rate [17, Theorem 16.4]:
(17) $C \ge \max \min\{ I(X; \hat{Y}_1, Y \mid X_1),\ I(X, X_1; Y) - I(Y_1; \hat{Y}_1 \mid X, X_1, Y) \},$
where the maximum is over all distributions $p(x)\, p(x_1)\, p(\hat{y}_1 \mid y_1, x_1)$ and the cardinality of the alphabet associated with $\hat{Y}_1$ can be bounded as $|\hat{\mathcal{Y}}_1| \le |\mathcal{Y}_1| + 1$. This expression can be equivalently rewritten as [17, Remark 16.3]
(18) $C \ge \max I(X; \hat{Y}_1, Y \mid X_1)$, subject to $I(Y_1; \hat{Y}_1 \mid X_1, Y) \le I(X_1; Y).$
The bound is not convex in general; therefore, it can be improved via time sharing.
For the case of the primitive relay channel, the compress-and-forward lower bound specializes to [26, Proposition 3]
(19) $C \ge \max_{p(x)\, p(\hat{y}_1 \mid y_1)} I(X; \hat{Y}_1, Y),$
with $I(Y_1; \hat{Y}_1 \mid Y) \le C_0$.
Note that the compress-and-forward lower bound (19) meets the cut-set upper bound (2) and it equals the capacity of the primitive relay channel when $C_0 \ge H(Y_1 \mid Y)$. Indeed, in this case, we can pick $\hat{Y}_1 = Y_1$, namely, the relay performs Slepian-Wolf source coding. Therefore, the achievable rate is $I(X; Y_1, Y)$, which is one of the two terms in the cut-set bound.
On the contrary, if $C_0 < H(Y_1 \mid Y)$, then we degrade $Y_1$ into $\hat{Y}_1$, namely, the relay performs a step of lossy source coding. The relay transmits this lossy description to the destination, which can decode it successfully since $\hat{Y}_1$ requires fewer bits than $Y_1$. However, after the destination has recovered $\hat{Y}_1$, there is a penalty: rates only up to $I(X; \hat{Y}_1, Y)$ are achievable, instead of up to $I(X; Y_1, Y)$.
For the case of the erasure relay channel, we have that
(20) $H(Y_1 \mid Y) = h_2(\varepsilon_2) + \varepsilon_1 (1 - \varepsilon_2).$
Hence, if $C_0 \ge h_2(\varepsilon_2) + \varepsilon_1 (1 - \varepsilon_2)$, then the compress-and-forward lower bound meets the cut-set upper bound and it equals the capacity of the erasure relay channel.
On the contrary, if $C_0 < h_2(\varepsilon_2) + \varepsilon_1 (1 - \varepsilon_2)$, it is not easy to find the best choice of $\hat{Y}_1$ even for this simple scenario. Following [25], let us assume that $\hat{Y}_1$ is the output of an erasure-erasure channel (EEC) with erasure probability $\delta$ and input $Y_1$. This means that if $Y_1 = \mathrm{e}$, then $\hat{Y}_1 = \mathrm{e}$ with probability $1$; if $Y_1 \ne \mathrm{e}$, then $\hat{Y}_1 = Y_1$ with probability $1 - \delta$ and $\hat{Y}_1 = \mathrm{e}$ with probability $\delta$. Consequently,
(21) $I(X; \hat{Y}_1, Y) = 1 - \varepsilon_1 \bigl( 1 - (1 - \varepsilon_2)(1 - \delta) \bigr).$
Clearly, $I(X; \hat{Y}_1, Y)$ is maximized by setting $p(x)$ to the uniform distribution. Furthermore,
(22) $I(Y_1; \hat{Y}_1 \mid Y) = h_2\bigl( (1 - \varepsilon_2)(1 - \delta) \bigr) + \varepsilon_1 (1 - \varepsilon_2)(1 - \delta) - (1 - \varepsilon_2)\, h_2(\delta).$
As a result, the rate (19) can be rewritten as
(23) $R_{\mathrm{CF}} = \max_{\delta}\ 1 - \varepsilon_1 \bigl( 1 - (1 - \varepsilon_2)(1 - \delta) \bigr),$
where the maximum is over all $\delta \in [0, 1]$ such that $I(Y_1; \hat{Y}_1 \mid Y) \le C_0$, with $I(Y_1; \hat{Y}_1 \mid Y)$ given by (22).
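The optimization over the EEC erasure probability has no simple closed form, but it is one-dimensional and can be solved by a grid search. The sketch below encodes our evaluation of the mutual information terms for uniform inputs; the function names and the grid-search approach are ours, not the paper's.

```python
from math import log2

def h2(p):
    """Binary entropy function."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

def cf_bound(eps1, eps2, c0, grid=10000):
    """Compress-and-forward rate for the erasure relay channel, where the
    relay observation is degraded through an EEC with erasure probability d."""
    best = 0.0
    for i in range(grid + 1):
        d = i / grid
        # Probability that the description still carries the input bit.
        q = (1 - eps2) * (1 - d)
        # Wyner-Ziv rate of the description given the destination observation.
        wz = h2(q) + eps1 * q - (1 - eps2) * h2(d)
        if wz <= c0:
            # The destination misses X only if the direct link erases it
            # and the description does not carry it.
            best = max(best, 1 - eps1 * (1 - q))
    return best
```

As a sanity check, setting the EEC erasure probability to one recovers direct transmission (rate 1 - eps1), which is always feasible.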
III Main Result
We are now ready to state our new lower bound for the primitive relay channel.
Theorem 1.
Consider the transmission over a primitive relay channel, where the source sends $X$ to the relay and the destination, the relay receives $Y_1$ from the source, the destination receives $Y$ from the source, and relay and destination are connected via a noiseless link with capacity $C_0$. Furthermore, denote by $\hat{Y}_1$ the compressed description of $Y_1$ transmitted by the relay. Then, the following rate is achievable:
(24) $R = \dfrac{\bigl( C_0 - I^* + I(X; Y) \bigr) I(X; \hat{Y}_1, Y) + \bigl( I(Y_1; \hat{Y}_1 \mid Y) - C_0 \bigr) I^*}{I(Y_1; \hat{Y}_1 \mid Y) - I^* + I(X; Y)},$
for any joint distribution $p(x)\, p(\hat{y}_1 \mid y_1)$ such that
(25) $C_0 > I(X; Y_1) - I(X; Y),$
(26) $C_0 < I(Y_1; \hat{Y}_1 \mid Y),$
and where $I^* = \max\{ I(X; Y_1),\ I(X; Y) \}$. Furthermore, the rate (24) can be achieved by a polar coding scheme with encoding/decoding complexity $O(n \log n)$ and error probability $O(2^{-n^{\beta}})$ for any $\beta < 1/2$, where $n$ is the block length.
Remark 2.
(27) 
The special case of the erasure relay channel is handled by the corollary below.
Corollary 3.
Consider the transmission over the erasure relay channel, where $Y$ is obtained from $X$ via a BEC with erasure probability $\varepsilon_1$, $Y_1$ is obtained from $X$ via a BEC with erasure probability $\varepsilon_2$, $\hat{Y}_1$ is obtained from $Y_1$ via an EEC with erasure probability $\delta$, and the relay is connected to the destination via a noiseless link with capacity $C_0$. Then, the rate (27), obtained by evaluating (24) with the expressions (21)-(22) together with $I(X; Y) = 1 - \varepsilon_1$ and $I(X; Y_1) = 1 - \varepsilon_2$, is achievable for any $\delta \in [0, 1]$ such that
(28) $\varepsilon_1 - \varepsilon_2 < C_0 < h_2\bigl( (1 - \varepsilon_2)(1 - \delta) \bigr) + \varepsilon_1 (1 - \varepsilon_2)(1 - \delta) - (1 - \varepsilon_2)\, h_2(\delta).$
Furthermore, the rate (27) can be achieved by a polar coding scheme with encoding/decoding complexity $O(n \log n)$ and error probability $O(2^{-n^{\beta}})$ for any $\beta < 1/2$, where $n$ is the block length.
The proof of Corollary 3 follows easily from Theorem 1 together with formulas (21) and (22).
We can now proceed with the proof of our main result.
Proof of Theorem 1.
We start by presenting the main idea of our scheme. We split the transmission into two blocks. In the first block, we perform a variant of compress-and-forward: the relay does not decode the received sequence, but it sends a compressed description of it to the destination. However, unlike standard compress-and-forward, we require that (26) holds. Hence, we cannot transmit all of the compressed description during the first block. In the second block, we perform decode-and-forward: the relay completely decodes the received sequence. Furthermore, we choose the length of the second block so that the relay can transmit the part of the description that was not sent in the previous block plus the new information needed to decode the second block.
Let us now describe this scheme in more detail and provide the achievability proof of the rate (24). First, we deal with the case $I(X; Y_1) > I(X; Y)$.
Consider the transmission of the first block. Denote by $n_1$ and $R_1$ the block length and the rate of the message transmitted by the source, and let $R_1$ approach from below $I(X; \hat{Y}_1, Y)$. The relay receives $Y_1^{n_1}$ and constructs the compressed description $\hat{Y}_1^{n_1}$. Recall that the destination receives the side information $Y^{n_1}$ from the source. Hence, by using Wyner-Ziv coding, the destination needs from the relay a number of bits approaching from above $n_1 I(Y_1; \hat{Y}_1 \mid Y)$ in order to decode the message sent by the source. As (26) holds, the relay transmits right away a number of these bits approaching from below $n_1 C_0$. The number of remaining bits approaches from above $n_1 \bigl( I(Y_1; \hat{Y}_1 \mid Y) - C_0 \bigr)$ and it is stored by the relay. The destination stores the message received from the relay and the observation $Y^{n_1}$ obtained from the source.
Consider the transmission of the second block and define
(29) $n_2 = n_1\, \dfrac{I(Y_1; \hat{Y}_1 \mid Y) - C_0}{C_0 - I(X; Y_1) + I(X; Y)}.$
Denote by $n_2$ and $R_2$ the block length and the rate of the message transmitted by the source. Let $n_2$ be given by (29) and let $R_2$ approach from below $I(X; Y_1)$. The relay receives $Y_1^{n_2}$ and successfully decodes the message. Again, the destination receives the side information $Y^{n_2}$ from the source. Hence, it needs from the relay a number of bits approaching from above $n_2 \bigl( I(X; Y_1) - I(X; Y) \bigr)$ in order to decode the message sent by the source. The relay transmits to the destination these information bits plus the bits remaining from the previous block. This transmission is reliable as (29) implies that
(30) $n_2 \bigl( I(X; Y_1) - I(X; Y) \bigr) + n_1 \bigl( I(Y_1; \hat{Y}_1 \mid Y) - C_0 \bigr) = n_2 C_0.$
At this point, the destination can reconstruct the second block by using the side information received from the source and the extra bits received from the relay. Furthermore, it can also reconstruct the first block by using the side information previously received from the source and the extra bits received from the relay (partly in the first and partly in the second block).
The overall block length is $n_1 + n_2$ and the achievable rate is
(31) $R = \dfrac{n_1 R_1 + n_2 R_2}{n_1 + n_2},$
which approaches from below
(32) $\dfrac{\bigl( C_0 - I(X; Y_1) + I(X; Y) \bigr) I(X; \hat{Y}_1, Y) + \bigl( I(Y_1; \hat{Y}_1 \mid Y) - C_0 \bigr) I(X; Y_1)}{I(Y_1; \hat{Y}_1 \mid Y) - I(X; Y_1) + I(X; Y)}.$
The case $I(X; Y_1) \le I(X; Y)$ is handled in a similar way. As concerns the transmission of the first block, nothing changes. Denote by $n_1$ and $R_1$ the block length and the rate of the message transmitted by the source, and let $R_1$ approach from below $I(X; \hat{Y}_1, Y)$. The relay receives $Y_1^{n_1}$ and constructs the compressed description $\hat{Y}_1^{n_1}$. By using Wyner-Ziv coding, the destination needs from the relay a number of bits approaching from above $n_1 I(Y_1; \hat{Y}_1 \mid Y)$ in order to decode the message sent by the source. As (26) holds, the relay transmits right away a number of these bits approaching from below $n_1 C_0$. The number of remaining bits approaches from above $n_1 \bigl( I(Y_1; \hat{Y}_1 \mid Y) - C_0 \bigr)$ and it is stored by the relay. The destination stores the message received from the relay and the observation $Y^{n_1}$ obtained from the source.
As concerns the transmission of the second block, define
(33) $n_2 = n_1\, \dfrac{I(Y_1; \hat{Y}_1 \mid Y) - C_0}{C_0},$
and denote by $n_2$ and $R_2$ the block length and the rate of the message transmitted by the source. Let $n_2$ be given by (33) and let $R_2$ approach from below $I(X; Y)$. The relay discards the received message and transmits to the destination the bits remaining from the previous block. This transmission is reliable as (33) implies that
(34) $n_1 \bigl( I(Y_1; \hat{Y}_1 \mid Y) - C_0 \bigr) = n_2 C_0.$
At this point, the destination can reconstruct the second block by using the message received from the source. Furthermore, it can also reconstruct the first block by using the side information previously received from the source and the extra bits received from the relay (partly in the first and partly in the second block).
The overall block length is $n_1 + n_2$ and the achievable rate is
(35) $R = \dfrac{n_1 R_1 + n_2 R_2}{n_1 + n_2},$
which approaches from below
(36) $\dfrac{C_0\, I(X; \hat{Y}_1, Y) + \bigl( I(Y_1; \hat{Y}_1 \mid Y) - C_0 \bigr) I(X; Y)}{I(Y_1; \hat{Y}_1 \mid Y)}.$
Clearly, the coding scheme described so far can be implemented with any codes that are suitable for compress-and-forward and for decode-and-forward. Hence, we can employ the polar coding schemes for compress-and-forward and for decode-and-forward presented in [24]. However, polar codes require block lengths $n_1$ and $n_2$ that are powers of two, which puts a constraint on the possible values of the ratio $n_2 / n_1$. To remove this constraint and achieve the rate (24) for any admissible choice of the parameters, it suffices to use the punctured polar codes described in [34, Theorem 1].
∎
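The block-length accounting in the proof can be checked numerically. The sketch below normalizes n1 = 1 and uses our own shorthand for the mutual information terms (i_cf for the compress-and-forward rate of the first block, i_wz for the Wyner-Ziv rate of the description, i_df and i_dt for the rates of the relay and direct links); it mirrors the two cases of the proof under the stated conditions.

```python
def chaining_rate(i_cf, i_wz, i_df, i_dt, c0):
    """Overall rate of the two-block chaining scheme (n1 normalized to 1)."""
    assert c0 < i_wz, "otherwise plain compress-and-forward suffices"
    leftover = i_wz - c0  # Wyner-Ziv bits stored by the relay after block 1
    if i_df > i_dt:
        # Block 2 runs decode-and-forward: the relay link must carry the
        # extra i_df - i_dt bits per symbol plus the leftover bits.
        assert c0 > i_df - i_dt
        n2 = leftover / (c0 - (i_df - i_dt))
        r2 = i_df
    else:
        # Block 2 is plain direct transmission: the relay link only
        # carries the leftover bits.
        n2 = leftover / c0
        r2 = i_dt
    return (i_cf + n2 * r2) / (1 + n2)
```

The returned value is the block-length-weighted average of the two per-block rates, which is exactly the quantity the proof evaluates.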
IV Numerical Results
Let us consider the special case of the erasure relay channel. In Figure 4, we compare the achievable rate (27) of our scheme with the existing upper and lower bounds, i.e., the cut-set upper bound (3) (which coincides with the improved bound (5)), the decode-and-forward lower bound (16), and the compress-and-forward lower bound (23). We consider two pairs of choices for $\varepsilon_1$ and $\varepsilon_2$: one for the plot on the left and one for the plot on the right. We plot the various bounds as functions of $C_0$. Note that our scheme outperforms both decode-and-forward and compress-and-forward for an interval of values of $C_0$ in both settings.
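The comparison in Figure 4 can be reproduced qualitatively with a short script. The bound expressions below follow the erasure-channel evaluations of Section II; the parameter values in the example sweep are illustrative placeholders of our own choosing, not the ones used in the paper.

```python
from math import log2

def h2(p):
    """Binary entropy function."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * log2(p) - (1 - p) * log2(1 - p)

def erasure_bounds(eps1, eps2, c0, grid=2000):
    """Cut-set, decode-and-forward, and compress-and-forward bounds for
    the erasure relay channel with relay link capacity c0."""
    cutset = min(1 - eps1 + c0, 1 - eps1 * eps2)
    df = min(1 - eps2, 1 - eps1 + c0)
    cf = 0.0
    for i in range(grid + 1):
        d = i / grid
        q = (1 - eps2) * (1 - d)  # description carries X with probability q
        if h2(q) + eps1 * q - (1 - eps2) * h2(d) <= c0:
            cf = max(cf, 1 - eps1 * (1 - q))
    return cutset, df, cf

if __name__ == "__main__":
    # Illustrative sweep over the relay link capacity.
    for c0 in (0.1, 0.2, 0.3, 0.4, 0.5):
        print(c0, erasure_bounds(0.5, 0.3, c0))
```

The script prints the three curves as $C_0$ grows; the cut-set bound upper-bounds both lower bounds at every point of the sweep.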
In [25], for , and , the proposed soft decode-and-forward strategy based on LDPC codes achieves a rate of , while both decode-and-forward and compress-and-forward achieve a rate of . Our new coding strategy is reliable for rates up to , hence it outperforms all existing lower bounds. As a reference, note that in this setting the cut-set bound is .
V Concluding Remarks
We have proposed a new coding paradigm for the primitive relay channel that combines compress-and-forward and decode-and-forward by means of a chaining construction. The achievable rates obtained by our scheme surpass those of the state-of-the-art coding approaches (compress-and-forward, decode-and-forward, and the soft decode-and-forward strategy of [25]). Furthermore, our paradigm can be implemented with a low-complexity polar coding scheme that has the typical attractive features of polar codes, i.e., quasi-linear encoding/decoding complexity and super-polynomial decay of the error probability.
Acknowledgment
M. Mondelli was supported by an Early Postdoc.Mobility fellowship from the Swiss NSF. R. Urbanke was supported by grant No. 200021_156672/1 of the Swiss NSF.
References
 [1] E. C. van der Meulen, “Three-terminal communication channels,” Adv. Appl. Prob., vol. 3, pp. 120–154, 1971.
 [2] T. Cover and A. E. Gamal, “Capacity theorems for the relay channel,” IEEE Trans. Inform. Theory, vol. 25, pp. 572–584, 1979.
 [3] B. Schein and R. Gallager, “The Gaussian parallel relay network,” in Proc. of the IEEE Int. Symposium on Inform. Theory, June 2000, p. 22.
 [4] A. S. Avestimehr, S. N. Diggavi, and D. N. C. Tse, “Wireless network information flow: A deterministic approach,” IEEE Trans. Inform. Theory, vol. 57, no. 4, pp. 1872–1905, Apr. 2011.
 [5] B. Nazer and M. Gastpar, “Computeandforward: Harnessing interference through structured codes,” IEEE Trans. Inform. Theory, vol. 57, no. 10, pp. 6463–6486, Oct. 2011.
 [6] S. H. Lim, Y.H. Kim, A. E. Gamal, and S.Y. Chung, “Noisy network coding,” IEEE Trans. Inform. Theory, vol. 57, no. 5, pp. 3132–3152, May 2011.
 [7] P. Minero, S. H. Lim, and Y.H. Kim, “A unified approach to hybrid coding,” IEEE Trans. Inform. Theory, vol. 61, no. 4, pp. 1509–1523, Apr. 2015.
 [8] S. Zahedi, “On reliable communication over relay channels,” Ph.D. dissertation, Stanford University, Stanford, CA, 2005.
 [9] A. E. Gamal and M. Aref, “The capacity of the semideterministic relay channel,” IEEE Trans. Inform. Theory, vol. 28, no. 3, p. 536, May 1982.
 [10] Y.H. Kim, “Capacity of a class of deterministic relay channels,” IEEE Trans. Inform. Theory, vol. 54, no. 3, pp. 1328–1329, Mar. 2008.
 [11] Z. Zhang, “Partial converse for a relay channel,” IEEE Trans. Inform. Theory, vol. 34, no. 5, pp. 1106–1110, Sept. 1988.
 [12] M. Aleksic, P. Razaghi, and W. Yu, “Capacity of a class of modulosum relay channels,” IEEE Trans. Inform. Theory, vol. 55, no. 3, pp. 921–930, Mar. 2009.
 [13] F. Xue, “A new upper bound on the capacity of a primitive relay channel based on channel simulation,” IEEE Trans. Inform. Theory, vol. 60, no. 8, pp. 4786–4798, Aug. 2014.
 [14] X. Wu, A. Özgür, and L.L. Xie, “Improving on the cutset bound via geometric analysis of typical sets,” IEEE Trans. Inform. Theory, vol. 63, no. 4, pp. 2254–2277, Apr. 2017.
 [15] X. Wu and A. Özgür, “Improving on the cutset bound for general primitive relay channels,” in Proc. of the IEEE Int. Symposium on Inform. Theory, Barcelona, Spain, Jul. 2016, pp. 1675–1679.
 [16] ——, “Cut-set bound is loose for Gaussian relay networks,” in Proc. of the Allerton Conf. on Commun., Control, and Computing, Monticello, IL, USA, Oct. 2015, pp. 1135–1142.
 [17] A. E. Gamal and Y.H. Kim, Network Information Theory. Cambridge University Press, 2011.
 [18] G. Kramer, “Topics in multiuser information theory,” Found. Trends Commun. Inf. Theory, vol. 4, no. 45, pp. 265–444, Apr. 2007.
 [19] E. Arıkan, “Channel polarization: A method for constructing capacityachieving codes for symmetric binaryinput memoryless channels,” IEEE Trans. Inform. Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009.
 [20] M. Andersson, V. Rathi, R. Thobaben, J. Kliewer, and M. Skoglund, “Nested polar codes for wiretap and relay channels,” IEEE Commun. Lett., vol. 14, no. 8, pp. 752–754, Aug. 2010.
 [21] M. Karzand, “Polar codes for degraded relay channels,” in Proc. Intern. Zurich Seminar on Comm., Feb. 2012, pp. 59–62.
 [22] R. BlascoSerrano, R. Thobaben, M. Andersson, V. Rathi, and M. Skoglund, “Polar codes for cooperative relaying,” IEEE Trans. Commun., vol. 60, no. 11, pp. 3263–3273, Nov. 2012.
 [23] D. Karas, K. Pappi, and G. Karagiannidis, “Smart decodeandforward relaying with polar codes,” IEEE Wireless Comm. Letters, vol. 3, no. 1, pp. 62–65, Feb. 2014.
 [24] L. Wang, “Polar coding for relay channels,” in Proc. of the IEEE Int. Symposium on Inform. Theory, Hong Kong, June 2015, pp. 1532–1536.
 [25] A. Bennatan, S. Shamai, and A. R. Calderbank, “Softdecodingbased strategies for relay and interference channels: Analysis and achievable rates using LDPC codes,” IEEE Trans. Inform. Theory, vol. 60, no. 4, pp. 1977–2009, Apr. 2014.
 [26] Y.H. Kim, “Coding techniques for primitive relay channels,” in Proc. of the Allerton Conf. on Commun., Control, and Computing, Monticello, IL, USA, Sept. 2007, pp. 26–28.
 [27] S. H. Hassani and R. Urbanke, “Universal polar codes,” Dec. 2013, [Online]. Available: http://arxiv.org/abs/1307.7223.
 [28] E. Şaşoğlu and A. Vardy, “A new polar coding scheme for strong security on wiretap channels,” in Proc. of the IEEE Int. Symposium on Inform. Theory, Istanbul, Turkey, July 2013, pp. 1117–1121.
 [29] M. Mondelli, S. H. Hassani, I. Sason, and R. Urbanke, “Achieving Marton’s region for broadcast channels using polar codes,” IEEE Trans. Inform. Theory, vol. 61, no. 2, pp. 783–800, Feb. 2015.
 [30] R. A. Chou and M. R. Bloch, “Polar coding for the broadcast channel with confidential messages: A random binning analogy,” IEEE Trans. Inform. Theory, vol. 62, no. 5, pp. 2410–2429, May 2016.
 [31] M. Mondelli, S. H. Hassani, and R. Urbanke, “How to achieve the capacity of asymmetric channels,” accepted in IEEE Trans. Inform. Theory, Jan. 2018. [Online]. Available: http://arxiv.org/abs/1406.7373.
 [32] E. E. Gad, Y. Li, J. Kliewer, M. Langberg, A. Jiang, and J. Bruck, “Asymmetric error correction and flashmemory rewriting using polar codes,” IEEE Trans. Inform. Theory, vol. 62, no. 7, pp. 4024–4038, July 2016.
 [33] Y.P. Wei and S. Ulukus, “Polar coding for the general wiretap channel with extensions to multiuser scenarios,” IEEE J. Select. Areas Commun., vol. 34, no. 2, pp. 278–291, Feb. 2016.
 [34] S.N. Hong, D. Hui, and I. Marić, “Capacityachieving ratecompatible polar codes,” IEEE Trans. Inform. Theory, vol. 63, no. 12, pp. 7620–7632, Dec. 2017.