# A New Coding Paradigm for the Primitive Relay Channel

We present a coding paradigm that provides a new achievable rate for the primitive relay channel by combining compress-and-forward and decode-and-forward with a chaining construction. In the primitive relay channel model, the source broadcasts a message to the relay and to the destination; and the relay facilitates this communication by sending an additional message to the destination through a separate channel. Two well-known coding approaches for this setting are decode-and-forward and compress-and-forward: in the former, the relay decodes the message and sends some of the information to the destination; in the latter, the relay does not attempt to decode, but it sends a compressed description of the received sequence to the destination via Wyner-Ziv coding. In our scheme, we transmit over pairs of blocks and we use compress-and-forward for the first block and decode-and-forward for the second. In particular, in the first block, the relay does not attempt to decode and it sends only a part of the compressed description of the received sequence; in the second block, the relay decodes the message and sends this information plus the remaining part of the compressed sequence relative to the first block. As a result, we strictly outperform both compress-and-forward and decode-and-forward. Furthermore, this paradigm can be implemented with a low-complexity polar coding scheme that has the typical attractive features of polar codes, i.e., quasi-linear encoding/decoding complexity and super-polynomial decay of the error probability. Throughout the paper we consider as a running example the special case of the erasure relay channel and we compare the rates achievable by our proposed scheme with the existing upper and lower bounds.

## I Introduction

The relay channel was introduced by van der Meulen [1] and it represents the simplest network model: a source wants to communicate with a destination with the help of a relay. As schematized in Figure 1, the source sends $X_S$ to the relay and to the destination, the relay receives $Y_{SR}$ and sends $X_R$ to the destination, and the destination receives $Y_D$ from the source and from the relay. Hence, the relay channel has a broadcast component that goes from the source to the relay and to the destination, and a multiple access component that goes from the source and from the relay to the destination.

In their seminal work [2], Cover and El Gamal introduced two basic achievability lower bounds, namely, decode-and-forward and compress-and-forward, and a general upper bound, namely, the cut-set bound. Since then, many other lower bounds have been discovered, e.g., amplify-and-forward, quantize-map-and-forward, compute-and-forward, noisy network coding, and hybrid coding [3, 4, 5, 6, 7]. Furthermore, in most of the cases where the capacity is known, the converse is given by the cut-set bound [2, 8, 9, 10]. However, the cut-set bound has been shown not to be tight for some specific examples of relay channels [11, 12]. New upper bounds tighter than the cut-set bound were recently developed in [13, 14, 15, 16]. For a literature review on the relay channel, the interested reader is referred to [17, Chapter 16] and [18, Chapter 9].

Recent works have been aimed at providing practical schemes using the paradigm of polar coding, introduced by Arıkan in the seminal paper [19]. In particular, polar coding schemes for decode-and-forward are proposed in [20, 21, 22, 23] for degraded relay channels, where $X_S \to (X_R, Y_{SR}) \to Y_D$ forms a Markov chain. Furthermore, a polar coding scheme for compress-and-forward is presented in [22] for relay channels with orthogonal receiver components. Finally, polar coding schemes for decode-and-forward and compress-and-forward are described in [24] for general relay channels. Soft decode-and-forward relaying strategies using LDPC codes are developed and analyzed in [25].

In this paper, we focus on relay channels with orthogonal receiver components, also known as primitive relay channels. As schematized in Figure 2, the destination receives separate observations from the source and from the relay. In other words, the multiple access component that for the general relay channel goes from the source and from the relay to the destination is substituted by two parallel channels. Consequently, without loss of generality, we can assume that the link between relay and destination is a noiseless channel of capacity $C_{RD}$. Indeed, the relay can always use an optimal point-to-point scheme to communicate with the destination. We also assume that the relay is full-duplex, in the sense that it can listen and transmit simultaneously. Despite this simplification, the capacity of the primitive relay channel is not known in general. For a review on coding techniques for the primitive relay channel, the interested reader is referred to [26].

Our main contribution consists in presenting a new coding scheme that, by combining compress-and-forward and decode-and-forward, strictly improves on both these lower bounds. To do so, we code over pairs of blocks and employ a chaining construction: in the first block, we perform a variant of compress-and-forward in which the relay is not required to send the whole compressed sequence to the destination; in the second block, we perform decode-and-forward and the relay sends to the destination the new information bits plus the remaining part of the compressed sequence from the previous block. The idea of chaining was originally introduced in [27] to construct universal codes and in [28] to achieve strong security guarantees on degraded wiretap channels. Since then, it has been exploited in several non-standard scenarios (e.g., broadcast channels [29, 30], asymmetric channels [31, 32], wiretap channels [33]). Our coding paradigm can be implemented with codes that are suitable for compress-and-forward and decode-and-forward. As such, polar codes represent an appealing choice [24]: their encoding/decoding complexity is $\Theta(n \log n)$ and their error probability scales roughly as $2^{-\sqrt{n}}$, where $n$ is the block length.

The remainder of this paper is organized as follows. In Section II, we review the existing upper bounds (cut-set and its improvements) and lower bounds (direct transmission, decode-and-forward, and compress-and-forward) for the primitive relay channel and we specialize them to the case of the erasure relay channel. In Section III, we state our new lower bound and we show how to achieve it. In Section IV, we compare the rates achievable by our new strategy with the existing upper and lower bounds for the special case of the erasure relay channel. In Section V, we provide some concluding remarks.

## II Existing Upper and Lower Bounds

We assume that all channels are binary memoryless and symmetric (BMS). We denote by $h_2(\cdot)$ the binary entropy function and by $\mathcal{X}_S$, $\mathcal{Y}_{SR}$, and $\mathcal{Y}_{SD}$ the alphabets associated to $X_S$, $Y_{SR}$, and $Y_{SD}$, respectively. We define $a \circ b = a + b - ab$ for any $a, b \in [0, 1]$.

Throughout the paper, we will use as a running example the special case of the erasure relay channel. As schematized in Figure 3, in the erasure relay channel the links between source and destination and between source and relay are binary erasure channels (BECs) with erasure probabilities $\varepsilon_{SD}$ and $\varepsilon_{SR}$, respectively.

### II-A Cut-Set Upper Bound

For the general relay channel, the cut-set upper bound on the achievable rate is given by [17, Theorem 16.1]

$$R \le \max_{p_{X_S, X_R}} \min\big\{I(X_S, X_R; Y_D),\; I(X_S; Y_{SR}, Y_D \mid X_R)\big\}. \tag{1}$$

For the case of the primitive relay channel, the cut-set bound specializes to [26, Proposition 1]

$$R \le \max_{p_{X_S}} \min\big\{I(X_S; Y_{SD}) + C_{RD},\; I(X_S; Y_{SR}, Y_{SD})\big\}. \tag{2}$$

For the special case of the erasure relay channel, the cut-set bound can be rewritten as

$$R \le \min\big\{1 - \varepsilon_{SD} + C_{RD},\; 1 - \varepsilon_{SR}\, \varepsilon_{SD}\big\}. \tag{3}$$
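For concreteness, the erasure cut-set bound (3) is a one-line computation; the following minimal sketch (the function name is ours, not from the paper) makes the two cuts explicit:

```python
def cutset_bound(eps_sd: float, eps_sr: float, c_rd: float) -> float:
    """Cut-set upper bound (3) for the erasure relay channel.

    First term: the multiple-access cut (direct link plus the noiseless
    relay-destination link). Second term: the broadcast cut, where a
    source bit is lost only if BOTH the SR and SD links erase it.
    """
    return min(1 - eps_sd + c_rd, 1 - eps_sr * eps_sd)
```

For instance, `cutset_bound(0.5, 0.5, 0.25)` evaluates both terms to the same value, $0.75$.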

### II-B Improvements on the Cut-Set Upper Bound

For the case of the primitive relay channel, an upper bound demonstrating an explicit gap to the cut-set bound was presented in [13]. Furthermore, two new upper bounds that are generally tighter than the cut-set bound are proposed in [14] for the symmetric primitive relay channel, in which $Y_{SR}$ and $Y_{SD}$ are conditionally identically distributed given $X_S$. The results of [14] are extended to the non-symmetric case and to the Gaussian case in [15] and [16], respectively.

Let us now state the result in [15, Theorem 3.1], which provides an extension of the first bound of [14]. If a rate $R$ is achievable, then there exist some $a \ge 0$ and some input distribution $p_{X_S}$ such that

$$\begin{cases} R \le I(X_S; Y_{SR}, Y_{SD}), \\[2pt] R \le I(X_S; Y_{SD}) + C_{RD} - a, \\[2pt] R \le I(X_S; Y_{SD}, \tilde{Y}_{SR}) + h_2\Big(\sqrt{\tfrac{a \ln 2}{2}}\Big) + \sqrt{\tfrac{a \ln 2}{2}} \log_2\big(|\mathcal{Y}_{SR}| - 1\big) - a, \end{cases} \tag{4}$$

for any random variable $\tilde{Y}_{SR}$ with the same conditional distribution as $Y_{SR}$ given $X_S$. The choice of $\tilde{Y}_{SR}$ that gives the tightest bound is simple in the following special case:

1. Symmetric ($Y_{SR}$ and $Y_{SD}$ conditionally identically distributed given $X_S$): take $\tilde{Y}_{SR} = Y_{SD}$.

For the special case of the erasure relay channel, the bound can be re-written as

$$R \le \max_{a \ge 0} \min\Big\{1 - \varepsilon_{SR}\, \varepsilon_{SD},\; 1 - \varepsilon_{SD} + C_{RD} - a,\; 1 - \min\{\varepsilon_{SR}, \varepsilon_{SD}\} + h_2\Big(\sqrt{\tfrac{a \ln 2}{2}}\Big) + \sqrt{\tfrac{a \ln 2}{2}} - a\Big\}. \tag{5}$$
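The maximization over $a$ in (5) is one-dimensional, so it can be carried out by a simple grid search. The sketch below is ours (not from the paper) and assumes a uniform grid over $a$; note that for $a \ge 1 + C_{RD}$ the second term is non-positive, so the search can safely stop there:

```python
import math

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def improved_bound(eps_sd: float, eps_sr: float, c_rd: float,
                   grid: int = 10_001) -> float:
    """Upper bound (5) for the erasure relay channel via grid search over a."""
    best = 0.0
    a_max = 1.0 + c_rd
    for i in range(grid):
        a = a_max * i / (grid - 1)
        s = math.sqrt(a * math.log(2) / 2)  # sqrt(a ln 2 / 2)
        val = min(1 - eps_sr * eps_sd,
                  1 - eps_sd + c_rd - a,
                  1 - min(eps_sr, eps_sd) + h2(min(s, 1.0)) + s - a)
        best = max(best, val)
    return best
```

For $\varepsilon_{SD} = \varepsilon_{SR} = 0.5$ and $C_{RD} = 0.25$, the resulting bound stays strictly below the cut-set value $0.75$.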

In order to present the second bound of [14], we need some preliminary definitions. Given a channel transition probability $p(\omega|x)$, for any input distribution $p(x)$ and $d \ge 0$, we define $\Delta(p(x), d)$ as

$$\Delta(p(x), d) = \max_{\tilde{p}(\omega|x)} \Big(H\big(\tilde{p}(\omega|x) \mid p(x)\big) + D\big(\tilde{p}(\omega|x) \,\|\, p(\omega|x) \mid p(x)\big) - H\big(p(\omega|x) \mid p(x)\big)\Big), \tag{6}$$

subject to the condition

$$\frac{1}{2} \sum_{(x,\omega)} \big|p(x)\tilde{p}(\omega|x) - p(x)p(\omega|x)\big| \le d, \tag{7}$$

where $D\big(\tilde{p}(\omega|x) \,\|\, p(\omega|x) \mid p(x)\big)$ is the conditional relative entropy defined as

$$D\big(\tilde{p}(\omega|x) \,\|\, p(\omega|x) \mid p(x)\big) = \sum_{(x,\omega)} p(x)\tilde{p}(\omega|x) \log_2 \frac{\tilde{p}(\omega|x)}{p(\omega|x)}, \tag{8}$$

$H\big(\tilde{p}(\omega|x) \mid p(x)\big)$ is the conditional entropy defined with respect to the joint distribution $p(x)\tilde{p}(\omega|x)$, i.e.,

$$H\big(\tilde{p}(\omega|x) \mid p(x)\big) = -\sum_{(x,\omega)} p(x)\tilde{p}(\omega|x) \log_2 \tilde{p}(\omega|x), \tag{9}$$

and $H\big(p(\omega|x) \mid p(x)\big)$ is the conditional entropy similarly defined with respect to $p(x)p(\omega|x)$. At this point, we can state the result in [14, Theorem 4.2]. If a rate $R$ is achievable, then there exist some $a \ge 0$ and $d \ge 0$ such that

 (10)

As pointed out at the end of Section IV.C of [14], for the special case of the symmetric erasure relay channel, the term $\Delta(p(x), d)$ vanishes for all $p(x)$ and $d$. Thus, (10) reduces to the cut-set bound (3).

### II-C Direct Transmission Lower Bound

In the direct transmission, the source communicates with the destination by using an optimal point-to-point code. The relay transmission is fixed at the most favorable symbol for the channel from the source to the destination.

For the general relay channel, direct transmission achieves the following rate [17, Section 16.3]:

$$R_{DT} = \max_{p_{X_S},\, x_R} I(X_S; Y_D \mid X_R = x_R). \tag{11}$$

For the case of the primitive relay channel, the direct transmission lower bound specializes to

$$R_{DT} = \max_{p_{X_S}} I(X_S; Y_{SD}). \tag{12}$$

Note that the direct transmission lower bound (12) meets the cut-set upper bound (2) and it equals the capacity of the primitive relay channel when either of the following two conditions holds:

1. the primitive relay channel is reversely degraded, i.e., $X_S \to Y_{SD} \to Y_{SR}$ forms a Markov chain;

2. $C_{RD} = 0$.

For the special case of the erasure relay channel, the direct transmission lower bound can be rewritten as

$$R_{DT} = 1 - \varepsilon_{SD}. \tag{13}$$

The direct transmission lower bound (13) meets the cut-set upper bound (3) and it equals the capacity of the erasure relay channel when either $\varepsilon_{SR} = 1$ or $C_{RD} = 0$.

### II-D Decode-and-Forward Lower Bound

In decode-and-forward, the relay completely decodes the received sequence and cooperates with the source to communicate the message to the destination.

For the general relay channel, decode-and-forward achieves the following rate [17, Theorem 16.2]:

$$R_{DF} = \max_{p_{X_S, X_R}} \min\big\{I(X_S, X_R; Y_D),\; I(X_S; Y_{SR} \mid X_R)\big\}. \tag{14}$$

For the case of the primitive relay channel, the decode-and-forward lower bound specializes to [26, Proposition 2]

$$R_{DF} = \max_{p_{X_S}} \min\big\{I(X_S; Y_{SD}) + C_{RD},\; I(X_S; Y_{SR})\big\}. \tag{15}$$

Note that the decode-and-forward lower bound (15) meets the cut-set upper bound (2) and it equals the capacity of the primitive relay channel when either of the following two conditions holds:

1. the primitive relay channel is degraded, i.e., $X_S \to Y_{SR} \to Y_{SD}$ forms a Markov chain;

2. $I(X_S; Y_{SR}) \ge I(X_S; Y_{SD}) + C_{RD}$ for the maximizing input distribution.

For the special case of the erasure relay channel, the decode-and-forward lower bound can be rewritten as

$$R_{DF} = \min\big\{1 - \varepsilon_{SD} + C_{RD},\; 1 - \varepsilon_{SR}\big\}. \tag{16}$$

The decode-and-forward lower bound (16) meets the cut-set upper bound (3) and it equals the capacity of the erasure relay channel when either $\varepsilon_{SR} = 0$ or $\varepsilon_{SD} - \varepsilon_{SR} \ge C_{RD}$.
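Specialized to the erasure relay channel, the direct transmission and decode-and-forward lower bounds reduce to one-line formulas; the following sketch (function names are ours) implements (13) and (16):

```python
def direct_transmission_rate(eps_sd: float) -> float:
    """Direct transmission lower bound (13): the relay is ignored."""
    return 1 - eps_sd

def decode_and_forward_rate(eps_sd: float, eps_sr: float, c_rd: float) -> float:
    """Decode-and-forward lower bound (16): the rate is capped both by what
    the relay can decode (1 - eps_sr) and by the multiple-access cut."""
    return min(1 - eps_sd + c_rd, 1 - eps_sr)
```

For example, with $\varepsilon_{SD} = 0.5$, $\varepsilon_{SR} = 0.2$, and $C_{RD} = 0.1$, decode-and-forward gives $\min\{0.6, 0.8\} = 0.6$, already above the direct transmission rate $0.5$.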

### II-E Compress-and-Forward Lower Bound

In compress-and-forward, the relay does not attempt to decode the received sequence, but it sends a (possibly compressed) description of it, denoted by $\hat{Y}_{SR}$, to the destination. Since this description is correlated with the sequence received by the destination from the source, Wyner-Ziv coding is used to reduce the rate needed to communicate it to the destination.

For the general relay channel, compress-and-forward allows to achieve the following rate [17, Theorem 16.4]:

$$R_{CF} = \max_{p_{X_S}\, p_{X_R}\, p_{\hat{Y}_{SR}|X_R, Y_{SR}}} \min\big\{I(X_S, X_R; Y_D) - I(Y_{SR}; \hat{Y}_{SR} \mid X_S, X_R, Y_D),\; I(X_S; \hat{Y}_{SR}, Y_D \mid X_R)\big\}, \tag{17}$$

where the cardinality of the alphabet associated to $\hat{Y}_{SR}$ can be bounded as $|\hat{\mathcal{Y}}_{SR}| \le |\mathcal{Y}_{SR}| + 1$. This expression can be equivalently rewritten as [17, Remark 16.3]

$$R_{CF} = \max_{p_{X_S}\, p_{X_R}\, p_{\hat{Y}_{SR}|X_R, Y_{SR}}} \big\{I(X_S; \hat{Y}_{SR}, Y_D \mid X_R) \,:\, I(Y_{SR}; \hat{Y}_{SR} \mid X_R, Y_D) \le I(X_R; Y_D)\big\}. \tag{18}$$

The bound is in general not convex, therefore it can be improved via time sharing.

For the case of the primitive relay channel, the compress-and-forward lower bound specializes to [26, Proposition 3]

$$R_{CF} = \max_{p_{X_S}\, p_{\hat{Y}_{SR}|Y_{SR}}} \big\{I(X_S; \hat{Y}_{SR}, Y_{SD}) \,:\, I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \le C_{RD}\big\}, \tag{19}$$

with $|\hat{\mathcal{Y}}_{SR}| \le |\mathcal{Y}_{SR}| + 1$.

Note that the compress-and-forward lower bound (19) meets the cut-set upper bound (2) and it equals the capacity of the primitive relay channel when $C_{RD} \ge H(Y_{SR} \mid Y_{SD})$. Indeed, in this case, we can pick $\hat{Y}_{SR} = Y_{SR}$, namely, the relay performs Slepian-Wolf source coding. Therefore, $R_{CF} = \max_{p_{X_S}} I(X_S; Y_{SR}, Y_{SD})$, which is one of the two terms in the cut-set bound.

On the contrary, if $C_{RD} < H(Y_{SR} \mid Y_{SD})$, then we degrade $Y_{SR}$ into $\hat{Y}_{SR}$, namely, the relay performs a step of lossy source coding. The relay transmits this lossy description to the destination, which can decode it successfully since $\hat{Y}_{SR}$ requires fewer bits than $Y_{SR}$. However, after the destination has recovered $\hat{Y}_{SR}$, there is a penalty: we can achieve rates up to $I(X_S; \hat{Y}_{SR}, Y_{SD})$, instead of up to $I(X_S; Y_{SR}, Y_{SD})$.

For the case of the erasure relay channel, we have that

$$H(Y_{SR} \mid Y_{SD}) = h_2(\varepsilon_{SR}) + \varepsilon_{SD}(1 - \varepsilon_{SR}). \tag{20}$$

Hence, if $C_{RD} \ge h_2(\varepsilon_{SR}) + \varepsilon_{SD}(1 - \varepsilon_{SR})$, then the compress-and-forward lower bound meets the cut-set upper bound and it equals the capacity of the erasure relay channel.

On the contrary, if $C_{RD} < h_2(\varepsilon_{SR}) + \varepsilon_{SD}(1 - \varepsilon_{SR})$, it is not easy to find the best choice of $\hat{Y}_{SR}$ even for this simple scenario. Following [25], let us assume that $\hat{Y}_{SR}$ is the output of an erasure-erasure channel (EEC) with erasure probability $\hat{\varepsilon}_R$ and input $Y_{SR}$. This means that if $Y_{SR} = \mathrm{e}$, then $\hat{Y}_{SR} = \mathrm{e}$ with probability $1$; if $Y_{SR} \ne \mathrm{e}$, then $\hat{Y}_{SR} = Y_{SR}$ with probability $1 - \hat{\varepsilon}_R$ and $\hat{Y}_{SR} = \mathrm{e}$ with probability $\hat{\varepsilon}_R$. Consequently,

$$I(X_S; \hat{Y}_{SR}, Y_{SD}) = H(X_S) - H(X_S \mid \hat{Y}_{SR}, Y_{SD}) = H(X_S)\big(1 - (\hat{\varepsilon}_R \circ \varepsilon_{SR})\, \varepsilon_{SD}\big). \tag{21}$$

Clearly, (21) is maximized by setting $p_{X_S}$ to the uniform distribution. Furthermore,

$$H(\hat{Y}_{SR} \mid Y_{SR}, Y_{SD}) = H(\hat{Y}_{SR} \mid Y_{SR}) = (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R),$$
$$H(\hat{Y}_{SR} \mid Y_{SD}) = h_2(\varepsilon_{SR} \circ \hat{\varepsilon}_R) + \varepsilon_{SD}\big(1 - \varepsilon_{SR} \circ \hat{\varepsilon}_R\big). \tag{22}$$

As a result, the rate (19) can be rewritten as

$$R_{CF} = \max_{0 \le \hat{\varepsilon}_R \le 1} \Big\{1 - (\hat{\varepsilon}_R \circ \varepsilon_{SR})\, \varepsilon_{SD} \,:\, h_2(\varepsilon_{SR} \circ \hat{\varepsilon}_R) + \varepsilon_{SD}\big(1 - \varepsilon_{SR} \circ \hat{\varepsilon}_R\big) - (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R) \le C_{RD}\Big\}. \tag{23}$$
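The optimization (23) is again one-dimensional, so the best EEC parameter can be found by scanning $\hat{\varepsilon}_R$ and keeping the feasible point with the largest objective. A sketch under the EEC test-channel assumption of the text (helper names are ours):

```python
import math

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def cascade(a: float, b: float) -> float:
    """Erasure probability of two cascaded erasure channels: a ∘ b = a + b - ab."""
    return a + b - a * b

def cf_rate(eps_sd: float, eps_sr: float, c_rd: float,
            grid: int = 10_001) -> float:
    """Compress-and-forward lower bound (23) via grid search over the
    EEC erasure probability of the relay's test channel."""
    best = 0.0
    for i in range(grid):
        e_hat = i / (grid - 1)
        e_tot = cascade(eps_sr, e_hat)  # overall erasure prob of the description
        # Wyner-Ziv rate I(Y_SR; Y^_SR | Y_SD) needed from the relay:
        wz = h2(e_tot) + eps_sd * (1 - e_tot) - (1 - eps_sr) * h2(e_hat)
        if wz <= c_rd:  # description fits on the relay-destination link
            best = max(best, 1 - e_tot * eps_sd)
    return best
```

When $C_{RD}$ is large enough that the condition following (20) holds, the scan picks $\hat{\varepsilon}_R = 0$ and recovers the Slepian-Wolf rate $1 - \varepsilon_{SR}\varepsilon_{SD}$; with $C_{RD} = 0$ it degenerates to the direct transmission rate $1 - \varepsilon_{SD}$.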

## III Main Result

We are now ready to state our new lower bound for the primitive relay channel.

###### Theorem 1.

Consider the transmission over a primitive relay channel, where the source sends $X_S$ to the relay and the destination, the relay receives $Y_{SR}$ from the source, the destination receives $Y_{SD}$ from the source, and relay and destination are connected via a noiseless link with capacity $C_{RD}$. Furthermore, denote by $\hat{Y}_{SR}$ the compressed description of $Y_{SR}$ transmitted by the relay. Then, the following rate is achievable:

$$R = \frac{\big(C_{RD} - (I(X_S; Y_{SR}) - I(X_S; Y_{SD}))^+\big)\, I(X_S; \hat{Y}_{SR}, Y_{SD}) + \big(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}\big) \max\big\{I(X_S; Y_{SR}),\, I(X_S; Y_{SD})\big\}}{I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - (I(X_S; Y_{SR}) - I(X_S; Y_{SD}))^+}, \tag{24}$$

for any joint distribution $p_{X_S}\, p_{\hat{Y}_{SR}|Y_{SR}}$ such that

$$I(X_S; Y_{SR}) < I(X_S; Y_{SD}) + C_{RD}, \tag{25}$$

$$I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) \ge C_{RD}, \tag{26}$$

and where $x^+ = \max\{x, 0\}$. Furthermore, the rate (24) can be achieved by a polar coding scheme with encoding/decoding complexity $\Theta(n \log n)$ and error probability $O(2^{-n^{\beta}})$ for any $\beta \in (0, 1/2)$, where $n$ is the block length.

###### Remark 2.

If (25) does not hold, then decode-and-forward achieves the cut-set bound and it is optimal. Furthermore, if (26) does not hold, then our scheme reduces to compress-and-forward and the achievable rate is given by (19).

The special case of the erasure relay channel is handled by the corollary below.

###### Corollary 3.

Consider the transmission over the erasure relay channel, where $Y_{SD}$ is obtained from $X_S$ via a BEC with erasure probability $\varepsilon_{SD}$, $Y_{SR}$ is obtained from $X_S$ via a BEC with erasure probability $\varepsilon_{SR}$, $\hat{Y}_{SR}$ is obtained from $Y_{SR}$ via an EEC with erasure probability $\hat{\varepsilon}_R$, and the relay is connected to the destination via a noiseless link with capacity $C_{RD}$. Then, the rate (24), evaluated via (21)-(22), is achievable for any $\hat{\varepsilon}_R$ such that

$$1 - \varepsilon_{SR} < 1 - \varepsilon_{SD} + C_{RD}, \tag{27}$$

$$h_2(\varepsilon_{SR} \circ \hat{\varepsilon}_R) + \varepsilon_{SD}\big(1 - \varepsilon_{SR} \circ \hat{\varepsilon}_R\big) - (1 - \varepsilon_{SR})\, h_2(\hat{\varepsilon}_R) \ge C_{RD}. \tag{28}$$

Furthermore, this rate can be achieved by a polar coding scheme with encoding/decoding complexity $\Theta(n \log n)$ and error probability $O(2^{-n^{\beta}})$ for any $\beta \in (0, 1/2)$, where $n$ is the block length.

The proof of Corollary 3 easily follows from the application of Theorem 1 and of formulas (21)-(22).

We can now proceed with the proof of our main result.

###### Proof of Theorem 1.

We start by presenting the main idea of our scheme. We split the transmission into two blocks. In the first block, we perform a variant of compress-and-forward: the relay does not decode the received sequence, but it sends a compressed description of it to the destination. However, differently from standard compress-and-forward, we require that (26) holds. Hence, the relay cannot transmit the whole compressed description during the first block. In the second block, we perform decode-and-forward: the relay completely decodes the received sequence. Furthermore, we choose the length of the second block so that the relay can transmit the part of $\hat{Y}_{SR}$ that was not sent in the previous block plus the new information needed to decode the second block.

Let us now describe this scheme in more detail and provide the achievability proof of the rate (24). First, we deal with the case $I(X_S; Y_{SR}) > I(X_S; Y_{SD})$.

Consider the transmission of the first block. Denote by $n_1$ and $R_1$ the block length and the rate of the message transmitted by the source, and let $R_1$ approach from below $I(X_S; \hat{Y}_{SR}, Y_{SD})$. The relay receives $Y_{SR}$ and constructs the compressed description $\hat{Y}_{SR}$. Recall that the destination receives the side information $Y_{SD}$ from the source. Hence, by using Wyner-Ziv coding, the destination needs from the relay a number of bits approaching from above $n_1 I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD})$, in order to decode the message sent by the source. As (26) holds, the relay transmits right away a number of these bits approaching from below $n_1 C_{RD}$. The number of remaining bits approaches from above $n_1 \big(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}\big)$ and it is stored by the relay. The destination stores the message received from the relay and the observation obtained from the source.

Consider the transmission of the second block and define

$$\alpha = \frac{I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}}{C_{RD} - \big(I(X_S; Y_{SR}) - I(X_S; Y_{SD})\big)}. \tag{29}$$

Denote by $n_2$ and $R_2$ the block length and the rate of the message transmitted by the source. Let $n_2 = \alpha n_1$ and let $R_2$ approach from below $I(X_S; Y_{SR})$. The relay receives $Y_{SR}$ and successfully decodes the message. Again, the destination receives the side information $Y_{SD}$ from the source. Hence, it needs from the relay a number of bits approaching from above $n_2 \big(I(X_S; Y_{SR}) - I(X_S; Y_{SD})\big)$, in order to decode the message sent by the source. The relay transmits to the destination these information bits plus the bits remaining from the previous block. This transmission is reliable as (29) implies that

$$\big(I(X_S; Y_{SR}) - I(X_S; Y_{SD})\big) \cdot n_1 \alpha + \big(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}\big) \cdot n_1 = C_{RD} \cdot n_2. \tag{30}$$

At this point, the destination can reconstruct the second block by using the side information received from the source and the extra bits received from the relay. Furthermore, it can also reconstruct the first block by using the side information previously received from the source and the extra bits received from the relay (partly in the first and partly in the second block).

The overall block length is $n_1 + n_2 = (1 + \alpha) n_1$ and the achievable rate is

$$R = \frac{R_1 + \alpha R_2}{1 + \alpha}, \tag{31}$$

which approaches from below

$$\frac{I(X_S; \hat{Y}_{SR}, Y_{SD}) + \alpha\, I(X_S; Y_{SR})}{1 + \alpha}. \tag{32}$$

Note that the expression (32) coincides with (24) when $I(X_S; Y_{SR}) > I(X_S; Y_{SD})$.

The case $I(X_S; Y_{SR}) \le I(X_S; Y_{SD})$ is handled in a similar way. As concerns the transmission of the first block, nothing changes. Denote by $n'_1$ and $R'_1$ the block length and the rate of the message transmitted by the source, and let $R'_1$ approach from below $I(X_S; \hat{Y}_{SR}, Y_{SD})$. The relay receives $Y_{SR}$ and constructs the compressed description $\hat{Y}_{SR}$. By using Wyner-Ziv coding, the destination needs from the relay a number of bits approaching from above $n'_1 I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD})$, in order to decode the message sent by the source. As (26) holds, the relay transmits right away a number of these bits approaching from below $n'_1 C_{RD}$. The number of remaining bits approaches from above $n'_1 \big(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}\big)$ and it is stored by the relay. The destination stores the message received from the relay and the observation obtained from the source.

As concerns the transmission of the second block, define

$$\alpha' = \frac{I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}}{C_{RD}}, \tag{33}$$

and denote by $n'_2$ and $R'_2$ the block length and the rate of the message transmitted by the source. Let $n'_2 = \alpha' n'_1$ and let $R'_2$ approach from below $I(X_S; Y_{SD})$. The relay discards the received message and transmits to the destination the bits remaining from the previous block. This transmission is reliable as (33) implies that

$$\big(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}\big) \cdot n'_1 = C_{RD} \cdot n'_2. \tag{34}$$

At this point, the destination can reconstruct the second block by using the message received from the source. Furthermore, it can also reconstruct the first block by using the side information previously received from the source and the extra bits received from the relay (partly in the first and partly in the second block).

The overall block length is $n'_1 + n'_2 = (1 + \alpha') n'_1$ and the achievable rate is

$$R' = \frac{R'_1 + \alpha' R'_2}{1 + \alpha'}, \tag{35}$$

which approaches from below

$$\frac{C_{RD} \cdot I(X_S; \hat{Y}_{SR}, Y_{SD}) + \big(I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD}) - C_{RD}\big)\, I(X_S; Y_{SD})}{I(Y_{SR}; \hat{Y}_{SR} \mid Y_{SD})}. \tag{36}$$

Note that the expression (36) coincides with (24) when $I(X_S; Y_{SR}) \le I(X_S; Y_{SD})$.

Clearly, the coding scheme described so far can be implemented with codes that are suitable for compress-and-forward and for decode-and-forward. Hence, we can employ the polar coding schemes for compress-and-forward and for decode-and-forward presented in [24]. However, polar codes require block lengths $n_1$ and $n_2$ (or $n'_1$ and $n'_2$) that are powers of two, which puts a constraint on the possible values of $\alpha$ (or $\alpha'$). To remove this constraint and achieve the rate (24) for any admissible choice of $\alpha$ (or $\alpha'$), it suffices to use the punctured polar codes described in [34, Theorem 1].

## IV Numerical Results

Let us consider the special case of the erasure relay channel. In Figure 4, we compare the achievable rate of our scheme (Corollary 3) with the existing upper and lower bounds, i.e., the cut-set upper bound (3) (which coincides with the improved bound (5)), the decode-and-forward lower bound (16), and the compress-and-forward lower bound (23). We consider two pairs of choices for $\varepsilon_{SD}$ and $\varepsilon_{SR}$, one for the plot on the left and one for the plot on the right, and we plot the various bounds as functions of $C_{RD}$. Note that our scheme outperforms both decode-and-forward and compress-and-forward for an interval of values of $C_{RD}$ in both settings.
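The curve for our scheme can be reproduced by maximizing the two-block chaining rate over the EEC parameter, using the erasure-channel expressions (21)-(22) in the case analysis (32)/(36) of the proof. The sketch below is ours (uniform grid, hypothetical helper names), not the authors' code:

```python
import math

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def chaining_rate(eps_sd: float, eps_sr: float, c_rd: float,
                  grid: int = 10_001) -> float:
    """Achievable rate of the CF/DF chaining scheme for the erasure relay
    channel, maximized over the EEC erasure probability e_hat."""
    i_sr = 1 - eps_sr  # I(X_S; Y_SR)
    i_sd = 1 - eps_sd  # I(X_S; Y_SD)
    if i_sr >= i_sd + c_rd:
        # Condition (25) fails: decode-and-forward already meets the cut-set bound.
        return min(i_sd + c_rd, i_sr)
    best = 0.0
    for i in range(grid):
        e_hat = i / (grid - 1)
        e_tot = eps_sr + e_hat - eps_sr * e_hat  # eps_SR ∘ e_hat
        i_cf = 1 - e_tot * eps_sd                # I(X_S; Y^_SR, Y_SD), cf. (21)
        wz = h2(e_tot) + eps_sd * (1 - e_tot) - (1 - eps_sr) * h2(e_hat)  # cf. (22)
        if wz <= c_rd:
            rate = i_cf  # condition (26) fails: plain compress-and-forward
        elif i_sr > i_sd:
            alpha = (wz - c_rd) / (c_rd - (i_sr - i_sd))  # cf. (29)
            rate = (i_cf + alpha * i_sr) / (1 + alpha)    # cf. (32)
        else:
            rate = (c_rd * i_cf + (wz - c_rd) * i_sd) / wz  # cf. (36)
        best = max(best, rate)
    return best
```

For example, with $\varepsilon_{SD} = 0.5$, $\varepsilon_{SR} = 0.2$, and $C_{RD} = 0.4$, this evaluates to a rate strictly above the decode-and-forward value $\min\{0.9, 0.8\} = 0.8$, while remaining below the cut-set bound $0.9$.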

In [25], for a particular choice of $\varepsilon_{SD}$, $\varepsilon_{SR}$, and $C_{RD}$, the proposed soft decode-and-forward strategy based on LDPC codes achieves a rate strictly larger than that of both decode-and-forward and compress-and-forward. Our new coding strategy is reliable for even larger rates in that setting, hence it outperforms all existing lower bounds; the cut-set bound serves as the reference upper bound.

## V Concluding Remarks

We have proposed a new coding paradigm for the primitive relay channel that combines compress-and-forward and decode-and-forward by means of a chaining construction. The achievable rates obtained by our scheme surpass the state-of-the-art coding approaches (compress-and-forward, decode-and-forward, and the soft decode-and-forward strategy of [25]). Furthermore, our paradigm can be implemented with a low-complexity polar coding scheme that has the typical attractive features of polar codes, i.e., quasi-linear encoding/decoding complexity and super-polynomial decay of the error probability.

## Acknowledgment

M. Mondelli was supported by an Early Postdoc.Mobility fellowship from the Swiss NSF. R. Urbanke was supported by grant No. 200021_156672/1 of the Swiss NSF.

## References

• [1] E. C. van der Meulen, “Three-terminal communication channels,” Adv. Appl. Prob., vol. 3, pp. 120–154, 1971.
• [2] T. Cover and A. E. Gamal, “Capacity theorems for the relay channel,” IEEE Trans. Inform. Theory, vol. 25, pp. 572–584, 1979.
• [3] B. Schein and R. Gallager, “The Gaussian parallel relay network,” in Proc. of the IEEE Int. Symposium on Inform. Theory, June 2000, p. 22.
• [4] A. S. Avestimehr, S. N. Diggavi, and D. N. C. Tse, “Wireless network information flow: A deterministic approach,” IEEE Trans. Inform. Theory, vol. 57, no. 4, pp. 1872–1905, Apr. 2011.
• [5] B. Nazer and M. Gastpar, “Compute-and-forward: Harnessing interference through structured codes,” IEEE Trans. Inform. Theory, vol. 57, no. 10, pp. 6463–6486, Oct. 2011.
• [6] S. H. Lim, Y.-H. Kim, A. E. Gamal, and S.-Y. Chung, “Noisy network coding,” IEEE Trans. Inform. Theory, vol. 57, no. 5, pp. 3132–3152, May 2011.
• [7] P. Minero, S. H. Lim, and Y.-H. Kim, “A unified approach to hybrid coding,” IEEE Trans. Inform. Theory, vol. 61, no. 4, pp. 1509–1523, Apr. 2015.
• [8] S. Zahedi, “On reliable communication over relay channels,” Ph.D. dissertation, Stanford University, Stanford, CA, 2005.
• [9] A. E. Gamal and M. Aref, “The capacity of the semideterministic relay channel,” IEEE Trans. Inform. Theory, vol. 28, no. 3, p. 536, May 1982.
• [10] Y.-H. Kim, “Capacity of a class of deterministic relay channels,” IEEE Trans. Inform. Theory, vol. 54, no. 3, pp. 1328–1329, Mar. 2008.
• [11] Z. Zhang, “Partial converse for a relay channel,” IEEE Trans. Inform. Theory, vol. 34, no. 5, pp. 1106–1110, Sept. 1988.
• [12] M. Aleksic, P. Razaghi, and W. Yu, “Capacity of a class of modulo-sum relay channels,” IEEE Trans. Inform. Theory, vol. 55, no. 3, pp. 921–930, Mar. 2009.
• [13] F. Xue, “A new upper bound on the capacity of a primitive relay channel based on channel simulation,” IEEE Trans. Inform. Theory, vol. 60, no. 8, pp. 4786–4798, Aug. 2014.
• [14] X. Wu, A. Özgür, and L.-L. Xie, “Improving on the cut-set bound via geometric analysis of typical sets,” IEEE Trans. Inform. Theory, vol. 63, no. 4, pp. 2254–2277, Apr. 2017.
• [15] X. Wu and A. Özgür, “Improving on the cut-set bound for general primitive relay channels,” in Proc. of the IEEE Int. Symposium on Inform. Theory, Barcelona, Spain, Jul. 2016, pp. 1675–1679.
• [16] ——, “Cut-set bound is loose for Gaussian relay networks,” in Proc. of the Allerton Conf. on Commun., Control, and Computing, Monticello, IL, USA, Oct. 2015, pp. 1135–1142.
• [17] A. E. Gamal and Y.-H. Kim, Network Information Theory.   Cambridge University Press, 2011.
• [18] G. Kramer, “Topics in multi-user information theory,” Found. Trends Commun. Inf. Theory, vol. 4, no. 4-5, pp. 265–444, Apr. 2007.
• [19] E. Arıkan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels,” IEEE Trans. Inform. Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009.
• [20] M. Andersson, V. Rathi, R. Thobaben, J. Kliewer, and M. Skoglund, “Nested polar codes for wiretap and relay channels,” IEEE Commun. Lett., vol. 14, no. 8, pp. 752–754, Aug. 2010.
• [21] M. Karzand, “Polar codes for degraded relay channels,” in Proc. Intern. Zurich Seminar on Comm., Feb. 2012, pp. 59–62.
• [22] R. Blasco-Serrano, R. Thobaben, M. Andersson, V. Rathi, and M. Skoglund, “Polar codes for cooperative relaying,” IEEE Trans. Commun., vol. 60, no. 11, pp. 3263–3273, Nov. 2012.
• [23] D. Karas, K. Pappi, and G. Karagiannidis, “Smart decode-and-forward relaying with polar codes,” IEEE Wireless Comm. Letters, vol. 3, no. 1, pp. 62–65, Feb. 2014.
• [24] L. Wang, “Polar coding for relay channels,” in Proc. of the IEEE Int. Symposium on Inform. Theory, Hong Kong, June 2015, pp. 1532–1536.
• [25] A. Bennatan, S. Shamai, and A. R. Calderbank, “Soft-decoding-based strategies for relay and interference channels: Analysis and achievable rates using LDPC codes,” IEEE Trans. Inform. Theory, vol. 60, no. 4, pp. 1977–2009, Apr. 2014.
• [26] Y.-H. Kim, “Coding techniques for primitive relay channels,” in Proc. of the Allerton Conf. on Commun., Control, and Computing, Monticello, IL, USA, Sept. 2007, pp. 26–28.
• [27] S. H. Hassani and R. Urbanke, “Universal polar codes,” Dec. 2013, [Online]. Available: http://arxiv.org/abs/1307.7223.
• [28] E. Şaşoğlu and A. Vardy, “A new polar coding scheme for strong security on wiretap channels,” in Proc. of the IEEE Int. Symposium on Inform. Theory, Istanbul, Turkey, July 2013, pp. 1117–1121.
• [29] M. Mondelli, S. H. Hassani, I. Sason, and R. Urbanke, “Achieving Marton’s region for broadcast channels using polar codes,” IEEE Trans. Inform. Theory, vol. 61, no. 2, pp. 783–800, Feb. 2015.
• [30] R. A. Chou and M. R. Bloch, “Polar coding for the broadcast channel with confidential messages: A random binning analogy,” IEEE Trans. Inform. Theory, vol. 62, no. 5, pp. 2410–2429, May 2016.
• [31] M. Mondelli, S. H. Hassani, and R. Urbanke, “How to achieve the capacity of asymmetric channels,” accepted in IEEE Trans. Inform. Theory, Jan. 2018. [Online]. Available: http://arxiv.org/abs/1406.7373.
• [32] E. E. Gad, Y. Li, J. Kliewer, M. Langberg, A. Jiang, and J. Bruck, “Asymmetric error correction and flash-memory rewriting using polar codes,” IEEE Trans. Inform. Theory, vol. 62, no. 7, pp. 4024–4038, July 2016.
• [33] Y.-P. Wei and S. Ulukus, “Polar coding for the general wiretap channel with extensions to multiuser scenarios,” IEEE J. Select. Areas Commun., vol. 34, no. 2, pp. 278–291, Feb. 2016.
• [34] S.-N. Hong, D. Hui, and I. Marić, “Capacity-achieving rate-compatible polar codes,” IEEE Trans. Inform. Theory, vol. 63, no. 12, pp. 7620–7632, Dec. 2017.