# Negligible Cooperation: Contrasting the Maximal- and Average-Error Cases

In communication networks, cooperative strategies are coding schemes where network nodes work together to improve network performance metrics such as the total rate delivered across the network. This work studies encoder cooperation in the setting of a discrete multiple access channel (MAC) with two encoders and a single decoder. The cooperation strategy is enabled by a network node, here called the cooperation facilitator (CF), that is connected to both encoders via rate-limited links. Previous work by the authors presents two classes of MACs: (i) one class where the average-error sum-capacity has an infinite derivative in the limit where CF output link capacities approach zero, and (ii) a second class of MACs where the maximal-error sum-capacity is not continuous at the point where the output link capacities of the CF equal zero. This work contrasts the power of the CF in the maximal- and average-error cases, showing that a constant number of bits communicated over the CF output link can yield a positive gain in the maximal-error sum-capacity, while a far greater number of bits, even numbers that grow sublinearly in the blocklength, can never yield a non-negligible gain in the average-error sum-capacity.


## I Introduction

Interference is an important limiting factor in the capacities of many communication networks. One way to reduce interference is to enable network nodes to work together to coordinate their transmissions. Strategies that employ coordinated transmissions are called cooperation strategies.

Perhaps the simplest cooperation strategy is “time-sharing” (e.g., [3, Theorem 15.3.2]), where nodes avoid interference by taking turns transmitting. A popular alternative model is the “conferencing” cooperation model [4]; in conferencing, unlike in time-sharing, encoders share information about the messages they wish to transmit and use that shared information to coordinate their channel inputs. In this work, we employ a similar approach, but in our cooperation model, encoders communicate indirectly. Specifically, the encoders communicate through another node, which we call the cooperation facilitator (CF) [5, 6]. Figure 1 depicts the CF model in the two-user multiple access channel (MAC) scenario.

The CF enables cooperation between the encoders through its rate-limited input and output links. Prior to choosing a codeword to transmit over the channel, each encoder sends a function of its message to the CF. The CF uses the information it receives from both encoders to compute a rate-limited function for each encoder. It then transmits the computed values over its output links. Finally, each encoder selects a codeword using its message and the information it receives from the CF.

To simplify our discussion in this section, suppose the CF input link capacities both equal $C_{\mathrm{in}}$ and the CF output link capacities both equal $C_{\mathrm{out}}$. If $C_{\mathrm{out}} \ge C_{\mathrm{in}}$, then the optimal strategy for the CF is to simply forward the information it receives from one encoder to the other. Using the capacity region of the MAC with conferencing encoders [4], it follows that the average-error sum-capacity gain of CF cooperation in this case is bounded from above by $2C_{\mathrm{in}}$ and does not depend on the precise value of $C_{\mathrm{out}}$. If $C_{\mathrm{out}} < C_{\mathrm{in}}$, however, the situation is more complicated, since the CF can no longer forward all of its incoming information. While the upper bound $2C_{\mathrm{in}}$ is still valid, the dependence of the sum-capacity gain on $C_{\mathrm{out}}$ is less clear. If the CF simply forwards part of the information it receives, then again by [4], the average-error sum-capacity gain is at most $2C_{\mathrm{out}}$. The bound $2C_{\mathrm{out}}$ has an intuitive interpretation: it reflects the amount of information the CF shares with the encoders, perhaps suggesting that the benefit of sharing information at rate $C_{\mathrm{out}}$ with the encoders is at most $2C_{\mathrm{out}}$. It turns out, though, that a much larger gain is possible through more sophisticated coding techniques. Specifically, in prior work [6, Theorem 3], we show that for a class of MACs, for fixed $C_{\mathrm{in}} > 0$, the average-error sum-capacity has a derivative in $C_{\mathrm{out}}$ that is infinite at $C_{\mathrm{out}} = 0$; that is, for small $C_{\mathrm{out}}$, the gain resulting from cooperation exceeds any function of $C_{\mathrm{out}}$ that goes through the origin and has bounded derivative.

The large sum-capacity gain described above is not limited to the average-error scenario. In fact, in related work [5, Proposition 5], we show that for any MAC for which, in the absence of cooperation, the average-error sum-capacity is strictly greater than the maximal-error sum-capacity, adding a CF and measuring the maximal-error sum-capacity for fixed $C_{\mathrm{in}} > 0$ gives a curve in $C_{\mathrm{out}}$ that is discontinuous at $C_{\mathrm{out}} = 0$. In this case, we say that “negligible cooperation” results in a non-negligible capacity benefit.

Given these earlier results, a number of important questions remain open. For example, we wish to understand how many bits from the CF are needed to achieve the discontinuities already shown to be possible in the maximal-error case. We also seek to understand, in the average-error case, whether the sum-capacity gain can be discontinuous in $C_{\mathrm{out}}$.

For the first question, we note that while the demonstration of discontinuity at $C_{\mathrm{out}} = 0$ for the maximal-error case proves that negligible cooperation can yield a non-negligible benefit, it does not distinguish how many bits are required to effect that change nor whether that number of bits must grow with the blocklength $n$. We therefore begin by pushing that question to its extreme: we seek to understand the minimal output rate from the CF that can change network capacity. Our central result for the maximal-error case demonstrates that even a constant number of bits from the CF can yield a non-negligible impact on network capacity in the maximal-error case.

For the second question, we seek to gain a similar understanding of how many bits from the CF are required to obtain a non-negligible change to network capacity in the average-error case. Since in this case there are no prior results demonstrating the possibility of a discontinuity in $C_{\mathrm{out}}$, we begin by investigating whether the sum-capacity in the average-error case can ever be discontinuous. Our central result for the average-error case is that the average-error sum-capacity is continuous even at $C_{\mathrm{out}} = 0$. (See Corollary V.1.) Our proof relies on tools developed by Dueck [7] to prove the strong converse for the MAC. Saeedi Bidokhti and Kramer [8] and Kosut and Kliewer [9, Proposition 15] also use Dueck’s method to address similar problems. Our application of Dueck’s method first appears in [10, Appendix C].

In addition to the contributions above, our work also explicitly strengthens earlier results. Specifically, we refer the reader to Theorem IV.1 in Section IV and Corollary V.2 in Section V, which provide stronger versions of results derived in [6] and [5], respectively.

## II Related Work

A continuity problem similar to the one considered here appears in studying rate-limited feedback over the MAC. In that setting, Sarwate and Gastpar [11] use the dependence-balance bounds of Hekstra and Willems [12] to show that as the feedback rate converges to zero, the average-error capacity region converges to the average-error capacity region of the same MAC in the absence of feedback.

The problem we study here can also be formulated as an “edge removal problem” as introduced by Ho, Effros, and Jalali [13, 14]. The edge removal problem seeks to quantify the capacity effect of removing a single edge from a network. While bounds on this capacity impact exist in a number of limited scenarios (see, for example, [13] and [14]), the problem remains open in the general case. In the context of network coding, Langberg and Effros show that this problem is connected to a number of other open problems, including the difference between the $0$-error and $\epsilon$-error capacity regions [15] and the difference between the lossless source coding regions for independent and dependent sources [16].

In [9], Kosut and Kliewer present different variations of the edge removal problem in a unified setting. In their terminology, the present work investigates whether the network consisting of a MAC and a CF satisfies the “weak edge removal property” with respect to the average-error reliability criterion. A discussion in [10, Chapter 1] summarizes the known results for each variation of the edge removal problem.

The question of whether the capacity region of a network consisting of noiseless links is continuous with respect to the link capacities is studied by Gu, Effros, and Bakshi [17] and Chan and Grant [18]. The present work differs from [17, 18] in the network under consideration; while our network does have noiseless links (the CF input and output links), it also contains a multiterminal component (the MAC) which may exhibit interference or noise; no such component appears in [17, 18].

For the maximal-error case, our study focuses on the effect of a constant number of bits of communication in the memoryless setting. For noisy networks with memory, it is not difficult to see that even one bit of communication may indeed affect the capacity region. For example, consider a binary symmetric channel whose error probability $p$ is chosen at random and then fixed for all time. If $p$ equals $1/2$ with positive probability and otherwise takes a value less than $1/2$, then a single bit of feedback (not rate $1$, but exactly one bit, no matter how large the blocklength) from the receiver to the transmitter suffices to increase the capacity. For memoryless channels, the question is far more subtle and is the subject of our study.

In the next section, we present the cooperation model we consider in this work.

## III The Cooperation Facilitator Model

In this work, we study cooperation between two encoders that communicate their messages to a decoder over a stationary, memoryless, and discrete MAC. Such a MAC can be represented by the triple

$$(\mathcal{X}_1 \times \mathcal{X}_2,\ p(y|x_1,x_2),\ \mathcal{Y}),$$

where $\mathcal{X}_1$, $\mathcal{X}_2$, and $\mathcal{Y}$ are finite sets and $p(y|x_1,x_2)$ is a conditional probability mass function. For any positive integer $n$, the $n$th extension of this MAC is given by

$$p(y^n|x_1^n,x_2^n) \coloneqq \prod_{t=1}^{n} p(y_t|x_{1t},x_{2t}).$$
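
As an illustration (not from the paper), the $n$th extension can be computed directly from the single-letter law; the toy MAC below is a hypothetical noiseless binary adder channel, $Y = X_1 + X_2$:

```python
import itertools

# Hypothetical toy MAC: noiseless binary adder channel, Y = X1 + X2.
# p_single is deterministic here, but the code works for any pmf p(y|x1,x2).
def p_single(y, x1, x2):
    return 1.0 if y == x1 + x2 else 0.0

# n-th memoryless extension: p(y^n | x1^n, x2^n) = prod_t p(y_t | x1_t, x2_t).
def p_extension(y_seq, x1_seq, x2_seq):
    prob = 1.0
    for y, x1, x2 in zip(y_seq, x1_seq, x2_seq):
        prob *= p_single(y, x1, x2)
    return prob

# Sanity check: p(. | x1^n, x2^n) is a pmf over Y^n = {0, 1, 2}^n.
x1_seq, x2_seq = (0, 1, 1), (1, 1, 0)
total = sum(p_extension(y_seq, x1_seq, x2_seq)
            for y_seq in itertools.product([0, 1, 2], repeat=3))
print(total)  # 1.0
```

Because the channel acts independently on each time index, the check holds for any input pair and any blocklength.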

For each positive integer $n$, called the blocklength, and nonnegative real numbers $R_1$ and $R_2$, called the rates, we next define a $(2^{nR_1}, 2^{nR_2}, n)$-code for communication over a MAC with a $(C_{\mathrm{in}}, C_{\mathrm{out}})$-CF. Here $C_{\mathrm{in}} = (C_{1\mathrm{in}}, C_{2\mathrm{in}})$ and $C_{\mathrm{out}} = (C_{1\mathrm{out}}, C_{2\mathrm{out}})$ represent the capacities of the CF input and output links, respectively. (See Figure 1.)

### III-A Positive Rate Cooperation

For every $x > 0$, let $[x]$ denote the set $\{1, \dots, \lceil x \rceil\}$. For $i \in \{1,2\}$, the transmission of encoder $i$ to the CF is represented by a mapping

$$\varphi_i : [2^{nR_i}] \to [2^{nC_{i\mathrm{in}}}].$$

The CF uses the information it receives from the encoders to compute a function

$$\psi_i : [2^{nC_{1\mathrm{in}}}] \times [2^{nC_{2\mathrm{in}}}] \to [2^{nC_{i\mathrm{out}}}]$$

for encoder $i$, where $i \in \{1,2\}$. Encoder $i$ uses its message and what it receives from the CF to select a codeword according to

$$f_i : [2^{nR_i}] \times [2^{nC_{i\mathrm{out}}}] \to \mathcal{X}_i^n.$$

The decoder finds estimates of the transmitted messages using the channel output. It is represented by a mapping

$$g : \mathcal{Y}^n \to [2^{nR_1}] \times [2^{nR_2}].$$

The collection of mappings

$$(\varphi_1, \varphi_2, \psi_1, \psi_2, f_1, f_2, g)$$

defines a $(2^{nR_1}, 2^{nR_2}, n)$-code for the MAC with a $(C_{\mathrm{in}}, C_{\mathrm{out}})$-CF.[^1]

[^1]: Technically, the definition we present here is for a single round of cooperation. As discussed in [5], it is possible to define cooperation via a CF over multiple rounds. However, this general scenario does not alter our main proofs. This is due to the fact that in Lemma VI.2, the lower bound only needs one round of cooperation, while the upper bound holds regardless of the number of rounds.
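
The single round of cooperation above can be sketched end to end. Everything below (the rates, the modular maps standing in for $\varphi_i$ and $\psi_i$, and the placeholder $f_i$) is illustrative, not the paper's construction:

```python
import math

# Illustrative parameters (bits per channel use).
n = 4                      # blocklength
R1, R2 = 0.5, 0.5          # message rates
C_in = (0.5, 0.5)          # CF input link capacities
C_out = (0.25, 0.25)       # CF output link capacities

M1 = 2 ** math.ceil(n * R1)   # message-set sizes
M2 = 2 ** math.ceil(n * R2)

# phi_i: encoder i describes its message to the CF at rate C_i_in.
def phi(i, w):
    return w % 2 ** math.ceil(n * C_in[i])

# psi_i: the CF maps both descriptions to a rate-limited output for encoder i.
def psi(i, v1, v2):
    return (v1 + v2) % 2 ** math.ceil(n * C_out[i])

# f_i: encoder i picks a codeword from its message and the CF output.
# Here the "codeword" is just a labeled pair; a real code maps into X_i^n.
def f(i, w, z):
    return (w, z)

w1, w2 = 3, 2
assert w1 < M1 and w2 < M2
z1 = psi(0, phi(0, w1), phi(1, w2))
z2 = psi(1, phi(0, w1), phi(1, w2))
x1, x2 = f(0, w1, z1), f(1, w2, z2)
print(x1, x2)  # each channel input now depends on both messages via the CF
```

The point of the sketch is the information flow: each encoder's channel input is a function of its own message and a $2^{nC_{i\mathrm{out}}}$-valued signal that depends on both messages.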

### III-B Constant Size Cooperation

To address the setting of a constant number of cooperation bits, we modify the output link of the CF to have support $[2^k]$ for some fixed integer $k$; unlike the prior support $[2^{nC_{i\mathrm{out}}}]$, the support of this link is independent of the blocklength $n$. Then, for $i \in \{1,2\}$, the transmission of encoder $i$ to the CF is represented by a mapping

$$\varphi_i : [2^{nR_i}] \to [2^{nC_{i\mathrm{in}}}].$$

The CF uses the information it receives from the encoders to compute a function

$$\psi_i : [2^{nC_{1\mathrm{in}}}] \times [2^{nC_{2\mathrm{in}}}] \to [2^k]$$

for encoder $i$, where $i \in \{1,2\}$. Encoder $i$, as before, uses its message and what it receives from the CF to select a codeword according to

$$f_i : [2^{nR_i}] \times [2^k] \to \mathcal{X}_i^n.$$

We now say that

$$(\varphi_1, \varphi_2, \psi_1, \psi_2, f_1, f_2, g)$$

defines a $(2^{nR_1}, 2^{nR_2}, n)$-code for the MAC with a $(C_{\mathrm{in}}, k)$-CF.

### III-C Capacity Region

For a fixed code, the probability of decoding a particular transmitted message pair $(w_1, w_2)$ incorrectly equals

$$\lambda_n(w_1,w_2) \coloneqq \sum_{y^n : g(y^n) \ne (w_1,w_2)} p\big(y^n \,\big|\, f_1(w_1,z_1), f_2(w_2,z_2)\big),$$

where $z_1$ and $z_2$ are the CF outputs, calculated, for $i \in \{1,2\}$, according to

$$z_i = \psi_i\big(\varphi_1(w_1), \varphi_2(w_2)\big).$$

The average probability of error is defined as

$$P^{(n)}_{e,\mathrm{avg}} \coloneqq \frac{1}{2^{n(R_1+R_2)}} \sum_{w_1,w_2} \lambda_n(w_1,w_2),$$

and the maximal probability of error is given by

$$P^{(n)}_{e,\mathrm{max}} \coloneqq \max_{w_1,w_2} \lambda_n(w_1,w_2).$$
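
For intuition, the two criteria can already diverge for trivial codes. A minimal sketch (not from the paper), assuming a hypothetical noiseless adder MAC $Y = X_1 + X_2$ with single-letter codewords $f_1(w_1) = w_1$, $f_2(w_2) = w_2$:

```python
import itertools

# Decoder: guesses (0,0) for y = 0 and (1,1) for y = 2; the output y = 1 is
# ambiguous between (0,1) and (1,0), and the decoder commits to (1,0).
def g(y):
    return {0: (0, 0), 1: (1, 0), 2: (1, 1)}[y]

def lam(w1, w2):
    # Deterministic channel: per-pair error probability is 0 or 1.
    return 0.0 if g(w1 + w2) == (w1, w2) else 1.0

pairs = list(itertools.product([0, 1], repeat=2))
p_avg = sum(lam(w1, w2) for w1, w2 in pairs) / len(pairs)
p_max = max(lam(w1, w2) for w1, w2 in pairs)
print(p_avg, p_max)  # 0.25 1.0
```

The pair $(0,1)$ is the only one the decoder misses, so the maximal error is $1$ while the average error is only $1/4$; gaps of this kind are what separate the two reliability criteria.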

A rate pair $(R_1, R_2)$ is achievable with respect to the average-error reliability criterion if there exists an infinite sequence of $(2^{nR_1}, 2^{nR_2}, n)$-codes such that $P^{(n)}_{e,\mathrm{avg}} \to 0$ as $n \to \infty$. The average-error capacity region of a MAC with a $(C_{\mathrm{in}}, C_{\mathrm{out}})$-CF, denoted by $\mathscr{C}_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}})$, is the closure of the set of all rate pairs that are achievable with respect to the average-error reliability criterion. The average-error sum-capacity is defined as

$$C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}}) \coloneqq \max_{(R_1,R_2) \in \mathscr{C}_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}})} (R_1 + R_2).$$

By replacing $P^{(n)}_{e,\mathrm{avg}}$ with $P^{(n)}_{e,\mathrm{max}}$, we can similarly define achievable rates with respect to the maximal-error reliability criterion, the maximal-error capacity region, and the maximal-error sum-capacity. For a MAC with a $(C_{\mathrm{in}}, C_{\mathrm{out}})$-CF, we denote the maximal-error capacity region and sum-capacity by $\mathscr{C}_{\mathrm{max}}(C_{\mathrm{in}}, C_{\mathrm{out}})$ and $C_{\mathrm{sum,max}}(C_{\mathrm{in}}, C_{\mathrm{out}})$, respectively.

## IV Prior Results on the Sum-Capacity Gain of Cooperation

We next review a number of results from [5, 6] which describe the sum-capacity gain of cooperation under the CF model. We begin with the average-error case.

Consider a discrete MAC $(\mathcal{X}_1 \times \mathcal{X}_2, p(y|x_1,x_2), \mathcal{Y})$. Let $p_{\mathrm{ind}}(x_1,x_2) = p_{\mathrm{ind}}(x_1)\,p_{\mathrm{ind}}(x_2)$ be a product distribution that satisfies

$$I_{\mathrm{ind}}(X_1,X_2;Y) \coloneqq I(X_1,X_2;Y)\big|_{p_{\mathrm{ind}}(x_1,x_2)} = \max_{p(x_1)p(x_2)} I(X_1,X_2;Y); \tag{1}$$

subscript “ind” here denotes independence between the outputs of encoders 1 and 2 in the absence of cooperation. In addition, suppose that there exists a distribution $p_{\mathrm{dep}}(x_1,x_2)$ such that the support of $p_{\mathrm{dep}}$ is contained in the support of $p_{\mathrm{ind}}$,

$$I_{\mathrm{dep}}(X_1,X_2;Y) \coloneqq I(X_1,X_2;Y)\big|_{p_{\mathrm{dep}}(x_1,x_2)},$$

and

$$I_{\mathrm{dep}}(X_1,X_2;Y) + D\big(p_{\mathrm{dep}}(y) \,\big\|\, p_{\mathrm{ind}}(y)\big) > I_{\mathrm{ind}}(X_1,X_2;Y); \tag{2}$$

here $p_{\mathrm{dep}}(y)$ and $p_{\mathrm{ind}}(y)$ are the marginals on $\mathcal{Y}$ resulting from $p_{\mathrm{dep}}$ and $p_{\mathrm{ind}}$, respectively. Consider the class of all discrete MACs for which input distributions $p_{\mathrm{ind}}$ and $p_{\mathrm{dep}}$, as described above, exist. Theorem IV.1 below is a stronger version of [6, Theorem 3] in the two-user case; the latter result is stated as a corollary below. A similar result holds for the Gaussian MAC [6, Prop. 9].

###### Theorem IV.1.

Let a MAC in the class described above be given, and suppose $C_{\mathrm{in}} \in \mathbb{R}^2_{>0}$ and $v \in \mathbb{R}^2_{\ge 0} \setminus \{(0,0)\}$. Then there exists a constant $K > 0$, which depends only on the MAC and $v$, such that when $h > 0$ is sufficiently small,

$$C_{\mathrm{sum}}(C_{\mathrm{in}}, hv) - C_{\mathrm{sum}}(C_{\mathrm{in}}, 0) \ge K\sqrt{h} + o(\sqrt{h}). \tag{3}$$

The proof of Theorem IV.1 appears in Subsection IX-A.

In the above theorem, dividing both sides of (3) by $h$ and letting $h \to 0^+$ results in the next corollary.[^2]

[^2]: Note that Corollary IV.1 does not lead to any conclusions regarding continuity; a function with an infinite derivative at $0$ can be continuous (e.g., $x \mapsto \sqrt{x}$) or discontinuous (e.g., $x \mapsto \mathbf{1}\{x > 0\}$).

###### Corollary IV.1.

For any MAC in the class described above, any $C_{\mathrm{in}} \in \mathbb{R}^2_{>0}$, and any $v \in \mathbb{R}^2_{\ge 0} \setminus \{(0,0)\}$,

$$\lim_{h \to 0^+} \frac{C_{\mathrm{sum}}(C_{\mathrm{in}}, hv) - C_{\mathrm{sum}}(C_{\mathrm{in}}, 0)}{h} = \infty.$$

We next describe the maximal-error sum-capacity gain. While it is possible in the average-error scenario to achieve a sum-capacity that has an infinite slope, a stronger result is known in the maximal-error case. There exists a class of MACs for which the maximal-error sum-capacity exhibits a discontinuity in the capacities of the CF output links. This is stated formally in the next proposition, which is a special case of [5, Proposition 5]. The proposition relies on the existence of a discrete MAC with average-error sum-capacity larger than its maximal-error sum-capacity; that existence was first proven by Dueck [19]. We investigate further properties of Dueck’s MAC in [5, Subsection VI-E].

###### Proposition IV.1.

Consider a discrete MAC for which

$$C_{\mathrm{sum}}(0,0) > C_{\mathrm{sum,max}}(0,0). \tag{4}$$

Fix $C_{\mathrm{in}} \in \mathbb{R}^2_{>0}$. Then $C_{\mathrm{out}} \mapsto C_{\mathrm{sum,max}}(C_{\mathrm{in}}, C_{\mathrm{out}})$ is not continuous at $C_{\mathrm{out}} = 0$.

We next present the main results of this work.

## V Our Results: Continuity of Average- and Maximal-Error Sum-Capacities

In the prior section, for a fixed $C_{\mathrm{in}}$, we discussed previous results regarding the values of $C_{\mathrm{sum}}$ and $C_{\mathrm{sum,max}}$ as functions of $C_{\mathrm{out}}$ at $C_{\mathrm{out}} = 0$. In this section, we do not limit ourselves to the point $C_{\mathrm{out}} = 0$; rather, we study $C_{\mathrm{sum}}$ over its entire domain.

We begin by considering the case where the CF has full access to the messages. Formally, for a given discrete MAC, let the components of $C^*_{\mathrm{in}} = (C^*_{1\mathrm{in}}, C^*_{2\mathrm{in}})$ be sufficiently large so that any CF with input link capacities $C^*_{1\mathrm{in}}$ and $C^*_{2\mathrm{in}}$ has full knowledge of the encoders’ messages. For example, we can choose $C^*_{\mathrm{in}}$ such that

$$\min\{C^*_{1\mathrm{in}}, C^*_{2\mathrm{in}}\} > \max_{p(x_1,x_2)} I(X_1,X_2;Y).$$

Our first result addresses the continuity of $C_{\mathrm{sum}}(C^*_{\mathrm{in}}, C_{\mathrm{out}})$ as a function of $C_{\mathrm{out}}$ over $\mathbb{R}^2_{\ge 0}$.

###### Theorem V.1.

For any discrete MAC, the mapping

$$C_{\mathrm{out}} \mapsto C_{\mathrm{sum}}(C^*_{\mathrm{in}}, C_{\mathrm{out}}),$$

defined on $\mathbb{R}^2_{\ge 0}$, is continuous.

While Theorem V.1 focuses on the scenario where $C_{\mathrm{in}} = C^*_{\mathrm{in}}$, the result is sufficiently strong to address the continuity problem for a fixed, arbitrary $C_{\mathrm{in}}$ at $C_{\mathrm{out}} = 0$. To see this, note that for all $C_{\mathrm{out}} \in \mathbb{R}^2_{\ge 0}$,

$$C_{\mathrm{sum}}(C_{\mathrm{in}}, 0) \le C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}}) \le C_{\mathrm{sum}}(C^*_{\mathrm{in}}, C_{\mathrm{out}}). \tag{5}$$

Corollary V.1, below, now follows from Theorem V.1 by letting $C_{\mathrm{out}}$ approach zero in (5) and noting that for all $C_{\mathrm{in}}$,

$$C_{\mathrm{sum}}(C^*_{\mathrm{in}}, 0) = C_{\mathrm{sum}}(C_{\mathrm{in}}, 0) = C_{\mathrm{sum}}(0, 0).$$
###### Corollary V.1.

For any discrete MAC and any fixed $C_{\mathrm{in}} \in \mathbb{R}^2_{\ge 0}$, the mapping

$$C_{\mathrm{out}} \mapsto C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})$$

is continuous at $C_{\mathrm{out}} = 0$.

Recall that Proposition IV.1 gives a sufficient condition under which $C_{\mathrm{out}} \mapsto C_{\mathrm{sum,max}}(C_{\mathrm{in}}, C_{\mathrm{out}})$ is not continuous at $C_{\mathrm{out}} = 0$ for a fixed $C_{\mathrm{in}}$. Corollary V.1 implies that the sufficient condition is also necessary. This is stated in the next corollary. We prove this corollary in Subsection IX-B.

###### Corollary V.2.

Fix a discrete MAC and $C_{\mathrm{in}} \in \mathbb{R}^2_{>0}$. Then $C_{\mathrm{out}} \mapsto C_{\mathrm{sum,max}}(C_{\mathrm{in}}, C_{\mathrm{out}})$ is not continuous at $C_{\mathrm{out}} = 0$ if and only if

$$C_{\mathrm{sum}}(0,0) > C_{\mathrm{sum,max}}(0,0). \tag{6}$$

We next describe the second main result of this paper. Our first main result, Theorem V.1, shows that $C_{\mathrm{sum}}(C^*_{\mathrm{in}}, C_{\mathrm{out}})$ is continuous in $C_{\mathrm{out}}$ over $\mathbb{R}^2_{\ge 0}$. The next result shows that proving the continuity of $C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})$ over $\mathbb{R}^2_{\ge 0} \times \mathbb{R}^2_{\ge 0}$ is equivalent to demonstrating its continuity on certain axes. Specifically, it suffices to check the continuity of $C_{\mathrm{sum}}$ when one of $C_{1\mathrm{out}}$ and $C_{2\mathrm{out}}$ approaches zero, while the other arguments of $C_{\mathrm{sum}}$ are fixed positive numbers.

###### Theorem V.2.

For any discrete MAC, the mapping

$$(C_{\mathrm{in}}, C_{\mathrm{out}}) \mapsto C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}}),$$

defined on $\mathbb{R}^2_{\ge 0} \times \mathbb{R}^2_{\ge 0}$, is continuous if and only if for all $C_{\mathrm{in}} \in \mathbb{R}^2_{\ge 0}$ and all $C_{1\mathrm{out}}, C_{2\mathrm{out}} \ge 0$, we have

$$C_{\mathrm{sum}}\big(C_{\mathrm{in}}, (\tilde{C}_{1\mathrm{out}}, C_{2\mathrm{out}})\big) \to C_{\mathrm{sum}}\big(C_{\mathrm{in}}, (0, C_{2\mathrm{out}})\big)$$

as $\tilde{C}_{1\mathrm{out}} \to 0^+$, and

$$C_{\mathrm{sum}}\big(C_{\mathrm{in}}, (C_{1\mathrm{out}}, \tilde{C}_{2\mathrm{out}})\big) \to C_{\mathrm{sum}}\big(C_{\mathrm{in}}, (C_{1\mathrm{out}}, 0)\big)$$

as $\tilde{C}_{2\mathrm{out}} \to 0^+$.

We remark that using a time-sharing argument, it is possible to show that $C_{\mathrm{sum}}$ is concave on its domain and thus continuous on its interior. Therefore, it suffices to study the continuity of $C_{\mathrm{sum}}$ on the boundary of its domain. Note that Theorem V.2 leaves the continuity problem of the average-error sum-capacity open in one case. If $C_{1\mathrm{in}}$ and $C_{2\mathrm{in}}$ are positive but not sufficiently large and $C_{2\mathrm{out}} > 0$, then we have not established the continuity of the sum-capacity, solely as a function of $C_{1\mathrm{out}}$, at $C_{1\mathrm{out}} = 0$. (Clearly, the symmetric case where $C_{1\mathrm{out}} > 0$ and $C_{2\mathrm{out}} \to 0^+$ remains open as well.) This last scenario remains a subject for future work.

Finally, we present the third main contribution of our work, the discontinuity of sum-capacity in the maximal-error setting when the outgoing edges of the CF can send only a constant number of bits.

###### Theorem V.3.

For $k$ sufficiently large, the discrete MAC presented in [19] satisfies

$$C_{\mathrm{sum,max}}(C^*_{\mathrm{in}}, k/n) > C_{\mathrm{sum,max}}(C^*_{\mathrm{in}}, 0). \tag{7}$$

We prove our key results in the following sections. In Sections VI, VII, and VIII, we outline the proofs of Theorems V.1, V.2, and V.3, respectively. We provide detailed proofs of our claims in Section IX.

## VI Continuity of Sum-Capacity: The $C_{\mathrm{in}} = C^*_{\mathrm{in}}$ Case

We start our study of the continuity of $C_{\mathrm{sum}}(C^*_{\mathrm{in}}, C_{\mathrm{out}})$ by presenting lower and upper bounds in terms of an auxiliary function $\sigma(\delta)$ defined for $\delta \ge 0$ (Lemma VI.2). This function is similar to a tool used by Dueck in [7] but differs from [7] in its reliance on a time-sharing random variable denoted by $U$. The random variable $U$ plays two roles. First, it ensures that $\sigma$ is concave, which immediately proves the continuity of $\sigma$ over $\mathbb{R}_{>0}$. Second, together with a lemma from [7] (Lemma VI.5 below), it helps us find a single-letter upper bound for $\sigma$ (Corollary VI.1). We then use the single-letter upper bound to prove continuity at $\delta = 0$.

The following definitions are useful for the description of our lower and upper bounds for . For every finite alphabet and all , define the set of probability mass functions on as

Intuitively, $\mathcal{P}^{(n)}_U(\delta)$ captures a family of “mildly dependent” input distributions for our MAC; this mild dependence is parametrized by the bound $\delta$ on the normalized conditional mutual information between the channel inputs. In the discussion that follows, we relate $\delta$ to the amount of information that the CF shares with the encoders. For every positive integer $n$, let $\sigma_n$ denote the function[^3]

[^3]: For $n = 1$, this function also appears in the study of the MAC with negligible feedback [11].

$$\sigma_n(\delta) \coloneqq \sup_{\mathcal{U}} \max_{p \in \mathcal{P}^{(n)}_U(\delta)} \frac{1}{n} I(X_1^n, X_2^n; Y^n \mid U), \tag{8}$$

where the supremum is over all finite sets $\mathcal{U}$. Thus $\sigma_n(\delta)$ captures something like the maximal sum-rate achievable under the mild dependence described above. As we see in Lemma VI.4, conditioning on the random variable $U$ in (8) ensures that $\sigma_n$ is concave.

For every $\delta \ge 0$, the sequence $(n\sigma_n(\delta))_{n \ge 1}$ satisfies a superadditivity property, which appears in Lemma VI.1 below. Intuitively, this property says that the sum-rate of the best code of blocklength $m+n$ is bounded from below by the sum-rate of the concatenation of the best codes of blocklengths $m$ and $n$. We prove this lemma in Subsection IX-C.

###### Lemma VI.1.

For all $\delta \ge 0$, all positive integers $m$ and $n$, and $\sigma_n$ defined as in (8), we have

$$(m+n)\,\sigma_{m+n}(\delta) \ge m\,\sigma_m(\delta) + n\,\sigma_n(\delta).$$

Given Lemma VI.1, [20, Appendix 4A, Lemma 2] now implies that the sequence of mappings $(\sigma_n)_{n \ge 1}$ converges pointwise to some mapping $\sigma$, and

$$\sigma(\delta) \coloneqq \lim_{n \to \infty} \sigma_n(\delta) = \sup_{n} \sigma_n(\delta). \tag{9}$$
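
The limit-equals-supremum step in (9) is an instance of Fekete's lemma for superadditive sequences. A quick numeric sanity check on a sample superadditive sequence (illustrative; not the paper's $n\sigma_n(\delta)$):

```python
import math

# Toy superadditive sequence a(n) = n - log2(n + 1), playing the role of
# n * sigma_n; then a(n)/n = 1 - log2(n + 1)/n increases toward sup = 1.
def a(n):
    return n - math.log2(n + 1)

# Superadditivity: a(m + n) >= a(m) + a(n). Here the gap equals
# log2((m + 1)(n + 1) / (m + n + 1)) >= 0 for m, n >= 1.
assert all(a(m + n) >= a(m) + a(n)
           for m in range(1, 40) for n in range(1, 40))

# Fekete's lemma: a(n)/n converges to sup_n a(n)/n.
sigma = [a(n) / n for n in range(1, 2001)]
print(max(sigma), sigma[-1])  # the running sup and the last term nearly agree
```

Since $a(n)/n$ is increasing here, the supremum over the truncated range is attained at the last term, and both approach the limit $1$.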

We next present our lower and upper bounds for $C_{\mathrm{sum}}(C^*_{\mathrm{in}}, C_{\mathrm{out}})$ in terms of $\sigma$. The lower bound follows directly from [6, Corollary 8]. We prove the upper bound in Subsection IX-D.

###### Lemma VI.2.

For any discrete MAC and any $C_{\mathrm{out}} = (C_{1\mathrm{out}}, C_{2\mathrm{out}}) \in \mathbb{R}^2_{\ge 0}$, we have

$$\sigma(C_{1\mathrm{out}} + C_{2\mathrm{out}}) - \min\{C_{1\mathrm{out}}, C_{2\mathrm{out}}\} \le C_{\mathrm{sum}}(C^*_{\mathrm{in}}, C_{\mathrm{out}}) \le \sigma(C_{1\mathrm{out}} + C_{2\mathrm{out}}).$$

From the remark following Theorem V.2, we only need to prove that $C_{\mathrm{sum}}(C^*_{\mathrm{in}}, C_{\mathrm{out}})$ is continuous on the boundary of $\mathbb{R}^2_{\ge 0}$. On the boundary of $\mathbb{R}^2_{\ge 0}$, however, $\min\{C_{1\mathrm{out}}, C_{2\mathrm{out}}\} = 0$, so the two bounds of Lemma VI.2 coincide. Thus it suffices to show that $\sigma$ is continuous on $\mathbb{R}_{\ge 0}$, which is stated in the next lemma.

###### Lemma VI.3.

For any finite alphabet MAC, the function $\sigma$, defined by (9), is continuous on $\mathbb{R}_{\ge 0}$.

To prove Lemma VI.3, we first consider the continuity of $\sigma$ on $\mathbb{R}_{>0}$ and then focus on the point $\delta = 0$. Note that $\sigma$ is the pointwise limit of the sequence of functions $(\sigma_n)_{n \ge 1}$. Lemma VI.4 uses a time-sharing argument as in [21] to show that each $\sigma_n$ is concave. (See Subsection IX-E for the proof.) Therefore, $\sigma$ is concave as well, and since $\mathbb{R}_{>0}$ is open, $\sigma$ is continuous on $\mathbb{R}_{>0}$.

###### Lemma VI.4.

For all $n \ge 1$, $\sigma_n$ is concave on $\mathbb{R}_{\ge 0}$.

To prove the continuity of $\sigma$ at $\delta = 0$, we find an upper bound for $\sigma(\delta)$ in terms of $\sigma_1$. For some finite set $\mathcal{U}$ and $\delta \ge 0$, consider a distribution $p \in \mathcal{P}^{(n)}_U(\delta)$. By the definition of $\mathcal{P}^{(n)}_U(\delta)$,

$$I(X_1^n; X_2^n \mid U) \le n\delta. \tag{10}$$

Finding a bound for $\sigma$ in terms of $\sigma_1$ requires a single-letter version of (10). In [7], Dueck presents the necessary result. We present Dueck’s result in the next lemma and provide the proof in Subsection IX-F.

###### Lemma VI.5 (Dueck’s Lemma [7]).

Fix positive reals $\delta$ and $\epsilon$, a positive integer $n$, and a finite alphabet $\mathcal{U}$. If $I(X_1^n; X_2^n \mid U) \le n\delta$, then there exists a set $T \subseteq \{1, \dots, n\}$ satisfying $|T| \le n\delta/\epsilon$ such that

$$\forall\, t \notin T:\quad I(X_{1t}; X_{2t} \mid U, X_1^T, X_2^T) \le \epsilon,$$

where for $i \in \{1,2\}$, $X_i^T \coloneqq (X_{it})_{t \in T}$.

Corollary VI.1 uses Lemma VI.5 to find an upper bound for $\sigma(\delta)$ in terms of $\sigma_1$. The proof of this corollary, in Subsection IX-G, combines ideas from [7] with results derived here.

###### Corollary VI.1.

For all $\delta \ge 0$ and $\epsilon > 0$, we have

$$\sigma(\delta) \le \frac{\delta}{\epsilon} \log\big(|\mathcal{X}_1||\mathcal{X}_2|\big) + \sigma_1(\epsilon).$$

By Corollary VI.1, we have

$$\sigma(0) \le \lim_{\delta \to 0^+} \sigma(\delta) \le \sigma_1(\epsilon).$$

If we calculate the limit $\epsilon \to 0^+$, we get

$$\sigma(0) \le \lim_{\delta \to 0^+} \sigma(\delta) \le \lim_{\epsilon \to 0^+} \sigma_1(\epsilon).$$

Since $\sigma(0) = \sigma_1(0)$,[^4] it suffices to show that $\sigma_1$ is continuous at $\delta = 0$. Recall that $\sigma_1$ is defined as

[^4]: This follows from the converse proof of the MAC capacity region in the absence of cooperation [3, Theorem 15.3.1].

$$\sigma_1(\delta) \coloneqq \sup_{\mathcal{U}} \max_{p \in \mathcal{P}^{(1)}_U(\delta)} I(X_1, X_2; Y \mid U). \tag{11}$$

Since in (11) the supremum is over all finite sets $\mathcal{U}$, it is difficult to find an upper bound for $\sigma_1$ near $\delta = 0$ directly. Instead, we first show, in Subsection IX-H, that it is possible to assume that $\mathcal{U}$ has at most two elements.

###### Lemma VI.6 (Cardinality of U).

In the definition of $\sigma_1$, it suffices to calculate the supremum over all sets $\mathcal{U}$ with $|\mathcal{U}| \le 2$.

In Subsection IX-I, we prove the continuity of $\sigma_1$ at $\delta = 0$ from Lemma VI.6 using standard tools, such as Pinsker’s inequality [3, Lemma 17.3.3] and the lower bound on KL divergence [3, Lemma 11.6.1]. The continuity of $\sigma_1$ on $\mathbb{R}_{>0}$ follows from the concavity of $\sigma_1$ on $\mathbb{R}_{\ge 0}$.

###### Lemma VI.7 (Continuity of σ1).

The function $\sigma_1$ is continuous on $\mathbb{R}_{\ge 0}$.

## VII Continuity of Sum-Capacity: Arbitrary $C_{\mathrm{in}}$

In this section, we study the continuity of $C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})$ over $\mathbb{R}^2_{\ge 0} \times \mathbb{R}^2_{\ge 0}$ with the aim of proving Theorem V.2.

Fix $(C_{\mathrm{in}}, C_{\mathrm{out}})$. For arbitrary $(\tilde{C}_{\mathrm{in}}, \tilde{C}_{\mathrm{out}})$, the triangle inequality implies

$$\big|C_{\mathrm{sum}}(\tilde{C}_{\mathrm{in}}, \tilde{C}_{\mathrm{out}}) - C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})\big| \le \big|C_{\mathrm{sum}}(\tilde{C}_{\mathrm{in}}, \tilde{C}_{\mathrm{out}}) - C_{\mathrm{sum}}(C_{\mathrm{in}}, \tilde{C}_{\mathrm{out}})\big| + \big|C_{\mathrm{sum}}(C_{\mathrm{in}}, \tilde{C}_{\mathrm{out}}) - C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})\big|. \tag{12}$$

We study this bound in the limit $(\tilde{C}_{\mathrm{in}}, \tilde{C}_{\mathrm{out}}) \to (C_{\mathrm{in}}, C_{\mathrm{out}})$. We begin by considering the first term in (12).

###### Lemma VII.1 (Continuity of Sum-Capacity in Cin).

There exists a function

$$\Delta : \mathbb{R}^2_{\ge 0} \times \mathbb{R}^2_{\ge 0} \to \mathbb{R}_{\ge 0}$$

that satisfies

$$\lim_{\tilde{C}_{\mathrm{in}} \to C_{\mathrm{in}}} \Delta(C_{\mathrm{in}}, \tilde{C}_{\mathrm{in}}) = 0,$$

and for any finite alphabet MAC and $C_{\mathrm{out}} \in \mathbb{R}^2_{\ge 0}$, we have

$$\big|C_{\mathrm{sum}}(\tilde{C}_{\mathrm{in}}, C_{\mathrm{out}}) - C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})\big| \le \Delta(C_{\mathrm{in}}, \tilde{C}_{\mathrm{in}}).$$

We prove Lemma VII.1 in Subsection IX-J.

Applying Lemma VII.1 to (12), we get

$$\big|C_{\mathrm{sum}}(\tilde{C}_{\mathrm{in}}, \tilde{C}_{\mathrm{out}}) - C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})\big| \le \Delta(C_{\mathrm{in}}, \tilde{C}_{\mathrm{in}}) + \big|C_{\mathrm{sum}}(C_{\mathrm{in}}, \tilde{C}_{\mathrm{out}}) - C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})\big|.$$

Thus, to calculate the limit as $\tilde{C}_{\mathrm{out}} \to C_{\mathrm{out}}$, Lemma VII.2 investigates

$$\lim_{\tilde{C}_{\mathrm{out}} \to C_{\mathrm{out}}} \big|C_{\mathrm{sum}}(C_{\mathrm{in}}, \tilde{C}_{\mathrm{out}}) - C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})\big|.$$

We prove this lemma in Subsection IX-K.

###### Lemma VII.2 (Continuity of Sum-Capacity in Cout).

For any finite alphabet MAC and any $C_{\mathrm{in}} \in \mathbb{R}^2_{\ge 0}$, proving that

$$\lim_{\tilde{C}_{\mathrm{out}} \to C_{\mathrm{out}}} C_{\mathrm{sum}}(C_{\mathrm{in}}, \tilde{C}_{\mathrm{out}}) = C_{\mathrm{sum}}(C_{\mathrm{in}}, C_{\mathrm{out}})$$

is equivalent to showing that for all $C_{1\mathrm{out}}, C_{2\mathrm{out}} \ge 0$, we have

$$\lim_{\tilde{C}_{1\mathrm{out}} \to 0^+} C_{\mathrm{sum}}\big(C_{\mathrm{in}}, (\tilde{C}_{1\mathrm{out}}, C_{2\mathrm{out}})\big) = C_{\mathrm{sum}}\big(C_{\mathrm{in}}, (0, C_{2\mathrm{out}})\big),$$
$$\lim_{\tilde{C}_{2\mathrm{out}} \to 0^+} C_{\mathrm{sum}}\big(C_{\mathrm{in}}, (C_{1\mathrm{out}}, \tilde{C}_{2\mathrm{out}})\big) = C_{\mathrm{sum}}\big(C_{\mathrm{in}}, (C_{1\mathrm{out}}, 0)\big).$$

## VIII Discontinuity of Sum-Capacity with a Constant Number of Cooperation Bits

In this section we prove Theorem V.3. We start by presenting Dueck’s deterministic memoryless MAC from [19]. Consider the MAC $(\mathcal{X}_1 \times \mathcal{X}_2, p(y|x_1,x_2), \mathcal{Y})$ where $\mathcal{X}_1 = \{a, b, A, B\}$, $\mathcal{X}_2 = \{0, 1\}$, and $\mathcal{Y} = \{a, b, A, B, c, C\} \times \{0, 1\}$. The probability transition matrix is $p(y|x_1,x_2) = \mathbf{1}\{y = W(x_1,x_2)\}$ for the deterministic mapping $W : \mathcal{X}_1 \times \mathcal{X}_2 \to \mathcal{Y}$. The mapping $W$ is defined as

$$W(x_1,x_2) \coloneqq \begin{cases} (c, 0) & \text{if } (x_1,x_2) \in \{(a,0),(b,0)\} \\ (C, 1) & \text{if } (x_1,x_2) \in \{(A,1),(B,1)\} \\ (x_1,x_2) & \text{otherwise.} \end{cases}$$

For positive integer $n$, we define the mapping $W^n : \mathcal{X}_1^n \times \mathcal{X}_2^n \to \mathcal{Y}^n$ as $W^n(x_1^n, x_2^n) = \big((y_{11}, y_{21}), \dots, (y_{1n}, y_{2n})\big)$ if and only if for all $i \in \{1, \dots, n\}$,

$$(y_{1i}, y_{2i}) = W(x_{1i}, x_{2i}).$$
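
The deterministic mapping $W$ and its coordinatewise extension are straightforward to write out (a sketch using the alphabet labels above):

```python
# Dueck's deterministic MAC: inputs x1 in {a, b, A, B}, x2 in {0, 1}.
def W(x1, x2):
    if (x1, x2) in {("a", 0), ("b", 0)}:
        return ("c", 0)   # lowercase letters collide when x2 = 0
    if (x1, x2) in {("A", 1), ("B", 1)}:
        return ("C", 1)   # uppercase letters collide when x2 = 1
    return (x1, x2)       # every other input pair is received noiselessly

# Blockwise extension W^n applies W coordinatewise.
def W_n(x1_seq, x2_seq):
    return tuple(W(x1, x2) for x1, x2 in zip(x1_seq, x2_seq))

print(W("a", 0), W("b", 0))  # both give ('c', 0): a and b are confusable
print(W("a", 1))             # ('a', 1): the ambiguity disappears when x2 = 1
```

Whether encoder 1's letter survives the channel thus depends on encoder 2's input, which is the collision structure behind the average- versus maximal-error gap for this MAC.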

Set $C_{\mathrm{in}} = C^*_{\mathrm{in}}$ to allow the CF to have access to both source messages $W_1$ and $W_2$. We use the following theorem from [19].

###### Theorem VIII.1 (Outer Bound on the Maximal-Error Sum-Capacity[19]).

For the MAC defined above, we have

$$C_{\mathrm{sum,max}}(0,0) \le \max_{0 \le p \le 1/2} \big[H(1/3) + 2/3 - p + H(p)\big].$$
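
Setting the derivative of $-p + H(p)$ to zero gives $\log_2\frac{1-p}{p} = 1$, i.e., $p = 1/3$, at which the bound evaluates to $2H(1/3) + 1/3 = 2\log_2 3 - 1 \approx 2.17$ bits. A quick numeric check:

```python
import math

def H(p):
    # Binary entropy in bits, with H(0) = H(1) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bound(p):
    return H(1 / 3) + 2 / 3 - p + H(p)

# Grid search over p in [0, 1/2]; the maximizer should sit at p = 1/3,
# where the bound equals 2*H(1/3) + 1/3 = 2*log2(3) - 1.
grid = [i / 10 ** 5 for i in range(0, 50001)]
p_star = max(grid, key=bound)
print(p_star, bound(p_star))  # approx 0.3333 and approx 2.1699
```
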

Optimizing over $p$, and noting that