## I Introduction

Interference is an important limiting factor in the capacities of many communication networks. One way to reduce interference is to enable network nodes to work together to coordinate their transmissions. Strategies that employ coordinated transmissions are called cooperation strategies.

Perhaps the simplest cooperation strategy is “time-sharing” (e.g., [3, Theorem 15.3.2]), where nodes avoid interference by taking turns transmitting. A popular alternative model is the “conferencing” cooperation model [4]; in conferencing, unlike in time-sharing, encoders share information about the messages they wish to transmit and use that shared information to coordinate their channel inputs. In this work, we employ a similar approach, but in our cooperation model, encoders communicate indirectly. Specifically, the encoders communicate through another node, which we call the cooperation facilitator (CF) [5, 6]. Figure 1 depicts the CF model in the two-user multiple access channel (MAC) scenario.

The CF enables cooperation between the encoders through
its rate-limited input and output links. Prior to choosing
a codeword to transmit over the channel, each encoder sends a
function of its message to the CF. The CF uses the information
it receives from *both* encoders to compute
a rate-limited function for each encoder. It then transmits the
computed values over its output links. Finally, each encoder selects a
codeword using its message and the information it receives from
the CF.
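
To make the order of operations concrete, here is a minimal simulation of one round of the scheme just described. All alphabet sizes and all mappings below are made-up placeholders, not constructions from this paper:

```python
import random

# Illustrative sizes (placeholders): 8 messages per encoder, CF input
# alphabet of size 4, CF output alphabet of size 2, blocklength 3.
M, K_IN, K_OUT, N = 8, 4, 2, 3

# Step 1: each encoder sends a rate-limited function of its message to the CF.
phi1 = lambda m: m % K_IN
phi2 = lambda m: m % K_IN

# Step 2: the CF computes a rate-limited function of BOTH received values.
psi1 = lambda a, b: (a + b) % K_OUT
psi2 = lambda a, b: (a * b) % K_OUT

# Step 3: each encoder picks a codeword using its message AND the CF output.
def encode(seed, m, z):
    random.seed(seed * 10000 + m * 10 + z)  # deterministic "codebook" lookup
    return [random.randint(0, 1) for _ in range(N)]

m1, m2 = 5, 2
z1 = psi1(phi1(m1), phi2(m2))  # CF output to encoder 1
z2 = psi2(phi1(m1), phi2(m2))  # CF output to encoder 2
x1, x2 = encode(1, m1, z1), encode(2, m2, z2)
print(z1, z2, x1, x2)
```

Note that each CF output depends on what it heard from *both* encoders, which is what distinguishes this model from simple forwarding.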

To simplify our discussion in this section, suppose the CF input link capacities both equal $C_{\mathrm{in}}$ and the CF output link capacities both equal $C_{\mathrm{out}}$. If $C_{\mathrm{out}} \geq C_{\mathrm{in}}$, then the optimal strategy for the CF is to simply forward the information it receives from one encoder to the other. Using the capacity region of the MAC with conferencing encoders [4], it follows that the average-error sum-capacity gain of CF cooperation in this case is bounded from above by $2C_{\mathrm{in}}$ and does not depend on the precise value of $C_{\mathrm{out}}$. If $C_{\mathrm{out}} < C_{\mathrm{in}}$, however, the situation is more complicated, since the CF can no longer forward all of its incoming information. While the upper bound $2C_{\mathrm{in}}$ is still valid, the dependence of the sum-capacity gain on $C_{\mathrm{out}}$ is less clear. If the CF simply forwards part of the information it receives, then again by [4], the average-error sum-capacity gain is at most $2C_{\mathrm{out}}$. The bound $2C_{\mathrm{out}}$ has an intuitive interpretation: it reflects the amount of information the CF shares with the encoders, perhaps suggesting that the benefit of sharing information with rate $C_{\mathrm{out}}$ with the encoders is at most $2C_{\mathrm{out}}$. It turns out, though, that a much larger gain is possible through more sophisticated coding techniques. Specifically, in prior work [6, Theorem 3], we show that for a class of MACs, for fixed $C_{\mathrm{in}} > 0$, the average-error sum-capacity has a derivative in $C_{\mathrm{out}}$ that is infinite at $C_{\mathrm{out}} = 0$; that is, for small $C_{\mathrm{out}}$, the gain resulting from cooperation exceeds any function of $C_{\mathrm{out}}$ that goes through the origin and has bounded derivative.

The large sum-capacity gain described above is not limited to
the average-error scenario. In fact, in related work [5, Proposition 5],
we show that for any MAC for which, in the absence of cooperation,
the average-error sum-capacity
is strictly greater than the maximal-error sum-capacity,
adding a CF and measuring
the *maximal-error* sum-capacity for fixed $C_{\mathrm{in}} > 0$
gives a curve, as a function of $C_{\mathrm{out}}$, that is discontinuous at $C_{\mathrm{out}} = 0$.
In this case, we say that “negligible cooperation” results in a
non-negligible capacity benefit.

Given these earlier results, a number of important questions remain open. For example, we wish to understand how many bits from the CF are needed to achieve the discontinuities already shown to be possible in the maximal-error case. We also seek to understand, in the average-error case, whether the sum-capacity gain can be discontinuous in $C_{\mathrm{out}}$.

For the first question, we note that while the demonstration of discontinuity at $C_{\mathrm{out}} = 0$ for the maximal-error case proves that negligible cooperation can yield a non-negligible benefit, it does not distinguish how many bits are required to effect that change, nor whether that number of bits must grow with the blocklength $n$. We therefore begin by pushing that question to its extreme: we seek to understand the minimal output rate from the CF that can change network capacity. Our central result for the maximal-error case demonstrates that even a constant number of bits from the CF can yield a non-negligible impact on network capacity.

For the second question, we seek a similar understanding of how many bits from the CF are required to obtain a non-negligible change to network capacity in the average-error case. Since in this case there are no prior results demonstrating the possibility of a discontinuity in $C_{\mathrm{out}}$, we begin by investigating whether the sum-capacity in the average-error case can ever be discontinuous. Our central result for the average-error case is that the average-error sum-capacity is continuous, even at $C_{\mathrm{out}} = 0$. (See Corollary V.1.) Our proof relies on tools developed by Dueck [7] to prove the strong converse for the MAC. Saeedi Bidokhti and Kramer [8] and Kosut and Kliewer [9, Proposition 15] also use Dueck's method to address similar problems. Our application of Dueck's method first appears in [10, Appendix C].

## II Related Work

A continuity problem similar to the one considered here appears in studying rate-limited feedback over the MAC. In that setting, Sarwate and Gastpar [11] use the dependence-balance bounds of Hekstra and Willems [12] to show that as the feedback rate converges to zero, the average-error capacity region converges to the average-error capacity region of the same MAC in the absence of feedback.

The problem we study here can also be formulated as an "edge removal problem" as introduced by Ho, Effros, and Jalali [13, 14]. The edge removal problem seeks to quantify the capacity effect of removing a single edge from a network. While bounds on this capacity impact exist in a number of limited scenarios (see, for example, [13] and [14]), the problem remains open in the general case. In the context of network coding, Langberg and Effros show that this problem is connected to a number of other open problems, including the difference between the zero-error and $\epsilon$-error capacity regions [15] and the difference between the lossless source coding regions for independent and dependent sources [16].

In [9], Kosut and Kliewer present different variations of the edge removal problem in a unified setting. In their terminology, the present work investigates whether the network consisting of a MAC and a CF satisfies the “weak edge removal property” with respect to the average-error reliability criterion. A discussion in [10, Chapter 1] summarizes the known results for each variation of the edge removal problem.

The question of whether the capacity region of a network consisting of noiseless links is continuous with respect to the link capacities is studied by Gu, Effros, and Bakshi [17] and Chan and Grant [18]. The present work differs from [17, 18] in the network under consideration; while our network does have noiseless links (the CF input and output links), it also contains a multiterminal component (the MAC) which may exhibit interference or noise; no such component appears in [17, 18].

For the maximal-error case, our study focuses on the effect of a constant number of bits of communication in the memoryless setting. For noisy networks with memory, it is not difficult to see that even one bit of communication may indeed affect the capacity region. For example, consider a binary symmetric channel whose error probability $\theta$ is chosen at random and then fixed for all time. If, for $i \in \{1, 2\}$, $\theta$ equals $\theta_i$ with positive probability $p_i$, and $\theta_1 \neq \theta_2$, then a single bit of feedback (not rate 1, but exactly one bit no matter how large the blocklength) from the receiver to the transmitter suffices to increase the capacity. For memoryless channels, the question is far more subtle and is the subject of our study. In the next section, we present the cooperation model we consider in this work.
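
To make the feedback example concrete, the sketch below uses assumed parameters $\theta_1 = 0.05$ and $\theta_2 = 0.4$, each chosen with probability $1/2$. Without feedback, a reliable code must target the worse state; with one bit identifying the realized state, the transmitter can adapt its rate. The numbers are purely illustrative:

```python
import math

def h(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

thetas = [0.05, 0.4]   # assumed crossover probabilities (hypothetical)
probs = [0.5, 0.5]     # assumed prior on the channel state
caps = [1 - h(t) for t in thetas]

# No feedback: the code must be decodable under the worse realized state.
rate_no_feedback = min(caps)

# One bit of feedback (state identification): adapt the rate to the state.
rate_one_bit = sum(p * c for p, c in zip(probs, caps))
print(rate_no_feedback, rate_one_bit)
```

The single feedback bit here does not scale with the blocklength, yet the achievable expected rate strictly increases.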

## III The Cooperation Facilitator Model

In this work, we study cooperation between two encoders that communicate their messages to a decoder over a stationary, memoryless, and discrete MAC. Such a MAC can be represented by the triple

$$\bigl(\mathcal{X}_1 \times \mathcal{X}_2,\; p(y|x_1, x_2),\; \mathcal{Y}\bigr),$$

where $\mathcal{X}_1$, $\mathcal{X}_2$, and $\mathcal{Y}$ are finite sets and $p(y|x_1, x_2)$ is a conditional probability mass function. For any positive integer $n$, the $n$th extension of this MAC is given by

$$p^n(y^n|x_1^n, x_2^n) = \prod_{t=1}^{n} p(y_t|x_{1t}, x_{2t}).$$
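
In code, the memoryless factorization of the $n$th extension looks as follows; the per-letter pmf is a toy example, not a channel from this paper:

```python
def p(y, x1, x2):
    """Toy per-letter pmf p(y | x1, x2): the XOR of the binary inputs,
    flipped with probability 0.1 (made-up numbers)."""
    return 0.9 if y == (x1 ^ x2) else 0.1

def p_n(y_block, x1_block, x2_block):
    """n-th extension: the product of per-letter transition probabilities."""
    prob = 1.0
    for y, x1, x2 in zip(y_block, x1_block, x2_block):
        prob *= p(y, x1, x2)
    return prob

print(p_n([0, 1, 1], [0, 1, 0], [0, 0, 1]))  # 0.9 * 0.9 * 0.9
```
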

For each positive integer $n$, called the blocklength, and nonnegative real numbers $R_1$ and $R_2$, called the rates, we next define an $(n, R_1, R_2)$-code for communication over a MAC with a $(C_{\mathrm{in}}, C_{\mathrm{out}})$-CF. Here $C_{\mathrm{in}} = (C_{\mathrm{in},1}, C_{\mathrm{in},2})$ and $C_{\mathrm{out}} = (C_{\mathrm{out},1}, C_{\mathrm{out},2})$ represent the capacities of the CF input and output links, respectively. (See Figure 1.)

### III-A Positive Rate Cooperation

For every real $x \geq 1$, let $[x]$ denote the set $\{1, \dots, \lceil x \rceil\}$. For $i \in \{1, 2\}$, the transmission of encoder $i$ to the CF is represented by a mapping

$$\varphi_i : [2^{nR_i}] \to [2^{nC_{\mathrm{in},i}}].$$

The CF uses the information it receives from the encoders to compute a function

$$\psi_i : [2^{nC_{\mathrm{in},1}}] \times [2^{nC_{\mathrm{in},2}}] \to [2^{nC_{\mathrm{out},i}}]$$

for encoder $i$, where $i \in \{1, 2\}$. Encoder $i$ uses its message and what it receives from the CF to select a codeword according to

$$f_i : [2^{nR_i}] \times [2^{nC_{\mathrm{out},i}}] \to \mathcal{X}_i^n.$$

The decoder finds estimates of the transmitted messages using the channel output. It is represented by a mapping

$$g : \mathcal{Y}^n \to [2^{nR_1}] \times [2^{nR_2}].$$

The collection of mappings

$$(\varphi_1, \varphi_2, \psi_1, \psi_2, f_1, f_2, g)$$

defines an $(n, R_1, R_2)$-code for the MAC with a $(C_{\mathrm{in}}, C_{\mathrm{out}})$-CF.¹

¹Technically, the definition we present here is for a single round of cooperation. As discussed in [5], it is possible to define cooperation via a CF over multiple rounds. However, this more general scenario does not alter our main proofs: in Lemma VI.2, the lower bound needs only one round of cooperation, while the upper bound holds regardless of the number of rounds.

### III-B Constant Size Cooperation

To address the setting of a constant number of cooperation bits, we modify the output link of the CF to have support $[K]$ for some fixed integer $K \geq 1$; unlike the prior support $[2^{nC_{\mathrm{out},i}}]$, the support of this link is independent of the blocklength $n$. Then, for $i \in \{1, 2\}$, the transmission of encoder $i$ to the CF is represented by a mapping

$$\varphi_i : [2^{nR_i}] \to [2^{nC_{\mathrm{in},i}}].$$

The CF uses the information it receives from the encoders to compute a function

$$\psi_i : [2^{nC_{\mathrm{in},1}}] \times [2^{nC_{\mathrm{in},2}}] \to [K]$$

for encoder $i$, where $i \in \{1, 2\}$. Encoder $i$, as before, uses its message and what it receives from the CF to select a codeword according to

$$f_i : [2^{nR_i}] \times [K] \to \mathcal{X}_i^n.$$

We now say that

$$(\varphi_1, \varphi_2, \psi_1, \psi_2, f_1, f_2, g)$$

defines an $(n, R_1, R_2)$-code for the MAC with a $(C_{\mathrm{in}}, K)$-CF.

### III-C Capacity Region

For a fixed code, the probability of decoding a particular transmitted message pair $(m_1, m_2)$ incorrectly equals

$$\lambda(m_1, m_2) = \Pr\bigl\{ g(Y^n) \neq (m_1, m_2) \,\big|\, X_1^n = f_1(m_1, z_1),\, X_2^n = f_2(m_2, z_2) \bigr\},$$

where $z_1$ and $z_2$ are the CF outputs and are calculated, for $i \in \{1, 2\}$, according to

$$z_i = \psi_i\bigl(\varphi_1(m_1), \varphi_2(m_2)\bigr).$$

The *average* probability of error is defined as

$$P_e^{\mathrm{avg}} = 2^{-n(R_1 + R_2)} \sum_{m_1, m_2} \lambda(m_1, m_2),$$

and the *maximal* probability of error is given by

$$P_e^{\max} = \max_{m_1, m_2} \lambda(m_1, m_2).$$

A rate pair $(R_1, R_2)$ is achievable with respect to the average-error reliability criterion if there exists an infinite sequence of $(n, R_1, R_2)$-codes such that $P_e^{\mathrm{avg}} \to 0$ as $n \to \infty$. The average-error capacity region of a MAC with a $(C_{\mathrm{in}}, C_{\mathrm{out}})$-CF, denoted by $\mathscr{C}_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}})$, is the closure of the set of all rate pairs that are achievable with respect to the average-error reliability criterion. The average-error sum-capacity is defined as

$$C_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}}) = \max\bigl\{ R_1 + R_2 : (R_1, R_2) \in \mathscr{C}_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}}) \bigr\}.$$

By replacing $P_e^{\mathrm{avg}}$ with $P_e^{\max}$, we can similarly define achievable rates with respect to the maximal-error reliability criterion, the maximal-error capacity region, and the maximal-error sum-capacity. For a MAC with a $(C_{\mathrm{in}}, C_{\mathrm{out}})$-CF, we denote the maximal-error capacity region and sum-capacity by $\mathscr{C}_{\max}(C_{\mathrm{in}}, C_{\mathrm{out}})$ and $C_{\max}(C_{\mathrm{in}}, C_{\mathrm{out}})$, respectively.
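
The two criteria can behave very differently when a few message pairs are unusually error-prone; a toy numeric illustration with made-up per-pair error probabilities:

```python
# Made-up per-message-pair error probabilities for a 4 x 4 message grid.
errors = {(m1, m2): 0.001 for m1 in range(4) for m2 in range(4)}
errors[(3, 3)] = 0.9  # a single "bad" pair

avg_error = sum(errors.values()) / len(errors)  # average over message pairs
max_error = max(errors.values())                # worst message pair
print(avg_error, max_error)
```

Here the average error stays small while the maximal error is large, which is the kind of gap that separates the two sum-capacities later in the paper.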

## IV Prior Results on the Sum-Capacity Gain of Cooperation

We next review a number of results from [5, 6] which describe the sum-capacity gain of cooperation under the CF model. We begin with the average-error case.

Consider a discrete MAC $(\mathcal{X}_1 \times \mathcal{X}_2, p(y|x_1, x_2), \mathcal{Y})$. Let $p_{\mathrm{ind}}$ be a distribution on $\mathcal{X}_1 \times \mathcal{X}_2$ that satisfies

(1)

The subscript “ind” here denotes independence between the outputs of encoders 1 and 2 in the absence of cooperation. In addition, suppose that there exists a distribution $p_{\mathrm{dep}}$ such that the support of $p_{\mathrm{dep}}$ is contained in the support of $p_{\mathrm{ind}}$,

and

(2)

Here $p_{\mathrm{ind}}(y)$ and $p_{\mathrm{dep}}(y)$ are the marginals on $\mathcal{Y}$ resulting from $p_{\mathrm{ind}}$ and $p_{\mathrm{dep}}$, respectively. Let $\mathcal{M}$ denote the class of all discrete MACs for which input distributions $p_{\mathrm{ind}}$ and $p_{\mathrm{dep}}$, as described above, exist. Theorem IV.1 below is a stronger version of [6, Theorem 3] in the two-user case; the latter result is stated as a corollary below. A similar result holds for the Gaussian MAC [6, Prop. 9].

###### Theorem IV.1.

Let the given channel be a MAC in $\mathcal{M}$, and suppose $C_{\mathrm{in}} > 0$. Then there exists a constant $c > 0$, which depends only on the MAC and $C_{\mathrm{in}}$, such that when $C_{\mathrm{out}}$ is sufficiently small,

(3)

In the above theorem, dividing both sides of (3) by $C_{\mathrm{out}}$ and letting $C_{\mathrm{out}} \to 0^+$ results in the next corollary.²

²Note that Corollary IV.1 does not lead to any conclusions regarding continuity; a function with infinite derivative at $0$ can be continuous (e.g., $f(x) = \sqrt{x}$) or discontinuous (e.g., $f(x) = \sqrt{x} + \mathbf{1}_{\{x > 0\}}$).
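
The footnote's distinction can be checked numerically: $\sqrt{x}$ vanishes as $x \to 0^+$ (continuity) while its difference quotient $\sqrt{x}/x$ blows up (infinite derivative at the origin):

```python
import math

xs = [10.0 ** (-k) for k in (2, 4, 6, 8)]
values = [math.sqrt(x) for x in xs]       # -> 0: continuous at 0
slopes = [math.sqrt(x) / x for x in xs]   # -> infinity: unbounded derivative
for x, v, s in zip(xs, values, slopes):
    print(x, v, s)
```
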

###### Corollary IV.1.

For any MAC in $\mathcal{M}$ and any $C_{\mathrm{in}} > 0$,

We next describe the maximal-error sum-capacity gain. While it is possible in the average-error scenario to achieve a sum-capacity that has an infinite slope, a stronger result is known in the maximal-error case. There exists a class of MACs for which the maximal-error sum-capacity exhibits a discontinuity in the capacities of the CF output links. This is stated formally in the next proposition, which is a special case of [5, Proposition 5]. The proposition relies on the existence of a discrete MAC with average-error sum-capacity larger than its maximal-error sum-capacity; that existence was first proven by Dueck [19]. We investigate further properties of Dueck’s MAC in [5, Subsection VI-E].

###### Proposition IV.1.

Consider a discrete MAC for which

$$C_{\mathrm{avg}}(0, 0) > C_{\max}(0, 0). \qquad (4)$$

Fix $C_{\mathrm{in}} > 0$. Then $C_{\max}(C_{\mathrm{in}}, C_{\mathrm{out}})$, viewed as a function of $C_{\mathrm{out}}$, is not continuous at $C_{\mathrm{out}} = 0$.

We next present the main results of this work.

## V Our Results: Continuity of Average- and Maximal-Error Sum-Capacities

In the prior section, for a fixed $C_{\mathrm{in}}$, we discussed previous results regarding the values of $C_{\mathrm{avg}}$ and $C_{\max}$ as functions of $C_{\mathrm{out}}$ at $C_{\mathrm{out}} = 0$. In this section, we do not limit ourselves to the point $C_{\mathrm{out}} = 0$; rather, we study $C_{\mathrm{avg}}$ over its entire domain.

We begin by considering the case where the CF has full access to the messages. Formally, for a given discrete MAC, let the components of $C_{\mathrm{in}}$ be sufficiently large so that any CF with input link capacities $C_{\mathrm{in},1}$ and $C_{\mathrm{in},2}$ has full knowledge of the encoders' messages. For example, we can choose $C_{\mathrm{in}}$ such that

$$C_{\mathrm{in},i} \geq \log |\mathcal{X}_i| \quad \text{for } i \in \{1, 2\}.$$

Our first result addresses the continuity of $C_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}})$ as a function of $C_{\mathrm{out}}$ in this case.

###### Theorem V.1.

For any discrete MAC, the mapping

$$C_{\mathrm{out}} \mapsto C_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}}),$$

defined on $[0, \infty)^2$, is continuous.

While Theorem V.1 focuses on the scenario where the CF has full knowledge of the messages, the result is sufficiently strong to address the continuity problem for a fixed, arbitrary $C_{\mathrm{in}}$ at $C_{\mathrm{out}} = 0$. To see this, note that for all $C_{\mathrm{out}}$,

(5)

Corollary V.1, below, now follows from Theorem V.1 by letting $C_{\mathrm{out}}$ approach zero in (5) and noting that for all $C_{\mathrm{in}}$,

###### Corollary V.1.

For any discrete MAC and any fixed $C_{\mathrm{in}}$, the mapping

$$C_{\mathrm{out}} \mapsto C_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}})$$

is continuous at $C_{\mathrm{out}} = 0$.

Recall that Proposition IV.1 gives a sufficient condition under which $C_{\max}(C_{\mathrm{in}}, C_{\mathrm{out}})$ is not continuous at $C_{\mathrm{out}} = 0$ for a fixed $C_{\mathrm{in}} > 0$. Corollary V.1 implies that the sufficient condition is also necessary. This is stated in the next corollary, which we prove in Subsection IX-B.

###### Corollary V.2.

Fix a discrete MAC and $C_{\mathrm{in}} > 0$. Then $C_{\max}(C_{\mathrm{in}}, C_{\mathrm{out}})$ is not continuous at $C_{\mathrm{out}} = 0$ if and only if

$$C_{\mathrm{avg}}(0, 0) > C_{\max}(0, 0). \qquad (6)$$

We next describe the second main result of this paper. Our first main result, Theorem V.1, shows that $C_{\mathrm{avg}}$ is continuous in $C_{\mathrm{out}}$ when the CF has full access to the messages. The next result shows that proving the continuity of $C_{\mathrm{avg}}$ over its entire domain is equivalent to demonstrating its continuity on certain axes. Specifically, it suffices to check the continuity of $C_{\mathrm{avg}}$ when one of $C_{\mathrm{in},1}$ and $C_{\mathrm{in},2}$ approaches zero, while the other arguments of $C_{\mathrm{avg}}$ are fixed positive numbers.

###### Theorem V.2.

For any discrete MAC, the mapping

$$(C_{\mathrm{in}}, C_{\mathrm{out}}) \mapsto C_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}}),$$

defined on $[0, \infty)^4$, is continuous if and only if for all fixed positive values of the remaining arguments, we have

$$C_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}}) \to C_{\mathrm{avg}}\bigl((0, C_{\mathrm{in},2}), C_{\mathrm{out}}\bigr)$$

as $C_{\mathrm{in},1} \to 0^+$ and

$$C_{\mathrm{avg}}(C_{\mathrm{in}}, C_{\mathrm{out}}) \to C_{\mathrm{avg}}\bigl((C_{\mathrm{in},1}, 0), C_{\mathrm{out}}\bigr)$$

as $C_{\mathrm{in},2} \to 0^+$.

We remark that using a time-sharing argument, it is possible to show that $C_{\mathrm{avg}}$ is concave on its domain and thus continuous on the interior. Therefore, it suffices to study the continuity of $C_{\mathrm{avg}}$ on the boundary of the domain. Note that Theorem V.2 leaves the continuity problem of the average-error sum-capacity open in one case: if $C_{\mathrm{in},1}$ and $C_{\mathrm{in},2}$ are positive but not sufficiently large and one of the output link capacities equals zero, then we have not established the continuity of the sum-capacity, solely as a function of that output capacity, at zero. (Clearly, the symmetric case, with the roles of the two output links exchanged, remains open as well.) This last scenario remains a subject for future work.
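
The time-sharing argument behind the concavity claim is mechanical: run one code for a fraction $\lambda$ of the blocklength and another for the rest, which achieves the convex combination of both the resource vectors and the sum-rates. A sketch with hypothetical operating points:

```python
def time_share(lam, point_a, point_b):
    """Each point is (resource, sum_rate); time-sharing achieves the
    convex combination of both coordinates (hypothetical numbers)."""
    (ca, ra), (cb, rb) = point_a, point_b
    return (lam * ca + (1 - lam) * cb, lam * ra + (1 - lam) * rb)

a = (0.0, 1.0)   # made-up operating point: no CF output, sum-rate 1.0
b = (1.0, 1.8)   # made-up operating point: CF output rate 1, sum-rate 1.8
print(time_share(0.25, a, b))
```

Since every such intermediate point is achievable, the sum-capacity curve can never dip below its chords, which is exactly concavity.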

Finally, we present the third main contribution of our work: the discontinuity of sum-capacity in the maximal-error setting when the outgoing edges of the CF can send only a constant number of bits.

###### Theorem V.3.

For a sufficiently large constant $K$, the discrete MAC presented in [19] satisfies

$$C_{\max}(C_{\mathrm{in}}, K) > C_{\max}(C_{\mathrm{in}}, 0). \qquad (7)$$

## VI Continuity of Sum-Capacity: The Case of Full Message Access

We start our study of the continuity of $C_{\mathrm{avg}}$ by presenting lower and upper bounds in terms of an auxiliary function defined for $\delta > 0$ (Lemma VI.2). This function is similar to a tool used by Dueck in [7] but differs from [7] in its reliance on a time-sharing random variable, denoted by $U$. The random variable $U$ plays two roles. First, it ensures that the auxiliary function is concave, which immediately proves its continuity for $\delta > 0$. Second, together with a lemma from [7] (Lemma VI.5 below), it helps us find a single-letter upper bound (Corollary VI.1). We then use the single-letter upper bound to prove continuity at $\delta = 0$.

The following definitions are useful for the description of our lower and upper bounds for $C_{\mathrm{avg}}$. For every finite alphabet $\mathcal{U}$ and all $\delta > 0$, define the set $\mathcal{P}_\delta^{(n)}(\mathcal{U})$ of probability mass functions on $\mathcal{U} \times \mathcal{X}_1^n \times \mathcal{X}_2^n$ as

$$\mathcal{P}_\delta^{(n)}(\mathcal{U}) = \bigl\{ p_{U X_1^n X_2^n} : I(X_1^n; X_2^n \mid U) \leq n\delta \bigr\}.$$

Intuitively, $\mathcal{P}_\delta^{(n)}(\mathcal{U})$ captures a family of “mildly dependent” input distributions for our MAC; this mild dependence is parametrized by the bound $\delta$ on the per-symbol mutual information. In the discussion that follows, we relate $\delta$ to the amount of information that the CF shares with the encoders. For every positive integer $n$, let $g_n$ denote the function³

$$g_n(\delta) = \sup_{\mathcal{U}} \max_{p \in \mathcal{P}_\delta^{(n)}(\mathcal{U})} \frac{1}{n} I(X_1^n, X_2^n; Y^n \mid U), \qquad (8)$$

where the supremum is over all finite sets $\mathcal{U}$. Thus $g_n(\delta)$ captures something like the maximal sum-rate achievable under the mild dependence described above. As we see in Lemma VI.4, conditioning on the random variable $U$ in (8) ensures that $g_n$ is concave.

³For $n = 1$, this function also appears in the study of the MAC with negligible feedback [11].

For every $\delta > 0$, the sequence $(n\, g_n(\delta))_{n \geq 1}$ satisfies a superadditivity property, which appears in Lemma VI.1 below. Intuitively, this property says that the sum-rate of the best code of blocklength $n + m$ is bounded from below by the sum-rate of the concatenation of the best codes of blocklengths $n$ and $m$. We prove this lemma in Subsection IX-C.

###### Lemma VI.1.

For all $\delta > 0$, all positive integers $n$ and $m$, and $g_n$ defined as in (8), we have

$$(n + m)\, g_{n+m}(\delta) \geq n\, g_n(\delta) + m\, g_m(\delta).$$

Given Lemma VI.1, [20, Appendix 4A, Lemma 2] now implies that the sequence of mappings $(g_n)_{n=1}^{\infty}$ converges pointwise to some mapping $g$, and

$$g(\delta) = \sup_{n \geq 1} g_n(\delta). \qquad (9)$$
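
The cited convergence step is a Fekete-type lemma: if $a_{n+m} \geq a_n + a_m$, then $a_n/n$ converges to $\sup_n a_n/n$. A numeric sketch with a made-up superadditive sequence:

```python
import math

def a(n):
    """A made-up superadditive sequence: a_{n+m} >= a_n + a_m."""
    return 2 * n - math.log2(n + 1)

# Spot-check superadditivity on a grid.
superadditive = all(
    a(n + m) >= a(n) + a(m) for n in range(1, 40) for m in range(1, 40)
)

# a(n) / n climbs toward its supremum (here, 2).
ratios = [a(n) / n for n in (1, 10, 100, 10000)]
print(superadditive, ratios)
```
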

We next present our lower and upper bounds for $C_{\mathrm{avg}}$ in terms of $g$. The lower bound follows directly from [6, Corollary 8]. We prove the upper bound in Subsection IX-D.

###### Lemma VI.2.

For any discrete MAC and any $C_{\mathrm{out}}$, we have

From the remark following Theorem V.2, we only need to prove that $C_{\mathrm{avg}}$ is continuous on the boundary of its domain. On the boundary, however, the bounds of Lemma VI.2 are expressed in terms of $g$. Thus it suffices to show that $g$ is continuous on $[0, \infty)$, which is stated in the next lemma.

###### Lemma VI.3.

For any finite alphabet MAC, the function $g$, defined by (9), is continuous on $[0, \infty)$.

To prove Lemma VI.3, we first consider the continuity of $g$ on $(0, \infty)$ and then focus on the point $\delta = 0$. Note that $g$ is the pointwise limit of the sequence of functions $(g_n)_{n=1}^{\infty}$. Lemma VI.4 uses a time-sharing argument as in [21] to show that each $g_n$ is concave. (See Subsection IX-E for the proof.) Therefore, $g$ is concave as well, and since $(0, \infty)$ is open, $g$ is continuous on $(0, \infty)$.

###### Lemma VI.4.

For all $n$, the function $g_n$ is concave on $(0, \infty)$.

To prove the continuity of $g$ at $\delta = 0$, we find an upper bound for $g(\delta)$ in terms of $g(0)$. For some finite set $\mathcal{U}$ and $\delta > 0$, consider a distribution $p \in \mathcal{P}_\delta^{(n)}(\mathcal{U})$. By the definition of $\mathcal{P}_\delta^{(n)}(\mathcal{U})$,

(10)

Finding a bound for $g(\delta)$ in terms of $g(0)$ requires a single-letter version of (10). In [7], Dueck presents the necessary result. We present Dueck's result in the next lemma and provide the proof in Subsection IX-F.

###### Lemma VI.5 (Dueck’s Lemma [7]).

Fix positive reals and , positive integer , and finite alphabet . If , then there exists a set satisfying such that

where for , .

Corollary VI.1 uses Lemma VI.5 to find an upper bound for $g$ in terms of $g_1$. The proof of this corollary, in Subsection IX-G, combines ideas from [7] with results derived here.

###### Corollary VI.1.

For all $\delta > 0$, we have

By Corollary VI.1, we have

If we calculate the limit $\delta \to 0^+$, we get

Since $g_1(0) = C_{\mathrm{avg}}(0, 0)$,⁴ it suffices to show that $g_1$ is continuous at $\delta = 0$.

⁴This follows from the converse proof of the MAC capacity region in the absence of cooperation [3, Theorem 15.3.1].

Recall that $g_1$ is defined as

(11)

Since in (11) the supremum is over *all* finite sets $\mathcal{U}$, it is difficult to find an upper bound for $g_1$ near $\delta = 0$ directly. Instead we first show, in Subsection IX-H, that it is possible to assume that $\mathcal{U}$ has at most two elements.

###### Lemma VI.6 (Cardinality of $\mathcal{U}$).

In the definition of $g_1$, it suffices to calculate the supremum over all sets $\mathcal{U}$ with $|\mathcal{U}| \leq 2$.

In Subsection IX-I, we prove the continuity of $g_1$ at $\delta = 0$ from Lemma VI.6 using standard tools, such as Pinsker's inequality [3, Lemma 17.3.3] and a lower bound on KL divergence [3, Lemma 11.6.1]. The continuity of $g_1$ on $(0, \infty)$ follows from the concavity of $g_1$.
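
Pinsker's inequality, in bits, states $D(P\|Q) \geq \|P - Q\|_1^2 / (2 \ln 2)$; a quick randomized numeric check:

```python
import math
import random

def kl_bits(p, q):
    """KL divergence D(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def l1(p, q):
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

def random_pmf(k, rng):
    w = [rng.random() for _ in range(k)]
    s = sum(w)
    return [x / s for x in w]

rng = random.Random(0)
holds = all(
    kl_bits(p, q) >= l1(p, q) ** 2 / (2 * math.log(2))
    for p, q in ((random_pmf(4, rng), random_pmf(4, rng)) for _ in range(1000))
)
print(holds)
```
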

###### Lemma VI.7 (Continuity of $g_1$).

The function $g_1$ is continuous on $[0, \infty)$.

## VII Continuity of Sum-Capacity: Arbitrary $C_{\mathrm{in}}$

In this section, we study the continuity of $C_{\mathrm{avg}}$ over its entire domain with the aim of proving Theorem V.2.

Fix $(C_{\mathrm{in}}, C_{\mathrm{out}})$. For arbitrary $(C_{\mathrm{in}}', C_{\mathrm{out}}')$, the triangle inequality implies

(12)

We study this bound in the limit $(C_{\mathrm{in}}', C_{\mathrm{out}}') \to (C_{\mathrm{in}}, C_{\mathrm{out}})$. We begin by considering the first term in (12).

###### Lemma VII.1 (Continuity of Sum-Capacity in $C_{\mathrm{out}}$).

There exists a function

that satisfies

and for any finite alphabet MAC and any $(C_{\mathrm{in}}, C_{\mathrm{out}})$, we have

Applying Lemma VII.1 to (12), we get

Thus, to calculate the limit, Lemma VII.2 investigates

We prove this lemma in Subsection IX-K.

###### Lemma VII.2 (Continuity of Sum-Capacity in $C_{\mathrm{in}}$).

For any finite alphabet MAC and any $(C_{\mathrm{in}}, C_{\mathrm{out}})$, proving that

is equivalent to showing that for all , we have

## VIII Discontinuity of Sum-Capacity with a Constant Number of Cooperation Bits

In this section we prove Theorem V.3. We start by presenting Dueck's deterministic memoryless MAC from [19]. Consider the MAC $(\mathcal{X}_1 \times \mathcal{X}_2, p(y|x_1, x_2), \mathcal{Y})$, where the output is a deterministic function $d$ of the inputs; that is, $p(y|x_1, x_2) = \mathbf{1}\{y = d(x_1, x_2)\}$. The mapping $d$ is defined as

For a positive integer $n$, we define the blockwise mapping $d^n$ as $y^n = d^n(x_1^n, x_2^n)$ if and only if for all $t \in \{1, \dots, n\}$,

Set $C_{\mathrm{in}}$ large enough to allow the CF to have access to both source messages $m_1$ and $m_2$. We use the following theorem from [19].

###### Theorem VIII.1 (Outer Bound on the Maximal-Error Sum-Capacity [19]).

For the MAC defined above, we have

Optimizing over , and noting that
