## I Introduction

The inherent two-way nature of communication links provides an opportunity to enable *interaction* among nodes. It allows the nodes to exchange their messages efficiently by adapting their transmitted signals to past received signals fed back through the backward communication links. This problem was first studied by Shannon in [3]. However, our understanding of how to treat two-way information exchange is still lacking, and the underlying difficulty has impeded progress in this field over the past few decades.

Since interaction is enabled through the use of *feedback*, feedback is a more fundamental topic that must be understood first. The history of feedback traces back to Shannon, who showed that feedback has no bearing on capacity for memoryless point-to-point channels [4]. Subsequent work demonstrated that feedback provides a gain for point-to-point channels with memory [5, 6] as well as for many multi-user channels [7, 8, 9]. In many scenarios, however, the capacity improvements due to feedback are rather modest.

In contrast, one notable result in [10] changed the traditional viewpoint on the role of feedback: it showed that feedback offers more significant capacity gains for the Gaussian interference channel. Subsequent works [11, 12, 13] show further promise for the use of feedback. In particular, [13] demonstrates a very interesting result: not only can feedback yield a net increase in capacity, but one can sometimes achieve the *perfect-feedback capacities* simultaneously in both directions.

We seek to examine the role of feedback in more general scenarios in which nodes intend to compute *functions* of the raw messages rather than the messages themselves. These general settings include many realistic scenarios such as sensor networks [14] and cloud computing [15, 16]. For an idealistic scenario in which the feedback links are perfect, have infinite capacity, and come for free, Suh-Gastpar [17] showed that feedback provides a significant gain for computation as well. However, the result in [17] assumes a dedicated infinite-capacity feedback link, as in [10]. In an effort to explore a net gain that reflects the cost of feedback, [2] investigated a two-way setting of the function multicast channel considered in [17], in which two nodes wish to compute a linear function (modulo-2 sum) of the two Bernoulli sources generated at the other two nodes. The two-way setting includes a backward computation demand as well, thus properly capturing the cost of feedback. A scheme was proposed to demonstrate that a net interaction gain can occur in the computation setting as well. However, the maximal interaction gain was not fully characterized, due to a gap between the lower and upper bounds. In particular, whether one can get all the way to the perfect-feedback computation capacities in both directions (as in the two-way interference channel [13]) has remained unanswered.

In this work, we characterize the computation capacity region of the two-way function multicast channel via a new capacity-achieving scheme. In particular, we consider a deterministic model [18] that captures key properties of the wireless Gaussian channel. As a result, we answer the above question in the affirmative: we demonstrate that for certain channel regimes (to be detailed later; see Corollary 1), the new scheme simultaneously achieves the perfect-feedback computation capacities in both directions. As in the two-way interference channel [13], this occurs even when feedback offers gains in both directions, so that feedback for one direction must compete with the traffic in the other direction.

Our achievability builds upon the scheme in [13], in which feedback allows the exploitation of what is effectively future information as side information via retrospective decoding (to be detailed later). A key distinction relative to [13] is that in our computation setting, retrospective decoding occurs in a *nested manner* for some channel regimes; this will be detailed when describing our achievability. We also employ the network decomposition of [19] to simplify the achievability proof.

## II Model

Consider a four-node Avestimehr-Diggavi-Tse (ADT) deterministic network as illustrated in Fig. This network is a full-duplex bidirectional system in which all nodes are able to transmit and receive signals simultaneously. Our model consists of forward and backward channels, which are assumed to be orthogonal. For simplicity, we focus on a setting in which the forward and backward channels are each symmetric, but not necessarily identical to one another. In the forward channel, the direct and cross links are each specified by a number of signal bit levels (or resource levels); the backward channel is described by the corresponding pair of parameters.

The network is used over a block of time slots, during which each node wishes to transmit its own message. The messages are assumed to be independent and identically distributed. Throughout, we use shorthand notation to indicate a sequence up to a given time index. Each node's encoded signal has a part that is visible to each of the nodes on the other side, and the signals received at the receiving nodes are then given by

(1)

(2)

where the channel matrices are shift matrices and all operations are performed in the binary field.

The encoded signal of each node at a given time is a function of its own message and of all of its past received signals; this holds for the nodes on both sides of the network.
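To make the level-shift mechanism of the ADT model concrete, here is a minimal numerical sketch of one channel use. This is not the paper's notation: the parameter names `q` (total levels), `n` (direct-link levels), and `m` (cross-link levels), as well as the example bit values, are assumptions chosen purely for illustration.

```python
import numpy as np

def shift_matrix(q: int, s: int) -> np.ndarray:
    """q-by-q down-shift matrix: output level i carries input level i - s,
    so the top (q - s) bits survive, shifted down by s levels."""
    M = np.zeros((q, q), dtype=np.uint8)
    for i in range(s, q):
        M[i, i - s] = 1
    return M

def adt_receive(x1, x2, q, n, m):
    """One ADT channel use: the direct link delivers the top n bits of x1,
    the cross link the top m bits of x2; overlaps add modulo 2."""
    return (shift_matrix(q, q - n) @ x1 + shift_matrix(q, q - m) @ x2) % 2

q, n, m = 3, 2, 1                          # 3 levels; direct gain 2, cross gain 1
x1 = np.array([1, 0, 1], dtype=np.uint8)   # top bit listed first
x2 = np.array([1, 1, 0], dtype=np.uint8)
print(adt_receive(x1, x2, q, n, m))
```

Note how the bottom level receives the modulo-2 superposition of both inputs, which is exactly the mechanism that forms the modulo-2 sum functions used throughout the schemes below.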

From its received signals, each forward receiving node wishes to compute modulo-2 sums of the source symbols of the two transmitting nodes; the backward receiving nodes wish to compute the corresponding backward functions. We say that a computation rate pair

is achievable if there exists a family of codebooks and encoder/decoder functions such that the decoding error probabilities go to zero as the code length tends to infinity. The capacity region is the closure of the set of achievable computation rate pairs.

## III Main Results

###### Theorem 1 (Two-way Computation Capacity)

The computation capacity region is the set of rate pairs satisfying the following:

(3)

(4)

(5)

(6)

where the two capacity terms indicate the perfect-feedback computation capacities of the forward and backward channels, respectively (see (9) and (10) in Baseline 2 for the detailed formulae).

For comparison with our result, we state two baselines: the capacity region for the non-interactive scenario, in which there is no interaction among the signals arriving from different nodes; and the capacity region for the perfect-feedback scenario, in which feedback is given for free to aid computation in both directions.

###### Baseline 1 (Non-interaction Computation Capacity [19])

The computation capacity region for the non-interactive scenario is the set of rate pairs in which each rate is bounded by the corresponding non-feedback computation capacity:

(7)

(8)

Here (7) and (8) denote the non-feedback computation capacities of the forward and backward channels, respectively.

###### Baseline 2 (Perfect-feedback Computation Capacity [17])

The computation capacity region for the perfect-feedback scenario is the set of rate pairs in which each rate is bounded by the corresponding perfect-feedback computation capacity:

(9)

(10)

Comparing Theorem 1 with Baseline 1, one can readily see that feedback offers a gain (in terms of capacity region) whenever a perfect-feedback capacity strictly exceeds its non-feedback counterpart. A careful inspection reveals that there are channel regimes in which one can enhance one of the two rates without sacrificing the other. This implies a net interaction gain.

###### Definition 1 (Interaction Gain)

We say that an interaction gain occurs if one can achieve a rate pair that lies strictly outside the non-interaction capacity region.

Our earlier work in [2] has demonstrated that an interaction gain occurs in the light blue regime in Fig.

We also find regimes in which feedback does increase capacity but interaction cannot provide such an increase, meaning that whenever one rate exceeds its non-feedback capacity, the other must fall below its own, and vice versa. One can readily check that this follows from the cut-set bounds.

Achieving perfect-feedback capacities: It is worth noting that there exist channel regimes in which the feedback gains in both directions can be strictly positive. This implies that in these regimes, feedback not only avoids sacrificing one transmission for the other, but can actually improve both simultaneously. More interestingly, as in the two-way interference channel [13], the gains in both directions can reach the maximal feedback gains, reflected in the perfect-feedback capacities. The dark-blue/dotted regimes in Fig. indicate such channel regimes. Note that these regimes depend on the channel asymmetry: the amount of feedback that one can send is limited by the available resources, which are in turn affected by the channel asymmetry parameter.

The following corollary identifies channel regimes in which achieving perfect-feedback capacities in both directions is possible.

###### Corollary 1

Consider a case in which feedback helps in both the forward and backward channels. In such a case, the channel regimes in which the perfect-feedback capacities are simultaneously achievable in both directions are as follows:

See Appendix A.

###### Remark 1 (Why the Perfect-feedback Regimes?)

The rationale behind achieving the perfect-feedback capacities in both directions bears a resemblance to that found in the two-way interference channel [13]: interaction enables full utilization of the available resources, whereas a lack of interaction leaves some of them idle. Below we elaborate on this for the regime considered in Corollary 1.

We first note that the total number of available resources in the forward and backward channels depends on the channel parameters in this regime. In the non-interaction case, observe from Baseline 1 that some resources are under-utilized; the shortfalls can be interpreted as remaining resource levels that could potentially be exploited to aid function computation. It turns out that feedback can maximize resource utilization by filling up these resource holes left under-utilized in the non-interactive case. The condition in Corollary 1 thus says that as long as there are enough resource holes to carry the feedback required in each direction, we can get all the way to the perfect-feedback capacity. We will later provide an intuition as to why feedback can do so while describing our achievability; see the corresponding remark in particular.

## IV Proof of Achievability

Our achievability proof consists of three parts. We first present achievable schemes for two toy examples in which the key ingredients of our achievability idea are well illustrated. Once the description of the two schemes is complete, we outline the proof of the generalization, deferring the details to Appendices B and C.

### IV-A Example 1:

First, we review the perfect-feedback scheme [17], which we use as a baseline for comparison with our achievable scheme. It suffices to consider one of the two cases, as the other follows similarly by symmetry.

#### IV-A1 Perfect-feedback strategy

The perfect-feedback scheme consists of two stages: the first stage has two time slots, and the second stage has one time slot. See Fig. Observe that the bottom level at each receiving node naturally forms a modulo-2 sum function of the two source symbols. In the first stage, fresh forward symbols are sent: at time 1, the two transmitting nodes each send a symbol, and each receiving node obtains one modulo-2 sum function; the same occurs at time 2 with new symbols. Note that by the end of time 2, each receiving node is still missing the functions obtained only at the other receiving node.

Feedback, however, can accomplish the computation of these functions of interest. With feedback, each transmitting node can obtain the desired functions that were so far available only at one receiving node: each receiving node feeds its obtained function back to the transmitting node on its side.

The strategy in Stage 2 is to forward all of these fed-back functions at time 3. Each receiving node then receives one function cleanly at the top level and a mixture of the two desired functions at the bottom level. Since one function in the mixture was already obtained during the first stage, each receiving node can cancel it and decode the other. In summary, the receiving nodes can compute four modulo-2 sum functions during three time slots, thus achieving the perfect-feedback computation rate.
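The three-time-slot exchange above can be traced with plain XOR arithmetic. The sketch below is illustrative only: the bit values and the assignment of which receiving node obtains which function first are hypothetical choices, made to keep the bookkeeping concrete.

```python
# Hypothetical source bits: a_t at one forward node, b_t at the other.
a1, a2 = 1, 0
b1, b2 = 1, 1

# Stage 1 (times 1 and 2): the bottom levels form modulo-2 sums,
# each obtained at one receiving node.
f1 = a1 ^ b1
f2 = a2 ^ b2

# Perfect feedback: each function is fed back to the transmitting side.
# Stage 2 (time 3): both fed-back functions are forwarded at once.
top = f2          # one function received cleanly at the top level
bottom = f1 ^ f2  # mixture of the two desired functions at the bottom level

# One function in the mixture was already obtained during Stage 1,
# so the receiving node XORs it out to recover the other.
recovered = bottom ^ f2
assert recovered == f1 and top == f2
```

Four modulo-2 sum functions are thus computed in three time slots, matching the counting argument in the text.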

In our model, however, feedback is provided in a limited fashion, as feedback signals are delivered only through the backward channel. There are two different uses of the backward channel: it can be used for backward-message computation, or for sending feedback signals. Unlike in the perfect-feedback case, the channel use for one purpose typically limits that for the other, and this tension presents a new challenge. We develop an achievable scheme that completely resolves the tension, thus achieving the perfect-feedback performance.

#### IV-A2 Achievability

Like the perfect-feedback scheme, our scheme has two stages. Fresh symbols are transmitted through the forward and backward channels only during the first stage; no fresh symbols are transmitted in the second stage, but some refinements are performed (to be detailed later). Counting the numbers of forward and backward function computations accomplished over the total number of time slots yields the claimed achievable rate pair, and letting the block length grow, we obtain the desired result.

Stage 1: The purpose of this stage is to compute modulo-2 sum functions on the bottom levels of the forward and backward channels, while relaying feedback signals (as in the perfect-feedback case) on the top levels. To this end, each node superimposes fresh symbols and feedback symbols. Details are given below; see also Fig. 4.

Time 1 & 2: At time 1, the two forward transmitting nodes each send a fresh symbol, and the two receiving nodes obtain modulo-2 sum functions. Observe that each of these functions has not yet been delivered to the opposite receiving node. To satisfy these demands, the perfect-feedback strategy would feed each received signal back to the transmitting side.

A similar transmission strategy is employed in the backward channel: the backward transmitting nodes wish to send fresh backward symbols so that the backward functions can be computed. At the same time, feedback transmission over the backward channel must be accomplished in order to achieve the forward perfect-feedback capacity. Recall that in the perfect-feedback strategy, the received signals are to be fed back. One way to accomplish both tasks is to superimpose the feedback signals onto the fresh symbols: each backward transmitting node encodes its feedback signal on the top level. A challenge arises, however, if these signals are transmitted without an additional encoding procedure: the feedback signal would arrive superimposed on the bottom level of the opposite receiving node, whereas the goal is to compute the backward functions solely on the bottom level. In other words, the feedback signal causes interference that the receiving node has no way to cancel.

Interestingly, the idea of *interference neutralization* [20] can play a role here. On the bottom level, sending the mixture of the fresh symbol and the signal received on the top level enables the interference to be neutralized: the known interference is pre-added at the transmitter, so the channel's superposition cancels it. The receiving node then obtains a clean modulo-2 sum, from which it recovers the desired function by canceling its own symbol. The same applies at the other pair of nodes, so both backward receiving nodes obtain their backward functions.
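The neutralization step boils down to a one-line XOR identity: pre-adding the known interference at the transmitter lets the channel's modulo-2 superposition cancel it. A minimal sketch, with hypothetical bit values:

```python
# Hypothetical bits for one bottom-level channel use.
fresh = 1      # fresh backward symbol to be delivered
feedback = 0   # feedback signal riding on the top level, which would
               # otherwise corrupt the bottom level at the receiving node

# Interference neutralization: pre-add the known interference at the
# transmitter so that the channel's modulo-2 superposition cancels it.
tx_bottom = fresh ^ feedback
rx_bottom = tx_bottom ^ feedback   # the channel adds the interferer again
assert rx_bottom == fresh          # the bottom level arrives clean
```

The identity `(x ^ v) ^ v == x` is why the bottom level can carry a clean modulo-2 sum even while the top level relays feedback.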

At time 2, we repeat this with new symbols. As in the first time slot, the receiving nodes utilize their own symbols as side information to obtain the new modulo-2 sum functions.

Subsequent time slots: For each later time slot of Stage 1, the transmission signals at the two forward nodes are as follows:

(11)

(12)

Similarly, at the corresponding time slots, the two backward nodes deliver:

(13)

(14)

There are a few points to note. First, the transmitted signal of each node consists of two parts: fresh symbols and feedback signals. Second, the feedback signals sent through the bottom levels ensure modulo-2 sum function computations at the bottom levels, as they null out the interference. Finally, if the index of a symbol is non-positive, we set the symbol to *null*.

For the last two time slots of Stage 1, the backward nodes do not send any fresh backward symbols. Instead, they mimic the perfect-feedback scheme, feeding back the received functions on the top levels.

Note that by the end of Stage 1, the full batch of fresh forward symbols has been delivered; similarly, the full batch of fresh backward symbols has been delivered.

One can readily check which modulo-2 sum functions each receiving node has obtained by this point. Among the total numbers of forward and backward functions, however, a few are not yet delivered to the nodes that demand them.

For ease of understanding, Fig. illustrates a simple case. At time 1, each backward receiving node uses its own symbol as side information to obtain a backward function; at time 2, the same process is repeated with new symbols; and in the last two time slots, the fed-back forward functions are obtained on the top levels.

Stage 2: During the time slots of the second stage, we accomplish the computation of the desired functions not yet obtained by each node. Recall that the transmission strategy in the perfect-feedback scenario is simply to forward all of the received signals at each node; there, the received signals are in the form of the modulo-2 sum functions of interest (see Fig. ). In our model, however, the received signals also include symbols generated at the other-side nodes: for instance, a forward node's received signal contains a backward symbol. Hence, unlike in the perfect-feedback scheme, directly forwarding a received signal does not guarantee that the destination node can decode the desired function.

To address this, we introduce a recently developed approach [13]: *retrospective decoding*. The key feature of this approach is that successive refinement is done in a retrospective manner, allowing us to resolve the aforementioned issue. The outline of the strategy is as follows: two of the nodes first decode the most recently generated functions. A key point to emphasize is that these decoded functions act as *side information*: they ultimately enable the other-side nodes to obtain the desired functions w.r.t. the past symbols. Specifically, the decoding order reads:

With the refinement at the corresponding time slot of Stage 2, the nodes can decode the following:

Subsequently, the other-side nodes decode:

Note that after one more refinement, the remaining interfering symbols can be canceled out at the receiving nodes, which can therefore finally decode their desired functions.
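The retrospective order can be abstracted as a backward chain of XOR equations: each slot leaves a mixture of an older function and a newer one, and once the most recent function is decoded, the rest unravel in reverse. The chain below is a toy abstraction of this decoding order, not the scheme's exact signals.

```python
# Toy abstraction: slot k leaves the mixture y[k] = f[k] XOR f[k+1].
L = 4
f_true = [1, 0, 1, 1, 0]                    # hypothetical function bits
y = [f_true[k] ^ f_true[k + 1] for k in range(L)]

decoded = [None] * (L + 1)
decoded[L] = f_true[L]                      # the newest function, decoded first
for k in range(L - 1, -1, -1):              # retrospective: peel backwards
    decoded[k] = y[k] ^ decoded[k + 1]      # each decode frees the next one

assert decoded == f_true
```

The point of the abstraction is that decoding proceeds from the most recent function toward the oldest, each newly decoded function serving as side information for the preceding slot.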

Specifically, the transmission strategy is as follows:

Time 2L+1: Taking the perfect-feedback strategy, one can readily observe that two of the nodes can decode the most recent pair of functions.

Time 2L+2: With the functions newly decoded at the previous time slot, a successive refinement is done to achieve reliable function computations at both the top and bottom levels. Here we note that the idea of interference neutralization is again employed to ensure function computations at the bottom levels. In particular, the transmission signals at the two nodes are:

(15)

(16)

Notice that the signals in the first bracket are newly decoded functions; the signals in the second bracket are those previously received on the top level; and those in the third bracket are modulo-2 sum functions decoded during Stage 1. This transmission allows the receiving nodes to decode further functions using their own symbols and previously decoded functions.

Similarly, at the next time slot, the other two nodes deliver:

(17)

(18)

Note that the signals in the third bracket are modulo-2 sum functions decoded during Stage 1, summed with the signals previously received on the top level. As a result, the receiving nodes can compute their desired functions using their own symbols and past decoded functions.

For ease of illustration, we elaborate on how decoding works in a simple case. Since the nodes obtain modulo-2 sums of forward symbols directly, the transmission strategy at the first time slot of Stage 2 is identical to that of the perfect-feedback scheme: forwarding the received signals. Each receiving node then uses a previously received signal to decode one more desired function.

Now, in the backward channel, with a newly decoded function and previously received signals, one of the nodes can construct:

This constructed signal is sent at the top level.

Furthermore, with another newly decoded function and a previously received signal, the node can construct:

This is sent at the bottom level.

In a similar manner, the other node encodes its signals. After all of the encoded signals are sent, the two receiving nodes obtain their respective received signals.

Observe that from the top level, the node can finally decode the function of interest using its own symbols. From the bottom level, it can also obtain another desired function by utilizing a previously received signal together with its own symbols. The other node decodes similarly.