I Introduction
Hybrid digital radio systems [1], [2], in which analog and digital information are superposed in the same communication signal, are increasingly popular. In these systems, the receiver must estimate the analog signal while also decoding the digital information. Such systems can be modelled as state-dependent channels [3], where the transmitter aids the receiver in estimating the channel state while also conveying a stream of messages [4]. This is a simultaneous estimation and communication problem. For an additive channel in which the channel noise and the state are independent Gaussian processes, Sutivong et al. [4] established the optimal tradeoff between the mean squared error distortion in estimating the state and the communication rate in a point-to-point setting. Joint state estimation and communication is also relevant in the context of multi-user networks with several sensor nodes observing a common phenomenon. The base station/receiver is interested not only in the source process but also in the data from each node; this is the topic of the present work.
Channels with state are used to model situations in which the channel statistics are controlled by an external random process, known as the state process. The state process may be known at the encoder, the decoder, or both. The encoder state information can be either causal or noncausal (i.e. the entire state sequence is known a priori). Seminal papers by Shannon [5] (causal case) and Gelfand & Pinsker [6] (noncausal case) introduced state-dependent models. The latter model was motivated by coding for memory with defects, first studied in [7]. Costa [8] introduced the notion of dirty paper coding (DPC) for a state-dependent AWGN channel with noncausal state knowledge at the encoder, and arrived at the surprising conclusion that the capacity is unchanged by the presence of the state. DPC later found extensive applications in broadcast settings, leading to the solution of the MIMO broadcast capacity region [9].
In certain state-dependent channels, the transmitter may wish to aid the receiver in estimating the channel state, in addition to communicating messages. For a point-to-point AWGN channel with additive state, splitting the available average power between uncoded transmission of the state and DPC for the message was found to be optimal under the mean squared error distortion measure [4]. There has been only limited success in extending this result to other models. For example, the discrete memoryless counterpart of the state estimation problem was analyzed by Kim et al. [10] for a restricted setting where the distortion is measured in terms of the state uncertainty reduction rate. Under noncausal state knowledge at the encoders, there is, to the best of our knowledge, no network setting where the joint state estimation and communication tradeoff has been completely characterized. The current paper solves this for the AWGN MAC setting. We now briefly mention some of the relevant contributions in the literature.
Following [4], the idea of channel state amplification has been studied in several network information theoretic settings. In [11], the problem of communicating a common source and two independent messages over a Gaussian broadcast channel (BC) without state was analyzed, and it was shown that a power splitting strategy to meet the two different goals is not optimal. The problem of communicating channel state information over a state-dependent discrete memoryless channel with causal state information at the encoder was analyzed in [12]. More recently, it was shown in [13] that for simultaneous message and state communication over memoryless channels with memoryless states, feedback can improve the optimal tradeoff region both for causal and strictly causal encoder side information. The dual problem of [4], known as state masking, in which the transmitter tries to conceal the state from the receiver, was studied in [14]. In [15], a state-dependent Gaussian BC was considered with the goal of amplifying the channel state at one of the receivers while masking it from the other receiver, with no message transmissions. Inner bounds for simultaneous message transmission and state estimation over a state-dependent Gaussian BC were given in [16]. For message communication and state masking over a discrete memoryless BC, inner and outer bounds were derived in [17].
The main concern of this paper is state estimation and communication in a scalar Gaussian multiple access setting. We first present a model with two senders. In the model shown in Fig. 1, a common additive independent and identically distributed (IID) Gaussian state process affects the transmissions from both senders.
The transmitters know the state process in a noncausal fashion. Our first objective is to obtain an estimate of the state process at the receiver to within a prescribed distortion bound. In addition, there is a message stream from each encoder to the receiver. Given a rate pair for their respective private messages, the transmitters attempt to minimize the distortion incurred in state estimation at the receiver. Under individual average transmit power constraints at the encoders, the Gaussian state-dependent MAC with a state estimation requirement leads to interesting tradeoffs between the achievable distortion and the rates. We call this model the dirty paper MAC with state estimation.
In the absence of a state process, the AWGN MAC capacity can be seen as a natural extension of the point-to-point AWGN model [18]. When the state is present, it is tempting to look for a similar extension of the joint state estimation and communication tradeoff using the single-user results in [4]. However, notice that the former connection is greatly aided by the polymatroidal capacity region of a Gaussian MAC (GMAC): essentially three inequalities suffice to establish the converse result for a two-user MAC [18]. On the other hand, for joint estimation and communication in a two-user MAC, even the cross-section of the optimal rate region under a given distortion is not always a polytope. This explains why single-user techniques are not enough in our setup. Our main contributions are summarized below.
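The three-inequality structure referred to above can be made concrete with a short numerical sketch (a hypothetical example, not the paper's notation: the powers P1, P2 and noise variance N are illustrative, and rates are in bits per channel use assuming base-2 logarithms):

```python
import math

def C(snr):
    """AWGN capacity in bits per channel use (base-2 logs assumed)."""
    return 0.5 * math.log2(1 + snr)

# Hypothetical powers and noise variance for illustration.
P1, P2, N = 10.0, 5.0, 1.0

# Three inequalities cut out the two-user GMAC pentagon [18]:
#   R1 <= C(P1/N), R2 <= C(P2/N), R1 + R2 <= C((P1 + P2)/N)
R1_max = C(P1 / N)
R2_max = C(P2 / N)
Rsum_max = C((P1 + P2) / N)

def in_region(R1, R2, tol=1e-9):
    return (R1 <= R1_max + tol and R2 <= R2_max + tol
            and R1 + R2 <= Rsum_max + tol)

# A corner point: user 1 decoded last (so it sees only channel noise),
# user 2 decoded first while treating user 1's signal as noise.
corner = (C(P1 / N), C(P2 / (P1 + N)))
assert abs(sum(corner) - Rsum_max) < 1e-9  # corners lie on the sum-rate face
```

Any rate pair satisfying the three inequalities is achievable, and the two corner points of the pentagon correspond to the two successive cancellation decoding orders.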
• We provide a complete characterization of the optimal tradeoff between joint state estimation and communication over a two-user dirty paper MAC with state estimation.
• For a multi-sensor network of nodes observing a common source phenomenon, with each node possibly having an additional independent message stream, we characterize the optimal distortion-rate performance of joint state estimation and communication over a Gaussian MAC without state. The model is sufficiently general to include cases where the source symbols also act as an additive state, known noncausally at the transmitters.
The transmission of correlated sources through a MAC is an important open problem in the literature [19, 20]. Our problem is related to an extreme case called the cooperative MAC [19], where the source observation is common to all the transmitters. In our model we only constrain the reconstruction fidelity, whereas [19] considers the lossless case. In some sense, the problem which comes closest to the one here is the source estimation problem in [21], where the transmitters in a Gaussian MAC observe independent noisy versions of a single source. The transmitters are assumed to be symmetric, i.e. they have identical power constraints and the same noise variance in their source observations. The channel state process is completely absent, but uncoded transmission turns out to be optimal. Some relaxations of the symmetry assumption are provided in [22, 23]. In fact, [22] also considers the scalar single-user model with additive state noncausally known at the transmitter, and shows that uncoded transmission is optimal for state estimation in the Gaussian setting. However, even for the point-to-point system, optimal communication schemes in the presence of additional messages are unknown when the source observations are noisy [22]. Other relevant studies regarding communication of sources over a MAC include [24] (distributed correlated sources over an orthogonal MAC), [25] (bivariate Gaussian source over a GMAC with individual distortion constraints), [26] (reliable function computation over a MAC) and [27] (linear functions of correlated sources over a GMAC). Notice that none of these models considers additional data streams along with source communication over MACs. For noiseless source observations, [4] characterizes the optimal tradeoff between message rate and state estimation; the current paper extends this to a multi-sender GMAC, with or without state.

Notations: Standard notation is used for the probability of an event and the expected value of a random variable. All logarithms in this paper are to a common base, unless specified otherwise. Random vectors are written with a superscript denoting the block length, calligraphic letters represent alphabets, and ‖·‖ denotes the Euclidean norm of a vector.

The paper is organized as follows: we introduce the system model and main results in Section II. Sections III and IV respectively contain the achievable coding scheme and the converse to the optimal region. Section VI considers the generalization to multiple transmitters over a GMAC, with and without state. Concluding remarks are given in Section VII.
II System Model and Results
The dirty paper MAC with state estimation is shown in Fig. 1. Here $S$ is the channel state and $Z$ is the channel noise, both i.i.d. Gaussian processes, with the state noncausally available at both encoders. The receiver observes (for a single channel use)

(1)  $Y = X_1 + X_2 + S + Z,$

where $X_1$ and $X_2$ are the signals transmitted by the two encoders. After $n$ observations, the decoder estimates $S^n$ using a reconstruction map, and also decodes the independent messages $(W_1, W_2)$, which are assumed to be independent of $S^n$. We also take $W_k$ to be uniformly drawn from $\{1, \ldots, 2^{nR_k}\}$ for $k \in \{1, 2\}$.
Our objective here is to keep the distortion below a prescribed value, while ensuring that the average probability of decoding error is small enough, i.e.

(2)  $\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\big(S_i - \hat{S}_i\big)^2\right] \le D + \epsilon,$

(3)  $\mathbb{P}\big(g(Y^n) \ne (W_1, W_2)\big) \le \epsilon.$

Here $g(\cdot)$ is the decoding map, $D$ represents the distortion target, and $\epsilon$ the probability of error target.
Definition 1.
We say that a triple $(R_1, R_2, D)$ is achievable if a communication scheme satisfying (2) and (3) exists for every $\epsilon > 0$, possibly by taking the block length $n$ large enough. Let $\mathcal{R}$ be the closure of the set of all achievable triples. Our main result is stated below.
Theorem 2.
For the dirty paper MAC with state, the optimal tradeoff region is given by the convex closure of all triples $(R_1, R_2, D)$ such that
(4)  
(5)  
(6)  
(7) 
for some and , with and .
Before we prove this result, notice that the rate region is not in general a polytope for any given distortion value, unlike the case where state estimation is not required [28]. Nevertheless, the region admits a compact representation as given in (4)–(7).
Proof.
In Section III, we present a communication scheme to achieve tuples satisfying the constraints (4)–(7). Then in Section IV, we show a converse result which bounds the distortion-rate performance of any successful communication scheme. We further show that the tradeoff cannot be better than the one defined by (4)–(7); this is done in Section V. The main novelty of the proof is in the converse result. ∎
Remark 3.
Notice that when no state estimation is required, we recover the multi-user writing on dirty paper result in [28], which proves that the capacity region of the dirty paper MAC is not affected by the presence of the state.
One of the motivations for our model comes from sensor networks employed in on-board platforms. We now make this connection more explicit.
II-A Connection to Multi-sensor Models
Let us introduce a more general $N$-sender Gaussian MAC framework, as in Fig. 2, where the receiver observes (for a single channel use)

(8)  $Y = \sum_{k=1}^{N} X_k + bS + Z.$

When the parameter $b = 1$ we recover the state-dependent model, while $b = 0$ corresponds to a source estimation and message communication problem over a GMAC without state. The transmission of user $k$ is subject to an average power constraint $P_k$. Each transmitter observes the source process $S$, which is assumed to be noncausally available to all of them. The receiver should estimate the source, as well as decode an independent message stream from each transmitter. We term this model the source-message communication problem. Notice that some of the nodes may not have any messages; they simply help in the estimation of the source. It turns out that uncoded transmission at each node devoid of any messages is indeed the optimal strategy in such setups, marking the importance of the results presented for state-dependent models in the previous subsection. A brief literature review on source and message communication is in order.
Goblick [29] showed that for the transmission of Gaussian sources over Gaussian channels, an uncoded strategy of sending a scaled version of the source to meet the power constraint, followed by MMSE estimation at the receiver, is optimal. The same approach can be used to communicate Gaussian sources over Gaussian broadcast channels, as was observed by Prabhakaran et al. [30]. We already mentioned that the uncoded approach is optimal for source estimation in some symmetric GMAC models, in the absence of messages [21]. In the presence of an independent private message stream at each node of a multi-sender Gaussian MAC, we show that uncoded source transmission is indeed optimal while communicating a common source observation.
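Goblick's observation admits a quick numerical check. The sketch below uses hypothetical parameter values and base-2 logarithms, and verifies that the uncoded scheme meets the best distortion permitted by the separation bound R(D) ≤ C:

```python
import math

# Hypothetical parameters: source variance, transmit power, noise variance.
sigma2, P, N = 4.0, 10.0, 2.0

# Uncoded scheme: send X = sqrt(P/sigma2) * S so that E[X^2] = P,
# receive Y = X + Z, and form the linear MMSE estimate of S from Y.
a = math.sqrt(P / sigma2)          # scaling that meets the power constraint
D_uncoded = sigma2 * N / (P + N)   # MMSE of estimating S from a*S + Z

# Separation bound: R(D) <= C gives 0.5*log(sigma2/D) <= 0.5*log(1 + P/N),
# i.e. no scheme can achieve D below sigma2 * N / (P + N).
D_opt = sigma2 / (1 + P / N)

assert abs(D_uncoded - D_opt) < 1e-12  # uncoded transmission meets the bound
```

The coincidence of the two expressions for every choice of parameters is what makes the uncoded scheme optimal in this point-to-point Gaussian setting.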
Our main result for the multi-sender Gaussian MAC with source-message communication is stated in the following theorem.
Theorem 4.
For a multi-sender GMAC with message and state communication, the optimal tradeoff region is given by the convex closure of the set of tuples such that
(9)  
(10) 
for some .
Proof:
This is given in Section VI. ∎
Let us specialize our results to a three-sender Gaussian MAC, as shown in Fig. 3. The channel model is
(11) 
with , and the power constraints .
Suppose the third terminal is only interested in conveying the source process under a distortion constraint, and it follows an uncoded strategy by sending
(12) 
Then the overall model becomes
(13)  
(14) 
where the resulting effective state is noncausally known to both the encoders. Notice that this is indeed the joint state estimation and communication model of the dirty paper MAC with state estimation. The following corollary is of interest when the third user has no message.
III Achievability for the Dirty Paper MAC
We employ a suitable power splitting strategy along with dirty paper coding to prove the achievability. This is rather straightforward, but the details are given for completeness. The available power $P_1$ at the first encoder is split into two parts: $(1-\gamma_1)P_1$ for message transmission and $\gamma_1 P_1$ for state amplification, for some $\gamma_1 \in [0,1]$. Likewise, the power available at the second encoder is split into $(1-\gamma_2)P_2$ (message transmission) and $\gamma_2 P_2$ (state amplification) for some $\gamma_2 \in [0,1]$. Then generate the state amplification signals

(16)  $X_{k,s} = \sqrt{\frac{\gamma_k P_k}{\sigma_S^2}}\, S, \qquad k \in \{1,2\},$

at the respective encoders. Now the system model in (1) can be rewritten as

(17)  $Y = X_{1,m} + X_{2,m} + \left(1 + \frac{\sqrt{\gamma_1 P_1} + \sqrt{\gamma_2 P_2}}{\sigma_S}\right) S + Z.$
Here the index m in the subscript indicates that the corresponding signals are intended for message transmission, while the subscript s indicates state amplification signals. Now in order to communicate the messages across to the receiver, we employ the writing on dirty paper result for a Gaussian MAC [28].
Recall that a known dirt over an AWGN channel can be completely cancelled by dirty paper coding [8]. More generally, a rate satisfying

(18)  $R \le I(U; Y) - I(U; S)$

for some feasible distribution of the auxiliary random variable $U$, can be achieved by Gelfand-Pinsker coding [6] for a point-to-point channel with noncausally known state $S$. In order to achieve (4)–(6), we first consider a dirty paper channel with input $X_{1,m}$, known state and unknown noise $X_{2,m} + Z$, choosing the auxiliary variable as in Costa's scheme [8]. The achievable rate is
(19) 
Once the codeword of the first user is decoded, it can be subtracted from the received signal to obtain

(20) 

Now for the second sender, this can be considered as another dirty paper channel with input $X_{2,m}$, known state and unknown noise $Z$. Again choosing the auxiliary variable as in Costa's scheme, the achievable rate becomes
(21) 
By reversing the decoding order, we can show that the following rate pair is also achievable
(22) 
The entire rate region as in expressions (4) through (6) can now be achieved by time sharing.
Now we turn to the proof of the achievable distortion. Based on the observation left after removing the decoded message signals, the receiver forms the linear MMSE estimate of the state. The resulting MMSE can be readily calculated to be the RHS of expression (7). This completes the proof of achievability.
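The achievable tuples of this section can be sketched numerically. The snippet below is an illustration, not the paper's notation: it assumes the standard Costa DPC rate expressions for one fixed decoding order and the linear MMSE formula for the residual channel, with hypothetical powers P1, P2, noise variance N, state variance sigma2, and splitting fractions g1, g2:

```python
import math

def achievable_tuple(P1, P2, N, sigma2, g1, g2):
    """Sketch of one operating point of the power-splitting scheme.

    Fractions g1, g2 of each user's power go to uncoded state
    amplification, the rest to DPC-coded messages. Rates assume the
    decoding order in which user 1's codeword is decoded first,
    treating user 2's message signal as additional noise."""
    # DPC rates for this decoding order (all logs base 2):
    R1 = 0.5 * math.log2(1 + (1 - g1) * P1 / ((1 - g2) * P2 + N))
    R2 = 0.5 * math.log2(1 + (1 - g2) * P2 / N)
    # After both message codewords are decoded and subtracted, the residual
    # observation is (1 + (sqrt(g1*P1) + sqrt(g2*P2))/sigma_S) * S + Z;
    # the linear MMSE estimate of S then incurs:
    amp = math.sqrt(sigma2) + math.sqrt(g1 * P1) + math.sqrt(g2 * P2)
    D = sigma2 * N / (amp**2 + N)
    return R1, R2, D

# Devoting all power to messages maximizes the rates but leaves the raw state.
R1, R2, D = achievable_tuple(10.0, 5.0, 1.0, 1.0, 0.0, 0.0)
assert abs(D - 0.5) < 1e-12  # only S + Z remains for estimation
```

Sweeping g1, g2 over [0, 1], and time-sharing between the two decoding orders, traces out an achievable set of (R1, R2, D) triples.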
IV Outer Bound for the Dirty Paper MAC with State Estimation
In this section and the next, we show that any successful communication scheme has to satisfy the rate and distortion constraints of Theorem 2. Two ideas from the single user result of [4] will turn out to be very useful towards the proof. The first is stated below as a lemma, its proof can be found in [4].
Lemma 6.
Any communication scheme achieving a distortion $D$ over block length $n$ will have

(23)  $\frac{1}{n} I(S^n; Y^n) \ge \frac{1}{2}\log\frac{\sigma_S^2}{D}.$
The second useful idea is to bound a weighted combination of the rates and the state estimation error, instead of bounding the rates and the distortion $D$ separately. As in the single-user case, this transformation of the distortion function turns out to be sufficient for the dirty paper MAC with state as well; however, we now have to consider rate pairs $(R_1, R_2)$. In addition, we will use the following property of Gaussian random variables [18]: Gaussians maximize entropy, i.e. for any random vector $X^n$,

(24)  $h(X^n) \le \frac{n}{2}\log\left(2\pi e\, \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[X_i^2]\right).$
The above facts will be extensively used in our proofs. For nonnegative weights, let us define the maximal weighted combination of the rates and the state estimation term, where the maximum is taken over all tuples obeying (4)–(7). Negative weights need not be considered: a negative weight on a rate trivially corresponds to a zero value of that rate in the maximization, a case already accounted for, and similarly only a nonnegative weight on the distortion term is relevant. Thus, only nonnegative weighting coefficients are considered in the sequel. A converse proof can be obtained by showing that if a tuple is achievable using block length $n$, then, for all such weights,
(25) 
Our strategy is to convert the LHS of (25) to a form where (24) can be applied. Since the messages are independent of the state $S^n$, we have the corresponding Markov condition. Denoting the relevant per-letter quantities, we have for the $i$-th entry in a block,
(26) 
More generally, we can define the empirical covariance matrix of the combined vector of channel inputs and state, with suitable notation for its entries. Let us denote
Now, let us introduce two parameters for each user such that
(27) 
where the sign factor captures the sign of the correlation. The remaining entries of the covariance matrix can be evaluated using (27).
Let us also define two parameters and :
(28) 
With this, we are all set to prove (25). First of all, it suffices to consider one ordering of the weights on the two rates, as a simple renaming of the indices gives the opposite case. Since the weight vector can be scaled by an arbitrary positive number, we can normalize one of the weights to unity, and the maximization becomes
(29) 
For a given weight on the distortion term, three regimes are of interest, as depicted in Figure 4. These regimes can be identified in the parameter plane as the three cases marked in Figure 5.
We give slightly different proofs for the three cases marked above. We begin with some discussion common to all the cases, using shorthand notation for the RHS of equations (4)–(7). The following two lemmas play a key role in our proofs for the various regimes.
Lemma 7.
For , and defined in (28), we have
(30) 
Proof:
The proof is given in Appendix A. ∎
Lemma 8.
For , the function is a nonincreasing function in each of the arguments, i.e. for and .
Proof:
Notice that increases with (or ), see (7). Furthermore, a simple inspection shows that the function is decreasing in each of the arguments. ∎
Let us now consider the different regimes for as in Figure 5.
Case 1: In this regime, Lemma 7 directly gives a bound on the weighted sum-rate.
Case 2: This requires a slightly different approach. In this regime we can write

(31) 

where the first steps use Lemma 7 followed by Lemma 8, and the last step follows from the fact that the minimal possible distortion is obtained by uncoded transmission of the state by the two users acting as a super-user with their combined power [4]. In other words, an equivalent point-to-point channel results in the absence of messages at both the transmitters.
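The super-user distortion bound invoked here can be written down explicitly. The closed form below is a sketch that mirrors the single-user expression of [4] under the super-user interpretation, with hypothetical arguments, and should be read as an assumption rather than the paper's stated formula:

```python
import math

# Sketch of the minimal-distortion endpoint: both users spend all power on
# uncoded transmission of the state, acting as a single super-user with
# combined amplitude sqrt(P1) + sqrt(P2); the receiver then forms a linear
# MMSE estimate of the state from the resulting scaled observation.
def D_min(P1, P2, N, sigma2):
    amp = math.sqrt(sigma2) + math.sqrt(P1) + math.sqrt(P2)
    return sigma2 * N / (amp**2 + N)

# With zero transmit power, the receiver estimates S from S + Z alone.
assert abs(D_min(0.0, 0.0, 1.0, 1.0) - 0.5) < 1e-12
```

As expected, the bound decreases monotonically in each user's power, and reduces to the plain MMSE of observing the state through the channel noise when both powers vanish.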
Case 3: Here also we modify an appropriate hyperplane by changing the weight on the rates, but this time relative to the weight on the distortion term. The argument again combines Lemmas 7 and 8, and from (49) we can bound the remaining term. Thus,
(32) 
Let us now show that bounds (30) – (32) indeed define the region given in Theorem 2.
V Equivalence of Inner and Outer Bounds
In this section, we prove that the respective regions defined by the inner and outer bounds in Sections III and IV coincide, thereby establishing the capacity region. As before, for each value of the weights, we will consider three different regimes and show that the maximal value of the weighted objective in the outer bound can be achieved. Before we embark on this, a numerical example is in order.
The optimal tradeoff region for a representative choice of parameters is plotted in Figure 6. In the first quadrant, distinct faces of the region can be identified. Notice the oblique face along the distortion axis in the plot. This face is a pentagon, corresponding to the maximal distortion; however, it does not coincide with the boundary plane of the plot shown (the plot begins at a slightly smaller distortion value). This is to emphasize the fact that even if we care only about optimizing the transmission rates, some reduction of the distortion from its maximal possible value can still be achieved; this can also be seen from the employed DPC scheme. Essentially, this distortion reduction can be achieved while operating at the maximal sum-rate. In principle, the extreme pentagon in the current plot can be extended all the way to the maximal distortion. Notice that any other cross-section along the distortion axis in the interior of the plot is not even a polytope, see Figure 7.
The respective faces intersecting the two rate axes represent the single-user tradeoff between the estimation error and the communication rate, when only one of the users is present [4]. Notice the three remaining faces in the interior. The middle one (striped, red) is a collection of lines, corresponding to the sum-rate constraints at different values of distortion. The other two surfaces are curved; Figure 7 illustrates this using a cross-section at a given distortion value.
Let us now generalize our observations from the example. While maximizing the weighted objective, we already showed that one regime corresponds to an extreme point where the sum-rate is zero (see Section IV). Clearly, the corresponding distortion lower bound can be achieved by uncoded transmission of the state by both the transmitters, using all the available power; this subsumes the whole regime. Furthermore, another regime of Section IV reduces the range of parameters that needs to be considered, and the resulting region matches the single-user results of [4], albeit for a state process with a modified variance. This leaves us with showing achievable schemes for the remaining cases. The following lemma holds the key in this regime.
Lemma 9.
For , the function is jointly strictly concave in for and .
Proof:
The proof is relegated to Appendix B. ∎
Since the maximum is attained for some value of the splitting parameters, the strict concavity of the objective implies that for the given weights there is a unique maximizing choice. Clearly, choosing the maximizing parameters in our achievability result gives the same operating point. Reversing the roles of the two users, we have covered the whole region, except for the boundary regime of equal rate weights. Anticipating the end result, we will call the corresponding extremal surface the dominant face of the plot, somewhat abusing the term face. Being strictly concave, the objective is maximized at a unique value of the parameters. Thus, for a given value of distortion, the dominant line simply connects the two points given by
(33) 
for appropriate parameter choices. Evidently, each rate pair on the dominant line is achievable by our communication scheme. This completes the proof of Theorem 2.
Let us now turn our attention towards a multi-user MAC with or without state.
VI Message and Source Communication for a Multi-sender GMAC
In the multi-sender model, all the transmitting nodes observe the same source process, and should help the receiver estimate it. In addition, each node may have an independent stream of messages to be communicated to the base station. Theorem 4 gives the optimal tradeoff region, which we prove below.
VI-A Achievable Scheme
The proof of achievability follows the same lines as before, via power sharing, dirty paper coding and MMSE estimation. In particular we choose
(34) 
and choose the message signal of each user via dirty paper coding. Suppose we employ successive cancellation at the decoder. Then each user does DPC treating the amplified source as the known dirt; the effective interference for a given user consists of the message signals of the users decoded after it by the successive cancellation decoder. Once all the messages are decoded, these signals are removed from the received symbols, and state estimation is done using a linear MMSE estimate. Taking different user permutations for successive cancellation, and further time-sharing, gives the rate region of Theorem 4. These straightforward computations are omitted here.
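The successive cancellation rate computation sketched above can be illustrated as follows; the function, the per-user message powers, and the noise variance N0 are hypothetical stand-ins for the omitted computations:

```python
import math

# Sketch of the successive-cancellation DPC rates for the multi-sender scheme.
# powers_msg[k] is the power user k devotes to its message after power
# splitting (hypothetical values); N0 is the channel noise variance. Each
# user's DPC cancels the known dirt, so its effective noise is the channel
# noise plus the message powers of users decoded after it.
def sc_dpc_rates(powers_msg, N0, order):
    rates = {}
    remaining = list(order)
    for k in order:
        remaining.remove(k)  # users decoded after k remain as interference
        interference = sum(powers_msg[j] for j in remaining)
        rates[k] = 0.5 * math.log2(1 + powers_msg[k] / (interference + N0))
    return rates

rates = sc_dpc_rates([4.0, 2.0, 1.0], N0=1.0, order=[0, 1, 2])
# The rates telescope: their sum equals the full multiple-access sum capacity.
assert abs(sum(rates.values()) - 0.5 * math.log2(1 + 7.0)) < 1e-9
```

Since the rates telescope, every decoding order attains the same sum-rate; time-sharing over the orders then fills out the corresponding rate region for fixed power splits.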
VI-B Converse Bound
We now prove the converse. We are interested in maximizing
(35) 
for nonnegative weighting coefficients. Without loss of generality, we can assume an ordering of the weights on the individual rates. As we did for the two-user case, since the overall scale of the weights is arbitrary, we can normalize them suitably.
Then, as in the two-user case, three regimes of the weighting coefficients arise.
Let us first extend Lemma 7 to the multi-sender setting. To this end, define
(36) 
We also use shorthand notation for the RHS of (10).
Lemma 10.
For , , and ,
(37) 
Proof:
The proof is relegated to Appendix C. ∎
The following lemma can be verified by inspection.
Lemma 11.
For , the function is a nonincreasing function in each of the arguments .
Now we consider the various regimes of the weighting coefficients.
Case 1: In this regime, Lemma 10 directly gives a bound on the weighted sum-rate.
Case 2: In this regime, we can write
(38) 
where the rates are set to the all-zero vector. The first steps are implied by Lemmas 10 and 11, and the last follows since this is the minimal distortion possible, which can be achieved by uncoded transmission of the state by all the transmitters.
Case 3: Proceeding as in the two-user case, let us bound