Distributed Iterative Processing for Interference Channels with Receiver Cooperation

04/17/2012 · by Mihai-Alin Badiu, et al.

We propose a framework for the derivation and evaluation of distributed iterative algorithms for receiver cooperation in interference-limited wireless systems. Our approach views the processing within and collaboration between receivers as the solution to an inference problem in the probabilistic model of the whole system. The probabilistic model is formulated to explicitly incorporate the receivers' ability to share information of a predefined type. We employ a recently proposed unified message-passing tool to infer the variables of interest in the factor graph representation of the probabilistic model. The exchange of information between receivers arises in the form of passing messages along some specific edges of the factor graph; the rate of updating and passing these messages determines the communication overhead associated with cooperation. Simulation results illustrate the high performance of the proposed algorithm even with a low number of message exchanges between receivers.




I Introduction

Cooperation in interference-limited wireless networks has the potential to significantly improve system performance [1]. Additionally, variational techniques for Bayesian inference [2] have proven extremely useful for the design of iterative receiver architectures in non-cooperative scenarios. Hence, using such inference methods to design iterative algorithms for receiver cooperation could be beneficial.

Algorithms based on belief propagation (BP) are proposed in [3, 4] for distributed decoding in the uplink of cellular networks with base-station cooperation, assuming simple network models, uncoded transmissions and perfect channel knowledge at the receivers; it is shown that the performance of optimal joint decoding can be achieved with decentralized algorithms. In [5, 6], the authors discuss strategies for base-station cooperation and study the effect of quantizing the exchanged values, still assuming perfect channel knowledge.

In this paper, we study cooperative receiver processing in an interference channel and formulate it as probabilistic inference in factor graphs. We state a probabilistic model that explicitly incorporates the ability of the receivers to exchange a certain type of information. To infer the information bits, we apply a recently proposed inference framework that combines BP and the mean-field (MF) approximation [7]. We obtain a distributed iterative algorithm within which all receivers iteratively perform channel and noise precision estimation, detection and decoding, and also pass messages along the edges of the factor graph that connect them. The rate of updating and passing these messages determines the amount of communication over the cooperation links.

Notation: The relative complement of a set A in a set B is written as B ∖ A. Boldface lowercase and uppercase letters are used to represent vectors and matrices, respectively; superscripts (·)^T and (·)^H denote transposition and Hermitian transposition, respectively. The Hadamard product of two vectors is denoted by ⊙. The probability density function (pdf) of a multivariate complex Gaussian distribution with mean μ and covariance matrix Σ is denoted by CN(·; μ, Σ); the pdf of a Gamma distribution with scale a and rate b is denoted by Ga(·; a, b). We write f(x) ∝ g(x) when f(x) = c g(x) for some positive constant c. The Dirac delta function is denoted by δ(·). Finally, E[·] stands for the expectation of a random variable.

II System Model

We consider a system with parallel point-to-point links where each user sends information to its corresponding receiver and interferes with the others by doing so. To decode the desired messages, the receivers are able to cooperate by exchanging information over dedicated error-free links.

A message sent by user is represented by a vector of information bits and is conveyed by sending data and pilot channel symbols having the sets of indices and , respectively, such that and ; the sets and are identical for all users. The bits in are encoded and interleaved into a vector of bits which are then mapped to data symbols , where is a (user specific) discrete complex modulation alphabet of size . Symbols are multiplexed with pilot symbols which are randomly drawn from a QPSK modulation alphabet.

The users synchronously transmit their aggregate vectors of channel symbols over an interference channel with input-output relationship


The vector contains the signal received by receiver , is the vector of complex weights of the channel between transmitter and receiver , and contains the samples of additive noise at receiver with pdf for some positive precision . For all , we define the signal-to-noise ratio (SNR) and interference-to-noise ratio (INR) at receiver as
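As a concrete illustration of this signal model, the following sketch generates the received signals and empirical SNR/INR values. All identifiers (K links, per-subcarrier channel weights h[k, l], QPSK symbols, noise precisions lam[k]) are assumptions for the example, not symbols from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 2      # number of point-to-point links (assumed)
N = 64     # number of channel symbols per frame (assumed)

# Frequency-flat per-subcarrier channel weights h[k, l] between transmitter l
# and receiver k; QPSK symbols x[l]; noise precision lam[k] at each receiver.
h = (rng.standard_normal((K, K, N)) + 1j * rng.standard_normal((K, K, N))) / np.sqrt(2)
x = (2 * rng.integers(0, 2, (K, N)) - 1
     + 1j * (2 * rng.integers(0, 2, (K, N)) - 1)) / np.sqrt(2)
lam = np.array([10.0, 10.0])

# Received signal at receiver k: Hadamard (elementwise) products of the
# transmitted vectors with the channel weights, plus complex Gaussian noise.
y = np.empty((K, N), dtype=complex)
for k in range(K):
    noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(1 / (2 * lam[k]))
    y[k] = sum(h[k, l] * x[l] for l in range(K)) + noise

# Empirical SNR and INR at receiver 0 (desired user 0, interferer 1):
# average channel gain times the noise precision.
snr = np.mean(np.abs(h[0, 0]) ** 2) * lam[0]
inr = np.mean(np.abs(h[0, 1]) ** 2) * lam[0]
```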

III The Combined BP-MF Inference Framework

In this section, we consider a generic probabilistic model and briefly describe the unified message-passing algorithm that combines the BP and MF approaches [7].

Let be an arbitrary pdf of a random vector which factorizes as


where is the vector of all variables that are arguments of the function for all . We have grouped the factors into two sets that partition : and . The factorization in (2) can be visualized by means of a factor graph [8]. We define to be the set of indices of all variables that are arguments of function ; similarly, denotes the set of indices of all functions that have variable as an argument. The parts of the graph that correspond to and to are referred to as “BP part” and “MF part”, respectively.
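As a toy illustration of the neighbor sets N(a) and N(i) and the BP/MF partition of the factors (the tiny factorization and all names below are invented for the example):

```python
# Toy factorization p(x1, x2, x3) ∝ f_a(x1, x2) f_b(x2, x3) f_c(x3),
# with f_a assigned to the "BP part" and f_b, f_c to the "MF part".
neighbors = {"f_a": {1, 2}, "f_b": {2, 3}, "f_c": {3}}  # N(a): variables of factor a
bp_part, mf_part = {"f_a"}, {"f_b", "f_c"}              # partition of the factor set

def factors_of(i):
    """N(i): the set of factors that have variable i as an argument."""
    return {a for a, variables in neighbors.items() if i in variables}

# Variable 2 appears in both parts of the graph, so messages of both BP
# and MF type will be combined in its belief.
assert factors_of(2) == {"f_a", "f_b"}
```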

The combined BP-MF inference algorithm approximates the marginals , , by auxiliary pdfs called beliefs. They are computed as [7]




where and are constants that ensure normalized beliefs.
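For reference, the fixed-point message updates of the combined BP-MF scheme of [7] can be sketched as follows, where m denotes factor-to-variable messages, n variable-to-factor messages, and A_BP, A_MF the two factor sets; this is a paraphrase in generic notation, not a verbatim reproduction of (3)-(4):

```latex
\begin{align}
m^{\mathrm{BP}}_{a\to i}(x_i) &\propto \int \Big(\prod_{j \in \mathcal{N}(a)\setminus i} n_{j\to a}(x_j)\Big)\, f_a(\mathbf{x}_a)\,\mathrm{d}\mathbf{x}_{a\setminus i},
  && a \in \mathcal{A}_{\mathrm{BP}},\\
m^{\mathrm{MF}}_{a\to i}(x_i) &\propto \exp\!\Big(\mathrm{E}\big[\ln f_a(\mathbf{x}_a)\big]\Big),
  && a \in \mathcal{A}_{\mathrm{MF}},\\
n_{i\to a}(x_i) &\propto \prod_{c \in \mathcal{A}_{\mathrm{BP}} \cap \mathcal{N}(i)\setminus a} m^{\mathrm{BP}}_{c\to i}(x_i)
  \prod_{c \in \mathcal{A}_{\mathrm{MF}} \cap \mathcal{N}(i)} m^{\mathrm{MF}}_{c\to i}(x_i),\\
b_i(x_i) &\propto \prod_{a \in \mathcal{A}_{\mathrm{BP}} \cap \mathcal{N}(i)} m^{\mathrm{BP}}_{a\to i}(x_i)
  \prod_{a \in \mathcal{A}_{\mathrm{MF}} \cap \mathcal{N}(i)} m^{\mathrm{MF}}_{a\to i}(x_i),
\end{align}
```

where the expectation in the MF message is taken with respect to the incoming messages n_{j→a}, j ∈ N(a) ∖ i. Note that for a factor in the MF part the variable-to-factor message includes the message received from that factor itself, i.e., it equals the belief.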

IV Distributed Inference Algorithm

In this section, we state a probabilistic formulation of cooperative receiver processing and use the combined BP-MF framework to obtain the message updates in the corresponding factor graph; finally, we define a parametric iterative algorithm for distributed receiver processing.

IV-A Probabilistic system model

The probabilistic system function can be obtained by factorizing the joint pdf of all unknown variables in the signal model. Collecting the unknown variables in vector , we have:


To include in the probabilistic model the ability of the different receivers to exchange information of a certain type, we define an augmented pdf. Depending on the type of shared information, several cooperative strategies can be devised: the receivers could exchange their current local knowledge about the modulated data symbols, the coded and interleaved bits, or the information bits. We focus on one of these cases; the other alternatives can be implemented with straightforward modifications to the model presented in this section. To construct the augmented pdf for this cooperation scenario, we replace each shared vector variable with "alias" variables, one per cooperating receiver, which are constrained to be equal to the corresponding original variable. Keeping in mind that each receiver is interested in decoding its own message, the factorization of the augmented pdf reads


where denotes the vector of all unknown variables in (6), including the alias variables. Next, we denote, define and group in sets the factors in (6). For all , the factors

incorporate the observation vector and they form the set ; the factors are the prior pdfs of the parameters and they form the set ; the factors , , represent the prior pdfs of the vectors and they form the set ; denoting by the subvector of containing the bits mapped on and by the mapping function, for all , the factors

account for the modulation mapping and they form the set ; the factors stand for the coding and interleaving operations performed at transmitter and they form the set ; the factors

are the uniform prior probability mass functions of the information bits and they form the set

; finally, for all , the factors


constrain the alias variables , to be equal, and they form the set . Note that, due to these additional constraints, marginalizing (6) over all alias variables , leads to the original probabilistic model (5).

The factorization in (6) can be visualized in a factor graph, which is partially depicted in Fig. 1. The graphs corresponding to the channel codes and interleavers are not given explicitly, their structures being captured by . We coin “receiver ” the subgraph containing the factor nodes , , , , , and the variable nodes connected to them. The factor nodes and model the cooperative link between receivers and .

We can now recast the problem of cooperative receiver processing as an inference problem on the augmented probabilistic model (6): each receiver needs to infer the beliefs of its information bits using its observation vector and prior knowledge, i.e., the pilot symbols of all users and their set of indices (since the pseudo-random pilot sequences can be generated deterministically from information available to all receivers, each receiver is able to reconstruct all the pilot symbols without exchanging them), the channel statistics, the modulation mappings of all users, the structure of its user's channel code and interleaver, and the external information provided by the other receivers. The inference problem is solved by applying the method described in Section III, which leads to iteratively passing messages in the factor graph. We can control the communication overhead between receivers by adjusting the rate of passing messages through the equality-constraint factor nodes.

Fig. 1: Factor graph representation of the pdf factorization in (6): receivers and are depicted together with the connections between them. For all , the bits in receiver are connected to the bits in all other receivers, while the bits , , are only connected to the bits in receiver .

IV-B Message computations

To make the connection with the arbitrary model in Section III, we define the sets of all factors and of all variables introduced in the previous subsection (with a slight abuse of notation, from this point on we use the names of functions and variables as indices in these sets). We choose to split the set of factors into the following two sets that yield the "MF part" and the "BP part":


In the following, we use (4) to derive messages in our setup, focusing on their final expressions. More detailed message computations using the combined BP-MF method can be found in [7] and [9] for non-cooperative scenarios.

First, for all we define the statistics

for and we set and , for . We also define , , (the defined quantities are the parameters determining the corresponding beliefs, because the factor is in the MF part and therefore the beliefs are equal to the variable-to-factor messages; see (3), (4))

and we denote by the th entry of .

Channel estimation: Using (4), we obtain the messages


for all , where is a diagonal covariance matrix and

We have ; so, using (4), we obtain


, with

Noise precision estimation: Using (4), we obtain


, with and

We select the conjugate prior pdf

, . Using (4), we obtain


Setting the prior pdfs to be non-informative, i.e., , we obtain the estimate , .
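Under the conjugate Gamma prior in the non-informative limit, the variational precision update has a simple closed form: the shape accumulates the number of observations and the rate accumulates the expected residual power. The sketch below illustrates this on synthetic residuals; all names are assumptions, and the desired and interfering signals are taken as perfectly cancelled so the residual is pure noise:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
lam_true = 4.0  # true noise precision (assumed for the example)

# Residuals after cancelling the signal: complex Gaussian noise with
# per-sample variance 1 / lam_true.
resid = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(1 / (2 * lam_true))

# Ga(a0, b0) conjugate prior on the precision; the non-informative limit
# a0 = b0 = 0 corresponds to a prior proportional to 1 / lam.
a0, b0 = 0.0, 0.0
a = a0 + N                            # shape update: one unit per complex observation
b = b0 + np.sum(np.abs(resid) ** 2)   # rate update: accumulated residual power
lam_hat = a / b                       # mean of the Gamma belief

# With N = 1024 samples the estimate concentrates near the true precision
assert abs(lam_hat - lam_true) / lam_true < 0.2
```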

Symbol detection: Using (4), we obtain



Assume that in the BP part of the graph we have obtained


where is the extrinsic value of for . According to (4), the discrete messages (APP values)


are sent to the MF part, while are sent to the BP part as extrinsic values.
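A minimal sketch of such a detection message for a QPSK alphabet is given below. The identifiers h_hat, v_h, and lam_hat are assumed placeholders for the channel estimate, its variance, and the noise precision estimate; the observation is taken noiseless purely for illustration:

```python
import numpy as np

# Candidate QPSK symbols and (assumed) channel/noise parameters
alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
h_hat, v_h, lam_hat = 0.9 + 0.3j, 0.01, 8.0

s_true = alphabet[0]
y = h_hat * s_true  # noiseless observation, for illustration

# Unnormalized log-message for each candidate symbol: a Gaussian likelihood
# term plus a penalty in which the channel-estimate variance v_h enters
# through the MF expectation of |h|^2 |s|^2.
log_m = -lam_hat * (np.abs(y - h_hat * alphabet) ** 2 + v_h * np.abs(alphabet) ** 2)

# Combine with a uniform extrinsic prior from the decoder and normalize to APPs
app = np.exp(log_m - log_m.max())
app /= app.sum()
# app.argmax() == 0 recovers the transmitted symbol
```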

(De)mapping, decoding, information exchange: These operations are obtained using (4), which due to (8) reduce to the BP computation rules. Messages from and to binary variable nodes are of the form

, with . Computing is equivalent to MAP demapping, , . The messages , , and represent the input values to the de-interleaving and decoding BP operations which output and . Due to the equality constraints (7), messages pass transparently through the factor nodes , . Therefore, the following messages are received by receiver from receiver , and :


The messages


, , are used in (4) to obtain the soft mapping updates (14).

IV-C Algorithm outline

We define the cooperative processing algorithm by specifying the order in which the messages in Section IV-B are computed and passed in the factor graph. The algorithm consists of three main stages:

IV-C1 Initialization

Receiver obtains initial estimates of its variables. First, estimates of with are obtained for all by using an iterative estimator based on the signals at pilot positions only, similar to the one described in [9, Sec. V.A]. Specifically, we restrict (9), (10), (11) to include only subvectors and submatrices corresponding to pilot indices and we initialize and , . We compute (9) and (10) successively for all , and then (11) and (12); repeat this process times. The initial estimates of are obtained by applying (10) for whole vectors and matrices, with , . Then, we set and , . Estimation of is performed using (11) and (12), followed by symbol detection (13), applied successively for all ; this process is repeated times. Finally, soft demapping and decoding are performed in the BP part, with and initialized to have equal bit weights.

IV-C2 Information exchange

Receiver sends given by (16) to receiver and simultaneously receives from all receivers ; then, it computes and sends given by (17) to all receivers .

IV-C3 Local iteration

Receiver computes (18), followed by (14) and (15), for all . Next, , , are successively estimated using (9) and (10), and is estimated using (12). Then, (13) is successively computed for all , repeating this process times. Finally, soft demapping and decoding are performed in the BP part.

To define the distributed iterative algorithm, we use three parameters: describes the total number of receiver iterations, including the Initialization stage as first iteration; denotes the number of Information exchange stages; for , the vector with strictly increasing elements contains the iteration indices after which an Information exchange stage takes place. For we set .

Algorithm 1

The steps of the algorithm are:

  1. Initialization for all ; Set and ;

  2. If then go to step 5);

  3. If then Information exchange ; ;

  4. Local iteration for all ; ; go to step 2);

  5. Take hard decisions using the beliefs
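The control flow of these steps can be sketched as follows; the parameter names num_iters and exchange_after stand in for the paper's symbols, and the stage bodies are stubs:

```python
def run_receiver(num_iters, exchange_after, init, local_iteration, information_exchange):
    """Schedule of Algorithm 1: Initialization counts as iteration 1, and an
    Information exchange stage runs after each iteration index listed in
    exchange_after (a strictly increasing set of indices)."""
    init()
    trace = ["init"]
    it = 1
    while it < num_iters:                  # step 2: stop after num_iters iterations
        if it in exchange_after:           # step 3: exchange after selected iterations
            information_exchange()
            trace.append("exchange")
        local_iteration()                  # step 4: local iteration, then loop
        trace.append("local")
        it += 1
    # step 5: hard decisions on the final beliefs would follow here
    return trace

trace = run_receiver(
    num_iters=4, exchange_after={1, 3},
    init=lambda: None, local_iteration=lambda: None, information_exchange=lambda: None,
)
# trace == ["init", "exchange", "local", "local", "exchange", "local"]
```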

V Simulation Results

We consider an OFDM system consisting of links with symmetric channel powers, the same noise levels at the receivers, and strong interference. The detailed assumptions are listed in Table I. The performance of Algorithm 1 is evaluated through Monte Carlo simulations. The BER dependence on SNR is illustrated in Fig. 2, while the BER convergence is given in Fig. 3. Receiver collaboration provides significantly improved performance compared to a non-cooperative setting. With a single exchange an error floor occurs, but the cooperation scheme with only two exchanges almost achieves the performance of "full" cooperation; the improvement brought by the second exchange is clearly visible in Fig. 3. All schemes need about receiver iterations to converge. The benefits of cooperation are also observed in the improved channel weight and noise precision estimation (results are not presented here), which of course leads to improved detection and decoding, and vice versa.

Parameters of the OFDM system Value
Number of users
Subcarrier spacing
Number of active subcarriers
Number of pilot symbols evenly spaced pilots
Modulation scheme for data symbols
Convolutional code (of both users)
Multipath channel model 3GPP ETU
Parameters of the algorithm Value
Number of receiver iterations
Number of exchanges
Exchange indices
Number of sub-iterations ,
TABLE I: Simulation parameters
Fig. 2: BER vs. SNR performance of the distributed iterative algorithm.
Fig. 3: BER vs. iteration number at dB.

VI Conclusions

We proposed a message-passing design of a distributed algorithm for receiver cooperation in interference-limited wireless systems. Capitalizing on a unified inference method that combines BP and the MF approximation, we obtained an iterative algorithm that jointly performs estimation of channel weights and noise powers, detection, and decoding in each receiver, together with information sharing between receivers. Simulation results showed a remarkable improvement compared to a non-cooperative system, even with only 1–2 exchanges between receivers; as expected, a trade-off between performance and the amount of shared information could be observed.

In general, our approach provides several degrees of freedom in the design of distributed algorithms, such as the type of shared information and the parameters of the algorithm (number of receiver iterations, rate and schedule of information exchange). The proposed approach can be extended to other cooperation setups and it can accommodate the exchange of quantized values – the quantization resolution thus becoming another implementation choice – by quantizing the parameters of the messages passed between the receivers.