# Channel Dependent Mutual Information in Index Modulations

Mutual Information is the metric used to perform link adaptation, which allows rates near capacity to be achieved. Adaptive transmission modes are computed by employing the mapping between the Signal to Noise Ratio and the Mutual Information. Due to the high complexity of computing the Mutual Information, this process is performed off-line via Monte Carlo simulations, whose results are stored in look-up tables. However, in Index Modulations, such as Spatial Modulation or Polarized Modulation, this is not feasible, since the constellation and the Mutual Information are channel dependent, and the metric would have to be recomputed at each time instant if the channel is time varying. In this paper, we propose different approximations in order to obtain a simple closed-form expression that allows the Mutual Information to be computed at each time instant, thus making link adaptation feasible.

## 1 Introduction

Link Adaptation in modern communications is performed by computing the Effective Signal to Noise Ratio (SNR) Mapping (ESM) based on Mutual Information (MI-ESM) [Tao2011, Cheema2014, Hosseini2016]. For instance, the work in [emd2008] describes the procedure of computing MI-ESM in Single-Input Single-Output systems for the IEEE 802.16e standard. Analogously, the authors of [Latif2012] describe the MI-ESM algorithm for Long Term Evolution (LTE) networks. All of these works have in common the computation of the Mutual Information (MI), which involves the expectation of a function of a Random Variable (RV) without a closed-form solution.

In the literature, the expectation that defines the MI is computed off-line via Monte Carlo simulations and the results are stored in a look-up table (LUT). After this step, the received SNR of each symbol within a codeblock or frame is mapped through the LUT to obtain the MI corresponding to that SNR.
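The LUT step can be sketched as follows; the SNR grid, the MI values and the QPSK saturation level are illustrative placeholders, not data from the paper:

```python
import numpy as np

# Hypothetical LUT: SNR grid (dB) and pre-computed MI values for a fixed
# constellation. These numbers are illustrative, not taken from the paper.
snr_db_grid = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0, 20.0])
mi_lut = np.array([0.05, 0.2, 0.6, 1.3, 1.85, 1.98, 2.0])  # bits (QPSK caps at 2)

def mi_from_snr(snr_db):
    """Map per-symbol SNR to MI by linear interpolation in the LUT."""
    return np.interp(snr_db, snr_db_grid, mi_lut)

# MI-ESM style use: average the per-symbol MI values of a codeblock.
symbol_snrs_db = np.array([2.0, 7.5, 12.0])
effective_mi = float(mi_from_snr(symbol_snrs_db).mean())
```

Because the constellation is fixed, the LUT is computed once; the per-symbol lookup itself is cheap.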

In Index Modulations (IM), such as Spatial Modulation [DiRenzo2014] or Polarized Modulation [Henarejos2015a], the information is transmitted not only with a fixed constellation, such as Quadrature Amplitude Modulation (QAM), but also through the channel hops. Due to this dependence on the channel, the MI computation cannot be performed off-line, since the expressions contain the channel realization [Tato17]. The alternative would be to compute the MI curve at each time instant, depending on the channel realization; due to the high computational complexity of the MI computation, this approach is not feasible.

This paper presents closed-form expressions based on different order approximations of the MI of IM. Based on the works [Yang2008, Rajashekar2014, Henarejos2017], which compute the capacity of IM, we aim at solving the difficulty of finding a closed-form expression of the MI. Thanks to this expression, we are able to compute the MI at each time instant with much lower computational complexity, making the problem of adaptive IM affordable. Hence, the estimated MI is used to select the Modulation and Coding Scheme in the link adaptation process.

## 2 System Model and Mutual Information

Given a discrete time instant, the IM over an arbitrary Multiple-Input Multiple-Output (MIMO) channel realization, with t inputs and r outputs, is defined as

 y = √γ H x + w, (1)

where γ is the average SNR, x = e_l s, e_l is the all-zero vector except at position l, which is 1, H is the r×t channel matrix, l is the hopping index, and s is the complex symbol from the constellation S. The AWGN noise is modeled as the vector w ∼ CN(0, I). In other words, x has only one component different from zero (the lth component) and its value is s; that is, the transmitted symbol hops among the different channels.

Differently from previous works, in this paper we do not analyze the statistics of H, as we are only interested in the MI given a channel realization. H models the effects and specific impairments of the employed domain (spatial, polarization, frequency, etc.).

Since the transmitted vector is determined by s and l, it is possible to rewrite (1) as

 y = √γ h_l s + w. (2)
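Since x has a single nonzero entry, H x reduces to the lth column of H scaled by s, which is the equivalence between (1) and (2). A minimal numerical check, with an illustrative 4×4 Rayleigh channel and a QPSK symbol (sizes and seed are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
r, t = 4, 4  # illustrative MIMO size
H = (rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))) / np.sqrt(2)

l = 2                      # hopping index
s = (1 + 1j) / np.sqrt(2)  # a QPSK symbol

# x = e_l * s: all-zero vector except position l, which carries the symbol.
x = np.zeros(t, dtype=complex)
x[l] = s

# H @ x selects the l-th column of H scaled by s, i.e. h_l * s, as in (2).
assert np.allclose(H @ x, H[:, l] * s)
```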

where h_l is the lth column of H. Thus, the MI between the received signal y and the pair (s, l) is expressed as

 I(y; s, l) = I(y; s|l) + I(y; l) = H(s|l) − h(s|l, y) + H(l) − h(l|y) = H(s) + H(l) − h(s, l|y), (4)

where the third equality assumes that s and l are independent RVs, H(·) denotes the entropy and h(·) the differential entropy. Note that, in contrast to [Henarejos2017], where the capacity is obtained, in our case the symbol is not maximized and belongs to a particular constellation.

The entropies of s and l are expressed as H(s) = log₂ S and H(l) = log₂ t, where S is the number of symbols defined in the constellation and t is the number of hopping indices. The expression of the differential entropy is denoted in (3), where the expectation is taken over y in its domain, f(y, s, l) is the joint probability density function (pdf), f(y|s, l) is the pdf of y conditioned on s and l, f(y) is the pdf of y, and p_s = 1/S and p_l = 1/t are the probabilities of symbol s and index l, respectively.

The pdf of y conditioned on s and l is obtained by assuming s and l to be deterministic in (2). In this case, it is clear that y is a multivariate complex Gaussian RV, with mean equal to √γ h_l s and identity covariance. Thus, the conditioned pdf is expressed as

 f(y|s, l) = (1/π^r) e^(−‖y − √γ h_l s‖²). (5)

Note that we assume that s and l are equiprobable. By substituting (5) in (3), the expectation can be described as

 (6)

where w′ = w/√γ and, thus, the conditioned RV w′ ∼ CN(0, γ⁻¹I).

Computing (6) numerically requires generating a very large number of realizations of w′ and averaging the results via Monte Carlo simulations. However, this is only feasible in scenarios where fixed constellations are employed. In the case of IM, the constellation depends on the channel realization. Hence, the expectation has to be calculated at each time instant, which requires a high computational complexity and makes the problem of link adaptation unaffordable. Our approach overcomes this problem, since it does not require off-line computations and provides closed-form expressions.
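For concreteness, a Monte Carlo sketch of this per-channel computation, built from (4) and the expectation described above; the 2×2 Rayleigh channel, QPSK constellation, seed and sample count are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def mi_monte_carlo(H, constellation, gamma, n_mc=2000):
    """Monte Carlo MI of an IM scheme for one channel realization.

    Averages, over noise samples, the log-sum term whose expectation
    appears in the text; the cost grows with n_mc, which is why repeating
    this for every channel realization is expensive.
    """
    r, t = H.shape
    S = len(constellation)
    # Enlarged constellation: x_sl = h_l * s for every (s, l) pair.
    X = np.array([H[:, l] * s for s in constellation for l in range(t)])
    total = 0.0
    for x_sl in X:
        w = (rng.standard_normal((n_mc, r)) + 1j * rng.standard_normal((n_mc, r))) / np.sqrt(2)
        wp = w / np.sqrt(gamma)  # w' = w / sqrt(gamma)
        d = x_sl[None, None, :] - X[None, :, :] + wp[:, None, :]
        arg = -gamma * (np.sum(np.abs(d) ** 2, axis=2)
                        - np.sum(np.abs(wp) ** 2, axis=1)[:, None])
        total += np.mean(np.log2(np.sum(np.exp(arg), axis=1)))
    return np.log2(S * t) - total / (S * t)

qpsk = np.exp(1j * np.pi / 4) * np.array([1, 1j, -1, -1j])
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
mi = mi_monte_carlo(H, qpsk, gamma=10.0)  # bounded by log2(t * S) = 3 bits
```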

Once the expectation (6) is defined, we apply the same procedure as described in [Henarejos2017], which uses the Taylor Series Expansion (TSE) to approximate the expectation of a function by its moments. The central moments of w′ are defined by

 μ_{W′_{i,R}} = μ_{W′_{i,I}} = 0,
 ϑⁿ_{W′_{i,R}} = ϑⁿ_{W′_{i,I}} = { (n−1)!! (2γ)^(−n/2) if n is even; 0 if n is odd }, (7)

where W′_{i,R} and W′_{i,I} are the real and imaginary parts of the ith component of the RV w′. By assuming that
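As a sanity check, the even central moments in (7) are the Gaussian moment formula (n−1)!! σⁿ with per-real-dimension variance σ² = 1/(2γ); a quick empirical verification (γ = 4, seed and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
gamma = 4.0

# Real part of one component of w' = w / sqrt(gamma): zero mean,
# variance 1 / (2 * gamma) per real dimension.
x = rng.standard_normal(2_000_000) * np.sqrt(1.0 / (2.0 * gamma))

# Even central moments follow (n - 1)!! * (2 * gamma)^(-n / 2), as in (7).
double_factorial = {2: 1.0, 4: 3.0, 6: 15.0}
for n, df in double_factorial.items():
    empirical = np.mean(x ** n)
    theoretical = df * (2.0 * gamma) ** (-n / 2)
    assert abs(empirical - theoretical) < 2e-2 * theoretical

# Odd central moments vanish.
assert abs(np.mean(x ** 3)) < 2e-4
```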

 g_sl(w′) = log₂( Σ_{s′∈S} Σ_{l′=1}^{t} e^(−γ(‖x_sl − x_s′l′ + w′‖² − ‖w′‖²)) ), (8)

we define the TSE of g_sl in the vicinity of μ_W′ as g_sl(w′) = P_N(g_sl, w′, μ_W′) + R_N(g_sl, w′, ξ), where P_N is the Taylor polynomial of degree N and R_N is the remainder term of degree N. Thus, the expectation of (6) is equal to

 (9)

where

 P̄_N(g_sl, w′, μ_W′) = E_W′{P_N(g_sl, w′, μ_W′)} = g_sl(μ_W′) + Σ_{n=1}^{⌊N/2⌋} 1/((2γ)ⁿ(2n)!!) Σ_{m=1}^{r} ( ∂²ⁿg_sl/∂w′_{m,R}²ⁿ (μ_W′) + ∂²ⁿg_sl/∂w′_{m,I}²ⁿ (μ_W′) ),
 R̄_N(g_sl, w′, ξ) = E_W′{R_N(g_sl, w′, ξ)}. (13)

Hereinafter, for the sake of clarity, we introduce the following definitions:

 x_sl ≐ h_l s,
 D_{sl,s′l′} ≐ e^(−γ‖x_sl − x_s′l′‖²),
 D_sl ≐ Σ_{s′∈S} Σ_{l′=1}^{t} D_{sl,s′l′} = Σ_{s′∈S} Σ_{l′=1}^{t} e^(−γ‖x_sl − x_s′l′‖²). (14)

The first term of (9) is described as

 g_sl(μ_W′) = log₂( Σ_{s′∈S} Σ_{l′=1}^{t} e^(−γ‖x_sl − x_s′l′‖²) ) = log₂(D_sl). (15)

Thus, by using (9) and (15) and substituting them into (4), the MI can be expressed in closed form as in (10), and the expression can be refined by considering additional terms. The simplest expression is the first order approximation, which is obtained by omitting the third term in (10). Consequently, the first order approximation is given by

 I⁽¹⁾(y; s, l) ≃ log₂(tS) − (1/(tS)) Σ_{s∈S} Σ_{l=1}^{t} log₂(D_sl) = log₂(tS / G(D_sl)), (16)

where G(·) and A(·) are the geometric and arithmetic means, respectively, i.e., G(D_sl) = (Π_{s∈S} Π_{l=1}^{t} D_sl)^(1/(tS)) and A(D_sl) = (1/(tS)) Σ_{s∈S} Σ_{l=1}^{t} D_sl.
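A minimal sketch of evaluating (16) for one channel realization, with D_sl computed as in (14); the QPSK constellation, 2×2 Rayleigh channel and variable names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def mi_first_order(H, constellation, gamma):
    """First order MI approximation (16): log2(t*S) - mean_{s,l} log2(D_sl)."""
    r, t = H.shape
    S = len(constellation)
    # Enlarged constellation: x_sl = h_l * s for every (s, l) pair.
    X = np.array([H[:, l] * s for s in constellation for l in range(t)])
    # D_sl = sum over (s', l') of exp(-gamma * ||x_sl - x_s'l'||^2), as in (14).
    diff = X[:, None, :] - X[None, :, :]
    D = np.sum(np.exp(-gamma * np.sum(np.abs(diff) ** 2, axis=2)), axis=1)
    return np.log2(S * t) - np.mean(np.log2(D))

qpsk = np.exp(1j * np.pi / 4) * np.array([1, 1j, -1, -1j])
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

# D_sl -> 1 at high SNR (only the s' = s, l' = l term survives), so the
# approximation tends to log2(t * S) = 3 bits; at low SNR it tends to 0.
mi_hi = mi_first_order(H, qpsk, gamma=1e3)
mi_lo = mi_first_order(H, qpsk, gamma=1e-3)
```

Note the cost is a single double sum over the enlarged constellation, with no noise averaging.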

The second order approximation involves the second derivative of g_sl at μ_W′. Thus, after some mathematical manipulations, the second term is expressed as (11), where

 D_{m,sl,R} = Σ_{s′∈S} Σ_{l′=1}^{t} (x_{m,s′l′,R} − x_{m,sl,R}) D_{sl,s′l′},
 D_{m,sl,I} = Σ_{s′∈S} Σ_{l′=1}^{t} (x_{m,s′l′,I} − x_{m,sl,I}) D_{sl,s′l′}, (17)

and

 A_sl(D_{sl,s′l′}) = (1/(tS)) Σ_{s′∈S} Σ_{l′=1}^{t} D_{sl,s′l′},
 G_sl(D_{sl,s′l′}^{D_{sl,s′l′}}) = ( Π_{s′∈S} Π_{l′=1}^{t} D_{sl,s′l′}^{D_{sl,s′l′}} )^(1/(tS)), (18)

are the arithmetic and geometric means over s′ and l′, keeping s and l fixed. Hence, by plugging (11) into (10), the second order approximation of the MI is described by (12).

### 2.1 Bounds of approximated Mutual Information

The TSE applied to the expectation of a function of a RV allows it to be expressed as a function of its moments instead of the RV itself, thus making the computation more efficient via successive approximations. An important remark is that the expectation of the TSE is lower or upper bounded by the first order approximation, depending on whether the function is convex or concave, respectively.

In our case, this can be proven by examining the convexity of (8): each exponent, −γ(‖x_sl − x_s′l′‖² + 2Re⟨x_sl − x_s′l′, w′⟩), is affine in w′, so g_sl is a log-sum-exp of affine functions and hence convex. Applying Jensen's inequality then yields that the expectation of the TSE is lower bounded by (15):

 P₁(f, x, μ_X) = f(μ_X) = f(E_X{x}) ≤ E_X{f(x)}. (19)

Note that, due to the minus sign in (4), the lower bound from Jensen's inequality becomes an upper bound on the MI, after scaling by the factor 1/(tS) and averaging over s and l.
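A numerical illustration of this bound, using a scalar toy version of (8) with arbitrary distances, noise variance and sample count: since each exponent is affine in the noise variable, the log-sum-exp is convex and Jensen's inequality applies as in (19).

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy scalar version of (8): a log-sum-exp whose exponents are affine in
# the noise variable w, hence convex in w. Distances d_k are illustrative.
d = np.array([0.0, 0.7, 1.3])

def g(w):
    w = np.atleast_1d(np.asarray(w, dtype=float))
    # exp(-(d_k^2 + 2 * d_k * w)): each exponent is affine in w.
    return np.log2(np.sum(np.exp(-(d[None, :] ** 2 + 2.0 * d[None, :] * w[:, None])), axis=1))

w = rng.standard_normal(200_000) * 0.5  # zero-mean noise samples

first_order = g(0.0)[0]      # g(E[w]): the first order term, as in (15)
expectation = np.mean(g(w))  # E[g(w)]: what the full expectation gives

# Jensen: g(E[w]) <= E[g(w)], so the first order term lower bounds the
# expectation (an upper bound on the MI after the minus sign in (4)).
assert first_order <= expectation
```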

## 3 Results

In this section, we illustrate the results derived in the previous sections. We compare the performance of the first and second order approximations, i.e., (16) and (12), respectively, against the MI curves obtained by simulating the integral-based expressions (3) and (4).

In this simulation, we generate independent channel realizations following a Rayleigh distribution and average the results to obtain a single smooth curve. Note that we do not average over noise realizations, since the obtained mathematical expressions are not functions of a noise RV. We also depict different input/output configurations and different constellations. Particularly, we consider QPSK and 16-QAM constellations.

Fig. 5 illustrates the MI of the first and second order approximations, (16) and (12), respectively, compared with the integral-based expressions (3) and (4). First, as denoted in Section 2.1, the first order approximation is, at the same time, an upper bound of the integral-based expression. Additionally, we can observe that, as expected, the second order approximation produces a tighter curve than the first order approximation.

## 4 Conclusions

In this paper we introduce the problem of implementing link adaptation in Index Modulations, such as Spatial Modulation or Polarized Modulation, where the information is modulated with fixed constellations and dynamic channel hops. If the channel is time varying, it is unaffordable to compute the Mutual Information at each time instant. With our approach it is possible to obtain a smooth curve by using closed-form expressions, decreasing the computational complexity and allowing link adaptation to be performed. Finally, we depict the first and second order approximations compared with the integral-based expression for several configurations and constellation sizes.