Saddlepoint Approximations for Rayleigh Block-Fading Channels

04/23/2019 ∙ by Alejandro Lancho, et al. ∙ Chalmers University of Technology and Universidad Carlos III de Madrid

This paper presents saddlepoint approximations of state-of-the-art converse and achievability bounds for noncoherent, single-antenna, Rayleigh block-fading channels. These approximations can be calculated efficiently and are shown to be accurate for SNR values as small as 0 dB, blocklengths of 168 channel uses or more, and when the channel's coherence interval is not smaller than two. It is demonstrated that the derived approximations recover both the normal approximation and the reliability function of the channel.

I Introduction

The study of the maximum coding rate achievable for a given blocklength and error probability has recently regained attention in the research community due to the increased interest in short-packet communication for wireless systems. Indeed, some of the new services in next-generation wireless-communication systems will require low latency and high reliability; see [1] and references therein. Under such constraints, capacity and outage capacity may no longer be accurate benchmarks, and more refined metrics on the maximum coding rate that take into account the short packet sizes required in low-latency applications are called for.

Several techniques can be used to characterize the finite-blocklength performance. One possibility is to fix a reliability constraint and study the maximum coding rate as a function of the blocklength in the limit as the blocklength tends to infinity. This approach, sometimes referred to as the normal approximation, was followed inter alia by Polyanskiy et al. [2], who showed, for various channels with positive capacity C, that the maximum coding rate R*(n, ε) at which data can be transmitted using an error-correcting code of a fixed length n with a block-error probability not larger than ε can be tightly approximated by

R*(n, ε) = C − √(V/n) Q^{-1}(ε) + O((log n)/n)  (1)

where V denotes the channel dispersion, Q^{-1}(·) denotes the inverse Gaussian Q-function, and the last term comprises terms that decay no slower than (log n)/n. The work by Polyanskiy et al. [2] has been generalized to several wireless communication channels; see, e.g., [3, 4, 5, 6, 7, 8, 9, 10]. Particularly relevant to the present paper is the recent work by Lancho et al. [9, 10], who derived a high-SNR normal approximation for noncoherent single-antenna Rayleigh block-fading channels, which is the channel model considered in this work.
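To illustrate how an approximation of the form (1) is evaluated in practice, the following Python sketch computes the right-hand side of (1) for a generic channel. The capacity and dispersion values below are placeholders of our own choosing, not results from this paper, and the O((log n)/n) term is omitted.

from scipy.stats import norm

def normal_approximation(capacity, dispersion, n, eps):
    # R ~ C - sqrt(V/n) * Q^{-1}(eps); norm.isf is the inverse of the Gaussian Q-function
    return capacity - (dispersion / n) ** 0.5 * norm.isf(eps)

# hypothetical capacity (nats per channel use) and dispersion, chosen for illustration only
for n in (168, 500, 2000):
    print(n, normal_approximation(capacity=2.0, dispersion=1.5, n=n, eps=1e-3))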

An alternative analysis of the short-packet performance follows from fixing the coding rate and studying the exponential decay of the error probability as the blocklength grows. The resulting error exponent is usually referred to as the reliability function [11, Ch. 5]. Error-exponent results for this channel can be found in [12] and [13], where a random-coding error-exponent achievability bound is derived for multiple-antenna fading channels and for single-antenna Rician block-fading channels, respectively.

Both the exponential and the sub-exponential behavior of the error probability can be characterized via the saddlepoint method [14, Ch. XVI]. This method has been applied in [15, 16, 17] to obtain approximations of the random coding union (RCU) bound [2, Th. 16], the RCU bound with parameter s (RCUs) [18, Th. 1], and the meta-converse (MC) bound [2, Th. 31] for some memoryless channels.

In this paper, we apply the saddlepoint method to derive approximations of the MC and RCUs bounds for noncoherent single-antenna Rayleigh block-fading channels. While these approximations must be evaluated numerically, their computational complexity is independent of the number of diversity branches L. This is in stark contrast to the nonasymptotic MC and RCUs bounds, whose evaluation has a computational complexity that grows linearly in L. Numerical evidence suggests that the saddlepoint approximations, although developed under the assumption of large L, are accurate even for moderate values of L if the SNR is greater than or equal to 0 dB. Furthermore, the proposed approximations are shown to recover the normal approximation and the reliability function of the channel, thus providing a unifying tool for the two regimes, which are usually considered separately in the literature.

In our analysis, the saddlepoint method is applied to the tail probabilities appearing in the nonasymptotic MC and RCUs bounds. These probabilities often depend on a set of parameters, such as the SNR. Existing saddlepoint expansions do not consider such dependencies. Hence, they can only characterize the behavior of the expansion error as a function of the blocklength, but not in terms of the remaining parameters. In contrast, we derive in Section II a saddlepoint expansion for random variables whose distribution depends on a parameter θ, carefully analyze the error terms, and demonstrate that they are uniform in θ. We then apply the expansion to the Rayleigh block-fading channel introduced in Section III. As shown in Sections IV–VII, this results in accurate performance approximations in which the error terms depend only on the blocklength and are uniform in the remaining parameters.

Notation

We denote scalar random variables by upper case letters such as X, and their realizations by lower case letters such as x. Likewise, we use boldface upper case letters such as X to denote random vectors, and boldface lower case letters such as x to denote their realizations. We use upper case letters with the standard font to denote distributions, and lower case letters with the standard font to denote probability density functions (pdfs).

We use j to denote the purely imaginary unit-magnitude complex number j = √(−1). The superscript H denotes Hermitian transposition. We use "∼" to denote equality in distribution.

We further use ℝ to denote the set of real numbers, ℂ to denote the set of complex numbers, ℤ to denote the set of integers, ℤ⁺ for the set of positive integers, and ℤ₀⁺ for the set of nonnegative integers.

We denote by log the natural logarithm, by cos and sin the cosine and sine functions, by Q(·) the Gaussian Q-function, by Γ(·) the gamma function [19, Sec. 6.1.1], by P(·, ·) the regularized lower incomplete gamma function [19, Sec. 6.5], by ψ(·) the digamma function [19, Sec. 6.3.2], and by ₂F₁(·, ·; ·; ·) the Gauss hypergeometric function [20, Sec. 9.1]. The gamma distribution with parameters α and β is denoted by Gamma(α, β). We use ⌈·⌉ to denote the ceiling function, and Euler's constant is denoted by its usual symbol.

We use the notation o_n(1) to describe terms that vanish as n → ∞ and are uniform in the rest of the parameters involved. For example, we say that a function of n and θ is o_n(1) if it satisfies

(2)

for some bound independent of θ. Similarly, we use the notation O_n(·) to describe terms that are of the indicated order and are uniform in the rest of the parameters. For example, we say that a function of n and θ is O_n(g(n)) for a given function g if it satisfies

(3)

for some constants independent of n and θ.

Finally, lim inf and lim sup denote the limit inferior and the limit superior, respectively.

II Saddlepoint Expansion

Let X_1, X_2, … be a sequence of independent and identically distributed (i.i.d.), real-valued, zero-mean random variables whose distribution depends on a parameter θ ∈ Θ, where Θ denotes the set of possible values of θ.

The moment generating function (MGF) of X_1 is defined as

m_θ(ζ) ≜ E[e^{ζ X_1}],  (4)

the cumulant generating function (CGF) is defined as

κ_θ(ζ) ≜ log m_θ(ζ),  (5)

and the characteristic function is defined as

E[e^{j ξ X_1}].  (6)

We denote by m_θ^{(k)} and κ_θ^{(k)} the k-th derivative of m_θ and κ_θ, respectively. For the first, second, and third derivatives we sometimes use the notation m_θ', m_θ'', m_θ''', κ_θ', κ_θ'', and κ_θ'''.
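As a toy illustration of these definitions (our own example, not taken from the paper), the following sketch evaluates the CGF and its first two derivatives for a centered exponential random variable, for which closed-form expressions are available.

import numpy as np

# Toy example: X = E - 1 with E ~ Exp(1), so E[X] = 0 and the MGF is finite for zeta < 1.
# CGF: kappa(zeta) = log E[exp(zeta * X)] = -zeta - log(1 - zeta).
def kappa(zeta):
    return -zeta - np.log(1.0 - zeta)

def kappa_prime(zeta):
    return -1.0 + 1.0 / (1.0 - zeta)   # first derivative of the CGF

def kappa_second(zeta):
    return 1.0 / (1.0 - zeta) ** 2     # second derivative of the CGF

print(kappa(0.3), kappa_prime(0.3), kappa_second(0.3))
print(kappa_prime(0.0))                # equals 0, consistent with the zero-mean assumption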

A random variable is said to be lattice if it is supported on the points b, b ± h, b ± 2h, … for some b and h > 0. A random variable that is not lattice is said to be nonlattice. It can be shown that a random variable is nonlattice if, and only if, for every ξ_0 > 0 we have that [14, Ch. XV.1, Lemma 4]

(7)

We shall say that a family of random variables (parametrized by θ ∈ Θ) is nonlattice if, for every ξ_0 > 0,

(8)

Similarly, we shall say that a family of distributions (parametrized by θ) is nonlattice if the corresponding family of random variables is nonlattice.
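A quick numerical way to see the distinction (again our own example, not part of the paper): the characteristic function of a lattice variable returns to magnitude one away from the origin, whereas that of a nonlattice variable stays strictly below one. The sketch below compares a fair coin (lattice, span 1) with an exponential variable (nonlattice).

import numpy as np

xi = np.linspace(0.5, 20.0, 4000)

# lattice example: X uniform on {0, 1}; |E[exp(j xi X)]| = |0.5 + 0.5 exp(j xi)| returns to 1 near multiples of 2*pi
phi_lattice = np.abs(0.5 + 0.5 * np.exp(1j * xi))

# nonlattice example: X ~ Exp(1); characteristic function 1 / (1 - j xi), magnitude strictly below 1 for xi != 0
phi_nonlattice = np.abs(1.0 / (1.0 - 1j * xi))

print(phi_lattice.max(), phi_nonlattice.max())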

Proposition 1

Let the family of i.i.d. random variables X_1, X_2, … (parametrized by θ ∈ Θ) be nonlattice. Suppose that there exists a ζ_0 > 0 such that

(9)

and

(10)

Then, we have the following results:

Part 1): If for the nonnegative γ there exists a τ ∈ [0, ζ_0) such that n κ_θ'(τ) = γ, then

(11)

where the expansion error comprises terms that vanish faster than 1/√n and are uniform in γ and θ. Here,

(12a)
(12b)

Part 2): Let U be uniformly distributed on [0, 1]. If for the nonnegative γ there exists a τ ∈ [0, ζ_0) such that n κ_θ'(τ) = γ, then

(13)

where the auxiliary function appearing in (13) is defined as

(14)

and the expansion error is uniform in γ and θ.

Corollary 2

Assume that there exists a ζ_0 > 0 satisfying (9) and (10). If for the nonnegative γ there exists a τ ∈ [0, ζ_0 − δ] (for some arbitrary δ > 0 independent of n and θ) such that n κ_θ'(τ) = γ, then the saddlepoint expansion (13) can be upper-bounded as

(15)

where the constant in (15) is independent of n, and the auxiliary function in (15) is defined as

(16)

and the expansion error is uniform in γ and θ.

Remark 1

Since the X_k are zero-mean by assumption, we have that κ_θ(ζ) ≥ 0 by Jensen’s inequality. Together with (9), this implies that

(17)
Remark 2

When the nonnegative γ grows sublinearly in n, one can always find, for sufficiently large n, a τ such that n κ_θ'(τ) = γ. Indeed, it follows by (9) and Remark 1 that κ_θ is an analytic function on the interval (−ζ_0, ζ_0) with power series

(18)

Here, we have used that κ_θ(0) = 0 by definition and that κ_θ'(0) = 0 because X_1 is zero mean. By assumption (10), the function κ_θ is strictly convex. Together with κ_θ'(0) = 0, this implies that κ_θ' is nonnegative and strictly increasing for ζ ∈ [0, ζ_0). Hence, the choice

(19)

establishes a one-to-one mapping between γ and τ, and implies that τ ≥ 0. Thus, for sufficiently large n, τ lies inside the region of convergence (−ζ_0, ζ_0).
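To make the mapping described in Remark 2 concrete, the following sketch (our own toy example using a centered exponential distribution, not the channel-specific quantities of this paper) solves the saddlepoint equation numerically with a root finder and evaluates the resulting Chernoff-type exponent.

import numpy as np
from scipy.optimize import brentq

def kappa(zeta):                     # CGF of X = E - 1 with E ~ Exp(1); valid for zeta < 1
    return -zeta - np.log(1.0 - zeta)

def kappa_prime(zeta):
    return -1.0 + 1.0 / (1.0 - zeta)

n, gamma = 168, 20.0                 # illustrative blocklength and threshold, not values from the paper
tau = brentq(lambda z: n * kappa_prime(z) - gamma, 0.0, 0.999)   # solve n * kappa'(tau) = gamma
exponent = n * kappa(tau) - tau * gamma
print(tau, exponent, np.exp(exponent))   # exp(exponent) upper-bounds P[X_1 + ... + X_n >= gamma] (Chernoff)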

Proof:

The proof closely follows the steps by Feller [14, Ch. XVI]. Since we consider a slightly more involved setting, where the distribution of the X_k depends on a parameter θ, we reproduce all the steps here. Let F_θ denote the distribution of X_1. The CGF associated with F_θ is given by

(20)

We consider a tilted random variable with distribution

(21)

where the parameter ζ lies in the region of convergence of the MGF. Note that the exponential term on the right-hand side (RHS) of (21) is a normalizing factor that guarantees that (21) is indeed a probability distribution.
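For reference, exponential tilting in its standard textbook form (our notation, which need not match the symbols used in (21)) reweights the original distribution as

dF_{θ,ζ}(x) = e^{ζ x − κ_θ(ζ)} dF_θ(x),

where the factor e^{−κ_θ(ζ)} = 1/m_θ(ζ) is precisely the kind of exponential normalizer referred to above, ensuring that the tilted measure integrates to one.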

The MGF of the tilted random variable is given by

(22)

Together with (22), this yields

(23)

Note that, by (9), derivative and expected value can be swapped as long as the involved exponential moments are finite. This condition is, in turn, satisfied for sufficiently small arguments as long as ζ lies inside the region of convergence of the MGF.

(24)
(25)

and

(26)
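The derivatives in (23)-(26) are related to the moments of the tilted random variable; in particular, it is a standard fact that the mean and variance of an exponentially tilted variable are given by the first and second derivatives of the CGF at the tilting point. This can be checked numerically for a toy distribution (our own example, verified via importance weighting, not part of the paper).

import numpy as np

rng = np.random.default_rng(1)
zeta = 0.4
x = rng.exponential(size=1_000_000) - 1.0     # X = E - 1 with E ~ Exp(1)

w = np.exp(zeta * x)                          # tilting weights
tilted_mean = np.average(x, weights=w)        # estimates E[X e^{zeta X}] / E[e^{zeta X}]
tilted_var = np.average((x - tilted_mean) ** 2, weights=w)

print(tilted_mean, -1.0 + 1.0 / (1.0 - zeta)) # compare with kappa'(zeta)
print(tilted_var, 1.0 / (1.0 - zeta) ** 2)    # compare with kappa''(zeta)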

Let now F_{n,θ} denote the distribution of the sum X_1 + … + X_n, and let F_{n,θ,ζ} denote the distribution of the corresponding sum of n i.i.d. tilted random variables. By (21) and (22), the distributions F_{n,θ} and F_{n,θ,ζ} again stand in the relationship (21), except that the term κ_θ(ζ) is replaced by n κ_θ(ζ) and F_θ is replaced by F_{n,θ}. Since this relationship is invertible, by inverting (21) we can establish the relationship

(27)

Furthermore, by choosing ζ appropriately, it follows from (23) that the resulting tilted distribution has zero mean. We next substitute in (27) the tilted distribution by the zero-mean normal distribution with matching variance, and analyze the error incurred by this substitution. To this end, we define

(28)

By fixing the tilting parameter according to (19), (28) becomes

(29)

where the second and the fourth equality each follow by a change of variable.

We next show that the error incurred by this substitution in (27) is small. To do so, we write

(30)

where the last equality follows by integration by parts [14, Ch. V.6, Eq. (6.1)].

We next use [14, Sec. XVI.4, Th. 1] (stated as Lemma 3 below) to assess the error committed by replacing the tilted distribution by its Gaussian counterpart. To state Lemma 3, we first introduce the following additional notation. Let Y_1, Y_2, … be a sequence of i.i.d., real-valued, zero-mean random variables with a one-dimensional probability distribution that depends on an extra parameter. We denote the k-th moment, for any possible value of this parameter, by

(31)

and we denote the second moment by σ².

For the distribution of the normalized n-fold convolution of a sequence of i.i.d., zero-mean, unit-variance random variables, we write

(32)

Note that the normalized convolution in (32) has zero mean and unit variance. We denote by Φ the zero-mean, unit-variance normal distribution, and by φ the zero-mean, unit-variance normal probability density function.
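As a sanity check on the normalized n-fold convolution in (32) (our own toy example, not taken from the paper), the following sketch compares the empirical CDF of a normalized sum of centered exponential variables with the standard normal CDF Φ.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, trials = 50, 100_000

x = rng.exponential(size=(trials, n)) - 1.0   # i.i.d., zero-mean, unit-variance summands
s = x.sum(axis=1) / np.sqrt(n)                # normalized n-fold convolution, cf. (32)

for t in (-1.0, 0.0, 1.0, 2.0):
    print(t, np.mean(s <= t), norm.cdf(t))    # empirical CDF vs. standard normal CDF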

Lemma 3

Assume that the family of distributions (parametrized by the extra parameter) is nonlattice. Further assume that

(33)

and

(34)

Then, for any ,

(35)

where the error term is uniform in the argument and in the extra parameter.

Proof:

See Appendix A.

We next use (35) from Lemma 3 to expand (30). To this end, we first note that, as shown in Appendix B, if a family of distributions is nonlattice, then so is the corresponding family of tilted distributions. Consequently, the family of tilted distributions (parametrized by θ) is nonlattice, since the original family (parametrized by θ) is nonlattice by assumption. We next note that the integration variable in (30) corresponds to the argument of (32). Hence, applying (35) to (30) with the corresponding substitutions, we obtain

(36)

with the two quantities defined in (12a) and (12b). Here we used that the second and third derivatives of the CGF coincide with the second and third moments of the tilted random variable, respectively; see (24) and (25). The second equality follows by a change of variable.

Finally, substituting (29) into (36), and recalling the choice of the tilting parameter in (19), we obtain Part 1) of Proposition 1, namely

(37)
Proof:

The proof of Part 2) follows along similar lines as that of Part 1). Hence, we will focus on describing what is different. Specifically, the left-hand side (LHS) of (13) differs from the LHS of (11) by an additional term involving the uniformly distributed random variable U. To account for this difference, we can follow the same steps as Scarlett et al. [15, Appendix E]. Since in our setting the distribution of the X_k depends on the parameter θ, we repeat the main steps in the following:

(38)

where the second equality follows from Fubini’s theorem [21, Ch. 2, Sec. 9.2]. We next proceed as in the proof of the previous part. The first term in (38) coincides with (27). We next focus on the second term, namely,

(39)

We substitute in (39) the tilted distribution by the zero-mean normal distribution with matching variance, as before, which yields

(40)

By fixing the tilting parameter according to (19), (40) can be computed as

(41)

where the second and the fourth equality each follow by a change of variable.

As we did in (30), we next evaluate the error incurred by this substitution in (39). Indeed,

(42)