Low SNR Asymptotic Rates of Vector Channels with One-Bit Outputs

12/15/2017 · by Amine Mezghani, et al.

We analyze the performance of multiple-input multiple-output (MIMO) links with one-bit output quantization in terms of achievable rates and characterize their performance loss compared to unquantized systems for general channel statistical models and general channel state information (CSI) at the receiver. One-bit ADCs are particularly suitable for large-scale millimeter-wave MIMO communications (massive MIMO) to reduce the hardware complexity. In such applications, the signal-to-noise ratio per antenna is rather low due to the propagation loss. Thus, it is crucial to analyze the performance of MIMO systems in this regime by means of information-theoretic methods. Since an exact and general information-theoretic analysis is not possible, we resort to the derivation of a general asymptotic expression for the mutual information in terms of a second order expansion around zero SNR. We show that, up to second order in the SNR, the mutual information of a system with two-level (sign) output signals incorporates only a power penalty factor of π/2 (1.96 dB) compared to a system with infinite resolution, for all channels of practical interest with perfect or statistical CSI. An essential aspect of the derivation is that we do not rely on the common pseudo-quantization noise model.


I Introduction

In this paper, we investigate the theoretically achievable rates under one-bit analog-to-digital conversion (ADC) at the receiver for a wide class of channel models. To this end, we consider general multi-antenna communication channels with coarsely quantized outputs and general communication scenarios, e.g., correlated fading and full or statistical channel state information (CSI) at the transmitter and the receiver. Since exact capacity formulas are intractable for such quantized channels, we resort to a low signal-to-noise ratio (SNR) approximation and to lower bounds on the channel capacity to perform the analysis. Such mutual information asymptotics can be utilized to evaluate the performance of quantized output channels or to design and optimize the system in practice. Additionally, the low SNR analysis under coarse quantization is useful in the context of large scale (or massive) multiple-input multiple-output (MIMO) [4, 5, 6] and millimeter-wave (mmWave) communications [7, 8, 9, 10, 11], considered as key enablers for higher data rates in future wireless networks. In fact, due to the high antenna gains possible with massive MIMO and the significant path loss at mmWave frequencies, such systems will likely operate at rather low SNR values at each antenna, while preferably utilizing low-cost hardware and low-resolution ADCs in order to access all available dimensions, even at low precision. Our asymptotic analysis demonstrates that the capacity degradation due to quantized sampling is surprisingly small in the low SNR regime for most cases of practical interest.

I-A Less precision for more dimensions: The motivation for coarse quantization

The use of low resolution (e.g., one-bit) ADCs and DACs is a potential approach to significantly reducing cost and power consumption in massive MIMO wireless transceivers. It was proposed as early as 2006 by [12]-[15] in the context of conventional MIMO. In the last three years, however, the topic has gained significantly increased interest from the research community [16]-[50] as an attractive low-cost solution for large vector channels. In the extreme case, a one-bit ADC consists of a simple comparator and consumes negligible power. One-bit ADCs do not require an automatic gain control, and the complexity and power consumption of the gain stages required prior to them are substantially reduced [51]. Ultimately, one-bit conversion is, in view of current CMOS technology, the only conceivable option for a direct mmWave bandpass sampling implementation close to the antenna, eliminating the need for power-intensive radio-frequency (RF) components such as mixers and oscillators. In addition, the use of one-bit ADCs not only simplifies the interface to the antennas by relaxing the RF requirements but also simplifies the interface between the converters and the digital processing unit (DSP/FPGA). For instance, the use of 10-bit converters running at 1 Gsps for 100 antennas would require a data pipeline of 1 Tbit/s to the FPGAs and a very complex and power-consuming interconnect. By using only one-bit quantization, the interface rate and complexity are reduced by a factor of 10. Sampling with one-bit ADCs might therefore qualify as a fundamental research area in communication theory.
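The interconnect figures above follow from simple arithmetic; a tiny Python sketch (the function name is ours, and it counts only raw converter bits per antenna, ignoring the additional factor of two for I/Q pairs) makes the scaling explicit:

```python
# Back-of-envelope check of the aggregate ADC output rate quoted above.
# Assumptions from the text: 100 antennas, converters running at 1 Gsps.
def adc_aggregate_rate_bps(num_antennas, bits_per_sample, samples_per_sec):
    """Aggregate data rate delivered by all converters, in bit/s."""
    return num_antennas * bits_per_sample * samples_per_sec

ten_bit = adc_aggregate_rate_bps(100, 10, 1e9)   # 10-bit ADCs
one_bit = adc_aggregate_rate_bps(100, 1, 1e9)    # one-bit ADCs
print(ten_bit)            # 1e12 bit/s, i.e. 1 Tbit/s
print(ten_bit / one_bit)  # the factor-of-10 reduction
```

Including both I and Q converters doubles both figures, leaving the factor-of-10 reduction unchanged.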

Even though the use of only a single quantization bit, i.e., simply the sign of the sampled signal, is a severe nonlinearity, initial research has shown that the theoretical "best-case" performance loss that results with a one-bit quantizer is not as significant as might be expected, at least at the low SNRs where mmWave massive MIMO is expected to operate prior to the beamforming gain, which can still be fully exploited. This is also very encouraging in the context of low-cost and low-power IoT devices, which will also likely operate in relatively low SNR regimes. Figure 1 shows how the theoretical spectral efficiency versus energy efficiency (E_b/N_0) of a one-bit transceiver that uses QPSK symbols in an additive white Gaussian noise (AWGN) channel compares with that of an infinite-precision ADC using a Gaussian input, i.e., the Shannon limit (whose minimum E_b/N_0 is ln 2, or −1.59 dB). In fact, the capacity of the one-bit output AWGN channel is achieved by QPSK signals and reads as [52, 21]

(1)  C = 2 (1 − H_b(Φ(−√SNR)))  [bits per channel use],

where we make use of the binary entropy function

H_b(p) = −p log₂ p − (1 − p) log₂(1 − p)

and the cumulative Gaussian distribution

Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t²/2} dt.

Surprisingly, at low SNR the loss due to one-bit quantization is approximately equal to only π/2 (1.96 dB) [52, 53], and it actually decreases to roughly 1.75 dB at a spectral efficiency of about 1 bit per complex dimension, which corresponds to the spectral efficiency of today's 3G systems.
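Both numbers quoted in this paragraph can be reproduced numerically from the capacity formula (1). A minimal Python sketch (standard library only; `Phi` denotes the Gaussian CDF and `Hb` the binary entropy mentioned above):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Hb(p):
    """Binary entropy in bits."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def C_onebit(snr):
    """Eq. (1): capacity of the one-bit complex AWGN channel, bits per channel use."""
    return 2.0 * (1.0 - Hb(Phi(-math.sqrt(snr))))

def C_ideal(snr):
    """Unquantized complex AWGN capacity, bits per channel use."""
    return math.log2(1.0 + snr)

# Low-SNR power penalty: both rates are linear in SNR there, so the dB gap at
# equal rate tends to 10*log10(pi/2), about 1.96 dB.
snr = 1e-5
gap_low_snr_db = 10.0 * math.log10(C_ideal(snr) / C_onebit(snr))

# Gap at a spectral efficiency of 1 bit per complex dimension: solve
# C_onebit(snr) = 1 by geometric bisection; the ideal channel reaches
# 1 bit at snr = 1, so the E_b/N_0 gap is just 10*log10(snr_q).
lo, hi = 1e-6, 1e3
for _ in range(200):
    mid = math.sqrt(lo * hi)
    lo, hi = (mid, hi) if C_onebit(mid) < 1.0 else (lo, mid)
snr_q = math.sqrt(lo * hi)
gap_1bit_db = 10.0 * math.log10(snr_q)

print(round(gap_low_snr_db, 2))   # ~1.96
print(round(gap_1bit_db, 2))      # ~1.77, close to the 1.75 dB quoted above
```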

Fig. 1: Spectral efficiency versus energy efficiency (E_b/N_0 in dB) for one- and infinite-bit quantization in AWGN channels.

Even if a system is physically equipped with higher resolution converters and high performance RF-chains, situations may arise where the processing of desired signals must be performed at much lower resolution, due for instance to the presence of a strong interferer or a jammer with greater dynamic range than the signals of interest. In fact, after subtracting or zero-forcing the strong interferer, the residual effective number of bits available for the processing of other signals of interest is reduced substantially. Since future wireless systems must operate reliably even under severe conditions in safety-critical applications such as autonomous driving, investigating communication theory and signal processing under coarse quantization of the observations is crucial.

I-B Related Work and Contributions

Many contributions have studied MIMO channels operating in Rayleigh fading environments in the unquantized (infinite resolution) case, for both the low SNR [53]-[58] and high SNR [59] regimes. Such asymptotic analyses are very useful since characterizing the achievable rate over the whole SNR range is in general intractable. This issue becomes even more difficult under one-bit quantization at the receiver, apart from very special cases. In the works [15, 1], the effects of quantization were studied from an information-theoretic point of view for MIMO systems where the channel is perfectly known at the receiver. These works demonstrated that the loss in channel capacity due to coarse quantization is surprisingly small at low to moderate SNR. In [2, 3], the block fading single-input single-output (SISO) non-coherent channel was studied in detail. The work of [27] provided a general capacity lower bound for quantized MIMO and general bit resolutions that can be applied to several channel models with perfect CSI, particularly with correlated noise. The achievable capacity of the AWGN channel with output quantization has been extensively investigated in [20, 21], and the optimal input distribution was shown to be discrete. The authors of [19] studied the one-bit case in the context of an AWGN channel and showed that the capacity loss can be fully recovered when using asymmetric quantizers; this is, however, only possible at extremely low SNR, which might not be useful in practice. In [60], it was shown that, as expected, oversampling can also reduce the quantization loss in the context of band-limited AWGN channels. In [23, 28], non-regular quantizer designs maximizing the information rate were studied for intersymbol-interference channels.
More recently, [31] studied bounds on the achievable rates of MIMO channels with one-bit ADCs and perfect channel state information at the transmitter and the receiver, particularly for the multiple-input single-output (MISO) channel. The recent work of [50] analyzes the sum capacity of the two-user multiple access SISO AWGN channel, which turns out to be achievable with time division and power control.

Motivated by these works, we aim to study and characterize the communication performance of point-to-point MIMO channels under general assumptions about the channel state information at the receiver, taking into account the one-bit quantization as a deterministic operation. In particular, we derive asymptotics for the mutual information up to the second order in the SNR and study the impact of quantization. We show that, up to second order in the SNR, for all channels of practical interest the mutual information of a system with two-level (1-bit sign operation) output signals incorporates only a power penalty of π/2 (1.96 dB) compared to a system with infinite resolution. Alternatively, to achieve the same rate with the same power up to the second order as in the ideal case, the number of one-bit output dimensions has to be increased by a factor of π/2 in the case of perfect CSI, and by an even larger factor in the statistical CSI case, while essentially no increase in the number of transmit dimensions is required. We also characterize analytically the compensation of the quantization effects by increasing the number of 1-bit receive dimensions to approach the ideal case.

This paper is organized as follows: Section II describes the system model. Then, Section III provides the main theorem consisting of a second order asymptotic approximation of the entropy of one-bit quantized vector signals. In Section IV, we provide a general expression for the mutual information between the inputs and the quantized outputs of the MIMO system with perfect channel state information, then we expand that into a Taylor series up to the second order in the SNR. In Section V, we extend these results to elaborate on the asymptotic capacity of 1-bit MIMO systems with statistical channel state information including Rayleigh flat-fading environments with delay spread and receive antenna correlation.

I-C Notation

Vectors and matrices are denoted by lower and upper case italic bold letters. The operators (·)ᵀ, (·)ᴴ, tr(·), (·)*, Re{·}, and Im{·} stand for transpose, Hermitian (conjugate transpose), matrix trace, complex conjugate, and the real and imaginary parts of a complex number, respectively. The terms 0_N and 1_N denote the N-dimensional vectors of all zeros and all ones, respectively, while I_N represents the identity matrix of size N. The vector h_i is the i-th column of the matrix H, [H]_{i,j} denotes the (i-th, j-th) element, and x_i is the i-th element of the vector x. The operator E[·] stands for expectation with respect to all random variables, while E_{x|y}[·] stands for the expectation with respect to the random variable x given y. In addition, C_x represents the covariance matrix of x, i.e., C_x = E[x xᴴ]. The functions P(x, y) and P(x|y) symbolize the joint probability mass function (pmf) and the conditional pmf of x and y, respectively. Additionally, diag(A) denotes a diagonal matrix containing only the diagonal elements of the matrix A. Finally, we represent element-wise multiplication and the Kronecker product of vectors and matrices by the operators "∘" and "⊗", respectively.

II System Model

We consider a point-to-point quantized MIMO channel with M transmit dimensions (e.g., antennas or, more generally, spatial and temporal dimensions) and N dimensions at the receiver. Fig. 2 shows the general form of a quantized MIMO system, where H is the N×M channel matrix, whose distribution is known at the receiver side. The channel realizations are in general unknown to both the transmitter and the receiver, except in the ideal perfect-CSI case. The vector x comprises the M transmitted symbols, assumed to be subject to the average power constraint E[‖x‖²] ≤ M. The vector η represents the additive noise, whose entries are i.i.d. and distributed as CN(0, 1). The quantized channel output is thus represented as

(2)  y = Q(r),  with  r = √(SNR/M) · H x + η.

In a one-bit system, the real parts and the imaginary parts of the unquantized receive signals r_i, i = 1, …, N, are each quantized by a symmetric one-bit quantizer. Thus, the resulting quantized signals read as

(3)  y_i = sgn(Re{r_i}) + j · sgn(Im{r_i}),  i = 1, …, N.

The operator Q(·), also written sign(·), represents the one-bit symmetric scalar quantization process in each real dimension. The restriction to one-bit symmetric quantization is motivated by its simple implementation. Since all of the real and imaginary components of the receiver noise η are statistically independent with variance 1/2, we can express each of the conditional probabilities as the product of the conditional probabilities on each receiver dimension

(4)  P(y | x, H) = ∏_{i=1}^{N} Φ(√(2·SNR/M) · Re{y_i} · Re{[H x]_i}) · Φ(√(2·SNR/M) · Im{y_i} · Im{[H x]_i}),

where

Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t²/2} dt

is the cumulative normal distribution function. We first state the main theorem used throughout the paper, and then provide the asymptotics of the mutual information for several channel models up to second order in the SNR.
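As a concrete check of this product form of the channel law, the following Python sketch (conventions assumed: noise CN(0, 1), i.e., variance 1/2 per real dimension, so the probability of observing sign q for a noiseless mean m is Φ(q·m·√2)) evaluates the conditional probability for a given noiseless output and verifies that the probabilities of all sign patterns sum to one:

```python
import math, itertools

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_y_given_x(y, z):
    """
    One-bit channel law as a product over real dimensions.
    `z` is the noiseless complex output vector (e.g. sqrt(SNR/M)*H@x),
    `y` a vector of quantized symbols with Re and Im parts in {-1, +1}.
    """
    p = 1.0
    for yi, zi in zip(y, z):
        p *= Phi(yi.real * zi.real * math.sqrt(2.0))
        p *= Phi(yi.imag * zi.imag * math.sqrt(2.0))
    return p

# Sanity check: the probabilities of all 4^N sign patterns sum to one,
# since Phi(m) + Phi(-m) = 1 in every real dimension.
z = [0.3 - 0.7j, 1.1 + 0.2j]          # an arbitrary noiseless output
signs = [a + 1j * b for a in (-1, 1) for b in (-1, 1)]
total = sum(p_y_given_x(y, z) for y in itertools.product(signs, repeat=len(z)))
print(abs(total - 1.0) < 1e-12)  # True
```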


Fig. 2: Quantized MIMO System

III Main theorem for the asymptotic entropy of one-bit quantized vector signals

We provide a theorem that can be used for deriving the second order approximation of the mutual information. It considers the 1-bit quantized signal y = Q(√ρ · x + η), where x is a random vector with a certain distribution, η is random with i.i.d. Gaussian entries and unit variance, and ρ is a signal scaling parameter.

Theorem 1

Assuming x is a proper complex random vector (E[x xᵀ] = 0) satisfying appropriate moment conditions with some finite constants, and η is i.i.d. Gaussian with unit variance, then the following entropy approximation holds up to the second order in ρ

(5)

where the expectation is taken with respect to x and C_x is the covariance matrix of x.

Proof: See Appendix A. From this theorem, we can deduce some useful lemmas.

Lemma 1

For any possibly non-deterministic function f(x) satisfying the moment conditions of Theorem 1 with some finite constants, we have, up to the second order in ρ

Lemma 2

For any function f(x) satisfying the same conditions, we have the following second order approximation of the conditional entropy

Lemma 1 is a direct result of Theorem 1, where we just replace the random vector x by f(x), provided the stated conditions are fulfilled for f(x). For Lemma 2, we perform the expectation in Theorem 1 first conditioned on x, to get the entropy for a given x, and then take the average with respect to x, again provided the stated assumptions regarding the distribution are fulfilled. These results will be used to derive a second order approximation of the mutual information of quantized MIMO systems for the case of perfect as well as statistical channel state information.

IV Mutual Information and Capacity with Full CSI

When the channel is perfectly known at the receiver, the mutual information (in nats/s/Hz) between the channel input x and the quantized output y in Fig. 2 reads as [61]

(6)  I(x; y | H) = E_x [ Σ_y P(y | x, H) · ln( P(y | x, H) / P(y | H) ) ],

with P(y | H) = E_x[P(y | x, H)], where the expectation is taken with respect to x. For large N, the computation of the mutual information has intractable complexity due to the summation over all 4^N possible quantized output vectors y, except for low-dimensional outputs (see [31] for the single output case), which is not relevant for the massive MIMO case. Therefore, we resort to a low SNR approximation to perform the analysis of the achievable rates.
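For very small arrays, (6) can still be evaluated by brute-force enumeration. The Python sketch below (an arbitrary, made-up 2×2 channel; the unit-variance complex noise and per-antenna power normalization of Section II are assumed conventions) enumerates all QPSK inputs and all sign patterns, and compares the result with the unquantized Gaussian-input rate log₂ det(I + (SNR/M) H Hᴴ). At low SNR the ratio approaches 2/π, matching the π/2 power penalty discussed in this paper:

```python
import math, itertools

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_y_given_x(y, z):
    # one-bit channel law: product over real dimensions, noise CN(0,1)
    p = 1.0
    for yi, zi in zip(y, z):
        p *= Phi(yi.real * zi.real * math.sqrt(2.0)) * Phi(yi.imag * zi.imag * math.sqrt(2.0))
    return p

def mutual_info_bits(H, snr, inputs):
    """Brute-force evaluation of the mutual information (in bits), Eq. (6)-style."""
    N, M = len(H), len(H[0])
    scale = math.sqrt(snr / M)                 # per-antenna power normalization (assumed)
    signs = [a + 1j * b for a in (-1, 1) for b in (-1, 1)]
    outputs = list(itertools.product(signs, repeat=N))
    px = 1.0 / len(inputs)
    cond = []
    for x in inputs:
        z = [scale * sum(Hij * xj for Hij, xj in zip(row, x)) for row in H]
        cond.append([p_y_given_x(y, z) for y in outputs])
    py = [px * sum(c[k] for c in cond) for k in range(len(outputs))]
    I = 0.0
    for c in cond:
        for pyx, p in zip(c, py):
            if pyx > 0.0:
                I += px * pyx * math.log2(pyx / p)
    return I

# 2x2 example with QPSK inputs (hypothetical channel values)
H = [[0.9 + 0.3j, -0.4 + 0.1j],
     [0.2 - 0.5j,  1.1 + 0.0j]]
qpsk = [(a + 1j * b) / math.sqrt(2) for a in (-1, 1) for b in (-1, 1)]
inputs = list(itertools.product(qpsk, repeat=2))
snr = 1e-3
Iq = mutual_info_bits(H, snr, inputs)

# unquantized reference: log2 det(I + (snr/M) * H H^H), computed by hand for 2x2
M = 2
G = [[sum(H[i][k] * H[j][k].conjugate() for k in range(M)) for j in range(2)] for i in range(2)]
A = [[(1 if i == j else 0) + (snr / M) * G[i][j] for j in range(2)] for i in range(2)]
Iu = math.log2((A[0][0] * A[1][1] - A[0][1] * A[1][0]).real)
ratio = Iq / Iu
print(ratio)  # close to 2/pi = 0.6366 at this low SNR
```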

IV-A Second-order Expansion of the Mutual Information with 1-bit Receivers

In this section, we will elaborate on the second-order expansion of the input-output mutual information (6) of the considered system in Fig. 2 as the signal-to-noise ratio goes to zero.

Theorem 2

Consider the one-bit quantized MIMO system in Fig. 2 under a zero-mean input distribution with covariance matrix C_x, satisfying E[x xᵀ] = 0 (a zero-mean proper complex distribution; this restriction is simply justified by symmetry considerations) and appropriate moment conditions with some finite constants. Then, to the second order, the mutual information (in nats) between the inputs and the quantized outputs with perfect CSI is given by:

(7)

where ‖a‖₄⁴ denotes the 4-norm of a vector a taken to the power 4: ‖a‖₄⁴ = Σ_i |a_i|⁴.

We start with the definition of the mutual information [61]

(8)

then we use Lemma 1 and Lemma 2 with the function f(x) = H x to get the following asymptotic expression:

In the case that the distribution is zero-mean (E[x] = 0), we end up exactly with the formula stated by the theorem. The moment condition with some finite constants is necessary so that the remainder term of the expansion, given by

(9)

satisfies

(10)

as already stated by Theorem 1.

For comparison, we use the results of Prelov and Verdú [54] to express the mutual information (in nats) between the input and the unquantized output with the same input distribution as in Theorem 2:

(11)

While the mutual information for the unquantized channel in (11), up to the second order, depends only on the input covariance matrix, in the quantized case (7) it also depends on the fourth order statistics of x (the fourth mixed moments of its components).


Now, using (7) and (11), we deduce the mutual information penalty in the low SNR (or large dimension) regime incurred by quantization

(12)

which is independent of the channel and the chosen distribution. Since the Gaussian distribution achieves the capacity of the unquantized channel, but not necessarily that of the quantized channel, we obtain for the supremum of the mutual information, i.e., the capacity

(13)

These results can also be obtained based on the pseudo-quantization noise model [16, 27], and they generalize the result known for the AWGN channel [52].

Fig. 3 illustrates the mutual information for a randomly generated 4×4 channel (the generated entries of H are uncorrelated and identically distributed) with QPSK signaling and fixed total transmit power, computed exactly using (6), together with its first and second-order approximations from (7). For comparison, the mutual information without quantization (using an i.i.d. Gaussian input) is also plotted. Fig. 3 shows that the 2/π ratio between the quantized and unquantized rates holds for low to moderate SNR.

For a larger number of antennas, the inner summation in (6) may be intractable. In this case, the second-order approximation in (7) is advantageous at low SNR to overcome the high complexity of the exact formula.

Fig. 3: Mutual information (in bpcu) versus SNR (linear scale) of a 1-bit quantized 4×4 QPSK MIMO system and its first and second-order approximations. For comparison, the mutual information without quantization is also plotted.

IV-B Capacity with Independent-Component Inputs

Lacking knowledge of the channel or its statistics, the transmitter assigns the power evenly over the components of the input vector x, in order to achieve good performance on average. Furthermore, let us assume these components to be independent of each other (e.g., a multi-streaming scenario); clearly, this is not necessarily the capacity-achieving strategy.

Thus, the probability density function of the input vector x factorizes into the product of the marginal densities of its components. (Note that these marginal densities have to be even functions, due to the symmetry (see Theorem 2) and the convexity of the mutual information.) Now, with

(14)
(15)

and the kurtosis of the random component x_i defined as

(16)  κ = E[|x_i|⁴] / (E[|x_i|²])²,

we get

(17)

Similar results hold for the other components of the vector x. Plugging this result and the equal-power covariance into (7), we obtain an expression for the mutual information with independent-component inputs, up to second order:

(18)

Now, we state a theorem on the structure of the near-optimal input distribution under these assumptions.

Theorem 3

To second order, QPSK signals are capacity-achieving among all signal distributions with independent components. The achieved capacity up to second order is

(19)

Since E[|x_i|⁴] ≥ (E[|x_i|²])² by Jensen's inequality, we have κ ≥ 1, with equality exactly for constant-modulus symbols. Obviously, the QPSK distribution is the unique distribution with independent-component inputs that can achieve all these lower bounds simultaneously, i.e., κ = 1, and thus maximize the mutual information in (18) up to second order.
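The kurtosis ordering behind Theorem 3 is easy to check by simulation. A small Python sketch (Monte Carlo, standard library only) estimates κ = E[|x|⁴]/(E[|x|²])² for QPSK and for a circular complex Gaussian:

```python
import math, random

def kurtosis(samples):
    """kappa = E|x|^4 / (E|x|^2)^2 for complex samples."""
    m2 = sum(abs(x) ** 2 for x in samples) / len(samples)
    m4 = sum(abs(x) ** 4 for x in samples) / len(samples)
    return m4 / m2 ** 2

random.seed(0)
n = 200_000
qpsk = [(random.choice((-1, 1)) + 1j * random.choice((-1, 1))) / math.sqrt(2)
        for _ in range(n)]
cgauss = [random.gauss(0, 1) + 1j * random.gauss(0, 1) for _ in range(n)]

k_qpsk, k_gauss = kurtosis(qpsk), kurtosis(cgauss)
print(k_qpsk)    # exactly 1: QPSK has constant modulus
print(k_gauss)   # approximately 2 for a circular complex Gaussian
```

QPSK attains the minimum κ = 1 deterministically, while the Gaussian input pays for its heavier amplitude fluctuations with κ ≈ 2.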

IV-C Ergodic Capacity under i.i.d. Rayleigh Fading Conditions

Here we assume the channel to be ergodic with i.i.d. Gaussian components [H]_{i,j} ∼ CN(0, 1). The ergodic capacity can be written as

(20)

We apply the expectation over the channel H using the second order expansion of the capacity in (19). By expanding the following expressions and taking the expectation over the i.i.d. channel coefficients, we have

(21)
(22)
(23)

The ergodic mutual information over an i.i.d. channel can be obtained as

(24)

Next, we characterize the capacity achieving distribution up to second order in the SNR.

Theorem 4

The ergodic capacity of the 1-bit quantized i.i.d. MIMO channel is achieved asymptotically at low SNR by QPSK signals and reads as

(25)

Since the kurtosis satisfies κ ≥ 1, with equality for constant-modulus symbols, and since equality in the remaining bound holds if the input has a constant norm of 1, the ergodic capacity is achieved by a constant-norm uncorrelated input, which can be obtained for instance by QPSK signals. Finally, using (19) we obtain the second-order expression for the ergodic capacity of one-bit quantized Rayleigh fading channels for the QPSK case. Compared to the ergodic capacity in the unquantized case, achieved by i.i.d. Gaussian inputs (or even by QPSK up to the second order) [53]

(26)

the ergodic capacity of one-bit quantized MIMO under QPSK incorporates a power penalty of almost π/2 (1.96 dB), when considering only the linear term that characterizes the capacity in the limit of infinite bandwidth.

On the other hand, the second order term quantifies the convergence of the capacity function to the low SNR limit, i.e. the first order term, by reducing the power or increasing the bandwidth [53]. Therefore, it can be observed from

(27)

that the quantized channel converges to this limit more slowly than the unquantized channel. Nevertheless, for M = 1 or N ≫ M (the massive MIMO uplink scenario), this difference in the convergence behavior vanishes almost completely, since both second-order expansions (25) and (26) become nearly the same up to the π/2 power factor in the SNR.

In addition, the ergodic capacity of the quantized channel in (25) increases linearly with the number of receive antennas N and only sublinearly with the number of transmit antennas M, which also holds in the unquantized case. For the special case of one receive antenna, N = 1, the capacity does not depend on the number of transmit antennas up to the second order, contrary to the unquantized case. On the other hand, if one would like to achieve, up to the second order, the same ergodic capacity at the same power with one-bit receivers as in the ideal case by adjusting the numbers N₁ and M₁ of receive and transmit antennas, i.e.,

(28)

then we can deduce by equating coefficients that

(29)

The number of one-bit receive dimensions has to be increased by the factor N₁/N = π/2, while the behavior of the required number of transmit antennas M₁ is shown in Fig. 4. Clearly, when N ≫ M, which corresponds to a typical massive MIMO uplink scenario, we have M₁ ≈ M. This means that, at the transmitter (or user) side, there is no need to increase the number of antennas up to the second order in SNR, showing that the total increase of dimensions is moderate.
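As a quick numeric illustration (a sketch based only on matching the first-order terms, where the one-bit rate scales like (2/π)·N₁·SNR against N·SNR for the ideal receiver), the required number of one-bit receive dimensions is:

```python
import math

# First-order (low-SNR) matching of the quantized and ideal capacities:
# (2/pi) * N1 = N  =>  N1/N = pi/2.
def onebit_receive_dims(N):
    """Smallest integer N1 with (2/pi) * N1 >= N."""
    return math.ceil(math.pi / 2 * N)

for N in (20, 30, 40):   # the N values shown in Fig. 4
    print(N, onebit_receive_dims(N))
```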

Fig. 4: Required ratio M₁/M to achieve the ideal ergodic capacity up to the second order, plotted versus M for N = 20, 30, and 40.

V Mutual Information and Capacity with Statistical CSI and 1-bit Receivers

We reconsider now the extreme case of 1-bit quantized communications over MIMO Rayleigh-fading channels but assuming that only the statistics of the channel are known at the receiver. Later, we will also treat the achievable rate for the SISO channel case for the whole SNR range.

Generally, the mutual information (in nats/s/Hz) between the channel input x and the quantized output y in Fig. 2 with statistical CSI reads as [61]

(30)  I(x; y) = H(y) − H(y | x),

where H(y) and H(y | x) represent the entropy and the conditional entropy of the output, respectively. If the channel H is Gaussian distributed with zero mean, then, given the input x, the unquantized output r is zero-mean complex Gaussian with a covariance matrix determined by x and the channel statistics, and thus we have

(31)

Thus, we can express the conditional probability of the quantized output as

(32)

where the integration is performed over the positive orthant of the complex hyperplane.


The evaluation of this multiple integral is in general intractable. Thus, we consider first a simple lower bound involving the mutual information under perfect channel state information at the receiver, which turns out to be tight in some cases, as shown later. The lower bound is obtained by the chain rule and the non-negativity of the mutual information:

(33)  I(x; y) ≥ I(x; y | H) − I(H; y | x).

On the other hand, an upper bound is given by the coherent assumption (channel perfectly known at the receiver),

(34)  I(x; y) ≤ I(x; y | H),

where we can express each of the conditional probabilities as the product of the conditional probabilities on each receiver dimension, since the real and imaginary components of the receiver noise are statistically independent with variance 1/2 in each real dimension:

(35)

where Φ(·) is the cumulative normal distribution function. Evaluating the lower bound in (33), even numerically, is very difficult, except for some simple cases such as SISO block fading channels, as considered next.

V-A The non-coherent block-Rayleigh fading SISO Case

Here we treat the block-Rayleigh fading SISO case in more detail, where M = 1, N = T, and T is the coherence time. For simplicity and ease of notation we assume a unit-variance channel; therefore, without loss of generality, the covariance matrix of the unquantized output given x is the sum of an identity matrix and a rank one matrix. Then, we obtain the conditional probability of the 1-bit output as

(36)

where Φ(·) is the cumulative normal distribution function.

V-A1 Achievable Rate with i.i.d. QPSK for the 1-bit Block Fading SISO Model

With the above formula, the achievable rate of the one-bit quantized SISO channel with QPSK input reads as

(37)

Here, x is drawn from all possible sequences of equally likely QPSK data symbols, i.e., x_t ∈ {(±1 ± j)/√2}, and x̄ denotes the constant sequence whose elements all equal the same QPSK symbol. The second equality in (37) follows from the symmetry of the QPSK constellation and since

(38)

We note that the rate expression (37) corresponds exactly to its lower bound in (33) due to the fact that in the i.i.d. QPSK case and thus . Furthermore, we use (36) to get a simpler expression for as