Fog-Based Detection for Random-Access IoT Networks with Per-Measurement Preambles

04/20/2020 ∙ by Rahif Kassab, et al. ∙ King's College London, Aalborg University

Internet of Things (IoT) systems may be deployed to monitor spatially distributed quantities of interest (QoIs), such as noise or pollution levels. This paper considers a fog-based IoT network, in which active IoT devices transmit measurements of the monitored QoIs to their local edge node (EN), while the ENs are connected to a cloud processor via limited-capacity fronthaul links. While the conventional approach uses preambles as metadata for reserving communication resources, here we consider assigning preambles directly to measurement levels across all devices. The resulting Type-Based Multiple Access (TBMA) protocol enables efficient remote detection of the QoIs, rather than of the individual payloads. The performance of both edge- and cloud-based detection, or hypothesis testing, is evaluated in terms of error exponents. Cloud-based hypothesis testing is shown, theoretically and via numerical results, to be advantageous when the inter-cell interference power and the fronthaul capacity are sufficiently large.


I Introduction

The density of connected wireless devices is expected to continue growing as 5G and beyond-5G systems are deployed, especially for Internet-of-Things (IoT) services supported by massive Machine-Type Communications [ding20196g]. This motivates the investigation of access schemes that support high device densities without penalizing the end-to-end performance for specific IoT services.

Fig. 1: Multi-cell fog-based IoT network aimed at estimating correlated distributed quantities of interest (QoIs) through either edge or cloud processing.

In this paper, we address this problem by considering the fog-radio access network deployment illustrated in Fig. 1, in which IoT devices monitor distributed Quantities of Interest (QoIs), such as noise or pollution levels. The devices access the network through their local Edge Nodes (ENs), e.g., access points, which are in turn connected via fronthaul links to a cloud processor. Devices are interrogated periodically by the corresponding EN, and they only transmit their measurements of the QoIs if active. The goal of the network is to detect the distributed QoIs based on hypothesis testing carried out either at the ENs or at the cloud.

A conventional approach would prescribe a random access protocol, such as ALOHA, through which devices communicate individual payloads to the local ENs. In case two or more devices select the same preamble during the random access phase, a collision would occur and no information would be delivered to the network. As recognized for single-cell systems in [anandkumar2007type, mergen2006tbma, tbma_sayeed], when the goal is estimating a common QoI measured at multiple devices, the requirement of distinct preambles per active user for a successful transmission is unnecessary. In such a case, it is in fact potentially more efficient to assign a specific preamble to each measurement level: in this way, devices making the same measurement contribute energy to the same preamble, potentially reinforcing its detection signal-to-noise ratio. The estimate of the QoI can then be obtained from the histogram of the received measurements [anandkumar2007type, mergen2006tbma, tbma_sayeed]. The outlined protocol can be considered as a form of joint source-channel coding, and is known as Type-Based Multiple Access (TBMA).
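As a toy illustration of TBMA's type-recovery principle, the following sketch (with a hypothetical measurement distribution and device count, and with fading and noise omitted) shows that assigning one preamble per measurement level lets the receiver recover the histogram of the measurements rather than individual payloads:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4                  # number of measurement levels = number of preambles
num_devices = 2000     # active devices (hypothetical)
p_levels = np.array([0.1, 0.2, 0.3, 0.4])  # hypothetical measurement pmf

# Each active device transmits the preamble indexed by its measurement.
obs = rng.choice(M, size=num_devices, p=p_levels)

# With orthogonal unit-energy preambles, a matched-filter bank collects
# one energy contribution per device on the preamble it transmitted, so
# the receiver observes (a noisy version of) the type, i.e., the
# empirical histogram of the measurements.
type_estimate = np.bincount(obs, minlength=M) / num_devices
print(type_estimate)   # close to p_levels for a large number of devices
```

Note that the spectral cost of this scheme is M preambles, regardless of how many devices are active.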

In this paper, we investigate the performance of TBMA in a multi-cell fog-based system, as seen in Fig. 1 and detailed in Sec. II and Sec. III. A key new aspect of this type of network deployment is that detection, via hypothesis testing, of the distributed QoIs can be carried out either locally at the ENs or centrally at the cloud. Edge detection is impaired by inter-cell interference, while cloud detection is subject to fronthaul capacity constraints. In contrast to recent works that considered distributed hypothesis testing over wireless channels [distributed_hypothesis_1, distributed_hypothesis_2], the goal in this paper is to detect the value of each QoI and not the joint distribution of all QoIs. The error exponent analysis presented in this paper (Sec. IV) provides insights into the performance comparison between edge and cloud processing, and the presented numerical results (Sec. V) validate our findings. Additional results can be found in the extended version of this paper [rahif_information_centric].

Notation: Lower-case and upper-case bold characters represent vectors and matrices, respectively. A^T denotes the transpose of matrix A; det(A) denotes the determinant of matrix A; and [A]_{i,j} denotes the element of A located at the i-th row and j-th column. CN(μ, σ) denotes the probability density function (pdf) of a complex Gaussian random variable (RV) with mean μ and standard deviation σ. C(p, q) represents the Chernoff information for the probability distributions p and q. Given a ≤ b, [a, b] represents the segment of values between a and b. Finally, the correlation operator, as applied to a given correlation interval, is denoted in the standard way.

II System and Signal Model

System Model: As illustrated in Fig. 1, we study a multi-cell fog-based IoT system that aims at detecting QoIs, such as pollution level, based on measurements received from IoT devices. There are cells, with a single-antenna EN and multiple IoT devices per cell. We assume that each QoI is described in each cell by a Random Variable (RV) . RVs are arbitrarily correlated across cells, and each device in cell makes a noisy measurement of . In this paper, we assume for simplicity of notation and analysis that each QoI can take two possible values, denoted as and .

The IoT devices are interrogated periodically by their local EN during collection intervals, which are synchronized across all cells. In each collection interval, a random number of devices in each cell transmit their measurements in the uplink using a grant-free access protocol based on TBMA [anandkumar2007type]. Mathematically, in any collection interval , each IoT device in cell is active probabilistically, independently of the observation being sensed, so that the total number of devices active in collection interval in cell is a Poisson RV with mean . All devices share the same spectrum and hence their transmissions generally interfere, both within the same cell and across different cells.

We compare two different architectures to perform hypothesis testing in order to detect the QoIs: (i) Edge-based Hypothesis Testing (EHT): Estimation of each QoI is done locally at the EN in cell based on the uplink signals received from the IoT devices, producing a local estimate (see Fig. 1); and (ii) Cloud-based Hypothesis Testing (CHT): The ENs are connected with orthogonal finite-capacity digital fronthaul links to a cloud processor with fronthaul capacity of . Each EN forwards the received signal upon quantization to the cloud processor using the fronthaul link. Unlike conventional C-RAN systems, here the goal is for the cloud to estimate all QoIs (see Fig. 1).

Signal Model: When active, an IoT device in cell during the -th collection makes a measurement . We assume that the measurement takes values in an alphabet of size . The distribution of each observation depends on the underlying QoI as

(1)

for . In words, devices in cell make generally noisy measurements with -dependent distributions and . When conditioned on QoIs , measurements are i.i.d. across all values of the cell index , device index , and the collection index .

We denote by the flat-fading Ricean channel, with mean and variance , from device to the EN in the same cell during collection interval ; and by , with mean and variance , the flat-fading Ricean channel from device in cell to the EN in cell during collection interval . All channels are assumed i.i.d. across indices and .

III Communication Protocol and Metrics

In this section, we detail the communication protocol and the performance metrics used.

III-A Communication Protocol

Within the available bandwidth and time per collection interval, as in [mergen2006tbma], we assume the presence of orthogonal waveforms , or preambles, with unit energy. According to TBMA, each waveform encodes one possible value of the observations of a device. The signal transmitted by a device in cell that is active in interval is then given as ; that is, we have if the observed signal is , where is the transmission energy of a device per collection interval. Devices observing the same value hence transmit using the same waveform. As a result, the spectral resources required by TBMA scale with the number of observation values rather than with the total number of packets sent by all the active devices, which may be much larger than .

The received signal at the EN in cell during the -th collection can be written as

(2)

where is white Gaussian noise, i.i.d. over and , with power . The first term in (2) represents the contribution from the IoT devices in the same cell , while the second term represents the contribution from devices from the remaining cells . We emphasize that contributions related to the same preamble from different devices are not necessarily added coherently, but they only contribute to the average received energy for the preamble.

Given the orthogonality of the waveforms , a demodulator based on a bank of matched filters can be implemented at each EN without loss of optimality [anandkumar2007type] (see [JSC_MP_grant_free_IoT] for extensions). After matched filtering of the received signal with all waveforms , each EN obtains the vector

(3)

where is a vector with i.i.d. elements, with ; and represents a unit vector with all zero entries except in position .
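A minimal simulation of this matched-filter observation model is sketched below; the number of preambles, the mean device activity, the per-device energy, and the noise power are all hypothetical values, and a Rayleigh (zero-mean) special case of the Ricean fading model is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 4            # orthogonal preambles (one per measurement level)
lam = 50         # mean number of active devices per cell (Poisson)
E = 1.0          # per-device transmission energy (hypothetical)
N0 = 0.1         # noise power (hypothetical)
p = np.array([0.4, 0.3, 0.2, 0.1])  # measurement pmf under one hypothesis

K = rng.poisson(lam)                 # random number of active devices
obs = rng.choice(M, size=K, p=p)     # level observed by each device
# zero-mean complex fading (Rayleigh special case of the Ricean model)
h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

# Matched-filter bank: one complex output per preamble. Contributions
# on the same preamble add non-coherently through the random channels,
# raising the average received energy on that preamble.
y = np.zeros(M, dtype=complex)
np.add.at(y, obs, h * np.sqrt(E))
y += np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
print(np.abs(y) ** 2)   # received energy per preamble
```

The energies concentrate, on average, on the preambles with the largest observation probabilities, which is what detection exploits.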

For detection of the QoIs, we study both EHT and CHT:

EHT: Each EN produces an estimate of the RV based on the received signals for all collection intervals , where is given in (3).

CHT: Each EN compresses the received signals across all collection intervals and sends the resulting compressed signals to the cloud. The cloud carries out joint detection of all QoIs producing estimates .

III-B Performance Metrics

The performance of CHT and EHT will be evaluated in terms of the error exponent that describes the scaling of the joint error probability as a function of the number of collections. The joint error probability is given by

(4)

where is the estimate of the QoI obtained at the EN or at the cloud, for EHT and CHT respectively. From large deviation theory, the detection error probability decays exponentially as [cover2012elements]

(5)

where as , for some error exponent . In the rest of this paper, we will hence be interested in computing the error exponent analytically for EHT and CHT.
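As a standalone illustration of the error-exponent notion (a simple scalar Gaussian binary test, not the paper's model), the exponent manifests as the magnitude of the log-slope of the error probability in the number of observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in example: binary test between N(-mu, 1) and N(+mu, 1) from n
# i.i.d. samples. The optimal error probability decays as exp(-n * E),
# with Chernoff exponent E = mu^2 / 2.
mu, trials = 0.8, 200_000
ns = np.array([4, 8, 12, 16])
p_err = []
for n in ns:
    # under H0 (mean -mu), the optimal rule errs when the sample mean > 0
    x = rng.normal(-mu, 1.0, size=(trials, n)).mean(axis=1)
    p_err.append(np.mean(x > 0))

# The magnitude of the fitted log-slope estimates the error exponent;
# it approaches mu**2 / 2 = 0.32 as n grows (sub-exponential prefactors
# bias the estimate upward at small n).
slope = -np.polyfit(ns, np.log(p_err), 1)[0]
print(slope)
```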

IV Asymptotic Performance

In this section, we derive the error exponent in (5) for optimal detection when the number of collection intervals grows to infinity. In order to simplify the analysis, as in [anandkumar2007type], we will assume a large average number of active devices, i.e., a large . This scenario is particularly relevant for mMTC [ding20196g].

IV-A Edge-based Hypothesis Testing

With EHT, each EN in cell performs the binary test

(6)

based on the available received signals in (3). The optimum Bayesian decision rule that minimizes the probability of error at each EN chooses the hypothesis with the Maximum A Posteriori (MAP) probability. The error exponent in (5) using EHT can be lower bounded as shown in the following proposition.

Proposition 1: Under the optimal Bayesian detector, the error exponent in (5) in the large- regime and for any is lower bounded as , where

(7)

with

(8)

and

(9)

Proof: In a manner similar to [anandkumar2007type, Theorem 3], the proof relies on the Central Limit Theorem (CLT) with a random number of summands [billingsley2008probability, p. 369] and on the Chernoff information [cover2012elements]. We refer to the Appendix for more details. ∎

The term in (7) being optimized over corresponds to the Chernoff information [cover2012elements, Chapter 11] for the binary hypothesis test between the distributions of the received signal under hypotheses and when . In fact, for large values of , when and , the received signal in (3) can be shown to be approximately distributed as , with mean vector and diagonal covariance matrix with diagonal elements .
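The Chernoff information between two multivariate Gaussians has no closed form in α, but the α-divergence does, so the maximization can be done numerically. The routine below is a generic sketch of this computation (it is not tied to the paper's specific mean vectors and covariance matrices):

```python
import numpy as np

def chernoff_gaussians(mu0, mu1, S0, S1, grid=199):
    """Chernoff information between N(mu0, S0) and N(mu1, S1):
    max over alpha in (0, 1) of -log integral p0^alpha p1^(1-alpha),
    using the closed form of the integral for Gaussian densities."""
    d = np.asarray(mu1, float) - np.asarray(mu0, float)
    best = 0.0
    for a in np.linspace(0.005, 0.995, grid):
        Sa = (1 - a) * S0 + a * S1          # interpolated covariance
        quad = 0.5 * a * (1 - a) * d @ np.linalg.solve(Sa, d)
        logdet = 0.5 * (np.linalg.slogdet(Sa)[1]
                        - (1 - a) * np.linalg.slogdet(S0)[1]
                        - a * np.linalg.slogdet(S1)[1])
        best = max(best, quad + logdet)
    return best

# Equal unit covariances: the exponent reduces to ||mu1 - mu0||^2 / 8,
# attained at alpha = 1/2 (the Bhattacharyya point).
print(chernoff_gaussians([0.0], [2.0], np.eye(1), np.eye(1)))  # ≈ 0.5
```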

IV-B Cloud-based Hypothesis Testing

The cloud tackles the -ary hypothesis testing problem of distinguishing among hypotheses for on the basis of the quantized signals received from both ENs on the fronthaul links.

Following a standard approach, see, e.g., [bookcransimeone], the impact of fronthaul quantization is modeled as additional quantization noise. In particular, the signal received at the cloud from EN can be written accordingly as , where represents the quantization noise vector. As in most prior references (see, e.g., [bookcransimeone]), the quantization noise vector is assumed to have i.i.d. Gaussian elements with zero mean and variance . Furthermore, from rate-distortion theory, the fronthaul capacity constraint implies the following inequality for each EN

(10)

This is because the number of bits available to transmit each measurement is given by bits per symbol, or, equivalently, per orthogonal spectral resource; that is, bits in total for all resources. From (10), one can in principle derive the quantization noise power . However, evaluating the mutual information in (10) directly is difficult due to the non-Gaussianity of the received signals . To tackle this issue, we bound the mutual information term in (10) using the property that the Gaussian distribution maximizes the differential entropy under covariance constraints [cover2012elements], obtaining the following Lemma. In what follows, we denote .

Lemma 1: The quantization noise power can be upper bounded as , where is obtained by solving the non-linear equation

(11)

with given in equation (9).

Proof: See [rahif_information_centric, Appendix A] for details. ∎
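Equation (11) is a non-linear equation in the quantization noise power and is solved numerically. As a standalone illustration of the same numerical step, the sketch below solves a simplified Gaussian test-channel version of the rate constraint, log2(1 + signal_var / sigma_q^2) = C, by bisection; the closed form signal_var / (2^C - 1) serves as a check. The actual constraint in (11) differs, so this only shows the approach, not the paper's equation:

```python
import math

def quantization_noise_power(signal_var, C, iters=200):
    """Solve log2(1 + signal_var / s) = C for the quantization noise
    power s by bisection (simplified stand-in for the non-linear
    fronthaul equation; assumes C > 0 and signal_var > 0)."""
    f = lambda s: math.log2(1.0 + signal_var / s) - C
    lo, hi = 1e-12, 1e12          # bracket: f(lo) > 0 > f(hi)
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric midpoint suits the wide bracket
        if f(mid) > 0:            # rate still too high -> more noise needed
            lo = mid
        else:
            hi = mid
    return hi

# Check against the closed form signal_var / (2**C - 1):
print(quantization_noise_power(1.0, 1.0))   # ≈ 1.0
```

A smaller fronthaul capacity C yields a larger quantization noise power, consistent with the capacity constraint in (10).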
Proposition 2: Under optimal detection, the error exponent in (5) in the large- regime for CHT can be lower bounded as , where

(12)

where the entries of the vector are defined as

(13)

with defined in (8), and the entries of the covariance matrix given as

(14)

where is defined in (9) and all other entries of matrix are zero.

Proof: The proof follows in a manner similar to Proposition 1 and uses Sanov’s Theorem [cover2012elements, p. 362] as detailed in [rahif_information_centric, Appendix C]. ∎

The term in (12) being optimized over corresponds to the Chernoff information for the binary test between the distribution of the signal received at the cloud under hypotheses and . As discussed above, for large , the signal received at the cloud under hypothesis is approximately distributed as , where the elements of the mean vector and covariance matrix are described in (13) and (14).

IV-C Edge vs. Cloud-Based Hypothesis Testing

In this section, we prove that the performance of CHT is superior to that of EHT as long as the inter-cell channel power gain is sufficiently large. The main result is summarized in the following theorem.

Theorem 1: The error exponents derived in Proposition 1 and Proposition 2 satisfy the following limits

(15)

Proof: The proof can be found in [rahif_information_centric, Appendix D]. ∎

Theorem 1 implies that, for high inter-cell power gains, EHT has a vanishing error exponent, while this is not the case for CHT. This demonstrates that the performance of EHT is inter-cell interference limited, while that of CHT is not. In practice, as shown via numerical results in Sec. V, fairly low interference levels are sufficient for CHT to outperform EHT.

Fig. 2: Error exponent for EHT and CHT as a function of the inter-cell power gain (, , and ).

V Numerical Results

In this section, we provide numerical simulations to evaluate the performance of both CHT and EHT. Unless specified otherwise, we fix the following values for the parameters , and cells. The joint distribution of QoIs is defined as

(16)

where represents a “correlation” parameter that measures the probability that the two QoIs have the same value, i.e., . Note that under (16), both values of the QoI are equiprobable, i.e., for and . Furthermore, when , the two QoIs are independent.
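The joint distribution in (16) can be written out explicitly; the sketch below (with a hypothetical numeric value of the correlation parameter) checks that the marginals are uniform and that the parameter equals the probability that the two QoIs agree:

```python
import numpy as np

def joint_pmf(rho):
    """Joint pmf of the two binary QoIs (H1, H2): they agree with
    probability rho and disagree with probability 1 - rho, with
    uniform marginals over the two QoI values."""
    return np.array([[rho / 2, (1 - rho) / 2],
                     [(1 - rho) / 2, rho / 2]])

P = joint_pmf(0.7)
print(P.sum())        # ≈ 1 (valid pmf)
print(P.sum(axis=1))  # ≈ [0.5, 0.5] (uniform marginals)
print(np.trace(P))    # ≈ 0.7 = Pr[H1 = H2]
# rho = 0.5 makes the two QoIs independent (all four entries 0.25):
print(np.allclose(joint_pmf(0.5), 0.25))
```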

In Fig. 2, we plot the error exponent for both EHT and CHT with different values of as a function of the inter-cell power gain . As increases, the performance of edge detection is seen to degrade, since interference from the other cell is treated as noise at the edge. In contrast, in line with the theoretical results in Theorem 1, CHT is able to benefit from sufficiently large inter-cell interference thanks to centralized processing. We note that the same U-shaped behavior is observed for the uplink throughput in C-RAN as a function of the inter-cell interference [simeone2012cooperative]. Furthermore, a larger fronthaul capacity leads to improved detection performance, since measurements are received at the cloud with a better resolution.

In Fig. 3, we plot the error exponent as a function of the fronthaul capacity . For low values of , EHT outperforms CHT since, in this regime, the quantization noise is large and thus measurements are received at the cloud with low resolution. In contrast, CHT outperforms EHT for sufficiently high values of .

Fig. 3: Error exponent for EHT and CHT as a function of the fronthaul capacity (, and ).

VI Conclusions

This paper considered the problem of detecting Quantities of Interest (QoIs) at the edge or at the cloud of a fog-based IoT network. The performance of cloud-based detection was demonstrated, analytically and via numerical results, to be superior to edge-based detection for sufficiently high fronthaul capacity and inter-cell interference. Future research directions include the study of the coexistence of heterogeneous IoT services with different service requirements.

Appendix: Proof of Proposition 1

From the union bound with and the identity , we directly obtain the lower bound on the error exponent

(17)

where is the error exponent for detection of QoI conditioned on the value for . Under optimal Bayesian detection, the error exponent is given by the Chernoff information [cover2012elements, Chapter 11] as

(18)

where we have denoted for brevity. Computing the error exponent in (18) requires finding the distributions . Following [anandkumar2007type], these can be approximated by Gaussian distributions in the regime of large thanks to the Central Limit Theorem (CLT) with a random number of summands [billingsley2008probability, p. 369]. In particular, referring to [anandkumar2007type] for details, we can conclude that, when , the conditional distribution converges in distribution to , where and are the mean vector and covariance matrix, respectively, when , and and are defined in (8) and (9).
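The CLT with a random (Poisson) number of summands invoked here can be checked empirically; the sketch below (with arbitrary summand statistics, not the paper's) verifies the compound-Poisson mean and variance, lam * E[X] and lam * E[X^2], that the Gaussian approximation matches:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sum of K i.i.d. N(1, 2^2) terms with K ~ Poisson(lam): for large lam
# the sum is approximately Gaussian with compound-Poisson moments
# mean = lam * E[X] and variance = lam * E[X^2].
lam, reps = 400, 20_000
sums = np.array([rng.normal(1.0, 2.0, size=k).sum()
                 for k in rng.poisson(lam, size=reps)])

print(sums.mean())   # ≈ lam * 1 = 400
print(sums.var())    # ≈ lam * (2**2 + 1**2) = 2000
```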

The Chernoff Information between two Gaussian distributions can be obtained by maximizing over the -Chernoff information defined as [nielsen2011chernoff]

(19)

By plugging the expressions of and into (17) and (19), and using (18), we obtain the desired result. ∎

References