Information-Centric Grant-Free Access for IoT Fog Networks: Edge vs Cloud Detection and Learning

07/11/2019 · by Rahif Kassab, et al. · King's College London, Aalborg University

A multi-cell Fog-Radio Access Network (F-RAN) architecture is considered in which Internet of Things (IoT) devices periodically make noisy observations of a Quantity of Interest (QoI) and transmit using grant-free access in the uplink. The devices in each cell are connected to an Edge Node (EN), which may also have a finite-capacity fronthaul link to a central processor. In contrast to conventional information-agnostic protocols, the devices transmit using a Type-Based Multiple Access (TBMA) protocol, which is tailored to enable the estimation of the field of correlated QoIs in each cell based on the measurements received from the IoT devices. TBMA has been previously introduced in single-cell scenarios as a bandwidth-efficient data collection method, and is here studied for the first time in a multi-cell F-RAN model as an instance of an information-centric access protocol. To this end, edge and cloud detection are designed and compared for a multi-cell system. In the former case, detection of the local QoI is done locally at each EN, while, with the latter, the ENs forward the received signals, upon quantization, over the fronthaul links to the central processor, which carries out centralized detection of all QoIs. Optimal model-based detectors are introduced, and the resulting asymptotic behaviour of the probability of error at the cloud and edge is derived. Then, for the scenario in which a statistical model is not available, data-driven edge and cloud detectors are discussed and evaluated in numerical results.


I Introduction

I-A Context

Most commercial Internet of Things (IoT) deployments are currently based on proprietary technologies, most notably LoRa [1] and Sigfox [2][3], and target long-range, low-duty-cycle transmission [4][5]. With the advent of 5G, cellular systems are expected to play an increasing role in IoT systems, thanks to the introduction of NarrowBand IoT (NB-IoT) [6]. IoT deployments based on cellular systems come with potential advantages in terms of reliability and coverage, but they also pose a number of novel challenges, particularly in terms of interference management and system optimization.

A key communication primitive for IoT systems is grant-free access, whereby devices transmit using randomly selected preambles [7][8]. Random access is agnostic to the information being communicated, since all packets are generally treated in the same way as independent messages. This paper proposes to improve the efficiency of grant-free access schemes in cellular systems by introducing an information-centric protocol based on Type-Based Multiple Access (TBMA) [9][10][11].

To define the problem of interest, as illustrated in Fig. 1, consider an IoT application that aims at detecting the spatial distribution, or field, defined by a given Quantity of Interest (QoI) in each cell. As an example, the IoT network may be deployed to monitor the pollution level across the covered geographical area. The IoT devices operate as sensors whose observations are generally correlated, given that QoIs measured in nearby locations are likely to be similar. A conventional approach, implemented for instance in Sigfox, is to have each device transmit its observation using grant-free access to the local Edge Node (EN), which estimates the given QoI based on the received observations. This solution has a number of drawbacks that we address in this paper, namely:

  • The communication protocol does not account for the correlation in the devices’ observations and for the fact that the goal of the system is not to retrieve individual observations, but rather to estimate the field of QoIs;

  • Local detection at the EN does not leverage the possible availability of central, or “cloud”, processors that are connected to multiple ENs via fronthaul links. The presence of cloud processors, also known as Central Units in 3GPP documents [12], defines cellular architectures referred to here as Fog-Radio Access Networks (F-RANs) as in, e.g., [13][14].

I-B TBMA in F-RAN Systems

With regard to the first point raised above, in this work we adopt an information-centric TBMA-based protocol. TBMA is a random access technique introduced in [9] and [11] and further studied, among other papers, in [10]. TBMA relies on the fact that, in order to optimally estimate a given parameter, only the histogram of the parameter-dependent measurements is needed, and not the individual observations of the devices. Therefore, conventional transmission schemes that aim at ensuring recovery of all individual observations at the receiver are generally inefficient. In contrast, TBMA is designed to allow the receiver to estimate the histogram of the observations across the devices. To this end, in TBMA, all devices that make the same measurement, upon suitable quantization [11], transmit the same waveform in a non-orthogonal fashion to the receiver. Assigning one orthogonal waveform to each measurement value hence yields bandwidth requirements that do not scale with the number of devices but only with the size of the quantized observation space. This produces potentially dramatic savings in terms of bandwidth and overall power, particularly in the regime of a large number of devices [9, 11, 10]. All prior work on TBMA assumed a single-cell scenario with a single receiver.
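To make the histogram-recovery idea concrete, the following minimal Python sketch (not from the paper; all names and parameter values are illustrative) simulates a noiseless TBMA round with unit channel gains: since every device observing level m transmits the same waveform, the matched-filter bank output reduces to the per-level device counts, i.e., the type of the observations.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8              # size of the quantized observation alphabet (one waveform per level)
num_devices = 200  # number of active devices; bandwidth does NOT grow with this

# Each device quantizes its measurement to one of M levels.
measurements = rng.integers(0, M, size=num_devices)

# With TBMA, all devices observing level m transmit the same waveform phi_m.
# With unit channels and no noise, the matched-filter bank output is the
# per-level count of devices, i.e., the (unnormalized) histogram.
matched_filter_output = np.bincount(measurements, minlength=M)

# The receiver only needs the type (empirical distribution) of the data:
empirical_type = matched_filter_output / num_devices
print(empirical_type)  # estimate of the measurement distribution across devices
```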

Concerning the second point, with 5G, the cellular architecture is evolving from a base-station-centric architecture, characterized by local processing, to a fog-like set-up, in which network functionalities can be distributed more flexibly between centralized processing at the cloud and local processing at the edge. This flexibility is enabled by fronthaul links connecting the ENs to the cloud processor and by network softwarization. At one extreme of the resulting F-RAN architecture, all processing can be local, i.e., carried out at the ENs, while, at the other, all processing can be centralized as in a Cloud-Radio Access Network (C-RAN) [15][16]. In an IoT network, it is hence interesting to investigate under which conditions centralized, cloud-based detection of the QoIs can be advantageous. The problem is non-trivial due to the limitations on the capacity of the fronthaul links (see, e.g., [15][16]).

In this paper, as illustrated in Fig. 1, we investigate an information-centric TBMA-based access scheme for F-RAN IoT systems that integrates in-cell TBMA with inter-cell non-orthogonal frequency reuse in the presence of either edge or cloud detection.

(a) Edge detection
(b) Cloud detection
Fig. 1: A multi-cell fog radio access network with IoT devices making observations of a local Quantity of Interest (QoI) in each cell. Each cell uses the same frequency band. The goal of the system is to compute an estimate of the QoI in each cell. This can be done in: (a) a distributed fashion at each EN, or (b) a centralized fashion at the cloud.

I-C Related Work

IoT systems have been extensively studied for a variety of applications and tasks, including notably device detection techniques, often based on sparsity constraints [17, 18, 19, 20] and possibly leveraging machine learning methods [21]. This line of work is currently of particular interest in the context of massive Machine-Type Communications (mMTC) for 5G systems [22]. TBMA can be interpreted as carrying out a special form of Non-Orthogonal Multiple Access (NOMA), in that the devices transmit using non-orthogonal waveforms. In this sense, it is also related to the unsourced model of random access studied in [23]. Unlike conventional NOMA (see, e.g., [24][25][26]), in TBMA the communication protocol is tailored to the information being transmitted and to the detection task. It can hence be interpreted as an example of joint source-channel coding, which is more generally receiving renewed interest for its potential spectral and power efficiency in IoT systems (see, e.g., [27][28][29]). To the best of our knowledge, TBMA has not been studied in multi-cell F-RAN systems.

The problem of studying the performance trade-offs between processing at the edge and at the cloud has been studied in a number of works, including for content delivery [30][31], scheduling [32], and coexistence of different 5G services [33].

I-D Main Contributions

The main contributions of this paper are summarized as follows:

  • An information-centric grant-free access scheme is introduced for F-RAN IoT cellular systems that combines in-cell TBMA and inter-cell non-orthogonal frequency reuse;

  • Optimal edge and cloud detectors are derived for the system at hand that leverage correlations in the QoIs across different cells;

  • An analytical study of the performance of optimal cloud and edge detection is provided in terms of detection error exponents;

  • Assuming absence of model knowledge at the edge or cloud, learning-based data-driven detection schemes are considered for both cloud and edge processing.

The rest of the paper is organized as follows. In Sec. II, we detail the system and signal models. In Sec. III, we describe the communication protocol used by the devices, as well as the performance metrics utilized to evaluate the system. In Secs. IV and V, we study optimal edge and cloud detection and the corresponding asymptotic behaviour, respectively. In Sec. VI, we investigate data-driven edge and cloud detection for the case in which a statistical model is not available. Numerical results are presented in Sec. VII, and conclusions and extensions are discussed in Sec. VIII.

Notation:

Lower-case bold characters represent vectors and upper-case bold characters represent matrices. $\mathbf{A}^T$ denotes the transpose of matrix $\mathbf{A}$; $\det(\mathbf{A})$ denotes the determinant of matrix $\mathbf{A}$; and $[\mathbf{A}]_{i,j}$ denotes the element of $\mathbf{A}$ located at the $i$-th row and $j$-th column. $\mathcal{CN}(x; \mu, \sigma)$ is the probability density function (pdf) of a complex Gaussian RV with mean $\mu$ and standard deviation $\sigma$. $\text{Pois}(n; \lambda)$ represents the probability mass function (pmf) of a Poisson RV with mean $\lambda$. $C(p, q)$ and $D(p \| q)$ represent the Chernoff information and the Kullback-Leibler (KL) divergence, respectively, for the probability distributions $p$ and $q$. Given $a \le b$, $[a, b]$ represents the segment of values between $a$ and $b$.

II System and Signal Model

II-A System Model

As illustrated in Fig. 1, we study a multi-cell wireless fog network that aims at detecting a field of Quantities of Interest (QoIs), such as temperature or pollution level, based on signals received from IoT devices. Each cell contains a single-antenna Edge Node (EN) and multiple IoT devices. We assume that the QoI in each cell $c$ is described by a Random Variable (RV) $\theta_c$. The RVs $\theta_c$ are generally correlated across cells, and each device in cell $c$ makes a noisy measurement of $\theta_c$. For example, QoI $\theta_c$ may represent the pollution level in the area covered by cell $c$. In this paper, we assume for simplicity of notation and analysis that each QoI $\theta_c$ can take two possible values, $\theta_c = 0$ and $\theta_c = 1$. Continuing the example above, $\theta_c$ may represent a low ($\theta_c = 0$) or high ($\theta_c = 1$) pollution level in cell $c$. Extensions to more general QoIs follow directly, but at the cost of a more cumbersome notation and analysis, as further discussed in Sec. VIII.

The IoT devices are interrogated periodically by their local EN over a number of collection intervals, which are synchronized across all cells. In each collection interval, a number of devices in each cell transmit their measurements in the uplink using a grant-free access protocol based on Type-Based Multiple Access (TBMA) [11][10]. Note that the random activation pattern assumed here can also model aspects such as discontinuous access to the QoI or to sufficient energy-communication resources at the devices. Mathematically, in any collection interval $i$, each IoT device in cell $c$ is active probabilistically, so that the total number $N_c^{(i)}$ of devices active in collection interval $i$ in cell $c$ is a Poisson RV with mean $\lambda$ and pmf $\text{Pois}(\cdot\,; \lambda)$. When active, a device transmits a noisy measurement of the local QoI in the uplink. All devices share the same spectrum, and hence their transmissions generally interfere, both within the same cell and across different cells.

We compare two different architectures for detection of the QoIs: (i) Edge detection: detection of each QoI $\theta_c$ is done locally at the EN in cell $c$ based on the uplink signals received from the IoT devices, producing a local estimate $\hat{\theta}_c$ (see Fig. 1(a)); and (ii) Cloud detection: the ENs are connected to a cloud processor by orthogonal finite-capacity digital fronthaul links, each of fronthaul capacity $C$. As in a C-RAN architecture [16], each EN forwards the received signal, upon quantization, to the cloud processor over the fronthaul link. Unlike conventional C-RAN systems, here the goal is for the cloud to compute estimates of all QoIs (see Fig. 1(b)).

II-B Signal Model

When active, an IoT device $k$ in cell $c$ during the $i$-th collection observes a measurement $S_{k,c}^{(i)}$. We assume that the measurement takes values in an alphabet $\{1, \dots, M\}$ of size $M$. If the observation is analog, measurement $S_{k,c}^{(i)}$ can be obtained upon quantization to $M$ levels. The problem of designing the quantizer is an interesting direction for future research (see Sec. VIII). The distribution of each observation depends on the underlying QoI as

$p\big(S_{k,c}^{(i)} = s \,\big|\, \theta_c = \theta\big) = p_\theta(s), \quad s \in \{1, \dots, M\},$   (1)

for $\theta \in \{0, 1\}$. In words, devices in cell $c$ make generally noisy measurements with $\theta_c$-dependent distributions $p_0(s)$ and $p_1(s)$. When conditioned on the QoIs, measurements are i.i.d. across all values of the cell index $c$, device index $k$, and collection index $i$.

Fig. 2: Two-cell system model.

While the analysis can be generalized to a multi-cell scenario, as further discussed in Sec. VIII, we henceforth focus on the two-cell case illustrated in Fig. 2 in order to concentrate on the essence of the problem without complicating the notation. In this case, we define as

$p(\theta_1, \theta_2) = \begin{cases} \rho/2, & \text{if } \theta_1 = \theta_2 \\ (1-\rho)/2, & \text{if } \theta_1 \neq \theta_2 \end{cases}$   (2)

the joint distribution of the QoIs in the two cells, where $\rho \in [0, 1]$ represents a “correlation” parameter that measures the probability that the two QoIs have the same value, i.e., $\rho = \Pr[\theta_1 = \theta_2]$. Note that, under (2), both values of the QoI are equiprobable, i.e., $\Pr[\theta_c = \theta] = 1/2$ for $\theta \in \{0, 1\}$ and $c \in \{1, 2\}$. Extensions to more general probability distributions are immediate.
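As a quick illustration, the following Python snippet (illustrative, not from the paper) samples pairs of QoIs according to the joint distribution (2) and checks empirically that they agree with probability $\rho$.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_qois(rho: float) -> tuple[int, int]:
    """Sample (theta_1, theta_2) from (2): equiprobable marginals,
    equal values with probability rho."""
    theta_1 = rng.integers(0, 2)
    theta_2 = theta_1 if rng.random() < rho else 1 - theta_1
    return int(theta_1), int(theta_2)

samples = [sample_qois(rho=0.8) for _ in range(10_000)]
print(np.mean([t1 == t2 for t1, t2 in samples]))  # approximately 0.8
```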

We denote by $h_{k,c}^{(i)}$ the flat-fading Ricean channel, with mean $\mu_h$ and variance $\sigma_h^2$, from device $k$ to the EN in the same cell $c$ during collection interval $i$; and by $g_{k,\bar{c}}^{(i)}$, with mean $\mu_g$ and variance $\sigma_g^2$, the flat-fading Ricean channel from device $k$ in cell $\bar{c}$ to the EN in cell $c$ during collection interval $i$, where $\bar{c}$ denotes the index of the other cell. All channels are assumed i.i.d. across the indices $k$, $c$, and $i$. In the next section, we detail the communication protocol, including the physical-layer model and the performance metrics used.

III Communication Protocol and Performance Metrics

In this section, we detail the communication protocol and the performance metrics used to evaluate the system’s performance.

III-A Communication Protocol

As mentioned in Sec. I, based on the single-cell results in [9][10][11], in this paper we focus on an information-centric TBMA-based protocol that leverages the correlation between the observations of devices in different cells. To this end, within the available bandwidth and time per collection interval, as in [9], we assume the presence of $M$ orthogonal waveforms $\phi_1(t), \dots, \phi_M(t)$ with unit energy. In practice, the preambles allocated for the random access phase in cellular standards can be used as these waveforms. The waveforms are used in a non-orthogonal fashion by the IoT devices to transmit their observations in the uplink. As detailed next, we allow for non-orthogonal frequency reuse across the two cells, and we also study orthogonal frequency reuse for comparison.

Non-orthogonal frequency reuse: According to TBMA, each waveform encodes one value of the observations of a device. The signal transmitted by a device $k$ in cell $c$ that is active in interval $i$ is then given as

$x_{k,c}^{(i)}(t) = \sqrt{E_s}\, \phi_{S_{k,c}^{(i)}}(t),$   (3)

that is, we have $x_{k,c}^{(i)}(t) = \sqrt{E_s}\, \phi_m(t)$ if the observed signal is $S_{k,c}^{(i)} = m$, where $E_s$ is the transmission energy of a device per collection interval. With TBMA, devices observing the same value hence transmit using the same waveform. This is why, as discussed in Sec. I, the spectral resources required by TBMA scale with the number $M$ of observation values rather than with the total amount of information transmitted by all the active devices, which may be much larger than $M$.

The received signal at the EN in cell $c$ during the $i$-th collection can be written as

$y_c^{(i)}(t) = \sum_{k=1}^{N_c^{(i)}} h_{k,c}^{(i)}\, x_{k,c}^{(i)}(t) + \sum_{k=1}^{N_{\bar{c}}^{(i)}} g_{k,\bar{c}}^{(i)}\, x_{k,\bar{c}}^{(i)}(t) + z_c^{(i)}(t),$   (4)

where $z_c^{(i)}(t)$ is white Gaussian noise, i.i.d. over $i$ and $c$, with power $N_0$; and $\bar{c}$ represents the index of the other cell. The first term in (4) represents the contribution from the IoT devices in the same cell $c$, while the second term represents the contribution from the IoT devices in the other cell $\bar{c}$.

Given the orthogonality of the waveforms $\phi_1(t), \dots, \phi_M(t)$, a demodulator based on a bank of matched filters can be implemented at each EN without loss of optimality [10]. After matched filtering of the received signal with all waveforms $\phi_m(t)$ for $m = 1, \dots, M$, each EN obtains the $M \times 1$ vector

$\mathbf{y}_c^{(i)} = \sqrt{E_s} \sum_{k=1}^{N_c^{(i)}} h_{k,c}^{(i)}\, \mathbf{e}_{S_{k,c}^{(i)}} + \sqrt{E_s} \sum_{k=1}^{N_{\bar{c}}^{(i)}} g_{k,\bar{c}}^{(i)}\, \mathbf{e}_{S_{k,\bar{c}}^{(i)}} + \mathbf{z}_c^{(i)},$   (5)

where $\mathbf{z}_c^{(i)}$ is a vector with i.i.d. $\mathcal{CN}(0, N_0)$ elements, and $\mathbf{e}_m$ represents an $M \times 1$ unit vector with all zero entries except in position $m$. In (5), the $m$-th entry of $\mathbf{y}_c^{(i)}$ is obtained as the correlation integral of the received signal (4) with waveform $\phi_m(t)$ over the given collection interval. To gain insight into the operation of TBMA, we note that, in the absence of noise and inter-cell interference, and if the channel coefficients are all equal to one, the $m$-th element of vector $\mathbf{y}_c^{(i)}$ is equal (up to the scaling factor $\sqrt{E_s}$) to the number of active devices that have observed the $m$-th data level in cell $c$ [9].
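The following Python sketch simulates one collection interval of the matched-filter output (5) under the model above. It is a hypothetical illustration: the distributions $p_0(s)$ and $p_1(s)$, the channel statistics, and all parameter values are placeholders, and the Ricean channels are modeled simply as complex Gaussians with the stated means and variances.

```python
import numpy as np

rng = np.random.default_rng(2)

def tbma_received_vector(M, lam, p_theta, p_theta_other,
                         Es=1.0, N0=0.1,
                         mu_h=1.0, sig_h=0.3, mu_g=0.5, sig_g=0.3):
    """One collection interval of the matched-filter output (5) at an EN:
    Poisson(lam) active devices per cell, complex channels with the given
    mean/std (a stand-in for the Ricean model), plus AWGN."""
    y = np.zeros(M, dtype=complex)
    # In-cell devices: observation drawn from p_theta, channel gain h.
    for _ in range(rng.poisson(lam)):
        m = rng.choice(M, p=p_theta)
        h = mu_h + sig_h * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        y[m] += np.sqrt(Es) * h
    # Other-cell devices: observation drawn from p_theta_other, channel gain g.
    for _ in range(rng.poisson(lam)):
        m = rng.choice(M, p=p_theta_other)
        g = mu_g + sig_g * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        y[m] += np.sqrt(Es) * g
    # AWGN at the matched-filter outputs.
    y += np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    return y

p0 = np.array([0.4, 0.3, 0.2, 0.1])   # illustrative p_0(s): small levels likely
p1 = p0[::-1].copy()                  # illustrative p_1(s): large levels likely
print(tbma_received_vector(M=4, lam=10, p_theta=p0, p_theta_other=p1))
```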

Orthogonal frequency reuse: For reference, we also consider a rate-$1/2$ frequency reuse scheme that eliminates inter-cell interference. In this baseline scheme, the $M$ available orthogonal resources are equally partitioned between the two cells, so that in each cell only $M/2$ orthogonal waveforms are available. We assume here $M$ to be even for simplicity of notation. In this case, each active IoT device in cell $c$ quantizes its observation to $M/2$ levels, obtaining the re-quantized observation $\tilde{S}_{k,c}^{(i)} \in \{1, \dots, M/2\}$, before transmission. The signal received at EN $c$ during collection $i$ can hence be written as

$\mathbf{y}_c^{(i)} = \sqrt{E_s} \sum_{k=1}^{N_c^{(i)}} h_{k,c}^{(i)}\, \mathbf{e}_{\tilde{S}_{k,c}^{(i)}} + \mathbf{z}_c^{(i)}.$   (6)

Comparing (6) with (5), we observe that, on the one hand, orthogonal frequency reuse reduces the resolution of the observations of each device from $M$ to $M/2$ levels but, on the other hand, it removes inter-cell interference. In the remainder of this paper, we consider and derive the performance of the more general non-orthogonal frequency reuse. The performance of orthogonal frequency reuse can be derived in the same way by replacing the number $M$ of resources with $M/2$ and setting the interference channel coefficients to zero in all the derived equations. As for detection of the QoIs, we study both edge and cloud detection, described as follows.
Edge Detection: With edge detection, each EN $c$ produces an estimate $\hat{\theta}_c$ of the RV $\theta_c$ based on the received signals $\{\mathbf{y}_c^{(i)}\}_{i=1}^{I}$ across all collection intervals $i = 1, \dots, I$, where $\mathbf{y}_c^{(i)}$ is given in (5) and (6) for non-orthogonal and orthogonal frequency reuse, respectively.

Cloud Detection: With cloud detection, each EN compresses the received signals $\{\mathbf{y}_c^{(i)}\}_{i=1}^{I}$ across all collection intervals and sends the resulting compressed signals to the cloud. Compression is needed in order to account for the finite fronthaul capacity $C$. The cloud carries out joint detection of both QoIs, producing the estimates $(\hat{\theta}_1, \hat{\theta}_2)$.

III-B Performance Metrics

The performance of the cloud and edge detection methods will be evaluated in terms of the joint error probability

$P_e = \Pr\big[(\hat{\theta}_1, \hat{\theta}_2) \neq (\theta_1, \theta_2)\big],$   (7)

where $\hat{\theta}_c$ is the estimate of the QoI $\theta_c$ obtained at EN $c$ or at the cloud, for edge detection and cloud detection, respectively. In order to enable analysis, we will also study analytically the scaling of the error probability as a function of the number $I$ of collections. From large deviation theory, the detection error probability decays exponentially as [34]

$P_e \doteq e^{-I E},$   (8)

where the notation $\doteq$ indicates that $-(1/I) \log P_e \to E$ as $I \to \infty$, for some detection error exponent $E \geq 0$. We will hence be interested in computing analytically the error exponent $E$ for edge and cloud detection in order to verify our experimental results for optimal and machine learning-based detection, where the probability of error $P_e$ is used as a performance metric.
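In practice, the exponent $E$ in (8) can be estimated from Monte Carlo simulations by fitting the decay of $-\log P_e$ against $I$. A minimal sketch follows; the numbers are purely illustrative and not from the paper.

```python
import numpy as np

def estimate_error_exponent(I_values, P_e_values):
    """Least-squares fit of -log(P_e) ~ E * I, per (8), from Monte Carlo
    error-probability estimates at several numbers of collections I."""
    I_arr = np.asarray(I_values, dtype=float)
    neg_log_pe = -np.log(np.asarray(P_e_values, dtype=float))
    # Slope of the fitted line through the origin is the exponent estimate.
    return float(np.dot(I_arr, neg_log_pe) / np.dot(I_arr, I_arr))

# Illustrative numbers only: P_e decaying roughly as e^{-0.3 I}.
print(estimate_error_exponent([5, 10, 20, 40], [0.25, 0.05, 2.5e-3, 6e-6]))
```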

In the next two sections, we consider the case in which the model (1)-(4) is available for the design of optimal detection at the edge and cloud, and we describe the resulting detectors and their asymptotic behavior in terms of the error probability, via the error exponent, when $I \to \infty$. Then, in Sec. VI, we study the case in which the detectors need to be learned from data rather than being derived from a mathematical model.

IV Optimal Detection

In this section, we assume that the joint distribution (1)-(4) of the QoI, of the observations, and of the received signal is known, and we detail the corresponding optimal detectors at edge and cloud. The performance of these detectors is evaluated numerically in terms of the probability of error (7) in Sec. VII.

IV-A Optimal Edge Detection

With edge detection, each EN in cell $c$ performs the binary test

$\mathcal{H}_0: \theta_c = 0 \quad \text{versus} \quad \mathcal{H}_1: \theta_c = 1,$   (9)

based on the available received signals $\{\mathbf{y}_c^{(i)}\}_{i=1}^{I}$ in (5). The optimum Bayesian decision rule that minimizes the probability of error at each EN chooses the hypothesis with the Maximum A Posteriori (MAP) probability. Since the hypotheses in (9) are a priori equiprobable, the MAP rule is given by the log-likelihood ratio test

$\log \frac{p\big(\{\mathbf{y}_c^{(i)}\}_{i=1}^{I} \,\big|\, \theta_c = 1\big)}{p\big(\{\mathbf{y}_c^{(i)}\}_{i=1}^{I} \,\big|\, \theta_c = 0\big)} \; \underset{\hat{\theta}_c = 0}{\overset{\hat{\theta}_c = 1}{\gtrless}} \; 0.$   (10)

Using the law of total probability and the i.i.d. property of the received signals across collection intervals $i = 1, \dots, I$, the likelihood can be expressed as

$p\big(\{\mathbf{y}_c^{(i)}\}_{i=1}^{I} \,\big|\, \theta_c\big) = \sum_{\theta_{\bar{c}} \in \{0,1\}} p(\theta_{\bar{c}} \,|\, \theta_c) \prod_{i=1}^{I} p\big(\mathbf{y}_c^{(i)} \,\big|\, \theta_c, \theta_{\bar{c}}\big),$   (11)

where $p(\theta_{\bar{c}} \,|\, \theta_c)$ is the conditional probability of the QoI in the other cell $\bar{c}$, obtained from (2), and $p(\mathbf{y}_c^{(i)} \,|\, \theta_c, \theta_{\bar{c}})$ represents the distribution of the signal (4) received at EN $c$ during interval $i$ under the QoI values $\theta_c$ and $\theta_{\bar{c}}$. This distribution can be written as

$p\big(\mathbf{y}_c^{(i)} \,\big|\, \theta_c, \theta_{\bar{c}}\big) = \prod_{m=1}^{M} \sum_{n \geq 0} \sum_{n' \geq 0} \text{Pois}\big(n; \lambda_{\theta_c}(m)\big)\, \text{Pois}\big(n'; \lambda_{\theta_{\bar{c}}}(m)\big)\, \mathcal{CN}\Big(\big[\mathbf{y}_c^{(i)}\big]_m;\, \sqrt{E_s}\,(n \mu_h + n' \mu_g),\, \sqrt{E_s (n \sigma_h^2 + n' \sigma_g^2) + N_0}\Big),$   (12)

where we have defined

$\lambda_\theta(m) = \lambda\, p_\theta(m).$   (13)

The distribution (12) follows since: (i) conditioned on the numbers $n$ and $n'$ of active devices transmitting waveform $\phi_m(t)$ in cells $c$ and $\bar{c}$, respectively, the distribution of the $m$-th entry of $\mathbf{y}_c^{(i)}$ in (5) is complex Gaussian with mean $\sqrt{E_s}(n \mu_h + n' \mu_g)$ and variance $E_s(n \sigma_h^2 + n' \sigma_g^2) + N_0$; and (ii) by the Poisson thinning property [35], the average number of devices transmitting the $m$-th signal level in cell $c$ under hypothesis $\theta_c = \theta$ is equal to $\lambda p_\theta(m)$.
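A direct, if inefficient, numerical evaluation of the likelihood (12) can truncate the Poisson sums at a maximum number of active devices per level. The Python sketch below assumes the mixture form reconstructed above; all parameter names and default values are illustrative placeholders.

```python
import numpy as np
from scipy.stats import poisson

def log_lik_edge(y, p_theta, p_other, lam, Es=1.0, N0=0.1,
                 mu_h=1.0, sig_h2=0.09, mu_g=0.5, sig_g2=0.09, n_max=30):
    """Log of p(y | theta_c, theta_other) following the Gaussian-mixture
    form (12), truncating each Poisson sum at n_max devices per level."""
    total = 0.0
    ns = np.arange(n_max + 1)
    for m in range(len(y)):
        w_c = poisson.pmf(ns, lam * p_theta[m])   # thinned Poisson, in-cell
        w_o = poisson.pmf(ns, lam * p_other[m])   # thinned Poisson, other cell
        like = 0.0
        for n, wn in zip(ns, w_c):
            for k, wk in zip(ns, w_o):
                mean = np.sqrt(Es) * (n * mu_h + k * mu_g)
                var = Es * (n * sig_h2 + k * sig_g2) + N0
                # Complex Gaussian density of the m-th matched-filter output.
                like += wn * wk * np.exp(-abs(y[m] - mean) ** 2 / var) / (np.pi * var)
        total += np.log(like + 1e-300)  # guard against log(0)
    return total
```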

IV-B Optimal Cloud Detection

The cloud tackles the quaternary hypothesis testing problem of distinguishing among the hypotheses $(\theta_1, \theta_2) \in \{0, 1\}^2$ on the basis of the quantized signals received from both ENs over the fronthaul links. The optimal test for deciding among multiple hypotheses is the Bayes MAP rule, which chooses the hypothesis by solving the problem

$(\hat{\theta}_1, \hat{\theta}_2) = \arg\max_{(\theta_1, \theta_2) \in \{0,1\}^2} \; p(\theta_1, \theta_2)\; p\big(\{\hat{\mathbf{y}}_1^{(i)}, \hat{\mathbf{y}}_2^{(i)}\}_{i=1}^{I} \,\big|\, \theta_1, \theta_2\big),$   (14)

where the first term represents the prior probability (2) of the hypothesis $(\theta_1, \theta_2)$, while the second term represents the distribution of the compressed signals sent on the fronthaul links. This is derived next.

Following a by now standard approach, see, e.g., [15][36], the impact of fronthaul quantization is modeled as an additional quantization noise. In particular, the signal received at the cloud from EN $c$ can be written accordingly as

$\hat{\mathbf{y}}_c^{(i)} = \mathbf{y}_c^{(i)} + \mathbf{q}_c^{(i)},$   (15)

where $\mathbf{q}_c^{(i)}$ represents the quantization noise vector. As in most prior references (see, e.g., [15][36]), the quantization noise vector $\mathbf{q}_c^{(i)}$ is assumed to have i.i.d. elements, normally distributed with zero mean and variance $\sigma_q^2$. Furthermore, from rate-distortion theory, the fronthaul capacity constraint implies the following inequality [36] for each EN $c$:

$I\big(\mathbf{y}_c^{(i)}; \hat{\mathbf{y}}_c^{(i)}\big) \leq M C.$   (16)

This is because the number of bits available to transmit each measurement is given by $C$ bits per symbol, or equivalently per orthogonal spectral resource, that is, $MC$ bits in total for all $M$ resources. From (16), one can in principle derive the quantization noise power $\sigma_q^2$.

Evaluating the mutual information in (16) directly is, however, made difficult by the non-Gaussianity of the received signals $\mathbf{y}_c^{(i)}$. To tackle this issue, we bound the mutual information term in (16) using the property that the Gaussian distribution maximizes the differential entropy under covariance constraints [34], obtaining the following result.

Lemma 1: The quantization noise power can be upper bounded as $\sigma_q^2 \leq \bar{\sigma}_q^2$, where $\bar{\sigma}_q^2$ is obtained by solving the non-linear equation

$\sum_{m=1}^{M} \log_2 \left(1 + \frac{\sum_{\theta, \theta' \in \{0,1\}} p(\theta_c = \theta, \theta_{\bar{c}} = \theta')\, \Sigma_{m,m}^{\theta, \theta'}}{\bar{\sigma}_q^2}\right) = M C,$   (17)

where

$\Sigma_{m,m}^{\theta, \theta'} = E_s \lambda \big[p_\theta(m)\,(\mu_h^2 + \sigma_h^2) + p_{\theta'}(m)\,(\mu_g^2 + \sigma_g^2)\big] + N_0$   (18)

are the diagonal elements of the covariance matrix of $\mathbf{y}_c^{(i)}$ when $\theta_c = \theta$ and $\theta_{\bar{c}} = \theta'$.

Proof: See Appendix A for details. ∎
Using Lemma 1, the distribution of the received signal in (14) can be evaluated as in (12), but with a variance of $N_0 + \bar{\sigma}_q^2$ in lieu of $N_0$ for each cell $c$.
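Since the left-hand side of (17) is monotonically decreasing in the quantization noise power, the equation can be solved by simple bisection. The sketch below assumes the Gaussian-bound form of (17) given above, with the hypothesis-averaged diagonal covariance entries supplied as input; all values are illustrative.

```python
import numpy as np

def quantization_noise_power(C, Sigma_diag, tol=1e-9):
    """Bisection for the noise power solving
    sum_m log2(1 + Sigma_mm / sigma_q^2) = M * C,
    i.e., the Gaussian upper bound on the fronthaul constraint (16)-(17)."""
    Sigma_diag = np.asarray(Sigma_diag, dtype=float)
    M = Sigma_diag.size
    f = lambda s2: np.sum(np.log2(1.0 + Sigma_diag / s2)) - M * C
    # f is decreasing in s2: f(lo) > 0 > f(hi) for any C > 0.
    lo, hi = 1e-12, 1e12
    while hi - lo > tol * (1.0 + hi):
        mid = np.sqrt(lo * hi)  # geometric bisection suits the wide range
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative: M = 4 resources, averaged covariance diagonal, C = 2 bits/symbol.
print(quantization_noise_power(C=2.0, Sigma_diag=[1.0, 0.8, 0.6, 0.4]))
```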

V Asymptotic Performance

In this section, we derive the error exponent $E$ in (8) for the optimal detectors discussed in Sec. IV when the number $I$ of collection intervals grows to infinity. In order to simplify the analysis, as in [10], we adopt the assumption of a large average number of active devices, i.e., of large $\lambda$. This regime is practically relevant for scenarios such as massive Machine-Type Communication (mMTC) systems, with their large density of devices [4]. In Sec. VII, we further validate the approach by means of numerical results for smaller values of $\lambda$ and $I$.

V-A Edge Detection

The error exponent in (8) under edge detection can be lower bounded as shown in the following proposition.

Proposition 1: Under the optimal Bayesian detector (10), the error exponent $E_{\text{edge}}$ in (8) in the large-$\lambda$ regime and for any $\rho$ is lower bounded as $E_{\text{edge}} \geq E_{\text{edge}}^{lb}$, where

(19)

with mean parameters

$\mu_m^{\theta, \theta'} = \sqrt{E_s}\, \lambda \big[p_\theta(m)\, \mu_h + p_{\theta'}(m)\, \mu_g\big],$   (20)

and with $\Sigma_{m,m}^{\theta, \theta'}$ given in (18) for $\theta, \theta' \in \{0, 1\}$ and $m = 1, \dots, M$.

Proof: In a manner similar to [10, Theorem 3], the proof relies on the Central Limit Theorem (CLT) with a random number of summands [34, p. 369] and on the error exponent for optimal binary Bayesian detection based on the Chernoff information [34]. We refer to Appendix B for details. ∎

The term in (19) being optimized over corresponds to the Chernoff information [34, Chapter 11] for the binary test between the distributions of the received signal under hypotheses $\theta_c = 0$ and $\theta_c = 1$ when $\theta_{\bar{c}} = \theta'$. In fact, for large values of $\lambda$, when $\theta_c = \theta$ and $\theta_{\bar{c}} = \theta'$, the received signal $\mathbf{y}_c^{(i)}$ in (5) can be shown to be approximately distributed as $\mathcal{CN}(\boldsymbol{\mu}^{\theta, \theta'}, \boldsymbol{\Sigma}^{\theta, \theta'})$, with mean vector $\boldsymbol{\mu}^{\theta, \theta'}$ whose elements are given in (20) and diagonal covariance matrix $\boldsymbol{\Sigma}^{\theta, \theta'}$ with diagonal elements $\Sigma_{m,m}^{\theta, \theta'}$ in (18).
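The Chernoff information between two Gaussian approximations of this kind can be computed numerically with a one-dimensional search over the Chernoff parameter $s$, using the closed-form Chernoff $s$-divergence between Gaussians. A sketch follows, using real-valued Gaussians for simplicity and purely illustrative inputs.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_information(mu0, mu1, S0, S1):
    """Chernoff information (in nats) between N(mu0, S0) and N(mu1, S1),
    via the closed-form Chernoff s-divergence and a 1-D search over s."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    S0, S1 = np.asarray(S0, float), np.asarray(S1, float)
    d = mu1 - mu0

    def neg_div(s):
        Ss = (1 - s) * S0 + s * S1
        quad = 0.5 * s * (1 - s) * d @ np.linalg.solve(Ss, d)
        logdet = 0.5 * (np.linalg.slogdet(Ss)[1]
                        - (1 - s) * np.linalg.slogdet(S0)[1]
                        - s * np.linalg.slogdet(S1)[1])
        return -(quad + logdet)  # negated for minimization

    res = minimize_scalar(neg_div, bounds=(1e-6, 1 - 1e-6), method='bounded')
    return -res.fun

# Illustrative two-level example with diagonal covariances.
print(chernoff_information([0.0, 1.0], [1.0, 0.0],
                           np.diag([0.5, 0.5]), np.diag([0.7, 0.7])))
```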

V-B Cloud Detection

Here, we analyze the performance of joint detection at the cloud, described in (14), in terms of the error exponent $E_{\text{cloud}}$.

Proposition 2: Under the optimal detector (14), the error exponent $E_{\text{cloud}}$ in (8) in the large-$\lambda$ regime for cloud detection can be lower bounded as $E_{\text{cloud}} \geq E_{\text{cloud}}^{lb}$, where

(21)

in which the mean vector $\boldsymbol{\mu}^{\theta_1, \theta_2}$ of the overall signal received at the cloud is defined as

(22)

in terms of the elements $\mu_m^{\theta, \theta'}$ defined in (20), and the covariance matrix $\boldsymbol{\Sigma}^{\theta_1, \theta_2}$ is given as

(23)

where $\Sigma_{m,m}^{\theta, \theta'}$ is defined in (18) and all other entries of the matrix are zero.

Proof: The proof follows in a manner similar to that of Proposition 1, as detailed in Appendix C. ∎

The term in (21) being optimized over corresponds to the Chernoff information for the binary test between the distributions of the signal received at the cloud under two hypotheses $(\theta_1, \theta_2)$ and $(\theta_1', \theta_2')$. As for edge detection, the signal received at the cloud under hypothesis $(\theta_1, \theta_2)$ is approximately distributed as $\mathcal{CN}(\boldsymbol{\mu}^{\theta_1, \theta_2}, \boldsymbol{\Sigma}^{\theta_1, \theta_2})$, where the elements of the mean vector and of the covariance matrix are described in (22) and (23). Note that, by (23), the signals received from cells 1 and 2 are correlated, when conditioned on any hypothesis $(\theta_1, \theta_2)$, if the channels have non-zero means.

VI Edge and Cloud Learning

In the previous sections, we have assumed that the ENs and the cloud are aware of the joint distribution (1)-(4) of the QoIs, observations, and received signals. As a result, the conditional distributions $p(\{\mathbf{y}_c^{(i)}\}_{i=1}^{I} \,|\, \theta_c)$ are known at each EN, and the distributions $p(\{\hat{\mathbf{y}}_1^{(i)}, \hat{\mathbf{y}}_2^{(i)}\}_{i=1}^{I} \,|\, \theta_1, \theta_2)$ are known at the cloud, for all values of the QoIs. These distributions are needed in order to implement the optimal detectors (10) and (14) at the edge and cloud, respectively. In contrast, in this section, we assume lack of knowledge of the aforementioned distributions and use data-driven, learning-based techniques in order to train edge and cloud detectors. The performance of these detectors is evaluated in terms of the probability of error $P_e$, and it is compared with the optimal detectors' performance in Sec. VII.

VI-A Edge Learning

In order to enable the training of a binary classifier at each EN $c$, we assume the availability of a labeled training set for supervised learning. This data set is defined by $L$ i.i.d. examples $\{(\mathbf{y}_c[l], \theta_c[l])\}$ for $l = 1, \dots, L$, where $\mathbf{y}_c[l]$ is the vector of observations at EN $c$, which is distributed according to the unknown conditional distribution $p(\mathbf{y}_c \,|\, \theta_c)$, and $\theta_c[l]$ is the corresponding binary QoI. Any binary classifier can be trained based on this data set in order to generalize the mapping between input $\mathbf{y}_c$ and output $\theta_c$ outside the training set. For illustration, we consider a feedforward neural network, which is described through the functional relations (see, e.g., [37][38])

$\mathbf{h}_t = f(\mathbf{W}_t \mathbf{h}_{t-1}), \quad t = 1, \dots, T, \qquad p(\theta_c = 1 \,|\, \mathbf{y}_c) = \sigma(\mathbf{w}^T \mathbf{h}_T),$   (24)

where $T$ is the number of hidden layers; $\mathbf{h}_t$ represents the vector of outputs of the $t$-th hidden layer, with weight matrix $\mathbf{W}_t$, for $t = 1, \dots, T$; $\mathbf{w}$ is the vector of weights for the last layer; $f(\cdot)$ is a non-linear function, here taken to be the hyperbolic tangent [38]; $\sigma(\cdot)$ is the sigmoid function; and we have $\mathbf{h}_0 = \mathbf{y}_c$ as the input of the neural network. The output of the neural network provides the probability that the QoI is equal to 1 for the given weights. The neural network is trained to minimize the cross-entropy loss via the backpropagation algorithm. Details of this standard procedure can be found, e.g., in [37][38].
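As an illustration of the edge classifier, the sketch below trains a small feedforward network with tanh hidden units and a cross-entropy (log-loss) objective, as in (24), using scikit-learn in place of the MATLAB toolbox used in Sec. VII; the synthetic features merely stand in for the matched-filter outputs and are not the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

# Synthetic stand-in for the training set {(y_c[l], theta_c[l])}: the real
# features would be the (stacked) matched-filter outputs over I collections.
L, feat_dim = 2000, 8
theta = rng.integers(0, 2, size=L)
X = rng.standard_normal((L, feat_dim)) + theta[:, None] * 0.8  # class-dependent shift

# Feedforward network as in (24): tanh hidden layers, sigmoid output,
# trained on the cross-entropy (log) loss by backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), activation='tanh',
                    max_iter=500, random_state=0)
clf.fit(X, theta)
print("training accuracy:", clf.score(X, theta))
```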

VI-B Cloud Learning

Unlike the ENs, the cloud needs to train a multi-class classifier in order to distinguish among the four hypotheses $(\theta_1, \theta_2) \in \{0, 1\}^2$. To enable supervised learning, we assume the availability of a labelled training set defined by $L$ i.i.d. examples $\{(\mathbf{y}[l], \theta_1[l], \theta_2[l])\}$ for $l = 1, \dots, L$, where $\mathbf{y}[l]$ is the vector of observations at the cloud, which is distributed according to the unknown joint distribution $p(\mathbf{y} \,|\, \theta_1, \theta_2)$, and $(\theta_1[l], \theta_2[l])$ are the QoIs of the two cells. While any multi-class classifier can be used, here we consider a classifier based on a neural network as discussed above. Unlike the classifier in (24), the cloud-based classifier contains four output neurons, with each neuron representing the probability of one of the four hypotheses. The output layer is defined as in (24) but with a softmax non-linearity in lieu of the sigmoid [37][38]. Training is carried out by optimizing the cross-entropy criterion.
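A corresponding sketch of the cloud classifier follows: the four hypotheses are encoded as class labels 0-3, and scikit-learn's MLPClassifier applies a softmax output layer in the multi-class case; again, the data is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

# Synthetic stand-in for the cloud training set: labels encode the four
# hypotheses (theta_1, theta_2) in {0,1}^2 as classes 0..3.
L, feat_dim = 4000, 16
labels = rng.integers(0, 4, size=L)
class_means = rng.standard_normal((4, feat_dim))
X = rng.standard_normal((L, feat_dim)) + class_means[labels]

# Same architecture as the edge classifier, but with four output classes;
# scikit-learn uses a softmax output layer for multi-class problems.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), activation='tanh',
                    max_iter=500, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```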

VII Numerical Results

In this section, we discuss the performance of edge- and cloud-based detection and learning as a function of different system parameters, such as the inter-cell interference strength and the fronthaul capacity, through numerical examples. For the optimal detectors described in Sec. IV, which require knowledge of the measurement and channel models, we consider both the analytical performance in terms of the error exponent derived in Sec. V and the performance in the regime of a finite number of observations, evaluated via Monte Carlo simulations. For the learning-based solutions, we evaluate the performance under the system model discussed in Sec. II in order to ensure a fair comparison with the model-based solutions.

The system contains two cells, as illustrated in Fig. 2. Unless specified otherwise, we set the system parameters as follows: average number $\lambda$ of active devices per cell; average SNR; direct channel parameters $(\mu_h, \sigma_h^2)$; inter-cell channel parameters $(\mu_g, \sigma_g^2)$; correlation $\rho$ between the QoIs in the two cells; and number $M$ of observation levels. Furthermore, the conditional distributions $p_0(s)$ and $p_1(s)$ of the observations are the same for both cells. Note that, under QoI value $\theta = 0$, devices in both cells tend to make measurements with small values $s$, while the opposite is true under QoI value $\theta = 1$. For example, value $\theta = 0$ may represent a low pollution level or temperature.

Fig. 3: Error exponent for edge and cloud detection as a function of the inter-cell power gain.

Asymptotic analysis: In Fig. 3, we plot the error exponent derived in Sec. V for both edge and cloud detection as a function of the inter-cell power gain. The performance of edge detection is seen to degrade, i.e., the error exponent decreases, as the inter-cell gain increases. This is due to the fact that the QoI in the other cell may differ, with non-zero probability, from the QoI in the given cell. When this happens, signals sent from devices in the other cell create interference at the EN in the given cell. In contrast, the performance of cloud detection depends on the inter-cell power gain in a more complex fashion that is akin to the behavior of the sum-rate in cellular systems with cloud-based decoding [16]. In fact, joint detection at the cloud treats the signals received by both ENs as useful. Therefore, as long as the inter-cell interference power is large enough, having an additional signal path to the cloud through the other EN can improve the detection performance. This is not the case for smaller values of the inter-cell gain, for which the potentially deleterious effect of inter-cell interference is not compensated by the benefits that joint decoding accrues on the detection of the QoI of the other cell.

In Fig. 3, the performance of cloud detection is also seen to depend strongly on the value of the fronthaul capacity $C$. When $C$ is small enough, making the fronthaul quantization noise significant, cloud detection can in fact be outperformed by edge detection. In contrast, if $C$ is sufficiently large, edge and cloud detection have the same performance when the inter-cell gain is small, in which case no benefits can be accrued via joint decoding at the cloud, but cloud detection can vastly outperform edge detection when the inter-cell gain is large enough.

Fig. 4: Error exponent for edge and cloud detection as a function of the fronthaul capacity $C$.

The role of the fronthaul capacity in determining the relative performance of edge and cloud detection is further explored in Fig. 4, where we plot the error exponent as a function of the fronthaul capacity $C$ for two different values of the SNR. Consistently with the discussion above, the cloud's detection performance is observed to improve with the fronthaul capacity, outperforming edge detection for large enough $C$. Furthermore, the threshold value of $C$ at which cloud detection outperforms edge detection is seen to be low.

Fig. 5: Probability of error for edge and cloud detection as a function of the inter-cell power gain.

Probability of error for optimal detection: We now validate the results of the analysis by evaluating the probability of error of the optimal detectors described in Sec. IV via Monte Carlo simulations. We start in Fig. 5 by plotting the probability of error as a function of the inter-cell power gain. In a manner consistent with the analytical results illustrated in Fig. 3, the probability of error for edge detection with non-orthogonal frequency reuse is seen to increase as the interference power increases. In contrast, for cloud detection, the probability of error first grows with the inter-cell gain for smaller values of the gain, and then it gradually decreases for higher values, as the inter-cell signals become beneficial for joint detection at the cloud.

In Fig. 5, we also compare the performance of non-orthogonal frequency reuse across the two cells, which has been assumed thus far, with that of orthogonal frequency reuse. For edge detection, orthogonal frequency reuse outperforms non-orthogonal frequency reuse for high inter-cell interference power, in which regime the gain of having more radio resources under the non-orthogonal reuse scheme is outweighed by the absence of interference under the orthogonal scheme. In contrast, for cloud detection, for high enough inter-cell power, the inter-cell signals become useful thanks to joint decoding, and thus non-orthogonal frequency reuse outperforms orthogonal frequency reuse.

Fig. 6: Probability of error for optimal edge and cloud detection as a function of the fronthaul capacity $C$.
Fig. 7: Probability of error for edge and cloud detection, using both learning and optimal detection, as a function of the correlation $\rho$ between the QoIs in the two cells.

We now study the impact of the fronthaul capacity by plotting the probability of error for optimal edge and cloud detection as a function of $C$ in Fig. 6. Confirming the discussion based on the asymptotic analysis of Fig. 4, we observe that the probability of error for optimal cloud detection decreases as a function of the fronthaul capacity and that, for a large enough value of $C$, cloud detection is able to outperform edge detection.

Since the asymptotic analysis is insensitive to the value of the QoI correlation parameter $\rho$, in Fig. 7, we evaluate the impact of $\rho$ by studying the probability of error as a function of $\rho$ for both optimal edge and cloud detection. For $\rho = 0$, the QoIs in the two cells have opposite values with probability one. Therefore, given the large value of the inter-cell gain, the signals received at the ENs are close to being statistically indistinguishable under the two possible hypotheses $\theta_c = 0$ and $\theta_c = 1$. In contrast, when $\rho$ increases, the correlation between the QoIs in the two cells increases, i.e., $\theta_1$ and $\theta_2$ are more likely to have the same value. In this case, the inter-cell signals are likely to carry information about the same QoI value, which decreases the probability of error for both cloud and edge detection.

Fig. 8: Probability of error for edge and cloud detection using learning as a function of the training set size $L$.

Edge and cloud learning: We now evaluate the performance of learning-based detection as a function of the size of the available training set. Training is done using scaled conjugate gradient backpropagation on the cross-entropy loss, as proposed in [39] and implemented in MATLAB's Deep Learning Toolbox (https://www.mathworks.com/products/deep-learning.html), with a fixed learning rate. In Fig. 8, we plot the probability of error for both edge and cloud detection using the optimal and learning-based detection techniques as a function of the training set size $L$. For both edge and cloud detection, the probability of error decreases as a function of the training set size until it closely approximates the optimal detector's probability of error. The key observation in Fig. 8 is that the probability of error for cloud learning converges to the optimal error faster than that of edge learning. Even though the cloud detector tackles a quaternary hypothesis testing problem, its operation on a larger domain space makes it easier to train an effective detector. This is particularly the case for large correlation coefficients $\rho$, since a large $\rho$ implies that two hypotheses, namely $(\theta_1, \theta_2) = (0, 0)$ and $(1, 1)$, have a significantly higher prior probability than the remaining two hypotheses.

VIII Conclusions and Extensions

This paper has considered the problem of detecting correlated Quantities of Interest (QoIs) in a multi-cell Fog-Radio Access Network (F-RAN) architecture. An information-centric grant-free access scheme has been proposed that combines Type-Based Multiple Access (TBMA) [10] with an inter-cell non-orthogonal frequency reuse scheme. For this scheme, detecting the QoIs at the cloud via a fronthaul-aided network architecture was found to be advantageous over separate edge detection for high enough fronthaul capacity in the presence of sufficiently large inter-cell power gains. This is because, thanks to TBMA, cloud detection can benefit from inter-cell interference via joint decoding when the correlation between the QoIs of different cells is high enough. The latter observation was also verified analytically in the asymptotic regime in which the number of measurement collections from the devices goes to infinity. Under the same conditions, cloud detection was seen via numerical results to outperform edge detection even without model information, in the presence of limited data used for supervised learning.

Finally, the proposed protocol can be implemented by using the random access preambles from the standard cellular protocols. Hence, this form of TBMA changes only the interpretation of those preambles, which means that it can be implemented without intervention on the physical layer of the existing IoT devices.

Some extensions and open problems are discussed next. First, it would be interesting to consider QoIs with more than two values and multi-cell networks with more than two cells. The analysis of these scenarios follows directly from the derivations in this paper, at the cost of a significantly more cumbersome notation. More fundamentally, it would be relevant to study the design of optimized quantizers mapping analog observations to the discrete levels used for grant-free access.

Another interesting direction of research, following [5][33], is to consider the coexistence of IoT devices with other 5G services, most notably eMBB and URLLC. While orthogonal resource allocation among services would yield separate design problems, non-orthogonal multiple access across different services was found to be advantageous in [5] [33]. As a brief note on this problem, in contrast with the sporadic and short IoT transmissions, eMBB transmissions typically span multiple time slots [40]. Accordingly, from each IoT device point of view, eMBB signals may be treated as an additional source of noise. However, IoT signals may be decoded and cancelled prior to eMBB decoding [5]. Like IoT traffic, URLLC traffic is instead typically sporadic and hard to predict. Detectors should hence be designed in order to adapt to the possible presence of URLLC signals. As for URLLC transmissions, the key issue is guaranteeing high reliability despite interference from IoT signals.

Appendix A: Proof of Lemma 1

The mutual information term in (16) can be written as

$I\big(\mathbf{y}_c^{(i)}; \hat{\mathbf{y}}_c^{(i)}\big) = h\big(\hat{\mathbf{y}}_c^{(i)}\big) - h\big(\hat{\mathbf{y}}_c^{(i)} \,\big|\, \mathbf{y}_c^{(i)}\big) = h\big(\hat{\mathbf{y}}_c^{(i)}\big) - M \log_2(\pi e \sigma_q^2),$   (25)

where the second equality follows from the assumption that the quantization noises are Gaussian and independent across all observations. The first term in (25) can be bounded as

$h\big(\hat{\mathbf{y}}_c^{(i)}\big) \leq \log_2 \det\big(\pi e\, (\boldsymbol{\Sigma}_c + \sigma_q^2 \mathbf{I})\big),$   (26)

where $\boldsymbol{\Sigma}_c$ is the covariance matrix of vector $\mathbf{y}_c^{(i)}$. The inequality follows from the property of the Gaussian distribution of maximizing the differential entropy under a covariance constraint [34]. Using the law of iterated expectations, the covariance can be written as

$\boldsymbol{\Sigma}_c = \sum_{\theta, \theta' \in \{0,1\}} p(\theta_c = \theta, \theta_{\bar{c}} = \theta')\, \boldsymbol{\Sigma}^{\theta, \theta'},$   (27)

where the matrices $\boldsymbol{\Sigma}^{\theta, \theta'}$ are diagonal and represent the covariance matrices of $\mathbf{y}_c^{(i)}$ when the hypotheses $\theta_c = \theta$ and $\theta_{\bar{c}} = \theta'$ hold, as defined in Proposition 1. This concludes the proof. ∎

Appendix B: Proof of Proposition 1

From the union bound over the error events of the two cells, together with the fact that the exponent of a sum of exponentially decaying terms is dominated by the slowest-decaying term, we directly obtain the lower bound on the error exponent

(28)

where $E_c$ is the error exponent for the detection of QoI $\theta_c$, conditioned on the QoI of the other cell. Under the optimal Bayesian detector (10), the detection error exponent is given by the Chernoff information [34, Chapter 11] as

$E_c = C(p_0, p_1) = \max_{0 \leq s \leq 1} \; -\log \int p_0(\mathbf{y})^{s}\, p_1(\mathbf{y})^{1-s}\, d\mathbf{y},$   (29)

where we have denoted $p_\theta(\mathbf{y}) = p(\mathbf{y}_c^{(i)} \,|\, \theta_c = \theta)$ for brevity. Computing the error exponent in (29) requires finding the distributions $p_\theta$ for $\theta \in \{0, 1\}$. Following [10], these can be approximated by Gaussian distributions in the regime of large $\lambda$ thanks to the Central Limit Theorem (CLT) with a random number of summands [35, p. 369]. In particular, referring to [10] for details, we can conclude that, when $\lambda$ is large, the conditional distribution of $\mathbf{y}_c^{(i)}$ given $\theta_c = \theta$ and $\theta_{\bar{c}} = \theta'$ tends to the complex Gaussian distribution $\mathcal{CN}(\boldsymbol{\mu}^{\theta, \theta'}, \boldsymbol{\Sigma}^{\theta, \theta'})$, with mean vector and covariance matrix as given in (20) and (18).