The advent of the fifth generation (5G) of wireless systems opens up new possibilities and gives rise to new use cases with stringent reliability requirements, e.g., the Ultra-Reliable Low-Latency Communication (URLLC) paradigm. Some examples are: factory automation, where the maximum error probability should be around ; smart grids; professional audio; etc. Meeting such requirements is not an easy task, and various diversity sources are usually necessary in order to attain the ultra-reliability region. The problem becomes even more complicated if stringent delay constraints have to be satisfied (in general, there is a fundamental trade-off between delay and reliability metrics, since relaxing one of them allows enhancing the performance of the other; in fact, Long-Term Evolution (LTE) already offers guaranteed bit rates that can support packet error rates down to , but the delay budget goes up to ms including radio, transport and core network latencies, which is impractical for many real-time applications), and/or if power consumption is limited, as is the case in systems of low-power devices such as sensors or tiny actuators. The interplay between these diverse requirements makes the physical-layer design of such systems very complicated.
In , the authors outline the key technical requirements and architectural approaches pertaining to wireless access protocols, radio resource management, next-generation core networking capabilities, edge cloud, and edge artificial intelligence, and propose first avenues for specific solutions to enable the Tactile Internet revolution. The trade-off between reliability, throughput, and latency when transmitting short packets in a multi-antenna setup is identified in . Moreover, the authors present bounds that allow determining the optimal number of transmit antennas and the optimal number of time-frequency diversity branches that maximize the rate. Shared diversity resources are explored in depth in  for the case where multiple connections are only intermittently active, while cooperative communications are also considered in the literature, e.g., in , and in [11, 12] for wireless-powered communications, as a viable alternative to direct-communication setups.
Intelligent resource allocation strategies are of paramount importance to provide efficient ultra-reliable communications. In , the network availability for supporting the quality of service of users is investigated, and some tools for resource optimization addressing the delay and packet-loss components of URLLC are presented. Energy-efficient design of fog computing networks supporting Tactile Internet applications is the focus of the research in , where the workload is allocated so as to minimize the response time under the given power-efficiency constraints of the fog nodes; in , the authors propose a resource management protocol that meets the stringent delay and reliability requirements while minimizing the bandwidth usage. A power control protocol is presented in  for a single-hop ultra-reliable wireless-powered system, and the results show its enormous impact on system performance in terms of error probability and energy consumption. The minimum energy required to transmit information bits over a Rayleigh block-fading channel in a multi-antenna setup with no interference and a given reliability is investigated in . On the other hand, link adaptation optimization through an adaptive modulation and coding scheme, considering errors in both data and feedback channels, is proposed in , where the authors show that the performance of their scheme approximates the optimal one. An energy-efficient power allocation strategy for the Chase Combining (CC) Hybrid Automatic Repeat Request (HARQ) and Incremental Redundancy (IR) HARQ setups is suggested in  and , respectively, allowing any outage probability target to be reached in the finite-blocklength regime. In , a detailed analysis of the effective energy efficiency for delay-constrained networks in the finite-blocklength regime is presented, and the optimum power allocation strategy is found.
Results reveal that Shannon's model underestimates the optimum power when compared to the exact finite-blocklength model. The authors of  formulate a joint power control and discrete rate adaptation problem with the objective of minimizing the time required for the concurrent transmission of a set of sensor nodes while satisfying their delay, reliability, and energy consumption requirements. In , we focused on the rate allocation problem in downlink cellular networks with Rayleigh fading and stringent reliability constraints. The allocated rate depends on the target reliability, the average statistics of the signal and interference, and the number of antennas available at the receiver side. We showed the feasibility of ultra-reliable operation as the number of antennas increases, and also that the results remain valid even under stringent delay constraints as long as the amount of information to be transmitted is not too small. The rate allocation strategy is extended to downlink Non-Orthogonal Multiple Access (NOMA) scenarios in , where we derive the conditions under which NOMA outperforms the Orthogonal Multiple Access (OMA) alternative. Additionally, we discuss the optimum strategies for the 2-user NOMA setup under equal-rate or maximum sum-rate goals.
In this paper we further develop  by generalizing some of its main results to the case where the transmit power is an additional degree of freedom that is exploited to meet the reliability requirements while maximizing the energy efficiency of the system. We therefore focus on joint power control and rate allocation strategies that maximize the system energy efficiency in ultra-reliable systems with multiple antennas at the receiver side, i.e., a Single-Input Multiple-Output (SIMO) system. We make no distinction between uplink and downlink, although SIMO setups match uplink scenarios much better, since the receiver there is usually equipped with better hardware capabilities, e.g., data aggregators/gateways or base stations in cellular communications (note that some URLLC applications, e.g., the Tactile Internet, may require the joint design of downlink and uplink communications; such analysis is out of the scope of this paper, but as future work we intend to extend our results to the Multiple-Input Multiple-Output (MIMO) scenario while considering this joint design). The system is composed of an ultra-reliable link under Rayleigh fading that is interfered by multiple transmitters operating in the neighborhood, thus differing from the setups analyzed in [14, 17, 18, 19, 20, 23, 25]. The main contributions of this work can be listed as follows:
we propose a joint power control and rate allocation scheme that meets the stringent reliability constraints of the system while maximizing the energy efficiency. The allocated resources depend only on the target reliability, the average statistics of the signal and interference, and the number of antennas available at the receiver side. In addition to the Selection Combining (SC) and Maximum Ratio Combining (MRC) schemes, and differently from , we also consider the Switch-and-Stay Combining (SSC) technique; moreover, we make no distinction between uplink and downlink, and our goal is to maximize the energy efficiency of the system by adjusting both the transmit power and the rate;
we attain accurate closed-form approximations for the resources to be allocated, i.e., the optimum transmit power and rate, when the receiver operates with the SC, SSC or MRC scheme;
we show that the optimum transmit rate and power are smaller under SSC than under SC, and that the gap ratio tends to be inversely proportional to the square root of a linear function of the number of antennas at the receiver; nevertheless, such an allocation always provides positive gains in energy efficiency;
we show the superiority of MRC over SC in terms of energy efficiency, since it allows operating with a greater transmit rate and a smaller transmit power. We prove that the performance gap between the optimum allocated resources for these schemes in the asymptotic ultra-reliable regime, where the outage probability tends to 0, converges to . Meanwhile, MRC was also shown to be more energy efficient than SSC in most cases; this fails only when operating with an extremely large , an extremely high average signal power or extremely small average interference power, and/or a highly power-consuming receiving circuitry;
we show that the greater the fixed power consumption and/or the drain efficiency of the transmit amplifier, the greater the optimum transmit power and rate. However, the energy efficiency decreases with the power consumption and increases with the drain efficiency. Numerical results also show the feasibility of ultra-reliable operation as the number of antennas increases.
Next, Section II overviews the system model and assumptions. Section III introduces the performance metrics and the optimization problem, while in Section IV we characterize the Signal-to-Interference Ratio (SIR) distribution for each of the receive combining schemes. In Section V we find the resource allocation strategy that maximizes the system energy efficiency subject to stringent reliability constraints. Finally, Section VI presents the numerical results and Section VII concludes the paper.
Boldface lowercase letters denote vectors, e.g., , where  is the -th element of .  is a Lomax random variable with Probability Density Function (PDF)  and CDF , while  is a Pareto I random variable with PDF .  is the probability of event A,  denotes expectation, and  denotes the largest integer that does not exceed its argument. Also,  is the inverse Q-function,  is the incomplete gamma function, and  is the main branch of the Lambert W function, which satisfies  for  and is defined in .
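Since the main branch of the Lambert W function reappears in the closed-form solutions later on, a minimal numerical sketch of its defining property may help; `scipy.special.lambertw` is used here as one possible implementation, and the sample points are purely illustrative:

```python
import numpy as np
from scipy.special import lambertw

def W0(x):
    """Main branch of the Lambert W function, real-valued for x >= -1/e."""
    return np.real(lambertw(x, k=0))

# Defining property of the main branch: W(x) * exp(W(x)) = x for x >= -1/e.
for x in [-0.3, 0.0, 1.0, 10.0]:
    w = W0(x)
    assert abs(w * np.exp(w) - x) < 1e-10
```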
II System Model
Consider the scenario in Fig. 1, where a collection of nodes, , is spatially distributed in a given area, all using the same spectrum resources, e.g., time and frequency, when transmitting to their corresponding receivers. We focus on the performance of link , which we refer to as the typical link, and denote by  and  its transmitter and receiver nodes, respectively; the transmit rate is denoted as .
Meanwhile,  with  denotes each of the interfering links.
We assume a SIMO setup where  is equipped with  antennas, sufficiently separated so that the fading affecting the received signal at each antenna can be assumed independent, and Channel State Information (CSI) is available at  ( may send some pilot symbols as overhead when transmitting to  so that the latter is able to estimate the CSI; this overhead can be accounted for as part of the constraint  (check Section III), and although we assume perfect CSI, imperfections may be modeled as a loss in the SIR as in [26, 12]). Hence, full gain from spatial diversity can be attained (diversity is an important building block for supporting URLLC, and herein we focus simply on spatial diversity, taking advantage of the multiple receive antennas; other diversity sources such as frequency, time and/or polarization could also be available, and our results and methodology can easily be re-utilized/extended to cover such scenarios). In particular, one of the following combining schemes is utilized at :
SC: The combiner outputs the signal from the antenna branch with the highest SIR. Since only one branch is used at a time, SC could require just one receiver that is switched into the active antenna branch. However, a dedicated receiver on each branch may be needed for systems that transmit continuously, in order to simultaneously and continuously monitor the SIR on each branch. In this work we refer specifically to the latter SC implementation. Notice that with SC the resulting SIR equals the maximum SIR over all branches;
SSC: This scheme strictly avoids the need for a dedicated receiver on each branch, thus reducing the power consumption, by scanning the branches in sequential order and outputting the first signal whose SIR is above a threshold. Once a branch is chosen, the combiner outputs its signal as long as the SIR on that branch remains above the desired threshold; when the SIR on the selected branch falls below the threshold, the combiner switches to another branch;
MRC: The combiner outputs a weighted sum of the signals coming from all branches. We assume that  can also perfectly estimate the interference power level on every branch; the optimum combining weight for each branch is then obtained by correcting the phase mismatch of the received signal and scaling it by the interference level. In this case the resulting SIR equals the sum of the SIRs on each branch.
We focus our attention on the above combining schemes; other possibilities include Equal Gain Combining (EGC), which co-phases the signals on each branch and then combines them with equal weights, and several hybrid schemes. In general, these schemes are easier to implement than MRC but perform slightly worse in terms of reliability (for instance, the error performance of EGC typically exhibits less than 1 dB of power penalty compared to MRC). In any case, such schemes lead to a cumbersome analytical treatment, which we leave for future work.
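To make the three combining rules concrete, the following illustrative sketch (function name, scanning order and threshold value are our own assumptions, not from the paper) maps per-branch SIRs to the combiner output SIR:

```python
import numpy as np

def combiner_output(sir_branches, scheme, ssc_threshold=1.0):
    """Output SIR of the combiner given per-branch SIRs (illustrative sketch)."""
    s = np.asarray(sir_branches, dtype=float)
    if scheme == "SC":    # pick the branch with the highest SIR
        return s.max()
    if scheme == "MRC":   # optimum weighting: resulting SIR is the sum of branch SIRs
        return s.sum()
    if scheme == "SSC":   # first scanned branch whose SIR exceeds the switching threshold
        above = np.flatnonzero(s >= ssc_threshold)
        return s[above[0]] if above.size else s[-1]  # otherwise stay on the last branch
    raise ValueError(scheme)

# With nonnegative branch SIRs, the outputs are always ordered MRC >= SC >= SSC.
rng = np.random.default_rng(0)
sirs = rng.exponential(size=4)
assert combiner_output(sirs, "MRC") >= combiner_output(sirs, "SC") >= combiner_output(sirs, "SSC")
```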
Additionally, each link is characterized by a triplet , where  is the transmit power of , constrained to be no smaller than  and no greater than , respectively;  is the power channel gain vector, with normalized and exponentially distributed entries such that , e.g., Rayleigh fading; and  is the path loss of the link. Meanwhile, we consider an interference-limited wireless system, given a dense spatial deployment where the impact of noise is neglected (the impact of noise could, however, easily be incorporated without substantial changes); thus, the SIR perceived at the th antenna of  is
III System Performance Targets
Our goal in this work is to allocate power and rate at in order to maximize the system energy efficiency while meeting the URLLC requirements. Therefore, let us define these performance metrics.
III-A Reliability & Latency
Reliability is defined as the probability that a data packet of given size  is successfully transferred within a given time period . Hence, reliability and latency are intrinsically connected concepts. In fact, the typical URLLC use case demands transmitting a layer-2 protocol data unit of 32 bytes within 1 ms with success probability .
Over the last years, significant progress has been made within the information theory community to satisfactorily quantify the achievable rate while accounting for stringent reliability and latency constraints. The works in [33, 34] identify these trade-offs for Additive White Gaussian Noise (AWGN) and fading channels, respectively. Specifically, the authors of  show that, to sustain a desired error probability at a finite blocklength , one pays a penalty on the rate (compared to Shannon's channel capacity) that is proportional to ; under quasi-static fading impairments, the authors of  show through numerical evaluation that the convergence to the outage capacity as  increases is much faster than in the AWGN case. In fact, it has been shown in  for Nakagami-m and Rician channels that quasi-static fading makes the effect of the finite blocklength disappear when i) the rate is not extremely small and ii) the line-of-sight parameter is not extremely large. For the scenario discussed in the current work, we have already corroborated in  that using the asymptotic outage probability, instead of the finite-blocklength error probability, as the reliability metric keeps the results valid as long as the transmission rate is not too small. Therefore, in this work we leave aside the finite-blocklength formulation (although the same methodology as in  could be utilized) and simply consider the outage probability. Notice that by constraining  to be above some , the latency constraint is implicitly considered.
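The rate penalty proportional to the inverse square root of the blocklength can be illustrated with the widely used normal approximation for the AWGN channel; the dispersion formula below follows the finite-blocklength literature, and the SNR and error-probability values are purely illustrative:

```python
import numpy as np
from scipy.stats import norm

def awgn_normal_approx_rate(snr, n, eps):
    """Normal approximation to the maximal rate (bits/channel use) at blocklength n
    and error probability eps over an AWGN channel (finite-blocklength sketch)."""
    C = np.log2(1.0 + snr)                                            # Shannon capacity
    V = (snr * (snr + 2.0) / (snr + 1.0) ** 2) * np.log2(np.e) ** 2   # channel dispersion
    return C - np.sqrt(V / n) * norm.isf(eps)                         # norm.isf(eps) = Q^{-1}(eps)

# The rate penalty w.r.t. capacity shrinks proportionally to 1/sqrt(n):
snr, eps = 10.0, 1e-5
gap_500 = np.log2(1 + snr) - awgn_normal_approx_rate(snr, 500, eps)
gap_2000 = np.log2(1 + snr) - awgn_normal_approx_rate(snr, 2000, eps)
assert abs(gap_500 / gap_2000 - 2.0) < 1e-9   # sqrt(2000/500) = 2
```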
Considering the receive diversity schemes discussed in previous section, an outage event as a function of and is defined as , where
Notice that in delay-limited systems with a fixed transmit rate, as in our case, SC and SSC with threshold  share the same outage performance. This is because, if and only if the maximum SIR exceeds the threshold , SSC will find at least one antenna branch with SIR above it; hence, no outage occurs. Finally, the outage probability cannot exceed the reliability constraint specified by the maximum outage probability , i.e., .
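The claimed equivalence between SC and fixed-rate SSC, when the switching threshold equals the SIR decoding threshold, can be checked with a small Monte Carlo sketch; the Lomax-type branch SIRs below (ratio of two unit-mean exponentials) are an illustrative assumption, not the paper's exact interference model:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, theta = 4, 100_000, 1.0   # antennas, Monte Carlo runs, SIR threshold (assumed values)

# Illustrative branch SIRs: ratio of two unit-mean exponentials (a Lomax(1) variable),
# a simple stand-in for the Rayleigh-signal / single-interferer case.
sir = rng.exponential(size=(trials, N)) / rng.exponential(size=(trials, N))

outage_sc = sir.max(axis=1) < theta   # SC: outage iff even the best branch is below theta

def ssc_outage(branches, threshold):
    # Scan branches in order; settle on the first one above the switching threshold,
    # otherwise remain on the last scanned branch.
    for s in branches:
        if s >= threshold:
            return False          # settled branch is above the decoding threshold: no outage
    return branches[-1] < threshold  # all branches below the threshold: outage

outage_ssc = np.array([ssc_outage(row, theta) for row in sir])
assert np.array_equal(outage_sc, outage_ssc)  # identical outage events, hence equal outage probability
```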
III-B Energy Efficiency
The energy efficiency is defined as the ratio between the throughput and the power consumption; it tells us the number of bits that can be transmitted per hertz per joule consumed. Considering a linear power consumption model as in [36, 22], we can write the energy efficiency of the system as
where  is the drain efficiency of the amplifier at ,  is the power consumption of the frequency synthesizers at  and  (for , we assume that the frequency synthesizer is shared among all the antenna paths, so the consumption of this block does not depend on ), and  and  are the power consumed by the remaining internal circuitry for transmitting and receiving, respectively. Additionally,
since for SC and MRC the consumption of the internal circuitry grows linearly with , because all the antenna branches are active, while for SSC only one is active (for SSC we do not account for the sleep-mode power consumption of the circuitry in the inactive antenna branches, nor for the power consumed when scanning the antennas for one with SIR above the threshold ; hence, the real power consumption of SSC may exhibit a weak dependence on , which we ignore here for simplicity, so the energy efficiency of SSC discussed here can be seen as an upper bound on the performance of a practical SSC implementation).
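A minimal sketch of this linear power-consumption model follows; the function name and all constants are assumed, illustrative values, not the paper's:

```python
def energy_efficiency(rate, p_tx, scheme, n_antennas,
                      drain_eff=0.35, p_syn=50e-3, p_ct=100e-3, p_cr=100e-3):
    """Energy efficiency under a linear power-consumption model (illustrative sketch).
    rate in bits/s/Hz, all power values in watts; constants are assumed values."""
    # SC and MRC keep every antenna branch active; SSC keeps only one.
    active_branches = 1 if scheme == "SSC" else n_antennas
    # One frequency synthesizer at each end (shared across antenna paths at the receiver).
    consumed = p_tx / drain_eff + 2 * p_syn + p_ct + active_branches * p_cr
    return rate / consumed

# For identical rate and transmit power, SSC consumes less and is thus more efficient.
assert energy_efficiency(1.0, 0.1, "SSC", 8) > energy_efficiency(1.0, 0.1, "SC", 8)
```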
III-C Problem Formulation
According to the performance metrics specified in Subsections III-A and III-B, we present in (5) the joint power control and rate allocation problem that maximizes the energy efficiency subject to an ultra-reliability constraint.
We would like to point out that the constraints on  may be given by hardware limitations, but could also (or alternatively) be chosen to guarantee that certain interference thresholds at neighboring networks are not exceeded. Additionally, as commented in Subsection III-A, a delay constraint can be implicitly considered within  by setting , where  (Hz) is the bandwidth and  (bits) is the data to be transmitted.
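Under assumed, illustrative numbers, the implicit minimum rate works out as follows (payload matches the typical URLLC use case mentioned earlier; the bandwidth is our own assumption):

```python
# Minimum spectral efficiency implied by a latency budget (illustrative numbers):
# D bits must be delivered within T seconds over a bandwidth of W Hz.
D = 32 * 8        # payload: 32 bytes, as in the typical URLLC use case
T = 1e-3          # latency budget: 1 ms
W = 1e6           # bandwidth: 1 MHz (assumed)

r0 = D / (T * W)  # minimum rate in bps/Hz; the allocated rate r must satisfy r >= r0
assert abs(r0 - 0.256) < 1e-12
```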
Fig. 2 shows the feasible region when solving P1. As  increases,  is capable of transmitting at a larger bit rate for the same reliability target; thus, the curve  vs  with  is increasing in , as shown in the figure. Let us focus on the red point on the curve , and notice that for any positive  and ,  holds; but according to (3), and based on the fact that , we have , thus the solution of P1 lies on the curve . Additionally, P1 has a non-empty solution when .
Notice that the solutions of P1, denoted  and , must depend on information that is easy for  to obtain. For instance, it is not practical if  and/or  are set according to the interference contribution of each interfering node separately.
IV SIR Distribution
Instantaneous channel fluctuations are unknown at ; thus,  and  are fixed. Notice that in order to solve P1 we first need to characterize the SIR distribution under each of the diversity schemes, since
We proceed by finding the distribution of the SIR at each antenna, and then extend the results to multiple antennas at the receiver under the SC, SSC and MRC schemes.
The CDF of the SIR at each antenna is given by
which is upper-bounded by
We proceed as follows 
Both (8) and (9) converge in the left tail. This becomes evident from the proof of Theorem 1: therein, notice that when operating in the left tail,  should be close to 1, and therefore each of the terms  is expected to approximate unity. Hence, all of these terms are very similar to one another, and the geometric mean closely approximates the arithmetic mean in such scenarios.
The convergence of the approximations in the left tail is clearly illustrated in Fig. 3 for three different setups, thus validating our findings. Additionally, notice that the exact CDF of  is upper-bounded by the approximation over the entire region, but this does not hold for the PDF in the right tail, where the approximation lies under the exact curve and diverges fast.
Obtaining the PDF of the SIR directly from (8) seems intractable for large , which is the case in dense network deployments. However, since the upper bound is extremely tight in the left tail of the distribution, its utility is enormous, because that is precisely the region where typical reliability constraints lie, e.g., .
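The arithmetic-geometric mean argument behind the left-tail convergence can be checked numerically; the sampled terms below are arbitrary illustrations of the two regimes:

```python
import numpy as np

rng = np.random.default_rng(2)

def am(x):
    """Arithmetic mean."""
    return np.mean(x)

def gm(x):
    """Geometric mean (positive inputs)."""
    return np.exp(np.mean(np.log(x)))

# Terms clustered near 1 (the left-tail regime discussed above): AM and GM nearly coincide.
near_one = 1.0 - 1e-3 * rng.random(500)
assert abs(am(near_one) - gm(near_one)) < 1e-6

# Widely spread terms: the two means differ noticeably.
spread = rng.uniform(0.1, 1.0, 500)
assert am(spread) - gm(spread) > 0.01
```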
IV-A Selection Combining – SC & Switch and Stay Combining – SSC
Under the SC and SSC schemes, (6) transforms into
where  follows from the fact that  is distributed independently across antennas, and  comes from the definition of the CDF of .
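The product form of the SC CDF (independence across antennas raises the single-branch CDF to the power of the number of antennas) can be verified by simulation; as an illustrative assumption we let each branch SIR be the ratio of two unit-mean exponentials, i.e., a Lomax(1) variable matching the single-interferer Rayleigh case:

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 3, 200_000

def branch_cdf(z):
    """CDF of a Lomax(1) branch SIR: F(z) = z / (1 + z) (illustrative assumption)."""
    return z / (1.0 + z)

# Draw i.i.d. branch SIRs and take the per-trial maximum (SC output).
sir = rng.exponential(size=(trials, N)) / rng.exponential(size=(trials, N))
sc_out = sir.max(axis=1)

# Independence across antennas gives F_SC(z) = F(z)**N.
for z in [0.1, 0.5, 2.0]:
    empirical = np.mean(sc_out <= z)
    assert abs(empirical - branch_cdf(z) ** N) < 5e-3
```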
IV-B Maximal Ratio Combining – MRC
Under the MRC scheme, (6) transforms into
where . From Remark 1, can be represented as where
where , while its CDF is given by [37, Eq.(4.13)]
and  is Euler's constant.
Unfortunately,  is very difficult, and therefore very time-consuming, to evaluate. In fact, it is even impossible to evaluate for many combinations of parameter values, e.g., relatively small  and relatively large  and/or , for which the calculation does not converge due to software/hardware limitations. Additionally, since solving  requires , the inversion of  is needed, which is an even more cumbersome task. For these reasons, we next provide an accurate approximation for  in the left tail, and then dedicate our attention to finding .
We have that
where  follows from adding  and then dividing by  on each side. The left term is the arithmetic mean of , so we again use the relation between the arithmetic and geometric means. But first notice that, according to (15), the mean of , , decreases with , and already for  its value is below ; thus,  is expected to be smaller than  with high probability when  is not too small. Therefore, all the results that follow from using the geometric mean in the left term of (22) become tighter as  increases, converging to the exact expression. Most importantly, the expressions converge in the left tail, where , since there each of the summands is expected to take much smaller values, far from . We proceed as follows
where , and with PDF
Now we prove by induction that the PDF of  is given by
The proof proceeds as follows.
Therefore, (25) holds. Now, the CDF of is given by
Fig. 4 shows the remarkable accuracy of (21) in the left tail. Only a slight divergence from the exact expression is observable when  is relatively small, e.g., , while at the same time the reliability requirement is not too restrictive, . This is in line with the arguments used when proving Theorem 2. Using expressions (20) and (21) is doubly advantageous: i) they are relatively easy to evaluate, and ii) they can be evaluated in regions where the exact expressions cannot (regarding this last aspect, notice that (17) does not converge for  and also for , to mention just two examples).
Although an easy-to-evaluate expression for  was given in (21), it is not analytically invertible, and thus  requires numerical computation (note that there are software packages to evaluate the inverse incomplete gamma function, e.g., gammaincinv in MATLAB and InverseGammaRegularized in Wolfram Mathematica). The following result aims at alleviating this issue.
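For completeness, SciPy provides an analogous routine, `scipy.special.gammaincinv`, which inverts the regularized lower incomplete gamma function with the same convention as MATLAB's gammaincinv:

```python
from scipy.special import gammainc, gammaincinv

# gammaincinv(a, y) returns x such that P(a, x) = y, where P is the regularized
# lower incomplete gamma function (same convention as MATLAB's gammaincinv).
a, y = 3.0, 1e-5
x = gammaincinv(a, y)
assert abs(gammainc(a, x) - y) < 1e-12
```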
especially when  is very restrictive and  is not too large.
According to [38, Eq. (8.10.11)] we have that
where equality holds for , and the bound diverges slowly as  increases. Additionally, this lower bound is very tight in the left tail of the curve, e.g., when  is more restrictive. We need to isolate  from ; notice that for  we have , so we can take , which makes (30) even more accurate when  is not too small. The tightness of the lower bound is clearly shown in Fig. 5. Finally, we attain (29) straightforwardly. ∎
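The bound and its inversion can be sanity-checked numerically. We assume here the DLMF 8.10.11-type inequality P(a, x) ≤ (1 − e^{−x})^a for a ≥ 1, with equality at a = 1, whose inversion yields a closed-form under-estimate of the inverse function, consistent with the proof's use:

```python
import numpy as np
from scipy.special import gammainc, gammaincinv

# Assumed DLMF 8.10.11-style bound: for a >= 1, P(a, x) <= (1 - exp(-x))**a,
# with equality at a = 1 (small slack added for floating-point round-off).
for a in [1.0, 2.0, 5.0]:
    for x in [0.01, 0.1, 1.0, 5.0]:
        assert gammainc(a, x) <= (1.0 - np.exp(-x)) ** a + 1e-15

def inv_bound(a, eps):
    """Closed-form under-estimate of gammaincinv(a, eps) from inverting the bound."""
    return -np.log(1.0 - eps ** (1.0 / a))

# The closed form lies below the exact inverse, as expected from the bound's direction.
a, eps = 4.0, 1e-5
assert inv_bound(a, eps) <= gammaincinv(a, eps)
```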
V Optimum Joint Power Control and Rate Allocation
As highlighted at the end of Subsection III-C, the optimum resource allocation lies on the curve . Specifically, for SC and SSC, based on (13), the exact relation between  and  is given by , while for MRC we were unable to find it. Notice that using such an intricate exact relation, even more intricate for large , is in any case inadvisable, because the solution pairs would then depend on each  separately, which is impractical since such information is difficult for  to obtain. The following result addresses these issues by providing a relatively simple, yet practically useful, relation between  and  for all the diversity schemes.
When the curve is tightly approximated by
Again, notice the significance of (31): rather than depending on each  separately,  ultimately depends on the number of interfering nodes, the number of receive antennas, the reliability constraint, and the ratio between the average signal and average interference powers, all of which are easy/viable to estimate or know. We are now in a position to make the following proposition.
Solving is equivalent to solve
for and .
It is required that  and  according to (5c) and (5d), respectively, and combining them yields . The objective function can now be written as a function of . Since the resulting objective function is not concave, we use the fact that optimizing it leads to the same result as optimizing , and that optimizing over  is equivalent to optimizing over . Hence, this transformation yields P2. ∎
the optimum resource allocation is given by
Notice that we have ignored the term since by design it is equal to , and we have used instead of , which does not affect the optimization of . Now, the first and second derivatives of are
where the second derivative comes from differentiating  in the first line of (39). Notice that  since ; thus,  is concave in  and has a global maximum at the solution of , which is obtained as follows
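While the paper's exact objective is elided here, the Lambert-W route to the stationary point can be sketched on a generic rate-over-consumption ratio of similar shape; the objective below and all constants are illustrative assumptions of ours, not the paper's expressions:

```python
import numpy as np
from scipy.special import lambertw

# Illustrative stand-in objective (NOT the paper's exact one): maximize
#   f(p) = ln(c * p) / (a * p + b),
# a rate-over-consumption ratio whose stationary point admits a closed form
# through the main branch of the Lambert W function.
c, eta, p_c = 2.0, 0.35, 0.2          # assumed constants
a, b = 1.0 / eta, p_c                 # linear consumption model: a * p + b

def f(p):
    return np.log(c * p) / (a * p + b)

# Setting f'(p) = 0 gives a*p*ln(c*p) = a*p + b, solved by
#   p* = exp(1 + W(b * c / (a * e))) / c.
p_star = np.exp(1.0 + np.real(lambertw(b * c / (a * np.e)))) / c

# The stationary point is the global maximum: f decreases on both sides of it.
assert f(p_star) > f(0.9 * p_star) and f(p_star) > f(1.1 * p_star)
```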