## I Introduction

Providing efficient vehicle-to-vehicle (V2V) communications is a necessary stepping stone for enabling autonomous and intelligent transportation systems (ITS) [1, 2, 3, 4, 5]. V2V communications can extend drivers’ field of view, thus enhancing traffic safety and driving experience, while enabling new transportation features such as platooning, real-time navigation, collision avoidance, and autonomous driving [4, 1]. However, the performance of emerging transportation applications relies heavily on the availability of V2V communication links with extremely low errors and delays. In this regard, achieving ultra-reliable low-latency communication (URLLC) for V2V networks is necessary for realizing the vision of intelligent transportation [1]. Since over-the-air latency and queuing latency are coupled, ensuring low queuing latency is required to achieve the much coveted target end-to-end latency of 1 ms. This, in turn, necessitates efficient radio resource management (RRM) techniques [6, 7, 5]. Furthermore, the large number of vehicles in modern transportation systems increases energy consumption and its negative environmental impact; hence, energy efficiency and energy savings must also be addressed within RRM for V2V communications [8, 9].

Several RRM techniques have been proposed for enabling ultra-reliable low-latency vehicular communications while factoring in challenges such as rate maximization, delay minimization, energy efficiency, energy saving, and vehicle clustering/platooning [4, 5, 3, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. In [4], the performance of vehicular platooning is optimized while jointly considering the delay of the wireless network and the stability of the vehicle’s control system. By grouping vehicles into clusters, the work in [5] minimizes the total transmission power in a vehicular network while considering queuing latency and reliability. In [8], an energy-efficient resource allocation algorithm is proposed for cooperative V2V communication systems. The work in [9] proposes an energy-saving sleep mode strategy for access points serving motorway vehicular traffic. The problem of vehicle network clustering is studied in [10] to reduce the power consumption of V2V communications. In [11], a joint resource allocation and power control algorithm is proposed to maximize the V2V sum rate. The authors in [12] optimize the beam alignment and scheduling among vehicles to reduce the V2V communication delays. In [13], the tradeoff between service delay and transmission success in V2V communications is optimized. The URLLC aspects of this prior art [4, 5, 8, 9, 10, 11, 12, 13, 14] are captured either by improving the average latencies or by imposing a probabilistic constraint to maintain small queue lengths. Although such a probabilistic constraint on the queue length improves network reliability, it fails to control rare events in which large queue lengths occur with low probability, i.e., the tail distribution of queue lengths. As a result, if the network relies on these existing schemes, some of the vehicular users (VUEs) may experience unacceptable latencies, yielding degraded performance [18, 3, 19, 20, 21].

In practice, to enable a truly URLLC experience, it is imperative to model and capture extreme, low probability events.
To this end, *extreme value theory* (EVT), a powerful tool from statistics that characterizes the occurrence of extreme, low-probability events, is instrumental in enabling URLLC [22].
In [19], EVT is used to model the distributions of data rates exceeding a threshold for a few traffic traces, and the accuracy of the analytical model is then evaluated using simulations.
The work in [20] studies the statistical distributions of inter-beacon delays in safety applications for vehicular ad hoc networks (VANETs) using EVT.
The authors in [21] use EVT to model the peak distribution of the orthogonal frequency division multiplexing envelope while characterizing the variations in peak-to-average-power ratios.
The work in [3] employs EVT to characterize the statistics of maximal queue length so as to control the worst-case latency of V2V communication links therein.
Characterizing the distribution of extreme events using EVT, i.e., determining the location, shape, and scale parameters of the tail distribution, in the above works necessitates the acquisition of sufficient samples capturing extreme events.
Depending on the network size and the quality of the communication within the network, the process of gathering samples over the network may introduce unacceptable overheads that are not investigated in the aforementioned works.
In a real-time system such as a V2V communication network, VUEs may have access to only a limited number of queue length samples (particularly those that locally exceed a high threshold) and hence are unable to estimate the tail distribution of the network-wide queue lengths.
Therefore, roadside units (RSUs) can assist in gathering samples over the network, at the cost of additional data exchange overheads.
Furthermore, due to the limited resources available for V2V communication, VUEs may be unwilling to allocate their resources to share their individual queue state information (QSI) with an RSU and other VUEs.
This shortcoming warrants a collaborative learning model that does not rely on sharing individual QSI.

Recently, *federated learning* (FL) was proposed as a decentralized learning technique where training data is distributed (possibly unevenly) across learners, instead of being centralized [23, 24].
FL allows each learner to derive a set of local learning parameters from the available training data, referred to as *local model*.
Instead of sharing the training data, learners share their local models with a central entity, which in turn performs model averaging and then shares a *global model* with the learners.
In [23], the applicability of several existing algorithms to FL is studied and a novel algorithm is proposed to handle the sparse data available at individual learners.
The means of minimizing the communication cost by sharing a reduced number of parameters of FL models are discussed in [25].
In [24], FL is used to develop distributed learning models for multiple related tasks simultaneously, referred to as multi-task learning.
The recent work in [26] proposes a new FL protocol that solves a client selection problem with resource constraints in mobile edge computing. Our prior work in [17] proposes a distributed FL-based algorithm for VUEs based on maximum likelihood estimation (MLE).
However, this prior work does not consider sharing wireless resources for FL and V2V communications, whereby the impact of FL over shared wireless resources on V2V URLLC is not investigated.
To the best of our knowledge, with the exception of [17], no work has studied the use of federated learning in the context of URLLC.

The main contribution of this paper is to propose a distributed, FL-based, joint transmit power and resource allocation framework for enabling ultra-reliable and low-latency vehicular communication.
We formulate a network-wide power minimization problem while ensuring low latency and high reliability in terms of probabilistic queue lengths.
To model reliability, we first obtain the statistics of the queue lengths exceeding a high threshold by using the EVT notion of a *generalized Pareto distribution* (GPD) [22].
Using the statistics of the GPD, we impose a local constraint on extreme events pertaining to queue lengths exceeding a predefined threshold for each VUE.
Here, the characteristic parameters of the GPD, known as the scale and shape, are obtained via MLE.
In contrast to the classical MLE design, which requires a central controller (e.g., an RSU) to collect samples of queue lengths exceeding a threshold from all VUEs in the network, with FL every vehicle builds and shares its own local model (two gradient values) with the RSU.
The RSU aggregates the local models, performs model averaging across vehicles, and feeds the global model back to the VUEs.
Leveraging different time scales, our proposed approach lets each VUE learn its GPD parameters locally over a short time scale, while model averaging (global learning) takes place over a longer time scale.
In our model, we take into account the communication overheads of URLLC due to the model exchange over shared wireless resources.
Then, we propose a distributed algorithm that allows all VUEs to simultaneously learn the GPD parameters using FL.
To further reduce the overhead due to the need for synchronization and simultaneous model sharing, we then develop an asynchronous FL algorithm for MLE that allows VUEs to model and independently learn the tail distribution of queue lengths in a distributed manner.
Finally, Lyapunov optimization is used to decouple and solve the network-wide optimization problem per VUE.
Simulation results show that the proposed solutions estimate the GPD parameters very accurately compared to a centralized learning module and yield significant gains in terms of reducing the number of VUEs with large queue lengths while minimizing power consumption.
For dense systems with 100 VUE pairs, the proposed solution considerably reduces the number of VUEs with large queue lengths while halving power consumption, compared to a baseline model that controls reliability using a probabilistic constraint on average queue lengths.
Furthermore, the proposed solution reduces both the averages and the fluctuations of extreme queue lengths compared to the aforementioned baseline.

The rest of the paper is organized as follows. Section II describes the system model and the network-wide power minimization problem. The distributed solution based on EVT and Lyapunov optimization is presented in Section III. In Section IV, the estimation of the extreme value distribution using FL and the cost of enabling FL for both synchronous and asynchronous approaches are discussed. Section V evaluates the proposed solution via an extensive set of simulations. Finally, conclusions are drawn in Section VI.

## II System Model and Problem Definition

Consider a vehicular network consisting of a set $\mathcal{K}$ of communicating VUE pairs, served by an RSU that allocates a set of resource blocks (RBs) over a partition of the network defined as zones.
Here, a *zone* consists of VUE pairs that can reuse the same RBs with low-to-no interference on one another. The RSU allocates RBs orthogonally across the zones to reduce the interference among nearby VUE pairs.
Hence, a VUE pair $k \in \mathcal{K}$ is only allowed to use the subset of RBs $\mathcal{N}_k(t)$ allocated to its corresponding zone at time $t$.
We denote the VUE transmitter (vTx) and receiver (vRx) that belong to VUE pair $k$ by vTx $k$ and vRx $k$, hereinafter.
An illustration of our system model is presented in Fig. 1.

Let $\boldsymbol{p}_k(t) = [p_k^n(t)]_{n \in \mathcal{N}_k(t)}$ and $\boldsymbol{h}_k(t) = [h_k^n(t)]_{n \in \mathcal{N}_k(t)}$ be, respectively, the transmit power vector of vTx $k$ and the channel gain vector between vTx $k$ and vRx $k$ over the subset of allocated RBs at time $t$. Depending on whether the vTx and vRx are located in the same lane or separately in perpendicular lanes, the channel model is categorized into three types: *i) Line-of-sight* (LOS): both vTx and vRx are located in the same lane, *ii) Weak-line-of-sight* (WLOS): vTx and vRx are in perpendicular lanes and at least one of them is located at a distance of no more than $d_0$ from the corresponding intersection, and *iii) Non-line-of-sight* (NLOS), otherwise. Let $\boldsymbol{x}_k$ and $\boldsymbol{y}_k$ be the Cartesian coordinates of vTx $k$ and vRx $k$, respectively. The channel includes a fast fading component following a Rayleigh distribution with a unit scale parameter and a path loss that uses the following model for urban areas at a 5.9 GHz carrier frequency [27]:

$$\ell_k = \begin{cases} A_0 \,\lVert \boldsymbol{x}_k - \boldsymbol{y}_k \rVert_2^{-\alpha} & \text{LOS}, \\ A_0 \,\lVert \boldsymbol{x}_k - \boldsymbol{y}_k \rVert_1^{-\alpha} & \text{WLOS}, \\ A_1 \,\lVert \boldsymbol{x}_k - \boldsymbol{y}_k \rVert_1^{-\alpha} & \text{NLOS}, \end{cases} \tag{1}$$

where $\lVert \boldsymbol{x} \rVert_p$ is the $p$-th norm of vector $\boldsymbol{x}$, $\alpha$ is the path loss exponent, and the path loss coefficients $A_0$ and $A_1$ satisfy $A_0 > A_1$. The transmission rate between the $k$-th vTx-vRx pair is given by,

$$r_k(t) = \sum_{n \in \mathcal{N}_k(t)} \omega \log_2\!\Big(1 + \frac{p_k^n(t)\, h_k^n(t)}{\omega N_0 + I_k^n(t)}\Big), \tag{2}$$

where $I_k^n(t)$ is the interference from other vTxs, $\omega$ is the bandwidth of each RB, and $N_0$ is the noise power spectral density. At each time $t$, $a_k(t)$ data bits are randomly generated with a mean of $\bar{a}$ at vTx $k$ that must be delivered to its corresponding vRx. Thus, at the vTx, a data queue is maintained and has the following dynamics:

$$q_k(t+1) = \max\big(q_k(t) - \tau\, r_k(t),\, 0\big) + a_k(t), \tag{3}$$

where $\tau$ is the time slot duration.
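As a quick illustration, the queue dynamics in (3) can be simulated in a few lines. This is a hedged sketch assuming the common update $q(t+1) = \max(q(t) - \tau r(t), 0) + a(t)$; the variable names and the Poisson arrival model are our own choices, not the paper's.

```python
import numpy as np

def step_queue(q, rate, arrivals, tau=1.0):
    """One-slot queue update: serve up to tau*rate bits, then add new arrivals.

    Assumes the common dynamics q(t+1) = max(q(t) - tau*r(t), 0) + a(t).
    """
    return max(q - tau * rate, 0.0) + arrivals

# Toy trace: Poisson arrivals against a slightly faster fixed service rate.
rng = np.random.default_rng(0)
q = 0.0
trace = []
for _ in range(1000):
    q = step_queue(q, rate=1.2e3, arrivals=rng.poisson(1.0e3))
    trace.append(q)
```

When the service rate exceeds the mean arrival rate, the queue stays bounded; the tail of `trace` above a high threshold is exactly the quantity the constraints below aim to control.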

The number of vehicles is expected to grow continuously, making energy efficiency and energy savings a key requirement in vehicular networks. Our goal is therefore to minimize the network-wide power consumption while ensuring URLLC. Here, reliability is achieved by guaranteeing queue stability for each vTx while keeping outages below a predefined threshold, i.e., ensuring that the probability of the queue length exceeding a threshold $q_0$ remains below a target probability $\epsilon$. The reliability conditions can now be formally defined as:

$$\lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\big[q_k(t)\big] < \infty, \quad \forall k \in \mathcal{K}, \tag{4}$$

$$\lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} \Pr\big(q_k(t) \geq q_0\big) \leq \epsilon, \quad \forall k \in \mathcal{K}. \tag{5}$$

Note that the above reliability constraints cannot cope with extreme cases in which $q_k(t) \gg q_0$. Such extreme cases essentially correspond to the worst-case network queuing latency (as well as end-to-end latency [6, 7, 5]), which is a key determinant of the URLLC performance and, hence, must be properly addressed. In this regard, let $q^{\dagger}(t)$ be a sample of queue length exceeding the threshold $q_0$ observed over the network at time $t$. By imposing the constraints,

(6) | |||

(7) |

each VUE can better control the fluctuations of its queue and maintain its extreme values below the desired threshold. Here, $\mathbb{1}_{\{\cdot\}}$ is an indicator function equal to one when its argument holds, and zero otherwise. We can now formally pose our network-wide power minimization problem:

(8a) | |||||

s.t. | (8c) | ||||

Here, (8c) ensures queue dynamics and reliability while controlling the worst-case latency over all VUEs, and $p_{\max}$ is the transmit power budget of a VUE.
Solving (8) to obtain the optimal transmission control policy over time is challenging for two reasons:
*i)* a decision at time $t$ relies on future network states,
and
*ii)* the characteristics of the distribution of $q^{\dagger}(t)$ in constraint (6) are unavailable.
Moreover, solving (8) using a centralized approach requires exchanging channel state information (CSI) and QSI over the whole network resulting in unacceptable signaling overheads.
Therefore, a distributed solution that requires minimal coordination within the vehicular network is needed.

## III Proposed Distributed Framework using EVT and Lyapunov Optimization

Developing a distributed solution for solving (8) requires decoupling the optimization problem over VUE pairs. Therefore, we next propose new solutions to decouple the objective function (8a) and the constraints (6) and (7) based on the statistics of queue lengths exceeding $q_0$ over the vehicular network.

### III-A Modeling Extreme Queue Lengths Using Extreme Value Theory

The samples of queue lengths exceeding the threshold $q_0$ are seen as extreme statistics of the system and can be characterized using EVT. Assume that the individual queues at a given time are independent and identically distributed (i.i.d.) samples and that the queue threshold $q_0$ is large. Then, the distribution of the excess value $z = q^{\dagger}(t) - q_0$ can be modeled as a GPD using [22, Theorem 3.2.5]. This fundamental EVT result mainly shows that, as $q_0 \to \infty$, the conditional probability distribution of $z$ given $q^{\dagger}(t) > q_0$ is given by,

$$G(z; \sigma, \xi) = 1 - \Big(1 + \frac{\xi z}{\sigma}\Big)^{-1/\xi}, \tag{9}$$

with $\sigma > 0$, where $\xi$ and $\sigma$ are called the shape and scale parameters, respectively. Here, $z \geq 0$ if $\xi \geq 0$, while $0 \leq z \leq -\sigma/\xi$ when $\xi < 0$. Moreover, the mean and variance of $q^{\dagger}(t)$ are bounded and equivalent to $q_0 + \frac{\sigma}{1-\xi}$ and $\frac{\sigma^2}{(1-\xi)^2(1-2\xi)}$, respectively, only if $\xi < 1/2$. In this regard, constraints (6) and (7) for all $k \in \mathcal{K}$ can be rewritten as follows:

(10) | |||

(11) |

Assisted by the RSU, each VUE pair can estimate $\sigma$ and $\xi$ locally without sharing its QSI, hence effectively decoupling the constraints (6) and (7) and imposing them locally as in (10) and (11), respectively.
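To make the excess-over-threshold modeling concrete, the following sketch fits a GPD to exceedances of a hypothetical queue-length trace using SciPy. The trace, the 99th-percentile threshold choice, and all variable names are our assumptions for illustration, not the paper's setup.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
# Hypothetical queue-length trace; exponential tails give a GPD with shape ~0.
queues = rng.exponential(scale=2000.0, size=50_000)
q0 = np.quantile(queues, 0.99)      # high threshold
excess = queues[queues > q0] - q0   # exceedances; approximately GPD by EVT

# Fit the GPD by MLE with the location fixed at 0 (shape xi, scale sigma).
xi, _, sigma = genpareto.fit(excess, floc=0)
```

For this exponential example the fitted shape `xi` is close to zero and the scale `sigma` close to 2000, matching the theory that exponential excesses are a GPD with $\xi = 0$.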

### III-B Lyapunov Optimization for Power Allocation

By using EVT to model the excess queue value $q^{\dagger}(t)$ and its first two moments, we recast the original problem into an equivalent form:

(12a) | |||||

subject to | (12b) |

To devise a tractable solution for the modified stochastic optimization problem in (12), we resort to Lyapunov optimization [28]. To this end, we first model the time-average constraints as virtual queues. As such, the reliability constraint in (5) can be recast as a time-average constraint on the indicator $\mathbb{1}_{\{q_k(t) \geq q_0\}}$ for each VUE $k$. Our next goal is to introduce a virtual queue for this constraint instead of (4) and (5). To keep the virtual queue's order of magnitude close to that of the actual queue size $q_k(t)$, both sides of the constraint are scaled by the queue lengths. Now, the time-average constraints in (12b) for all $k \in \mathcal{K}$ are modeled by virtual queues as follows:
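The standard Lyapunov recipe maps a generic time-average constraint $\mathbb{E}[x(t)] \leq c$ onto a virtual queue that must remain stable. A minimal sketch of this mapping follows (our naming; a generic constraint, not the paper's exact scaled variant):

```python
def virtual_queue_update(y, x, c):
    """Virtual queue for a time-average constraint E[x(t)] <= c:
    y(t+1) = max(y(t) + x(t) - c, 0).
    If y(t)/t -> 0 (queue stability), the time-average constraint is met.
    """
    return max(y + x - c, 0.0)

# Example: x averages 0.5 against a budget c = 0.6, so y stays bounded.
y = 0.0
for t in range(1000):
    x = 1.0 if t % 2 == 0 else 0.0
    y = virtual_queue_update(y, x, c=0.6)
```

If the constraint were violated on average (e.g., `c = 0.4` above), `y` would grow without bound, which is precisely what the drift term in the Lyapunov framework penalizes.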

(13a) | |||

(13b) | |||

(13c) |

Let $\boldsymbol{\Theta}(t)$ be the combined vector of physical and virtual queues and $L(\boldsymbol{\Theta}(t))$ its quadratic Lyapunov function. The one-slot drift of the Lyapunov function is defined as $\Delta L(t) = L(\boldsymbol{\Theta}(t+1)) - L(\boldsymbol{\Theta}(t))$.

###### Proposition 1

###### Proof:

See Appendix A.

By controlling the upper bound given in Proposition 1, the network can ensure the stability of both actual and virtual queues.

The conditional expected Lyapunov drift at time $t$ is defined as $\mathbb{E}[\Delta L(t) \mid \boldsymbol{\Theta}(t)]$. We define $V \geq 0$ as a parameter that controls the tradeoff between the queue length and the accuracy of the optimal solution of (12). We then find the network policies by introducing a penalty term to the expected drift and minimizing the upper bound of the drift plus penalty (DPP). As a result, our goal will now be to minimize the following upper bound:

(15) |

at each time $t$. Assuming that VUEs maintain channel quality indicators (CQIs), each VUE can estimate the interference based on past observations (time-averaged interference) [29]. Hence, the minimization of the above upper bound can be decoupled among VUEs as follows:

(16a) | ||||

subject to | (16b) | |||

(16c) |

The optimal solution of the convex optimization problem in (16) is obtained by a *water-filling algorithm* [30], where the water level is governed by the Lagrangian dual coefficient corresponding to constraint (16b).
Since the first two moments of the distribution of queue lengths exceeding $q_0$, which are determined by $\sigma$ and $\xi$, impact the optimal solution, in what follows we propose a mechanism to estimate the GPD parameters accurately.
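For intuition, classical water-filling under a per-VUE power budget can be sketched as follows. This is a generic rate-maximizing variant with bisection on the water level; the paper's exact DPP objective in (16) would weight the terms differently, and all names and numbers are illustrative.

```python
import numpy as np

def water_filling(gains, noise, p_max, iters=60):
    """Classical water-filling: maximize sum(log2(1 + p*g/n)) s.t. sum(p) <= p_max.

    Solution: p_k = max(mu - n/g_k, 0), with the water level mu found by bisection.
    """
    g = np.asarray(gains, dtype=float)
    inv = noise / g                      # per-channel "floor" n/g_k
    lo, hi = 0.0, inv.max() + p_max      # water level lies in this bracket
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > p_max:
            hi = mu                      # too much power: lower the water level
        else:
            lo = mu                      # budget not exhausted: raise it
    return np.maximum(lo - inv, 0.0)

p = water_filling([1.0, 0.5, 0.1], noise=0.1, p_max=1.0)
```

Stronger channels receive more power, and very weak channels (here the third one) may receive none because their floor sits above the water level.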

## IV Learning the Parameters of the Maximum Queue Distribution

The optimal power allocation in (16) relies on the characteristics of the excess queue distribution. Hence, estimating the parameters $\sigma$ and $\xi$ with high accuracy using QSI samples gathered over the network is imperative. In this regard, modeling the distribution of queue lengths exceeding the threshold $q_0$ requires a central controller (e.g., the RSU) to compute the GPD parameters and communicate with all VUEs at each time $t$.

### IV-A Queue Sampling via Block Maxima (BM)

Let $T_b$ be the block length (or time window) during which each VUE draws at most one (the maximum) queue length sample, and only if that queue length exceeds the threshold $q_0$. The size of $T_b$ should be sufficiently large to minimize correlation between QSI samples while being sufficiently small to avoid undersampling. We define $\mathcal{T}_b$ as the set of time instants during block $b$. Then, the set of queue samples gathered by VUE $k$ up to time $t$ is $\mathcal{Q}_k$ with sample size $|\mathcal{Q}_k|$. Note that the total number of samples may vary across VUEs since each VUE can independently perform its own QSI sampling process. Fig. 2 illustrates each VUE’s QSI sampling process.
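Process-wise, each VUE's block-maxima sampling described above can be sketched as follows (our variable names; blocks are assumed contiguous and non-overlapping):

```python
def block_maxima_exceedances(trace, q0, block_len):
    """Split a queue-length trace into blocks of block_len slots; from each
    block keep at most one sample: the block maximum, only if it exceeds q0."""
    samples = []
    for start in range(0, len(trace), block_len):
        block_max = max(trace[start:start + block_len])
        if block_max > q0:
            samples.append(block_max)
    return samples
```

For example, the trace `[1, 5, 2, 9, 1, 1, 3, 3, 3]` with threshold 4 and block length 3 yields the two samples `[5, 9]`: the third block's maximum (3) never crosses the threshold.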

### IV-B RSU-Centric GPD Parameter Estimation

As shown in Section III-A, the distribution of the queue lengths exceeding the threshold $q_0$ is characterized by two parameters which need to be accurately estimated. For this purpose, we use MLE [31], whose objective is to find the best set of parameters $\boldsymbol{\theta} = (\sigma, \xi)$ that fits the GPD to the samples by maximizing the log-likelihood function (or minimizing its negative) as follows:

$$\hat{\boldsymbol{\theta}} = \operatorname*{arg\,min}_{\boldsymbol{\theta} \in \Theta} \; \frac{1}{|\mathcal{Q}|} \sum_{z \in \mathcal{Q}} \ell(z; \boldsymbol{\theta}), \tag{17}$$

where $\Theta$ is the feasible set, $\mathcal{Q}$ is the set of network queue length samples, and $\ell(z; \boldsymbol{\theta})$ is the per-sample negative log-likelihood of the GPD. Here, we omit the time index for simplicity. Note that the likelihood function is a smooth function of $\boldsymbol{\theta}$ and a summation over all the samples in $\mathcal{Q}$, and thus, its gradient over a sample can be derived as follows.

###### Proposition 2

The gradient of the negative log-likelihood function of the GPD at the queue length sample $z$ w.r.t. $\boldsymbol{\theta} = (\sigma, \xi)$ is,

$$\nabla_{\boldsymbol{\theta}}\, \ell(z; \boldsymbol{\theta}) = \begin{bmatrix} \dfrac{1}{\sigma} - \dfrac{(1+\xi)\, z}{\sigma(\sigma + \xi z)} \\[3mm] -\dfrac{1}{\xi^2} \log\Big(1 + \dfrac{\xi z}{\sigma}\Big) + \dfrac{(1+\xi)\, z}{\xi(\sigma + \xi z)} \end{bmatrix}. \tag{18}$$

###### Proof:

See Appendix B.

Using the stochastic variance reduced gradient descent (SVRGD) technique [32] at the RSU, the optimal $\boldsymbol{\theta}$ can be derived in an iterative manner (iterating over the sample set) with fast convergence. For a given predefined step size $\eta$, the evaluation procedure of the GPD parameters using SVRGD over a sample $z_i$ at iteration $i$ is defined as follows:

$$\boldsymbol{\theta}^{(i+1)} = \boldsymbol{\theta}^{(i)} - \eta\Big(\nabla\ell\big(z_i; \boldsymbol{\theta}^{(i)}\big) - \nabla\ell\big(z_i; \tilde{\boldsymbol{\theta}}\big) + \tilde{\boldsymbol{\mu}}\Big), \tag{19}$$

where $\tilde{\boldsymbol{\theta}}$ is an average estimate of $\boldsymbol{\theta}$ over previous iterations and $\tilde{\boldsymbol{\mu}}$ is an estimate of the gradient at $\tilde{\boldsymbol{\theta}}$. After computing the GPD parameters by iterating over the sample set, the RSU shares the optimal GPD parameters with all the VUEs. This RSU-centric GPD parameter estimation is referred to as “CEN” hereinafter, and it is summarized in Algorithm 1.
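The per-sample GPD gradient and the SVRGD loop can be sketched together as follows. This is a standard SVRG formulation of the GPD MLE under the assumption of a zero location parameter and $\xi \neq 0$; the step size, the crude feasibility clipping, and the initial values are our own choices for illustration.

```python
import math
import numpy as np

def gpd_nll(z, sigma, xi):
    """Per-sample negative log-likelihood of the GPD (location 0, xi != 0)."""
    return math.log(sigma) + (1.0 + 1.0 / xi) * math.log(1.0 + xi * z / sigma)

def gpd_nll_grad(z, sigma, xi):
    """Gradient of gpd_nll w.r.t. (sigma, xi)."""
    d_sigma = 1.0 / sigma - (1.0 + xi) * z / (sigma * (sigma + xi * z))
    d_xi = (-math.log(1.0 + xi * z / sigma) / xi**2
            + (1.0 + xi) * z / (xi * (sigma + xi * z)))
    return np.array([d_sigma, d_xi])

def svrgd_fit(samples, sigma=1.0, xi=0.2, step=1e-3, outer=10):
    """SVRG: each outer round snapshots the full gradient, then sweeps the
    samples with variance-reduced stochastic gradients."""
    theta = np.array([sigma, xi])
    for _ in range(outer):
        snap = theta.copy()
        mu = np.mean([gpd_nll_grad(z, *snap) for z in samples], axis=0)
        for z in samples:
            g = gpd_nll_grad(z, *theta) - gpd_nll_grad(z, *snap) + mu
            theta = np.maximum(theta - step * g, 1e-6)  # crude feasibility clip
    return theta

data = [0.5, 1.0, 2.0, 3.5, 0.2]
theta_hat = svrgd_fit(data)
```

The analytic gradient can be checked against finite differences of `gpd_nll`, and a run of `svrgd_fit` should lower the total negative log-likelihood of the sample set.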

In CEN, all VUEs in the network need to frequently upload their local queue length samples to the RSU by reusing the RBs available for V2V communication. This sample uploading over wireless links introduces an additional overhead from which a significant performance degradation of URLLC can be expected. As the VUE density increases, the sample size grows, and so does the VUEs’ traffic toward the RSU, causing congestion and increased network latency. Hence, a distributed learning technique for MLE that does not require VUEs to share all their local samples with the RSU or with one another is crucial.

### IV-C FL-Based GPD Parameter Estimation

Towards developing a distributed learning mechanism for GPD parameter estimation, we first rewrite the likelihood function as follows:

$$\frac{1}{|\mathcal{Q}|} \sum_{z \in \mathcal{Q}} \ell(z; \boldsymbol{\theta}) = \sum_{k \in \mathcal{K}} \frac{|\mathcal{Q}_k|}{|\mathcal{Q}|} \Big( \frac{1}{|\mathcal{Q}_k|} \sum_{z \in \mathcal{Q}_k} \ell(z; \boldsymbol{\theta}) \Big), \tag{20}$$

where $\mathcal{Q} = \bigcup_{k \in \mathcal{K}} \mathcal{Q}_k$.
In (20), we express the likelihood function of the network as a weighted sum of likelihood functions per VUE.
Hereinafter, for simplicity, we denote the per-VUE likelihood term and its gradient by $J_k(\boldsymbol{\theta})$ and $\nabla J_k(\boldsymbol{\theta})$, respectively.
The idea behind FL is to use the local samples $\mathcal{Q}_k$ to evaluate $J_k$ and $\nabla J_k$ locally, where $\boldsymbol{\theta}_k$ is the local estimate of $\boldsymbol{\theta}$ at VUE $k$, and to update the local estimates via sharing the individual learning *models* $(\boldsymbol{\theta}_k, \nabla J_k, |\mathcal{Q}_k|)$.
Note that sharing the sample sizes $|\mathcal{Q}_k|$ over the network through the RSU is sufficient to determine the weights in (20), which are needed for the SVRGD procedure.

To evaluate the gradients and GPD parameters locally, VUE $k$ uses SVRGD with a step size $\eta$ [23]. In this case, given the local and global copies of the GPD parameters and gradients at time $t$, the local GPD parameters and gradients are updated for each QSI sample (process A in Fig. 2) as follows:

(21) |

After computing the gradients and GPD parameters locally, each VUE uploads its model at time $t$ to the RSU, as illustrated in Fig. 2 by process B.

The RSU will then perform model averaging over the network while calculating the global GPD parameters and gradients as per process C in Fig. 2:

(22) |

Then, the global model is shared with the network (process D in Fig. 2).
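A minimal sketch of the RSU-side averaging in (22), with weights proportional to each VUE's exceedance-sample count as in the decomposition (20), could look as follows (names and numbers are ours, for illustration):

```python
import numpy as np

def federated_average(local_models, sample_counts):
    """RSU-side model averaging: weight each VUE's local model (e.g., its GPD
    parameter estimate and gradient) by its share of the exceedance samples."""
    w = np.asarray(sample_counts, dtype=float)
    w = w / w.sum()
    models = np.asarray(local_models, dtype=float)
    return (w[:, None] * models).sum(axis=0)

# Two VUEs reporting (sigma, xi) estimates, weighted 3:1 by sample counts.
global_model = federated_average([[2.0, 0.3], [1.0, 0.1]], [30, 10])
```

Weighting by sample counts makes the aggregate consistent with the network-wide MLE objective: a VUE with more exceedance samples carries proportionally more influence on the global model.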

The evaluation and sharing of parameters at the VUEs and the RSU can be done in either a synchronous or asynchronous manner.
In the *synchronous* approach, at the end of a predefined time interval, all VUEs evaluate their local gradients and simultaneously upload their local models to the RSU.
The RSU then averages out all the local models, after which all VUEs download the global model.
Here, synchronization may improve the accuracy of the estimation of the global gradients.
However, the simultaneous transmissions to the RSU by all VUEs degrade the VUE-RSU data rates and, thus, introduce significant delays to ongoing V2V communication.
This synchronous FL approach presented above is dubbed “sync-FL” hereinafter, and summarized in Algorithm 2.

In contrast, in the *asynchronous* approach, each VUE must wait until a predefined number of new QSI samples are collected.
In essence, once a predefined number of new QSI samples has been collected by time $t$, VUE $k$ evaluates and uploads its local model to the RSU.
At the RSU, the newly received local model is averaged with the existing local models of the other VUEs, and the updated global model is fed back to VUE $k$.
Note that the delay for the upload and download processes will be very small because the likelihood of multiple VUEs simultaneously sharing their models is very low.
We designate the asynchronous approach as “async-FL”, as seen in Algorithm 3.
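The asynchronous aggregation can be sketched as an RSU that caches the latest model per VUE and re-averages on each upload. This is our own simplification of Algorithm 3, with hypothetical identifiers and values:

```python
import numpy as np

class AsyncRSU:
    """Minimal async-FL aggregator sketch: the RSU keeps the latest model from
    each VUE and, whenever one VUE uploads, re-averages (weighted by sample
    counts) and returns the result to that VUE only."""

    def __init__(self):
        self.models = {}
        self.counts = {}

    def upload(self, vue_id, model, n_samples):
        self.models[vue_id] = np.asarray(model, dtype=float)
        self.counts[vue_id] = n_samples
        total = sum(self.counts.values())
        return sum((self.counts[k] / total) * m for k, m in self.models.items())

rsu = AsyncRSU()
g1 = rsu.upload("vue1", [2.0, 0.4], 10)  # only vue1 so far: its own model back
g2 = rsu.upload("vue2", [1.0, 0.2], 10)  # equal weights: elementwise mean
```

Because uploads arrive one at a time, no VUE ever waits for the others, which is exactly the property that keeps the VUE-RSU links lightly loaded.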

### IV-D Cost of Communication with the RSU

In all three methods used to estimate the GPD parameters, CEN, sync-FL, and async-FL, the QSI samples or local/global models are exchanged between VUEs and RSU by reusing the RBs available for V2V communication. This communication between VUEs and the RSU for GPD parameter estimation introduces additional latencies to the ongoing V2V communication. Such latencies from GPD parameter learning can be seen as additional costs for URLLC applications. In this regard, modeling the cost of uploading/downloading the learning models or queue samples in terms of an additional delay on V2V communication is illustrated in Fig. 3 and discussed next.

Let $s_g$, $s_\theta$, and $s_q$ be the sizes (in bits) of the gradient values, the GPD parameters, and a queue sample of any VUE, respectively. Suppose VUE $k$ has $n_k$ new samples, and its uplink and downlink rates to the RSU are $r_k^{\mathrm{UL}}$ and $r_k^{\mathrm{DL}}$, respectively. In the CEN approach, VUE $k$ dedicates a time $n_k s_q / r_k^{\mathrm{UL}}$ to upload all its new queue samples to the RSU and a time $s_\theta / r_k^{\mathrm{DL}}$ to download the GPD parameters from the RSU. Since all VUEs access the RSU simultaneously, the RSU will schedule VUEs over the RBs that are already allocated for their V2V communication links. As a result, an additional delay of $n_k s_q / r_k^{\mathrm{UL}} + s_\theta / r_k^{\mathrm{DL}}$ is introduced for VUE $k$'s V2V communication.

Similar to CEN, in sync-FL, the RSU schedules VUEs due to their simultaneous access to the RSU. However, in sync-FL, only the learning models are shared. Therefore, the corresponding uplink and downlink durations $(s_g + s_\theta)/r_k^{\mathrm{UL}}$ and $(s_g + s_\theta)/r_k^{\mathrm{DL}}$ are introduced as additional delays for VUE $k$'s V2V communication. Similar delays are observed for the async-FL approach. However, in async-FL, VUEs access the RSU independently, so lower interference on VUE-RSU communication links can be expected compared to sync-FL. Therefore, higher rates $r_k^{\mathrm{UL}}$ and $r_k^{\mathrm{DL}}$, and thus lower delays on V2V communication, can be expected in async-FL compared to the other two methods.
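The delay accounting above reduces to simple bits-over-rate arithmetic. A sketch with hypothetical sizes and rates follows; all numbers are illustrative and not taken from Table I:

```python
def exchange_delay(bits_up, bits_down, rate_up, rate_down):
    """Extra V2V delay from one learning exchange with the RSU:
    upload time plus download time (bits / achievable rate)."""
    return bits_up / rate_up + bits_down / rate_down

# Hypothetical sizes: CEN uploads n queue samples (s_q bits each) and downloads
# two 32-bit GPD parameters; FL exchanges a fixed-size model in each direction.
s_q, s_model, n = 32, 128, 50
d_cen = exchange_delay(n * s_q, 2 * 32, rate_up=1e6, rate_down=1e6)
d_fl = exchange_delay(s_model, s_model, rate_up=1e6, rate_down=1e6)
```

The key qualitative point survives any choice of constants: the CEN delay grows linearly with the number of samples `n`, while the FL delay is fixed by the model size.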

## V Simulation Results and Analysis

For our simulations, we consider a network based on a 250 m × 250 m Manhattan mobility model with nine intersections. In this setting, a road consists of two lanes of 4 m width in each direction. We uniformly deploy VUE pairs within each lane with the vRx always following the vTx at a speed of 60 km/h and a fixed gap of 50 m. VUEs share 60 RBs, and each vTx has a fixed maximum transmit power budget. The RB allocation per zone is adopted from [5] and [3]. The rest of the parameter values are presented in Table I.

Para. | Value | Para. | Value | Para. | Value |
---|---|---|---|---|---|

-68.5 dBm | 180 kHz | 10 | |||

-54.5 dBm | -174 dBm/Hz | (50,0.005) | |||

1.61 | 46.29 kb | (1,1000) | |||

15 m | 0.001 | (1,0) |

### V-A Centralized vs. Distributed GPD Parameter Estimation

Fig. 4 compares the accuracy of GPD parameter estimation using CEN and async-FL. Fig. (a)a shows the estimated GPDs for the CEN and async-FL approaches for networks with 20, 60, and 100 VUE pairs, with the original samples plotted alongside the estimated distributions. From Fig. (a)a, it can be noted that the estimations of async-FL are almost equivalent to the CEN estimations. To evaluate the accuracy of GPD parameter estimation numerically, we use the MLE-based cost function in (17); the corresponding results are illustrated in Fig. (b)b. Furthermore, Fig. (b)b shows the impact of the number of iterations used in SVRGD on the accuracy of GPD parameter estimation in CEN and async-FL. The selected scenarios differ in the number of queue length samples exceeding $q_0$, ranging from the smallest to the largest sample size. When one iteration is used for SVRGD (at the RSU in CEN and at the VUEs in async-FL), Fig. (b)b shows that more samples yield a lower cost, i.e., better accuracy of GPD parameter estimation; there, the cost of async-FL is slightly higher than that of CEN. Increasing the number of iterations used in SVRGD reduces the cost rapidly at first, after which the reductions become insignificant. When two iterations are used for SVRGD, FL yields a lower cost and, thus, a higher accuracy in parameter estimation compared to CEN. When a larger number of iterations per SVRGD is used, the cost of async-FL is only marginally higher than that of CEN for small sample sizes, while for the largest sample size async-FL yields a lower cost than CEN. This highlights that the performance of FL improves over a centralized SVRGD-based estimator as the sample size increases.

In Fig. 5, we compare the amount of data exchanged and the achieved reliability, in terms of maintaining the queue length below $q_0$, for different VUE densities. While reliability decreases as the number of VUEs increases, async-FL achieves a reliability that is slightly lower than that of the CEN approach for small networks, while outperforming CEN as the network grows. Note that the CEN method requires all VUEs to upload all their queue length samples to the RSU and to receive the estimated GPD parameters. In contrast, in async-FL, VUEs upload their locally estimated learning model and receive the global estimate of the model. For a small number of VUEs, the sample size of the network is small, and, thus, CEN can operate efficiently using very few data samples. In contrast, in async-FL, VUEs must upload and download both parameters and gradients, yielding a higher data exchange compared to CEN. However, as the number of VUEs increases (beyond 28), the sample size grows, and thus CEN incurs a higher amount of data exchanged between the RSU and VUEs compared to async-FL. The reduction in exchanged data in async-FL compared to CEN is about 27% for moderate densities and improves up to 79% for the densest setting. Finally, Fig. 5 clearly demonstrates that the async-FL approach is particularly effective for large-scale and dense vehicular networks.

### V-B Performance Evaluation

Next, the proposed approaches, CEN, sync-FL, and async-FL, which utilize EVT to characterize the tail distribution of queue lengths, are compared with three baseline models, namely:
*i)* FP:
a V2V network where vTxs use fixed transmit power,
*ii)* QSO:
a V2V network with the objective of power minimization while ensuring only the queue stability (3)-(4),
and
*iii)* QSR:
a V2V network that minimizes transmit power while focusing on the probabilistic constraint on average queue length and the queue stability (3)-(5).

Fig. (a)a compares the average transmit power of all approaches for different VUE densities. For a fair comparison, the transmit power of the vTxs in FP is chosen as the average of the transmit powers of the other five methods. The baseline QSO, which is oblivious to reliability, consumes the least transmit power of all methods. The QSR baseline, which takes reliability into account while neglecting VUEs with extreme queue lengths, exhibits lower power consumption than all three proposed approaches for small networks. As the network grows, QSR consumes higher power than async-FL on average, and for the densest settings it is the most power-consuming method. In QSR, there is no control over the number of VUEs with extreme queue lengths, which increases with the network size, and thus their power consumption degrades the performance of QSR. Both the CEN and sync-FL methods exhibit almost equal average power consumption, while async-FL uses less transmit power than CEN and sync-FL. The lower transmit power needed to upload/download learning models, owing to the asynchronous communication between the RSU and VUEs in async-FL, explains the power reductions therein. The power reductions in async-FL compared to CEN and sync-FL are negligible for small networks, improve up to 31.6%, and remain around 35% for the densest settings.

The average queue length as a function of the total number of VUE pairs for all baseline and proposed methods is shown in Fig. (b)b. Here, the average queue length reflects the average queuing latency. In FP, due to the low and fixed transmit power, the VUE queues grow large even for a few VUE pairs. Since the fixed power increases with the number of VUEs, as shown in Fig. (a)a, the average queue length decreases at first and then rises again due to the increased interference in the network. Although QSO has the lowest power consumption, it yields higher queuing latency than all other methods except FP. All three proposed techniques exhibit similar queue lengths on average, while QSR results in the lowest average queuing latency. Compared to QSR, the three proposed methods, which control VUEs with extreme queue lengths, suffer up to a three-fold increase in average queuing latency in the densest settings.

Fig. 7 plots the maximum queue length, which is proportional to the worst-case latency, observed for all methods as a function of the total number of VUE pairs. As with the average queue lengths, FP and QSO exhibit the highest worst-case latencies. QSR, which has the lowest average queue lengths, displays higher worst-case queue lengths than CEN and sync-FL for small-to-moderate network sizes, and it fails to outperform async-FL for any number of VUE pairs. Although QSR limits the fraction of VUE queue lengths exceeding the threshold and provides the best average latency, Fig. 7 shows that QSR neglects VUEs with extremely large queue lengths (worst-case VUEs). The three proposed methods CEN, sync-FL, and async-FL, which control the tail distribution of the queue lengths, yield almost equal worst-case queuing latencies up to moderate network sizes, with considerable reductions in worst-case latencies compared to QSR. Further increasing the number of VUE pairs increases the number of queue length samples exceeding the threshold, for which frequent communications between VUEs and the RSU take place. As a result, the learning procedure imposes undesirable delays on V2V communication, and high worst-case latencies can be observed in the proposed methods. However, due to the asynchronous nature of async-FL, VUEs communicate with the RSU independently, so the delay imposed by model sharing is lower in async-FL than in CEN and sync-FL. Hence, async-FL yields the largest reductions in worst-case latencies over QSR, CEN, and sync-FL for large numbers of VUE pairs.

The reliability, in terms of the probability that the queue lengths are maintained below the threshold, is presented for all methods in Fig. 8 as a function of the total number of VUE pairs. Note that FP and QSO, which make no attempt to improve V2V communication reliability, are the first and second most unreliable methods, respectively. Since QSR has a reliability constraint, it yields greatly improved reliability over FP and QSO. CEN, sync-FL, and async-FL control the tail distribution of queue lengths along with the reliability constraint and thus exhibit further improvements in reliability, i.e., outage reductions, compared to QSR. As in the discussion of the maximum queue lengths, asynchronous model sharing in async-FL reduces the delays introduced by RSU-VUE communication compared to the CEN and sync-FL methods. As a result, for large networks, lower queue lengths and thus reduced outages of async-FL over CEN and sync-FL can be observed in Fig. 8. The reductions in outages (or reliability gains) of async-FL over QSR grow with the network size, including at 76 VUE pairs; for the largest network, async-FL also yields noticeable reductions in outages compared to CEN and sync-FL.
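The reliability metric in Fig. 8 can be computed directly from queue-length samples as an empirical exceedance probability. Below is a minimal sketch assuming queue samples are available as a flat array; the function name and the synthetic exponential data are illustrative, not from the paper.

```python
import numpy as np

def outage_probability(queue_samples, threshold):
    """Empirical probability that the queue length exceeds `threshold`.

    The reliability plotted against the number of VUE pairs is one
    minus this outage probability.
    """
    q = np.asarray(queue_samples, dtype=float)
    return float(np.mean(q > threshold))

# Toy example with synthetic exponential queue-length samples.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=20.0, size=100_000)
p_out = outage_probability(samples, threshold=60.0)
reliability = 1.0 - p_out  # outage is close to exp(-3) for this toy model
```
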

Fig. 9 illustrates the mean and standard deviation of the queue length tail distributions of the QSR, CEN, sync-FL, and async-FL methods for different numbers of VUE pairs. The standard deviation at a given network size is drawn on top of the corresponding mean value to clearly highlight the fluctuations of queue lengths above the threshold. Note that FP and QSO are omitted since their large means and standard deviations do not scale well with the other four methods. CEN exhibits the lowest means and standard deviations of extreme queue lengths for small-to-moderate network sizes. For larger networks, async-FL displays the lowest mean and fluctuation of queue lengths exceeding the threshold, proving to be the best candidate for URLLC with a large number of VUE pairs, with sizable reductions in average extreme queue lengths compared to the QSR, CEN, and sync-FL methods. From Fig. 9, we can see that QSR has the highest averages of extreme queue lengths among the four methods. However, the fluctuations of queue lengths above the threshold are high in QSR only for small networks; beyond that, the highest fluctuations in extreme queue lengths are seen in both CEN and sync-FL. For the largest network, async-FL yields the greatest reductions in these fluctuations compared to the QSR, CEN, and sync-FL methods.
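The exceedance statistics shown in Fig. 9 are exactly what EVT models: by the Pickands-Balkema-de Haan theorem, excesses over a high threshold are approximately generalized Pareto (GPD) distributed. The sketch below fits the GPD shape and scale from the mean and variance of the excesses (method of moments); the function and variable names are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def gpd_moment_fit(samples, threshold):
    """Method-of-moments fit of a generalized Pareto distribution to the
    excesses of `samples` over `threshold` (Pickands-Balkema-de Haan).

    Returns (shape xi, scale sigma, mean excess, std of excesses).
    For GPD: mean = sigma/(1-xi), var = mean^2/(1-2*xi), which inverts to
    the moment estimators used below.
    """
    q = np.asarray(samples, dtype=float)
    exceed = q[q > threshold] - threshold      # excesses over the threshold
    m, v = exceed.mean(), exceed.var()
    xi = 0.5 * (1.0 - m * m / v)               # moment estimate of the shape
    sigma = 0.5 * m * (1.0 + m * m / v)        # moment estimate of the scale
    return xi, sigma, m, np.sqrt(v)

# Toy check: exponential tails correspond to a GPD with shape xi ~ 0 and
# scale equal to the exponential scale parameter.
rng = np.random.default_rng(1)
q = rng.exponential(scale=10.0, size=500_000)
xi, sigma, mean_exc, std_exc = gpd_moment_fit(q, threshold=20.0)
```
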

The queue length CCDF and transmit power cumulative distribution function (CDF) of QSR and async-FL for different vTx-vRx distances are shown in Fig. 10. According to Fig. 10(a), as the vTx-vRx distance increases, the queue lengths increase in both QSR and async-FL due to the reduced over-the-air data rates. QSR, which essentially neglects the queue lengths exceeding the threshold, exhibits longer tails than the tail distribution-aware async-FL method. async-FL reduces the average queue length compared to QSR for vTx-vRx distances of 20 m and 80 m, while for 50 m QSR yields a reduction in the average queue length over async-FL. In terms of the worst-case queue lengths, i.e., VUEs with queue lengths exceeding the threshold, async-FL achieves reductions over QSR for all three vTx-vRx distances of 20 m, 50 m, and 80 m. Fig. 10(b) shows that both QSR and async-FL consume less power in networks where the vTxs are close to their corresponding vRxs. For larger vTx-vRx distances, vTxs need higher transmit powers to serve their vRxs, yielding increased transmit powers in both methods. async-FL exploits the characteristics of the queue length tail distribution to reduce the number of VUEs with large queue lengths and thus minimizes the number of transmissions that need high data rates to meet the target reliability. In contrast, QSR has no control over queue lengths exceeding the threshold and therefore requires high data rates to serve VUEs with extremely large queue lengths, yielding higher transmit power consumption than async-FL. Fig. 10(b) shows that async-FL reduces the average transmit power consumption compared to QSR for all three vTx-vRx distances of 20 m, 50 m, and 80 m.

## VI Conclusions

In this paper, we have formulated the problem of joint power control and resource allocation for a V2V communication network as a network-wide power minimization problem subject to ultra-reliability and low-latency constraints. The URLLC constraints are characterized using extreme value theory (EVT) and modeled via the tail distribution of the network-wide queue lengths over a predefined threshold. Leveraging concepts of federated learning (FL), a distributed learning mechanism is proposed in which VUEs estimate the tail distribution locally with the assistance of an RSU. Here, FL enables VUEs to learn the tail distribution of the network-wide queues locally without sharing the actual queue length samples, reducing unnecessary overheads. Combining the EVT and FL approaches, we have proposed a Lyapunov-based distributed transmit power and resource allocation procedure for VUEs. Using simulations, we have shown that the proposed method learns the statistics of the network-wide queues with high accuracy. Furthermore, the proposed method shows considerable gains in reducing extreme events, in which the queue lengths grow beyond a predefined threshold, compared to systems that account for reliability by imposing probabilistic constraints on the average queue lengths.
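The FL aggregation step described above can be sketched as a sample-count-weighted average of the locally estimated GPD parameters, in the spirit of federated averaging; all names and numbers below are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def federated_average(local_params, local_counts):
    """Sample-count-weighted averaging of locally learned GPD parameters
    (shape xi, scale sigma), sketching the aggregation an RSU would
    perform without ever seeing the raw queue-length samples.
    """
    w = np.asarray(local_counts, dtype=float)
    w /= w.sum()                                    # normalize the weights
    params = np.asarray(local_params, dtype=float)  # shape (num_vues, 2)
    return params.T @ w                             # global (xi, sigma)

# Example: three VUEs with local (xi, sigma) estimates and sample counts.
local = [(0.10, 8.0), (0.05, 9.5), (0.20, 7.0)]
counts = [100, 300, 100]
xi_g, sigma_g = federated_average(local, counts)
```

VUEs with more exceedance samples contribute proportionally more to the global model, mirroring the weighting used in standard federated averaging.
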

## Appendix A Proof of Proposition 1

First, consider the one-slot drift of the Lyapunov function.

(23)

Using this relation for each of the terms in (23), upper bounds for each of the above terms can be derived as follows:

(24a)

(24b)
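As a generic illustration of the bounding steps above: for a quadratic Lyapunov function over queue lengths, the one-slot drift admits a standard upper bound. The symbols $q_k$ (queue length), $A_k$ (arrivals), $R_k$ (service rate), and the constant $B$ are illustrative assumptions and may differ from the paper's exact notation.

```latex
% Illustrative queue dynamics: q_k(t+1) = \max[q_k(t) - R_k(t), 0] + A_k(t),
% and Lyapunov function L(t) = \tfrac{1}{2}\sum_k q_k(t)^2.
% Using (\max[q - R, 0] + A)^2 \le q^2 + R^2 + A^2 + 2q(A - R):
\Delta L(t) \triangleq L(t+1) - L(t)
  = \frac{1}{2}\sum_{k}\Big(q_k(t+1)^2 - q_k(t)^2\Big)
  \le B + \sum_{k} q_k(t)\,\big(A_k(t) - R_k(t)\big),
\qquad
B \triangleq \frac{1}{2}\sum_{k}\Big((R_k^{\max})^2 + (A_k^{\max})^2\Big),
```

where $R_k^{\max}$ and $A_k^{\max}$ denote per-slot bounds on service and arrivals, so that $B$ is a finite constant independent of the queue state.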
