With the proliferation of ubiquitous wireless devices and services, wireless communication networks are becoming increasingly complex. In particular, the arrival of fifth-generation (5G) mobile networks will provide connectivity to devices ranging from sensors and cell phones to vehicles and drones, shifting the paradigm of how things connect. This will give rise to ultra-dense deployment scenarios, where a massive number of transmissions compete for a limited amount of wireless resources.
One of the main drivers of higher throughput in 5G networks and beyond is leveraging the bandwidth available at higher frequency bands, such as the mmWave band. However, since physical wireless resources are inherently limited, another way to enhance the performance of wireless networks is to improve their spectral efficiency. This becomes extremely challenging as networks grow denser, since the interference among concurrent transmissions can significantly degrade network performance.
To deal with these challenges, there has been a plethora of work on radio resource management in wireless networks. The approaches proposed in the literature use a wide variety of techniques from optimization theory, information theory, and game theory to attack various radio resource management sub-problems, including power control, link scheduling, cell association, sub-carrier assignment, and beamforming [1, 2, 3, 4, 5, 6, 7, 8]. Nevertheless, solving the radio resource management problem in its most general form is NP-hard, implying that as the network size increases, it becomes more challenging to derive an optimal solution [9, 10]. That is why most prior works in the literature devise approximate solutions in various regimes of system parameters. With the recent success of machine learning, and particularly deep learning, over the past few years, learning-based algorithms have also been shown to yield promising solutions for resource management in wireless networks [11, 12, 13, 14]. More recently, the natural graph structure of wireless interference patterns has been leveraged in graph neural network (GNN) architectures [15, 16, 17], which are better suited to scalability and transference.
In this paper, we consider a wireless interference channel comprising multiple transmitter-receiver pairs, and seek a power control policy that mitigates the interference among concurrent transmissions with respect to both overall system performance and fairness across pairs. We model the network topology by a conflict graph, where each edge represents an interference link that is strong relative to the desired signal power levels, while absent edges correspond to interference links that are weak enough to be treated as noise. We then leverage the instantaneous conflict graph in a GNN that outputs a power allocation decision for each transmitter. We pose the power control problem as the optimization of the filter weights of the GNN such that a network-wide convex utility function is maximized subject to minimum rate constraints for all receivers.
Channel values in wireless networks fluctuate over time and across topologies. Therefore, even for a given density of transmitters and receivers, a fixed and strict minimum rate constraint may not be satisfiable for some receivers with poor channel conditions, and is hard to define a priori. Hence, we introduce a counterfactual optimization formulation, in which an adaptive slack variable is subtracted from the minimum rate constraints. We then utilize a primal-dual optimization algorithm to learn optimal policies and their associated optimal constraint slacks. We demonstrate through simulation results how our proposed framework learns a power control strategy that strikes a balance between sum-rate and cell-edge performance, quantified by the percentile rate achieved by the users. In addition, we illustrate how the algorithm adaptively tunes the slack variable, and hence the minimum rate constraints for the receivers, given the density of the network.
The rest of this paper is organized as follows. In Section II, we present the system model and formulate the problem. In Section III, we provide the details of the GNN architecture. In Section IV, we show how counterfactual optimization adapts the constraints as needed. In Section V, we present our simulation results. Finally, we conclude the paper in Section VI.
II. System Model and Problem Formulation
We consider a wireless interference network with a set of $n$ transmitters $\{\mathrm{Tx}_i\}_{i=1}^{n}$ and a set of $n$ receivers $\{\mathrm{Rx}_i\}_{i=1}^{n}$, where each transmitter $\mathrm{Tx}_i$ intends to communicate to its corresponding receiver $\mathrm{Rx}_i$. The channel gain between each transmitter $\mathrm{Tx}_j$ and each receiver $\mathrm{Rx}_i$ in the network is a random variable denoted by $h_{ij}$. We collect all the channel gains across the network in a square matrix $\mathbf{H}$, whose $(i,j)$ element equals $h_{ij}$. Each channel gain in $\mathbf{H}$ is composed of a constant long-term component, resulting from path loss and shadowing (signal attenuation due to the physical distance between the transmitter and receiver nodes, together with deviations caused by obstacles in the environment), and a short-term fast fading component, resulting from multi-path propagation in the channel and node mobility. In general, we assume that $\mathbf{H}$ is drawn from a joint probability distribution.
Assuming that all transmissions occur at the same time and on the same frequency band, they cause interference to each other. Therefore, it is imperative for each transmitter to set its transmit power such that a global network-wide objective function is optimized. In particular, for each channel realization $\mathbf{H}$, we denote the vector of power allocation variables by $\mathbf{p}$, whose $i$th component, $p_i$, represents the transmit power allocated to transmitter $\mathrm{Tx}_i$. This implies that the signal-to-interference-plus-noise ratio (SINR) at each receiver $\mathrm{Rx}_i$ can be written as

\[
\Gamma_i(\mathbf{p}, \mathbf{H}) = \frac{p_i |h_{ii}|^2}{\sigma^2 + \sum_{j \neq i} p_j |h_{ij}|^2},
\]

where $\sigma^2$ denotes the noise variance. The Shannon capacity of the link between transmitter $\mathrm{Tx}_i$ and receiver $\mathrm{Rx}_i$ is then given by

\[
f_i(\mathbf{p}, \mathbf{H}) = \log_2\!\left(1 + \Gamma_i(\mathbf{p}, \mathbf{H})\right).
\]
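As a concrete illustration, the per-receiver SINR and Shannon capacity above can be computed directly from a channel matrix and a power vector; the following NumPy sketch (function and argument names are illustrative, with unit-variance noise assumed by default) does exactly that:

```python
import numpy as np

def link_rates(H, p, noise_var=1.0):
    """Per-receiver SINR and Shannon rate for the interference channel.

    H[i, j] is the channel gain from transmitter j to receiver i, and p[i]
    is the transmit power of transmitter i (names are illustrative).
    """
    G = np.abs(H) ** 2                  # channel power gains |h_ij|^2
    signal = np.diag(G) * p             # desired-link received power
    interference = G @ p - signal       # sum over j != i of |h_ij|^2 p_j
    sinr = signal / (noise_var + interference)
    return sinr, np.log2(1.0 + sinr)    # Shannon capacity in bits/s/Hz
```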
Due to the aforementioned short-term fading phenomenon, channel realizations vary over time, implying that the power allocation variables also need to be adapted over time. This motivates considering the ergodic average rate $x_i = \mathbb{E}_{\mathbf{H}}\!\left[f_i(\mathbf{p}(\mathbf{H}), \mathbf{H})\right]$, which captures the throughput experienced by each receiver over a long period of time. The goal is to determine a power allocation policy parameterized by a fixed parameter vector $\boldsymbol{\theta}$, where, for each channel realization $\mathbf{H}$, the transmit powers are determined by $\mathbf{p}(\mathbf{H}) = \boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta})$. We formulate the power allocation problem as finding the parameter vector $\boldsymbol{\theta}$ that provides the best-performing policy, i.e.,

\[
\begin{aligned}
\max_{\boldsymbol{\theta}, \, \mathbf{x}} \quad & \mathcal{U}(\mathbf{x}) \\
\text{s.t.} \quad & x_i \le \mathbb{E}_{\mathbf{H}}\!\left[f_i(\boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta}), \mathbf{H})\right], \quad \forall i, \\
& x_i \ge f_{\min}, \quad \forall i, \\
& \mathbf{0} \le \boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta}) \le P_{\max} \mathbf{1}. \qquad (3)
\end{aligned}
\]

In the above optimization problem, $\mathcal{U}$ denotes a convex function of the receivers' ergodic rates throughout the network, $f_{\min}$ denotes a minimum capacity that each receiver needs to satisfy, and $P_{\max}$ denotes the maximum transmit power. The minimum capacity constraints are included so as to avoid allocating all resources to “cell-center” receivers, hence balancing the power control policy to treat “cell-center” and “cell-edge” receivers fairly.
The problem in (3) is generally challenging to solve, mainly due to the non-convexity of the constraints. Moreover, aside from the effort of solving (3), the choice of the parameterization function is critical for achieving an optimal policy with good practical performance. Fully-connected deep neural networks (DNNs) may appear to be a natural choice here, due to their universality property, which states that given enough depth and/or width, they have sufficient expressive power to approximate any function to any desired accuracy [20, 13]. However, despite these theoretical properties, such a parameterization does not scale well, as the parameter dimension grows with the number of transmitter-receiver pairs in the network, and, more critically, it does not generalize over varying network topologies. In the next section, we develop a graph neural network architecture suitable for solving the power allocation problem in networks of any size.
III. Random Edge Graph Neural Networks
We present the random edge graph neural network (REGNN)
architecture as a parameterization for the resource management policy. Broadly speaking, graph neural networks (GNNs) can be viewed as a generalization of convolutional neural network (CNN) architectures, whose popularity and practical benefits stem largely from their significantly reduced parameter dimension relative to traditional DNNs, their invariance to input size, and their so-called translation equivariance.
Graph neural networks generalize the convolutional operations performed in CNNs with a convolution performed on arbitrarily structured data. This structure is given in the form of a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ are the nodes of the graph connected by weighted edges $\mathcal{E}$. We further use a matrix $\mathbf{S}$ as a graph shift operator that encodes the weights of the edges $\mathcal{E}$. The elements $[\mathbf{S}]_{ij}$ take on higher values when node $i$ is closely related to node $j$, smaller values when they are less related, and a value of 0 if they are unrelated. The graph convolution of an input signal $\mathbf{x}$, whose $i$th element is the signal value at transmitter $i$, and a filter $\boldsymbol{\theta} = [\theta_0, \ldots, \theta_{K-1}]$ with respect to the graph encoded in $\mathbf{S}$ is a vector $\mathbf{y}$, defined as

\[
\mathbf{y} = \sum_{k=0}^{K-1} \theta_k \mathbf{S}^{k} \mathbf{x}. \qquad (4)
\]

Observe that the term $\mathbf{S}^{k} \mathbf{x}$ shifts the elements of $\mathbf{x}$ in turns according to the weights and structure defined in $\mathbf{S}$.
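The graph convolution above reduces to repeated applications of the shift operator to the input signal; a minimal NumPy sketch (function name illustrative):

```python
import numpy as np

def graph_filter(S, x, theta):
    """Graph convolution y = sum_k theta[k] * S^k x over shift operator S."""
    y = np.zeros(len(x))
    shifted = np.asarray(x, dtype=float)   # S^0 x = x
    for k, tap in enumerate(theta):
        if k > 0:
            shifted = S @ shifted          # apply one more graph shift
        y += tap * shifted
    return y
```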
A GNN is constructed with a sequence of $L$ so-called hidden layers, where the output of layer $l-1$ is fed as an input to layer $l$. Denote by $\mathbf{z}_{l-1}$ the input to layer $l$, and by $\boldsymbol{\theta}_{l} = [\theta_{l0}, \ldots, \theta_{l(K_l-1)}]$ the graph filter at layer $l$. With shift operator $\mathbf{S}$, the output of layer $l$, denoted by $\mathbf{z}_{l}$, is computed as a composition of the graph filter and a pointwise, nonlinear function $\sigma_{l}$, i.e.,

\[
\mathbf{z}_{l} = \sigma_{l}\!\left( \sum_{k=0}^{K_l - 1} \theta_{lk} \mathbf{S}^{k} \mathbf{z}_{l-1} \right). \qquad (5)
\]

The full GNN is then formed as the composition of the layer operations in (5) for $l = 1, \ldots, L$. The input to the GNN is given as the initial graph input signal $\mathbf{z}_{0}$, defined on the nodes $\mathcal{V}$. While standard applications feature a fixed graph, we may also consider more generically an input graph $\mathbf{S}$, i.e., an input signal on the edges $\mathcal{E}$. When such inputs are drawn randomly from some distribution, this may otherwise be considered as a graph with random edge weights.
In the wireless interference network defined in Section II, a graph can be readily formed using the transmitter-receiver channel gains contained in the channel matrix $\mathbf{H}$. We define the graph shift operator as $\mathbf{S} = g(\mathbf{H})$, where $g(\cdot)$ is some function that preserves the sparsity pattern and node ordering of the channel matrix $\mathbf{H}$. Simple choices for $g$ include the element-wise magnitude $|h_{ij}|$ or squared magnitude $|h_{ij}|^2$. In this work, we use the information-theoretic optimality condition for treating interference as noise, derived in [18], to classify the interference links between all non-associated transmitter-receiver pairs as strong or weak. In particular, we take an approach similar to the one in [18], where for each interference link between $\mathrm{Tx}_j$ and $\mathrm{Rx}_i$ ($j \neq i$), we define indicator variables that classify the link as strong or weak, with two design parameters controlling the sparsity of the graph. We then construct $\mathbf{S}$ by retaining the entries of strong interference links, zeroing out the entries of weak ones, and setting the entries of the direct links between each associated pair $\mathrm{Tx}_i$ and $\mathrm{Rx}_i$ accordingly. We finally normalize the resulting matrix by its norm.
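The construction above can be sketched as follows; the threshold rule and its parameters `mu1` and `mu2`, the diagonal treatment, and the spectral-norm normalization are illustrative assumptions rather than the paper's exact design:

```python
import numpy as np

def conflict_graph(H, mu1=0.1, mu2=0.1):
    """Illustrative conflict-graph shift operator built from channel matrix H.

    An interference edge (i, j) is kept only when its power gain is large
    relative to both associated direct-link gains; mu1 and mu2 stand in for
    the design parameters in the text, and the diagonal treatment and
    spectral-norm normalization are simplifying assumptions.
    """
    G = np.abs(H) ** 2
    d = np.diag(G).copy()                  # direct-link power gains
    n = G.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and G[i, j] >= mu1 * d[i] and G[i, j] >= mu2 * d[j]:
                S[i, j] = G[i, j]          # strong interference: keep edge
    np.fill_diagonal(S, d)                 # direct links on the diagonal
    norm = np.linalg.norm(S, 2)
    return S / norm if norm > 0 else S
```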
As the edge weights of $\mathbf{S}$ are derived from the random channel gain values, we consider the previously described case of GNNs with random input graphs, called the random edge graph neural network (REGNN), with edges drawn from the joint distribution of the channel gains. The full REGNN parameterization of the resource management policy can then be described as a GNN with a constant input $\mathbf{z}_0 = \mathbf{1}$, i.e.,

\[
\mathbf{p}(\mathbf{H}) = \boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta}) = \mathbf{z}_{L}, \qquad (7)
\]

where the parameter vector $\boldsymbol{\theta}$ contains the sets of filter weights across all layers, i.e., $\boldsymbol{\theta} = [\boldsymbol{\theta}_1, \ldots, \boldsymbol{\theta}_L]$. The final nonlinear activation $\sigma_L$ can be chosen to scale the output between $0$ and $P_{\max}$. Note that with a filter length of $K_l$ at the $l$th layer, the total number of parameters in a GNN is $\sum_{l=1}^{L} K_l$, a number significantly smaller than that of a fully-connected DNN and invariant to the size of the input graph, i.e., the number of transmitter-receiver pairs.
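A minimal forward pass of such a parameterization might look as follows; the ReLU hidden activation is an assumption, and the final sigmoid scaled by `P_max` realizes the output scaling described above:

```python
import numpy as np

def regnn_forward(S, thetas, P_max=1.0):
    """L-layer REGNN with constant input z_0 = 1 (an illustrative sketch).

    `thetas` holds one list of filter taps per layer; hidden layers use a
    ReLU (an assumption) and the last layer a sigmoid scaled by P_max, so
    the output is a valid power allocation for every node.
    """
    z = np.ones(S.shape[0])                    # constant graph input signal
    for l, theta in enumerate(thetas):
        y = np.zeros_like(z)
        shifted = z.copy()
        for k, tap in enumerate(theta):        # graph filter of layer l
            if k > 0:
                shifted = S @ shifted
            y += tap * shifted
        if l < len(thetas) - 1:
            z = np.maximum(y, 0.0)             # ReLU on hidden layers
        else:
            z = P_max / (1.0 + np.exp(-y))     # sigmoid scaled to (0, P_max)
    return z
```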
We point out that a key feature of REGNNs that makes them well suited for learning in wireless networks is a structural property called permutation equivariance. Permutation equivariance implies that any permutation of the rows and columns of the channel matrix $\mathbf{H}$ (i.e., a relabeling of the indices of transmitter-receiver pairs in the wireless network) results in an equally permuted output for any REGNN as defined in (7); see [15]. This property is valuable in wireless networks because it facilitates training a REGNN to operate over many different geometric configurations of the transmitters and receivers in the network, which will invariably change over time in practice. We will demonstrate the effectiveness of REGNNs in achieving strong performance over a wide range of network configurations in Section V.
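This equivariance is easy to verify numerically for the graph filter at the core of the architecture; the snippet below checks that relabeling the nodes permutes the filter output accordingly. Since the REGNN input is the constant all-ones signal, which is itself permutation-invariant, the property extends to the full network:

```python
import numpy as np

# Numerical check of permutation equivariance for a graph filter:
# relabeling the nodes (rows/columns of S and entries of x) permutes the
# filter output in exactly the same way.
rng = np.random.default_rng(0)
n = 6
S = rng.random((n, n))
x = rng.random(n)
theta = [0.7, -0.3, 0.1]

def gfilter(S, x, theta):
    return sum(tap * np.linalg.matrix_power(S, k) @ x
               for k, tap in enumerate(theta))

P = np.eye(n)[rng.permutation(n)]          # random permutation matrix
out_perm = gfilter(P @ S @ P.T, P @ x, theta)
assert np.allclose(out_perm, P @ gfilter(S, x, theta))
```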
IV. Counterfactual Optimization
While we may utilize the REGNN to parameterize the resource allocation policy, a training algorithm must be used to find the set of GNN filter weights that performs well under the metrics and constraints defined in (3). Training the filter weights is not straightforward here, in that the resulting policy must not only maximize the utility function $\mathcal{U}$, but also satisfy the minimum rate constraints $x_i \ge f_{\min}$. While constraints can generally be handled with a Lagrangian dual function, this requires explicit a priori knowledge of an achievable minimum rate $f_{\min}$. Such knowledge is generally not available in practice, as complex interference patterns between concurrent transmissions in different network densities may make some lower bounds infeasible.
We address this problem with what may be referred to as a counterfactual [19]. That is, we consider a slack term $\gamma \ge 0$ for the minimum capacity constraint, and try to find the optimal policy under the loosened constraint. Any increase in slack will render a solution further from the intended solution of (3); as such, we further seek to minimize $\gamma$ under the condition that the problem remains feasible. More formally, we augment (3) with the counterfactual slack variable $\gamma$ as

\[
\begin{aligned}
\max_{\boldsymbol{\theta}, \, \mathbf{x}, \, \gamma \ge 0} \quad & \mathcal{U}(\mathbf{x}) - \alpha \gamma \\
\text{s.t.} \quad & x_i \le \mathbb{E}_{\mathbf{H}}\!\left[f_i(\boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta}), \mathbf{H})\right], \quad \forall i, \\
& x_i \ge f_{\min} - \gamma, \quad \forall i, \\
& \mathbf{0} \le \boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta}) \le P_{\max} \mathbf{1}, \qquad (8)
\end{aligned}
\]

where $\alpha > 0$ weights the penalty on the slack. In (8), along with optimizing the REGNN parameters $\boldsymbol{\theta}$ and ergodic average rates $\mathbf{x}$, we also minimize the value of the slack $\gamma$ that makes the problem feasible. Increasing $\gamma$ will lessen the achieved objective value in (8); however, too small a slack may make the constraints too tight to satisfy, rendering the problem unsolvable. The value of the counterfactual formulation lies in the fact that, should the preferred minimum rate $f_{\min}$ be unrealizable, the optimization of the slack variable will implicitly loosen this requirement just enough to find a solution.
We proceed to derive the training algorithm by introducing the Lagrangian function, with non-negative dual multipliers $\boldsymbol{\lambda}$ and $\boldsymbol{\mu}$ associated with the constraints in (8), as

\[
\mathcal{L}(\boldsymbol{\theta}, \mathbf{x}, \gamma, \boldsymbol{\lambda}, \boldsymbol{\mu}) = \mathcal{U}(\mathbf{x}) - \alpha \gamma + \boldsymbol{\lambda}^{\mathsf{T}}\!\left( \mathbb{E}_{\mathbf{H}}\!\left[\mathbf{f}(\boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta}), \mathbf{H})\right] - \mathbf{x} \right) + \boldsymbol{\mu}^{\mathsf{T}}\!\left( \mathbf{x} - (f_{\min} - \gamma) \mathbf{1} \right), \qquad (9)
\]

where $\alpha$ is the weight on the slack penalty in the objective of (8) and $\mathbf{f}(\cdot) = [f_1(\cdot), \ldots, f_n(\cdot)]^{\mathsf{T}}$. The Lagrangian in (9) provides a single, unconstrained objective function, which we can optimize using gradient-based methods. In particular, we seek to maximize $\mathcal{L}$ over the so-called primal variables $(\boldsymbol{\theta}, \mathbf{x}, \gamma)$, while subsequently minimizing it over the dual variables $(\boldsymbol{\lambda}, \boldsymbol{\mu})$, i.e.,

\[
\min_{\boldsymbol{\lambda}, \, \boldsymbol{\mu} \ge \mathbf{0}} \; \max_{\boldsymbol{\theta}, \, \mathbf{x}, \, \gamma \ge 0} \; \mathcal{L}(\boldsymbol{\theta}, \mathbf{x}, \gamma, \boldsymbol{\lambda}, \boldsymbol{\mu}). \qquad (10)
\]
We can now define the updates over an iteration index $t$ for each primal and dual variable by either adding or subtracting the partial gradient of $\mathcal{L}$ with respect to that variable. For the primal variables, this gives us the updates

\[
\boldsymbol{\theta}^{t+1} = \boldsymbol{\theta}^{t} + \varepsilon_{\boldsymbol{\theta}} \nabla_{\boldsymbol{\theta}} (\boldsymbol{\lambda}^{t})^{\mathsf{T}} \mathbb{E}_{\mathbf{H}}\!\left[\mathbf{f}(\boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta}^{t}), \mathbf{H})\right], \qquad (11)
\]
\[
\mathbf{x}^{t+1} = \mathbf{x}^{t} + \varepsilon_{\mathbf{x}} \left( \nabla \mathcal{U}(\mathbf{x}^{t}) - \boldsymbol{\lambda}^{t} + \boldsymbol{\mu}^{t} \right), \qquad (12)
\]
\[
\gamma^{t+1} = \left[ \gamma^{t} - \varepsilon_{\gamma} \left( \alpha - \mathbf{1}^{\mathsf{T}} \boldsymbol{\mu}^{t} \right) \right]_{+}, \qquad (13)
\]

where $\varepsilon_{\boldsymbol{\theta}}, \varepsilon_{\mathbf{x}}, \varepsilon_{\gamma}$ denote learning rates corresponding to the primal variables, and $\alpha$ denotes the weight on the slack penalty in the objective of (8). Note that in addition to updating $\boldsymbol{\theta}$ and $\mathbf{x}$ in (11)-(12), the counterfactual formulation updates the slack variable in (13) as the difference between the current slack and the dual variables. Likewise, we descend on the dual variables using the associated partial gradients of the Lagrangian, i.e.,

\[
\boldsymbol{\lambda}^{t+1} = \left[ \boldsymbol{\lambda}^{t} - \varepsilon_{\boldsymbol{\lambda}} \left( \mathbb{E}_{\mathbf{H}}\!\left[\mathbf{f}(\boldsymbol{\Phi}(\mathbf{H}; \boldsymbol{\theta}^{t+1}), \mathbf{H})\right] - \mathbf{x}^{t+1} \right) \right]_{+}, \qquad (14)
\]
\[
\boldsymbol{\mu}^{t+1} = \left[ \boldsymbol{\mu}^{t} - \varepsilon_{\boldsymbol{\mu}} \left( \mathbf{x}^{t+1} - (f_{\min} - \gamma^{t+1}) \mathbf{1} \right) \right]_{+}, \qquad (15)
\]

with $\varepsilon_{\boldsymbol{\lambda}}, \varepsilon_{\boldsymbol{\mu}}$ representing learning rates corresponding to the dual variables. The primal-dual gradient updates in (11)-(15) successively move the primal and dual variables towards maximum and minimum points of the Lagrangian dual function, respectively. The complete counterfactual primal-dual learning algorithm is summarized in Algorithm 1.
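To make the update structure concrete, the following toy sketch runs one counterfactual primal-dual iteration on a fixed channel, with the power vector itself playing the role of the primal variable in place of the GNN weights; the gradient is taken by finite differences, and the update expressions are a plausible rendering of the described steps, not the paper's own:

```python
import numpy as np

def primal_dual_step(p, gam, lam, rates_fn, f_min, alpha, P_max,
                     eps_p=1e-2, eps_gam=1e-2, eps_lam=1e-2, h=1e-4):
    """One counterfactual primal-dual iteration on a toy fixed-channel problem.

    rates_fn(p) returns per-receiver rates; the utility is the sum-rate, and
    the constraints are rates_fn(p) >= f_min - gam with slack gam >= 0.
    Gradients w.r.t. p use central finite differences, standing in for the
    model-free estimators mentioned in the text.
    """
    r = rates_fn(p)

    def lagrangian_p(q):                         # terms that depend on p
        rq = rates_fn(q)
        return rq.sum() + lam @ rq

    grad = np.array([(lagrangian_p(p + h * e) - lagrangian_p(p - h * e)) / (2 * h)
                     for e in np.eye(len(p))])
    p = np.clip(p + eps_p * grad, 0.0, P_max)            # primal ascent on powers
    gam = max(0.0, gam - eps_gam * (alpha - lam.sum()))  # counterfactual slack
    lam = np.maximum(0.0, lam - eps_lam * (r - f_min + gam))  # dual descent
    return p, gam, lam
```

Iterating this step drives the powers toward a utility-maximizing point while the multipliers enforce the (slackened) minimum-rate constraints.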
Our proposed method is unsupervised in the sense that we train the REGNN weights to optimize the utility and constraints in (8) directly rather than with labeled solutions. Therefore, this algorithm can be applied to all different types of radio resource management problems, whose objectives and constraints can be formulated as in (8), without the need to have any optimal solutions beforehand.
We point out that evaluating the updates in (11)-(15) may require computing potentially challenging gradients and expectations. The gradients in these updates can be replaced with well-known model-free gradient estimation methods, which rely only on function evaluations and channel sampling; see [13] for details on these approaches.
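A generic zeroth-order estimator of this kind, using only evaluations of a (possibly sampled) objective, can be sketched as follows; the smoothing radius `mu` and sample count are illustrative choices:

```python
import numpy as np

def zeroth_order_grad(func, theta, num_samples=64, mu=1e-2, rng=None):
    """Model-free (zeroth-order) gradient estimate of func at theta.

    Perturbs theta with Gaussian noise and uses function evaluations only;
    a generic stand-in for the model-free estimators referenced in the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(num_samples):
        u = rng.standard_normal(theta.shape)
        grad += (func(theta + mu * u) - func(theta - mu * u)) / (2 * mu) * u
    return grad / num_samples
```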
V. Simulation Results
We consider wireless networks with varying numbers of transmitter-receiver pairs, dropped randomly within a square area of side length 500 m. We drop the transmitters uniformly at random within the network area, ensuring a minimum pairwise distance of 35 m between them. Afterwards, for each transmitter, a receiver is dropped within an annulus centered at the transmitter, with inner and outer radii of 10 m and 100 m, respectively, according to a skewed distribution that biases the receiver's location towards its serving transmitter. Each drop is then run for 200 time steps. The long-term channel model consists of a standard dual-slope path-loss model [22, 23] and log-normal shadowing with 7 dB standard deviation. We also model short-term Rayleigh fading using the sum-of-sinusoids (SoS) technique proposed in [24]. The bandwidth is taken to be 10 MHz, the noise power spectral density is assumed to be -174 dBm/Hz, and the maximum transmit power is taken to be dBm. We utilize a sum-rate network utility function, $\mathcal{U}(\mathbf{x}) = \sum_{i} x_i$, and we set the minimum capacity $f_{\min}$ to bps/Hz for all receivers.
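The topology-drop protocol described above can be sketched as follows; drawing the receiver radius uniformly (rather than area-uniformly in the annulus) serves as a simple stand-in for the skewed, transmitter-biased distribution mentioned in the text:

```python
import numpy as np

def drop_network(n, area=500.0, min_tx_dist=35.0, r_in=10.0, r_out=100.0,
                 rng=None):
    """Random topology drop following the described protocol (illustrative).

    Transmitters are placed uniformly in an `area` x `area` square, rejecting
    candidates closer than `min_tx_dist` to an existing transmitter; each
    receiver is placed in an annulus [r_in, r_out] around its transmitter,
    with the radius drawn uniformly (biasing receivers toward the
    transmitter relative to an area-uniform drop).
    """
    rng = np.random.default_rng() if rng is None else rng
    tx = []
    while len(tx) < n:                         # rejection sampling for Tx drops
        cand = rng.uniform(0.0, area, size=2)
        if all(np.linalg.norm(cand - t) >= min_tx_dist for t in tx):
            tx.append(cand)
    tx = np.array(tx)
    radii = rng.uniform(r_in, r_out, size=n)
    angles = rng.uniform(0.0, 2 * np.pi, size=n)
    rx = tx + np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
    return tx, rx
```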
Figure 1 illustrates how the slack variable evolves during the course of training for different network densities. Increasing the number of transmitter-receiver pairs gives rise to higher levels of interference and lowers the average achievable rate of each receiver, making it harder to satisfy its minimum rate constraint. Therefore, as Figure 1 shows, our proposed algorithm indeed learns how to adaptively elevate the slack variable for denser deployments so as to make the optimization problem feasible and maximize the desired utility function.
Moreover, Figure 2 shows the achievable sum-rates and percentile rates of our proposed algorithm for different numbers of transmitter-receiver pairs, as compared with two baselines: time-division multiplexing, or TDM (transmitters activated in a round-robin fashion), and weighted minimum mean-squared error, or WMMSE [2]. As the figure shows, these two baselines represent two ends of a spectrum: TDM is completely fair across all pairs in the network, hence achieving excellent percentile-rate performance at the expense of poor sum-rate. WMMSE, on the other hand, optimizes only the sum-rate, hence sacrificing most of the pairs that experience poor channel conditions. Our proposed method, however, demonstrates a superior trade-off between sum-rate and percentile rate, balancing the rates experienced by “cell-center” and “cell-edge” receivers. In particular, it achieves sum-rate gains of up to 110% over TDM and percentile-rate gains of up to 2740% over WMMSE.
VI. Concluding Remarks
In this paper, we considered the problem of downlink power control in wireless networks with multiple transmitter-receiver pairs. We parameterized the power control policy as a graph neural network, whose edge weights are derived from the channel gains between the transmitters and receivers. We then proposed a primal-dual gradient-based optimization algorithm based on counterfactuals, which learns a power control policy that maximizes a convex network utility function with adaptive minimum rate constraints tuned to the actual network conditions. Simulation results show the superiority of our proposed algorithm compared to baseline methods in terms of the trade-off between average and percentile user rates.
-  R. Madan, J. Borran, A. Sampath, N. Bhushan, A. Khandekar, and T. Ji, “Cell association and interference coordination in heterogeneous LTE-A cellular networks,” IEEE Journal on Selected Areas in Communications, vol. 28, no. 9, pp. 1479–1489, 2010.
-  Q. Shi, M. Razaviyayn, Z.-Q. Luo, and C. He, “An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel,” IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4331–4340, 2011.
-  W. Yu, T. Kwon, and C. Shin, “Multicell coordination via joint scheduling, beamforming, and power spectrum adaptation,” IEEE Transactions on Wireless Communications, vol. 12, no. 7, pp. 1–14, 2013.
-  X. Wu, S. Tavildar, S. Shakkottai, T. Richardson, J. Li, R. Laroia, and A. Jovicic, “FlashLinQ: A synchronous distributed scheduler for peer-to-peer ad hoc networks,” IEEE/ACM Transactions on Networking, vol. 21, no. 4, pp. 1215–1228, 2013.
-  N. Naderializadeh and A. S. Avestimehr, “ITLinQ: A new approach for spectrum sharing in device-to-device communication systems,” IEEE Journal on Selected Areas in Communications, vol. 32, no. 6, pp. 1139–1151, 2014.
-  X. Yi and G. Caire, “ITLinQ+: An improved spectrum sharing mechanism for device-to-device communications,” in 2015 49th Asilomar Conference on Signals, Systems and Computers. IEEE, 2015, pp. 1310–1314.
-  L. Song, Y. Li, and Z. Han, “Game-theoretic resource allocation for full-duplex communications,” IEEE Wireless Communications, vol. 23, no. 3, pp. 50–56, 2016.
-  K. Shen and W. Yu, “FPLinQ: A cooperative spectrum sharing strategy for device-to-device communications,” in 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017, pp. 2323–2327.
-  Z.-Q. Luo and S. Zhang, “Dynamic spectrum management: Complexity and duality,” IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 1, pp. 57–73, 2008.
-  Y.-F. Liu and Y.-H. Dai, “On the complexity of joint subcarrier and power allocation for multi-user OFDMA systems,” IEEE Transactions on Signal Processing, vol. 62, no. 3, pp. 583–596, 2013.
-  H. Lee, S. H. Lee, and T. Q. Quek, “Deep learning for distributed optimization: Applications to wireless resource management,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2251–2266, 2019.
-  L. Liang, H. Ye, G. Yu, and G. Y. Li, “Deep-learning-based wireless resource allocation with application to vehicular networks,” Proceedings of the IEEE, 2019.
-  M. Eisen, C. Zhang, L. F. Chamon, D. D. Lee, and A. Ribeiro, “Learning optimal resource allocations in wireless systems,” IEEE Transactions on Signal Processing, vol. 67, no. 10, pp. 2775–2790, 2019.
-  N. Naderializadeh, J. Sydir, M. Simsek, H. Nikopour, and S. Talwar, “When multiple agents learn to schedule: A distributed radio resource management framework,” arXiv preprint arXiv:1906.08792, 2019.
-  M. Eisen and A. Ribeiro, “Optimal wireless resource allocation with random edge graph neural networks,” arXiv preprint arXiv:1909.01865, 2019.
-  M. Lee, G. Yu, and G. Y. Li, “Graph embedding based wireless link scheduling with few training samples,” arXiv preprint arXiv:1906.02871, 2019.
-  Y. Shen, Y. Shi, J. Zhang, and K. B. Letaief, “A graph neural network approach for scalable wireless power control,” arXiv preprint arXiv:1907.08487, 2019.
-  C. Geng, N. Naderializadeh, A. S. Avestimehr, and S. A. Jafar, “On the optimality of treating interference as noise,” IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 1753–1767, 2015.
-  L. F. Chamon, S. Paternain, and A. Ribeiro, “Counterfactual programming for optimal control,” arXiv preprint arXiv:2001.11116, 2020.
-  K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.
-  M. Henaff, J. Bruna, and Y. LeCun, “Deep convolutional networks on graph-structured data,” arXiv preprint arXiv:1506.05163, 2015.
-  X. Zhang and J. G. Andrews, “Downlink cellular network analysis with multi-slope path loss models,” IEEE Transactions on Communications, vol. 63, no. 5, pp. 1881–1894, 2015.
-  J. G. Andrews, X. Zhang, G. D. Durgin, and A. K. Gupta, “Are we approaching the fundamental limits of wireless network densification?” IEEE Communications Magazine, vol. 54, no. 10, pp. 184–190, 2016.
-  Y. Li and X. Huang, “The simulation of independent Rayleigh faders,” IEEE Transactions on Communications, vol. 50, no. 9, pp. 1503–1514, 2002.