1 Introduction
The design of efficient scheduling algorithms is a fundamental problem in wireless networks. In each time slot, a scheduling algorithm aims to determine a subset of non-interfering links such that the system of queues in the network is stabilized. Depending on the interference model and the network topology, it is known that there exists a 'rate region' (a maximal set of arrival rates) for which the network can be stabilized. A scheduling algorithm that can support any arrival rate in the rate region is said to be throughput optimal. A well-known algorithm called the MaxWeight scheduling algorithm [182479] is known to be throughput optimal. However, the MaxWeight scheduler is not practical for distributed implementation for the following reasons: (i) it requires global network state information, and (ii) it requires solving a maximum-weighted independent set problem in each time slot, which is NP-hard.
There have been several efforts in the literature to design low-complexity, distributed approximations to the MaxWeight algorithm [11222334, max_w_2]. Greedy approximation algorithms such as the maximal scheduling policies, which can support a fraction of the maximum throughput, are one such class of approximations [Wan2013]. On the other hand, we have algorithms like carrier sense multiple access (CSMA) algorithms [csma_1, csma_2], which are known to be near-optimal in terms of throughput performance but suffer from poor delay performance.
Inspired by the success of deep-learning-based algorithms in various fields like image processing and natural language processing, there has recently been a growing interest in their application to wireless scheduling as well [ml_1, ml_2, ml_3]. Initial research in this direction focused on the adoption of widely used neural architectures like multi-layer perceptrons or convolutional neural networks (CNNs) [cnn_1] to solve wireless scheduling problems. However, these architectures are not well-suited for the scheduling problem because they do not explicitly consider the network graph topology. Hence, some of the recent works in wireless networks study the application of Graph Neural Network (GNN) architectures to the scheduling problem [gnn_1]. For instance, a recent work [9414098] has proposed a GNN-based algorithm and observed that Graph Neural Networks can improve the performance of simple greedy scheduling algorithms like Longest-Queue-First (LQF) scheduling. However, this result was obtained on a simple interference model called the conflict graph model, which captures only binary relationships between links. In real wireless networks, the interference among the links is additive, and the cumulative effect of all the interfering links decides the feasibility of any transmission. Hence, it is essential to study whether the GNN-based approach improves the performance of greedy LQF scheduling under a realistic interference model like the signal-to-interference-plus-noise ratio (SINR) model, which captures the cumulative nature of interference.
One of the challenges in conducting such a study is that the concept of graph neural networks is not readily applicable to the SINR interference model, since that model cannot be represented by a graph. Hence, we introduce a new interference model that retains the cumulative nature of interference yet is amenable to a graph-based representation, and we conduct our study on this proposed model. This approach provides insights into whether the GNN-based improvement of LQF carries over to practical interference models.
To that end, in this paper, we study whether GNN-based algorithms can be used to design efficient schedulers under this general interference model. Specifically, we consider a $k$-tolerant conflict graph model, where a node can successfully transmit during a time slot if no more than $k$ of its neighbors are transmitting in that slot. When $k$ is set to zero, the $k$-tolerance model reduces to the standard conflict graph model, in which a node cannot transmit if any of its neighbors is transmitting. We finally tabulate our results and compare them with other GNN-based distributed scheduling algorithms under the standard conflict-graph-based interference model. In sum, our contributions are as follows:

We propose a GCN-based distributed scheduling algorithm for a generalized interference model called the $k$-tolerant conflict graph model.

The training of the proposed GCN does not require a labeled data set (which would involve solving an NP-hard problem). Instead, we design a loss function that utilizes an existing greedy approach and trains a GCN that improves upon the performance of that greedy approach.
The remainder of the paper is organized as follows. In Sec. 2, we briefly present our network model. In Sec. 3, an optimal scheduling policy for the $k$-tolerant conflict graph interference model, a GCN-based $k$-independent set solver, is presented. In Sec. 4, we conduct experiments on different data sets and show the numerical results of the GCN-based scheduling approach. Finally, the paper is concluded in Sec. 5.
Motivation: In the SINR interference model, a link can successfully transmit if the cumulative interference from all nodes within a radius is less than some fixed threshold value. The conflict graph model insists that none of the neighbours should transmit when a link is transmitting. However, in a real-world situation, a link can successfully transmit as long as the cumulative interference from all its neighbours (the links which can potentially interfere with the given link) is less than a threshold value. As a special case, in this paper, we consider a conservative SINR model called the $k$-tolerance model: if $I_{\max}$ is the estimated strongest interference that a link can cause to another and $I_{th}$ is the cumulative threshold interference that a link can tolerate, then a conservative estimate of how many neighbouring links can be allowed to transmit without violating the threshold interference is given by $k = \lfloor I_{th}/I_{\max} \rfloor$. In other words, $k$ neighbours can transmit while a given link is transmitting. It can be seen that this conservative model retains the cumulative nature of the SINR interference model. Hence, a study on this model should give us insights into the applicability of GNN-based solutions for realistic interference models.

2 Network Model
We model the wireless network as an undirected graph $G = (V, E)$ with $N$ nodes. Here, the set of nodes $V$ of the graph represents the links in the wireless network, i.e., transmitter-receiver pairs. We assume an edge between two nodes if the corresponding links could potentially interfere with each other. Let $E$ and $\mathbf{A}$ denote the set of edges and the adjacency matrix of graph $G$, respectively. We denote the set of neighbors of node $v$ by $\mathcal{N}(v)$, i.e., a node $u \in \mathcal{N}(v)$ if the nodes $u$ and $v$ share an edge between them. We say a node is $k$-tolerant if it can tolerate at most $k$ of its transmitting neighbors. In other words, a $k$-tolerant node can successfully transmit if the number of neighbors transmitting at the same time is at most $k$. We define a $k$-tolerant conflict graph as a graph in which each node is $k$-tolerant, and we model the wireless network as a $k$-tolerant conflict graph. Note that this is a generalization of the popular conflict graph model, where a node can tolerate none of its transmitting neighbors; the conflict graph model corresponds to a $k$-tolerant conflict graph with $k = 0$.
We assume that time is slotted. In each time slot, the scheduler has to decide on the set of links to transmit in that slot. A feasible schedule is a set of links that can successfully transmit at the same time. At any given time $t$, a set of links can successfully transmit if the corresponding nodes form a $k$-independent set (defined below) in graph $G$. Thus, a feasible schedule corresponds to a $k$-independent set in $G$.
Definition 1
($k$-independent set) A subset of vertices of a graph $G$ is $k$-independent if it induces in $G$ a subgraph of maximum degree at most $k$.
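Definition 1 can be checked directly: a subset is $k$-independent exactly when every member has at most $k$ neighbours inside the subset. A minimal sketch in Python (the function name and the adjacency-dictionary representation are our own, not from the paper):

```python
from typing import Dict, Set

def is_k_independent(adj: Dict[int, Set[int]], subset: Set[int], k: int) -> bool:
    """Return True iff `subset` induces a subgraph of maximum degree <= k."""
    # Each node in the subset may have at most k neighbours inside the subset.
    return all(len(adj[v] & subset) <= k for v in subset)
```

For example, on a triangle, the pair {0, 1} is 1-independent but not 0-independent, and the full vertex set is 2-independent.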
A scheduler has to choose a feasible schedule at any given time. Let $\mathcal{S}$ denote the collection of all possible $k$-independent sets, i.e., the feasible schedules. We denote the schedule at time $t$ by an $N$-length vector $\boldsymbol{\sigma}(t) \in \mathcal{S}$. We say $\sigma_v(t) = 1$ if at time $t$, node $v$ is scheduled to transmit, and $\sigma_v(t) = 0$ otherwise. Depending on the scheduling decision taken at time $t$, node $v$ (a link in the original wireless network) gets a rate of $r_v(t)$. We assume that packets arriving at node $v$ can be stored in an infinite buffer. At time $t$, let $a_v(t)$ be the number of packets that arrive at node $v$. We then have the following queuing dynamics at node $v$:

$q_v(t+1) = \left[q_v(t) - r_v(t)\sigma_v(t)\right]^+ + a_v(t),$  (1)

where $[x]^+ = \max(x, 0)$.
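The dynamics in (1) amount to a one-line update per node and per slot; a sketch under the reconstruction above (illustrative only, names are ours):

```python
def queue_update(q: int, r: int, sigma: int, a: int) -> int:
    """One slot of the queueing dynamics (1): serve if scheduled, add arrivals.
    The truncation [x]^+ is implemented as max(x, 0)."""
    return max(q - r * sigma, 0) + a
```

Starting from a queue of 5 packets with service rate 2, a scheduled node with 3 arrivals ends the slot with 6 packets, while an unscheduled node only accumulates its arrivals.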
The set of arrival rates for which there exists a scheduler that can keep the queues stable is known as the rate region of the wireless network.
2.1 MaxWeight Scheduler
A well-known scheduler that stabilizes the network is the MaxWeight algorithm [182479]. The MaxWeight algorithm chooses a schedule that maximizes the sum of the queue lengths times the service rates, i.e.,

$\boldsymbol{\sigma}^*(t) = \arg\max_{\boldsymbol{\sigma} \in \mathcal{S}} \sum_{v} q_v(t)\, r_v(t)\, \sigma_v.$  (2)
We state below one of the celebrated results in radio resource allocation.
Theorem 2.1
[182479] Let the arrival process be an ergodic process with mean $\boldsymbol{\lambda}$. If the mean arrival rates $\boldsymbol{\lambda}$ are within the rate region, then the MaxWeight scheduling algorithm is throughput optimal.
In spite of such an attractive result, the MaxWeight algorithm is seldom implemented in practice. This is because the scheduling decision in (2) has complexity that is exponential in the number of nodes. Even with the simplistic assumption of a conflict graph model, (2) reduces to the NP-hard problem of finding the maximum weighted independent set. At the timescale of these scheduling decisions, finding the exact solution to (2) is practically infeasible. Hence, we resort to solving (2) using a Graph Neural Network (GNN) model. Before we explain our GNN-based algorithm, we rephrase the problem in (2) for the $k$-tolerant conflict graph model below.
2.2 Maximum weighted $k$-independent set
In the $k$-tolerant conflict graph model $G$, the MaxWeight problem is equivalent to the following integer program:

$\max \sum_{v \in V} w_v \sigma_v \quad \text{subject to} \quad \sigma_v \sum_{u \in \mathcal{N}(v)} \sigma_u \le k \;\; \forall v \in V, \qquad \boldsymbol{\sigma} \in \{0, 1\}^N.$  (3)

Here $\mathbf{w} = (w_1, \ldots, w_N)$ is the weight vector. The constraint in (3) ensures that whenever a node $v$ is transmitting, at most $k$ of its neighbors can transmit. It can be observed that the maximum weight problem in (2) corresponds to using the weights $w_v = q_v(t) r_v(t)$ in the above formulation. Henceforth, the rest of this paper is devoted to solving the maximum weighted $k$-independent set problem using a graph neural network.
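For very small graphs, the integer program (3) can be solved by exhaustive search, which makes the exponential complexity explicit. A brute-force sketch (our own illustrative helper, not the paper's method):

```python
from itertools import combinations
from typing import Dict, Set, Tuple

def max_weight_k_ind_set(adj: Dict[int, Set[int]],
                         w: Dict[int, float],
                         k: int) -> Tuple[Set[int], float]:
    """Exhaustively solve (3): enumerate all 2^N vertex subsets and keep the
    heaviest one whose induced subgraph has maximum degree at most k."""
    nodes = list(adj)
    best, best_w = set(), 0.0
    for r in range(len(nodes) + 1):
        for comb in combinations(nodes, r):
            s = set(comb)
            # k-independence: every chosen node has <= k chosen neighbours.
            if all(len(adj[v] & s) <= k for v in s):
                weight = sum(w[v] for v in s)
                if weight > best_w:
                    best, best_w = s, weight
    return best, best_w
```

On the path 0-1-2 with weights (1, 3, 1), the optimum for $k = 0$ is the single node of weight 3, while $k = 1$ admits a pair of adjacent nodes with total weight 4.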
3 Graph Neural Network based Scheduler
In this section, we present a graph neural network based solution to the maximum weighted $k$-independent set problem. We use the Graph Convolutional Network (GCN) architecture from [kipf2017semi, graph_conv].
The GCN architecture is as follows. We use a GCN with $L$ layers. The input of each layer is a feature matrix, and its output is fed as the input to the next layer. Precisely, at the $\ell$-th layer, the feature matrix $\mathbf{X}^{(\ell+1)}$ is computed using the following graph convolution operation:

$\mathbf{X}^{(\ell+1)} = \sigma\big(\hat{\mathbf{L}}\, \mathbf{X}^{(\ell)}\, \mathbf{W}^{(\ell)}\big),$  (4)

where $\mathbf{W}^{(\ell)} \in \mathbb{R}^{C_\ell \times C_{\ell+1}}$ are the trainable weights of the neural network, $C_\ell$ denotes the number of feature channels in the $\ell$-th layer, $\sigma(\cdot)$ is a nonlinear activation function, and $\hat{\mathbf{L}}$ is the normalized Laplacian of the input graph, computed as $\hat{\mathbf{L}} = \mathbf{I}_N - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}$. Here, $\mathbf{I}_N$ denotes the identity matrix and $\mathbf{D}$ is the diagonal degree matrix with entries $D_{ii} = \sum_j A_{ij}$. We take the input feature matrix $\mathbf{X}^{(0)}$ to be the weights of the nodes (hence $C_0 = 1$), and $\sigma(\cdot)$ to be the ReLU activation function for all layers except the last. For the last layer, we apply a sigmoid activation function to obtain the likelihood of each node being included in the $k$-independent set. We represent this likelihood map from the GCN using an $N$-length vector $\mathbf{z}$.

In summary, the GCN takes a graph and the node weights as input and returns an $N$-length likelihood vector $\mathbf{z}$ (see Figure 1). However, we need a $k$-independent set. In usual classification problems, such a requirement is satisfied by projecting the likelihood map to a binary vector. Projecting the likelihood map onto the collection of $k$-independent sets is not straightforward, since the $k$-independent sets are $N$-length binary vectors that satisfy the constraints in (3), and such a projection operation might itself be costly in terms of computation. Instead, taking inspiration from [NEURIPS2018_8d3bba74], we pass the likelihood map through a greedy algorithm to get a $k$-independent set. (In practice, the greedy algorithm can be replaced with a distributed greedy algorithm [7084695], and the GCN model can be trained w.r.t. the distributed greedy algorithm.)
The greedy algorithm requires each node to keep track of the number of its neighbours already added to the $k$-independent set. We sort the nodes in descending order of the product of the likelihood and the weight, i.e., $z_v w_v$. We add the node with the highest likelihood-weight product to the $k$-independent set if at most $k$ of its neighbors are already in the set. We remove any node that is a neighbour of a node which has already been added to the set and has reached its tolerance of $k$. We then repeat the procedure until no further nodes are left to be added.
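The greedy rounding described above can be sketched as follows (the adjacency-dictionary representation and names are our own; ties in the ordering are broken arbitrarily):

```python
from typing import Dict, Set

def greedy_k_ind_set(adj: Dict[int, Set[int]],
                     z: Dict[int, float],
                     w: Dict[int, float],
                     k: int) -> Set[int]:
    """Round the GCN likelihood z into a k-independent set greedily."""
    order = sorted(adj, key=lambda v: z[v] * w[v], reverse=True)
    chosen: Set[int] = set()
    removed: Set[int] = set()
    n_chosen_nbrs = {v: 0 for v in adj}   # chosen neighbours of each node
    for v in order:
        if v in removed or n_chosen_nbrs[v] > k:
            continue
        chosen.add(v)
        for u in adj[v]:
            n_chosen_nbrs[u] += 1
        # A chosen node that has reached its tolerance k blocks its
        # remaining (unchosen) neighbours from being added later.
        for c in chosen:
            if n_chosen_nbrs[c] >= k:
                removed.update(adj[c] - chosen)
    return chosen
```

For $k = 0$ this reduces to the standard greedy maximum-weight independent set heuristic: adding a node immediately removes all of its neighbours from consideration.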
We use a set of node-weighted graphs to train the GCN. Since the problem at hand is NP-hard, we refrain from finding the true labels (maximum weighted $k$-independent sets) to train the GCN. Instead, we construct penalty and reward functions using the desirable properties of the output $\mathbf{z}$. We then learn the parameters by optimizing over a weighted sum of the constructed penalties and rewards. We desire the output $\mathbf{z}$ to predict the maximum weighted $k$-independent set. With this in mind, we construct the following rewards and penalties:

The prediction needs to maximize the sum of the weights. So, our prediction $\mathbf{z}$ needs to maximize $\sum_v w_v z_v = \mathbf{w}^T\mathbf{z}$.

The prediction needs to satisfy the $k$-independent set constraints. Therefore, we add a penalty whenever $\mathbf{z}$ violates the $k$-independent set constraints in (3).

Recall that we use the greedy algorithm to predict the $k$-independent set from $\mathbf{z}$. The greedy algorithm takes $\mathbf{z}$ as input and returns a $k$-independent set. We desire the total weight of the output $\mathbf{z}$, i.e., $\mathbf{w}^T\mathbf{z}$, to be close to the total weight of the $k$-independent set returned by the greedy algorithm. Let $W_g$ be the total weight of the $k$-independent set predicted by the greedy algorithm. Then, we penalise the output if $\mathbf{w}^T\mathbf{z}$ deviates from $W_g$.
We finally construct our cost function as a weighted sum of the above, i.e., we want the GCN to minimize the cost function:

$\mathcal{L}(\mathbf{z}) = -\alpha\, \mathbf{w}^T\mathbf{z} + \beta\, P(\mathbf{z}) + \gamma\, \big|\mathbf{w}^T\mathbf{z} - W_g\big|,$  (5)

where $P(\mathbf{z})$ is the constraint-violation penalty, and $\alpha$, $\beta$ and $\gamma$ denote the optimization weights of the cost function defined in equation (5).
4 Experiments
We perform our experiments on a single GeForce GTX 1080 Ti GPU (training the models took around two hours). The data used for training, validation and testing are described in the subsection below.
4.1 Dataset
We train our GCN using randomly generated graphs. We consider two graph distributions, namely the Erdős-Rényi (ER) and Barabási-Albert (BA) models. These distributions were also used in [9414098]. Our choice of these graph models is to ensure a fair comparison with prior work on the conflict graph model [9414098] ($k = 0$).
In the ER model with $N$ nodes, an edge is introduced between each pair of nodes with a fixed probability $p$, independent of the generation of other edges. The BA model generates a graph with $N$ nodes one node at a time, preferentially attaching each new node to existing nodes with probability proportional to the degrees of the existing nodes. For training purposes, we generate graphs from each of these models, with the weights of the nodes chosen uniformly at random. We use additional graphs for validation and for testing.
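Both graph distributions can be sampled in a few lines. A self-contained sketch without external graph libraries (parameter names and the degree-proportional sampling scheme for BA are ours):

```python
import random
from typing import Dict, Set

def erdos_renyi(n: int, p: float, rng: random.Random) -> Dict[int, Set[int]]:
    """ER model: each of the n*(n-1)/2 possible edges appears with prob. p."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def barabasi_albert(n: int, m: int, rng: random.Random) -> Dict[int, Set[int]]:
    """BA model: nodes arrive one at a time and attach to m existing nodes,
    chosen with probability proportional to their current degrees."""
    adj = {v: set() for v in range(n)}
    repeated = []                    # node ids repeated once per unit degree
    targets = list(range(m))         # the first arrival links to the m seeds
    for v in range(m, n):
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))   # degree-proportional pick
        targets = list(chosen)
    return adj
```

Sampling from `repeated` (one entry per unit of degree) is a standard way to implement preferential attachment without computing the degree distribution explicitly.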
4.2 Choice of hyperparameters
We train a GCN with three layers consisting of: i) an input layer with the weights of the nodes as input features, ii) a single hidden layer, and iii) an output layer indicating, for each of the $N$ nodes, the likelihood of choosing the corresponding node in the $k$-independent set. This choice of a small number of layers ensures that the GCN operates with a minimal number of communications between neighbors. We fix $k = 0$ and experiment with training the GCN using different choices of the optimization weights $\alpha$, $\beta$ and $\gamma$. The results obtained are tabulated in Figure 2. Let $W_{\text{greedy}}$ denote the total weight obtained by the plain greedy algorithm (i.e., without any GCN) and $W_{\text{GCN}}$ denote the total weight of the $k$-independent set predicted by the GCN-greedy combination. We tabulate the average ratio between the two, i.e., $W_{\text{GCN}}/W_{\text{greedy}}$, where the average is taken over the test data set.
Figure 2: Average ratio $W_{\text{GCN}}/W_{\text{greedy}}$ for $k = 0$ under different optimization weights.

| Training Data | α | β | γ | ER Average | ER Variance | BA Average | BA Variance |
|---|---|---|---|---|---|---|---|
| BA | 5 | 5 | 10 | 1.038 | 3.047 | 1.11 | 10.16 |
| BA | 10 | 10 | 1 | 1.035 | 3.297 | 1.11 | 10.37 |
| BA | 5 | 5 | 1 | 1.035 | 3.290 | 1.11 | 10.14 |
| BA | 1 | 1 | 1 | 1.034 | 3.253 | 1.10 | 10.23 |
| BA | 5 | 5 | 30 | 1.041 | 3.230 | 1.10 | 10.39 |
| BA | 5 | 5 | 50 | 1.041 | 3.214 | 1.10 | 10.28 |
| BA | 5 | 5 | 100 | 1.035 | 2.838 | 1.09 | 10.02 |
| BA | 30 | 1 | 1 | 1.031 | 2.401 | 1.07 | 8.25 |
| ER | 5 | 5 | 30 | 1.040 | 2.929 | 1.10 | 10.12 |
| ER | 5 | 5 | 10 | 1.039 | 3.145 | 1.11 | 10.71 |
| ER | 5 | 5 | 50 | 1.039 | 2.957 | 1.09 | 9.92 |
| ER | 1 | 1 | 1 | 1.038 | 3.135 | 1.11 | 10.74 |
| ER | 1 | 20 | 1 | 1.036 | 3.070 | 1.11 | 10.55 |
| ER | 10 | 10 | 1 | 1.034 | 3.428 | 1.11 | 10.34 |
| ER | 5 | 5 | 1 | 1.034 | 3.331 | 1.11 | 10.34 |
| ER | 5 | 5 | 100 | 1.031 | 2.420 | 1.08 | 8.42 |
| Distributed scheduling using GNN [9414098] | | | | 1.039 | 3.5 | 1.11 | 11.0 |
The training was done with the BA and ER models separately. We also test the trained models with test data from both models to understand whether the trained models are transferable. We see that the GCN trained with the optimization weights $(\alpha, \beta, \gamma) = (5, 5, 30)$ performs well for both the ER and BA graph models. The GCN improves the total weight of the greedy algorithm by around 4 percent for the ER model and by around 11 percent for the BA model. Also, we see that the GCN trained with the ER model performs well with BA data and vice versa.
4.3 Performance for different $k$
We also evaluate the performance for different tolerance values $k$. We use the parameters $(\alpha, \beta, \gamma) = (5, 5, 30)$ in the cost function; recall that we arrived at this choice using extensive simulations for $k = 0$. In Figure 3, we tabulate the average ratio between the total weight of the $k$-independent set obtained using the GCN-greedy combination and that of the plain greedy algorithm, i.e., $W_{\text{GCN}}/W_{\text{greedy}}$. We have also included the variance of this performance. We observe that the performance for general $k$ is even better than for $k = 0$. For example, for $k = 4$, we see around a 6 percent improvement for the ER model and close to a 24 percent improvement for the BA model.
Figure 3: Average ratio $W_{\text{GCN}}/W_{\text{greedy}}$ for different tolerance values $k$.

| Training Data | k | ER Average | ER Variance | BA Average | BA Variance |
|---|---|---|---|---|---|
| BA | 1 | 1.056 | 4.07 | 1.143 | 10.22 |
| BA | 2 | 1.062 | 5.26 | 1.193 | 10.92 |
| BA | 3 | 1.067 | 5.55 | 1.209 | 20.14 |
| BA | 4 | 1.063 | 4.53 | 1.241 | 20.57 |
| ER | 1 | 1.056 | 3.99 | 1.143 | 10.18 |
| ER | 2 | 1.064 | 5.12 | 1.187 | 10.81 |
| ER | 3 | 1.066 | 4.82 | 1.205 | 20.13 |
| ER | 4 | 1.062 | 4.18 | 1.225 | 20.29 |
Interestingly, the GCN trained with ER graphs performs well on the BA data set as well. This indicates that the trained GCN is transferable to other models.
5 Conclusion
In this paper, we investigated the well-studied problem of link scheduling in wireless ad-hoc networks using recent developments in graph neural networks. We modelled the wireless network as a $k$-tolerant conflict graph and demonstrated that, using a GCN, we can improve the performance of existing greedy algorithms. We have shown experimentally that this GCN model improves the performance of the greedy algorithm by at least 3 percent for the ER model and 7 percent for the BA model (depending on the value of $k$).
In the future, we would like to extend the model to node-dependent tolerance values and to pass the tolerance value as a node feature of the GNN in addition to the weights.