Distributed Link Sparsification for Scalable Scheduling Using Graph Neural Networks

03/27/2022
by   Zhongyuan Zhao, et al.

Distributed scheduling algorithms for throughput or utility maximization in dense wireless multi-hop networks can have overwhelmingly high overhead, causing increased congestion, energy consumption, radio footprint, and security vulnerability. For wireless networks with dense connectivity, we propose a distributed scheme for link sparsification with graph convolutional networks (GCNs), which can reduce the scheduling overhead while preserving most of the network capacity. In a nutshell, a trainable GCN module generates node embeddings as topology-aware and reusable parameters for a local decision mechanism, based on which a link can withdraw itself from the scheduling contention if it is unlikely to win. In medium-sized wireless networks, our proposed sparse scheduler outperforms classical threshold-based sparsification policies, retaining almost 70% of the total capacity achieved by a distributed greedy max-weight scheduler with only 0.4% of the point-to-point message complexity and 2.6% of the average number of interfering neighbors per link.
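To make the local withdrawal decision concrete, the sketch below illustrates the general idea under strong simplifying assumptions: a single graph-convolution-style aggregation over the conflict graph computes each link's neighborhood-average weight, and a link withdraws from contention when its own weight falls too far below that aggregate. The adjacency matrix, weights, and fixed `threshold` are all hypothetical; the paper's actual scheme uses trained GCN embeddings rather than this hand-crafted score.

```python
# Illustrative sketch (not the authors' architecture): links are nodes of a
# conflict graph; an edge means two links interfere and contend with each other.

def neighbor_mean(adj, x, v):
    """One aggregation step over the conflict graph: mean of neighbor values."""
    nbrs = [u for u in range(len(adj)) if adj[v][u]]
    return sum(x[u] for u in nbrs) / len(nbrs) if nbrs else 0.0

def sparsify(adj, weights, threshold=-0.2):
    """Each link keeps contending only if its weight is competitive locally.

    score < threshold means the link is unlikely to win a greedy max-weight
    contention against its interfering neighbors, so it withdraws itself.
    The threshold here is a fixed assumption; in the paper it is effectively
    learned per node via GCN embeddings.
    """
    kept = []
    for v in range(len(adj)):
        score = weights[v] - neighbor_mean(adj, weights, v)
        if score >= threshold:
            kept.append(v)
    return kept

# Conflict graph of 4 links: link 0 interferes with links 1 and 2, etc.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
weights = [0.9, 0.2, 0.8, 0.7]

print(sparsify(adj, weights))  # link 1 withdraws; links 0, 2, 3 keep contending
```

Because each link only needs its own weight and its neighbors' weights, the decision is fully local, which is what allows the scheduling contention (and its message overhead) to shrink without central coordination.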

