
Data Shuffling in Wireless Distributed Computing via Low-Rank Optimization

by Kai Yang, et al.
University of California-Davis
ShanghaiTech University

Intelligent mobile platforms such as smart vehicles and drones have recently become the focus of attention for onboard deployment of machine learning mechanisms, enabling low-latency decisions with a low risk of privacy breach. However, most such machine learning algorithms are both computation- and memory-intensive, which makes it highly difficult to implement the requisite computations on a single device with limited computation, memory, and energy resources. Wireless distributed computing presents new opportunities by pooling the computation and storage resources among devices. For low-latency applications, the key bottleneck lies in the exchange of intermediate results among mobile devices for data shuffling. To improve communication efficiency, we propose a co-channel communication model and design transceivers that exploit the locally computed intermediate values as side information. A low-rank optimization model is proposed to maximize the achievable degrees-of-freedom (DoF) by establishing the interference alignment condition for data shuffling. Unfortunately, existing approaches to approximating the rank function fail to yield satisfactory performance due to the poor structure of the formulated low-rank optimization problem. In this paper, we develop an efficient DC algorithm to solve the presented low-rank optimization problem by proposing a novel DC representation for the rank function. Numerical experiments demonstrate that the proposed DC approach can significantly improve the communication efficiency, while the achievable DoF remains almost unchanged as the number of mobile devices grows.
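One common DC (difference-of-convex) representation of the rank function, which may correspond to the one used here, writes the constraint rank(X) ≤ k as the vanishing of the nuclear norm minus the Ky Fan k-norm (the sum of the k largest singular values): ||X||_* − |||X|||_k = Σ_{i>k} σ_i(X) = 0 holds exactly when rank(X) ≤ k, and both terms are convex. The following minimal sketch (an illustration of this DC surrogate, not the paper's full algorithm) evaluates it with NumPy:

```python
import numpy as np

def dc_rank_surrogate(X, k):
    """Nuclear norm minus Ky Fan k-norm of X.

    Equals the sum of singular values beyond the k-th, so it is
    zero exactly when rank(X) <= k; a DC algorithm drives this
    difference of two convex norms to zero.
    """
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return s.sum() - s[:k].sum()

# A rank-2 matrix: the surrogate vanishes for k = 2 but not for k = 1.
X = (np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0])
     + np.outer([0.0, 1.0, 1.0], [2.0, 1.0, 0.0]))
print(round(dc_rank_surrogate(X, 2), 8))  # -> 0.0
print(dc_rank_surrogate(X, 1) > 0)        # -> True
```

In a DC algorithm, the concave part (minus the Ky Fan k-norm) is linearized at the current iterate via a subgradient, and the resulting convex problem is solved repeatedly.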
