Efficient Multiuser AI Downloading via Reusable Knowledge Broadcasting

by Hai Wu et al.

In 6G mobile networks, in-situ model downloading has emerged as an important use case for enabling real-time adaptive artificial intelligence on edge devices. However, simultaneously downloading diverse, high-dimensional models to multiple devices over wireless links creates a significant communication bottleneck. To overcome this bottleneck, we propose the framework of model broadcasting and assembling (MBA), which represents the first attempt to leverage reusable knowledge, i.e., parameters shared among tasks, to enable parameter broadcasting and thereby reduce communication overhead. The MBA framework comprises two key components. The first, the MBA protocol, defines the system operations: parameter selection from a model library, power control for broadcasting, and model assembling at devices. The second is the joint design of parameter selection and power control (PS-PC), which guarantees the model performance of each device while minimizing downloading latency. The corresponding optimization problem is simplified by decomposition into sequential PS and PC sub-problems without compromising optimality. The PS sub-problem is addressed with two algorithms. The first, a low-complexity greedy parameter-selection algorithm, features the construction of candidate model sets and a selection metric, both designed under the criterion of maximizing reusable knowledge among tasks. The second, an optimal tree-search algorithm, gains its efficiency from the proposed construction of a compact binary tree pruned using model-architecture constraints and an intelligent branch-and-bound search. Given the optimal PS, the optimal PC policy is derived in closed form. Extensive experiments demonstrate that the proposed MBA substantially reduces downloading latency compared to traditional model downloading.
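To illustrate the core idea of reusable-knowledge broadcasting, the following is a minimal, hypothetical sketch (not the paper's actual PS-PC algorithm): each task requires a set of parameter blocks drawn from a shared model library, and blocks reused by many tasks need only be broadcast once. The selection metric here, ordering blocks by how many tasks share them, is an assumption standing in for the paper's metric; all names (`task_requirements`, block ids such as `backbone`) are illustrative.

```python
def greedy_parameter_selection(task_requirements):
    """Order library blocks for a single broadcast, most-reused first.

    task_requirements: dict mapping task name -> set of required block ids.
    Returns a broadcast schedule (list of block ids) covering all tasks;
    each shared block appears once, which is the source of the latency saving.
    """
    # Union of all blocks any task needs; sorted for deterministic output.
    needed = sorted(set().union(*task_requirements.values()))
    # Reuse count: number of tasks that share each block.
    reuse = {b: sum(b in req for req in task_requirements.values())
             for b in needed}
    # Assumed metric: broadcast the most widely shared knowledge first.
    return sorted(needed, key=lambda b: reuse[b], reverse=True)

# Two tasks sharing a backbone but with task-specific heads: the backbone
# is transmitted once instead of twice.
tasks = {
    "task_A": {"backbone", "head_A"},
    "task_B": {"backbone", "head_B"},
}
print(greedy_parameter_selection(tasks))  # → ['backbone', 'head_A', 'head_B']
```

In this toy case, broadcasting the schedule transmits three blocks instead of the four required by per-device unicast, mirroring how shared parameters cut total airtime in the MBA framework.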


