Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory

09/06/2023
by   Omar Alhussein, et al.

Split learning is a privacy-preserving distributed learning paradigm in which an ML model (e.g., a neural network) is split into two parts (i.e., an encoder and a decoder). The encoder shares so-called latent representations, rather than raw data, for model training. In mobile-edge computing, network functions (such as traffic forecasting) can be trained via split learning, where the encoder resides on a user equipment (UE) and the decoder resides in the edge network. Based on the data processing inequality and information bottleneck (IB) theory, we present a new framework and training mechanism that enable dynamic balancing of transmission resource consumption against the informativeness of the shared latent representations, which directly impacts predictive performance. The proposed training mechanism yields an encoder-decoder neural network architecture featuring multiple modes of complexity-relevance tradeoff, enabling tunable performance. This adaptability can accommodate varying real-time network conditions and application requirements, potentially reducing operational expenditure and enhancing network agility. As a proof of concept, we apply the training mechanism to a millimeter-wave (mmWave)-enabled throughput prediction problem. We also offer new insights and highlight some challenges related to recurrent neural networks from the perspective of IB theory. Interestingly, we find a compression phenomenon across the temporal domain of the sequential model, in addition to the compression phase that occurs over training epochs.
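To make the setup concrete, the following is a minimal sketch of a split encoder-decoder with a variational-IB-style objective, assuming a standard formulation (not the paper's exact architecture): a UE-side encoder maps input x to a stochastic latent z, only z crosses the UE-edge link, and an edge-side decoder predicts y from z. The scalar beta weights the compression term against the relevance (prediction) term; all names, shapes, and weights here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_logvar):
    """UE-side encoder: produce mean and log-variance of the latent z."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decoder(z, W_dec):
    """Edge-side decoder: predict the target from the shared latent."""
    return z @ W_dec

def ib_loss(y, y_hat, mu, logvar, beta):
    """Relevance (MSE) plus beta times compression (KL to a unit Gaussian prior)."""
    relevance = np.mean((y - y_hat) ** 2)
    kl = 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))
    return relevance + beta * kl

# Toy forward pass: 4 samples, 8 input features, 3-dim latent, scalar target.
x = rng.standard_normal((4, 8))
y = rng.standard_normal((4, 1))
W_mu = rng.standard_normal((8, 3)) * 0.1
W_logvar = rng.standard_normal((8, 3)) * 0.1
W_dec = rng.standard_normal((3, 1)) * 0.1

mu, logvar = encoder(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)   # only z is transmitted to the edge
y_hat = decoder(z, W_dec)

# A larger beta favors compression (cheaper transmission); a smaller beta
# favors relevance (better prediction).
loss_low_beta = ib_loss(y, y_hat, mu, logvar, beta=1e-3)
loss_high_beta = ib_loss(y, y_hat, mu, logvar, beta=1.0)
```

Training would then adjust the weights by gradient descent on this loss, sweeping beta to realize the different complexity-relevance operating modes described above.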

Related research

- 02/20/2020, Balancing Cost and Benefit with Tied-Multi Transformers: "We propose and evaluate a novel procedure for training multiple Transfor..."
- 06/21/2023, Split Learning in 6G Edge Networks: "With the proliferation of distributed edge computing resources, the 6G m..."
- 07/02/2022, Unsupervised Recurrent Federated Learning for Edge Popularity Prediction in Privacy-Preserving Mobile Edge Computing Networks: "Nowadays wireless communication is rapidly reshaping entire industry sec..."
- 10/10/2021, SplitPlace: Intelligent Placement of Split Neural Nets in Mobile Edge Environments: "In recent years, deep learning models have become ubiquitous in industry..."
- 08/24/2022, A Low-Complexity Approach to Rate-Distortion Optimized Variable Bit-Rate Compression for Split DNN Computing: "Split computing has emerged as a recent paradigm for implementation of D..."
- 03/16/2022, SC2: Supervised Compression for Split Computing: "Split computing distributes the execution of a neural network (e.g., for..."
- 03/24/2023, FixFit: using parameter-compression to solve the inverse problem in overdetermined models: "All fields of science depend on mathematical models. One of the fundamen..."
