A Bargaining Game for Personalized, Energy Efficient Split Learning over Wireless Networks

12/12/2022
by Minsu Kim, et al.

Split learning (SL) is an emerging distributed learning framework that can mitigate the computation and wireless communication overhead of federated learning. It splits a machine learning model into a device-side model and a server-side model at a cut layer: each device trains only its allocated portion of the model and transmits the cut-layer activations to the server. However, SL can leak data, since the server can reconstruct the input from the correlation between the input and the intermediate activations. Allocating more layers to the device-side model reduces the risk of leakage, but it increases the energy consumption of resource-constrained devices and the server's training time. Moreover, non-IID datasets across devices slow convergence and thereby increase training time. In this paper, a new personalized SL framework is proposed, together with a novel approach for choosing the cut layer that optimizes the tradeoff among the energy consumed for computation and wireless transmission, the training time, and data privacy. In the proposed framework, each device personalizes its device-side model to cope with non-IID data while sharing a common server-side model for generalization. To balance energy consumption, training time, and data privacy, a multiplayer bargaining problem is formulated to find the optimal cut layer between the devices and the server. The problem is solved by computing the Kalai-Smorodinsky bargaining solution (KSBS) with the bisection method and a feasibility test. Simulation results show that the proposed personalized SL framework, with the cut layer given by the KSBS, achieves the optimal sum of utilities by balancing energy consumption, training time, and data privacy, and that it is robust to non-IID datasets.
