Communication and Computation Reduction for Split Learning using Asynchronous Training

07/20/2021
by Xing Chen, et al.

Split learning is a promising privacy-preserving distributed learning scheme with low computation requirements at the edge device, but it suffers from high communication overhead between the edge device and the server. To reduce this overhead, this paper proposes a loss-based asynchronous training scheme that updates the client-side model less frequently and sends/receives activations/gradients only in selected epochs. To further reduce the communication overhead, the activations/gradients are quantized using 8-bit floating point prior to transmission. An added benefit of the proposed communication reduction method is that the client-side computations are also reduced, since the client model is updated fewer times. Furthermore, the proposed method preserves privacy almost as well as traditional split learning. Simulation results on VGG11, VGG13, and ResNet18 models on CIFAR-10 show that the communication cost is reduced by 1.64x-106.7x and the client-side computations by 2.86x-32.1x when the accuracy degradation is kept below 0.5% in the single-client case. In the 5-client case, the communication cost reduction on VGG11 is 11.9x and 11.3x for 0.5% and 1.0% accuracy loss, respectively.
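To make the scheme concrete, below is a minimal PyTorch sketch of the two ideas in the abstract: a loss-based trigger that decides whether the client participates in a given epoch (skipped epochs reuse activations cached at the server, so nothing is transmitted and the client does no work), and a simulated 8-bit floating-point quantize-dequantize applied to activations and gradients before "transmission". All identifiers (fp8_qdq, train_epoch, loss_threshold, the 4-exponent/3-mantissa bit split, the trigger condition) are illustrative assumptions, not the paper's exact algorithm or FP8 format.

```python
import torch
import torch.nn as nn


def fp8_qdq(x, exp_bits=4, man_bits=3):
    """Simulated FP8 quantize-dequantize (sign + exponent + mantissa).

    Rounds the mantissa to `man_bits` bits and clamps the exponent to the
    representable range; a stand-in for a real 8-bit float codec.
    """
    max_exp = 2 ** (exp_bits - 1) - 1
    min_exp = 1 - 2 ** (exp_bits - 1)
    sign = torch.sign(x)
    mag = torch.abs(x).clamp(min=1e-38)           # avoid log2(0)
    exp = torch.floor(torch.log2(mag)).clamp(min_exp, max_exp)
    ulp = torch.pow(2.0, exp - man_bits)          # value spacing at this exponent
    return sign * torch.round(mag / ulp) * ulp


def train_epoch(client_net, server_net, loader, opt_c, opt_s,
                update_client, cache):
    """One epoch of split training; assumes a non-shuffling `loader` so
    cached activations stay aligned with the labels across epochs."""
    crit = nn.CrossEntropyLoss()
    total = 0.0
    for i, (x, y) in enumerate(loader):
        if update_client:
            acts = client_net(x)                  # client-side forward pass
            sent = fp8_qdq(acts.detach()).requires_grad_(True)  # FP8 uplink
            cache[i] = sent.detach()              # server caches for reuse
        else:
            # Skipped epoch: reuse cached activations, zero communication.
            sent = cache[i].clone().requires_grad_(True)
        loss = crit(server_net(sent), y)
        opt_s.zero_grad()
        loss.backward()
        opt_s.step()                              # server updates every epoch
        if update_client:
            grad = fp8_qdq(sent.grad)             # FP8 downlink gradients
            opt_c.zero_grad()
            acts.backward(grad)                   # resume client backward
            opt_c.step()
        total += loss.item()
    return total / len(loader)


# Toy usage with a hypothetical trigger: refresh the client (and pay the
# communication cost) only when the loss has moved by more than a threshold
# since the last client update; the paper's exact criterion may differ.
torch.manual_seed(0)
client_net = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
server_net = nn.Linear(16, 4)
data = [(torch.randn(32, 8), torch.randint(0, 4, (32,))) for _ in range(5)]
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_threshold, cache = 0.05, {}
prev, loss_at_update = float("inf"), float("inf")
for epoch in range(20):
    update_client = epoch == 0 or (loss_at_update - prev) > loss_threshold
    prev = train_epoch(client_net, server_net, data, opt_c, opt_s,
                       update_client, cache)
    if update_client:
        loss_at_update = prev
```

In this sketch, a skipped epoch costs zero uplink and downlink traffic and no client computation, while the 8-bit codec cuts each remaining transfer to a quarter of its FP32 size, which is the intuition behind the reported communication and computation savings.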
