Communication-Efficient Split Learning via Adaptive Feature-Wise Compression

07/20/2023
by Yongjeong Oh, et al.

This paper proposes a novel communication-efficient split learning (SL) framework, named SplitFC, which reduces the communication overhead required for transmitting intermediate feature and gradient vectors during the SL training process. The key idea of SplitFC is to exploit the different dispersion degrees exhibited across the columns of the intermediate feature and gradient matrices. SplitFC incorporates two compression strategies: (i) adaptive feature-wise dropout and (ii) adaptive feature-wise quantization. In the first strategy, the intermediate feature vectors are dropped with adaptive dropout probabilities determined from the standard deviations of these vectors. Then, by the chain rule, the intermediate gradient vectors associated with the dropped feature vectors are also dropped. In the second strategy, the non-dropped intermediate feature and gradient vectors are quantized using adaptive quantization levels determined from the ranges of the vectors. To minimize the quantization error, the optimal quantization levels of this strategy are derived in closed form. Simulation results on the MNIST, CIFAR-10, and CelebA datasets demonstrate that SplitFC provides more than a 5.6% gain in classification accuracy over state-of-the-art SL frameworks, while requiring 320 times less communication overhead than the vanilla SL framework without compression.
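
For intuition, the two strategies can be sketched as follows, assuming the intermediate features form a (batch size x number of features) matrix whose columns are the per-feature vectors. The allocation rules below (dropout probabilities proportional to each column's standard deviation, and bit allocation proportional to each column's range) are illustrative heuristics, not the paper's exact closed-form solutions.

```python
# Minimal sketch of adaptive feature-wise dropout and quantization.
# The probability/level allocation rules are simple stand-ins, not SplitFC's derived optima.
import numpy as np

def adaptive_feature_dropout(F, keep_budget, rng):
    """Drop feature columns with probabilities tied to their standard deviation."""
    std = F.std(axis=0)                                        # dispersion of each feature column
    keep_prob = np.clip(std / (std.sum() + 1e-12) * keep_budget, 0.0, 1.0)
    mask = rng.random(F.shape[1]) < keep_prob                  # high-dispersion columns survive more often
    F_kept = F[:, mask] / np.maximum(keep_prob[mask], 1e-12)   # rescale kept columns to keep the estimate unbiased
    return F_kept, mask

def adaptive_quantize(F_kept, total_bits):
    """Uniformly quantize each kept column, allocating levels from the column ranges."""
    col_range = F_kept.max(axis=0) - F_kept.min(axis=0)
    share = col_range / (col_range.sum() + 1e-12)
    bits = np.maximum(1, np.round(share * total_bits)).astype(int)   # wider range -> more levels
    Q = np.empty_like(F_kept)
    for j in range(F_kept.shape[1]):
        lo, hi = F_kept[:, j].min(), F_kept[:, j].max()
        step = max((hi - lo) / (2 ** bits[j] - 1), 1e-12)
        Q[:, j] = lo + np.round((F_kept[:, j] - lo) / step) * step
    return Q, bits

# Example: compress a 64x256 activation matrix at the split layer.
rng = np.random.default_rng(0)
F = rng.normal(size=(64, 256))
F_kept, mask = adaptive_feature_dropout(F, keep_budget=128.0, rng=rng)
Q, bits = adaptive_quantize(F_kept, total_bits=4 * mask.sum())
```

Applying the same column mask to the backpropagated gradient matrix would mirror the chain-rule pruning of gradient vectors described in the abstract.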


Related research:

07/25/2022 - C3-SL: Circular Convolution-Based Batch-Wise Compression for Communication-Efficient Split Learning
Most existing studies improve the efficiency of Split learning (SL) by c...

10/31/2022 - Adaptive Compression for Communication-Efficient Distributed Training
We propose Adaptive Compressed Gradient Descent (AdaCGD) - a novel optim...

10/05/2021 - FedDQ: Communication-Efficient Federated Learning with Descending Quantization
Federated learning (FL) is an emerging privacy-preserving distributed le...

03/15/2023 - Communication-Efficient Design for Quantized Decentralized Federated Learning
Decentralized federated learning (DFL) is a variant of federated learnin...

04/16/2022 - FedVQCS: Federated Learning via Vector Quantized Compressed Sensing
In this paper, a new communication-efficient federated learning (FL) fra...

11/08/2014 - Stacked Quantizers for Compositional Vector Compression
Recently, Babenko and Lempitsky introduced Additive Quantization (AQ), a...
