Communication Efficient DNN Partitioning-based Federated Learning

04/11/2023
by Di Wu, et al.

Efficiently running federated learning (FL) on resource-constrained devices is challenging because each device must independently train a computationally intensive deep neural network (DNN). DNN partitioning-based FL (DPFL) has been proposed as one mechanism to accelerate training, in which layers of the DNN (and hence computation) are offloaded from the device to an edge server. However, this introduces significant communication overhead, since activations and gradients must be transferred between the device and the edge server during training. Current techniques reduce the communication introduced by DNN partitioning using local loss-based methods. We demonstrate that these methods adversely impact accuracy and ignore the communication cost incurred when transmitting activations from the device to the server. This paper proposes ActionFed, a communication-efficient framework for DPFL that accelerates training on resource-constrained devices. ActionFed eliminates the transmission of the gradient, for the first time, by developing a pre-trained initialization of the DNN model on the device; this reduces the accuracy degradation seen in local loss-based methods. In addition, ActionFed proposes a novel replay buffer mechanism and implements a quantization-based compression technique to reduce the transmission of activations. It is experimentally demonstrated that ActionFed reduces communication cost by up to 15.77x and accelerates training by up to 3.87x compared to vanilla DPFL.
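To make the mechanism concrete, the sketch below shows one training step of vanilla DPFL: the device runs the first few layers, sends the activation (compressed here with uniform 8-bit quantization, as the abstract describes) to an edge server, which completes the forward and backward pass and returns the activation gradient. This is a minimal illustrative sketch; the model split, the names `device_part`/`server_part`, and the quantizer are assumptions for illustration, not ActionFed's actual implementation.

```python
# Minimal sketch of one DNN partitioning-based FL (DPFL) training step.
# The split point, device_part/server_part, and the uint8 quantizer are
# illustrative assumptions, not ActionFed's actual implementation.
import torch
import torch.nn as nn

device_part = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # runs on the device
server_part = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10))  # runs on the edge server

opt_dev = torch.optim.SGD(device_part.parameters(), lr=0.01)
opt_srv = torch.optim.SGD(server_part.parameters(), lr=0.01)

def quantize(t):
    """Uniform 8-bit quantization of the activation before uplink transmission."""
    lo, hi = t.min(), t.max()
    scale = (hi - lo) / 255.0 + 1e-8
    q = torch.round((t - lo) / scale).to(torch.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    return q.to(torch.float32) * scale + lo

x = torch.randn(8, 3, 32, 32)           # a local mini-batch held on the device
y = torch.randint(0, 10, (8,))          # its labels

# --- device side: partial forward pass ---
act = device_part(x)
q, lo, scale = quantize(act.detach())   # compressed activation sent uplink

# --- server side: finish the forward pass, compute loss, backpropagate ---
srv_in = dequantize(q, lo, scale).requires_grad_(True)
loss = nn.functional.cross_entropy(server_part(srv_in), y)
opt_srv.zero_grad()
loss.backward()
opt_srv.step()

# --- device side: apply the activation gradient returned downlink ---
# (ActionFed's key idea is to eliminate this downlink transfer entirely by
# pre-trained initialization of the device-side model; shown here for
# vanilla DPFL to make the communication cost visible.)
opt_dev.zero_grad()
act.backward(srv_in.grad)
opt_dev.step()
```

In vanilla DPFL, both the uplink activation and the downlink gradient cross the network on every batch; these are precisely the two transfers ActionFed targets, removing the downlink via pre-trained device-side initialization and shrinking the uplink via quantization and the replay buffer.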

