Stochastic Coded Federated Learning: Theoretical Analysis and Incentive Mechanism Design

11/08/2022
by Yuchang Sun, et al.

Federated learning (FL) has achieved great success as a privacy-preserving distributed training paradigm, where many edge devices collaboratively train a machine learning model by sharing model updates, instead of raw data, with a server. However, the heterogeneous computational and communication resources of edge devices give rise to stragglers that significantly decelerate the training process. To mitigate this issue, we propose a novel FL framework named stochastic coded federated learning (SCFL) that leverages coded computing techniques. In SCFL, before training starts, each edge device uploads a privacy-preserving coded dataset to the server, generated by adding Gaussian noise to a projection of its local dataset. During training, the server computes gradients on the global coded dataset to compensate for the missing model updates of the straggling devices. We design a gradient aggregation scheme that ensures the aggregated model update is an unbiased estimate of the desired global update. Moreover, this aggregation scheme enables periodic model averaging to improve training efficiency. We characterize the tradeoff between the convergence performance and the privacy guarantee of SCFL: a noisier coded dataset provides stronger privacy protection for edge devices but degrades learning performance. We further develop a contract-based incentive mechanism to balance this conflict. Simulation results show that SCFL learns a better model within a given time budget and achieves a better privacy-performance tradeoff than the baseline methods. In addition, the proposed incentive mechanism yields better training performance than the conventional Stackelberg game approach.
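The two core mechanisms described above can be illustrated with a minimal NumPy sketch. All function names, shapes, and parameters here are illustrative assumptions, not the paper's actual implementation: `make_coded_dataset` mimics uploading a coded dataset (random projection of the local data plus Gaussian noise), and `aggregate_unbiased` mimics an unbiasedness-preserving aggregation by inverse-probability weighting of the updates that actually arrive, so that the estimate equals the full-participation average in expectation.

```python
import numpy as np

def make_coded_dataset(X, k, noise_std, rng):
    """Hypothetical coded-dataset generation (not the paper's exact scheme).

    Projects the local data X (n x d) down to k coded rows with a random
    Gaussian matrix, then adds Gaussian noise for privacy. Larger noise_std
    gives stronger privacy but a less faithful coded dataset.
    """
    n, d = X.shape
    G = rng.normal(0.0, 1.0 / np.sqrt(n), size=(k, n))   # random projection
    noise = rng.normal(0.0, noise_std, size=(k, d))       # privacy noise
    return G @ X + noise

def aggregate_unbiased(grads, responded, probs):
    """Inverse-probability-weighted aggregation (illustrative).

    Each device i responds with probability probs[i]; dividing a received
    gradient by that probability makes the average an unbiased estimate of
    the full-participation average gradient.
    """
    m = len(grads)
    total = np.zeros_like(grads[0])
    for g, r, p in zip(grads, responded, probs):
        if r:
            total += g / p
    return total / m
```

When every device responds (all probabilities 1), the estimator reduces to the plain average of local gradients; stragglers simply contribute nothing in that round, and the reweighting compensates on average.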


