Federated Stochastic Primal-dual Learning with Differential Privacy

04/26/2022
by Yiwei Li, et al.

Federated learning (FL) is a new paradigm that enables many clients to jointly train a machine learning (ML) model under the orchestration of a parameter server while keeping their local data hidden from any third party. However, FL training is an interactive process between the local clients and the parameter server, and this interaction can leak privacy: adversaries may retrieve sensitive information by analyzing the overheard messages. In this paper, we propose a new federated stochastic primal-dual algorithm with differential privacy (FedSPD-DP). Compared to existing methods, the proposed FedSPD-DP incorporates local stochastic gradient descent (local SGD) and partial client participation (PCP) to address communication efficiency and the straggler effects caused by randomly accessed clients. Our analysis shows that data sampling and PCP enhance data privacy, whereas a larger number of local SGD steps increases privacy leakage, revealing a non-trivial tradeoff between communication efficiency and privacy protection. Specifically, we show that, by guaranteeing (ϵ, δ)-DP for each client per communication round, the proposed algorithm achieves (𝒪(qϵ√(pT)), δ)-DP after T communication rounds while maintaining an 𝒪(1/√(pTQ)) convergence rate for convex and non-smooth learning problems, where Q is the number of local SGD steps, p is the client sampling probability, q = max_i q_i/√(1-q_i), and q_i is the data sampling probability of each client under PCP. Experimental results are presented to evaluate the practical performance of the proposed algorithm in comparison with state-of-the-art methods.
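To make the moving parts concrete, the following is a minimal schematic sketch (not the paper's exact FedSPD-DP update rules) of one such training loop: the server samples clients with probability p (PCP), each active client runs Q local SGD steps on Poisson-subsampled minibatches (data sampling probability q) of an augmented-Lagrangian-style primal-dual objective, clips and perturbs its per-step gradients with Gaussian noise for DP, and the server averages the uploads. All concrete values (clip_norm, sigma, the penalty c, and the toy quadratic losses) are illustrative assumptions.

```python
# Schematic DP primal-dual FL sketch; names and loss are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 5, 10
p, q, Q = 0.5, 0.1, 4          # client sampling prob., data sampling prob., local SGD steps
lr, c = 0.05, 1.0              # local step size, primal-dual penalty parameter (assumed)
clip_norm, sigma = 1.0, 0.8    # gradient clipping bound and DP noise multiplier (assumed)

# Toy local datasets: each client holds samples of a linear regression task.
data = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(n_clients)]

x_global = np.zeros(d)
x_loc = [np.zeros(d) for _ in range(n_clients)]
lam = [np.zeros(d) for _ in range(n_clients)]       # dual variables

for t in range(100):                                # T communication rounds
    active = [i for i in range(n_clients) if rng.random() < p]   # PCP
    uploads = []
    for i in active:
        A, b = data[i]
        x = x_loc[i].copy()
        for _ in range(Q):                          # local SGD steps
            mask = rng.random(len(b)) < q           # Poisson data subsampling
            if not mask.any():
                continue
            g = A[mask].T @ (A[mask] @ x - b[mask]) / mask.sum()
            g = g / max(1.0, np.linalg.norm(g) / clip_norm)      # clip gradient
            g += sigma * clip_norm * rng.normal(size=d)          # Gaussian DP noise
            g += lam[i] + c * (x - x_global)        # primal-dual correction terms
            x -= lr * g
        lam[i] += c * (x - x_global)                # dual ascent step
        x_loc[i] = x
        uploads.append(x)
    if uploads:                                     # server averages sampled clients
        x_global = np.mean(uploads, axis=0)
```

In this sketch the noise is injected locally before anything leaves the client, so the server never observes raw gradients; p, q, and Q map directly to the symbols in the abstract, which is why raising Q (more noisy releases per round) worsens the privacy bound while subsampling via p and q amplifies it.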

