Voting-based Approaches for Differentially Private Federated Learning

10/09/2020
by Yuqing Zhu, et al.

While federated learning (FL) enables distributed agents to collaboratively train a centralized model without sharing data with each other, it fails to protect users against inference attacks that mine private information from the centralized model. Equipping federated learning with differential privacy (DPFL) is therefore attractive. Existing algorithms that privately aggregate clipped gradients require many rounds of communication, may fail to converge, and cannot scale up to large-capacity models because the noise they add grows explicitly with the model dimension. In this paper, we adopt the knowledge-transfer model of private learning pioneered by Papernot et al. (2017; 2018) and extend their algorithm PATE, as well as the recent alternative PrivateKNN (Zhu et al., 2020), to the federated learning setting. The key difference is that our method privately aggregates labels from the agents in a voting scheme rather than aggregating gradients, thereby avoiding the dimension dependence and achieving significant savings in communication cost. Theoretically, we show that when the margins of the voting scores are large, the agents enjoy exponentially higher accuracy and stronger (data-dependent) differential privacy guarantees at both the agent level and the instance level. Extensive experiments show that our approach significantly improves the privacy-utility trade-off over the current state of the art in DPFL.
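The aggregation step described above is easy to state: each agent answers a query with a label vote, and the server releases only the noisy winner. Below is a minimal Python sketch of such a noisy plurality vote, in the spirit of PATE-style Gaussian noisy-max aggregation; the function name, the Gaussian noise choice, and all parameter values are illustrative assumptions, not the paper's exact mechanism.

    import numpy as np

    def aggregate_votes(agent_labels, num_classes, sigma, rng=None):
        """Privately aggregate per-agent label votes via a noisy argmax.

        agent_labels: 1-D integer array with one class vote per agent.
        num_classes:  size of the label space.
        sigma:        std. dev. of Gaussian noise added to each vote count;
                      larger sigma gives a stronger privacy guarantee.
        """
        rng = np.random.default_rng() if rng is None else rng
        # Tally the votes into per-class counts, then perturb the counts.
        counts = np.bincount(agent_labels, minlength=num_classes).astype(float)
        noisy_counts = counts + rng.normal(0.0, sigma, size=num_classes)
        # Release only the (noisy) winning label, never the raw counts.
        return int(np.argmax(noisy_counts))

    # Example: 30 agents vote on one query point. A clear margin between the
    # top two counts means the noise rarely flips the winner, which is the
    # intuition behind the margin-dependent accuracy and privacy guarantees.
    votes = np.array([2] * 22 + [5] * 5 + [1] * 3)
    print(aggregate_votes(votes, num_classes=10, sigma=4.0))

Note that the noise is added to a vector of per-class vote counts rather than to a model-sized gradient, so its scale is independent of the model dimension, and each query transmits a single label instead of a full gradient; this is where the dimension independence and communication savings claimed in the abstract come from.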

Related research

11/10/2020
Compression Boosts Differentially Private Federated Learning
Federated Learning allows distributed entities to train a common model c...

09/08/2022
Uncovering the Connection Between Differential Privacy and Certified Robustness of Federated Learning against Poisoning Attacks
Federated learning (FL) provides an efficient paradigm to jointly train ...

09/11/2020
Federated Model Distillation with Noise-Free Differential Privacy
Conventional federated learning directly averaging model weights is only...

10/15/2022
Sketching for First Order Method: Efficient Algorithm for Low-Bandwidth Channel and Vulnerability
Sketching is one of the most fundamental tools in large-scale machine le...

12/01/2020
MYSTIKO: Cloud-Mediated, Private, Federated Gradient Descent
Federated learning enables multiple, distributed participants (potential...

02/26/2023
P4L: Privacy Preserving Peer-to-Peer Learning for Infrastructureless Setups
Distributed (or Federated) learning enables users to train machine learn...

10/11/2021
The Skellam Mechanism for Differentially Private Federated Learning
We introduce the multi-dimensional Skellam mechanism, a discrete differe...
