Measuring Lower Bounds of Local Differential Privacy via Adversary Instantiations in Federated Learning

06/18/2022
by   Marin Matsumoto, et al.

Local differential privacy (LDP) provides a strong privacy guarantee suited to distributed settings such as federated learning (FL). LDP mechanisms in FL protect a client's gradient by randomizing it on the client's device; however, how should we interpret the privacy level that this randomization provides, and what types of attacks can it mitigate in practice? To answer these questions, we introduce an empirical privacy test that measures lower bounds of LDP. The test estimates how well an adversary can predict whether a reported randomized gradient was crafted from a raw gradient g_1 or g_2. We then instantiate six adversaries in FL under LDP to measure empirical LDP at various attack surfaces, including a worst-case attack that reaches the theoretical upper bound of LDP. The empirical privacy test with these adversary instantiations lets us interpret LDP more intuitively and discuss how far the privacy parameter can be relaxed before a particular instantiated attack succeeds. We also report numerical observations of the measured privacy in these adversarial settings and show that the worst-case attack is not realistic in FL. Finally, we discuss the possible relaxation of privacy levels in FL under LDP.
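The abstract's empirical test estimates a lower bound on the privacy parameter from an adversary's distinguishing success. As a minimal illustrative sketch (not the paper's actual gradient attack), the same idea can be demonstrated with epsilon-LDP randomized response on a single bit: the optimal adversary guesses that the input equals the reported output, and the log-ratio of the adversary's success rates on the two candidate inputs lower-bounds epsilon. All function names here are hypothetical.

```python
import math
import random

def randomized_response(bit, epsilon):
    """epsilon-LDP randomized response: report the true bit
    with probability e^epsilon / (e^epsilon + 1)."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def empirical_epsilon(epsilon, trials=200_000, seed=0):
    """Estimate a lower bound on epsilon by comparing how often the
    mechanism reports 1 when the true input is 1 versus when it is 0
    (the optimal adversary's two distinguishing events)."""
    random.seed(seed)
    # Empirical P[report 1 | input 1] and P[report 1 | input 0]
    p1 = sum(randomized_response(1, epsilon) for _ in range(trials)) / trials
    p0 = sum(randomized_response(0, epsilon) for _ in range(trials)) / trials
    p0 = max(p0, 1 / trials)  # avoid log(0) in the rare all-correct case
    return math.log(p1 / p0)
```

With enough trials the estimate approaches the true epsilon, mirroring the paper's point that a worst-case adversary can reach the theoretical bound; weaker (more realistic) adversaries yield smaller measured values.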


Related research

09/21/2021
DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning
Federated learning (FL) has become an emerging machine learning techniqu...

09/08/2022
Uncovering the Connection Between Differential Privacy and Certified Robustness of Federated Learning against Poisoning Attacks
Federated learning (FL) provides an efficient paradigm to jointly train ...

10/06/2022
CANIFE: Crafting Canaries for Empirical Privacy Measurement in Federated Learning
Federated Learning (FL) is a setting for training machine learning model...

01/09/2023
Is Federated Learning a Practical PET Yet?
Federated learning (FL) is a framework for users to jointly train a mach...

04/05/2022
User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning
Many existing privacy-enhanced speech emotion recognition (SER) framewor...

06/07/2022
Subject Granular Differential Privacy in Federated Learning
This paper introduces subject granular privacy in the Federated Learning...

02/10/2022
PPA: Preference Profiling Attack Against Federated Learning
Federated learning (FL) trains a global model across a number of decentr...
