Layer-wise Characterization of Latent Information Leakage in Federated Learning

10/17/2020
by Fan Mo, et al.

Training a deep neural network (DNN) via federated learning allows participants to share model updates (gradients) instead of the data itself. However, recent studies show that unintended latent information (e.g., gender or race) carried by the gradients can be recovered by attackers, compromising the privacy guarantee promised by federated learning. Existing privacy-preserving techniques (e.g., differential privacy) either offer limited protection against such attacks or incur considerable loss of model utility. Moreover, characterizing the latent information carried by the gradients and the resulting privacy leakage has remained a major theoretical and practical challenge. In this paper, we propose two new metrics to address these challenges: the empirical 𝒱-information, a theoretically grounded notion of information that measures the amount of gradient information usable by an attacker, and a sensitivity analysis that uses the Jacobian matrix to measure how much the gradients change with respect to the latent information, which further quantifies the privacy risk. We show that these metrics can localize the private information in each layer of a DNN and quantify the leakage according to how sensitive the gradients are to the latent information. As a practical application, we design LatenTZ: a federated learning framework that runs the most sensitive layers in the clients' Trusted Execution Environments (TEEs). Our evaluation of the LatenTZ implementation shows that TEE-based approaches are promising for defending against powerful property inference attacks without significant overhead in the clients' computing resources or loss of model utility.
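As a rough illustration of the layer-wise sensitivity idea (not the paper's exact procedure), the sketch below approximates how strongly each parameter group's gradient reacts when only a latent attribute of the input batch changes, using a finite-difference comparison between two groups rather than the full Jacobian. The model, data, and relative-norm score are hypothetical placeholders chosen for a self-contained example.

```python
# Minimal sketch (assumed setup, not the authors' implementation): approximate
# per-layer gradient sensitivity to a latent attribute by comparing the gradients
# produced by two batches that differ only in that attribute.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

def layer_gradients(x, y):
    """Gradient of the loss w.r.t. each parameter tensor (one per layer weight/bias)."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return [g.detach().flatten() for g in grads]

# Stand-in batches: in practice these would be real samples from two groups that
# differ only in the latent property (e.g. gender), with the same task labels.
x_a, y_a = torch.randn(16, 32), torch.randint(0, 2, (16,))
x_b, y_b = torch.randn(16, 32), torch.randint(0, 2, (16,))

grads_a = layer_gradients(x_a, y_a)
grads_b = layer_gradients(x_b, y_b)

# A parameter group whose gradient shifts strongly when only the latent property
# changes is more likely to leak that property, and is a candidate for the TEE.
for i, (ga, gb) in enumerate(zip(grads_a, grads_b)):
    score = (ga - gb).norm() / (ga.norm() + 1e-12)
    print(f"param group {i}: relative gradient change = {score:.3f}")
```

In this toy setup the scores would be compared across layers; the layers with the largest relative change would be the ones placed inside the clients' TEEs.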

