A Quantitative Metric for Privacy Leakage in Federated Learning

02/24/2021
by Yong Liu, et al.

In a federated learning system, parameter gradients are shared among participants and the central server, while the original data never leave their protected source domain. However, the gradients themselves may carry enough information to allow precise inference of the original data: by reporting their parameter gradients to the central server, clients expose their datasets to inference attacks from adversaries. In this paper, we propose a quantitative metric based on mutual information that lets clients evaluate the potential risk of information leakage in their gradients. Mutual information has received increasing attention in the machine learning and data mining communities over the past few years; however, existing mutual information estimation methods cannot handle high-dimensional variables. We therefore propose a novel method to approximate the mutual information between high-dimensional gradients and batched input data. Experimental results show that the proposed metric reliably reflects the extent of information leakage in federated learning. In addition, using the proposed metric, we investigate the factors that influence the risk level, and show that the risk of information leakage is related to the status of the task model as well as the inherent data distribution.
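The abstract does not spell out the authors' approximation scheme, so the sketch below only illustrates the general idea: estimating the mutual information between a batch of flattened per-example gradients and the corresponding inputs with a standard MINE-style neural estimator (the Donsker-Varadhan lower bound of Belghazi et al., 2018) as a stand-in. The network architecture, dimensions, and the choice of estimator are all assumptions for illustration, not the paper's method.

```python
# Hedged sketch: a MINE-style lower bound on I(gradient; data), used here as
# a stand-in leakage score. The paper's actual approximation is not given in
# the abstract; all names and dimensions below are hypothetical.
import math
import torch
import torch.nn as nn

class MINECritic(nn.Module):
    """Scores (gradient, data) pairs; trained to score joint pairs higher."""
    def __init__(self, grad_dim, data_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(grad_dim + data_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, g, x):
        return self.net(torch.cat([g, x], dim=-1)).squeeze(-1)

def mine_lower_bound(critic, grads, data):
    """Donsker-Varadhan lower bound on I(grads; data).

    grads: (N, grad_dim) flattened per-example gradients
    data:  (N, data_dim) flattened input examples
    """
    joint = critic(grads, data).mean()
    # Marginal term: pair each gradient with an independently shuffled input.
    perm = torch.randperm(data.size(0))
    marginal = torch.logsumexp(critic(grads, data[perm]), dim=0) \
        - math.log(data.size(0))
    return joint - marginal

# Synthetic stand-ins; in practice these would be real per-example gradients
# (flattened, possibly projected to a lower dimension) and their inputs.
N, grad_dim, data_dim = 512, 128, 64
G, X = torch.randn(N, grad_dim), torch.randn(N, data_dim)

critic = MINECritic(grad_dim, data_dim)
opt = torch.optim.Adam(critic.parameters(), lr=1e-4)
for _ in range(2000):
    opt.zero_grad()
    loss = -mine_lower_bound(critic, G, X)  # maximize the bound
    loss.backward()
    opt.step()

# The converged bound serves as the leakage score in this sketch; for the
# independent synthetic tensors above it should hover near zero.
print(mine_lower_bound(critic, G, X).item())
```

A client could compute such a score locally before reporting an update; a larger estimate would indicate that the gradient retains more information about the private batch. Note that naive neural estimators of this kind degrade in high dimensions, which is precisely the limitation the paper's proposed approximation targets.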



Related research

10/12/2019 · Quantification of the Leakage in Federated Learning
With the growing emphasis on users' privacy, federated learning has beco...

10/17/2020 · Layer-wise Characterization of Latent Information Leakage in Federated Learning
Training a deep neural network (DNN) via federated learning allows parti...

02/26/2021 · Efficient Client Contribution Evaluation for Horizontal Federated Learning
In federated learning (FL), fair and accurate measurement of the contrib...

10/26/2021 · CAFE: Catastrophic Data Leakage in Vertical Federated Learning
Recent studies show that private training data can be leaked through the...

01/21/2022 · TOFU: Towards Obfuscated Federated Updates by Encoding Weight Updates into Gradients from Proxy Data
Advances in Federated Learning and an abundance of user data have enable...

02/21/2023 · Speech Privacy Leakage from Shared Gradients in Distributed Learning
Distributed machine learning paradigms, such as federated learning, have...

04/26/2022 · Enhancing Privacy against Inversion Attacks in Federated Learning by using Mixing Gradients Strategies
Federated learning reduces the risk of information leakage, but remains ...
