Gradient Obfuscation Gives a False Sense of Security in Federated Learning

06/08/2022
by Kai Yue, et al.

Federated learning has been proposed as a privacy-preserving machine learning framework that enables multiple clients to collaborate without sharing raw data. However, client privacy protection is not guaranteed by design in this framework. Prior work has shown that the gradient sharing strategies in federated learning can be vulnerable to data reconstruction attacks. In practice, however, clients may not transmit raw gradients, whether because of the high communication cost or to meet privacy enhancement requirements. Empirical studies have demonstrated that gradient obfuscation, including intentional obfuscation via gradient noise injection and unintentional obfuscation via gradient compression, can provide more privacy protection against reconstruction attacks. In this work, we present a new data reconstruction attack framework targeting the image classification task in federated learning. We show that commonly adopted gradient postprocessing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning. Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression. Additionally, we design a new method under the proposed framework to reconstruct the image at the semantic level. We quantify the semantic privacy leakage and compare it with conventional evaluations based on image similarity scores. Our comparisons challenge the image data leakage evaluation schemes in the literature. The results emphasize the importance of revisiting and redesigning the privacy protection mechanisms for client data in existing federated learning algorithms.
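To make the gradient postprocessing procedures named in the abstract concrete, here is a minimal PyTorch sketch of the three obfuscation operations: uniform quantization, top-k sparsification, and Gaussian perturbation. The function names and parameter defaults are illustrative assumptions, not the paper's exact implementations.

```python
import torch

def quantize(grad: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Uniform quantization of the gradient to 2**num_bits levels
    # over the tensor's value range.
    lo, hi = grad.min(), grad.max()
    step = ((hi - lo) / (2 ** num_bits - 1)).clamp(min=1e-12)
    return torch.round((grad - lo) / step) * step + lo

def sparsify_topk(grad: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    # Top-k sparsification: zero out all but the largest-magnitude entries.
    k = max(1, int(grad.numel() * keep_ratio))
    threshold = grad.abs().flatten().topk(k).values.min()
    return torch.where(grad.abs() >= threshold, grad, torch.zeros_like(grad))

def perturb(grad: torch.Tensor, noise_std: float = 1e-3) -> torch.Tensor:
    # Gradient perturbation: additive Gaussian noise, in the style of
    # differentially private gradient release.
    return grad + noise_std * torch.randn_like(grad)
```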
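For context on the attack surface, the sketch below shows a baseline gradient-matching (DLG-style) reconstruction: a dummy image is optimized so that the gradient it induces matches the gradient a client shared. This is the conventional baseline the paper's framework improves on, not the paper's own method; all names and hyperparameters here are assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_matching_attack(model, observed_grads, label, input_shape,
                             steps=2000, lr=0.1):
    # `label` is a LongTensor of shape (1,) holding the (known or guessed)
    # class index; `observed_grads` is the list of gradients the client shared.
    dummy = torch.randn(1, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), label)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # L2 distance between the dummy-input gradient and the observed one.
        match = sum(((g - o) ** 2).sum() for g, o in zip(grads, observed_grads))
        match.backward()
        opt.step()
    return dummy.detach()
```

When `observed_grads` have been quantized, sparsified, or perturbed, this naive L2 matching degrades, which is what makes obfuscation look protective in prior evaluations; the abstract's thesis is that an attacker who accounts for the obfuscation can still recover private image content, including at the semantic level.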


