RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense

04/11/2023
by Yue Cui, et al.

Federated learning (FL) offers privacy advantages by allowing clients to collaboratively train a model without sharing their private data. However, recent studies have shown that private information can still be leaked through the shared gradients. To further minimize the risk of privacy leakage, existing defenses usually require clients to locally modify their gradients (e.g., with differential privacy) before sharing them with the server. While these approaches are effective in certain cases, they treat a client's entire data as a single entity to protect, which usually comes at a large cost in model utility. In this paper, we seek to reconcile utility and privacy in FL by proposing RecUP-FL, a user-configurable privacy defense that focuses on user-specified sensitive attributes and achieves significant utility improvements over traditional defenses. Moreover, we observe that existing inference attacks often rely on a machine learning model to extract the private information (e.g., attributes). We therefore formulate the defense as an adversarial learning problem: RecUP-FL generates slight perturbations that are added to the gradients before sharing in order to fool adversary models. To improve transferability to un-queryable black-box adversary models, RecUP-FL draws on the idea of meta-learning: it forms a model zoo containing a set of substitute models and iteratively alternates between simulated white-box and black-box adversarial attack scenarios when generating perturbations. Extensive experiments on four datasets under various adversarial settings (both attribute inference attacks and data reconstruction attacks) show that RecUP-FL meets user-specified privacy constraints on the sensitive attributes while significantly improving model utility compared with state-of-the-art privacy defenses.
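To make the adversarial-learning formulation concrete, the sketch below shows one plausible way to craft such a gradient perturbation against a zoo of substitute adversaries, alternating a white-box gradient step on one substitute with a black-box transfer term over the others. This is a minimal illustration in PyTorch, not the authors' implementation: the helper names (make_adversary, craft_perturbation), the toy adversary architecture, and all hyperparameters (eps, steps, lr) are assumptions.

```python
import torch
import torch.nn as nn

def make_adversary(grad_dim: int, n_attr_classes: int) -> nn.Module:
    # Hypothetical substitute adversary: infers a sensitive attribute
    # from a client's flattened gradient.
    return nn.Sequential(nn.Linear(grad_dim, 64), nn.ReLU(),
                         nn.Linear(64, n_attr_classes))

def craft_perturbation(grad, attr_label, zoo, eps=0.05, steps=20, lr=0.01):
    # Find a small, norm-bounded delta so that the substitutes in the zoo
    # can no longer infer the sensitive attribute carried by `grad`.
    delta = torch.zeros_like(grad, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(steps):
        white = zoo[step % len(zoo)]                 # simulated white-box adversary
        black = [m for m in zoo if m is not white]   # simulated black-box adversaries
        x = (grad + delta).unsqueeze(0)
        # White-box phase: ascend this substitute's attribute-inference loss.
        loss = loss_fn(white(x), attr_label)
        # Black-box phase: add the other substitutes' losses so the
        # perturbation transfers beyond the single white-box model.
        if black:
            loss = loss + sum(loss_fn(m(x), attr_label) for m in black) / len(black)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()          # FGSM-style ascent step
            delta.clamp_(-eps, eps)                  # keep the change small for utility
            delta.grad.zero_()
    return (grad + delta).detach()

# Toy usage: perturb one flattened gradient before sharing it.
zoo = [make_adversary(grad_dim=128, n_attr_classes=2) for _ in range(3)]
grad = torch.randn(128)            # stand-in for a client's flattened gradient
label = torch.tensor([1])          # class of the user-specified sensitive attribute
shared_grad = craft_perturbation(grad, label, zoo)
```

In the paper's setting, the perturbation budget would presumably be tied to the user-specified per-attribute privacy constraint; that mapping is specific to RecUP-FL and is omitted from this sketch.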

Related research

09/13/2022 · Defense against Privacy Leakage in Federated Learning
Federated Learning (FL) provides a promising distributed learning paradi...

06/13/2023 · Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios
Federated learning (FL) naturally faces the problem of data heterogeneit...

04/05/2022 · User-Level Differential Privacy against Attribute Inference Attack of Speech Emotion Recognition in Federated Learning
Many existing privacy-enhanced speech emotion recognition (SER) framewor...

03/29/2022 · Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
Federated Learning (FL) framework brings privacy benefits to distributed...

04/06/2023 · Quantifying and Defending against Privacy Threats on Federated Knowledge Graph Embedding
Knowledge Graph Embedding (KGE) is a fundamental technique that extracts...

10/08/2022 · FedDef: Robust Federated Learning-based Network Intrusion Detection Systems Against Gradient Leakage
Deep learning methods have been widely applied to anomaly-based network ...

12/26/2021 · Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings
Speech emotion recognition (SER) processes speech signals to detect and ...
