Protection Against Reconstruction and Its Applications in Private Federated Learning

by Abhishek Bhowmick et al.

Federated learning has become an exciting direction for both research and practical training of models with user data. Although data remains decentralized in federated learning, it is common to assume that model updates are sent in the clear from the devices to the server. Differential privacy has been proposed as a way to keep the model private, but this does not address the fact that model updates can be observed on the server and can leak user data. Local differential privacy is one of the strongest forms of privacy protection, since each individual's data is privatized before it leaves the device. However, local differential privacy, as traditionally applied, may prove too stringent a privacy condition for many high-dimensional problems, such as distributed model fitting. We propose a new paradigm for local differential privacy that provides protection against certain adversaries. Specifically, we ensure that adversaries with limited prior information cannot, with high probability, reconstruct the original data within some prescribed tolerance. This interpretation allows us to consider larger privacy parameters, and we design (optimal) differentially private mechanisms in this large-privacy-parameter regime. In this work, we combine these local privacy protections with central differential privacy to present a practical approach to private model training. Further, we show that under these privacy restrictions, image classification and language models retain utility comparable to federated learning without such restrictions.
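The combination described above, privatizing each model update locally before it leaves the device, can be illustrated with a generic clip-and-noise sketch. This is not the paper's optimal mechanism for the large-privacy-parameter regime; the function and parameter names (`privatize_update`, `clip_norm`, `noise_multiplier`) are illustrative assumptions, and Gaussian noise is used only as a stand-in for a local randomizer.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a model update to a fixed L2 norm, then add Gaussian noise.

    Illustrative sketch only: the paper designs specialized local
    mechanisms for large privacy parameters; this is the generic
    clip-and-noise pattern used in private federated learning.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(update, dtype=float)
    # Scale the update down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Add noise proportional to the clipping bound before transmission.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise
```

On the server side, the noised updates would then be aggregated, with central differential privacy accounting layered on top of the local protection.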



Adaptive Local Steps Federated Learning with Differential Privacy Driven by Convergence Analysis

Federated Learning (FL) is a distributed machine learning technique that...

LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy

Training machine learning models on sensitive user data has raised increasi...

Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization

Large-scale machine learning systems often involve data distributed acro...

LDP-Fed: Federated Learning with Local Differential Privacy

This paper presents LDP-Fed, a novel federated learning system with a fo...

Randomized Quantization is All You Need for Differential Privacy in Federated Learning

Federated learning (FL) is a common and practical framework for learning...

Training a Tokenizer for Free with Private Federated Learning

Federated learning with differential privacy, i.e. private federated lea...

Decentralized Wireless Federated Learning with Differential Privacy

This paper studies decentralized federated learning algorithms in wirele...
