Client-specific Property Inference against Secure Aggregation in Federated Learning

03/07/2023
by Raouf Kerkouche et al.

Federated learning has become a widely used paradigm for collaboratively training a common model among different participants, with the help of a central server that coordinates the training. Although only model parameters or other model updates are exchanged during federated training, rather than the participants' data, many attacks have shown that it is still possible to infer sensitive information, such as membership or properties, or even to outright reconstruct participant data. Although differential privacy is considered an effective defense against such privacy attacks, it is also criticized for its negative effect on utility. Another possible defense is secure aggregation, which allows the server to access only the aggregated update instead of each individual one; it is often more appealing because it does not degrade model quality. However, combining the aggregated updates, which are generated by a different composition of clients in every round, may still allow the inference of some client-specific information. In this paper, we show that simple linear models can effectively capture client-specific properties from the aggregated model updates alone, owing to the linearity of aggregation. We formulate an optimization problem across rounds to infer a tested property of every client from the outputs of these linear models: for example, whether a client has a specific sample in its training data (membership inference), or whether it misbehaves and attempts to degrade the performance of the common model through poisoning attacks. Our reconstruction technique is completely passive and undetectable. We demonstrate the efficacy of our approach in several scenarios, showing that secure aggregation provides very limited privacy guarantees in practice. The source code will be released upon publication.
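To make the cross-round idea concrete, below is a minimal sketch, not the authors' released implementation, of how the linearity of aggregation can expose per-client properties. It assumes the server knows which clients participate in each round (the participation matrix) and that the score of a linear property classifier on an aggregated update decomposes, approximately, into a sum of per-client contributions; all variable names and parameters are illustrative.

import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical setup: T rounds, n clients. The server knows the
# participation matrix M (M[t, i] = 1 iff client i contributed to
# round t's aggregate) and computes a score y[t] by applying a linear
# property classifier to round t's aggregated update.
rng = np.random.default_rng(0)
n_clients, n_rounds = 20, 200

p_true = (rng.random(n_clients) < 0.3).astype(float)          # hidden per-client property bits
M = (rng.random((n_rounds, n_clients)) < 0.5).astype(float)   # who participated in which round

# Linearity of aggregation (the modeling assumption): the classifier's
# score on a sum of updates is approximately the sum of its scores on
# the individual updates, so y is simulated here as M @ p_true + noise.
y = M @ p_true + rng.normal(scale=0.5, size=n_rounds)

# Cross-round optimization: recover per-client property scores by
# solving  min_p ||M p - y||^2  subject to  0 <= p <= 1.
p_hat = lsq_linear(M, y, bounds=(0.0, 1.0)).x

print("true property bits :", p_true.astype(int))
print("inferred (p > 0.5) :", (p_hat > 0.5).astype(int))

Because the client composition changes across rounds, the rows of the participation matrix eventually span the client space, and the bounded least-squares solution separates each client's contribution; this is why observing more rounds sharpens the per-client inference, even though every single round reveals only an aggregate.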
