Eluding Secure Aggregation in Federated Learning via Model Inconsistency

11/14/2021
by Dario Pasquini, et al.

Federated learning allows a set of users to train a deep neural network over their private training datasets. During the protocol, the datasets never leave the users' devices. This is achieved by requiring each user to send "only" model updates to a central server that, in turn, aggregates them to update the parameters of the deep neural network. However, it has been shown that each model update carries sensitive information about the user's dataset (e.g., via gradient inversion attacks). State-of-the-art implementations of federated learning protect these model updates with secure aggregation: a cryptographic protocol that securely computes the aggregate of the users' model updates. Secure aggregation is pivotal to protecting users' privacy, since it prevents the server from learning the value and the source of individual model updates, thereby blocking inference and data-attribution attacks. In this work, we show that a malicious server can easily elude secure aggregation as if it were not in place. We devise two attacks capable of inferring information about individual private training datasets, independently of the number of users participating in the secure aggregation; this makes them concrete threats in large-scale, real-world federated learning applications. The attacks are generic: they do not target any specific secure aggregation protocol, and they remain equally effective even if the protocol is replaced by its ideal functionality, which provides perfect security. Our work demonstrates that secure aggregation has been incorrectly combined with federated learning and that current implementations offer only a "false sense of security".
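
To make the role of secure aggregation concrete, here is a minimal sketch of the core idea behind pairwise-masking schemes. This is an illustrative toy, not any specific protocol from the literature: real protocols derive the pairwise masks from shared keys via a PRG and handle user dropouts, which this sketch omits. Each pair of users agrees on a random mask that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the aggregate, never any individual update.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, dim = 4, 3
updates = [rng.normal(size=dim) for _ in range(n_users)]

# Pairwise masks m[(i, j)] for i < j; in practice these would be derived
# from a key shared between users i and j, not sampled by one party.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_users) for j in range(i + 1, n_users)}

def masked_update(i):
    """User i adds masks shared with higher-indexed users and subtracts
    masks shared with lower-indexed ones; all masks cancel in the sum."""
    m = updates[i].copy()
    for j in range(n_users):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    return m

# The server only ever sees masked vectors; their sum is the true sum.
aggregate = sum(masked_update(i) for i in range(n_users))
assert np.allclose(aggregate, sum(updates))
```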
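
The model-inconsistency idea itself can also be illustrated in a few lines. The sketch below is hypothetical and not the authors' code; all names are illustrative. The malicious server serves the target user the real model and every other user a "dead" model whose zeroed ReLU weights yield an all-zero gradient, so even an ideal secure-aggregation functionality (which reveals only the sum) returns exactly the target's individual update.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, dim = 10, 5

def relu(z):
    return np.maximum(z, 0.0)

def relu_grad(z):
    return (z > 0).astype(float)  # relu'(0) = 0 by the usual convention

def user_update(w, data):
    """Toy local step: gradient of mean((relu(X @ w) - y)^2) w.r.t. w."""
    X, y = data
    z = X @ w
    return X.T @ (2.0 * (relu(z) - y) * relu_grad(z)) / len(y)

def ideal_secure_aggregation(updates):
    """Ideal functionality: the server learns only the sum of the updates."""
    return np.sum(updates, axis=0)

# Private per-user data, never seen by the server.
datasets = [(rng.normal(size=(8, dim)), rng.normal(size=8))
            for _ in range(n_users)]

honest_model = rng.normal(size=dim)  # the model the target is meant to train
dead_model = np.zeros(dim)           # zeroed ReLU weights -> zero gradient

target = 3
served = [honest_model if i == target else dead_model
          for i in range(n_users)]
updates = [user_update(served[i], datasets[i]) for i in range(n_users)]

aggregate = ideal_secure_aggregation(updates)
assert np.allclose(aggregate, user_update(honest_model, datasets[target]))
```

Because every non-target update is identically zero, the sum reveals the target's gradient no matter how many users participate, which matches the abstract's claim that the attacks are independent of the number of users in the secure aggregation.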
