MixNN: Protection of Federated Learning Against Inference Attacks by Mixing Neural Network Layers

09/26/2021
by   Antoine Boutet, et al.
Machine Learning (ML) has emerged as a core technology for building models that perform complex tasks. Boosted by Machine Learning as a Service (MLaaS), the number of applications relying on ML capabilities is ever increasing. However, ML models are the source of different privacy violations through passive or active attacks from different entities. In this paper, we present MixNN, a proxy-based privacy-preserving system for federated learning that protects participants against a curious or malicious aggregation server trying to infer sensitive attributes. MixNN receives the model updates from participants and mixes layers between participants before sending the mixed updates to the aggregation server. This mixing strategy drastically reduces privacy leakage without any trade-off in utility: mixing the updates has no impact on the result of the aggregation computed by the server. We experimentally evaluate MixNN and design a new attribute inference attack, Sim, which exploits a privacy vulnerability of the SGD algorithm to quantify privacy leakage in different settings (i.e., the aggregation server conducts a passive or an active attack). We show that MixNN significantly limits attribute inference compared to a baseline using noisy gradients (well known to damage utility) while keeping the same level of utility as classic federated learning.
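The key observation behind the mixing strategy can be illustrated with a small sketch: federated averaging is computed layer by layer, so permuting, for each layer, which participant contributes that layer leaves the aggregate unchanged while breaking the link between a participant and a full update. All function names below are illustrative, not the paper's actual implementation.

```python
import random

def federated_average(updates):
    """Layer-wise mean of per-layer updates (each update is a list of floats,
    one value per layer, standing in for that layer's parameters)."""
    n_layers = len(updates[0])
    return [sum(u[l] for u in updates) / len(updates) for l in range(n_layers)]

def mix_layers(updates, rng):
    """Proxy-side mixing: for each layer index, shuffle which participant's
    copy of that layer is forwarded to the server. Each layer's multiset of
    values is preserved, so the layer-wise mean is unchanged."""
    n_layers = len(updates[0])
    mixed = [list(u) for u in updates]
    for l in range(n_layers):
        column = [u[l] for u in updates]  # one layer across all participants
        rng.shuffle(column)
        for i, value in enumerate(column):
            mixed[i][l] = value
    return mixed

# Three participants, three "layers" each.
rng = random.Random(0)
updates = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]

# The aggregate is identical with and without mixing.
assert federated_average(mix_layers(updates, rng)) == federated_average(updates)
```

No single forwarded update corresponds to one participant anymore, yet the server's aggregation result is bit-for-bit identical, which is why the scheme costs nothing in utility.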


Related research

06/10/2021 - Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix
We show that aggregated model updates in federated learning may be insec...

12/15/2022 - White-box Inference Attacks against Centralized Machine Learning and Federated Learning
With the development of information science and technology, various indu...

04/07/2023 - Efficient Secure Aggregation for Privacy-Preserving Federated Machine Learning
Federated learning introduces a novel approach to training machine learn...

03/07/2023 - Client-specific Property Inference against Secure Aggregation in Federated Learning
Federated learning has become a widely used paradigm for collaboratively...

07/27/2020 - VFL: A Verifiable Federated Learning with Privacy-Preserving for Big Data in Industrial IoT
Due to the strong analytical ability of big data, deep learning has been...

07/13/2022 - Enhanced Security and Privacy via Fragmented Federated Learning
In federated learning (FL), a set of participants share updates computed...

02/10/2023 - Privacy Against Agnostic Inference Attacks in Vertical Federated Learning
A novel form of inference attack in vertical federated learning (VFL) is...
