Privacy Leakage of Real-World Vertical Federated Learning

11/18/2020
by   Haiqin Weng, et al.

Federated learning enables mutually distrusting participants to collaboratively learn a distributed machine learning model without revealing anything but the model's output. Generic federated learning has been studied extensively, and several learning protocols as well as open-source frameworks have been developed. Yet their pursuit of computational efficiency and fast implementation may diminish the security and privacy guarantees for participants' training data, about which little is known thus far. In this paper, we consider an honest-but-curious adversary who participates in training a distributed ML model, does not deviate from the defined learning protocol, but attempts to infer private training data from the legitimately received information. In this setting, we design and implement two practical attacks, the reverse sum attack and the reverse multiplication attack, neither of which affects the accuracy of the learned model. By empirically studying the privacy leakage of two learning protocols, we show that our attacks are (1) effective: the adversary successfully steals the private training data, even when the intermediate outputs are encrypted to protect data privacy; (2) evasive: the adversary's behavior does not deviate from the protocol specification and does not degrade the accuracy of the target model; and (3) easy: the adversary needs little prior knowledge about the data distribution of the target participant. We also show experimentally that the leaked information is as useful as the raw training data, by training an alternative classifier on it. We further discuss potential countermeasures and their challenges, which we hope may lead to several promising research directions.
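To make the honest-but-curious threat model concrete, here is a minimal toy sketch (not the paper's actual reverse multiplication attack, whose protocol details are not given in this abstract). It assumes a vertical setting in which a passive party repeatedly sends the intermediate product of its private feature matrix with per-iteration weight vectors, and a hypothetical adversary who also comes to know those weights; under those assumptions, recovering the private features reduces to solving an over-determined linear system.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 5, 3
X_B = rng.normal(size=(n_samples, n_features))  # passive party's private features

# Over T training iterations, the passive party sends the intermediate
# product u_t = X_B @ w_t (plaintext here purely for illustration; the
# paper studies protocols where such outputs are encrypted).
T = 4  # T >= n_features makes the linear system over-determined
W = rng.normal(size=(n_features, T))  # weight vectors across iterations (assumed known)
U = X_B @ W                           # what the adversary observes

# With W of full row rank, X_B is the unique solution of U = X_B @ W,
# recoverable via the Moore-Penrose pseudoinverse.
X_rec = U @ np.linalg.pinv(W)
print(np.allclose(X_rec, X_B))
```

The point of the sketch is only that intermediate matrix products accumulated over many iterations can fully determine private inputs; the paper's contribution is showing that comparable leakage persists in real-world protocols even with encryption in place.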

