Data Leakage in Tabular Federated Learning

10/04/2022
by Mark Vero, et al.

While federated learning (FL) promises to preserve privacy in distributed training of deep learning models, recent work in the image and NLP domains showed that training updates leak private data of participating clients. At the same time, most high-stakes applications of FL (e.g., legal and financial) use tabular data. Compared to the NLP and image domains, reconstruction of tabular data poses several unique challenges: (i) categorical features introduce a significantly more difficult mixed discrete-continuous optimization problem, (ii) the mix of categorical and continuous features causes high variance in the final reconstructions, and (iii) structured data makes it difficult for the adversary to judge reconstruction quality. In this work, we tackle these challenges and propose the first comprehensive reconstruction attack on tabular data, called TabLeak. TabLeak is based on three key ingredients: (i) a softmax structural prior, implicitly converting the mixed discrete-continuous optimization problem into an easier fully continuous one, (ii) a way to reduce the variance of our reconstructions through a pooled ensembling scheme exploiting the structure of tabular data, and (iii) an entropy measure which can successfully assess reconstruction quality. Our experimental evaluation demonstrates the effectiveness of TabLeak, achieving state-of-the-art results on four popular tabular datasets. For instance, on the Adult dataset, we improve attack accuracy by 10% points at the practically relevant batch size of 32, and further obtain non-trivial reconstructions for batch sizes as large as 128. Our findings are important as they show that performing FL on tabular data, which often poses high privacy risks, is highly vulnerable.
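To make the softmax structural prior concrete, below is a minimal PyTorch sketch of a gradient-inversion attack on a toy tabular model. Everything here is an illustrative assumption rather than the paper's implementation: the tiny feature layout, the linear client model, the squared-error gradient-matching objective, and the simplification that the label is known to the attacker. TabLeak's full attack additionally uses the pooled ensembling scheme and the entropy-based quality measure, both omitted in this sketch.

```python
import torch
import torch.nn.functional as F

# Toy feature layout (illustrative, not from the paper): two continuous
# features plus one categorical feature with three categories, one-hot
# encoded at the network input.
NUM_CONT, NUM_CAT = 2, 3

def assemble_input(cont, cat_logits):
    # Softmax structural prior: the attacker optimizes unconstrained
    # logits; the softmax keeps the categorical block on the probability
    # simplex, so the whole reconstruction problem stays continuous.
    return torch.cat([cont, F.softmax(cat_logits, dim=-1)], dim=-1)

# Stand-in for the client's model and its observed training update.
torch.manual_seed(0)
model = torch.nn.Linear(NUM_CONT + NUM_CAT, 1)
true_x = torch.cat([torch.randn(1, NUM_CONT),
                    F.one_hot(torch.tensor([1]), NUM_CAT).float()], dim=-1)
true_y = torch.tensor([[1.0]])  # label assumed known, for simplicity
observed = torch.autograd.grad(F.mse_loss(model(true_x), true_y),
                               tuple(model.parameters()))

# Attacker's variables: continuous features and categorical logits.
cont = torch.randn(1, NUM_CONT, requires_grad=True)
logits = torch.randn(1, NUM_CAT, requires_grad=True)
opt = torch.optim.Adam([cont, logits], lr=0.1)

for _ in range(500):
    opt.zero_grad()
    dummy = F.mse_loss(model(assemble_input(cont, logits)), true_y)
    grads = torch.autograd.grad(dummy, tuple(model.parameters()),
                                create_graph=True)
    # Gradient-matching objective (squared error here; other distances,
    # e.g. cosine similarity, are also common in inversion attacks).
    sum(((g - o) ** 2).sum() for g, o in zip(grads, observed)).backward()
    opt.step()

print("recovered category:", F.softmax(logits, -1).argmax(-1).item())
print("recovered continuous features:", cont.detach())
```

The key design point is that the attacker never optimizes discrete category indices directly: the softmax relaxation makes standard continuous optimizers applicable, and the final category is read off with an argmax once the gradient-matching loss has converged.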


Related research

08/13/2023
Approximate and Weighted Data Reconstruction Attack in Federated Learning
Federated Learning (FL) is a distributed learning paradigm that enables ...

06/13/2023
Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios
Federated learning (FL) naturally faces the problem of data heterogeneit...

02/14/2022
Do Gradient Inversion Attacks Make Federated Learning Unsafe?
Federated learning (FL) allows the collaborative training of AI models w...

07/02/2022
FL-Defender: Combating Targeted Attacks in Federated Learning
Federated learning (FL) enables learning a global machine learning model...

09/13/2021
Source Inference Attacks in Federated Learning
Federated learning (FL) has emerged as a promising privacy-aware paradig...

02/17/2022
LAMP: Extracting Text from Gradients with Language Model Priors
Recent work shows that sensitive user data can be reconstructed from gra...
