Privacy Leakage of Adversarial Training Models in Federated Learning Systems

02/21/2022
by Jingyang Zhang, et al.

Adversarial Training (AT) is crucial for obtaining deep neural networks that are robust to adversarial attacks, yet recent works have found that it can also make models more vulnerable to privacy attacks. In this work, we further reveal this unsettling property of AT by designing a novel privacy attack that is practically applicable to privacy-sensitive Federated Learning (FL) systems. Using our method, an attacker can exploit AT models in an FL system to accurately reconstruct users' private training images, even when the training batch size is large. Code is available at https://github.com/zjysteven/PrivayAttack_AT_FL.
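For readers unfamiliar with this class of attacks, the sketch below illustrates the generic gradient-matching (gradient inversion) idea that such FL privacy attacks build on: the attacker observes the gradient a client shares with the server and optimizes a dummy image so that its gradient matches the shared one. This is only a minimal illustration under assumed choices (a toy CNN, a single 32x32 image, an L2 matching loss, Adam), not the paper's actual attack on adversarially trained models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical victim model (stand-in for an FL client's network).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

# The client's private example and the gradient it would share with the server.
x_private = torch.rand(1, 3, 32, 32)
y_private = torch.tensor([3])
loss = F.cross_entropy(model(x_private), y_private)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker: optimize a dummy image (and soft label) so its gradient matches the shared one.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)

for step in range(300):
    optimizer.zero_grad()
    pred = model(x_dummy)
    # Soft-label cross-entropy keeps the dummy label differentiable.
    dummy_loss = -(F.log_softmax(pred, dim=-1) * F.softmax(y_dummy, dim=-1)).sum()
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # L2 gradient-matching objective between dummy and shared gradients.
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    optimizer.step()

# x_dummy now approximates the private image to the extent the shared gradient constrains it.
```

In practice this optimization becomes much harder as the batch size grows, which is what makes the abstract's claim that AT models remain accurately reconstructible even at large batch sizes notable.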

Related research

02/14/2022
Do Gradient Inversion Attacks Make Federated Learning Unsafe?
Federated learning (FL) allows the collaborative training of AI models w...

07/21/2021
Defending against Reconstruction Attack in Vertical Federated Learning
Recently researchers have studied input leakage problems in Federated Le...

12/25/2020
Robustness, Privacy, and Generalization of Adversarial Training
Adversarial training can considerably robustify deep neural networks to ...

01/29/2022
Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
A central tenet of Federated learning (FL), which trains models without ...

12/20/2021
Certified Federated Adversarial Training
In federated learning (FL), robust aggregation schemes have been develop...

05/25/2022
Breaking the Chain of Gradient Leakage in Vision Transformers
User privacy is of great concern in Federated Learning, while Vision Tra...

11/10/2022
Robust Smart Home Face Recognition under Starving Federated Data
Over the past few years, the field of adversarial attack received numero...
