Certified Federated Adversarial Training

12/20/2021
by Giulio Zizzo, et al.

In federated learning (FL), robust aggregation schemes have been developed to protect against malicious clients. Many robust aggregation schemes rely on a certain number of benign clients being present in a quorum of workers. This can be hard to guarantee when clients can join at will, or join based on factors such as idle system status and being connected to power and WiFi. We tackle the scenario of securing FL systems conducting adversarial training when a quorum of workers could be completely malicious. We model an attacker who poisons the model to insert a weakness into the adversarial training, such that the model displays apparent adversarial robustness while the attacker can exploit the inserted weakness to bypass the adversarial training and force the model to misclassify adversarial examples. We use abstract interpretation techniques to detect such stealthy attacks and block the corrupted model updates. We show that this defence can preserve adversarial robustness even against an adaptive attacker.
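The abstract does not include code, but the certification idea can be sketched. One standard abstract-interpretation domain is interval bound propagation (IBP), which bounds a network's logits under L-infinity perturbations; a server can then reject a client update whose certified robust accuracy on a held-out batch collapses, regardless of how many workers are malicious. In the sketch below, the ReLU MLP shape, the function names (ibp_bounds, screen_update), the server-held validation batch, and the acceptance threshold tau are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ibp_bounds(weights, biases, x, eps):
    # Interval bound propagation: push the box [x - eps, x + eps]
    # through each affine + ReLU layer using interval arithmetic.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        mid = (lo + hi) / 2.0
        rad = (hi - lo) / 2.0
        mid = W @ mid + b            # centre moves through the affine map
        rad = np.abs(W) @ rad        # radius grows by |W|
        lo, hi = mid - rad, mid + rad
        if i < len(weights) - 1:     # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi                    # element-wise bounds on the logits

def certified_correct(weights, biases, x, y, eps):
    # Certified iff the true logit's lower bound beats every other
    # logit's upper bound for all perturbations in the eps-ball.
    lo, hi = ibp_bounds(weights, biases, x, eps)
    return lo[y] > np.max(np.delete(hi, y))

def screen_update(global_w, global_b, delta_w, delta_b, val_batch, eps, tau):
    # Hypothetical server-side check: apply the client's delta, then
    # block the update if certified robust accuracy falls below tau.
    W = [Wg + dW for Wg, dW in zip(global_w, delta_w)]
    b = [bg + db for bg, db in zip(global_b, delta_b)]
    hits = sum(certified_correct(W, b, x, y, eps) for x, y in val_batch)
    return hits / len(val_batch) >= tau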

Related Research

10/26/2021 · Ensemble Federated Adversarial Training with Non-IID data
Despite federated learning endowing distributed clients with a cooperative...

02/21/2022 · Privacy Leakage of Adversarial Training Models in Federated Learning Systems
Adversarial Training (AT) is crucial for obtaining deep neural networks ...

06/05/2022 · Federated Adversarial Training with Transformers
Federated learning (FL) has emerged to enable global model training over...

12/03/2020 · FAT: Federated Adversarial Training
Federated learning (FL) is one of the most important paradigms addressin...

08/07/2022 · Federated Adversarial Learning: A Framework with Convergence Analysis
Federated learning (FL) is a trending training paradigm to utilize decen...

09/18/2020 · Robust Decentralized Learning for Neural Networks
In decentralized learning, data is distributed among local clients which...

12/21/2021 · Improving Robustness with Image Filtering
Adversarial robustness is one of the most challenging problems in Deep L...
