Can collaborative learning be private, robust and scalable?

05/05/2022
by Dmitrii Usynin, et al.

We investigate the effectiveness of combining differential privacy, model compression and adversarial training to improve the robustness of models against adversarial samples in both train- and inference-time attacks. We explore these techniques individually and in combination to determine which method performs best without a significant utility trade-off. Our investigation provides a practical overview of methods that achieve competitive model performance, a significant reduction in model size and improved empirical adversarial robustness without severe performance degradation.
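
To make the combination concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation, of the three techniques the abstract names: adversarial training via single-step FGSM perturbations, a simplified DP-SGD-style update (per-batch gradient clipping plus Gaussian noise; proper DP-SGD clips per-sample gradients and tracks a privacy budget), and post-training magnitude pruning for model compression. The model, data and all hyperparameters (EPSILON, CLIP_NORM, NOISE_MULT, the pruning amount) are illustrative placeholders.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

# Placeholder model and data dimensions; any classifier would do here.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

EPSILON = 0.1     # FGSM perturbation budget (assumed value)
CLIP_NORM = 1.0   # gradient clipping bound for the DP-style step
NOISE_MULT = 1.0  # Gaussian noise multiplier for the DP-style step
BATCH = 32

def fgsm_example(x, y):
    # Single-step FGSM: perturb x in the direction of the loss gradient's sign.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + EPSILON * x_adv.grad.sign()).detach()

for step in range(100):
    x = torch.randn(BATCH, 20)            # stand-in for a real training batch
    y = torch.randint(0, 2, (BATCH,))

    x_adv = fgsm_example(x, y)            # adversarial training input

    optimizer.zero_grad()
    loss_fn(model(x_adv), y).backward()

    # Simplified DP-style update: clip the batch gradient, then add noise.
    # (Real DP-SGD clips each per-sample gradient before averaging.)
    torch.nn.utils.clip_grad_norm_(model.parameters(), CLIP_NORM)
    for p in model.parameters():
        p.grad += NOISE_MULT * CLIP_NORM * torch.randn_like(p.grad) / BATCH
    optimizer.step()

# Compression: prune 50% of the smallest-magnitude weights in each linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")    # bake the pruning mask into the weights

Note that the three mechanisms interact: gradient clipping and noise tend to slow convergence, crafting adversarial examples roughly doubles the per-step cost, and pruning is usually followed by fine-tuning, which is why determining the best-performing combination is an empirical question.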

