
Can collaborative learning be private, robust and scalable?

05/05/2022
by   Dmitrii Usynin, et al.

We investigate the effectiveness of combining differential privacy, model compression and adversarial training to improve the robustness of models against adversarial samples in train- and inference-time attacks. We explore these techniques individually and in combination to determine which configuration performs best without a significant utility trade-off. Our investigation provides a practical overview of methods that achieve competitive model performance, a significant reduction in model size, and improved empirical adversarial robustness without severe performance degradation.
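The three ingredients the abstract combines can be illustrated with standard building blocks: FGSM-style adversarial perturbation (adversarial training), per-example gradient clipping plus Gaussian noise (a DP-SGD-style update), and magnitude pruning (model compression). The sketch below is a minimal NumPy illustration under assumed hyperparameters (`eps`, `clip_norm`, `noise_mult`, `sparsity` are illustrative), not the paper's actual experimental setup.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM: shift the input by eps in the sign direction of the loss gradient."""
    return x + eps * np.sign(grad)

def dp_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD-style update: clip each per-example gradient to clip_norm,
    average, add calibrated Gaussian noise, then take a gradient step."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads), size=w.shape)
    return w - lr * (mean_grad + noise)

def magnitude_prune(w, sparsity=0.5):
    """Compression: zero out the smallest-magnitude fraction of the weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w).ravel())[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

In a combined pipeline of the kind the paper studies, training batches would be augmented with `fgsm_perturb`-ed inputs, weights updated with `dp_sgd_step`, and the final model shrunk with `magnitude_prune`; the paper's contribution is measuring how these interact empirically.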


Related research:

12/09/2021 · Mutual Adversarial Training: Learning together is better than going alone
Recent studies have shown that robustness to adversarial attacks can be ...

12/25/2020 · Robustness, Privacy, and Generalization of Adversarial Training
Adversarial training can considerably robustify deep neural networks to ...

10/15/2020 · Federated Learning in Adversarial Settings
Federated Learning enables entities to collaboratively learn a shared pr...

02/10/2019 · Adversarially Trained Model Compression: When Robustness Meets Efficiency
The robustness of deep models to adversarial attacks has gained signific...

08/16/2020 · Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks
Adversarial training has been proven to be an effective technique for im...

04/15/2022 · Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning
Adversarial training (i.e., training on adversarially perturbed input da...

01/25/2023 · A Study on FGSM Adversarial Training for Neural Retrieval
Neural retrieval models have acquired significant effectiveness gains ov...