
Mutual Adversarial Training: Learning together is better than going alone

12/09/2021
by Jiang Liu, et al.

Recent studies have shown that robustness to adversarial attacks can be transferred across networks. In other words, we can make a weak model more robust with the help of a strong teacher model. We ask: instead of learning from a static teacher, can models "learn together" and "teach each other" to achieve better robustness? In this paper, we study how interactions among models affect robustness via knowledge distillation. We propose mutual adversarial training (MAT), in which multiple models are trained together and share the knowledge of adversarial examples to achieve improved robustness. MAT allows robust models to explore a larger space of adversarial samples and find more robust feature spaces and decision boundaries. Through extensive experiments on CIFAR-10 and CIFAR-100, we demonstrate that MAT can effectively improve model robustness and outperform state-of-the-art methods under white-box attacks, bringing ∼8% accuracy gain over vanilla adversarial training (AT) under PGD-100 attacks. In addition, we show that MAT can also mitigate the robustness trade-off among different perturbation types, bringing as much as 13.1% accuracy gain over AT baselines against the union of l_∞, l_2 and l_1 attacks. These results show the superiority of the proposed method and demonstrate that collaborative learning is an effective strategy for designing robust models.
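The abstract describes the sharing mechanism only at a high level: each model is adversarially trained and distills knowledge from its peers' predictions on adversarial examples. Below is a minimal PyTorch sketch of what one such update for two peer models might look like, assuming a standard l_∞ PGD attack and a KL-divergence distillation term; the attack settings, the loss weighting, and the helper names (pgd_attack, mat_step, distill_weight) are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch of a mutual adversarial training (MAT) style update for
# two peer models. The PGD settings, the KL distillation term, and the
# weighting `distill_weight` are assumptions made for this sketch, not the
# paper's exact loss.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, step_size=2 / 255, steps=10):
    """Craft l_inf PGD adversarial examples against `model`."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def mat_step(model_a, model_b, opt_a, opt_b, x, y, distill_weight=0.5, T=1.0):
    """One collaborative update: each model trains on its own adversarial
    examples (cross-entropy) plus a KL term pulling its predictions toward
    its peer's predictions on those same adversarial examples."""
    x_adv_a = pgd_attack(model_a, x, y)  # each model attacks itself
    x_adv_b = pgd_attack(model_b, x, y)

    for model, peer, opt, x_adv in (
        (model_a, model_b, opt_a, x_adv_a),
        (model_b, model_a, opt_b, x_adv_b),
    ):
        logits = model(x_adv)
        with torch.no_grad():
            peer_logits = peer(x_adv)  # peer knowledge, no gradient to peer
        ce = F.cross_entropy(logits, y)
        kl = F.kl_div(
            F.log_softmax(logits / T, dim=1),
            F.softmax(peer_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        loss = (1 - distill_weight) * ce + distill_weight * kl
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In a full run this step would be applied to every CIFAR-10/CIFAR-100 batch for each model in the cohort; extending the sketch to the multi-perturbation setting reported in the abstract (the union of l_∞, l_2 and l_1 attacks) would require crafting additional attack types per model.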

08/18/2021 · Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better
Adversarial training is one effective approach for training robust deep ...

05/05/2022 · Can collaborative learning be private, robust and scalable?
We investigate the effectiveness of combining differential privacy, mode...

09/21/2020 · Feature Distillation With Guided Adversarial Contrastive Learning
Deep learning models are shown to be vulnerable to adversarial examples....

01/31/2019 · Improving Model Robustness with Transformation-Invariant Attacks
Vulnerability of neural networks under adversarial attacks has raised se...

05/07/2019 · Towards Evaluating and Understanding Robust Optimisation under Transfer
This work evaluates the efficacy of adversarial robustness under transfe...

07/24/2022 · Can we achieve robustness from data alone?
Adversarial training and its variants have come to be the prevailing met...

10/04/2022 · A Study on the Efficiency and Generalization of Light Hybrid Retrievers
Existing hybrid retrievers which integrate sparse and dense retrievers, ...