MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps

11/09/2021
by   Muhammad Awais, et al.

Deep neural networks are susceptible to adversarially crafted, small, and imperceptible changes in natural inputs. The most effective defense against such examples is adversarial training, which constructs adversarial examples during training by iteratively maximizing the loss; the model is then trained to minimize the loss on these constructed examples. This min-max optimization requires more data, larger-capacity models, and additional computing resources, and it degrades the standard generalization performance of the model. Can we achieve robustness more efficiently? In this work, we explore this question from the perspective of knowledge transfer. First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation. Second, we propose a novel robustness transfer method called Mixup-Based Activated Channel Maps (MixACM) Transfer. MixACM transfers robustness from a robust teacher to a student by matching activated channel maps generated without expensive adversarial perturbations. Finally, extensive experiments on multiple datasets and different learning scenarios show that our method transfers robustness while also improving generalization on natural images.
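To make the idea concrete, below is a minimal PyTorch-style sketch of the training objective the abstract describes: mixup applied to natural images, and a distillation term that matches the student's activated channel maps to those of a frozen robust teacher. The function names (mixup, activated_channel_map, mixacm_loss), the mean-over-spatial-dimensions definition of the channel maps, the MSE matching term, and the weight beta are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def mixup(x, y, alpha=1.0):
    # Mixup: convex combination of two natural inputs and their labels.
    # No adversarial perturbation is computed at any point.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0), device=x.device)
    return lam * x + (1.0 - lam) * x[idx], y, y[idx], lam

def activated_channel_map(feat):
    # Collapse a feature map of shape (B, C, H, W) into a per-channel
    # activation-strength vector, L2-normalized across channels.
    acm = feat.abs().mean(dim=(2, 3))
    return F.normalize(acm, dim=1)

def mixacm_loss(logits, y_a, y_b, lam, student_feats, teacher_feats, beta=1.0):
    # Classification loss on the mixed labels, plus a matching term that
    # pulls the student's activated channel maps toward the frozen robust
    # teacher's maps at corresponding layers (hypothetical weighting beta).
    ce = lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)
    match = sum(F.mse_loss(activated_channel_map(s), activated_channel_map(t).detach())
                for s, t in zip(student_feats, teacher_feats))
    return ce + beta * match
```

In this sketch the student is trained only on mixed natural images; robustness is transferred through the channel-map matching term rather than through adversarial example generation, which is what makes the procedure cheaper than standard adversarial training.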


Related research

08/18/2021 · Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better
Adversarial training is one effective approach for training robust deep ...

12/11/2019 · What it Thinks is Important is Important: Robustness Transfers through Input Gradients
Adversarial perturbations are imperceptible changes to input pixels that...

02/21/2022 · Transferring Adversarial Robustness Through Robust Representation Matching
With the widespread use of machine learning, concerns over its security ...

02/25/2021 · Understanding Robustness in Teacher-Student Setting: A New Perspective
Adversarial examples have appeared as a ubiquitous property of machine l...

12/14/2020 · Achieving Adversarial Robustness Requires An Active Teacher
A new understanding of adversarial examples and adversarial robustness i...

10/25/2022 · Accelerating Certified Robustness Training via Knowledge Transfer
Training deep neural network classifiers that are certifiably robust aga...

03/28/2023 · Improving the Transferability of Adversarial Samples by Path-Augmented Method
Deep neural networks have achieved unprecedented success on diverse visi...
