Understanding Robustness in Teacher-Student Setting: A New Perspective

02/25/2021
by Zhuolin Yang, et al.

Adversarial examples have emerged as a ubiquitous property of machine learning models: a bounded adversarial perturbation can mislead a model into making arbitrarily incorrect predictions. Such examples provide a way to assess the robustness of machine learning models, as well as a proxy for understanding the model training process. Extensive studies have tried to explain the existence of adversarial examples and to provide ways to improve model robustness (e.g., adversarial training). While these studies mostly focus on models trained on datasets with predefined labels, we leverage the teacher-student framework and assume a teacher model, or oracle, that provides the labels for given instances. We extend Tian (2019) to the case of low-rank input data and show that student specialization (a trained student neuron becoming highly correlated with some teacher neuron at the same layer) still happens within the input subspace, but that teacher and student nodes can differ wildly outside the data subspace, which we conjecture leads to adversarial examples. Extensive experiments show that student specialization correlates strongly with model robustness across different scenarios, including students trained via standard training, adversarial training, confidence-calibrated adversarial training, and training on a robust-feature dataset. Our study may shed light on future explorations of adversarial examples and on enhancing model robustness via principled data augmentation.
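To make the specialization claim concrete, the following minimal NumPy sketch (not the paper's code) reproduces the setting: a wider two-layer ReLU student is trained to fit a fixed two-layer ReLU teacher on inputs confined to a rank-r subspace, and the alignment between student and teacher first-layer weights is then measured inside versus outside that subspace. The widths, learning rate, and the use of maximum absolute cosine similarity as the specialization metric are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 32, 8            # ambient input dimension, rank of the data subspace
m_t, m_s = 4, 16        # teacher / student hidden widths (illustrative)

# Orthonormal basis U for the low-rank data subspace; P projects onto it.
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
P = U @ U.T

def relu(x):
    return np.maximum(x, 0.0)

# Fixed two-layer ReLU teacher that supplies the labels (the "oracle").
W_t = rng.standard_normal((m_t, d))
a_t = rng.standard_normal(m_t)

# Over-parameterized student of the same depth, trained by plain SGD on
# squared loss, with inputs drawn only from the low-rank subspace.
W_s = 0.1 * rng.standard_normal((m_s, d))
a_s = 0.1 * rng.standard_normal(m_s)

lr, steps, batch = 0.02, 3000, 256
for _ in range(steps):
    X = rng.standard_normal((batch, r)) @ U.T        # inputs lie in span(U)
    h_t, h_s = relu(X @ W_t.T), relu(X @ W_s.T)
    err = (h_s @ a_s - h_t @ a_t) / batch            # d(0.5*MSE)/d(output)
    a_s -= lr * (h_s.T @ err)
    W_s -= lr * ((err[:, None] * a_s) * (h_s > 0)).T @ X

def max_cos(A, B):
    """Best absolute cosine similarity of each row of A to any row of B."""
    An = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
    Bn = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-12)
    return np.abs(An @ Bn.T).max(axis=1)

# Compare alignment of student neurons to teacher neurons inside the data
# subspace versus in its orthogonal complement.
Q = np.eye(d) - P
print("mean alignment inside subspace :", max_cos(W_s @ P, W_t @ P).mean())
print("mean alignment outside subspace:", max_cos(W_s @ Q, W_t @ Q).mean())
```

Because every gradient update is a linear combination of in-subspace inputs, the student's first-layer weights only move within span(U); their components in the orthogonal complement stay at random initialization, so alignment outside the subspace remains near chance level. This is exactly the kind of out-of-subspace teacher-student mismatch the abstract conjectures as a source of adversarial examples.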
