Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy

10/26/2020
by Philipp Benz, et al.

Convolutional neural networks (CNNs) have recently made significant advances; however, they are widely known to be vulnerable to adversarial attacks. Adversarial training is the most widely used technique for improving robustness against strong white-box attacks. Prior works have evaluated and improved average model robustness without per-class evaluation, and the average alone can provide a false sense of robustness: an attacker can focus on the most vulnerable class, which is especially dangerous when that class is a critical one, such as "human" in autonomous driving. In this preregistration submission, we propose an empirical study of the class-wise accuracy and robustness of adversarially trained models. Although the CIFAR10 training set contains an equal number of samples per class, preliminary results with ResNet18 show an inter-class discrepancy in both accuracy and robustness on standard models; for instance, "cat" is more vulnerable than the other classes. Moreover, adversarial training increases this inter-class discrepancy. Our work aims to investigate the following questions: (a) Is the phenomenon of inter-class discrepancy universal across other classification benchmark datasets, seminal model architectures, and optimization hyper-parameters? (b) If so, what are possible explanations for the inter-class discrepancy? (c) Can techniques proposed for long-tailed classification be readily extended to adversarial training to address the inter-class discrepancy?
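To make the per-class evaluation concrete, the sketch below computes class-wise clean and robust accuracy for a CIFAR-10 model in PyTorch. It is a minimal illustration, not the authors' exact protocol: the PGD attack is a standard re-implementation with commonly used hyper-parameters (eps=8/255, 10 steps), and the checkpoint path "resnet18_adv.pt" is hypothetical.

```python
# Minimal sketch: per-class clean/robust accuracy on CIFAR-10 (assumed setup).
import torch
import torch.nn.functional as F
import torchvision
from torchvision import transforms

NUM_CLASSES = 10

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Standard L-infinity PGD: random start, signed-gradient steps, projection.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def classwise_accuracy(model, loader, device, attack=None):
    # Accumulate correct/total counts per class instead of one global average.
    correct = torch.zeros(NUM_CLASSES)
    total = torch.zeros(NUM_CLASSES)
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        if attack is not None:  # robust accuracy: evaluate on adversarial inputs
            x = attack(model, x, y)
        with torch.no_grad():
            pred = model(x).argmax(dim=1)
        for c in range(NUM_CLASSES):
            mask = y == c
            total[c] += mask.sum().item()
            correct[c] += (pred[mask] == c).sum().item()
    return correct / total.clamp(min=1)

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    test_set = torchvision.datasets.CIFAR10(
        root="./data", train=False, download=True, transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(test_set, batch_size=256)
    model = torchvision.models.resnet18(num_classes=NUM_CLASSES).to(device)
    # model.load_state_dict(torch.load("resnet18_adv.pt"))  # hypothetical checkpoint
    clean = classwise_accuracy(model, loader, device)
    robust = classwise_accuracy(model, loader, device, attack=pgd_attack)
    for c, name in enumerate(test_set.classes):
        print(f"{name:>10s}: clean {clean[c]:.3f}  robust {robust[c]:.3f}")
```

Comparing the two resulting vectors class by class, rather than their means, is what exposes the kind of discrepancy the abstract describes (e.g., a markedly lower robust accuracy for "cat").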
