
A Hierarchical Assessment of Adversarial Severity

by Guillaume Jeanneret, et al.
King Abdullah University of Science and Technology

Adversarial robustness is a growing field that evidences the brittleness of neural networks. Although the literature on adversarial robustness is vast, one dimension is missing from these studies: assessing how severe the mistakes are. We call this notion "Adversarial Severity", since it quantifies the downstream impact of adversarial corruptions by computing the semantic error between the misclassification and the proper label. We propose to study the effects of adversarial noise by measuring Robustness and Severity on a large-scale dataset: iNaturalist-H. Our contributions are threefold: (i) we introduce novel Hierarchical Attacks that harness the rich structured space of labels to create adversarial examples; (ii) these attacks allow us to benchmark the Adversarial Robustness and Severity of classification models; and (iii) we enhance traditional adversarial training with a simple yet effective Hierarchical Curriculum Training that learns the nodes of the hierarchical tree gradually. Extensive experiments show that hierarchical defenses boost the adversarial Robustness of deep models by 1.85% and reduce the Severity of all attacks by 0.17, on average.


