Requirements for Developing Robust Neural Networks

10/04/2019
by John S. Hyatt, et al.

Validation accuracy is a necessary, but not sufficient, measure of a neural network classifier's quality. High validation accuracy during development does not guarantee that a model is free of serious flaws, such as vulnerability to adversarial attacks or a tendency to misclassify (with high confidence) data it was not trained on. The model may also be incomprehensible to a human or base its decisions on unreasonable criteria. These problems, which are not unique to classifiers, have been the focus of substantial recent research. However, they are rarely prioritized during model development, which almost always optimizes for validation accuracy to the exclusion of everything else. A model produced this way is likely to fail in unexpected ways outside the training environment. We believe that, in addition to validation accuracy, the model development process must give added weight to other performance metrics, such as explainability, resistance to adversarial attacks, and resistance to overconfidence on out-of-distribution data.
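
To make the abstract's point concrete, the sketch below illustrates the kind of evaluation it advocates: measuring a trained classifier not only by validation accuracy but also by its accuracy under a simple adversarial perturbation and by its confidence on out-of-distribution inputs. This is not the authors' code; it is a minimal, hedged example assuming a PyTorch model and two hypothetical data loaders (val_loader for in-distribution validation data, ood_loader for out-of-distribution data), with inputs scaled to [0, 1].

import torch
import torch.nn.functional as F

def validation_accuracy(model, loader, device="cpu"):
    # Standard top-1 accuracy on held-out, in-distribution data.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def fgsm_accuracy(model, loader, eps=0.03, device="cpu"):
    # Accuracy under a single-step FGSM perturbation: a crude probe of
    # adversarial robustness (eps assumes inputs scaled to [0, 1]).
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def mean_max_confidence(model, loader, device="cpu"):
    # Mean maximum softmax probability; values near 1.0 on out-of-distribution
    # data indicate the overconfidence the abstract warns about.
    model.eval()
    confidences = []
    with torch.no_grad():
        for x, _ in loader:
            probs = F.softmax(model(x.to(device)), dim=1)
            confidences.append(probs.max(dim=1).values)
    return torch.cat(confidences).mean().item()

# Example report (names are placeholders):
# print(validation_accuracy(model, val_loader))
# print(fgsm_accuracy(model, val_loader))
# print(mean_max_confidence(model, ood_loader))

A model can score well on the first number while failing badly on the other two, which is exactly the gap the paper argues development practice should close.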

