Robust Classification using Robust Feature Augmentation

05/26/2019
by Kevin Eykholt, et al.

Existing deep neural networks, say for image classification, have been shown to be vulnerable to adversarial images that cause a DNN to misclassify without any perceptible change to the image. In this work, we propose shock-absorbing robust features such as binarization, e.g., rounding, and group extraction, e.g., color or shape, to augment the classification pipeline, resulting in more robust classifiers. Experimentally, we show that augmenting ML models with these techniques leads to improved overall robustness on adversarial inputs as well as significant improvements in training time. On the MNIST dataset, we achieved a 14x speedup in training time to obtain 90% adversarial accuracy compared to the state-of-the-art adversarial training method of Madry et al., and also retained higher adversarial accuracy over a broader range of attacks. We also find robustness improvements on traffic sign classification using robust feature augmentation. Finally, we give theoretical insights for why one can expect robust feature augmentation to reduce the adversarial input space.
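To illustrate the binarization idea mentioned in the abstract, here is a minimal sketch of rounding-based input preprocessing. The threshold value and function names are illustrative assumptions, not taken from the paper; the point is only that perturbations smaller than the margin to the threshold are absorbed before the classifier ever sees them.

```python
import numpy as np

def binarize(images, threshold=0.5):
    # Round each pixel intensity to {0, 1}; threshold=0.5 is an
    # assumed choice for inputs normalized to [0, 1].
    return (np.asarray(images, dtype=np.float32) >= threshold).astype(np.float32)

# A small adversarial perturbation that does not push any pixel across
# the threshold produces an identical binarized input:
clean = np.array([[0.0, 0.9, 0.2, 0.8]])
perturbed = clean + np.array([[0.05, -0.05, 0.05, -0.05]])

assert np.array_equal(binarize(clean), binarize(perturbed))
```

In a pipeline, such a preprocessing step would sit in front of the classifier, so the model is trained and evaluated on binarized inputs rather than raw pixels.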


