Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks

01/09/2018
by Yongshuai Liu, et al.

Deep neural networks are vulnerable to adversarial examples. Prior defenses attempted to make deep networks more robust by either improving the network architecture or adding adversarial examples to the training set, each with its own limitations. We propose a new direction. Motivated by recent research showing that outliers in the training set have a strong negative influence on the trained model, our approach makes the model more robust by detecting and removing outliers in the training set, without modifying the network architecture or requiring adversarial examples. We propose two outlier-detection methods, based on canonical examples and on training errors, respectively. After removing the outliers, we train the classifier on the remaining examples to obtain a sanitized model. Our evaluation shows that the sanitized model improves classification accuracy and forces attacks to generate adversarial examples with higher distortion. Moreover, the Kullback-Leibler divergence from the output of the original model to that of the sanitized model allows us to reliably distinguish normal from adversarial examples.
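The abstract describes a two-stage pipeline: cull outliers from the training set, retrain on what remains, and then use the divergence between the two models' outputs as an adversarial-example detector. Below is a minimal sketch of the training-error-based variant, assuming a synthetic dataset, a small scikit-learn MLP, and a hypothetical 10% culling fraction; the paper's actual architectures, datasets, and thresholds are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real training set; the paper evaluates
# image classifiers, and the 10% culling fraction below is a
# hypothetical choice for illustration only.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

# 1. Train the original (unsanitized) model.
original = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                         random_state=0).fit(X, y)

# 2. Training-error-based outlier detection: score each example by its
#    per-example cross-entropy loss under the trained model.
probs = original.predict_proba(X)
per_example_loss = -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, 1.0))

# 3. Cull the highest-loss examples (treated as outliers) and retrain
#    on the remainder to obtain the sanitized model.
keep = per_example_loss <= np.quantile(per_example_loss, 0.90)
sanitized = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                          random_state=0).fit(X[keep], y[keep])

# 4. Detection: KL divergence from the original model's output
#    distribution to the sanitized model's; a large value suggests
#    the input may be adversarial.
def kl_divergence(x):
    p = np.clip(original.predict_proba(x.reshape(1, -1))[0], 1e-12, 1.0)
    q = np.clip(sanitized.predict_proba(x.reshape(1, -1))[0], 1e-12, 1.0)
    return float(np.sum(p * np.log(p / q)))

print("KL score on a clean example:", kl_divergence(X[0]))
```

On clean inputs the two models tend to agree, so the KL score stays small; adversarial perturbations crafted against the original model tend to push the two output distributions apart, yielding a larger score that can be thresholded for detection.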

