Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
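The detection idea described above can be sketched in a few lines: squeeze the input (here, by bit-depth reduction and median spatial smoothing, the two methods the abstract names), compare the model's prediction on the original and squeezed versions, and flag the input when the predictions diverge beyond a threshold. This is a minimal illustrative sketch, not the authors' implementation; the function names, the L1 distance score, and the threshold value are assumptions for illustration.

```python
import numpy as np

def squeeze_bit_depth(x, bits):
    """Reduce color bit depth: map pixels in [0, 1] onto 2**bits discrete levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, k=3):
    """Spatial smoothing via a k x k median filter (x is an HxW grayscale array)."""
    pad = k // 2
    padded = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def is_adversarial(model_probs, x, threshold=1.0):
    """Joint detector: flag x if the prediction on any squeezed version
    differs from the original prediction by more than `threshold` in L1.
    `model_probs` is a callable mapping an image to a probability vector;
    the threshold value is a placeholder, not the paper's calibrated one."""
    p_orig = model_probs(x)
    scores = [np.abs(p_orig - model_probs(s)).sum()
              for s in (squeeze_bit_depth(x, bits=1), median_smooth(x))]
    return max(scores) > threshold
```

On a natural input, squeezing should barely change the prediction, so the score stays small; an adversarial perturbation that exploits fine-grained pixel values tends to be destroyed by the squeezers, producing a large divergence.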