Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples

08/21/2019
by Marcus Soll, et al.

Adversarial examples are artificially modified input samples that cause misclassifications while remaining imperceptible to humans. They pose a challenge for many tasks such as image and text classification, especially since research shows that many adversarial examples transfer between different classifiers. In this work, we evaluate the performance of a popular defensive strategy against adversarial examples called defensive distillation, which can successfully harden neural networks against adversarial examples in the image domain. However, instead of applying defensive distillation to networks for image classification, we examine, for the first time, its performance on text classification tasks, and we also evaluate its effect on the transferability of adversarial text examples. Our results indicate that defensive distillation has only a minimal impact on text-classifying neural networks: it neither increases their robustness against adversarial examples nor prevents the transferability of adversarial examples between neural networks.
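For context, defensive distillation trains an initial network with a temperature-scaled softmax and then trains a second, distilled network on the first network's softened output distribution. The sketch below illustrates that general procedure only; it is not the authors' implementation, and names such as `Net`, `train_loader`, and `TEMPERATURE` are assumed placeholders.

```python
# Minimal sketch of defensive distillation, assuming a PyTorch classifier that
# returns class logits and a `train_loader` yielding (inputs, labels) batches.
import torch
import torch.nn as nn
import torch.nn.functional as F

TEMPERATURE = 20.0  # softmax temperature T, shared by teacher and distilled net


def train_teacher(teacher: nn.Module, train_loader, epochs: int = 5):
    """Train the initial network on hard labels, with logits divided by T."""
    opt = torch.optim.Adam(teacher.parameters())
    for _ in range(epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss = F.cross_entropy(teacher(x) / TEMPERATURE, y)
            loss.backward()
            opt.step()


def train_distilled(teacher: nn.Module, student: nn.Module, train_loader, epochs: int = 5):
    """Train the distilled network on the teacher's soft labels at the same T."""
    opt = torch.optim.Adam(student.parameters())
    teacher.eval()
    for _ in range(epochs):
        for x, _ in train_loader:
            with torch.no_grad():
                soft_labels = F.softmax(teacher(x) / TEMPERATURE, dim=-1)
            opt.zero_grad()
            log_probs = F.log_softmax(student(x) / TEMPERATURE, dim=-1)
            # Cross-entropy against the teacher's soft targets
            loss = -(soft_labels * log_probs).sum(dim=-1).mean()
            loss.backward()
            opt.step()

# At test time (and when an attacker queries the model), the distilled network is
# used at T = 1, which is what produces the gradient flattening reported to harden
# image classifiers; the paper evaluates whether this carries over to text models.
```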


Related research

Defensive Distillation is Not Robust to Adversarial Examples (07/14/2016)
We show that defensive distillation is not secure: it is no more resista...

Discrete Attacks and Submodular Optimization with Applications to Text Classification (12/01/2018)
Adversarial examples are carefully constructed modifications to an input...

Traits & Transferability of Adversarial Examples against Instance Segmentation & Object Detection (08/04/2018)
Despite the recent advancements in deploying neural networks for image c...

Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning (03/20/2020)
Deep learning is currently the most widespread and successful technology...

Assessing Threat of Adversarial Examples on Deep Neural Networks (10/13/2016)
Deep neural networks are facing a potential security threat from adversa...

Extending Defensive Distillation (05/15/2017)
Machine learning is vulnerable to adversarial examples: inputs carefully...

Improving Back-Propagation by Adding an Adversarial Gradient (10/14/2015)
The back-propagation algorithm is widely used for learning in artificial...
