Defensive Distillation is Not Robust to Adversarial Examples

07/14/2016 · by Nicholas Carlini, et al.

We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.
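
For readers unfamiliar with the defense under attack here, defensive distillation (Papernot et al.) trains a network on "soft" labels produced by a teacher whose softmax uses a high temperature T, which smooths the output distribution. The following is a minimal illustrative sketch of that temperature-scaled softmax only; the NumPy implementation, the function name, and the choice T=20 are assumptions for illustration and are not taken from this paper.

    # Illustrative sketch (not from this paper): the temperature-scaled softmax
    # that defensive distillation uses to produce soft training targets.
    import numpy as np

    def softmax_with_temperature(logits, T=20.0):
        # Large T flattens the distribution; this smoothing is what the
        # defense relies on to mask gradient information.
        z = logits / T
        z = z - z.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    # The same logits at T=1 versus T=20:
    logits = np.array([8.0, 2.0, 0.5])
    print(softmax_with_temperature(logits, T=1.0))   # sharply peaked
    print(softmax_with_temperature(logits, T=20.0))  # much softer targets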

Related research

08/21/2019 · Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples
Adversarial examples are artificially modified input samples which lead ...

08/16/2016 · Towards Evaluating the Robustness of Neural Networks
Neural networks provide state-of-the-art results for most machine learni...

05/15/2017 · Extending Defensive Distillation
Machine learning is vulnerable to adversarial examples: inputs carefully...

05/25/2019 · Adversarial Distillation for Ordered Top-k Attacks
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, espec...

03/14/2018 · Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples
Deep Neural Networks (DNNs) have achieved remarkable performance in a my...

06/14/2019 · Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking
The rise of machine learning as a service and model sharing platforms ha...

02/02/2023 · Dataset Distillation Fixes Dataset Reconstruction Attacks
Modern deep learning requires large volumes of data, which could contain...
