Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

11/14/2015
by Nicolas Papernot et al.

Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce the effectiveness of sample creation from 95% to less than 0.5% on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800% on one of the DNNs we tested.
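Defensive distillation, as described in the abstract, is a two-stage training procedure: a teacher network is trained with its softmax scaled by a temperature T, the teacher's soft probability outputs are then used as labels to train a second "distilled" network at the same temperature, and at test time the distilled network runs at T = 1, which smooths the decision surface and shrinks the input gradients that adversarial sample crafting relies on. The sketch below illustrates this procedure; it is a minimal illustration, not the paper's exact setup, and the toy MLP, the random stand-in data, and the choice T = 20 are all assumptions for demonstration.

```python
# Minimal sketch of defensive distillation (Papernot et al.).
# Assumptions: toy MLP, random stand-in data, and T = 20 are
# illustrative choices, not the paper's experimental setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 20.0  # distillation temperature used during both training stages

def make_net(in_dim=784, hidden=256, classes=10):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, classes))

# Stand-in training data (replace with a real dataset, e.g. MNIST).
x = torch.randn(512, 784)
y = torch.randint(0, 10, (512,))

# Stage 1: train the teacher with logits scaled by temperature T.
teacher = make_net()
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(teacher(x) / T, y)
    loss.backward()
    opt.step()

# Stage 2: train the distilled network on the teacher's soft labels,
# again at temperature T.
with torch.no_grad():
    soft_labels = F.softmax(teacher(x) / T, dim=1)

student = make_net()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    log_probs = F.log_softmax(student(x) / T, dim=1)
    loss = -(soft_labels * log_probs).sum(dim=1).mean()  # soft cross-entropy
    loss.backward()
    opt.step()

# At test time the distilled network is used at T = 1; the temperature
# scaling during training is what flattens the gradients with respect
# to the inputs that adversarial sample crafting exploits.
preds = student(x).argmax(dim=1)
```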


Related research

02/22/2017 · DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples
Recent studies have shown that deep neural networks (DNN) are vulnerable...

11/24/2015 · The Limitations of Deep Learning in Adversarial Settings
Deep learning takes advantage of large datasets and computationally effi...

03/14/2018 · Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples
Deep Neural Networks (DNNs) have achieved remarkable performance in a my...

03/28/2023 · Denoising Autoencoder-based Defensive Distillation as an Adversarial Robustness Algorithm
Adversarial attacks significantly threaten the robustness of deep neural...

05/14/2018 · Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing
Recently, it has been shown that deep neural networks (DNN) are subject ...

12/14/2018 · Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Deep neural networks (DNN) have been shown to be useful in a wide range ...

07/20/2021 · Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks
Deep neural network (DNN) classifiers are powerful tools that drive a br...
