Enhanced Attacks on Defensively Distilled Deep Neural Networks

11/16/2017
by   Yujia Liu, et al.

Deep neural networks (DNNs) have achieved tremendous success in many machine learning tasks, such as image classification. Unfortunately, researchers have shown that DNNs are easily attacked by adversarial examples: slightly perturbed images that can mislead a DNN into giving incorrect classification results. Such attacks have seriously hampered the deployment of DNN systems in areas with strict security or safety requirements, such as autonomous cars, face recognition, and malware detection. Defensive distillation is a mechanism aimed at training a robust DNN that significantly reduces the effectiveness of adversarial example generation. However, the state-of-the-art attack can succeed on distilled networks with a 100% success rate, but it is a white-box attack, which needs to know the inner information of the DNN, whereas the black-box scenario is more general. In this paper, we first propose the epsilon-neighborhood attack, which can fool defensively distilled networks with a 100% success rate while producing adversarial examples with good visual quality. On the basis of this attack, we further propose the region-based attack against defensively distilled DNNs in the black-box setting. We also perform the bypass attack to indirectly break the distillation defense as a complementary method. The experimental results show that our black-box attacks achieve a considerable success rate on defensively distilled networks.
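The defense under attack here, defensive distillation, trains the network on soft labels produced by a temperature-scaled softmax, which flattens the output distribution and dampens the gradients that gradient-based attacks rely on. A minimal sketch of that temperature scaling (illustrative only, not the authors' code; function name and example logits are made up):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Softmax with distillation temperature T.

    At T=1 this is the standard softmax; as T grows, the output
    distribution becomes flatter (softer), which is the mechanism
    defensive distillation uses to produce soft training labels.
    """
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for a 3-class classifier.
logits = [5.0, 2.0, 0.5]
p_hard = softmax_with_temperature(logits, T=1.0)   # peaked distribution
p_soft = softmax_with_temperature(logits, T=20.0)  # much flatter distribution
```

Here `p_soft` spreads probability mass far more evenly than `p_hard`; training a second ("distilled") network against such soft targets is what reduces the effectiveness of standard white-box adversarial example generation.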


