GGT: Graph-Guided Testing for Adversarial Sample Detection of Deep Neural Network

07/09/2021
by Zuohui Chen, et al.

Deep Neural Networks (DNN) are known to be vulnerable to adversarial samples, and detecting such samples is crucial for the wide application of DNN models. Recently, a number of deep testing methods from software engineering were proposed to find vulnerabilities in DNN systems, and one of them, Model Mutation Testing (MMT), was used to successfully detect various adversarial samples generated by different kinds of adversarial attacks. However, the mutated models in MMT are always huge in number (e.g., over 100 models) and lack diversity (e.g., they can easily be circumvented by high-confidence adversarial samples), which makes the approach less efficient in real applications and less effective at detecting high-confidence adversarial samples. In this study, we propose Graph-Guided Testing (GGT) for adversarial sample detection to overcome these challenges. GGT generates pruned models under the guidance of graph characteristics, each of which has only about 5% of the parameters of a mutated model in MMT, and the graph-guided models have higher diversity. Experiments on CIFAR10 and SVHN validate that GGT performs much better than MMT with respect to both effectiveness and efficiency.
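The detection idea shared by MMT and GGT is that an adversarial input sits close to the decision boundary, so its predicted label flips more often when the model is slightly perturbed (mutated in MMT, graph-guided pruned in GGT). The sketch below illustrates such a label-change-rate check; the function names, the dummy model pool, and the 0.1 threshold are illustrative assumptions, not the paper's actual code, and constructing the pruned-model pool itself is out of scope here.

# Minimal sketch of label-change-rate (LCR) detection over a pool of
# perturbed models (mutated in MMT, graph-guided pruned in GGT).
# All names and the threshold below are illustrative assumptions.
import numpy as np

def label_change_rate(x, original_model, pruned_models):
    """Fraction of pool models whose predicted label for x differs from the original model's."""
    base_label = np.argmax(original_model(x))
    changed = sum(1 for m in pruned_models if np.argmax(m(x)) != base_label)
    return changed / len(pruned_models)

def is_adversarial(x, original_model, pruned_models, threshold=0.1):
    """Flag x as adversarial if its LCR exceeds a threshold calibrated on clean inputs."""
    return label_change_rate(x, original_model, pruned_models) > threshold

# Toy usage: stand-in "models" are plain functions returning class scores.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original_model = lambda x: np.array([0.9, 0.1])
    pruned_models = [
        (lambda x, r=r: np.array([0.5 + r, 0.5 - r]))
        for r in rng.uniform(-0.4, 0.4, size=20)
    ]
    print(is_adversarial(np.zeros(4), original_model, pruned_models))

In practice the pool is built once offline, the threshold is chosen from the LCR distribution of clean inputs, and each query only needs forward passes through the (much smaller) pruned models, which is where GGT's efficiency gain over MMT comes from.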


Related research

05/14/2018 - Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing
Recently, it has been shown that deep neural networks (DNN) are subject ...

05/25/2023 - Rethink Diversity in Deep Learning Testing
Deep neural networks (DNNs) have demonstrated extraordinary capabilities...

05/05/2017 - Detecting Adversarial Samples Using Density Ratio Estimates
Machine learning models, especially based on deep architectures are used...

12/14/2018 - Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Deep neural networks (DNN) have been shown to be useful in a wide range ...

10/15/2019 - Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling
We propose a generic confidence-based approximation that can be plugged ...

11/01/2022 - ActGraph: Prioritization of Test Cases Based on Deep Neural Network Activation Graph
Widespread applications of deep neural networks (DNNs) benefit from DNN ...

08/23/2020 - Ptolemy: Architecture Support for Robust Deep Learning
Deep learning is vulnerable to adversarial attacks, where carefully-craf...
