A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks

01/01/2019
by Long Zhang, et al.

Deep neural networks (DNNs) have been widely used in fields such as natural language processing, computer vision, and image recognition. However, several studies have shown that DNNs can be easily fooled by artificial examples with small perturbations, widely known as adversarial examples. Adversarial examples can be used to attack DNNs or to improve their robustness. A common way of generating adversarial examples is to first generate some noise and then add it to the original examples. In practice, different examples have different noise sensitivity: to generate an effective adversarial example from a low-noise-sensitivity example, so much noise may be required that the resulting adversarial example becomes meaningless. In this paper, we propose a noise-sensitivity-analysis-based test prioritization technique that picks out examples according to their noise sensitivity. We validate our approach in an experiment on four image sets and two DNN models, which shows that examples differ in their sensitivity to noise and that our method can effectively pick out examples by their noise sensitivity.
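The abstract's idea of ranking examples by noise sensitivity can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual method: it assumes (hypothetically) that an example's sensitivity is measured as the fraction of random Gaussian perturbations that flip the model's predicted label, and uses a toy linear classifier in place of a DNN.

```python
import numpy as np

def noise_sensitivity(model_predict, x, n_trials=100, sigma=0.1, rng=None):
    """Estimate noise sensitivity as the fraction of random Gaussian
    perturbations of x that change the model's predicted label.
    (Hypothetical metric for illustration only.)"""
    rng = np.random.default_rng(0) if rng is None else rng
    base = model_predict(x)
    flips = sum(
        model_predict(x + rng.normal(0.0, sigma, size=x.shape)) != base
        for _ in range(n_trials)
    )
    return flips / n_trials

def prioritize(model_predict, examples, **kw):
    """Rank examples by descending noise sensitivity (most sensitive first),
    returning (index, score) pairs."""
    scores = [noise_sensitivity(model_predict, x, **kw) for x in examples]
    order = np.argsort(scores)[::-1]
    return [(int(i), scores[i]) for i in order]

# Toy stand-in "model": a linear threshold classifier on 2-D inputs.
w = np.array([1.0, -1.0])
predict = lambda x: int(x @ w > 0)

# An example near the decision boundary should rank above one far from it.
near = np.array([0.05, 0.0])  # close to the boundary -> noise-sensitive
far = np.array([5.0, 0.0])    # far from the boundary -> noise-insensitive
ranking = prioritize(predict, [far, near], n_trials=200, sigma=0.2)
```

Under this toy metric, `ranking` places the near-boundary example first, matching the intuition that highly noise-sensitive inputs need only small perturbations to become adversarial.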


