CAGFuzz: Coverage-Guided Adversarial Generative Fuzzing Testing of Deep Learning Systems

11/14/2019
by Pengcheng Zhang, et al.

Deep Learning (DL) systems based on Deep Neural Networks (DNNs) are increasingly used in many aspects of our lives, including unmanned vehicles, speech processing, and robotics. However, due to limited datasets and the dependence on manually labeled data, erroneous behaviors of DNNs often go undetected, which may lead to serious problems. Several approaches have been proposed to enhance the input examples for testing DL systems, but they have the following limitations. First, they design and generate adversarial examples from the perspective of the model, which may cause low generalization ability when they are applied to other models. Second, they only use surface feature constraints to judge the difference between a generated adversarial example and the original example; deep feature constraints, which contain high-level semantic information such as image object category and scene semantics, are completely neglected. To address these two problems, in this paper we propose CAGFuzz, a Coverage-guided Adversarial Generative Fuzzing testing approach, which generates adversarial examples for a targeted DNN to discover its potential defects. First, we train an adversarial example generator (AEG) from the perspective of a general data set. Second, we extract the deep features of the original and adversarial examples and constrain the adversarial examples by cosine similarity to ensure that their semantic information remains unchanged. Finally, we retrain the target DNN with effective adversarial examples to improve its neuron testing coverage rate. Based on several popular data sets, we design a set of dedicated experiments to evaluate CAGFuzz. The experimental results show that CAGFuzz can improve the neuron coverage rate, detect hidden errors, and also improve the accuracy of the target DNN.
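The two checks described in the abstract, keeping only adversarial examples whose deep features stay close to the original example under cosine similarity, and tracking neuron coverage of the target DNN, can be illustrated with a short sketch. The snippet below is not the authors' implementation: the feature extractor, the 0.9 similarity threshold, and the 0.5 activation threshold are illustrative assumptions, and any deep-feature extractor (for example, a pretrained CNN) could be supplied as the hypothetical extract_features callable.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two deep-feature vectors.
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_adversarial_examples(originals, candidates, extract_features, threshold=0.9):
    # Keep only candidate adversarial examples whose deep features remain
    # close to the original example's features, so high-level semantics are
    # assumed to be preserved. `extract_features` is an assumed callable.
    kept = []
    for orig, adv in zip(originals, candidates):
        sim = cosine_similarity(extract_features(orig), extract_features(adv))
        if sim >= threshold:
            kept.append(adv)
    return kept

def neuron_coverage(activations, activation_threshold=0.5):
    # Fraction of neurons whose min-max-scaled activation exceeds the
    # threshold on at least one test input. `activations` is a list of
    # per-layer arrays with shape (num_inputs, num_neurons).
    covered, total = 0, 0
    for layer in activations:
        layer = np.asarray(layer, dtype=float)
        lo, hi = layer.min(axis=0), layer.max(axis=0)
        scaled = (layer - lo) / (hi - lo + 1e-12)
        covered += int(np.sum(scaled.max(axis=0) > activation_threshold))
        total += layer.shape[1]
    return covered / max(total, 1)

In this sketch, filtered examples would be fed back to the target DNN, its layer activations collected, and neuron_coverage used as the feedback signal guiding further fuzzing.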

Related research

07/19/2021 - Feature-Filter: Detecting Adversarial Examples through Filtering off Recessive Features
Deep neural networks (DNNs) are under threat from adversarial example at...

08/28/2018 - DLFuzz: Differential Fuzzing Testing of Deep Learning Systems
Deep learning (DL) systems are increasingly applied to safety-critical d...

12/31/2019 - Automated Testing for Deep Learning Systems with Differential Behavior Criteria
In this work, we conducted a study on building an automated testing syst...

04/21/2022 - Is Neuron Coverage Needed to Make Person Detection More Robust?
The growing use of deep neural networks (DNNs) in safety- and security-c...

11/05/2019 - DLA: Dense-Layer-Analysis for Adversarial Example Detection
In recent years Deep Neural Networks (DNNs) have achieved remarkable res...

04/30/2018 - Concolic Testing for Deep Neural Networks
Concolic testing alternates between CONCrete program execution and symbOlic...

06/17/2021 - CoCoFuzzing: Testing Neural Code Models with Coverage-Guided Fuzzing
Deep learning-based code processing models have shown good performance f...
