You Can't See the Forest for Its Trees: Assessing Deep Neural Network Testing via NeuraL Coverage

12/03/2021
by Yuanyuan Yuan, et al.

This paper summarizes eight design requirements for DNN testing criteria, taking into account distribution properties and practical concerns. We then propose a new criterion, NLC, that satisfies all of these design requirements. NLC treats a single DNN layer as the basic computational unit (rather than a single neuron) and captures four critical features of neuron output distributions. NLC is therefore named NeuraL Coverage, reflecting how neural networks comprehend inputs via approximated distributions rather than individual neurons. We demonstrate that NLC correlates significantly with the diversity of a test suite across a number of tasks (classification and generation) and data formats (image and text), and that it shows promise in discovering DNN prediction errors. Test input mutation guided by NLC results in exposed erroneous behaviors of greater quality and diversity.
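
The layer-as-unit idea can be illustrated with a small, hedged sketch. The code below is a minimal illustration only, not the authors' implementation of NLC, and the class and method names (LayerCoverage, update, score) are hypothetical: it maintains a running mean and covariance of one layer's neuron outputs over a test suite, treating the whole layer as the unit and summarizing its output distribution instead of tracking individual neuron activations.

```python
import torch

class LayerCoverage:
    """Running mean/covariance of one layer's neuron outputs over a test suite.
    A hypothetical sketch of layer-level coverage; the paper's NLC definition
    may use different summary statistics."""

    def __init__(self, num_neurons: int):
        self.n = 0
        self.mean = torch.zeros(num_neurons)
        # Unnormalized covariance accumulator over all seen test inputs.
        self.cov = torch.zeros(num_neurons, num_neurons)

    def update(self, layer_outputs: torch.Tensor) -> None:
        # layer_outputs: (batch_size, num_neurons) activations of one layer.
        for x in layer_outputs:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            # Welford-style rank-1 update of the covariance accumulator.
            self.cov += torch.outer(delta, x - self.mean)

    def score(self) -> float:
        # One possible scalar summary: total magnitude of the layer's
        # covariance, which grows as the test suite spreads the layer's
        # output distribution.
        if self.n < 2:
            return 0.0
        return float((self.cov / (self.n - 1)).abs().sum())
```

In practice such a tracker could be attached to a chosen layer (e.g., via a PyTorch forward hook), updated on each batch of test inputs, and its score compared before and after adding a candidate input; an increase suggests the input widens the approximated output distribution of that layer.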

Related research

12/20/2021 · Black-Box Testing of Deep Neural Networks through Test Case Diversity
Deep Neural Networks (DNNs) have been extensively used in many areas inc...

08/05/2022 · An Overview of Structural Coverage Metrics for Testing Neural Networks
Deep neural network (DNN) models, including those used in safety-critica...

03/10/2021 · A Review and Refinement of Surprise Adequacy
Surprise Adequacy (SA) is one of the emerging and most promising adequac...

11/24/2019 · DeepSmartFuzzer: Reward Guided Test Generation For Deep Learning
Testing Deep Neural Network (DNN) models has become more important than ...

10/10/2020 · Deep Neural Network Test Coverage: How Far Are We?
DNN testing is one of the most effective methods to guarantee the qualit...

06/05/2023 · Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization
The out-of-distribution (OOD) problem generally arises when neural netwo...

12/03/2021 · Enhancing Deep Neural Networks Testing by Traversing Data Manifold
We develop DEEPTRAVERSAL, a feedback-driven framework to test DNNs. DEEP...
