Evaluating the Robustness of Test Selection Methods for Deep Neural Networks

07/29/2023
by Qiang Hu, et al.

Testing deep learning-based systems is crucial but challenging because labeling the collected raw data demands substantial time and labor. To reduce this labeling effort, multiple test selection methods have been proposed in which only a subset of test data needs to be labeled while still satisfying testing requirements. However, we observe that such methods, despite their reported promising results, are evaluated only under simple scenarios, e.g., testing on the original test data. This raises a question: are they always reliable? In this paper, we explore when and to what extent test selection methods fail. Specifically, we first identify potential pitfalls of 11 selection methods from top-tier venues based on how they are constructed. Second, we conduct a study on five datasets, with two model architectures per dataset, to empirically confirm the existence of these pitfalls. Furthermore, we demonstrate how these pitfalls break the reliability of the methods. Concretely, methods for fault detection suffer from test data that are 1) correctly classified but uncertain, or 2) misclassified but confident. Remarkably, the test relative coverage achieved by such methods drops by up to 86.85%. Moreover, methods for performance estimation are sensitive to the choice of intermediate-layer output; with an inappropriate layer, their effectiveness can be even worse than random selection.
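To make the fault-detection pitfall concrete, below is a minimal sketch of how an uncertainty-based selector in the style of DeepGini (which scores an input by 1 − Σ pᵢ², so higher means less confident) can be misled. The two softmax vectors are hypothetical examples, not data from the paper: one input is correctly classified but uncertain, the other is misclassified but confident. Ranking by uncertainty prioritizes the first input and skips the actual fault.

```python
import numpy as np

def gini_score(probs):
    # DeepGini-style uncertainty score: 1 - sum(p_i^2).
    # Higher score = model is less confident about this input.
    return 1.0 - np.sum(np.square(probs), axis=-1)

# Hypothetical softmax outputs for two 3-class test inputs;
# assume the true label of both inputs is class 0.
uncertain_correct = np.array([0.40, 0.35, 0.25])  # argmax = 0: correct, but uncertain
confident_wrong   = np.array([0.02, 0.95, 0.03])  # argmax = 1: a fault, but confident

scores = gini_score(np.stack([uncertain_correct, confident_wrong]))
ranked = np.argsort(-scores)  # selector labels inputs in descending-score order

# The correctly classified input ranks first, so under a labeling
# budget of one, the misclassified (faulty) input is never selected.
```

This illustrates exactly the two failure modes named above: "correctly classified but uncertain" inputs waste the labeling budget, while "misclassified but confident" inputs evade uncertainty-based scoring entirely.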

