On the Need of Removing Last Releases of Data When Using or Validating Defect Prediction Models

03/31/2020
by Aalok Ahluwalia, et al.

To develop and train defect prediction models, researchers rely on datasets in which a defect is attributed to an artifact, e.g., a class of a given release. However, the creation of such datasets is far from perfect. A defect can be discovered several releases after its introduction: such defects are called "dormant defects". This means that a class observed today, in its current version, may be considered defect-free when in fact it is not. We call "snoring" the noise consisting of such classes, i.e., classes affected only by dormant defects. We conjecture that the presence of snoring negatively impacts classifiers' accuracy and their evaluation. Moreover, recent releases likely contain more snoring classes than older releases; thus, removing the most recent releases from a dataset could reduce the snoring effect and improve the accuracy of classifiers. In this paper we investigate the impact of snoring noise on classifiers' accuracy and their evaluation, and the effectiveness of a possible countermeasure: removing the last releases of data. We analyze the accuracy of 15 machine learning defect prediction classifiers on data from more than 4,000 bugs and 600 releases of 19 open source projects from the Apache ecosystem. Our results show that, on average across projects: (i) the presence of snoring decreases the recall of defect prediction classifiers; (ii) evaluations affected by snoring are likely unable to identify the best classifiers; and (iii) removing data from recent releases significantly improves the accuracy of the classifiers. In summary, this paper provides insights on how to create a software defect dataset while mitigating the effect of snoring.
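As an illustration of the countermeasure the paper evaluates, here is a minimal sketch of the "remove the last releases" preprocessing step, assuming a per-class dataset held in a pandas DataFrame. The column names (`release_id`, `release_date`) and the cutoff `k` are hypothetical placeholders, not the authors' actual pipeline:

```python
import pandas as pd

def drop_recent_releases(df: pd.DataFrame,
                         release_col: str = "release_id",
                         date_col: str = "release_date",
                         k: int = 1) -> pd.DataFrame:
    """Drop the k most recent releases from a per-class defect dataset.

    Classes in recent releases may be labeled defect-free only because
    their defects are still dormant ("snoring"); removing those releases
    reduces this label noise at the cost of some training data.
    """
    # Identify the k most recent releases by release date.
    recent = (df[[release_col, date_col]]
              .drop_duplicates()
              .sort_values(date_col, ascending=False)
              .head(k)[release_col])
    # Keep only classes from older releases, whose labels have had
    # more time to mature.
    return df[~df[release_col].isin(recent)]

# Hypothetical usage: drop the single most recent release (r3).
data = pd.DataFrame({
    "release_id": ["r1", "r1", "r2", "r3"],
    "release_date": pd.to_datetime(["2019-01-01", "2019-01-01",
                                    "2019-06-01", "2020-01-01"]),
    "class_name": ["A", "B", "A", "A"],
    "is_defective": [1, 0, 0, 0],
})
train = drop_recent_releases(data, k=1)
```

How many releases to drop is an empirical trade-off: each removed release reduces snoring noise but also shrinks the training set.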
