Evaluating software defect prediction performance: an updated benchmarking study

01/07/2019
by Libo Li, et al.

Accurately predicting which software units are faulty helps practitioners prioritize their efforts to maintain software quality. Prior studies use machine-learning models to detect faulty software code. We revisit past studies, point out potential improvements, and propose a revised benchmarking configuration. The configuration considers several new dimensions, such as class distribution sampling, evaluation metrics, and testing procedures, and the new study also includes new datasets and models. Our findings suggest that predictive accuracy is generally good, but that predictive power is heavily influenced by the choice of evaluation metric and testing procedure (frequentist or Bayesian). Classifier performance also depends on the software project. While it is difficult to single out a best classifier, researchers should consider these dimensions to overcome potential bias.
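To make the benchmarking dimensions concrete, the sketch below illustrates how class-distribution sampling, multiple evaluation metrics, and a repeated cross-validated testing procedure might fit together. It is a minimal illustration, not the paper's actual configuration: the synthetic dataset, the two classifiers, and the simple random oversampling step are all assumptions, and the frequentist/Bayesian comparison of results is omitted.

```python
# Minimal sketch of a defect-prediction benchmark covering three dimensions:
# class-distribution sampling, multiple metrics, and repeated CV testing.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, matthews_corrcoef
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.utils import resample

# Synthetic stand-in for a defect dataset: ~10% of modules are faulty.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)

classifiers = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
scores = {name: {"auc": [], "mcc": []} for name in classifiers}

for train_idx, test_idx in cv.split(X, y):
    X_tr, y_tr = X[train_idx], y[train_idx]
    # Sampling dimension: oversample the minority (faulty) class,
    # applied to the training fold only to avoid leaking test data.
    n_major = int((y_tr == 0).sum())
    X_min = resample(X_tr[y_tr == 1], n_samples=n_major, random_state=0)
    X_bal = np.vstack([X_tr[y_tr == 0], X_min])
    y_bal = np.concatenate([np.zeros(n_major), np.ones(n_major)])

    for name, clf in classifiers.items():
        clf.fit(X_bal, y_bal)
        prob = clf.predict_proba(X[test_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)
        # Metric dimension: report both a ranking metric (AUC) and a
        # threshold metric robust to class imbalance (MCC).
        scores[name]["auc"].append(roc_auc_score(y[test_idx], prob))
        scores[name]["mcc"].append(matthews_corrcoef(y[test_idx], pred))

for name, m in scores.items():
    print(f"{name}: AUC={np.mean(m['auc']):.3f}  MCC={np.mean(m['mcc']):.3f}")
```

In a full benchmark, the per-fold scores collected here would feed a statistical comparison across classifiers and projects, which is where the frequentist-versus-Bayesian choice of testing procedure comes in.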
