Explainable Software Defect Prediction: Are We There Yet?

11/21/2021
by Jiho Shin, et al.

Explaining the prediction results of software defect prediction models is a challenging yet practical task that can provide useful information for developers to understand and fix the predicted bugs. To address this issue, Jiarpakdee et al. recently proposed using two state-of-the-art model-agnostic techniques (i.e., LIME and BreakDown) to explain the prediction results of bug prediction models. Their experiments show that these tools generate promising results and that the generated explanations can help developers understand the prediction results. However, the fact that LIME and BreakDown were only examined in a single software defect prediction model setting calls into question their consistency and reliability across software defect prediction models with various settings. In this paper, we set out to investigate the consistency and reliability of model-agnostic explanation generation techniques (i.e., LIME and BreakDown) on software defect prediction models with different settings, e.g., different data sampling techniques, different machine learning classifiers, and different prediction scenarios. Specifically, we use both LIME and BreakDown to generate explanations for the same instance under software defect prediction models with different settings and then check the consistency of the generated explanations for that instance. We reused the same defect data from Jiarpakdee et al. in our experiments. The results show that both LIME and BreakDown generate inconsistent explanations under different software defect prediction settings for the same test instances, which makes them unreliable for explanation generation. Overall, with this study, we call for more research in explainable software defect prediction towards achieving consistent and reliable explanation generation.
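The consistency check described above can be illustrated with a small self-contained sketch. The toy models, feature names, and the simplified LIME-style surrogate below are all illustrative assumptions (the paper uses the actual LIME and BreakDown tools on real defect data): two black-box "defect models" stand in for two prediction settings, a local explanation is generated for the same instance under each, and the top-ranked features are compared.

```python
import math
import random

random.seed(0)

FEATURES = ["loc", "churn", "complexity"]  # hypothetical module metrics

# Two black-box "defect models" that disagree on which metric matters,
# simulating the effect of different classifiers or sampling techniques.
def model_a(x):  # leans on lines of code
    return 1 / (1 + math.exp(-(0.9 * x[0] + 0.1 * x[1] + 0.1 * x[2] - 1)))

def model_b(x):  # leans on code churn
    return 1 / (1 + math.exp(-(0.1 * x[0] + 0.9 * x[1] + 0.1 * x[2] - 1)))

def lime_style_explanation(model, instance, n_samples=500, width=1.0):
    """Rank features by proximity-weighted covariance between perturbed
    feature values and the model's output -- a simplified stand-in for
    LIME's weighted sparse linear surrogate."""
    scores = [0.0] * len(instance)
    total_w = 0.0
    for _ in range(n_samples):
        # Perturb the instance in its neighborhood.
        z = [v + random.gauss(0, 0.5) for v in instance]
        # Exponential proximity kernel: closer samples weigh more.
        dist2 = sum((a - b) ** 2 for a, b in zip(z, instance))
        w = math.exp(-dist2 / width)
        y = model(z)
        for j in range(len(instance)):
            scores[j] += w * (z[j] - instance[j]) * y
        total_w += w
    ranked = sorted(zip(FEATURES, [s / total_w for s in scores]),
                    key=lambda p: -abs(p[1]))
    return ranked

instance = [1.2, 0.8, 1.0]  # one test module's (scaled) metrics
exp_a = lime_style_explanation(model_a, instance)
exp_b = lime_style_explanation(model_b, instance)
print("setting A top feature:", exp_a[0][0])
print("setting B top feature:", exp_b[0][0])
```

For the same instance, the two settings rank different features as most influential, which is the kind of inconsistency the study reports for LIME and BreakDown across real defect prediction settings.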

