Studying the explanations for the automated prediction of bug and non-bug issues using LIME and SHAP

09/15/2022
by Benjamin Ledel, et al.

Context: The identification of bugs among the reported issues in an issue tracker is crucial for issue triage. Machine learning models have shown promising results for automated issue type prediction. However, we have only limited knowledge, beyond our own assumptions, of how such models identify bugs. LIME and SHAP are popular techniques for explaining the predictions of classifiers. Objective: We want to understand whether machine learning models provide explanations for the classification that are reasonable to us as humans and that align with our assumptions about what the models should learn. We also want to know whether prediction quality is correlated with the quality of the explanations. Method: We conduct a study in which we rate LIME and SHAP explanations by how well they explain the outcome of an issue type prediction model. For this, we rate the quality of the explanations themselves, i.e., whether they align with our expectations and whether they help us understand the underlying machine learning model.
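To illustrate the kind of explanation the study rates, the following is a minimal, from-scratch sketch of the LIME idea (not the `lime` library itself): perturb the input text by dropping tokens, query the classifier on each perturbation, and fit a locally weighted linear model whose coefficients serve as per-token importances. The toy "bug" classifier and its vocabulary are invented for illustration and are not the model from the paper.

```python
import numpy as np

# Toy issue type classifier: the probability of class "bug" rises with
# the number of crash/error-style tokens. Purely illustrative.
BUG_WORDS = {"crash", "error", "exception", "fails"}

def predict_bug_proba(tokens):
    hits = sum(t.lower() in BUG_WORDS for t in tokens)
    return min(0.95, 0.2 + 0.3 * hits)

def lime_explain(tokens, predict, n_samples=1000, seed=0):
    """LIME-style local explanation: per-token importance weights."""
    rng = np.random.default_rng(seed)
    d = len(tokens)
    # Binary masks: which tokens are kept in each perturbed sample.
    masks = rng.integers(0, 2, size=(n_samples, d))
    masks[0] = 1  # include the unperturbed instance
    preds = np.array([predict([t for t, m in zip(tokens, row) if m])
                      for row in masks])
    # Proximity kernel: perturbations closer to the original count more.
    distances = 1 - masks.mean(axis=1)          # fraction of tokens removed
    weights = np.exp(-(distances ** 2) / 0.25)
    # Weighted least squares on the interpretable (mask) representation.
    w = np.sqrt(weights)[:, None]
    X = np.hstack([masks, np.ones((n_samples, 1))])  # intercept column
    coef, *_ = np.linalg.lstsq(w * X, w[:, 0] * preds, rcond=None)
    return dict(zip(tokens, coef[:d]))

explanation = lime_explain("App crash on login error".split(),
                           predict_bug_proba)
# Tokens in BUG_WORDS receive large positive weights; neutral tokens
# such as "on" and "login" receive weights near zero.
```

In the paper's setting, raters would judge such token weights against human expectations, e.g., whether error-related vocabulary, rather than incidental words, drives a "bug" prediction.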


Related research:

03/11/2020 — On the Feasibility of Automated Issue Type Prediction
Context: Issue tracking systems are used to track and describe tasks in ...

11/21/2021 — Explainable Software Defect Prediction: Are We There Yet?
Explaining the prediction results of software defect prediction models i...

12/08/2022 — Explaining Software Bugs Leveraging Code Structures in Neural Machine Translation
Software bugs claim approximately 50 billion dollars of the economy. Once a...

04/30/2021 — Explanation-Based Human Debugging of NLP Models: A Survey
To fix a bug in a program, we need to locate where the bug is, understan...

01/31/2020 — Statistical stability indices for LIME: obtaining reliable explanations for Machine Learning models
Nowadays we are witnessing a transformation of the business processes to...

05/28/2021 — Do not explain without context: addressing the blind spot of model explanations
The increasing number of regulations and expectations of predictive mach...

09/23/2021 — Toward a Unified Framework for Debugging Gray-box Models
We are concerned with debugging concept-based gray-box models (GBMs). Th...
