Generating and Visualizing Trace Link Explanations

04/25/2022
by Yalin Liu, et al.

Recent breakthroughs in deep-learning (DL) approaches have resulted in the dynamic generation of trace links that are far more accurate than was previously possible. However, DL-generated links lack clear explanations, and non-experts in the domain can therefore find it difficult to understand the underlying semantics of a link, making it hard for them to evaluate the link's correctness or its suitability for a specific software engineering task. In this paper, we present a novel NLP pipeline for generating and visualizing trace link explanations. Our approach identifies domain-specific concepts, retrieves a corpus of concept-related sentences, mines concept definitions and usage examples, and identifies relations between cross-artifact concepts in order to explain the links. It then applies a post-processing step to prioritize the most likely acronyms and definitions and to eliminate irrelevant ones. We evaluate our approach using project artifacts from three different domains: interstellar telescopes, positive train control, and electronic health-care systems. We report the coverage, correctness, and potential utility of the generated definitions. We also design an explanation interface that leverages concept definitions and relations to visualize and explain trace link rationales, and we report results from a user study conducted to evaluate the effectiveness of this interface. The results show that the explanations presented in the interface helped non-experts understand the underlying semantics of a trace link and improved their ability to vet the link's correctness.
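
To make the pipeline's data flow concrete, below is a minimal, hypothetical Python sketch of its first four steps: concept identification, sentence retrieval, definition mining, and post-processing ranking. All function names and regex patterns here are illustrative assumptions, not the authors' implementation; the sketch covers only acronym-style concepts and "long form (ACRONYM)" definition patterns, and omits usage-example mining and cross-artifact relation identification.

```python
# Hypothetical sketch of the abstract's pipeline stages; function names
# and patterns are assumptions, not the authors' code.
import re
from collections import Counter

ACRONYM = re.compile(r"\b[A-Z]{2,6}\b")
# Matches "Long Form Words (ACRO)", e.g. "Positive Train Control (PTC)".
PAREN_DEF = re.compile(r"((?:[A-Z][a-z]+\s+){1,6})\(([A-Z]{2,6})\)")

def extract_concepts(artifact_text: str) -> Counter:
    """Step 1: identify candidate domain concepts (here, acronyms only)."""
    return Counter(ACRONYM.findall(artifact_text))

def retrieve_sentences(corpus: list[str], concept: str) -> list[str]:
    """Step 2: retrieve concept-related sentences from a project corpus."""
    return [s for s in corpus if concept in s]

def mine_definitions(sentences: list[str], concept: str) -> list[str]:
    """Step 3: mine candidate definitions via 'long form (ACRO)' patterns."""
    candidates = []
    for sentence in sentences:
        for long_form, acro in PAREN_DEF.findall(sentence):
            if acro == concept:
                candidates.append(long_form.strip())
    return candidates

def rank_definitions(candidates: list[str]) -> list[str]:
    """Step 4: post-process by frequency so the most likely definition
    comes first; rare singleton candidates are the likeliest noise."""
    return [d for d, _ in Counter(candidates).most_common()]

if __name__ == "__main__":
    corpus = [
        "Positive Train Control (PTC) prevents train-to-train collisions.",
        "The PTC onboard unit enforces speed restrictions automatically.",
    ]
    concept = next(iter(extract_concepts(" ".join(corpus))))  # "PTC"
    sentences = retrieve_sentences(corpus, concept)
    print(rank_definitions(mine_definitions(sentences, concept)))
    # -> ['Positive Train Control']
```

A real pipeline would replace these regular expressions with a proper NLP stack (sentence splitting, noun-phrase chunking, embedding-based retrieval); the sketch only fixes the flow of data between the four stages named in the abstract.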


Related Research

A Formal Approach to Explainability (01/15/2020)
We regard explanations as a blending of the input sample and the model's...

Domain Knowledge Discovery Guided by Software Trace Links (08/15/2018)
Software-intensive projects are specified and modeled using domain termi...

ConceptExplainer: Understanding the Mental Model of Deep Learning Algorithms via Interactive Concept-based Explanations (04/04/2022)
Traditional deep learning interpretability methods which are suitable fo...

Traceability Transformed: Generating more Accurate Links with Pre-Trained BERT Models (02/08/2021)
Software traceability establishes and leverages associations between div...

Generating Contrastive Explanations for Inductive Logic Programming Based on a Near Miss Approach (06/15/2021)
In recent research, human-understandable explanations of machine learnin...

Second-Guessing in Tracing Tasks Considered Harmful? (04/09/2018)
[Context and motivation] Trace matrices are lynch pins for the developme...
