Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue
The growing complexity of software systems and the influence of software-supported decisions in our society have created a need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) with a significant impact on system quality. However, to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it affects other quality aspects of a system. Such an understanding enables an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of conceptual models and knowledge catalogues to support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.