Cases for Explainable Software Systems: Characteristics and Examples

08/12/2021

by Mersedeh Sadeghi, et al.

The need for systems to explain their behavior to users has become more evident with the rise of complex technologies such as machine learning and self-adaptation. In general, the need for an explanation arises when the behavior of a system does not match the user's expectations. However, there may be several reasons for such a mismatch, including errors, goal conflicts, or multi-agent interference. Given these varied situations, we need precise and agreed descriptions of explanation needs, as well as benchmarks, to align research on explainable systems. In this paper, we present a taxonomy that structures needs for an explanation according to their different reasons. We focus on explanations that improve the user's interaction with the system. For each leaf node in the taxonomy, we provide a scenario that describes a concrete situation in which a software system should provide an explanation. These scenarios, called explanation cases, illustrate the different demands for explanations. Our taxonomy can guide the requirements elicitation for the explanation capabilities of interactive intelligent systems, and our explanation cases build the basis for a common benchmark. We are convinced that both the taxonomy and the explanation cases help the community to align future research on explainable systems.


