Towards a Characterization of Explainable Systems

01/31/2019
by Dimitri Bohlender, et al.

With their ever-increasing complexity and autonomy, building software-driven systems that are easily understood becomes a challenge. Accordingly, recent research efforts strive to aid in the design of explainable systems. Nevertheless, a common notion of what it takes for a system to be explainable is still missing. To address this problem, we propose a characterization of explainable systems that consolidates existing research. By providing a unified terminology, we lay a basis for the classification of both existing and future research, and for the formulation of precise requirements for such systems.

Related research

08/13/2019  Towards Self-Explainable Cyber-Physical Systems
With the increasing complexity of CPSs, their behavior and decisions bec...

06/14/2021  Pitfalls of Explainable ML: An Industry Perspective
As machine learning (ML) systems take a more prominent and central role ...

10/02/2017  What Does Explainable AI Really Mean? A New Conceptualization of Perspectives
We characterize three notions of explainable AI that cut across research...

10/18/2019  Identifying the Most Explainable Classifier
We introduce the notion of pointwise coverage to measure the explainabil...

08/12/2021  Cases for Explainable Software Systems: Characteristics and Examples
The need for systems to explain behavior to users has become more eviden...

12/06/2022  Towards Better User Requirements: How to Involve Human Participants in XAI Research
Human-Centered eXplainable AI (HCXAI) literature identifies the need to ad...

03/06/2023  A System's Approach Taxonomy for User-Centred XAI: A Survey
Recent advancements in AI have coincided with ever-increasing efforts in...
