The Need for Standardized Explainability

10/20/2020
by Othman Benchekroun, et al.

Explainable AI (XAI) is paramount in industry-grade AI; however, existing methods fail to meet this need, in part due to a lack of standardisation of explainability methods. This paper offers a perspective on the current state of explainability research and proposes novel definitions of Explainability and Interpretability as a first step toward standardising the field. To that end, we survey the literature on explainability and the methods already implemented in practice. Finally, we offer a tentative taxonomy of explainability methods, opening the door to future research.

