
Explainability Auditing for Intelligent Systems: A Rationale for Multi-Disciplinary Perspectives

08/05/2021
by Markus Langer et al.
Universität Saarland
University of Bonn

National and international guidelines for trustworthy artificial intelligence (AI) consider explainability a central facet of trustworthy systems. This paper outlines a multi-disciplinary rationale for explainability auditing. Specifically, we propose that explainability auditing can ensure the quality of system explainability in applied contexts and can serve as the basis for certification, a means of communicating whether systems meet certain explainability standards and requirements. Moreover, we emphasize that explainability auditing needs to take a multi-disciplinary perspective, and we provide an overview of four such perspectives (technical, psychological, ethical, and legal) and their respective benefits for explainability auditing.

Related research:

Explainability in Human-Agent Systems (04/17/2019)
This paper presents a taxonomy of explainability in Human-Agent Systems....

Algorithmic Governance for Explainability: A Comparative Overview of Progress and Trends (03/01/2023)
The explainability of AI has transformed from a purely technical issue t...

Impossibility Results in AI: A Survey (09/01/2021)
An impossibility theorem demonstrates that a particular problem or set o...

Grounding Explainability Within the Context of Global South in XAI (05/13/2022)
In this position paper, we propose building a broader and deeper underst...

Explainability Case Studies (09/01/2020)
Explainability is one of the key ethical concepts in the design of AI sy...

Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities (11/03/2021)
Explainable artificial intelligence (xAI) is seen as a solution to makin...

An Audit Framework for Technical Assessment of Binary Classifiers (11/17/2022)
Multilevel models using logistic regression (MLogRM) and random forest m...