
Explainability Auditing for Intelligent Systems: A Rationale for Multi-Disciplinary Perspectives

by Markus Langer et al.
Saarland University
University of Bonn

National and international guidelines for trustworthy artificial intelligence (AI) consider explainability to be a central facet of trustworthy systems. This paper outlines a multi-disciplinary rationale for explainability auditing. Specifically, we propose that explainability auditing can ensure the quality of explainability of systems in applied contexts and can be the basis for certification as a means to communicate whether systems meet certain explainability standards and requirements. Moreover, we emphasize that explainability auditing needs to take a multi-disciplinary perspective, and we provide an overview of four perspectives (technical, psychological, ethical, legal) and their respective benefits with respect to explainability auditing.

