One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

09/06/2019
by   Vijay Arya, et al.

As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360 (http://aix360.mybluemix.net/), an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics. Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability. For data scientists and other users of the toolkit, we have implemented an extensible software architecture that organizes methods according to their place in the AI modeling pipeline. We also discuss enhancements to bring research innovations closer to consumers of explanations, ranging from simplified, more accessible versions of algorithms, to tutorials and an interactive web demo to introduce AI explainability to different audiences and application domains. Together, our toolkit and taxonomy can help identify gaps where more explainability methods are needed and provide a platform to incorporate them as they are developed.
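To make concrete what a post-hoc explainability method of the kind collected in such toolkits does, the sketch below computes simple perturbation-based feature attributions for a toy model. This is an illustrative example only, not AIX360's actual API; the model, function names, and baseline choice are all assumptions made for the sketch.

```python
# Minimal sketch of a perturbation-based feature-attribution method,
# the kind of post-hoc explainer a toolkit like AI Explainability 360
# collects. All names here are illustrative, not AIX360's API.

def predict(x):
    # Stand-in "model": a fixed linear scorer over three features.
    weights = [0.5, -2.0, 1.0]
    return sum(w * xi for w, xi in zip(weights, x))

def attribute(predict_fn, x, baseline):
    """Attribute a prediction to each feature by replacing that feature
    with its baseline value and measuring how the output changes."""
    base_score = predict_fn(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]
        attributions.append(base_score - predict_fn(perturbed))
    return attributions

x = [1.0, 1.0, 2.0]
scores = attribute(predict, x, baseline=[0.0, 0.0, 0.0])
print(scores)  # each entry: that feature's contribution relative to the baseline
```

For the linear stand-in model the attributions simply recover weight times feature value; for a real black-box model the same perturb-and-diff loop gives a local, model-agnostic importance estimate, which is one point in the design space the paper's taxonomy maps out.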


Related research

- AI Explainability 360: Impact and Design (09/24/2021)
- Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations (02/14/2022)
- A Theoretical Framework for AI Models Explainability (12/29/2022)
- CLAIMED, a visual and scalable component library for Trusted AI (03/04/2021)
- A taxonomy of explanations to support Explainability-by-Design (06/09/2022)
- A general framework for scientifically inspired explanations in AI (03/02/2020)
- ForestMonkey: Toolkit for Reasoning with AI-based Defect Detection and Classification Models (07/25/2023)
