One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

by Vijay Arya, et al.

As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360 (AIX360), an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics. Equally important, we provide a taxonomy to help entities requiring explanations navigate the space of explanation methods, not only those in the toolkit but also those in the broader literature on explainability. For data scientists and other users of the toolkit, we have implemented an extensible software architecture that organizes methods according to their place in the AI modeling pipeline. We also discuss enhancements to bring research innovations closer to consumers of explanations, ranging from simplified, more accessible versions of algorithms, to tutorials and an interactive web demo that introduce AI explainability to different audiences and application domains. Together, our toolkit and taxonomy can help identify gaps where more explainability methods are needed and provide a platform to incorporate them as they are developed.
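To make the taxonomy's categories concrete, the sketch below implements one family of methods that such a taxonomy would classify as post-hoc, global, and model-agnostic: permutation feature importance. This is an illustrative stand-in, not AIX360's own API; the toy model and data are hypothetical.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Estimate each feature's importance as the mean drop in accuracy
    when that feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)  # baseline accuracy on intact data
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(base - np.mean(predict(Xp) == y))
        importances.append(float(np.mean(drops)))
    return importances

# Toy "model": its prediction depends only on feature 0; feature 1 is noise.
rng = np.random.default_rng(1)
X = np.column_stack([rng.integers(0, 2, 200), rng.normal(size=200)])
y = X[:, 0].astype(int)
predict = lambda Z: (Z[:, 0] > 0.5).astype(int)

imp = permutation_importance(predict, X, y)
# imp[0] is large (shuffling the used feature hurts accuracy);
# imp[1] is ~0 (the model ignores feature 1).
```

A global explanation like this serves system developers and regulators auditing overall model behavior, whereas the local and directly interpretable methods in the toolkit serve stakeholders who need per-decision explanations.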





