Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations

02/14/2022
by Anna Hedström, et al.

The evaluation of explanation methods is a research topic that has not yet been explored in depth. However, since explainability is meant to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, no tool has existed that lets researchers exhaustively and quickly evaluate explanations of neural network predictions in a quantitative way. To increase transparency and reproducibility in the field, we therefore built Quantus, a comprehensive, open-source toolkit in Python that includes a growing, well-organised collection of evaluation metrics and tutorials for evaluating explanation methods. The toolkit has been thoroughly tested and is available under an open-source license on PyPI (or at https://github.com/understandable-machine-intelligence-lab/quantus/).
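To give a concrete flavour of how such an evaluation looks, below is a minimal sketch that scores saliency explanations with a MaxSensitivity robustness metric. The metric-class interface (quantus.MaxSensitivity, quantus.explain) follows the project's documentation, but exact keyword arguments may differ between versions; the tiny model, the random data, and the choice of Captum-backed "Saliency" attributions are placeholder assumptions for illustration only.

import numpy as np
import torch
import quantus

# Illustrative stand-in model and random data; replace with your own trained
# PyTorch model and dataset.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
model.eval()
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)

# Robustness metric: how much do attributions change under small input perturbations?
metric = quantus.MaxSensitivity(nr_samples=10, lower_bound=0.2)

scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=None,                  # let the metric compute attributions on the fly
    explain_func=quantus.explain,  # wrapper around common attribution methods (assumes Captum is installed)
    explain_func_kwargs={"method": "Saliency"},
    device="cpu",
)
print(scores)  # one sensitivity score per input in the batch

The toolkit's metrics broadly follow this pattern: instantiate a metric with its hyperparameters, then call it on a model, a batch of inputs and labels, and either precomputed attributions or an explanation function.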

