Thermostat: A Large Collection of NLP Model Explanations and Analysis Tools

08/31/2021
by Nils Feldhus, et al.

In the language domain, as in other domains, neural explainability plays an ever more important role, with feature attribution methods at the forefront. Many such methods require considerable computational resources and expert knowledge about implementation details and parameter choices. To facilitate research, we present Thermostat, which consists of a large collection of model explanations and accompanying analysis tools. Thermostat allows easy access to over 200k explanations for the decisions of prominent state-of-the-art models, spanning different NLP tasks and generated with multiple explainers. The dataset took over 10k GPU hours (more than one year) to compile, compute time that the community now saves. The accompanying software tools allow explanations to be analysed both instance-wise and cumulatively at the corpus level. Users can investigate and compare models, datasets and explainers without the need to orchestrate implementation details. Thermostat is fully open source, democratizes explainability research in the language domain, circumvents redundant computations and increases comparability and replicability.
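The abstract describes Thermostat as a collection of precomputed explanations behind a simple loading interface. The sketch below illustrates how such a collection might be accessed and aggregated at corpus level via the Hugging Face datasets library; the configuration name "imdb-bert-lig" (a dataset-model-explainer combination) and the field names are assumptions made for illustration and may differ from the released package.

```python
# Minimal sketch (assumptions: the Thermostat collection is published as a
# Hugging Face dataset whose configurations follow a
# "<dataset>-<model>-<explainer>" naming scheme, e.g. "imdb-bert-lig" for
# IMDb + BERT + Layer Integrated Gradients; field names are illustrative).
from datasets import load_dataset

import numpy as np

# Download one precomputed explanation configuration instead of re-running
# the explainer (the abstract reports >10k GPU hours of total compute saved).
data = load_dataset("thermostat", "imdb-bert-lig", split="test")

# Instance-wise view: each record is assumed to pair input tokens with
# per-token attribution scores and the model's prediction.
example = data[0]
print(example.keys())

# Corpus-level view (illustrative aggregation only): mean absolute
# attribution per token across all instances in the split.
mean_abs_attr = np.mean([np.abs(ex["attributions"]).mean() for ex in data])
print(f"Mean absolute attribution per token: {mean_abs_attr:.4f}")
```

Switching the configuration string to another dataset-model-explainer combination would, under the same assumptions, make comparisons across models and explainers a matter of loading a different split rather than re-orchestrating the underlying attribution pipelines.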

