
A Survey of the State of Explainable AI for Natural Language Processing

by Marina Danilevsky, et al.

Recent years have seen important advances in the quality of state-of-the-art models, but this has come at the expense of models becoming less interpretable. This survey presents an overview of the current state of Explainable AI (XAI), considered within the domain of Natural Language Processing (NLP). We discuss the main categorization of explanations, as well as the various ways explanations can be arrived at and visualized. We detail the operations and explainability techniques currently available for generating explanations for NLP model predictions, to serve as a resource for model developers in the community. Finally, we point out the current gaps and suggest directions for future work in this important research area.
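To make the notion of "generating explanations for NLP model predictions" concrete, the sketch below illustrates one simple post-hoc technique from the family such surveys cover: leave-one-out (occlusion) feature importance, where each token's contribution is measured by how much the model's score changes when that token is removed. The lexicon-based scorer here is a hypothetical stand-in for a real NLP model, not a method from the survey itself.

```python
# Minimal sketch of occlusion-based (leave-one-out) explanation for a
# text classifier. TOY_WEIGHTS and score() are hypothetical stand-ins
# for a real model; only the occlusion procedure is the point.

TOY_WEIGHTS = {"great": 2.0, "good": 1.0, "bad": -1.5, "awful": -2.5}

def score(tokens):
    """Stand-in model: sums per-token sentiment weights."""
    return sum(TOY_WEIGHTS.get(t, 0.0) for t in tokens)

def occlusion_saliency(tokens):
    """Importance of each token = drop in score when it is occluded."""
    base = score(tokens)
    return {
        (i, t): base - score(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

saliency = occlusion_saliency("the movie was great not awful".split())
# Tokens absent from the lexicon receive importance 0.0; "great" and
# "awful" receive their full (signed) contributions.
```

The same wrapper works for any black-box scorer, which is why erasure-style methods are a common baseline: they need only query access to the model, at the cost of one extra forward pass per token.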


