Testing the effectiveness of saliency-based explainability in NLP using randomized survey-based experiments

11/25/2022
by Adel Rahimi, et al.

As applications of Natural Language Processing (NLP) proliferate in sensitive areas such as political profiling and essay review in education, there is a growing need for transparency in NLP models to build trust with stakeholders and to identify biases. Much of the work in Explainable AI aims to devise explanation methods that give humans insight into the workings and predictions of NLP models. While these methods distill the predictions of complex models such as neural networks into consumable explanations, how humans understand these explanations remains largely unexplored. Innate human tendencies and biases can hinder people's understanding of the explanations and, as a result, lead them to misjudge models and their predictions. We designed a randomized survey-based experiment to assess the effectiveness of saliency-based post-hoc explainability methods in Natural Language Processing. The results show that humans tend to accept explanations with a less critical view.
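
To make the object of study concrete, the sketch below computes a gradient-times-input saliency score for each token of a toy PyTorch text classifier. It illustrates the general kind of saliency-based post-hoc explanation the abstract refers to; the model, vocabulary, and token ids are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of a gradient-times-input saliency explanation for a toy
# text classifier. Everything here (model, vocabulary size, token ids) is
# an illustrative assumption, not the paper's actual method or data.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=16, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, embedded):
        # Mean-pool token embeddings over the sequence, then classify.
        return self.fc(embedded.mean(dim=1))

model = TinyClassifier()
tokens = torch.tensor([[4, 17, 23, 8]])      # one toy sentence of four token ids
embedded = model.embed(tokens)               # shape: (1, seq_len, embed_dim)
embedded.retain_grad()                       # keep gradients for this non-leaf tensor

logits = model(embedded)                     # shape: (1, num_classes)
pred_class = logits.argmax(dim=-1).item()
logits[0, pred_class].backward()             # gradient of the predicted-class logit

# Token-level saliency: |gradient * embedding| summed over embedding dimensions.
saliency = (embedded.grad * embedded).abs().sum(dim=-1).squeeze(0)
print(saliency)                              # higher score = more influential token
```

A highlighted version of such per-token scores is what survey participants are typically shown when judging a model's prediction.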

Related research

10/01/2020  A Survey of the State of Explainable AI for Natural Language Processing
08/10/2021  Post-hoc Interpretability for Neural NLP: A Survey
10/13/2022  Constructing Natural Language Explanations via Saliency Map Verbalization
06/24/2021  Evaluation of Saliency-based Explainability Method
05/08/2021  On Guaranteed Optimal Robust Explanations for NLP Models
08/28/2023  Goodhart's Law Applies to NLP's Explanation Benchmarks
05/19/2023  Solving NLP Problems through Human-System Collaboration: A Discussion-based Approach
