QUACKIE: A NLP Classification Task With Ground Truth Explanations

12/24/2020
by   Yves Rychener, et al.

NLP Interpretability aims to increase trust in model predictions, which makes evaluating interpretability approaches a pressing issue. Several datasets exist for evaluating NLP Interpretability, but their reliance on human-provided ground truths raises questions about bias in the evaluation. In this work, we take a different approach and formulate a specific classification task by diverting question-answering datasets. For this custom classification task, the interpretability ground truth arises directly from the definition of the classification problem. We use this method to propose a benchmark and lay the groundwork for future research in NLP interpretability by evaluating a wide range of current state-of-the-art methods.
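The snippet below is a minimal, hypothetical sketch of how a SQuAD-style QA example might be diverted into such a classification task: the label records whether the context answers the question, and the sentence covering the answer span serves as the explanation ground truth. The function names and the naive sentence-splitting heuristic are illustrative assumptions, not the authors' implementation; the exact QUACKIE construction is defined in the full paper.

```python
# Hypothetical sketch: deriving a classification instance with a built-in
# explanation ground truth from a SQuAD-style (question, context, answer) triple.
import re
from typing import List, Tuple


def split_sentences(text: str) -> List[str]:
    # Assumption: a naive splitter on ., !, ? is sufficient for illustration.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def make_example(question: str, context: str, answer_start: int) -> Tuple[str, str, int, List[int]]:
    """Label = 1 if the context contains the answer to the question, else 0.
    Explanation ground truth = indices of the sentence(s) covering the answer span,
    which follows directly from the task definition rather than human annotation."""
    sentences = split_sentences(context)
    rationale = []
    offset = 0
    for i, sent in enumerate(sentences):
        start = context.find(sent, offset)
        end = start + len(sent)
        if start <= answer_start < end:
            rationale.append(i)
        offset = end
    label = 1 if rationale else 0
    return question, context, label, rationale


if __name__ == "__main__":
    ctx = ("The Eiffel Tower is in Paris. It was completed in 1889. "
           "It remains a popular landmark.")
    q = "When was the Eiffel Tower completed?"
    # The answer "1889" occurs at this character offset in the context.
    print(make_example(q, ctx, ctx.find("1889")))
```

An interpretability method is then scored by how well the context tokens or sentences it highlights match this automatically derived rationale, removing the need for human-provided explanation labels.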
