Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection

05/06/2022
by Esma Balkir, et al.

We present a novel feature attribution method for explaining text classifiers, and analyze it in the context of hate speech detection. Although feature attribution models usually provide a single importance score for each token, we instead provide two complementary and theoretically-grounded scores – necessity and sufficiency – resulting in more informative explanations. We propose a transparent method that calculates these values by generating explicit perturbations of the input text, allowing the importance scores themselves to be explainable. We employ our method to explain the predictions of different hate speech detection models on the same set of curated examples from a test suite, and show that different values of necessity and sufficiency for identity terms correspond to different kinds of false positive errors, exposing sources of classifier bias against marginalized groups.
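To make the two scores concrete, the sketch below shows one simple way perturbation-based necessity and sufficiency could be computed for a single token. It assumes a classifier exposed as a `predict_fn(tokens) -> label` callable and a mask-replacement perturbation scheme; the `MASK` placeholder, the sampling rates, and the perturbation distribution are illustrative assumptions rather than the paper's exact procedure. Intuitively, a token is necessary if perturbing it tends to flip the prediction, and sufficient if keeping it while perturbing the rest of the input tends to preserve the prediction.

```python
# Minimal sketch of perturbation-based necessity and sufficiency scores for a
# single token. The classifier interface (predict_fn), the mask-replacement
# perturbation scheme, and the sampling rates below are illustrative
# assumptions, not the paper's exact procedure.
import random
from typing import Callable, List, Sequence

MASK = "[MASK]"  # assumed placeholder standing in for a perturbed-away token


def _mask_positions(tokens: Sequence[str], positions: Sequence[int]) -> List[str]:
    """Return a copy of `tokens` with the given positions replaced by MASK."""
    out = list(tokens)
    for i in positions:
        out[i] = MASK
    return out


def necessity(predict_fn: Callable[[List[str]], int],
              tokens: Sequence[str], idx: int, n_samples: int = 100) -> float:
    """How often does perturbing token `idx` (along with a random subset of the
    other tokens) flip the model's original prediction?"""
    original = predict_fn(list(tokens))
    flips = 0
    for _ in range(n_samples):
        others = [i for i in range(len(tokens)) if i != idx and random.random() < 0.3]
        flips += predict_fn(_mask_positions(tokens, [idx] + others)) != original
    return flips / n_samples


def sufficiency(predict_fn: Callable[[List[str]], int],
                tokens: Sequence[str], idx: int, n_samples: int = 100) -> float:
    """How often does keeping token `idx` while perturbing a random subset of
    the other tokens preserve the model's original prediction?"""
    original = predict_fn(list(tokens))
    preserved = 0
    for _ in range(n_samples):
        others = [i for i in range(len(tokens)) if i != idx and random.random() < 0.5]
        preserved += predict_fn(_mask_positions(tokens, others)) == original
    return preserved / n_samples


if __name__ == "__main__":
    # Toy keyword "classifier" standing in for a trained hate speech model.
    def toy_predict(tokens: List[str]) -> int:
        return int("slur" in tokens)

    sentence = "you people are such a slur".split()
    print(necessity(toy_predict, sentence, idx=5))    # ~1.0: masking "slur" always flips the label
    print(sufficiency(toy_predict, sentence, idx=5))  # ~1.0: keeping "slur" always preserves it
```

Averaging over explicitly sampled perturbations is what keeps the scores transparent in the sense the abstract describes: each number can be traced back to the concrete perturbed inputs that produced it.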
