ToxCCIn: Toxic Content Classification with Interpretability

03/01/2021
by Tong Xiang et al.

Despite the recent successes of transformer-based models in terms of effectiveness on a variety of tasks, their decisions often remain opaque to humans. Explanations are particularly important for tasks like offensive language or toxicity detection on social media, because a manual appeal process is often in place to dispute automatically flagged content. In this work, we propose a technique to improve the interpretability of these models, based on a simple and powerful assumption: a post is at least as toxic as its most toxic span. We incorporate this assumption into transformer models by scoring a post based on the maximum toxicity of its spans and augmenting the training process to identify correct spans. We find this approach effective, and according to a human study it can produce explanations that exceed the quality of those provided by logistic regression analysis (often regarded as a highly interpretable model).


Related research

10/29/2019 · Weight of Evidence as a Basis for Human-Oriented Explanations
Interpretability is an elusive but highly sought-after characteristic of...

10/13/2021 · Automated Essay Scoring Using Transformer Models
Automated essay scoring (AES) is gaining increasing attention in the edu...

08/10/2021 · Post-hoc Interpretability for Neural NLP: A Survey
Natural Language Processing (NLP) models have become increasingly more c...

12/06/2021 · HIVE: Evaluating the Human Interpretability of Visual Explanations
As machine learning is increasingly applied to high-impact, high-risk do...

05/23/2022 · Logical Reasoning with Span Predictions: Span-level Logical Atoms for Interpretable and Robust NLI Models
Current Natural Language Inference (NLI) models achieve impressive resul...

06/08/2023 · Closing the Loop: Testing ChatGPT to Generate Model Explanations to Improve Human Labelling of Sponsored Content on Social Media
Regulatory bodies worldwide are intensifying their efforts to ensure tra...

06/09/2023 · Using Foundation Models to Detect Policy Violations with Minimal Supervision
Foundation models, i.e. large neural networks pre-trained on large text ...
