ToxVis: Enabling Interpretability of Implicit vs. Explicit Toxicity Detection Models with Interactive Visualization

03/01/2023
by Uma Gunturi, et al.

The rise of hate speech on online platforms has created an urgent need for effective content moderation. However, the subjective and multi-faceted nature of hateful online content, including implicit hate speech, poses significant challenges to human moderators and content moderation systems. To address this issue, we developed ToxVis, a visually interactive and explainable tool for classifying hate speech into three categories: implicit, explicit, and non-hateful. We fine-tuned transformer-based models, including RoBERTa, XLNet, and GPT-3, and used deep learning interpretation techniques to explain the classification results. ToxVis enables users to input potentially hateful text and receive a classification result along with a visual explanation of which words contributed most to the decision. By making the classification process explainable, ToxVis provides a valuable tool for understanding the nuances of hateful content and supporting more effective content moderation. Our research contributes to the growing body of work aimed at mitigating the harms caused by online hate speech and demonstrates the potential of combining state-of-the-art natural language processing models with interpretable deep learning techniques to address this critical issue. Finally, ToxVis can serve as a resource for content moderators, social media platforms, and researchers working to combat the spread of hate speech online.
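To illustrate the workflow the abstract describes (classify a text as implicit, explicit, or non-hateful, then attribute the decision to individual words), the following is a minimal sketch, not the authors' implementation: the fine-tuned checkpoint path and the label order are hypothetical, and integrated gradients (via Captum) stands in for the unspecified "deep learning interpretation techniques".

# Minimal sketch of a ToxVis-style pipeline: three-way hate speech
# classification with word-level attributions. The checkpoint path and
# label order below are assumptions, and integrated gradients is one
# plausible attribution method, not necessarily the one the paper used.
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification
from captum.attr import LayerIntegratedGradients

LABELS = ["implicit", "explicit", "non-hateful"]  # assumed label order

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "path/to/finetuned-roberta", num_labels=3)  # hypothetical checkpoint
model.eval()

def forward_logits(input_ids, attention_mask):
    # Return raw class logits so Captum can attribute a chosen class score.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

def classify_and_explain(text):
    enc = tokenizer(text, return_tensors="pt")
    input_ids, mask = enc["input_ids"], enc["attention_mask"]
    with torch.no_grad():
        pred = forward_logits(input_ids, mask).argmax(dim=-1).item()
    # Attribute the predicted class score to each input token by
    # integrating gradients from an all-padding baseline to the input.
    lig = LayerIntegratedGradients(forward_logits, model.roberta.embeddings)
    baseline = torch.full_like(input_ids, tokenizer.pad_token_id)
    attrs = lig.attribute(input_ids, baselines=baseline,
                          additional_forward_args=(mask,), target=pred)
    scores = attrs.sum(dim=-1).squeeze(0)  # one score per token
    tokens = tokenizer.convert_ids_to_tokens(input_ids.squeeze(0))
    return LABELS[pred], list(zip(tokens, scores.tolist()))

label, attributions = classify_and_explain("example input text")
print(label)
for tok, score in attributions:
    print(f"{tok}\t{score:+.3f}")

In a tool like ToxVis, the per-token scores returned here would drive the visual highlighting, with more positive scores indicating words that pushed the model toward the predicted class.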

Related research

04/07/2023
SSS at SemEval-2023 Task 10: Explainable Detection of Online Sexism using Majority Voted Fine-Tuned Transformers
This paper describes our submission to Task 10 at SemEval 2023-Explainab...

09/12/2022
A Review of Challenges in Machine Learning based Automated Hate Speech Detection
The spread of hate speech on social media space is currently a serious i...

02/11/2023
Is ChatGPT better than Human Annotators? Potential and Limitations of ChatGPT in Explaining Implicit Hate Speech
Recent studies have alarmed that many online hate speeches are implicit....

02/08/2021
RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization
With the widespread use of toxic language online, platforms are increasi...

05/03/2021
Towards A Multi-agent System for Online Hate Speech Detection
This paper envisions a multi-agent system for detecting the presence of ...

09/11/2021
Latent Hatred: A Benchmark for Understanding Implicit Hate Speech
Hate speech has grown significantly on social media, causing serious con...

01/07/2020
RECAST: Interactive Auditing of Automatic Toxicity Detection Models
As toxic language becomes nearly pervasive online, there has been increa...
