Predicting Different Types of Subtle Toxicity in Unhealthy Online Conversations

06/07/2021
by Shlok Gilda, et al.

This paper investigates the use of machine learning models for the classification of unhealthy online conversations containing one or more forms of subtler abuse, such as hostility, sarcasm, and generalization. We leveraged a public dataset of 44K online comments containing healthy and unhealthy comments labeled with seven forms of subtle toxicity. We were able to distinguish between these comments, reporting micro F1-score, macro F1-score, and ROC-AUC results with a top micro F1-score of 88.76%; hostile comments were easier to detect than other types of unhealthy comments. We also conducted a sentiment analysis, which revealed that most types of unhealthy comments were associated with a slightly negative sentiment, with hostile comments being the most negative.
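To make the setup concrete, below is a minimal sketch of the task the abstract describes: multi-label classification of comments annotated with several subtle-toxicity attributes, scored with micro F1, macro F1, and ROC-AUC. This is not the authors' pipeline; the file name, the label column names, and the TF-IDF plus one-vs-rest logistic-regression model are placeholder assumptions for illustration.

    # Hypothetical multi-label toxicity classification sketch (not the paper's actual method).
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier

    # Hypothetical label columns, one per form of subtle toxicity.
    LABELS = ["hostile", "antagonistic", "dismissive", "condescending",
              "sarcastic", "generalisation", "unfair_generalisation"]

    df = pd.read_csv("unhealthy_comments.csv")  # hypothetical path to the 44K-comment corpus
    X_train, X_test, y_train, y_test = train_test_split(
        df["comment"], df[LABELS], test_size=0.2, random_state=42)

    # Bag-of-ngrams features; the paper may use a different text representation.
    vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    X_train_vec = vectorizer.fit_transform(X_train)
    X_test_vec = vectorizer.transform(X_test)

    # One binary classifier per toxicity attribute.
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X_train_vec, y_train)

    probs = clf.predict_proba(X_test_vec)   # per-label probabilities
    preds = (probs >= 0.5).astype(int)      # threshold into hard labels

    print("micro F1:", f1_score(y_test, preds, average="micro"))
    print("macro F1:", f1_score(y_test, preds, average="macro"))
    print("ROC-AUC :", roc_auc_score(y_test, probs, average="macro"))

Micro F1 aggregates true/false positives over all labels (favoring frequent attributes), while macro F1 averages per-label scores, which is why the two can diverge sharply on an imbalanced toxicity dataset.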
