Cross-Platform and Cross-Domain Abusive Language Detection with Supervised Contrastive Learning

The prevalence of abusive language across online platforms has become a major concern, raising the need for automated cross-platform abusive language detection. However, prior work has focused on concatenating data from multiple platforms, inherently adopting the Empirical Risk Minimization (ERM) method. In this work, we address the challenge from the perspective of a domain generalization objective. We design SCL-Fish, a meta-learning algorithm integrated with supervised contrastive learning, to detect abusive language on unseen platforms. Our experimental analysis shows that SCL-Fish outperforms ERM and existing state-of-the-art models. We also show that SCL-Fish is data-efficient and, upon fine-tuning for the abusive language detection task, achieves performance comparable to large-scale pre-trained models.
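The paper's details are not reproduced on this page, so the following is only a rough sketch of the two ingredients the abstract names: a supervised contrastive loss (Khosla et al., 2020) and a Fish-style meta-step (Shi et al., 2021) that encourages platform-invariant representations via a Reptile-like update aligning per-domain gradients. All function names, hyperparameters, and the toy gradient function below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sup_con_loss(feats, labels, tau=0.1):
    """Supervised contrastive loss (Khosla et al., 2020): pull
    same-label embeddings together, push different-label ones apart."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # L2-normalize
    sim = f @ f.T / tau                                       # scaled cosine similarities
    self_mask = np.eye(len(labels), dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)                   # exclude each anchor itself
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask   # same-label pairs
    per_anchor = np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return -per_anchor.mean()

def fish_step(theta, domain_batches, grad_fn, inner_lr=0.1, meta_lr=0.5):
    """One Fish meta-step (Shi et al., 2021): take sequential inner
    gradient steps across the domains (here, platforms), then move the
    original parameters toward the inner-loop endpoint. This Reptile-style
    update approximately maximizes the inner product of per-domain
    gradients, which is the domain-generalization objective."""
    theta_tilde = theta.copy()
    for X, y in domain_batches:
        theta_tilde = theta_tilde - inner_lr * grad_fn(theta_tilde, X, y)
    return theta + meta_lr * (theta_tilde - theta)

def mse_grad(theta, X, y):
    """Toy linear-regression gradient, standing in for a real model's
    per-platform loss gradient."""
    return X.T @ (X @ theta - y) / len(y)
```

In SCL-Fish as the abstract describes it, the supervised contrastive loss would form part of the per-platform objective whose gradients the Fish step aligns; the exact combination (loss weighting, encoder architecture) is specified in the paper, not here.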
