Hate Speech Detection and Racial Bias Mitigation in Social Media based on BERT model

08/14/2020
by Marzieh Mozafari, et al.

Disparate biases associated with datasets and trained classifiers in hateful and abusive content identification tasks have raised many concerns recently. Although the problem of biased datasets in abusive language detection has been addressed frequently, biases arising from trained classifiers have not yet received much attention. Here, we first introduce a transfer learning approach for hate speech detection based on an existing pre-trained language model, BERT, and evaluate the proposed model on two publicly available Twitter datasets annotated for racism, sexism, hate, or offensive content. Next, we introduce a bias alleviation mechanism for the hate speech detection task to mitigate the effect of bias in the training set during fine-tuning of our pre-trained BERT-based model. To that end, we use an existing regularization method to re-weight input samples, thereby decreasing the effect of n-grams in the training set that are highly correlated with class labels, and then fine-tune our pre-trained BERT-based model with the new re-weighted samples. To evaluate the bias alleviation mechanism, we employ a cross-domain approach: we use the classifiers trained on the aforementioned datasets to predict the labels of two new Twitter datasets, the AAE-aligned and White-aligned groups, which contain tweets written in African-American English (AAE) and Standard American English (SAE), respectively. The results show the existence of systematic racial bias in the trained classifiers, as they tend to assign tweets from the AAE-aligned group to negative classes such as racism, sexism, hate, and offensive more often than tweets from the White-aligned group. However, the racial bias in our classifiers is reduced significantly once our bias alleviation mechanism is incorporated. This work could constitute the first step towards debiasing hate speech and abusive language detection systems.
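To make the re-weighting idea concrete, the following is a minimal, simplified sketch of the kind of sample re-weighting the abstract describes: estimate how strongly each n-gram in the training set is correlated with the positive (e.g., hateful) class, then down-weight samples whose n-grams are highly predictive of that class, so the subsequent fine-tuning relies less on surface correlations. The function names and the specific weighting formula (`1 - max correlation`) are illustrative assumptions, not the paper's exact regularization method.

```python
from collections import Counter


def ngram_label_correlation(samples, labels, n=1):
    """Estimate P(positive class | n-gram) for every n-gram in the corpus.

    `samples` is a list of whitespace-tokenizable strings; `labels` is a
    parallel list of 0/1 class labels (1 = the "negative" class in the
    abstract's sense, e.g. hateful/offensive).
    """
    total, positive = Counter(), Counter()
    for text, y in zip(samples, labels):
        tokens = text.split()
        grams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
        for g in grams:
            total[g] += 1
            if y == 1:
                positive[g] += 1
    return {g: positive[g] / total[g] for g in total}


def reweight_samples(samples, labels, n=1):
    """Assign each training sample a weight that shrinks toward zero as its
    most class-correlated n-gram approaches perfect correlation.

    Weights are normalized to have mean 1, so the overall loss scale is
    preserved when they multiply a per-sample loss during fine-tuning.
    """
    corr = ngram_label_correlation(samples, labels, n)
    weights = []
    for text in samples:
        tokens = text.split()
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        strongest = max((corr.get(g, 0.0) for g in grams), default=0.0)
        # A sample dominated by a perfectly class-predictive n-gram gets
        # a near-zero weight; the epsilon avoids exact zeros.
        weights.append(1.0 - strongest + 1e-6)
    scale = len(weights) / sum(weights)
    return [w * scale for w in weights]
```

In an actual pipeline, these weights would multiply the per-sample cross-entropy loss while fine-tuning the BERT-based classifier, so that examples carrying strongly class-correlated surface cues (e.g., dialect markers) contribute less to the gradient.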


