Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection

10/09/2022
by Shantanu Patankar, et al.

Pre-training large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. Although this method has proven effective for many domains, it might not always provide desirable benefits. In this paper, we study the effects of hateful pre-training on low-resource hate speech classification tasks. While previous studies on the English language have emphasized its importance, we aim to augment their observations with some non-obvious insights. We evaluate different variations of tweet-based BERT models pre-trained on hateful, non-hateful, and mixed subsets of a 40M tweet dataset. This evaluation is carried out for the Indian languages Hindi and Marathi. This paper provides empirical evidence that hateful pre-training is not the best pre-training option for hate speech detection. We show that pre-training on non-hateful text from the target domain provides similar or better results. Further, we introduce HindTweetBERT and MahaTweetBERT, the first publicly available BERT models pre-trained on Hindi and Marathi tweets, respectively. We show that they provide state-of-the-art performance on hate speech classification tasks. We also release a gold hate speech evaluation benchmark, HateEval-Hi and HateEval-Mr, each consisting of 2,000 manually labeled tweets.
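The released models can be used like any other BERT checkpoint. Below is a minimal sketch, not the authors' code, of fine-tuning such a tweet-BERT model for binary hate speech classification with the Hugging Face Transformers library; the model identifier, the toy texts, and the labels are placeholders standing in for the actual hub ID and the HateEval-style labeled data.

```python
# Hedged sketch: fine-tune a tweet-BERT checkpoint for hate speech classification.
# "your-org/MahaTweetBERT" is a placeholder model ID, not a confirmed hub name.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "your-org/MahaTweetBERT"  # placeholder (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy examples standing in for the manually labeled tweet benchmark.
texts = ["example tweet one", "example tweet two"]
labels = [0, 1]  # 0 = non-hateful, 1 = hateful

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class TweetDataset(torch.utils.data.Dataset):
    """Wraps tokenized tweets and labels for the Trainer API."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

train_ds = TweetDataset(enc, labels)

args = TrainingArguments(output_dir="out",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

The same pattern applies to HindTweetBERT by swapping in the corresponding checkpoint name; only the tokenizer and model weights change, not the fine-tuning loop.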


