Overlapping Word Removal is All You Need: Revisiting Data Imbalance in Hope Speech Detection

Hope speech detection, the task of recognizing positive expressions, has made significant strides recently. However, much of the current work focuses on model development without considering the inherent class imbalance in the data. Our work revisits this issue in hope speech detection by introducing focal loss, data augmentation, and pre-processing strategies. We find that incorporating focal loss into Multilingual BERT's (M-BERT) training process mitigates the effect of class imbalance and improves overall F1-Macro by 0.11. At the same time, contextual and back-translation-based word augmentation with M-BERT improves results by 0.10 over the baseline despite the imbalance. Finally, we show that overlapping word removal as a pre-processing step, though simple, improves F1-Macro by 0.28. Along the way, we present detailed studies of the behavior of each of these strategies and summarize key findings from our empirical results for those interested in getting the most out of M-BERT for hope speech detection under real-world conditions of data imbalance.
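Focal loss mitigates class imbalance by down-weighting easy, well-classified examples so that training concentrates on hard, minority-class examples. A minimal stdlib-only sketch for a single binary prediction is given below; the function name and the default values of the focusing parameter γ and the class weight α are illustrative assumptions, not values taken from the paper:

```python
import math

def focal_loss(p, target, gamma=2.0, alpha=0.25):
    """Focal loss for one binary prediction (illustrative sketch).

    p      -- predicted probability of the positive class (0 < p < 1)
    target -- true label, 1 (positive) or 0 (negative)
    gamma  -- focusing parameter; gamma=0 recovers weighted cross-entropy
    alpha  -- weight on the positive class (1 - alpha on the negative)
    """
    # Probability assigned to the true class.
    pt = p if target == 1 else 1.0 - p
    # Class-balancing weight for the true class.
    a = alpha if target == 1 else 1.0 - alpha
    # The (1 - pt)^gamma factor shrinks the loss on confident,
    # correct predictions, focusing gradient on hard examples.
    return -a * (1.0 - pt) ** gamma * math.log(pt)
```

Note how the modulating factor works: a confident correct prediction (p = 0.9 for a positive example) contributes far less loss than a confident wrong one (p = 0.1), which is the mechanism that keeps the majority class from dominating training.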
