cantnlp@LT-EDI-2023: Homophobia/Transphobia Detection in Social Media Comments using Spatio-Temporally Retrained Language Models

08/20/2023
by Sidney G.-J. Wong, et al.

This paper describes our multiclass classification system developed as part of the LT-EDI@RANLP-2023 shared task. We used a BERT-based language model to detect homophobic and transphobic content in social media comments across five language conditions: English, Spanish, Hindi, Malayalam, and Tamil. We retrained a transformer-based cross-language pretrained language model, XLM-RoBERTa, with spatially and temporally relevant social media language data. We also retrained a subset of models with simulated script-mixed social media language data, with varied performance. We developed the best-performing seven-label classification system for Malayalam based on weighted macro-averaged F1 score (ranked first out of six), with variable performance for other language and class-label conditions. We found that including this spatio-temporal data improved classification performance for all language and task conditions compared with the baseline. The results suggest that transformer-based language classification systems are sensitive to register-specific and language-specific retraining.
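As a rough illustration of the pipeline the abstract describes, the sketch below continues masked-language-model pretraining of XLM-RoBERTa on spatio-temporally matched social media text and then fine-tunes the adapted encoder as a seven-label classifier scored with weighted macro-averaged F1. It uses the Hugging Face transformers and datasets libraries; the placeholder corpora, output paths, label ids, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of the two-stage approach described in the abstract:
# (1) continue masked-language-model (MLM) pretraining of XLM-RoBERTa on
#     spatially and temporally relevant social media text, then
# (2) fine-tune the adapted encoder as a seven-label classifier and
#     score it with weighted macro-averaged F1.
# All data, paths, and hyperparameters below are placeholders.

from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForMaskedLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

BASE = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(BASE)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# ---- Stage 1: spatio-temporal retraining (continued MLM pretraining) ----
# `social_media_texts` stands in for the region- and period-matched corpus.
social_media_texts = ["example social media comment ..."]
mlm_data = Dataset.from_dict({"text": social_media_texts}).map(
    tokenize, batched=True, remove_columns=["text"]
)
mlm_model = AutoModelForMaskedLM.from_pretrained(BASE)
Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="xlmr-retrained", num_train_epochs=1,
                           report_to="none"),
    train_dataset=mlm_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
).train()
mlm_model.save_pretrained("xlmr-retrained")
tokenizer.save_pretrained("xlmr-retrained")

# ---- Stage 2: seven-label classification fine-tuning ----
train_texts = ["a labelled shared-task comment ..."]
train_labels = [0]  # integer ids for the seven class labels
clf_data = Dataset.from_dict({"text": train_texts, "label": train_labels}).map(
    tokenize, batched=True, remove_columns=["text"]
)
clf_model = AutoModelForSequenceClassification.from_pretrained(
    "xlmr-retrained", num_labels=7
)
trainer = Trainer(
    model=clf_model,
    args=TrainingArguments(output_dir="xlmr-clf", num_train_epochs=1,
                           report_to="none"),
    train_dataset=clf_data,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()

# ---- Evaluation: weighted macro-averaged F1, as used for the ranking ----
preds = trainer.predict(clf_data).predictions.argmax(axis=-1)
print("weighted F1:", f1_score(train_labels, preds, average="weighted"))
```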


Related research

03/27/2022
bitsa_nlp@LT-EDI-ACL2022: Leveraging Pretrained Language Models for Detecting Homophobia and Transphobia in Social Media Comments
Online social networks are ubiquitous and user-friendly. Nevertheless, i...

01/26/2023
A benchmark for toxic comment classification on Civil Comments dataset
Toxic comment detection on social media has proven to be essential for c...

04/17/2021
Combating Temporal Drift in Crisis with Adapted Embeddings
Language usage changes over time, and this can impact the effectiveness ...

04/19/2022
Optimize_Prime@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil
This paper tries to address the problem of abusive comment detection in ...

04/16/2021
Temporal Adaptation of BERT and Performance on Downstream Document Classification: Insights from Social Media
Language use differs between domains and even within a domain, language ...

02/03/2021
HeBERT & HebEMO: a Hebrew BERT Model and a Tool for Polarity Analysis and Emotion Recognition
The use of Bidirectional Encoder Representations from Transformers (BERT...

07/21/2020
XD at SemEval-2020 Task 12: Ensemble Approach to Offensive Language Identification in Social Media Using Transformer Encoders
This paper presents six document classification models using the latest ...
