NLP-CUET@DravidianLangTech-EACL2021: Investigating Visual and Textual Features to Identify Trolls from Multimodal Social Media Memes

02/28/2021
by   Eftekhar Hossain, et al.

In the past few years, memes have become a new way of communication on the Internet. As memes are images with embedded text, they can quickly spread hate, offence, and violence. Classifying memes is very challenging because of their multimodal nature and region-specific interpretation. A shared task was organized to develop models that can identify trolls from multimodal social media memes. This work presents a computational model that we developed as part of our participation in the task. Training data comes in two forms: an image with embedded Tamil code-mixed text and an associated caption given in English. We investigated the visual and textual features using CNN, VGG16, Inception, Multilingual-BERT, XLM-RoBERTa, and XLNet models. Multimodal features are extracted by combining image (CNN, ResNet50, Inception) and text (long short-term memory network) features via an early fusion approach. Results indicate that the textual approach with XLNet achieved the highest weighted F1-score of 0.58, which enabled our model to secure 3rd rank in this task.
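The early-fusion idea mentioned in the abstract can be sketched in a few lines: features from an image encoder and a text encoder are concatenated into a single joint vector before classification. The minimal sketch below uses NumPy with stand-in encoders; the feature dimensions and encoder stubs are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_features(images):
    # Stand-in for a CNN/ResNet50/Inception encoder:
    # maps each image to a fixed-size feature vector (128 dims, assumed).
    return images.reshape(len(images), -1)[:, :128]

def text_features(token_embeddings):
    # Stand-in for an LSTM encoder over caption tokens:
    # here, mean-pooling the token embeddings (64 dims, assumed).
    return token_embeddings.mean(axis=1)

# A toy batch of 4 meme images and their caption embeddings.
images = rng.random((4, 32, 32, 3))
tokens = rng.random((4, 20, 64))

img_feat = image_features(images)   # shape (4, 128)
txt_feat = text_features(tokens)    # shape (4, 64)

# Early fusion: concatenate the modality features into one joint
# representation, which a downstream classifier (e.g. a dense softmax
# layer) would consume to predict troll vs. not-troll.
fused = np.concatenate([img_feat, txt_feat], axis=1)
print(fused.shape)  # (4, 192)
```

The design choice here is that fusion happens at the feature level, before any decision is made, in contrast to late fusion, where each modality is classified separately and the predictions are combined.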


