Decoding the Underlying Meaning of Multimodal Hateful Memes

05/28/2023
by Ming Shan Hee, et al.

Recent studies have proposed models that yield promising performance on the hateful meme classification task. Nevertheless, these models do not generate interpretable explanations that uncover the underlying meaning of a meme and support the classification output. A major reason for the lack of explainable hateful meme methods is the absence of a hateful meme dataset containing ground-truth explanations for benchmarking or training. Intuitively, such explanations can educate and assist content moderators in interpreting and removing flagged hateful memes. This paper addresses this research gap by introducing the Hateful memes with Reasons Dataset (HatReD), a new multimodal hateful meme dataset annotated with the underlying hateful contextual reasons. We also define a new conditional generation task that aims to automatically generate the underlying reasons explaining hateful memes, and we establish the baseline performance of state-of-the-art pre-trained language models on this task. We further demonstrate the usefulness of HatReD by analyzing the challenges of the new conditional generation task in explaining memes from seen and unseen domains. The dataset and benchmark models are made available here: https://github.com/Social-AI-Studio/HatRed
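The conditional generation task described in the abstract can be framed as mapping a meme's multimodal content to a free-text reason. A minimal sketch of how such training pairs might be serialized for a pre-trained seq2seq language model is shown below; the field names (`text`, `caption`, `reasons`) and the serialization format are illustrative assumptions, not the actual HatReD schema.

```python
def build_example(meme: dict) -> tuple[str, str]:
    """Serialize one annotated meme into a (source, target) pair
    suitable for fine-tuning a pre-trained seq2seq language model.

    Assumed (hypothetical) fields:
      - meme["text"]: the text overlaid on the meme image
      - meme["caption"]: a textual description of the image content
      - meme["reasons"]: annotated underlying hateful reasons
    """
    # Concatenate the two modalities into a single text input.
    source = f"text: {meme['text']} caption: {meme['caption']}"
    # A meme may carry several annotated reasons; join them as one target.
    target = " ".join(meme["reasons"])
    return source, target


if __name__ == "__main__":
    example = {
        "text": "look who showed up",
        "caption": "a cartoon animal wearing a crown",
        "reasons": ["implies a targeted group is unwelcome"],
    }
    src, tgt = build_example(example)
    print(src)
    print(tgt)
```

The resulting (source, target) strings could then be fed to any encoder-decoder model's standard fine-tuning loop; the abstract does not specify which pre-trained language models were benchmarked, so none is named here.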


