
Disentangling Hate in Online Memes

by Rui Cao, et al.
Singapore Management University
Singapore University of Technology and Design

Hateful and offensive content detection has been extensively explored in a single modality such as text. However, such toxic information can also be communicated via multimodal content such as online memes. Detecting multimodal hateful content has therefore recently garnered much attention in academic and industry research communities. This paper contributes to this emerging research topic by proposing DisMultiHate, a novel framework for the classification of multimodal hateful content. Specifically, DisMultiHate is designed to disentangle target entities in multimodal memes to improve both hateful content classification and explainability. We conduct extensive experiments on two publicly available hateful and offensive memes datasets. Our experimental results show that DisMultiHate outperforms state-of-the-art unimodal and multimodal baselines on the hateful meme classification task. We also conduct empirical case studies demonstrating DisMultiHate's ability to disentangle target entities in memes, showcasing its explainability in the multimodal hateful content classification task.

