Multimodal and Explainable Internet Meme Classification

12/11/2022
by Abhinav Kumar Thakur, et al.

Warning: this paper contains content that may be offensive or upsetting. In the current context where online platforms have been effectively weaponized in a variety of geo-political events and social issues, Internet memes make fair content moderation at scale even more difficult. Existing work on meme classification and tracking has focused on black-box methods that do not explicitly consider the semantics of the memes or the context of their creation. In this paper, we pursue a modular and explainable architecture for Internet meme understanding. We design and implement multimodal classification methods that perform example- and prototype-based reasoning over training cases, while leveraging both textual and visual SOTA models to represent the individual cases. We study the relevance of our modular and explainable models in detecting harmful memes on two existing tasks: Hate Speech Detection and Misogyny Classification. We compare the performance between example- and prototype-based methods, and between text, vision, and multimodal models, across different categories of harmfulness (e.g., stereotype and objectification). We devise a user-friendly interface that facilitates the comparative analysis of examples retrieved by all of our models for any given meme, informing the community about the strengths and limitations of these explainable methods.
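The example-based reasoning described above can be pictured as nearest-neighbour retrieval over fused text and image embeddings, where the retrieved training cases double as the explanation for the prediction. The following is a minimal, hypothetical sketch of that idea; the random vectors stand in for the outputs of the textual and visual encoders, and the names (`fuse`, `knn_predict`) are illustrative, not taken from the paper.

```python
# Hypothetical sketch of example-based multimodal classification:
# embed each modality, fuse, then classify by k-nearest training cases.
import numpy as np

def fuse(text_emb, image_emb):
    """Concatenate the modality embeddings into one multimodal vector."""
    return np.concatenate([text_emb, image_emb])

def knn_predict(query, train_vecs, train_labels, k=3):
    """Label a query meme by majority vote over its k most similar
    training cases; the retrieved cases serve as the explanation."""
    # Cosine similarity between the query and every training vector.
    norms = np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(query)
    sims = train_vecs @ query / norms
    nearest = np.argsort(-sims)[:k]          # indices of the k closest cases
    label = np.bincount(train_labels[nearest]).argmax()
    return label, nearest                     # prediction + supporting examples

# Toy demo: two classes of synthetic 4-D fused embeddings.
rng = np.random.default_rng(0)
train_vecs = np.vstack([rng.normal(loc=m, size=(10, 4)) for m in (0.0, 3.0)])
train_labels = np.array([0] * 10 + [1] * 10)  # 0 = benign, 1 = harmful
query = fuse(np.array([2.9, 3.1]), np.array([3.0, 2.8]))
pred, neighbours = knn_predict(query, train_vecs, train_labels, k=5)
```

A prototype-based variant would compare the query against one learned vector per class (e.g. a class mean) instead of against individual training cases, trading finer-grained explanations for compactness.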


Related research:

05/10/2020 · The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
04/13/2022 · TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the Detection and Classification of Misogynous Memes
02/15/2018 · Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
06/10/2021 · Deciphering Implicit Hate: Evaluating Automated Detection Algorithms for Multimodal Hate
12/02/2020 · Classification of Multimodal Hate Speech – The Winning Solution of Hateful Memes Challenge
09/25/2018 · Explainable PCGML via Game Design Patterns
12/12/2022 · Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments
