Classification of Multimodal Hate Speech – The Winning Solution of Hateful Memes Challenge

12/02/2020
by Xiayu Zhong, et al.

Hateful Memes is a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes. Difficult examples are added to the dataset so that unimodal signals alone are insufficient, meaning only genuinely multimodal models can succeed. According to Kiela et al., state-of-the-art methods perform poorly compared to humans (64.73% accuracy for the best model). I propose a new model that combines a multimodal model with rules, which achieved first place in both accuracy and AUROC, with 86.8% accuracy. The rules are extracted from the training set and focus on improving classification accuracy on the difficult samples.
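The abstract does not specify how the extracted rules are combined with the multimodal model's predictions. A minimal sketch of one plausible scheme — a rule lookup that overrides the model's score when a known pattern from the training set is matched — is shown below; all names and the rule format are hypothetical, not taken from the paper.

```python
def apply_rules(text: str, model_score: float, rule_table: dict, threshold: float = 0.5) -> float:
    """Combine a multimodal model's hate-speech score with extracted rules.

    rule_table maps normalized meme text to a hard label (1.0 = hateful,
    0.0 = not hateful). A rule hit overrides the model; otherwise the
    model's score is thresholded as usual. This is an illustrative
    sketch, not the paper's actual rule system.
    """
    key = text.strip().lower()          # normalize the meme text for lookup
    if key in rule_table:               # rule extracted from the training set
        return rule_table[key]          # rule overrides the model's prediction
    return 1.0 if model_score >= threshold else 0.0


# Hypothetical usage: a rule flips a low model score on a known difficult sample.
rules = {"example hateful caption": 1.0}
label = apply_rules("Example Hateful Caption ", 0.2, rules)  # rule fires → 1.0
```

The point of such a scheme is that difficult samples — where the model's unimodal shortcuts fail — can be corrected deterministically when they match patterns already seen in training.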


