Multilingual and Multimodal Abuse Detection

04/03/2022
by Rini Sharon, et al.

The presence of abusive content on social media platforms is undesirable as it severely impedes healthy and safe social media interactions. While automatic abuse detection has been widely explored in the textual domain, audio abuse detection still remains unexplored. In this paper, we attempt abuse detection in conversational audio from a multimodal perspective in a multilingual social media setting. Our key hypothesis is that, along with modelling the audio itself, incorporating discriminative information from other modalities can be highly beneficial for this task. Our proposed method, MADA, explicitly focuses on two modalities beyond the audio: the underlying emotions expressed in the abusive audio and the semantic information encapsulated in the corresponding textual form. Our observations show that MADA demonstrates gains over audio-only approaches on the ADIMA dataset. We test the proposed approach on 10 different languages and observe consistent gains in the range 0.6 by leveraging multiple modalities. We also perform extensive ablation experiments to study the contribution of each modality and observe the best results when leveraging all modalities together. Additionally, we perform experiments that empirically confirm a strong correlation between underlying emotions and abusive behaviour.
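As a rough illustration of this kind of multimodal setup, the sketch below fuses pre-extracted audio, emotion, and text embeddings with simple per-modality encoders and a concatenation-based abuse classifier. The encoder sizes, feature dimensions, and late-fusion strategy are illustrative assumptions, not the exact MADA architecture described in the paper.

```python
# Minimal sketch of a multimodal abuse classifier: three modality encoders
# (audio, emotion, text) whose embeddings are concatenated before a binary
# abuse / non-abuse head. All layer sizes and input dimensions are assumed.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """Projects a pre-extracted modality feature vector into a shared space."""

    def __init__(self, in_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MultimodalAbuseClassifier(nn.Module):
    def __init__(self, audio_dim: int, emotion_dim: int, text_dim: int,
                 hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.audio_enc = ModalityEncoder(audio_dim, hidden_dim)
        self.emotion_enc = ModalityEncoder(emotion_dim, hidden_dim)
        self.text_enc = ModalityEncoder(text_dim, hidden_dim)
        # Late fusion: concatenate the three modality embeddings.
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, audio_feat, emotion_feat, text_feat):
        fused = torch.cat(
            [self.audio_enc(audio_feat),
             self.emotion_enc(emotion_feat),
             self.text_enc(text_feat)],
            dim=-1,
        )
        return self.classifier(fused)


if __name__ == "__main__":
    # Dummy batch: e.g. a speech-encoder audio embedding, an emotion-model
    # embedding, and a multilingual text-encoder embedding (dims assumed).
    model = MultimodalAbuseClassifier(audio_dim=768, emotion_dim=128, text_dim=768)
    logits = model(torch.randn(4, 768), torch.randn(4, 128), torch.randn(4, 768))
    print(logits.shape)  # torch.Size([4, 2])
```

Dropping one of the three encoders from the concatenation gives a simple way to run the kind of per-modality ablation the abstract mentions.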

