Transformer-based Context-aware Sarcasm Detection in Conversation Threads from Social Media

05/22/2020
by Xiangjue Dong, et al.

We present a transformer-based sarcasm detection model that accounts for context from the entire conversation thread to make more robust predictions. Our model uses deep transformer layers to perform multi-head attention over the target utterance and the relevant context in the thread. The context-aware models are evaluated on two social media datasets, Twitter and Reddit, and show a 3.1-point improvement, achieving an F1-score of 79.0 and placing among the highest-performing systems of the 36 participants in this shared task.
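The abstract does not spell out how a whole thread is fed to the transformer. As a hedged sketch (an illustrative assumption, not the authors' code), one common approach is to concatenate the context utterances and the target utterance with separator tokens, so multi-head self-attention can relate the target to every part of its context:

```python
def build_thread_input(context_utterances, target_utterance,
                       cls_token="[CLS]", sep_token="[SEP]"):
    """Flatten a conversation thread into a single transformer input.

    Context utterances (oldest first) and the target utterance are
    joined with separator tokens so self-attention can attend across
    both the target and its conversational context.
    """
    parts = [cls_token]
    for utt in context_utterances:
        parts.append(utt)
        parts.append(sep_token)
    parts.append(target_utterance)
    parts.append(sep_token)
    return " ".join(parts)


thread = ["I love waiting in line", "Me too, so fun"]
encoded = build_thread_input(thread, "Yeah, best part of my day")
# "[CLS] I love waiting in line [SEP] Me too, so fun [SEP] Yeah, best part of my day [SEP]"
```

The resulting string would then be tokenized and passed to a pretrained transformer encoder; the token and separator names here are placeholders following BERT-style conventions.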


