Duluth at SemEval-2020 Task 7: Using Surprise as a Key to Unlock Humorous Headlines

09/06/2020
by Shuning Jin, et al.

We use pretrained transformer-based language models in SemEval-2020 Task 7: Assessing the Funniness of Edited News Headlines. Inspired by the incongruity theory of humor, we use a contrastive approach to capture the surprise produced by the edited headlines. In the official evaluation, our system achieves an RMSE of 0.531 in Subtask 1, ranking 11th among 49 submissions. In Subtask 2, it achieves 0.632 accuracy, ranking 9th among 32 submissions.
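The abstract does not spell out the architecture, but the sketch below shows one way a contrastive setup of this kind could be wired up: a shared pretrained transformer encodes both the original and the edited headline, and a regression head scores funniness from the two representations and their difference (the "surprise" signal). This is a minimal illustration, not the authors' released system; the choice of bert-base-uncased, the mean pooling, and the difference-based feature are assumptions.

```python
# Minimal sketch (assumed details, not the Duluth system's code) of a
# contrastive funniness regressor for edited headlines.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ContrastiveFunninessRegressor(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):  # model choice is an assumption
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Score funniness from [original, edited, edited - original] representations.
        self.head = nn.Linear(3 * hidden, 1)

    def encode(self, enc):
        # Mean-pool token embeddings, masking out padding tokens.
        out = self.encoder(**enc).last_hidden_state
        mask = enc["attention_mask"].unsqueeze(-1).float()
        return (out * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def forward(self, original_enc, edited_enc):
        orig = self.encode(original_enc)
        edit = self.encode(edited_enc)
        feats = torch.cat([orig, edit, edit - orig], dim=-1)
        return self.head(feats).squeeze(-1)  # predicted funniness grade

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ContrastiveFunninessRegressor()
orig = tokenizer(["Senate passes sweeping tax bill"], return_tensors="pt", padding=True)
edit = tokenizer(["Senate passes sweeping nap bill"], return_tensors="pt", padding=True)
print(model(orig, edit))  # untrained output; train with MSE loss against the mean grades
```

For Subtask 2 (deciding which of two edits to the same headline is funnier), the same scorer can be applied to each candidate edit and the higher-scoring one selected.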


