IDS at SemEval-2020 Task 10: Does Pre-trained Language Model Know What to Emphasize?

07/24/2020
by Jaeyoul Shin, et al.

We propose a novel method for determining which words in written text for visual media deserve to be emphasized, relying only on the information in the self-attention distributions of pre-trained language models (PLMs). With extensive experiments and analyses, we show that 1) our zero-shot approach is superior to a reasonable baseline that adopts TF-IDF and that 2) several attention heads in PLMs are specialized for emphasis selection, confirming that PLMs are capable of recognizing important words in sentences.
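The core idea, scoring each token by the attention mass it receives in heads specialized for emphasis, can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: in practice the attention tensor would come from a PLM such as BERT (e.g. via `output_attentions=True` in HuggingFace Transformers), and the head indices would be chosen empirically; here both are hypothetical toy values.

```python
import numpy as np

def emphasis_scores(attn, head_ids):
    """Score each token by the attention it receives.

    attn: array of shape (num_heads, seq_len, seq_len), where row i of a
          head's matrix is the attention distribution from token i.
    head_ids: indices of heads assumed to be specialized for emphasis
              (hypothetical in this sketch).
    Returns one score per token: attention received, summed over source
    tokens and averaged over the selected heads.
    """
    selected = attn[head_ids]            # (k, seq_len, seq_len)
    received = selected.sum(axis=1)      # column sums: attention received per head
    return received.mean(axis=0)         # average across the chosen heads

# Toy attention tensor: 2 heads, 3 tokens; each row sums to 1.
attn = np.array([
    [[0.1, 0.8, 0.1],
     [0.2, 0.6, 0.2],
     [0.3, 0.5, 0.2]],
    [[0.3, 0.4, 0.3],
     [0.3, 0.4, 0.3],
     [0.2, 0.6, 0.2]],
])
scores = emphasis_scores(attn, [0, 1])
print(scores)        # token 1 attracts the most attention
```

Ranking tokens by these scores yields the zero-shot emphasis prediction, with no task-specific training involved.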


