Probing Simile Knowledge from Pre-trained Language Models

04/27/2022
by Weijie Chen, et al.

Simile interpretation (SI) and simile generation (SG) are challenging tasks for NLP because models require adequate world knowledge to produce predictions. Previous works have employed many hand-crafted resources to inject knowledge into models, which is time-consuming and labor-intensive. In recent years, approaches based on pre-trained language models (PLMs) have become the de facto standard in NLP, since PLMs learn generic knowledge from large corpora. The knowledge embedded in PLMs may be useful for the SI and SG tasks, yet few works have explored it. In this paper, we probe simile knowledge from PLMs to solve the SI and SG tasks within a unified framework of simile triple completion for the first time. The backbone of our framework is to construct masked sentences from manual patterns and then predict candidate words at the masked position. Within this framework, we adopt a secondary training process (Adjective-Noun mask Training) with the masked language model (MLM) loss to increase the diversity of the candidate words predicted at the masked position. Moreover, pattern ensemble (PE) and pattern search (PS) are applied to improve the quality of the predicted words. Finally, automatic and human evaluations demonstrate the effectiveness of our framework on both the SI and SG tasks.
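To make the masked-pattern backbone concrete, here is a minimal sketch of MLM-based simile probing with HuggingFace transformers. The pattern "The {topic} is as {attribute} as a [MASK]." and the bert-base-uncased checkpoint are illustrative assumptions, not necessarily the authors' exact templates or model.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

# Illustrative assumption: probe the vehicle of a simile triple
# (topic, attribute, vehicle) by masking the vehicle slot of a
# hand-written pattern; the paper's actual patterns may differ.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

pattern = "The {topic} is as {attribute} as a {mask}."
sentence = pattern.format(topic="man", attribute="brave",
                          mask=tokenizer.mask_token)

inputs = tokenizer(sentence, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] ==
            tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**inputs).logits

# Rank vocabulary tokens by probability at the masked position and
# keep the top candidates as predicted vehicles.
probs = logits[0, mask_pos].softmax(dim=-1)
top_ids = probs.topk(5).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```

Under a pattern-ensemble setting, one would repeat this over several patterns and combine the resulting distributions (e.g., by averaging) before ranking; the specific combination rule used by the paper is not stated in the abstract.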


