It's not Rocket Science: Interpreting Figurative Language in Narratives

08/31/2021
by Tuhin Chakrabarty et al.

Figurative language is ubiquitous in English, yet the vast majority of NLP research focuses on literal language. Existing text representations rely on compositionality by design, while figurative language is often non-compositional. In this paper, we study the interpretation of two types of non-compositional figurative language: idioms and similes. We collected datasets of fictional narratives containing a figurative expression, along with crowd-sourced plausible and implausible continuations whose plausibility hinges on the correct interpretation of the expression. We then trained models to choose or to generate the plausible continuation. Our experiments show that models based solely on pre-trained language models perform substantially worse than humans on these tasks. We additionally propose knowledge-enhanced models that adopt human strategies for interpreting figurative language: inferring meaning from the context and relying on the literal meanings of the constituent words. The knowledge-enhanced models improve performance on both the discriminative and generative tasks, further narrowing the gap with human performance.
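The discriminative task described above amounts to picking whichever continuation a language model scores as more likely given the narrative context. A minimal sketch of that selection step, using a toy add-one-smoothed unigram model as a hypothetical stand-in for a pre-trained LM (the corpus, names, and example sentences are all illustrative, not from the paper's dataset):

```python
import math
from collections import Counter

def train_unigram(corpus_tokens):
    # Toy stand-in for a pre-trained LM: smoothed unigram log-probabilities.
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts)
    # Add-one smoothing so unseen words still get nonzero probability.
    return lambda tok: math.log((counts[tok] + 1) / (total + vocab))

def continuation_score(logprob, context, continuation):
    # Score = average per-token log-probability of context + continuation.
    text = (context + " " + continuation).lower()
    tokens = text.replace(".", "").replace(",", "").split()
    return sum(logprob(t) for t in tokens) / len(tokens)

def choose_plausible(logprob, context, cont_a, cont_b):
    # Discriminative task: return the continuation the model finds likelier.
    score_a = continuation_score(logprob, context, cont_a)
    score_b = continuation_score(logprob, context, cont_b)
    return cont_a if score_a >= score_b else cont_b

# Hypothetical training corpus pairing idioms with their interpretations.
corpus = ("she gave up the fight she stopped trying "
          "he kicked the bucket he died suddenly").lower().split()
lm = train_unigram(corpus)

context = "After years of trying, she threw in the towel."
plausible = "She stopped trying."
implausible = "She bought a new towel."
print(choose_plausible(lm, context, plausible, implausible))
# → She stopped trying.
```

In the paper's actual setting the scoring function would be a pre-trained language model's likelihood rather than this unigram toy, but the selection logic is the same: a continuation consistent with the idiom's figurative meaning should receive the higher score.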

