AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models

09/09/2021
by   Harish Tayyar Madabushi, et al.

Despite their success in a variety of NLP tasks, pre-trained language models, due to their heavy reliance on compositionality, fail to effectively capture the meanings of multiword expressions (MWEs), especially idioms. Datasets and methods to improve the representation of MWEs are therefore urgently needed. Existing datasets are limited to providing the degree of idiomaticity of expressions along with the literal and, where applicable, a single non-literal interpretation of MWEs. This work presents a novel dataset of naturally occurring sentences containing MWEs, manually classified into a fine-grained set of meanings, spanning both English and Portuguese. We use this dataset in two tasks designed to test i) a language model's ability to detect idiom usage, and ii) the effectiveness of a language model in generating representations of sentences containing idioms. Our experiments demonstrate that, on the task of detecting idiomatic usage, these models perform reasonably well in the one-shot and few-shot scenarios, but that there is significant scope for improvement in the zero-shot scenario. On the task of representing idiomaticity, we find that pre-training is not always effective, while fine-tuning can provide a sample-efficient method of learning representations of sentences containing MWEs.
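The zero-shot, one-shot, and few-shot scenarios mentioned in the abstract differ in how many examples of each test-set MWE the model sees during training: none, one, or a few. A minimal sketch of how such splits can be constructed is below; the toy sentences, labels, and function names are illustrative assumptions, not the paper's actual data or code.

```python
# Illustrative sketch (not the paper's code): building zero-shot vs.
# one-shot splits for idiomaticity detection. Each example is a
# (sentence, MWE, idiomatic-usage label) triple; data is made up.

examples = [
    ("He finally kicked the bucket last year.", "kick the bucket", 1),
    ("She kicked the bucket across the yard.", "kick the bucket", 0),
    ("They broke the ice with a joke.", "break the ice", 1),
    ("The skaters broke the ice on the pond.", "break the ice", 0),
    ("That exam was a piece of cake.", "piece of cake", 1),
    ("He ate a piece of cake at the party.", "piece of cake", 0),
]

def zero_shot_split(examples, test_mwes):
    """Zero-shot: MWEs in the test set never occur in training."""
    train = [e for e in examples if e[1] not in test_mwes]
    test = [e for e in examples if e[1] in test_mwes]
    return train, test

def one_shot_split(examples, test_mwes):
    """One-shot: exactly one example of each test MWE is moved
    from the test set into the training set."""
    train, test = zero_shot_split(examples, test_mwes)
    seen = set()
    remaining = []
    for e in test:
        if e[1] not in seen:
            train.append(e)   # first occurrence of this MWE -> training
            seen.add(e[1])
        else:
            remaining.append(e)
    return train, remaining

train, test = zero_shot_split(examples, {"piece of cake"})
print(len(train), len(test))  # prints: 4 2
```

In the zero-shot setting the model must generalize to idioms it has never seen, which is where the abstract reports the most room for improvement.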


