Unsupervised Paraphrasing of Multiword Expressions

06/02/2023
by Takashi Wada et al.

We propose an unsupervised approach to paraphrasing multiword expressions (MWEs) in context. Our model employs only monolingual corpus data and pre-trained language models (without fine-tuning), and does not make use of any external resources such as dictionaries. We evaluate our method on the SemEval 2022 idiomatic semantic text similarity task, and show that it outperforms all unsupervised systems and rivals supervised systems.
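To make the high-level idea concrete, here is a minimal sketch of one common unsupervised strategy for in-context MWE paraphrasing: substitute candidate paraphrases for the expression and rank them by a language-model score of the full sentence. The paper's actual model is not reproduced here; `toy_lm_score` is a stand-in heuristic for what would, in practice, be a pre-trained LM's (log-)probability, used without fine-tuning.

```python
def toy_lm_score(sentence):
    """Stand-in for a pre-trained LM's sentence score (assumption: the real
    system would use an actual LM without fine-tuning). This toy heuristic
    simply rewards common words and penalizes length."""
    common = {"the", "a", "it", "he", "she", "was", "very", "died", "easy"}
    words = sentence.lower().rstrip(".").split()
    return sum(1 for w in words if w in common) - 0.1 * len(words)

def paraphrase_mwe(sentence, mwe, candidates):
    """Replace the multiword expression `mwe` with each candidate paraphrase,
    score the resulting sentence in context, and return candidates ranked
    best-first."""
    assert mwe in sentence, "MWE must occur in the sentence"
    scored = sorted(
        ((toy_lm_score(sentence.replace(mwe, cand)), cand) for cand in candidates),
        reverse=True,
    )
    return [cand for _, cand in scored]

ranked = paraphrase_mwe(
    "He kicked the bucket last night.",
    "kicked the bucket",
    ["died", "hit the pail", "passed away"],
)
print(ranked[0])  # → died
```

With the toy scorer, the literal but nonsensical "hit the pail" is outranked by "died" because the latter yields a shorter, more plausible sentence; a real pre-trained LM would make the same kind of contextual judgment far more reliably.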
