Ask2Transformers: Zero-Shot Domain labelling with Pre-trained Language Models

01/07/2021
by Oscar Sainz, et al.

In this paper we present a system that exploits different pre-trained Language Models to assign domain labels to WordNet synsets without any kind of supervision. Furthermore, the system is not restricted to using a particular set of domain labels. We exploit the knowledge encoded within different off-the-shelf pre-trained Language Models and task formulations to infer the domain label of a particular WordNet definition. The proposed zero-shot system achieves a new state of the art on the English dataset used in the evaluation.
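Conceptually, this kind of zero-shot labelling can be framed as textual entailment: the synset definition is the premise and each candidate domain is verbalised as a hypothesis. The sketch below illustrates that general idea with the HuggingFace zero-shot-classification pipeline; the model, hypothesis template, and domain labels are placeholder assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of zero-shot domain labelling for a WordNet definition
# via NLI entailment. Illustrative only: the model, hypothesis template,
# and candidate domains below are assumptions, not the paper's exact setup.
from nltk.corpus import wordnet as wn  # may require nltk.download("wordnet")
from transformers import pipeline

# An off-the-shelf MNLI-finetuned model acts as the zero-shot classifier.
classifier = pipeline("zero-shot-classification", model="roberta-large-mnli")

# The WordNet gloss (definition) is the text to be labelled.
definition = wn.synset("dog.n.01").definition()

# Any label set works: the approach is not tied to a fixed domain inventory.
domains = ["biology", "sport", "medicine", "economy", "transport"]

# Each domain is verbalised as an entailment hypothesis against the gloss.
result = classifier(
    definition,
    candidate_labels=domains,
    hypothesis_template="The domain of the definition is {}.",
)
print(result["labels"][0])  # highest-scoring domain label
```

Swapping the hypothesis template or the label set changes the task formulation without any retraining, which is the flexibility the abstract highlights.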

