Distilling Hypernymy Relations from Language Models: On the Effectiveness of Zero-Shot Taxonomy Induction

02/10/2022
by Devansh Jain, et al.
In this paper, we analyze zero-shot taxonomy learning methods which are based on distilling knowledge from language models via prompting and sentence scoring. We show that, despite their simplicity, these methods outperform some supervised strategies and are competitive with the current state-of-the-art under adequate conditions. We also show that statistical and linguistic properties of prompts dictate downstream performance.
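The prompting-and-sentence-scoring idea the abstract describes can be sketched as follows. This is an illustrative assumption of the mechanism, not the paper's exact method: the prompt pattern, the length-normalized scoring, and all function names are hypothetical, and the toy log-probabilities stand in for a real pretrained language model, which would normally supply per-token scores.

```python
def fill_prompt(term, candidate):
    # One hypothetical "is-a" prompt pattern; the paper's point is that
    # the statistical/linguistic properties of such patterns matter.
    return f"{term} is a type of {candidate}."

def sentence_score(token_log_probs):
    # Length-normalized sentence log-probability (mean token log-prob),
    # so candidate hypernyms of different lengths compare fairly.
    return sum(token_log_probs) / len(token_log_probs)

def rank_hypernyms(term, candidates, log_prob_fn):
    # Score the filled prompt for each candidate hypernym, best first.
    scored = [(c, sentence_score(log_prob_fn(fill_prompt(term, c))))
              for c in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Made-up per-token log-probs, just to show the ranking flow; a real
# system would obtain these by scoring the sentence with an LM.
toy_scores = {
    "dog is a type of animal.": [-2.1, -1.0, -0.8, -1.2, -0.5, -0.9, -1.1],
    "dog is a type of vehicle.": [-2.1, -1.0, -0.8, -1.2, -0.5, -4.8, -1.1],
}

def toy_log_prob_fn(sentence):
    return toy_scores[sentence]
```

Under this sketch, `rank_hypernyms("dog", ["animal", "vehicle"], toy_log_prob_fn)` ranks "animal" above "vehicle" because the LM assigns the implausible completion a much lower token probability; taxonomy edges are then read off the top-ranked candidates with no supervised training.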


