Zero-Label Prompt Selection

11/09/2022
by Chonghua Liao, et al.

Natural language prompts have been shown to facilitate cross-task generalization for large language models. However, with no or limited labeled examples, cross-task performance is highly sensitive to the choice of prompt, and selecting a high-performing prompt is challenging given the scarcity of labels. To address this issue, we propose a Zero-Label Prompt Selection (ZPS) method that selects prompts without any labeled data or gradient updates. Specifically, given the candidate human-written prompts for a task, ZPS labels a set of unlabeled data with a prompt ensemble and uses the resulting pseudo-labels for prompt selection. Experiments show that ZPS improves over prior methods by a sizeable margin in zero-label performance. We also extend ZPS to a few-shot setting and show its advantages over strong baselines such as prompt tuning and model tuning.
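The selection procedure described above can be sketched in a few lines: each candidate prompt labels the unlabeled pool, the prompts' predictions are ensembled by majority vote into pseudo-labels, and the prompt that agrees most with those pseudo-labels is selected. This is a minimal illustration, not the paper's implementation; the `predict` function (mapping a prompt and an example to a predicted label) and the majority-vote ensemble are assumptions for the sketch.

```python
from collections import Counter

def zps_select(prompts, unlabeled, predict):
    """Zero-label prompt selection (sketch).

    prompts   : candidate human-written prompt templates
    unlabeled : unlabeled examples for the task
    predict   : fn(prompt, example) -> predicted label
                (stands in for querying the language model)
    """
    # 1. Each candidate prompt labels every unlabeled example.
    preds = {p: [predict(p, x) for x in unlabeled] for p in prompts}

    # 2. Ensemble the prompts' predictions per example (majority vote)
    #    to obtain pseudo-labels.
    pseudo = [
        Counter(preds[p][i] for p in prompts).most_common(1)[0][0]
        for i in range(len(unlabeled))
    ]

    # 3. Score each prompt by its agreement with the pseudo-labels
    #    and return the highest-scoring one.
    def agreement(p):
        return sum(a == b for a, b in zip(preds[p], pseudo)) / len(unlabeled)

    return max(prompts, key=agreement)

# Toy usage with hard-coded predictions (hypothetical labels):
toy_preds = {
    "A": [1, 1, 0, 0],
    "B": [1, 1, 0, 1],
    "C": [0, 1, 0, 0],
}
best = zps_select(["A", "B", "C"], [0, 1, 2, 3],
                  lambda p, x: toy_preds[p][x])
# Pseudo-labels by majority vote are [1, 1, 0, 0]; prompt "A" matches all four.
```

Note that no labeled data or gradient updates are involved: the ensemble's own consensus serves as the selection signal.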


Related research

12/20/2022
Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?
Large language models can perform new tasks in a zero-shot fashion, give...

10/17/2022
Learning Instructions with Unlabeled Data for Zero-Shot Cross-Task Generalization
Training language models to learn from human instructions for zero-shot ...

09/09/2021
MetaXT: Meta Cross-Task Transfer between Disparate Label Spaces
Albeit the universal representational power of pre-trained language mode...

12/03/2021
Neural Pseudo-Label Optimism for the Bank Loan Problem
We study a class of classification problems best exemplified by the bank...

09/19/2021
Towards Zero-Label Language Learning
This paper explores zero-label learning in Natural Language Processing (...

06/04/2023
ProTeCt: Prompt Tuning for Hierarchical Consistency
Large visual-language models, like CLIP, learn generalized representatio...

05/15/2023
Symbol tuning improves in-context learning in language models
We present symbol tuning - finetuning language models on in-context inpu...
