Learning To Retrieve Prompts for In-Context Learning

12/16/2021
by   Ohad Rubin, et al.

In-context learning is a recent paradigm in natural language understanding, where a large pre-trained language model (LM) observes a test instance and a few training examples as its input, and directly decodes the output without any update to its parameters. However, performance has been shown to depend strongly on the selected training examples (termed the prompt). In this work, we propose an efficient method for retrieving prompts for in-context learning using annotated data and an LM. Given an input-output pair, we estimate the probability of the output given the input and a candidate training example as the prompt, and label training examples as positive or negative based on this probability. We then train an efficient dense retriever from this data, which is used to retrieve training examples as prompts at test time. We evaluate our approach on three sequence-to-sequence tasks where language utterances are mapped to meaning representations, and find that it substantially outperforms prior work and multiple baselines across the board.
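The candidate-labeling step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `lm_log_prob` is a hypothetical stand-in (here, simple token overlap) for scoring a candidate example by the LM's probability of the output given the input with that candidate as the prompt, and `label_candidates` ranks candidates by that score to produce positives and negatives for retriever training.

```python
def lm_log_prob(candidate, x, y):
    """Stand-in scorer (hypothetical): in the real method this would be the
    LM's log p(y | candidate, x). Here we use token overlap between the
    candidate's input and the test input purely for illustration."""
    cand_input, _ = candidate
    return len(set(cand_input.split()) & set(x.split()))

def label_candidates(train_set, x, y, k=1):
    """Score every candidate training example as a one-shot prompt for the
    pair (x, y), then label the top-k as positives and the bottom-k as
    negatives for training a dense retriever."""
    ranked = sorted(train_set, key=lambda ex: lm_log_prob(ex, x, y), reverse=True)
    positives, negatives = ranked[:k], ranked[-k:]
    return positives, negatives
```

A usage sketch: given a small pool of (utterance, meaning-representation) pairs, `label_candidates(pool, "book a flight to rome", "book_flight(rome)")` would return the highest-scoring candidate as a positive and the lowest-scoring one as a negative, and these labels supervise the retriever used at test time.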

