Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations

05/25/2022
by   Junyeob Kim, et al.

Despite a recent explosion of research interest, in-context learning and the precise impact of demonstration quality remain elusive. Although the current literature suggests that in-context learning shares a mechanism similar to supervised learning, Min et al. (2022) recently reported the surprising finding that input-label correspondence matters less than other aspects of prompt demonstrations. Inspired by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning from diverse and statistical points of view. Using newly introduced metrics, namely the Ground-truth Label Effect Ratio (GLER), demo-gain, and label sensitivity, we find that the impact of correct input-label matching can vary considerably across configurations. Expanding upon the previous key finding on the role of demonstrations, our complementary and contrastive results suggest that more care is needed when estimating the impact of each component of in-context learning demonstrations.
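The abstract names three metrics (GLER, demo-gain, and label sensitivity) but does not define them here. As a rough illustration of the kind of comparison such metrics quantify, the sketch below contrasts a model's in-context accuracy when demonstrations carry gold labels against demonstrations whose labels are replaced uniformly at random. The `predict` callable, the prompt format, and the `gold_vs_random_label_gap` ratio are illustrative assumptions, not the paper's definitions.

```python
import random
from typing import Callable, List, Tuple

def build_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Format the in-context demonstrations followed by the test input."""
    blocks = [f"Input: {x}\nLabel: {y}" for x, y in demos]
    blocks.append(f"Input: {query}\nLabel:")
    return "\n\n".join(blocks)

def accuracy(predict: Callable[[str], str],
             demos: List[Tuple[str, str]],
             test_set: List[Tuple[str, str]]) -> float:
    """Fraction of test examples whose predicted label matches the gold label."""
    hits = sum(predict(build_prompt(demos, x)).strip() == y for x, y in test_set)
    return hits / len(test_set)

def gold_vs_random_label_gap(predict: Callable[[str], str],
                             demos: List[Tuple[str, str]],
                             test_set: List[Tuple[str, str]],
                             label_space: List[str],
                             seed: int = 0) -> float:
    """Relative accuracy drop when demonstration labels are replaced by labels
    drawn uniformly at random (a GLER-like quantity; hypothetical formula)."""
    rng = random.Random(seed)
    corrupted = [(x, rng.choice(label_space)) for x, _ in demos]
    acc_gold = accuracy(predict, demos, test_set)
    acc_rand = accuracy(predict, corrupted, test_set)
    return (acc_gold - acc_rand) / max(acc_gold, 1e-8)
```

In this sketch, a value near 0 would mean random labels barely hurt performance (the regime reported by Min et al., 2022), while a value near 1 would mean performance collapses without correct input-label matching; the paper's actual metric definitions should be taken from the full text.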


Related research:

02/25/2022 - Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?
07/11/2023 - Towards Understanding In-Context Learning with Contrastive Demonstrations and Saliency Maps
05/16/2023 - What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning
06/20/2021 - Improving Label Quality by Jointly Modeling Items and Annotators
09/14/2023 - Ambiguity-Aware In-Context Learning with Large Language Models
01/28/2023 - DALI: Dynamically Adjusted Label Importance for Noisy Partial Label Learning
02/24/2018 - Water from Two Rocks: Maximizing the Mutual Information
