Robustness of Demonstration-based Learning Under Limited Data Scenario

10/19/2022
by   Hongxin Zhang, et al.

Demonstration-based learning has shown great potential in stimulating pretrained language models' ability under limited data scenarios. Simply augmenting the input with some demonstrations can significantly improve performance on few-shot NER. However, why such demonstrations benefit the learning process remains unclear, since there is no explicit alignment between the demonstrations and the predictions. In this paper, we design pathological demonstrations by gradually removing intuitively useful information from the standard ones to take a deep dive into the robustness of demonstration-based sequence labeling, and show that (1) demonstrations composed of random tokens still make the model a better few-shot learner; (2) the length of random demonstrations and the relevance of random tokens are the main factors affecting the performance; (3) demonstrations increase the confidence of model predictions on captured superficial patterns. We have publicly released our code at https://github.com/SALT-NLP/RobustDemo.
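
As a rough illustration of the setup described above (a minimal sketch, not the authors' released implementation; the template, separator, and function names are assumptions), the following Python snippet shows how an entity-oriented demonstration can be concatenated to the input for few-shot NER, and how a pathological variant replaces it with random tokens of the same length:

```python
import random

SEP = " [SEP] "  # assumed separator between the input and the demonstration

def standard_demo(tokens, labels):
    """Build a simple entity-oriented demonstration from one labeled
    support example, e.g. 'Obama is PER. Hawaii is LOC.'"""
    parts = []
    for tok, lab in zip(tokens, labels):
        if lab != "O":  # keep only entity tokens
            parts.append(f"{tok} is {lab.split('-')[-1]}.")
    return " ".join(parts)

def random_demo(vocab, length):
    """Pathological demonstration: same length, but random tokens only."""
    return " ".join(random.choices(vocab, k=length))

def augment(sentence, demo):
    # The model sees the sentence followed by the demonstration, but
    # labels are still predicted only for the original sentence.
    return sentence + SEP + demo

support_tokens = ["Obama", "visited", "Hawaii"]
support_labels = ["B-PER", "O", "B-LOC"]
demo = standard_demo(support_tokens, support_labels)
print(augment("Biden met Macron in Paris", demo))

vocab = ["alpha", "beta", "gamma", "delta"]
print(augment("Biden met Macron in Paris",
              random_demo(vocab, len(demo.split()))))
```

Only the concatenation pattern matters for the robustness tests: per finding (1) above, even the random-token variant can still improve few-shot performance.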

Related research

10/16/2021
Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER
Recent advances in prompt-based learning have shown impressive results o...

08/17/2023
Exploring Demonstration Ensembling for In-context Learning
In-context learning (ICL) operates by showing language models (LMs) exam...

08/31/2022
Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation
Demonstration learning aims to guide the prompt prediction via providing...

05/23/2023
Dr.ICL: Demonstration-Retrieved In-context Learning
In-context learning (ICL), teaching a large language model (LLM) to perf...

04/09/2022
Contrastive Demonstration Tuning for Pre-trained Language Models
Pretrained language models can be effectively stimulated by textual prom...

09/09/2023
EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets
Large language models (LLMs) have shown promising performance on various...

02/28/2022
Pedagogical Demonstrations and Pragmatic Learning in Artificial Tutor-Learner Interactions
When demonstrating a task, human tutors pedagogically modify their behav...
