EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets

09/09/2023
by Hongyuan Lu, et al.

Large language models (LLMs) have shown promising performance on various NLP tasks via task prompting, and their performance can be further improved by prepending task demonstrations to the prompt. Usually, more demonstrations yield better performance. However, asking users to write demonstrations can be cumbersome. As a simple yet cost-effective workaround, this paper proposes a novel method called EPA (Easy Prompt Augmentation)[While this paper considers augmenting prompts via demonstrations, we name it EPA as the name EDA is already taken by a well-known NLP method <cit.>.] that minimizes user effort in writing demonstrations while improving model performance at the same time. EPA achieves these goals by automatically augmenting the demonstrations with multiple sources/targets, where each of them paraphrases the others. This is well motivated, since data augmentation via paraphrasing is known to improve neural language models; EPA therefore employs paraphrasing as an augmentation method for in-context learning. Extensive experiments indicate that EPA effectively improves both NLU and NLG tasks, ranging from natural language inference to machine translation across tens of languages.[Code and data will be released upon publication.]
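
The abstract does not spell out the augmentation procedure, but the core idea it describes, expanding each demonstration into multiple mutually paraphrased source/target pairs before assembling the few-shot prompt, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the paraphrase function, the cross-product pairing, and the prompt template are placeholders.

# Minimal sketch of paraphrase-based demonstration augmentation for
# in-context learning, in the spirit of EPA. `paraphrase` is a hypothetical
# stand-in for any paraphrasing model (e.g. round-trip translation or an LLM
# prompted to paraphrase); it is not part of the paper's released code.
from itertools import product
from typing import Callable, List, Tuple


def augment_demonstrations(
    demos: List[Tuple[str, str]],
    paraphrase: Callable[[str], List[str]],
) -> List[Tuple[str, str]]:
    """Expand each (source, target) demonstration with paraphrased variants."""
    augmented = []
    for source, target in demos:
        sources = [source] + paraphrase(source)
        targets = [target] + paraphrase(target)
        # Pair every source variant with every target variant, assuming all
        # variants are faithful mutual paraphrases.
        augmented.extend(product(sources, targets))
    return augmented


def build_prompt(demos: List[Tuple[str, str]], query: str) -> str:
    """Prepend the (augmented) demonstrations to the task query."""
    blocks = [f"Input: {s}\nOutput: {t}" for s, t in demos]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)


if __name__ == "__main__":
    # Toy paraphraser used only for illustration.
    toy_paraphrase = lambda text: [f"In other words, {text.lower()}"]
    demos = [("The cat sat on the mat.", "Le chat était assis sur le tapis.")]
    print(build_prompt(augment_demonstrations(demos, toy_paraphrase),
                       "The dog ran in the park."))

In this reading, user effort stays constant (one demonstration per task) while the effective number of in-context examples grows with the number of paraphrases generated per source and target.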


