Reframing Instructional Prompts to GPTk's Language

09/16/2021
by Swaroop Mishra, et al.

How can model designers turn task instructions into effective prompts for language models? Backed by extensive empirical analysis on GPT3, we observe important features of successful instructional prompts and propose several reframing techniques that model designers can use to create them. For example, a complex task can be decomposed into multiple simpler tasks. We experiment on 12 NLP tasks across 6 diverse categories (question generation, classification, etc.). Our results show that reframing improves few-shot learning performance by 14% while reducing sample complexity relative to existing few-shot baselines. The performance gains are particularly important for large language models, such as GPT3, where tuning models or prompts on large datasets is not feasible. Furthermore, we observe that such gains are not limited to GPT3; reframed tasks remain superior to raw instructions across different model architectures, underscoring the cross-model generality of these guidelines. We hope these empirically driven techniques will pave the way for more effective ways to prompt LMs in the future.
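To make the decomposition reframing concrete, here is a minimal sketch in Python. The task and prompt wording are illustrative assumptions, not the paper's exact prompts: a single monolithic question-generation instruction is split into two simpler sub-task prompts, where the model's answer to the first step would be spliced into the second.

```python
def raw_prompt(passage: str) -> str:
    """One monolithic instruction: the kind of complex prompt the
    reframing techniques aim to simplify."""
    return (
        "Read the passage, find an important entity, and write a "
        "question whose answer is that entity.\n"
        f"Passage: {passage}\nQuestion:"
    )


def reframed_prompts(passage: str) -> list:
    """The same task decomposed into two simpler sub-task prompts.
    '{entity}' is a placeholder to be filled with the model's
    answer to step 1 before step 2 is sent."""
    step1 = (
        "List the important entities in the passage.\n"
        f"Passage: {passage}\nEntities:"
    )
    step2 = (
        "Write a question about the passage whose answer is "
        "{entity}.\nQuestion:"
    )
    return [step1, step2]
```

Each sub-prompt asks for one simple thing, which is the intuition behind the reported few-shot gains: the model no longer has to follow a long multi-part instruction in a single step.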

