In-Context Instruction Learning

02/28/2023
by Seonghyeon Ye, et al.

Instruction learning of Large Language Models (LLMs) has enabled zero-shot task generalization. However, instruction learning has been predominantly approached as a fine-tuning problem, including instruction tuning and reinforcement learning from human feedback, where LLMs are multi-task fine-tuned on various tasks with instructions. In this paper, we present a surprising finding that applying in-context learning to instruction learning, referred to as In-Context Instruction Learning (ICIL), significantly improves the zero-shot task generalization performance for both pretrained and instruction-fine-tuned models. One of the core advantages of ICIL is that it uses a single fixed prompt to evaluate all tasks, which is a concatenation of cross-task demonstrations. In particular, we demonstrate that the most powerful instruction-fine-tuned baseline (text-davinci-003) also benefits from ICIL by 9.3%, indicating that the effect of ICIL is complementary to instruction-based fine-tuning.
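As a rough illustration of the fixed-prompt idea described in the abstract, the sketch below assembles an ICIL-style prompt by concatenating a fixed set of cross-task demonstrations and appending the target task's instruction and input. The `Demonstration` class, the field names, and the "Instruction/Input/Output" template are hypothetical conveniences for this sketch, not the authors' exact prompt format.

```python
# Minimal sketch of an ICIL-style prompt builder.
# Assumption: each demonstration is an (instruction, input, output) triple
# drawn from a task unrelated to the evaluation task; the exact template
# below is illustrative, not taken from the paper.

from dataclasses import dataclass


@dataclass
class Demonstration:
    instruction: str  # instruction for some unrelated training task
    input_text: str   # an example input for that task
    output_text: str  # the corresponding gold output


def build_icil_prompt(demos: list[Demonstration],
                      task_instruction: str,
                      task_input: str) -> str:
    """Concatenate fixed cross-task demonstrations, then append the target task.

    The same `demos` block is reused verbatim for every evaluation task,
    which is what makes the prompt "single" and "fixed".
    """
    blocks = [
        f"Instruction: {d.instruction}\n"
        f"Input: {d.input_text}\n"
        f"Output: {d.output_text}"
        for d in demos
    ]
    # The target task is appended last, with its output left blank
    # for the model to complete.
    blocks.append(
        f"Instruction: {task_instruction}\nInput: {task_input}\nOutput:"
    )
    return "\n\n".join(blocks)


# Example usage with a single made-up demonstration:
demos = [
    Demonstration(
        instruction="Classify the sentiment of the sentence as positive or negative.",
        input_text="The movie was a delight from start to finish.",
        output_text="positive",
    ),
]
print(build_icil_prompt(demos, "Translate the sentence to French.", "Good morning."))
```

Because the demonstration block is fixed, the same prompt prefix can be cached and reused across all evaluation tasks; only the final instruction and input change per query.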

research
09/03/2021

Finetuned Language Models Are Zero-Shot Learners

This paper explores a simple method for improving the zero-shot learning...
research
05/22/2023

Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A Preliminary Study on Writing Assistance

ChatGPT and GPT-4 have attracted substantial interest from both academic...
research
12/22/2022

OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization

Recent work has shown that fine-tuning large pre-trained language models...
research
04/24/2023

AMR Parsing with Instruction Fine-tuned Pre-trained Language Models

Instruction fine-tuned language models on a collection of instruction an...
research
08/02/2023

Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation

In this work, we evaluate 10 open-source instructed LLMs on four represe...
research
07/19/2023

Can Instruction Fine-Tuned Language Models Identify Social Bias through Prompting?

As the breadth and depth of language model applications continue to expa...
research
09/18/2023

Understanding Catastrophic Forgetting in Language Models via Implicit Inference

Fine-tuning (via methods such as instruction-tuning or reinforcement lea...
