The Flan Collection: Designing Data and Methods for Effective Instruction Tuning

01/31/2023
by   Shayne Longpre, et al.

We study the design decisions of publicly available instruction tuning methods, and break down the development of Flan 2022 (Chung et al., 2022). Through careful ablation studies on the Flan Collection of tasks and methods, we tease apart the effect of design decisions which enable Flan-T5 to outperform prior work by 3-17%+ across evaluation settings. We find task balancing and enrichment techniques are overlooked but critical to effective instruction tuning, and in particular, training with mixed prompt settings (zero-shot, few-shot, and chain-of-thought) actually yields stronger (2%+) performance in all settings. In further experiments, we show Flan-T5 requires less finetuning to converge higher and faster than T5 on single downstream tasks, motivating instruction-tuned models as more computationally-efficient starting checkpoints for new tasks. Finally, to accelerate research on instruction tuning, we make the Flan 2022 collection of datasets, templates, and methods publicly available at https://github.com/google-research/FLAN/tree/main/flan/v2.
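To make the mixed prompt-setting idea concrete, the sketch below renders each training example in a randomly sampled zero-shot, few-shot, or chain-of-thought format, so a single finetuning mixture covers all three settings. The function names, template wording, and mixing weights here are illustrative assumptions, not the actual Flan 2022 pipeline; the real templates are in the repository linked above.

```python
# Minimal sketch of mixed prompt-setting formatting (hypothetical templates,
# not the Flan 2022 codebase).
import random

def zero_shot(ex):
    # Instruction and input only; the model must answer directly.
    return f"{ex['instruction']}\n{ex['input']}", ex["target"]

def few_shot(ex, exemplars):
    # Prepend solved exemplars of the same task before the query.
    shots = "\n\n".join(
        f"{e['instruction']}\n{e['input']}\n{e['target']}" for e in exemplars
    )
    return f"{shots}\n\n{ex['instruction']}\n{ex['input']}", ex["target"]

def chain_of_thought(ex):
    # Chain-of-thought targets include the rationale before the final answer.
    prompt = f"{ex['instruction']}\n{ex['input']}\nLet's think step by step."
    return prompt, f"{ex['rationale']} So the answer is {ex['target']}."

def format_example(ex, exemplars, weights=(0.4, 0.4, 0.2)):
    """Sample one prompt setting per example so one training mixture
    spans zero-shot, few-shot, and chain-of-thought formats."""
    setting = random.choices(("zero", "few", "cot"), weights=weights)[0]
    if setting == "zero":
        return zero_shot(ex)
    if setting == "few":
        return few_shot(ex, exemplars)
    return chain_of_thought(ex)

example = {
    "instruction": "Answer the arithmetic question.",
    "input": "What is 17 + 25?",
    "target": "42",
    "rationale": "17 + 25 = 42.",
}
prompt, target = format_example(example, exemplars=[example])
print(prompt, "->", target)
```

The abstract's finding is that training on such a mixture improves performance in every individual setting, including zero-shot, rather than trading one setting off against another.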


