Exploring the Benefits of Training Expert Language Models over Instruction Tuning

02/07/2023
by Joel Jang, et al.

Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks. Previous work has shown that scaling the number of training tasks is the key component in making stronger MT LMs. In this work, we report an unexpected finding that an expert LM fine-tuned on just a single task can outperform an MT LM trained with 300+ different tasks on 11 different unseen datasets and on 13 datasets of the BIG-bench benchmark by a mean accuracy of 3.20% and 1.29%, respectively. This finding casts doubt on the previously held belief that simply scaling the number of tasks makes stronger MT LMs. Leveraging this finding, we further show that this distributed approach of training a separate expert LM per training task instead of a single MT LM for zero-shot inference possesses many benefits, including (1) avoiding negative task transfer that often occurs during instruction tuning, (2) being able to continually learn new tasks without having to re-train on previous tasks to avoid catastrophic forgetting, and (3) showing compositional capabilities when merging individual experts together. The code is available at https://github.com/joeljang/ELM.
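
The compositional behaviour mentioned in point (3) can be pictured with a small parameter-merging sketch. The snippet below is only an illustration, assuming each task expert is a fully fine-tuned checkpoint of the same base model and that merging is done by uniform weight averaging; the paper's actual merging procedure and the code in the linked ELM repository may differ, and the model name and checkpoint paths are placeholders.

```python
# Minimal sketch (not the ELM repository's implementation): merge per-task
# expert LMs by uniformly averaging their parameters. Model name and
# checkpoint paths below are hypothetical placeholders.
import torch
from transformers import AutoModelForSeq2SeqLM


def average_expert_weights(expert_paths, base_model_name="t5-base"):
    """Load several task-expert checkpoints of the same base model and return
    a model whose parameters are the element-wise mean of the experts'."""
    merged = AutoModelForSeq2SeqLM.from_pretrained(base_model_name)
    expert_states = [
        AutoModelForSeq2SeqLM.from_pretrained(p).state_dict() for p in expert_paths
    ]

    merged_state = {}
    for name, reference in expert_states[0].items():
        # Average in float, then cast back to the original dtype of the tensor.
        stacked = torch.stack([sd[name].float() for sd in expert_states], dim=0)
        merged_state[name] = stacked.mean(dim=0).to(reference.dtype)

    merged.load_state_dict(merged_state)
    return merged


# Hypothetical usage: combine two single-task experts into one model.
# merged_lm = average_expert_weights(["experts/nli", "experts/summarization"])
```

Uniform averaging is just the simplest choice here; weighting experts by task relevance, or merging only adapter parameters, are equally plausible variants of the same idea.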

Related research

05/23/2023
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
Large Language Models (LLMs) have shown enhanced capabilities of solving...

12/22/2022
OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization
Recent work has shown that fine-tuning large pre-trained language models...

10/06/2022
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Meta-training, which fine-tunes the language model (LM) on various downs...

01/31/2023
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
We study the design decisions of publicly available instruction tuning m...

05/22/2023
Multi-Task Instruction Tuning of LLaMa for Specific Scenarios: A Preliminary Study on Writing Assistance
ChatGPT and GPT-4 have attracted substantial interest from both academic...

12/01/2022
Data-Efficient Finetuning Using Cross-Task Nearest Neighbors
Language models trained on massive prompted multitask datasets like T0 (...

02/22/2022
RuCLIP – new models and experiments: a technical report
In the report we propose six new implementations of ruCLIP model trained...
