Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning

05/25/2022
by Prakhar Gupta, et al.

Instruction tuning is an emergent paradigm in NLP in which language models are trained with natural language instructions to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especially interesting area in which to explore instruction tuning because dialogue systems perform many kinds of language tasks (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, consisting of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. We then explore the cross-task generalization ability of models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we also introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained with the proposed framework on multiple dialogue tasks.
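
The abstract does not show InstructDial's exact prompt templates, so the following is only a minimal sketch of the general idea: a dialogue task instance is serialized into a single instruction-plus-input source string and a plain-text target. The `[EOT]` turn separator and the `Instruction:`/`Input:`/`Output:` field labels are illustrative assumptions, not the paper's actual format.

```python
from dataclasses import dataclass

@dataclass
class Seq2SeqExample:
    source: str  # instruction + serialized dialogue context fed to the model
    target: str  # expected text output

def to_instruction_example(instruction: str, turns: list[str], answer: str) -> Seq2SeqExample:
    """Cast one dialogue task instance into a unified text-to-text pair.

    Hypothetical template: `[EOT]` joins turns; field labels mark the
    instruction, the input context, and the slot the model must complete.
    """
    context = " [EOT] ".join(turns)
    source = f"Instruction: {instruction}\nInput: {context}\nOutput:"
    return Seq2SeqExample(source=source, target=answer)

# Example: intent detection recast as instruction-following text generation,
# one of the task types the paper reports zero-shot gains on.
example = to_instruction_example(
    instruction="Read the dialogue and name the speaker's intent.",
    turns=["Hi, I'd like to book a table for two tonight."],
    answer="restaurant_reservation",
)
print(example.source)
print(example.target)
```

Because every task shares this text-to-text shape, a single sequence-to-sequence model can be tuned on all of them at once and prompted with new instructions at test time.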
