In-Context Learning for Few-Shot Dialogue State Tracking

03/16/2022
by Yushi Hu et al.

Collecting and annotating task-oriented dialogues is time-consuming and costly. Thus, few-shot learning for dialogue tasks presents an exciting opportunity. In this work, we propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST), where a large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates. This makes the LM more flexible and scalable compared to prior few-shot DST work when adapting to new domains and scenarios. We study ways to formulate dialogue context into prompts for LMs and propose an efficient approach to retrieve dialogues as exemplars given a test instance and a selection pool of few-shot examples. To better leverage the pre-trained LMs, we also reformulate DST into a text-to-SQL problem. Empirical results on MultiWOZ 2.1 and 2.4 show that our method IC-DST outperforms previous fine-tuned state-of-the-art models in few-shot settings.
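The abstract describes two key ideas: retrieving similar annotated dialogues as in-context exemplars, and casting DST as text-to-SQL so the frozen LM decodes a query rather than slot-value pairs. A minimal sketch of that prompting setup is below; the word-overlap retriever, the `build_prompt` helper, and the example schema are illustrative assumptions, not the paper's actual implementation (the paper uses a learned retriever and MultiWOZ ontologies).

```python
# Hypothetical sketch of the IC-DST prompting idea: retrieve a few similar
# annotated dialogues, format them as (dialogue -> SQL) exemplars, and
# prepend them to the test dialogue before decoding with a frozen LM.
# Jaccard word overlap stands in for the paper's learned retriever.

def similarity(a: str, b: str) -> float:
    """Word-overlap score between two dialogue strings (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def build_prompt(test_dialogue: str, pool: list, k: int = 2) -> str:
    """Select the k most similar exemplars and format a text-to-SQL prompt."""
    ranked = sorted(pool,
                    key=lambda ex: similarity(ex["dialogue"], test_dialogue),
                    reverse=True)
    parts = [f"Dialogue: {ex['dialogue']}\nSQL: {ex['sql']}"
             for ex in ranked[:k]]
    # The test instance ends with an empty "SQL:" slot for the LM to complete.
    parts.append(f"Dialogue: {test_dialogue}\nSQL:")
    return "\n\n".join(parts)

# Toy selection pool of annotated (dialogue, SQL dialogue-state) pairs.
pool = [
    {"dialogue": "I need a cheap hotel in the north",
     "sql": "SELECT * FROM hotel WHERE price = 'cheap' AND area = 'north'"},
    {"dialogue": "Book a taxi to the airport at 5pm",
     "sql": "SELECT * FROM taxi WHERE destination = 'airport' AND leave = '17:00'"},
]

prompt = build_prompt("Find me a cheap hotel near the centre", pool, k=1)
```

In this toy run the hotel exemplar outscores the taxi one, so the prompt contains one hotel (dialogue, SQL) pair followed by the test dialogue with an open `SQL:` continuation for the LM to fill in; no parameters are updated.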


Related research

07/04/2023  Diverse Retrieval-Augmented In-Context Learning for Dialogue State Tracking
There has been significant interest in zero and few-shot learning for di...

02/12/2023  Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking
Prompt-based methods with large pre-trained language models (PLMs) have ...

01/15/2022  Prompt Learning for Few-Shot Dialogue State Tracking
Collecting dialogue state labels, slots and values, for learning dialogu...

01/17/2021  What Makes Good In-Context Examples for GPT-3?
GPT-3 has attracted lots of attention due to its superior performance ac...

12/16/2021  Learning To Retrieve Prompts for In-Context Learning
In-context learning is a recent paradigm in natural language understandi...

05/19/2022  Self-augmented Data Selection for Few-shot Dialogue Generation
The natural language generation (NLG) module in task-oriented dialogue s...

02/27/2021  A Simple But Effective Approach to n-shot Task-Oriented Dialogue Augmentation
The collection and annotation of task-oriented conversational data is a ...
