Diverse Retrieval-Augmented In-Context Learning for Dialogue State Tracking

07/04/2023
by   Brendan King, et al.

There has been significant interest in zero- and few-shot learning for dialogue state tracking (DST) due to the high cost of collecting and annotating task-oriented dialogues. Recent work has demonstrated that in-context learning requires very little data and zero parameter updates, and even outperforms trained methods in the few-shot setting (Hu et al. 2022). We propose RefPyDST, which advances the state of the art with three contributions to in-context learning for DST. First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python. Second, since in-context learning depends heavily on the choice of in-context examples, we propose a method to retrieve a diverse set of relevant examples, which improves performance. Finally, we introduce a novel re-weighting method during decoding that accounts for the probabilities of competing surface forms and produces a more accurate dialogue state prediction. We evaluate our approach on MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in the zero- and few-shot settings.
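The abstract describes the example retriever only at a high level. One standard way to select examples that are both relevant to the query and diverse among themselves is greedy maximal marginal relevance (MMR); the sketch below illustrates that general idea and is not the paper's actual retriever. The function name, the trade-off parameter `lam`, and the use of cosine similarity over embedding vectors are all illustrative assumptions.

```python
def diverse_retrieve(query_vec, example_vecs, k=5, lam=0.5):
    """Greedily pick k example indices by maximal marginal relevance:
    score = lam * similarity(query, candidate)
            - (1 - lam) * max similarity(candidate, already-selected).
    Illustrative sketch only; not the method from the paper."""

    def cos(a, b):
        # Plain cosine similarity over two equal-length vectors.
        num = sum(x * y for x, y in zip(a, b))
        den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return num / den if den else 0.0

    selected = []
    candidates = list(range(len(example_vecs)))
    while candidates and len(selected) < k:
        def mmr(i):
            relevance = cos(query_vec, example_vecs[i])
            # Redundancy: worst-case similarity to anything already chosen.
            redundancy = max(
                (cos(example_vecs[i], example_vecs[j]) for j in selected),
                default=0.0,
            )
            return lam * relevance - (1 - lam) * redundancy

        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a low `lam`, the selector prefers a second example that points in a different direction from the first pick, even when a near-duplicate of the first pick is slightly more relevant.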


