
GOLD: Improving Out-of-Scope Detection in Dialogues using Data Augmentation

by Derek Chen et al.

Practical dialogue systems require robust methods of detecting out-of-scope (OOS) utterances to avoid conversational breakdowns and related failure modes. Directly training a model with labeled OOS examples yields reasonable performance, but obtaining such data is a resource-intensive process. To tackle this limited-data problem, previous methods focus on better modeling the distribution of in-scope (INS) examples. We introduce GOLD as an orthogonal technique that augments existing data to train better OOS detectors operating in low-data regimes. GOLD generates pseudo-labeled candidates using samples from an auxiliary dataset and keeps only the most beneficial candidates for training through a novel filtering mechanism. In experiments across three target benchmarks, the top GOLD model outperforms all existing methods on all key metrics, achieving relative gains of 52.4% over baseline performance. We also analyze the unique properties of OOS data to identify key factors for optimally applying our proposed method.
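The pipeline the abstract describes, generating pseudo-OOS candidates from an auxiliary pool and filtering them before training, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function names (`gold_augment`, `ins_score`), the toy keyword-overlap scorer standing in for a trained in-scope classifier, and the fixed threshold are all hypothetical assumptions for the sketch.

```python
import random

# Toy in-scope training utterances (hypothetical examples)
INS_UTTERANCES = [
    "book a flight to boston",
    "what is my account balance",
    "transfer money to savings",
]

def ins_score(utterance):
    """Crude stand-in for a trained in-scope classifier: the
    fraction of tokens that also appear in the in-scope data."""
    vocab = {tok for u in INS_UTTERANCES for tok in u.split()}
    toks = utterance.split()
    return sum(t in vocab for t in toks) / max(len(toks), 1)

def gold_augment(aux_pool, n_candidates=4, threshold=0.5, seed=0):
    """Sketch of GOLD-style augmentation: sample pseudo-OOS
    candidates from an auxiliary pool, then keep only those the
    in-scope scorer rates as unlikely to be in-scope."""
    rng = random.Random(seed)
    candidates = rng.sample(aux_pool, min(n_candidates, len(aux_pool)))
    return [c for c in candidates if ins_score(c) < threshold]

# Auxiliary pool drawn from an unrelated dataset (hypothetical)
aux = [
    "tell me a joke about cats",
    "what is the weather in paris",
    "book a flight to boston please",
    "order a large pepperoni pizza",
]
pseudo_oos = gold_augment(aux)
```

Here the near-duplicate of an in-scope utterance ("book a flight to boston please") scores high and is filtered out, while utterances with little in-scope vocabulary overlap are kept as pseudo-labeled OOS training data. The paper's actual filtering mechanism is more sophisticated than this single-threshold rule.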


DialAug: Mixing up Dialogue Contexts in Contrastive Learning for Robust Conversational Modeling

Retrieval-based conversational systems learn to rank response candidates...

Multilingual Coreference Resolution in Multiparty Dialogue

Existing multiparty dialogue datasets for coreference resolution are nas...

Sequence-to-sequence Pre-training with Data Augmentation for Sentence Rewriting

We study sequence-to-sequence (seq2seq) pre-training with data augmentat...

Out-of-Scope Intent Detection with Self-Supervision and Discriminative Training

Out-of-scope intent detection is of practical importance in task-oriente...

G-DAUG: Generative Data Augmentation for Commonsense Reasoning

Recent advances in commonsense reasoning depend on large-scale human-ann...

On Robust Incremental Learning over Many Multilingual Steps

Recent work in incremental learning has introduced diverse approaches to...