Achieving Fluency and Coherency in Task-oriented Dialog

04/11/2018
by Rashmi Gangadharaiah, et al.

We consider real world task-oriented dialog settings, where agents need to generate both fluent natural language responses and correct external actions like database queries and updates. We demonstrate that, when applied to customer support chat transcripts, Sequence to Sequence (Seq2Seq) models often generate short, incoherent and ungrammatical natural language responses that are dominated by words that occur with high frequency in the training data. These phenomena do not arise in synthetic datasets such as bAbI, where we show Seq2Seq models are nearly perfect. We develop techniques to learn embeddings that succinctly capture relevant information from the dialog history, and demonstrate that nearest neighbor based approaches in this learned neural embedding space generate more fluent responses. However, we see that these methods are not able to accurately predict when to execute an external action. We show how to combine nearest neighbor and Seq2Seq methods in a hybrid model, where nearest neighbor is used to generate fluent responses and Seq2Seq type models ensure dialog coherency and generate accurate external actions. We show that this approach is well suited for customer support scenarios, where agents' responses are typically script-driven, and correct external actions are critically important. The hybrid model on the customer support data achieves a 78% accuracy of external calls.
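To make the hybrid idea concrete, below is a minimal, self-contained sketch (not the authors' released code): a nearest-neighbor lookup over dialog-history embeddings supplies a fluent, human-written agent response, while a separate component decides when an external action such as a database query must be issued instead. The bag-of-words "encoder", the keyword-based action rule, and all names and example strings are hypothetical stand-ins for the learned Seq2Seq and embedding models described in the abstract.

```python
# Illustrative sketch of the hybrid nearest-neighbor + action-prediction idea.
# All components below are toy stand-ins, not the paper's implementation.
import numpy as np

# Training transcripts: dialog histories paired with the agent's responses.
corpus_histories = [
    "hi i need to reset my password",
    "my order has not arrived yet",
    "i want to update the shipping address on my order",
]
corpus_responses = [
    "I'm sorry to hear that. I can help you reset your password right away.",
    "Let me check the status of your order for you.",
    "Sure, I can update the shipping address. What is the new address?",
]

vocab = sorted({w for h in corpus_histories for w in h.split()})
w2i = {w: i for i, w in enumerate(vocab)}

def encode(text):
    """Toy bag-of-words embedding of a dialog history (stand-in for a learned encoder)."""
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in w2i:
            v[w2i[w]] += 1.0
    return v

index = np.stack([encode(h) for h in corpus_histories])  # (N, d) history embeddings

def predict_external_action(history):
    """Stand-in for the Seq2Seq component: emit a database query when one is needed."""
    if "order" in history and "arrived" in history:
        return "api_call order_status <order_id>"
    return None

def respond(history):
    # 1) Let the action predictor decide whether this turn requires an external call.
    action = predict_external_action(history)
    if action is not None:
        return action
    # 2) Otherwise retrieve the most similar training context (cosine similarity)
    #    and reuse its fluent, human-written agent response.
    q = encode(history)
    sims = index @ q / (np.linalg.norm(index, axis=1) * (np.linalg.norm(q) + 1e-8))
    return corpus_responses[int(np.argmax(sims))]

print(respond("hello my order still has not arrived"))  # -> external API call
print(respond("please help me reset my password"))      # -> retrieved fluent reply
```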


