Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A Comparison between Dialog and Translation

12/06/2017
by Bolin Wei, et al.

This paper addresses the question: why do neural dialog systems generate short and meaningless replies? We conjecture that, in a dialog system, a single utterance may have multiple equally plausible replies, and that this one-to-many relationship is what causes neural networks to underperform in the dialog application. We propose a systematic way to mimic the dialog scenario in a machine translation system, and reproduce the phenomenon of short and less meaningful output sentences in the translation setting, providing evidence for our conjecture.
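The averaging effect behind this conjecture can be sketched numerically. The toy vocabulary, the bag-of-words reply distributions, and the three example replies below are hypothetical illustrations, not from the paper: when a model is trained with cross-entropy against several equally plausible references, the distribution minimizing the average loss is the mean of the references, so tokens shared by all replies (the generic ones) end up ranked highest.

```python
import numpy as np

# Hypothetical toy vocabulary: the first three tokens appear in every
# plausible reply, while each content word appears in only one reply.
vocab = ["i", "dont", "know", "pizza", "soccer", "paris"]

# Three equally plausible replies to the same utterance, represented as
# bag-of-words probability distributions over the vocabulary.
replies = [
    np.array([0.3, 0.2, 0.2, 0.3, 0.0, 0.0]),  # reply about pizza
    np.array([0.3, 0.2, 0.2, 0.0, 0.3, 0.0]),  # reply about soccer
    np.array([0.3, 0.2, 0.2, 0.0, 0.0, 0.3]),  # reply about paris
]

# Minimizing the average cross-entropy  -(1/K) * sum_k sum_w p_k(w) log q(w)
# over a distribution q is solved by the mean of the references p_k.
q = np.mean(replies, axis=0)

# Rank tokens by probability under the optimal q: generic tokens win.
ranked = sorted(zip(vocab, q), key=lambda t: -t[1])
print(ranked)  # "i", "dont", "know" outrank every content word
```

Under this (simplified) view, the model is not broken; maximum-likelihood training simply pulls it toward the intersection of all plausible replies, which is exactly the short, generic response.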


Related research:

- 09/14/2021 · Automatically Exposing Problems with Neural Dialog Models. Neural dialog models are known to suffer from problems such as generatin...
- 08/24/2018 · Learning End-to-End Goal-Oriented Dialog with Multiple Answers. In a dialog, there can be multiple valid next utterances at any point. T...
- 07/12/2019 · Effective Incorporation of Speaker Information in Utterance Encoding in Dialog. In dialog studies, we often encode a dialog using a hierarchical encoder...
- 04/06/2022 · Quick Starting Dialog Systems with Paraphrase Generation. Acquiring training data to improve the robustness of dialog systems can ...
- 04/09/2020 · Conversation Learner – A Machine Teaching Tool for Building Dialog Managers for Task-Oriented Dialog Systems. Traditionally, industry solutions for building a task-oriented dialog sy...
- 05/11/2018 · Bootstrapping Multilingual Intent Models via Machine Translation for Dialog Automation. With the resurgence of chat-based dialog systems in consumer and enterpr...
- 12/31/2020 · Discovering Dialog Structure Graph for Open-Domain Dialog Generation. Learning interpretable dialog structure from human-human dialogs yields ...
