Robust Dialogue Utterance Rewriting as Sequence Tagging

12/29/2020
by   Jie Hao, et al.

The task of dialogue rewriting aims to reconstruct the latest dialogue utterance by copying the missing content from the dialogue context. Existing models for this task suffer from a robustness issue: performance drops dramatically when testing on a different domain. We address this robustness issue by proposing a novel sequence-tagging-based model, so that the search space is significantly reduced while the core of the task is still well covered. As with most tagging models for text generation, the model's outputs may lack fluency. To alleviate this issue, we inject a loss signal from BLEU or GPT-2 under a REINFORCE framework. Experiments show that our model substantially outperforms the current state-of-the-art systems on domain transfer.
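To make the sequence-tagging formulation concrete, here is a minimal sketch of how per-token tags can reconstruct an utterance by copying spans from the context. The tag scheme below (KEEP / DELETE / REPLACE with a context span) is a hypothetical illustration, not necessarily the paper's exact tag set.

```python
def apply_tags(context_tokens, utterance_tokens, tags):
    """Rewrite an utterance by applying one tag per token.

    Tags (illustrative scheme):
      "KEEP"             -- copy the token unchanged
      "DELETE"           -- drop the token
      ("REPLACE", i, j)  -- substitute the token with context_tokens[i:j]
    """
    out = []
    for tok, tag in zip(utterance_tokens, tags):
        if tag == "KEEP":
            out.append(tok)
        elif tag == "DELETE":
            continue
        elif isinstance(tag, tuple) and tag[0] == "REPLACE":
            _, i, j = tag
            out.extend(context_tokens[i:j])  # copy the missing span from context
    return out


context = "Do you like the new Batman movie ?".split()
utterance = "I like it".split()
# Resolve the pronoun "it" to its antecedent span context[3:7] = "the new Batman movie".
tags = ["KEEP", "KEEP", ("REPLACE", 3, 7)]
rewritten = apply_tags(context, utterance, tags)
```

Because the model only predicts a small tag per token instead of generating free text, the search space is far smaller than in full sequence-to-sequence rewriting, which is the intuition behind the robustness gain.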
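The fluency fix can likewise be sketched. Under REINFORCE, a sampled rewrite is scored by an external metric (BLEU against the reference, or a GPT-2 language-model score), and the negative sequence log-likelihood is scaled by the reward minus a baseline. The function below is a simplified, framework-free illustration of that loss; the reward and baseline values are made up for the example.

```python
import math

def reinforce_loss(token_log_probs, reward, baseline=0.0):
    """REINFORCE loss for one sampled rewrite.

    Scales the negative sequence log-likelihood by (reward - baseline):
    samples scoring above the baseline have their likelihood pushed up
    when the loss is minimized; samples below are pushed down.
    """
    return -(reward - baseline) * sum(token_log_probs)


log_probs = [math.log(0.9), math.log(0.8)]          # per-token log-likelihoods
good = reinforce_loss(log_probs, reward=0.7, baseline=0.5)  # fluent sample
bad = reinforce_loss(log_probs, reward=0.3, baseline=0.5)   # disfluent sample
```

In practice the reward would come from sentence-level BLEU or a GPT-2 perplexity-based score, and the gradient of this loss would flow into the tagging model's parameters.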

