Robust Dialogue Utterance Rewriting as Sequence Tagging

by Jie Hao, et al.

Dialogue rewriting aims to reconstruct the latest dialogue utterance by copying missing content from the dialogue context. Existing models for this task suffer from a robustness issue: performance drops dramatically when tested on a different domain. We address this issue by proposing a novel sequence-tagging-based model that significantly reduces the search space while still covering the core of the task. As with most tagging models for text generation, the model's outputs may lack fluency. To alleviate this, we inject a loss signal from BLEU or GPT-2 under a REINFORCE framework. Experiments show large improvements of our model over current state-of-the-art systems on domain transfer.
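The training objective described above can be sketched as a policy-gradient surrogate loss: sample a per-token KEEP/DELETE tag sequence from the tagger, form the rewrite, score it with a sentence-level reward, and weight the sampled sequence's log-likelihood by that reward. This is a minimal illustrative sketch, not the paper's implementation; the tag set, the `unigram_f1` stand-in for BLEU (or a GPT-2 log-probability), and the hand-set tagger probabilities are all assumptions.

```python
import math
import random

random.seed(0)

def unigram_f1(hyp, ref):
    """Stand-in reward for sentence-level BLEU: unigram F1 overlap."""
    if not hyp or not ref:
        return 0.0
    overlap = len(set(hyp) & set(ref))
    p, r = overlap / len(set(hyp)), overlap / len(set(ref))
    return 2 * p * r / (p + r) if p + r else 0.0

def sample_tags(keep_probs):
    """Sample a KEEP/DELETE tag per token; return tags and their log-probability."""
    tags, logp = [], 0.0
    for p_keep in keep_probs:
        keep = random.random() < p_keep
        tags.append("KEEP" if keep else "DELETE")
        logp += math.log(p_keep if keep else 1.0 - p_keep)
    return tags, logp

def reinforce_loss(tokens, reference, keep_probs):
    """REINFORCE surrogate: -reward * log p(sampled tags)."""
    tags, logp = sample_tags(keep_probs)
    rewrite = [tok for tok, tag in zip(tokens, tags) if tag == "KEEP"]
    return -unigram_f1(rewrite, reference) * logp

# Hypothetical example: drop the coreference-free prefix token "do".
context = ["do", "you", "like", "the", "movie"]
reference = ["you", "like", "the", "movie"]
keep_probs = [0.1, 0.9, 0.9, 0.9, 0.9]  # assumed tagger outputs, not learned
loss = reinforce_loss(context, reference, keep_probs)
```

In a real system the gradient of this loss with respect to the tagger's parameters flows through `logp`, so tag sequences that yield high-reward (fluent, reference-like) rewrites are reinforced.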




Hierarchical Context Tagging for Utterance Rewriting

Utterance rewriting aims to recover coreferences and omitted information...

Modeling Long Context for Task-Oriented Dialogue State Generation

Based on the recently proposed transferable dialogue state generator (TR...

Cross Copy Network for Dialogue Generation

In the past few years, audiences from different fields witness the achie...

Utterance-level Dialogue Understanding: An Empirical Study

The recent abundance of conversational data on the Web and elsewhere cal...

Towards Robust Online Dialogue Response Generation

Although pre-trained sequence-to-sequence models have achieved great suc...

Interpretable NLG for Task-oriented Dialogue Systems with Heterogeneous Rendering Machines

End-to-end neural networks have achieved promising performances in natur...

Teacher-Student Framework Enhanced Multi-domain Dialogue Generation

Dialogue systems dealing with multi-domain tasks are highly required. Ho...