DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances

12/03/2020
by   Xiaodong Gu, et al.

Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous pre-trained language model (PLM)-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, analogous to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach substantially outperforms baselines such as BART and DialoGPT in quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines by significant margins.
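The two utterance-level objectives can be sketched conceptually. The snippet below is a minimal illustration, not the paper's exact formulation: it assumes masked utterance regression reduces to a mean-squared-error reconstruction of a masked utterance encoding, and models utterance order ranking with a ListNet-style listwise loss (KL divergence between a distribution implied by the true order and the softmax of predicted order scores). All function names and the relevance scheme (earlier utterance, higher relevance) are illustrative assumptions.

```python
import math

def masked_utterance_regression_loss(utt_vecs, mask_idx, predicted_vec):
    """MSE between the model's prediction and the original (masked) utterance
    encoding -- a conceptual stand-in for masked utterance regression."""
    target = utt_vecs[mask_idx]
    return sum((p - t) ** 2 for p, t in zip(predicted_vec, target)) / len(target)

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def order_ranking_loss(scores, true_positions):
    """ListNet-style listwise loss: KL divergence between the distribution
    implied by the true utterance order (earlier position -> higher assumed
    relevance) and the softmax of the predicted order scores."""
    n = len(scores)
    p_true = softmax([float(n - pos) for pos in true_positions])
    p_pred = softmax(list(scores))
    return sum(pt * (math.log(pt) - math.log(pp))
               for pt, pp in zip(p_true, p_pred))
```

Both losses are zero when the model recovers the masked utterance exactly or scores utterances consistently with their true order, and grow as predictions diverge, which is the intuition behind training the hierarchical encoder to "recover and rank utterances".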


Related research

03/07/2022 · Towards Robust Online Dialogue Response Generation
Although pre-trained sequence-to-sequence models have achieved great suc...

06/29/2018 · Discourse-Wizard: Discovering Deep Discourse Structure in your Conversation with RNNs
Spoken language understanding is one of the key factors in a dialogue sy...

06/07/2021 · A Joint Model for Dropped Pronoun Recovery and Conversational Discourse Parsing in Chinese Conversational Speech
In this paper, we present a neural model for joint dropped pronoun recov...

05/03/2022 · Zero-shot Sonnet Generation with Discourse-level Planning and Aesthetics Features
Poetry generation, and creative language generation in general, usually ...

06/18/2019 · Modeling Semantic Relationship in Multi-turn Conversations with Hierarchical Latent Variables
Multi-turn conversations consist of complex semantic structures, and it ...

05/06/2022 · Emp-RFT: Empathetic Response Generation via Recognizing Feature Transitions between Utterances
Each utterance in multi-turn empathetic dialogues has features such as e...

02/18/2021 · Learning to Select Context in a Hierarchical and Global Perspective for Open-domain Dialogue Generation
Open-domain multi-turn conversations mainly have three features, which a...