ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation

07/09/2019
by   Hainan Zhang, et al.

In multi-turn dialogue generation, the response is usually related to only a few of the preceding contexts. An ideal model should therefore detect these relevant contexts and produce a suitable response accordingly. However, the widely used hierarchical recurrent encoder-decoder models treat all contexts indiscriminately, which can hurt the subsequent response generation. Some researchers use cosine similarity or a traditional attention mechanism to find the relevant contexts, but these approaches suffer from either an insufficient relevance assumption or a position bias problem. In this paper, we propose a new model, named ReCoSa, to tackle this problem. First, a word-level LSTM encoder produces the initial representation of each context. Then, self-attention is used to update both the context representations and the masked response representation. Finally, attention weights between the context and response representations are computed and used in the decoding process. Experimental results on both a Chinese customer service dataset and the English Ubuntu dialogue dataset show that ReCoSa significantly outperforms baseline models in both metric-based and human evaluations. Further analysis of the attention weights shows that the relevant contexts detected by ReCoSa are highly consistent with human judgments, validating the correctness and interpretability of ReCoSa.
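The abstract outlines a three-stage architecture: a word-level LSTM encodes each context utterance into a vector, self-attention updates both the context vectors and the masked response representation, and context-response attention drives decoding while exposing which turns were judged relevant. Below is a minimal PyTorch sketch of that pipeline; the module names, dimensions, and single-layer configuration are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the ReCoSa pipeline described in the abstract.
# Hyperparameters and layer choices here are illustrative assumptions.
import torch
import torch.nn as nn

class ReCoSaSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=256, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Word-level LSTM encoder: one utterance -> one initial context vector.
        self.utt_encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # Self-attention over the sequence of context vectors.
        self.ctx_self_attn = nn.MultiheadAttention(hid_dim, n_heads, batch_first=True)
        # Masked self-attention over the (shifted) response tokens.
        self.resp_proj = nn.Linear(emb_dim, hid_dim)
        self.resp_self_attn = nn.MultiheadAttention(hid_dim, n_heads, batch_first=True)
        # Context-response attention used during decoding.
        self.cross_attn = nn.MultiheadAttention(hid_dim, n_heads, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, contexts, response):
        # contexts: (batch, n_turns, turn_len); response: (batch, resp_len)
        b, n_turns, turn_len = contexts.shape
        words = self.embed(contexts.view(b * n_turns, turn_len))
        _, (h, _) = self.utt_encoder(words)            # last hidden state per utterance
        ctx = h[-1].view(b, n_turns, -1)                # (batch, n_turns, hid_dim)

        ctx, _ = self.ctx_self_attn(ctx, ctx, ctx)      # updated context representations

        resp = self.resp_proj(self.embed(response))     # (batch, resp_len, hid_dim)
        resp_len = resp.size(1)
        causal = torch.triu(torch.ones(resp_len, resp_len, dtype=torch.bool), diagonal=1)
        resp, _ = self.resp_self_attn(resp, resp, resp, attn_mask=causal)

        # Attention weights between response and context representations feed the
        # decoder; `attn` indicates which context turns were deemed relevant.
        dec, attn = self.cross_attn(resp, ctx, ctx)
        return self.out(dec), attn
```

The returned attention map is what the paper's analysis would inspect to check that the detected relevant contexts match human judgments.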

Related research

09/27/2020 - Modeling Topical Relevance for Multi-Turn Dialogue Generation
Topic drift is a common phenomenon in multi-turn dialogue. Therefore, an...

02/18/2021 - Learning to Select Context in a Hierarchical and Global Perspective for Open-domain Dialogue Generation
Open-domain multi-turn conversations mainly have three features, which a...

11/09/2022 - Evaluating and Improving Context Attention Distribution on Multi-Turn Response Generation using Self-Contained Distractions
Despite the rapid progress of open-domain generation-based conversationa...

03/14/2023 - X-ReCoSa: Multi-Scale Context Aggregation For Multi-Turn Dialogue Generation
In multi-turn dialogue generation, responses are not only related to the...

03/21/2019 - Learning Multi-Level Information for Dialogue Response Selection by Highway Recurrent Transformer
With the increasing research interest in dialogue response generation, t...

03/02/2020 - Toward Interpretability of Dual-Encoder Models for Dialogue Response Suggestions
This work shows how to improve and interpret the commonly used dual enco...

12/04/2022 - Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation
Large pretrained language models can easily produce toxic or biased cont...
