Dually Interactive Matching Network for Personalized Response Selection in Retrieval-Based Chatbots

08/16/2019 · Jia-Chen Gu et al. · USTC and Queen's University

This paper proposes a dually interactive matching network (DIM) for presenting the personalities of dialogue agents in retrieval-based chatbots. This model develops from the interactive matching network (IMN), which models the matching degree between a context composed of multiple utterances and a response candidate. Compared with previous persona fusion approaches, which enhance the representation of a context by calculating its similarity with a given persona, the DIM model adopts a dual matching architecture, which performs interactive matching between responses and contexts and between responses and personas respectively for ranking response candidates. Experimental results on the PERSONA-CHAT dataset show that the DIM model outperforms its baseline model, i.e., IMN with persona fusion, by a margin of 14.5% in terms of top-1 accuracy hits@1.


1 Introduction

Building a conversation system with intelligence is challenging. Response selection, which aims to select a potential response from a set of candidates given the context of a conversation, is an important technique to build retrieval-based chatbots Zhou et al. (2018). Many previous studies on single-turn Wang et al. (2013) or multi-turn response selection Lowe et al. (2015); Zhou et al. (2018); Gu et al. (2019) rank response candidates according to their semantic relevance with the given context.

With the emergence and popular use of personal assistants such as Apple Siri, Google Now and Microsoft Cortana, the techniques of making dialogue personalized have attracted much research attention in recent years Li et al. (2016); Zhang et al. (2018); Mazaré et al. (2018). Zhang et al. (2018) constructed the PERSONA-CHAT dataset for building personalized dialogue agents, where each persona is represented as multiple sentences of profile description. An example dialogue conditioned on given profiles from this dataset is shown in Table 1 for illustration.

A persona fusion method for personalized response selection was also proposed by Zhang et al. (2018). In this method, given a context and a persona composed of several profile sentences, the similarities between the context representation and all profile sentences are computed first using attention to get the persona representation. Then, the persona representation is applied to enhance the context representation by a simple concatenation or addition operation. Finally, the enhanced context representation is used to rank response candidates. This method has two main deficiencies. First, the context is treated as a whole for calculating its attention towards profile sentences. However, each context is composed of multiple utterances and these utterances may play different roles when matching different profile sentences. Second, the interactions between the persona and each response candidate are ignored when deriving the persona representation.

Persona 1 (Original):
  I just bought a brand new house.
  I like to dance at the club.
  I run a dog obedience school.
  I have a big sweet tooth.
  I like taking and posting selkies.
Persona 1 (Revised):
  I have purchased a home.
  Just go dancing at the nightclub, it is fun!
  I really enjoy animals.
  I enjoy chocolate.
  I pose for pictures and put them online.
Persona 2 (Original):
  I love to meet new people.
  I have a turtle named timothy.
  My favorite sport is ultimate frisbee.
  My parents are living in bora bora.
  Autumn is my favorite season.
Persona 2 (Revised):
  I like getting friends.
  Reptiles make good pets.
  I love to run around and get out my energy.
  My family lives on a island.
  I love watching the leaves change colors.
Dialogue
Person 1: Hello, how are you doing tonight?
Person 2: I am well an loving this interaction how are you?
Person 1: I am great. I just got back from the club.
Person 2: This is my favorite time of the year season wise.
Person 1: I would rather eat chocolate cake during this season.
Person 2: What club did you go to? Me an timothy watched tv.
Person 1: I went to club chino. What show are you watching?
Person 2: LOL oh okay kind of random.
Person 1: I love those shows. I am really craving cake.
Person 2: Why does that matter any? I went outdoors to play frisbee.
Person 1: It matters because I have a sweet tooth.
Person 2: So? LOL I want to meet my family at home in bora.
Person 1: My family lives in alaska. It is freezing down there.
Person 2: I bet it is oh I could not.
Table 1: An example dialogue from the PERSONA-CHAT dataset.

In this paper, the interactive matching network (IMN) Gu et al. (2019) is adopted as the fundamental architecture to build our baseline and improved models for personalized response selection. The baseline model follows the persona fusion method proposed by Zhang et al. (2018) and two improved models are then proposed. First, an IMN-based persona fusion model with fine-grained context-persona interaction is designed. In this model, each utterance in a context, instead of the whole context, is used to calculate its similarity with each profile sentence in a persona. Second, a dually interactive matching network (DIM) is proposed by formulating the task of personalized response selection as a dual matching problem, i.e., finding a response that can properly match the given context and persona simultaneously. The DIM model calculates the interactions between the context and the response, and between the persona and the response in parallel, in order to derive the final matching feature for response selection.

We test our proposed methods on the PERSONA-CHAT dataset Zhang et al. (2018). Results show that the IMN-based utterance-level persona fusion model and the DIM model can obtain a top-1 accuracy improvement of 2.4% and 14.5%, respectively, over the baseline model, i.e., the IMN-based context-level persona fusion model. Finally, our proposed DIM model outperforms the current state-of-the-art model by a margin of 27.7% in terms of top-1 accuracy on the PERSONA-CHAT dataset.

In summary, the contributions of this paper are three-fold. (1) An IMN-based fine-grained persona fusion model is designed in order to consider the utterance-level interactions between contexts and personas. (2) A dually interactive matching network (DIM) is proposed by formulating the task of personalized response selection as a dual matching problem, aiming to find a response that can properly match the given context and persona simultaneously. (3) Experimental results on the PERSONA-CHAT dataset demonstrate that our proposed models outperform the baseline and state-of-the-art models by large margins on the accuracy of response selection.

2 Related Work

2.1 Response Selection

Response selection is an important problem in building retrieval-based chatbots. Existing work on response selection can be categorized into single-turn Wang et al. (2013) and multi-turn dialogues Lowe et al. (2015); Zhou et al. (2018); Gu et al. (2019). Early studies focused more on single-turn dialogues, considering only the last utterance of a context for response matching. More recently, the research focus has shifted to multi-turn conversations, a more practical setup for real applications. Wu et al. (2017) proposed the sequential matching network (SMN), which first matched the response with each context utterance and then accumulated the matching information by a recurrent neural network (RNN). Zhou et al. (2018) proposed the deep attention matching network (DAM) to construct representations at different granularities with stacked self-attention. Gu et al. (2019) proposed the interactive matching network (IMN) to enhance the representations of the context and response at both the word level and sentence level, and to perform bidirectional and global interactions between the context and response in order to derive the matching feature vector.

2.2 Persona for Chatbots

Chit-chat models suffer from a lack of a consistent personality as they are typically trained over many dialogues, each with different speakers, and a lack of explicit long-term memory as they are typically trained to produce an utterance given only a very recent dialogue history. Li et al. (2016) proposed a persona-based neural conversation model to capture individual characteristics such as background information and speaking style. Miller et al. (2016) proposed the key-value memory network, where the keys were dialogue histories, i.e., contexts, and the values were next dialogue utterances. Zhang et al. (2018) proposed the profile memory network by considering the dialogue history as input and then performing attention over the persona to be combined with the dialogue history. Mazaré et al. (2018) proposed the fine-tuned persona-chat (FT-PC) model which first pretrained a model using a large-scale corpus with external knowledge and then fine-tuned it on the PERSONA-CHAT dataset.

In general, all these methods adopted a context-level persona fusion strategy, which first obtained the embedding vector of a context and then computed the similarities between the whole context and each profile sentence to acquire the persona representation. However, such persona fusion is relatively coarse: the utterance-level representations of contexts are not leveraged, and the interactions between the persona and each response candidate are ignored when deriving the persona representation.

3 Task Definition

Given a dialogue dataset $\mathcal{D}$ with personas, an example of the dataset can be represented as a quadruple $(c, p, r, y)$. Specifically, $c = \{u_1, u_2, \ldots, u_{n_c}\}$ represents a context with $\{u_m\}_{m=1}^{n_c}$ as its utterances and $n_c$ as the utterance number. $p = \{p_1, p_2, \ldots, p_{n_p}\}$ represents a persona with $\{p_n\}_{n=1}^{n_p}$ as its profile sentences and $n_p$ as the profile number. $r$ represents a response candidate. $y \in \{0, 1\}$ denotes a label. $y = 1$ indicates that $r$ is a proper response for $(c, p)$; otherwise, $y = 0$. Our goal is to learn a matching model $g(c, p, r)$ from $\mathcal{D}$. For any context-persona-response triple $(c, p, r)$, $g(c, p, r)$ measures the matching degree between $(c, p)$ and $r$. A softmax output layer over all response candidates is adopted in this model. The model parameters are trained by minimizing a multi-class cross-entropy loss function on $\mathcal{D}$.
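As a concrete illustration of this setup, the sketch below shows how one training example and the multi-class cross-entropy over its candidate set could be organized. This is a minimal sketch with names of our own (Example, score_fn, the convention that the true response sits at index 0), not the authors' code.

```python
# Minimal sketch (not the released code): one training example and the
# multi-class cross-entropy over its response candidates.
from dataclasses import dataclass
from typing import List
import math

@dataclass
class Example:
    context: List[str]      # utterances u_1 ... u_{n_c}
    persona: List[str]      # profile sentences p_1 ... p_{n_p}
    candidates: List[str]   # response candidates; assume candidates[0] is the true one

def candidate_loss(example: Example, score_fn) -> float:
    """Softmax cross-entropy over all candidates; the true response has index 0."""
    scores = [score_fn(example.context, example.persona, r) for r in example.candidates]
    log_z = math.log(sum(math.exp(s) for s in scores))
    return -(scores[0] - log_z)
```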

4 IMN-Based Persona Fusion

(a) Context-level persona fusion
(b) Utterance-level persona fusion
Figure 1: Comparison of the model architectures for (a) context-level persona fusion and (b) utterance-level persona fusion.

The model architecture used by previous methods with persona fusion Zhang et al. (2018); Mazaré et al. (2018) is shown in Figure 1(a). It first obtains the context representation and then computes the similarities between the whole context and each profile sentence in a persona. Attention weights are calculated for all profile sentences to obtain the persona representation. Finally, the persona representation is combined with the context representation through concatenation or addition operations.

Formally, the representations of the whole context (i.e., the concatenation of its utterances), the context utterances, and the profile sentences are denoted as $\mathbf{c}$, $\{\mathbf{u}_m\}_{m=1}^{n_c}$ and $\{\mathbf{p}_n\}_{n=1}^{n_p}$ respectively. In previous context-level persona fusion methods, the enhanced context representation fused with persona information is calculated as

$$\hat{\mathbf{c}} = f\Big(\mathbf{c}, \sum_{n=1}^{n_p} a_n \mathbf{p}_n\Big), \quad a_n = \frac{\exp(\mathbf{c}^\top \mathbf{p}_n)}{\sum_{k=1}^{n_p} \exp(\mathbf{c}^\top \mathbf{p}_k)}, \qquad (1)$$

where $f(\cdot, \cdot)$ denotes a simple concatenation or addition operation. Then, the similarity between $\hat{\mathbf{c}}$ and the response representation $\mathbf{r}$ is computed to get the matching degree of $(c, p, r)$.

In this paper, we build our baseline model based on IMN Gu et al. (2019). After the context and response embeddings are obtained in the IMN model, the context-level persona fusion architecture shown in Figure 1(a) is applied to integrate persona information. All model parameters are estimated in an end-to-end manner. This baseline model is denoted as IMN_ctx in this paper.

Considering that each context is composed of multiple utterances and these utterances may play different roles when matching different profile sentences, we propose to improve the baseline model by fusing the persona information at a fine-grained utterance level as shown in Figure 1(b). This model is denoted as IMN_utr in this paper. First, the similarities between each context utterance and each profile sentence are computed, and the enhanced representation of each context utterance is calculated as

$$\hat{\mathbf{u}}_m = f\Big(\mathbf{u}_m, \sum_{n=1}^{n_p} a_{mn} \mathbf{p}_n\Big), \quad a_{mn} = \frac{\exp(\mathbf{u}_m^\top \mathbf{p}_n)}{\sum_{k=1}^{n_p} \exp(\mathbf{u}_m^\top \mathbf{p}_k)}. \qquad (2)$$

Then, these enhanced utterance representations are aggregated into the enhanced context representation as

$$\hat{\mathbf{c}} = \mathrm{Agg}\big(\hat{\mathbf{u}}_1, \ldots, \hat{\mathbf{u}}_{n_c}\big), \qquad (3)$$

where either RNN-based or attention-based aggregation Gu et al. (2019) can be employed for $\mathrm{Agg}(\cdot)$.
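The following NumPy sketch illustrates the utterance-level fusion of Eqs. (2) and (3). It is a simplified illustration under stated assumptions (additive fusion for $f$ and mean pooling for $\mathrm{Agg}$, which are only two of the options mentioned above), not the released implementation.

```python
# Hedged sketch of utterance-level persona fusion, Eqs. (2)-(3).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def utterance_level_fusion(U, P):
    """U: (n_c, d) utterance embeddings; P: (n_p, d) profile sentence embeddings."""
    sims = U @ P.T                  # (n_c, n_p) similarities
    attn = softmax(sims, axis=-1)   # attention of each utterance over profile sentences
    persona_per_utt = attn @ P      # (n_c, d) persona representation per utterance
    U_hat = U + persona_per_utt     # Eq. (2) with additive fusion (assumption)
    c_hat = U_hat.mean(axis=0)      # Eq. (3) with mean aggregation (assumption)
    return U_hat, c_hat
```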

5 Dually Interactive Matching Network

5.1 Model Overview

Figure 2: An overview of our proposed DIM model.

Previous studies on personalized response selection treat personas as supplementary information to enhance context representations by attention-based interaction. In this paper, we formulate the task of personalized response selection as a dual matching problem. The selected response is expected to properly match the given context and persona respectively. Here, personas are considered as equally important counterparts to contexts for ranking response candidates. The interactive matching between the context and response, and that between the persona and response constitute the dually interactive matching network (DIM).

The DIM model is composed of five layers. Figure 2 shows an overview of the architecture. Details about each layer are provided in the following subsections.

5.2 Word Representation Layer

We follow the setting used in IMN Gu et al. (2019), which constructs word representations by combining general pre-trained word embeddings, those estimated on the task-specific training set, as well as character-level embeddings, in order to deal with the out-of-vocabulary issue.

Formally, embeddings of the m-th utterance in a context, the n-th profile sentence in a persona and a response candidate are denoted as $\mathbf{U}_m = [\mathbf{u}_{m,1}, \ldots, \mathbf{u}_{m,l_{u_m}}]$, $\mathbf{P}_n = [\mathbf{p}_{n,1}, \ldots, \mathbf{p}_{n,l_{p_n}}]$ and $\mathbf{R} = [\mathbf{r}_1, \ldots, \mathbf{r}_{l_r}]$ respectively, where $l_{u_m}$, $l_{p_n}$ and $l_r$ are the numbers of words in $U_m$, $P_n$ and $R$ respectively. Each $\mathbf{u}_{m,i}$, $\mathbf{p}_{n,j}$ or $\mathbf{r}_k$ is an embedding vector of $d$ dimensions.
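For illustration, the sketch below shows how the three word-level features described above could be concatenated into one word vector; the dimensions follow Section 6.3, while the lookup tables and the character encoder are placeholders of our own, not the released code.

```python
# Illustrative sketch of building a word representation from three features.
import numpy as np

def word_vector(word, glove, w2v, char_encoder):
    """glove/w2v: dict word -> np.ndarray; char_encoder: callable str -> np.ndarray."""
    g = glove.get(word, np.zeros(300))   # general pre-trained embedding
    t = w2v.get(word, np.zeros(100))     # embedding estimated on the training set
    c = char_encoder(word)               # character-level embedding (e.g., 150-dim)
    return np.concatenate([g, t, c])     # final d = 300 + 100 + 150 = 550
```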

5.3 Sentence Encoding Layer

The context utterances, profile sentences and response candidate are encoded by bidirectional long short-term memories (BiLSTMs) Hochreiter and Schmidhuber (1997). We denote the calculations as follows,

$$\bar{\mathbf{u}}_{m,i} = \mathrm{BiLSTM}(\mathbf{U}_m, i), \quad i \in \{1, \ldots, l_{u_m}\}, \qquad (4)$$
$$\bar{\mathbf{p}}_{n,j} = \mathrm{BiLSTM}(\mathbf{P}_n, j), \quad j \in \{1, \ldots, l_{p_n}\}, \qquad (5)$$
$$\bar{\mathbf{r}}_k = \mathrm{BiLSTM}(\mathbf{R}, k), \quad k \in \{1, \ldots, l_r\}, \qquad (6)$$

where $\bar{\mathbf{U}}_m = [\bar{\mathbf{u}}_{m,1}, \ldots, \bar{\mathbf{u}}_{m,l_{u_m}}]$, $\bar{\mathbf{P}}_n = [\bar{\mathbf{p}}_{n,1}, \ldots, \bar{\mathbf{p}}_{n,l_{p_n}}]$ and $\bar{\mathbf{R}} = [\bar{\mathbf{r}}_1, \ldots, \bar{\mathbf{r}}_{l_r}]$. The parameters of these three BiLSTMs are shared in our implementation.
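A minimal tf.keras sketch of such a shared BiLSTM encoder is given below; the hidden size follows Section 6.3, and everything else (layer names, shapes) is an assumption for illustration.

```python
# Sketch of the shared sentence encoder, Eqs. (4)-(6).
import tensorflow as tf

shared_bilstm = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(200, return_sequences=True), name="sentence_encoder")

def encode(embeddings):
    """embeddings: (batch, seq_len, d) word representations -> (batch, seq_len, 400)."""
    return shared_bilstm(embeddings)

# Reusing the same layer object for utterances, profile sentences and the
# response makes the three encoders share parameters, as described above.
```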

5.4 Matching Layer

The interactions between the context and the response and those between the persona and the response can provide useful matching information for deciding the matching degree between them. Here, the DIM model adopts the same strategy as in the IMN model Gu et al. (2019) which considers the global and bidirectional interactions between two sequences.

Take the context-response matching as an example. First, the context representation $\bar{\mathbf{C}} = [\bar{\mathbf{c}}_1, \ldots, \bar{\mathbf{c}}_{l_c}]$ with $l_c = \sum_{m=1}^{n_c} l_{u_m}$ is formed by concatenating the set of utterance representations $\{\bar{\mathbf{U}}_m\}_{m=1}^{n_c}$. Then, a soft alignment is performed by computing the attention weight between each tuple $\{\bar{\mathbf{c}}_i, \bar{\mathbf{r}}_j\}$ as

$$e_{ij} = \bar{\mathbf{c}}_i^\top \bar{\mathbf{r}}_j. \qquad (7)$$

After that, local inference is determined by the attention weights computed above to obtain the local relevance between a context and a response bidirectionally. For a word in the context, its relevant representation carried by the response is identified and composed using $e_{ij}$ as

$$\tilde{\mathbf{c}}_i = \sum_{j=1}^{l_r} \frac{\exp(e_{ij})}{\sum_{k=1}^{l_r} \exp(e_{ik})} \bar{\mathbf{r}}_j, \quad i \in \{1, \ldots, l_c\}, \qquad (8)$$

where the contents in $\bar{\mathbf{R}}$ that are relevant to $\bar{\mathbf{c}}_i$ are selected to form $\tilde{\mathbf{c}}_i$. Then, we define $\tilde{\mathbf{C}} = [\tilde{\mathbf{c}}_1, \ldots, \tilde{\mathbf{c}}_{l_c}]$. The same calculation is performed for each word in the response to form its relevant representation carried by the context as

$$\tilde{\mathbf{r}}_j = \sum_{i=1}^{l_c} \frac{\exp(e_{ij})}{\sum_{k=1}^{l_c} \exp(e_{kj})} \bar{\mathbf{c}}_i, \quad j \in \{1, \ldots, l_r\}, \qquad (9)$$

and we define $\tilde{\mathbf{R}} = [\tilde{\mathbf{r}}_1, \ldots, \tilde{\mathbf{r}}_{l_r}]$. To further enhance the collected information, the differences and element-wise products between $\{\bar{\mathbf{C}}, \tilde{\mathbf{C}}\}$ and between $\{\bar{\mathbf{R}}, \tilde{\mathbf{R}}\}$ are computed, and are then concatenated with the original vectors to obtain the enhanced representations as follows,

$$\hat{\mathbf{C}} = [\bar{\mathbf{C}}; \tilde{\mathbf{C}}; \bar{\mathbf{C}} - \tilde{\mathbf{C}}; \bar{\mathbf{C}} \odot \tilde{\mathbf{C}}], \qquad (10)$$
$$\hat{\mathbf{R}}^c = [\bar{\mathbf{R}}; \tilde{\mathbf{R}}; \bar{\mathbf{R}} - \tilde{\mathbf{R}}; \bar{\mathbf{R}} \odot \tilde{\mathbf{R}}]. \qquad (11)$$

So far we have collected the relevant information between the context and response. The enhanced context representation $\hat{\mathbf{C}}$ is further converted back to matching matrices of separated utterances as $\{\hat{\mathbf{U}}_m\}_{m=1}^{n_c}$.

The persona-response matching is conducted identically to the context-response matching introduced above, where the representations of profile sentences $\{\bar{\mathbf{P}}_n\}_{n=1}^{n_p}$ are used instead of the representations of context utterances $\{\bar{\mathbf{U}}_m\}_{m=1}^{n_c}$. The results of persona-response matching are denoted as $\{\hat{\mathbf{P}}_n\}_{n=1}^{n_p}$ and $\hat{\mathbf{R}}^p$.
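The sketch below illustrates Eqs. (7)-(11) for a single example in NumPy; it is a simplified illustration of the bidirectional cross-attention and enhancement, not the released TensorFlow code, and applies unchanged to both the context-response and persona-response branches.

```python
# NumPy sketch of the interactive matching step, Eqs. (7)-(11).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interactive_match(C, R):
    """C: (l_c, h) context word states; R: (l_r, h) response word states."""
    e = C @ R.T                             # Eq. (7): attention scores
    C_tilde = softmax(e, axis=1) @ R        # Eq. (8): response info for each context word
    R_tilde = softmax(e, axis=0).T @ C      # Eq. (9): context info for each response word
    C_hat = np.concatenate([C, C_tilde, C - C_tilde, C * C_tilde], axis=-1)  # Eq. (10)
    R_hat = np.concatenate([R, R_tilde, R - R_tilde, R * R_tilde], axis=-1)  # Eq. (11)
    return C_hat, R_hat
```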

5.5 Aggregation Layer

The aggregation layer converts the matching matrices of context utterances, profile sentences and response into a final matching feature vector.

First, each matching matrix $\hat{\mathbf{U}}_m$, $\hat{\mathbf{P}}_n$, $\hat{\mathbf{R}}^c$ and $\hat{\mathbf{R}}^p$ is processed by BiLSTMs as

$$\check{\mathbf{U}}_m = \mathrm{BiLSTM}(\hat{\mathbf{U}}_m), \qquad (12)$$
$$\check{\mathbf{P}}_n = \mathrm{BiLSTM}(\hat{\mathbf{P}}_n), \qquad (13)$$
$$\check{\mathbf{R}}^c = \mathrm{BiLSTM}(\hat{\mathbf{R}}^c), \qquad (14)$$
$$\check{\mathbf{R}}^p = \mathrm{BiLSTM}(\hat{\mathbf{R}}^p), \qquad (15)$$

where the four BiLSTMs share the same parameters in our implementation. Then, the aggregated embeddings are calculated by max pooling and last-hidden-state pooling operations as

$$\mathbf{u}_m^{agg} = [\max(\check{\mathbf{U}}_m); \mathrm{last}(\check{\mathbf{U}}_m)], \qquad (16)$$
$$\mathbf{p}_n^{agg} = [\max(\check{\mathbf{P}}_n); \mathrm{last}(\check{\mathbf{P}}_n)], \qquad (17)$$
$$\mathbf{r}^{c} = [\max(\check{\mathbf{R}}^c); \mathrm{last}(\check{\mathbf{R}}^c)], \qquad (18)$$
$$\mathbf{r}^{p} = [\max(\check{\mathbf{R}}^p); \mathrm{last}(\check{\mathbf{R}}^p)]. \qquad (19)$$

Next, the sequences of $\{\mathbf{u}_m^{agg}\}_{m=1}^{n_c}$ and $\{\mathbf{p}_n^{agg}\}_{n=1}^{n_p}$ are further aggregated to get the embedding vectors for the context and the persona respectively.

Context aggregation

As the utterances in a context are chronologically ordered, the utterance embeddings are sent into another BiLSTM following the chronological order of utterances in the context. Combined max pooling and last-hidden-state pooling operations are then performed to obtain the context embeddings as

$$\check{\mathbf{C}} = \mathrm{BiLSTM}([\mathbf{u}_1^{agg}, \ldots, \mathbf{u}_{n_c}^{agg}]), \qquad (20)$$
$$\mathbf{c} = [\max(\check{\mathbf{C}}); \mathrm{last}(\check{\mathbf{C}})]. \qquad (21)$$

Persona aggregation

As the profile sentences in a persona are independent of each other, an attention-based aggregation is designed to derive the persona embeddings as follows,

$$a_n = \frac{\exp(\mathbf{w}^\top \mathbf{p}_n^{agg} + b)}{\sum_{k=1}^{n_p} \exp(\mathbf{w}^\top \mathbf{p}_k^{agg} + b)}, \qquad (22)$$
$$\mathbf{p} = \sum_{n=1}^{n_p} a_n \mathbf{p}_n^{agg}, \qquad (23)$$

where $\mathbf{w}$ and $b$ are parameters that need to be estimated during training.

Last, the final matching feature vector is the concatenation of context, persona and response embeddings as

$$\mathbf{m} = [\mathbf{c}; \mathbf{r}^c; \mathbf{p}; \mathbf{r}^p], \qquad (24)$$

where the first two features describe the context-response matching, and the last two describe the persona-response matching.
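A simplified NumPy sketch of the aggregation steps is given below. For brevity it replaces the utterance-level BiLSTM of Eqs. (20)-(21) with a simple pooling, so it should be read as an illustration of the pooling and attention-based aggregation rather than the exact implementation; the trainable vector w and bias b are assumed inputs.

```python
# Sketch of the aggregation layer, Eqs. (16)-(24), for a single example.
import numpy as np

def pool(X):
    """Max pooling and last-hidden-state pooling over a (seq_len, h) matrix."""
    return np.concatenate([X.max(axis=0), X[-1]])

def aggregate(U_list, P_list, Rc, Rp, w, b):
    """U_list/P_list: lists of (len, h) BiLSTM outputs per utterance / profile sentence;
    Rc, Rp: (l_r, h) response outputs of the two matching branches."""
    u_agg = np.stack([pool(U) for U in U_list])   # Eqs. (12), (16)
    p_agg = np.stack([pool(P) for P in P_list])   # Eqs. (13), (17)
    c_emb = pool(u_agg)   # simplified stand-in for the utterance-level BiLSTM, Eqs. (20)-(21)
    a = np.exp(p_agg @ w + b)
    a = a / a.sum()                               # Eq. (22): attention over profile sentences
    p_emb = a @ p_agg                             # Eq. (23)
    return np.concatenate([c_emb, pool(Rc), p_emb, pool(Rp)])  # Eq. (24)
```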

5.6 Prediction Layer

The final matching feature vector $\mathbf{m}$ is then sent into a multi-layer perceptron (MLP) classifier with softmax output. Here, the MLP is designed to predict whether a $(c, p, r)$ triple matches appropriately based on the derived matching feature vector. Finally, the MLP returns a probability to denote the matching degree.
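A minimal tf.keras sketch of such a prediction MLP (with the 256 ReLU units mentioned in Section 6.3) could look as follows; the output layer and the way scores are grouped for training are illustrative assumptions, not the released code.

```python
# Sketch of the prediction MLP over the matching feature vector m.
import tensorflow as tf

prediction_mlp = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1)   # one matching score per (context, persona, response) triple
])
# Scores of all candidates for the same context are grouped, passed through a
# softmax, and trained with the multi-class cross-entropy loss of Section 3.
```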

6 Experiments

6.1 Dataset

We tested our proposed methods on the PERSONA-CHAT dataset Zhang et al. (2018) which contains multi-turn dialogues conditioned on personas. The dataset consists of 8939 complete dialogues for training, 1000 for validation, and 968 for testing. Response selection is performed at every turn of a complete dialogue, which results in 65719 dialogues for training, 7801 for validation, and 7512 for testing in total. Positive responses are true responses from humans and negative ones are randomly sampled. The ratio between positive and negative responses is 1:19 in the training, validation, and testing sets. There are 955 possible personas for training, 100 for validation, and 100 for testing, each consisting of 3 to 5 profile sentences. To make this task more challenging, a version of revised persona descriptions are also provided by rephrasing, generalizing, or specializing the original ones. Since the personas of both speakers in a dialogue are available, the response selection task can be conditioned on the speaker’s persona (“self persona”) or the dialogue partner’s persona (“their persona”) respectively.

6.2 Evaluation Metrics

We used the same evaluation metrics as in the previous work Zhang et al. (2018). Each model aimed to select the best-matched response from 20 available candidates for the given context $c$ and persona $p$. We calculated the recall of the true positive replies, denoted as hits@1. In addition, the mean reciprocal rank (MRR) Voorhees (1999) metric was also adopted to take the rank of the correct response over all candidates into consideration.
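For clarity, the two metrics can be computed from the per-case candidate scores as in the following sketch (function names are ours; the convention that index 0 holds the true response is an assumption for illustration).

```python
# Sketch of hits@1 and MRR over a list of per-case score arrays.
import numpy as np

def hits_at_1(scores_list):
    """scores_list: iterable of arrays, one per test case, with the true response at index 0."""
    return np.mean([int(np.argmax(s) == 0) for s in scores_list])

def mrr(scores_list):
    ranks = [1 + np.sum(np.asarray(s) > s[0]) for s in scores_list]  # rank of the true response
    return np.mean([1.0 / r for r in ranks])
```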

6.3 Training Details

For building the IMN, IMN_ctx, IMN_utr and DIM models, the Adam method Kingma and Ba (2015) was employed for optimization with a batch size of 16. The initial learning rate was 0.001 and was exponentially decayed by 0.96 every 5000 steps. Dropout Srivastava et al. (2014) with a rate of 0.2 was applied to the word embeddings and all hidden layers. A word representation is a concatenation of a 300-dimensional GloVe embedding Pennington et al. (2014), a 100-dimensional embedding estimated on the training set using the Word2Vec algorithm Mikolov et al. (2013), and 150-dimensional character-level embeddings computed with window sizes {3, 4, 5}, each having 50 filters. The word embeddings were not updated during training. All hidden states of the LSTMs have 200 dimensions. The MLP at the prediction layer has 256 hidden units with ReLU Nair and Hinton (2010) activation. The maximum number of characters in a word, that of words in a context utterance, of utterances in a context, and of words in a response were set to be 18, 20, 15, and 20, respectively. We padded with zeros if the number of utterances in a context was less than 15; otherwise, we kept the last 15 utterances. For the IMN_ctx, IMN_utr and DIM models, the maximum number of words in a profile sentence and that of profile sentences in a persona were set to be 15 and 5, respectively. Similarly, we padded with zeros if the number of profile sentences in a persona was less than 5. The development set was used to select the best model for testing.
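For convenience, the hyperparameters listed above can be collected into a single configuration; the key names below are ours, not those of the released code.

```python
# Hyperparameters from Section 6.3 gathered into one illustrative config dict.
config = {
    "optimizer": "adam", "batch_size": 16,
    "learning_rate": 1e-3, "lr_decay": 0.96, "decay_steps": 5000,
    "dropout": 0.2, "lstm_hidden": 200, "mlp_hidden": 256,
    "glove_dim": 300, "word2vec_dim": 100,
    "char_filters": {3: 50, 4: 50, 5: 50},  # 150-dim character-level embeddings
    "max_word_len": 18, "max_utterance_len": 20, "max_utterances": 15,
    "max_response_len": 20, "max_profile_len": 15, "max_profiles": 5,
}
```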

All code was implemented in the TensorFlow framework Abadi et al. (2016) and is published to help replicate our results: https://github.com/JasonForJoy/DIM.

6.4 Experimental Results

Model         hits@1   MRR
IR baseline   21.4     -
Starspace     31.8     -
Profile       31.8     -
KV Profile    34.9     -
IMN           63.8     75.8
Table 2: Evaluation results of the IMN model and previous methods on the PERSONA-CHAT dataset without using personas. All the results except ours are copied from Zhang et al. (2018).
Model (hits@1 / MRR)   Self Persona, Original        Self Persona, Revised        Their Persona, Original      Their Persona, Revised
IR baseline            41.0 (+19.6) / -              20.7 (-0.7) / -              18.1 (-3.3) / -              18.1 (-3.3) / -
Starspace              48.1 (+16.3) / -              32.2 (+0.4) / -              24.5 (-7.3) / -              26.1 (-5.7) / -
Profile                47.3 (+15.5) / -              35.4 (+3.6) / -              28.3 (-3.5) / -              29.4 (-2.4) / -
KV Profile             51.1 (+16.2) / -              35.1 (+0.2) / -              29.1 (-5.8) / -              28.9 (-6.0) / -
FT-PC                  -                             60.7 (-) / -                 -                            -
IMN_ctx                64.3 (+0.5) / 76.2 (+0.4)     63.8 (+0.0) / 75.8 (+0.0)    63.7 (-0.1) / 75.8 (+0.0)    63.5 (-0.3) / 75.7 (-0.1)
IMN_utr                66.7 (+2.9) / 78.1 (+2.3)     64.0 (+0.2) / 76.0 (+0.2)    63.9 (+0.1) / 75.9 (+0.1)    63.7 (-0.1) / 75.7 (-0.1)
DIM                    78.8 (+15.0) / 86.7 (+10.9)   70.7 (+6.9) / 81.2 (+5.4)    64.0 (+0.2) / 76.1 (+0.3)    63.9 (+0.1) / 76.0 (+0.2)
Table 3: Performance of the proposed and previous methods on PERSONA-CHAT under various persona configurations. Each cell reports hits@1 / MRR. The meanings of "Self Persona", "Their Persona", "Original", and "Revised" can be found in Section 6.1. All results except ours are copied from Zhang et al. (2018); Mazaré et al. (2018). Numbers in parentheses indicate the gains or losses after adding the persona conditions.

Table 2 presents the evaluation results of our reproduced IMN model Gu et al. (2019) and previous methods on the PERSONA-CHAT dataset without using personas. It can be seen that the IMN model outperformed the other models on this dataset by a margin larger than 28.9% in terms of hits@1. As introduced above, our proposed models for personalized response selection were all built on IMN.

Table 3 presents the evaluation results of our proposed and previous methods on PERSONA-CHAT under various persona configurations. The t-test shows that the differences between our proposed models, i.e., IMN_utr and DIM, and the baseline model, i.e., IMN_ctx, were both statistically significant with p-value < 0.01. We can see that the fine-grained persona fusion at the utterance level rendered a hits@1 improvement of 2.4% and an MRR improvement of 1.9% by comparing IMN_utr with IMN_ctx conditioned on original self personas. The DIM model outperformed its baseline IMN_ctx by a margin of 14.5% in terms of hits@1 and 10.5% in terms of MRR. Compared with the FT-PC model Mazaré et al. (2018), which was first pretrained using a large-scale corpus and then fine-tuned on the PERSONA-CHAT dataset, the DIM model outperformed it by a margin of 10.0% in terms of hits@1 conditioned on revised self personas. Another advantage of DIM is that it was trained in an end-to-end mode without pretraining or using any external knowledge. Lastly, the DIM model outperforms previous models by margins larger than 27.7% in terms of hits@1 conditioned on original self personas.

Improvement of Using Personas

Examining the numbers that indicate the gains or losses after adding persona conditions in Table 3, we can see that the context-level persona fusion improves the performance of previous models significantly when original self personas are used. However, the gain achieved by the IMN_ctx model is limited. One possible reason is that the IMN model performs attention-based interactions between the context and the response in order to get their matching feature for response selection. Thus, the context embeddings shown in Figure 1(a) contain information from both the context and the response, which may be inappropriate for the subsequent context-level persona fusion shown in Eq. (1). The improvement achieved by the DIM model is much higher because it adopts a dual matching framework to address this issue.

Original vs. Revised

Compared with using original personas, it is more difficult for the models conditioned on the revised personas to extract useful persona information, as shown by the limited improvement achieved by the previous models in Table 3. One possible reason is that there are fewer shared words between the response and a persona revised by rephrasing, generalizing, or specializing, which increases the difficulty of understanding the persona and its relationships with the response. For example, it is easier for models to judge the matching degree between the original profile "Autumn is my favorite season." and the response "This is my favorite time of the year season wise." than between the revised profile "I love watching the leaves change colors." and the same response. On the contrary, our proposed DIM model still obtains a hits@1 improvement of 6.9% and an MRR improvement of 5.4% when conditioned on the revised self personas, which can be attributed to the direct and interactive persona-response matching used in this model.

Self vs. Their

As shown in Table 3, no significant gains can be obtained when the models are conditioned on the personas of dialogue partners. Note that there are no significant performance losses with our proposed methods, while the losses of previous models are 2.4% to 7.3% in terms of hits@1.

7 Analysis

7.1 Ablations

Model        hits@1   MRR
DIM          78.8     86.7
  - persona  63.8     75.8
  - context  48.8     60.9
Table 4: Ablation tests of removing either persona-response matching or context-response matching in the DIM model conditioned on original self personas.

To demonstrate the importance of the dual matching framework followed by our proposed DIM model, ablation tests were performed using the original self personas, and the results are shown in Table 4. We can see that both the persona-response matching and the context-response matching contribute to the performance of the DIM model. It is reasonable that the context-response matching is more important, because contexts provide the fundamental semantic descriptions for response selection. On the other hand, the persona-response matching alone can still achieve a hits@1 of 48.8% and an MRR of 60.9%, which shows the usefulness of utilizing persona information to select the best-matched response.

7.2 Interactive Matching in DIM

In order to investigate the effectiveness of the interactive matching between the context and the response and that between the persona and the response in the DIM model, a case study was conducted by visualizing the response-to-context and response-to-persona attention weights used in Eq. (9). The results are shown in Figure 3. We can see that some important words such as "dogs" in the response selected their relevant words such as "animals" in the context to derive the context-response matching features. Some important profile sentences such as "I love animals and have two dogs." also obtained large attention weights for deriving the persona-response matching features. This experimental result confirms our formulation of the task of personalized response selection as a dual matching problem.

(a) Response-to-context attention weights
(b) Response-to-persona attention weights
Figure 3: Visualizations of (a) response-to-context or (b) response-to-persona attention weights at the matching layer for a test sample. The darker units correspond to larger values.
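For reference, heatmaps like those in Figure 3 can be produced from the attention weights of Eqs. (8) and (9) with a few lines of plotting code, as in the illustrative sketch below (function name and arguments are ours).

```python
# Sketch of plotting an attention heatmap for one test sample.
import matplotlib.pyplot as plt

def plot_attention(weights, response_tokens, context_tokens, path="attention.png"):
    """weights: (len(response_tokens), len(context_tokens)) attention matrix."""
    fig, ax = plt.subplots()
    ax.imshow(weights, cmap="Greys")   # darker units correspond to larger values
    ax.set_xticks(range(len(context_tokens)))
    ax.set_xticklabels(context_tokens, rotation=90)
    ax.set_yticks(range(len(response_tokens)))
    ax.set_yticklabels(response_tokens)
    fig.tight_layout()
    fig.savefig(path)
```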

7.3 Transfer Test

Train \ Test   Original   Revised
Original       78.8       66.3
Revised        77.6       70.7
Table 5: hits@1 results of transfer tests on the DIM model.

Transfer tests were conducted by training and evaluating the DIM model using mismatched types of personas. The results are reported in Table 5. They show that the DIM model achieved better performance when testing on the same type of personas as used in training. Meanwhile, the model trained on the revised personas and tested on the original personas suffered a smaller loss than the one trained on the original personas and tested on the revised personas, which shows that the revised personas provide the DIM model with better generalization ability than the original ones.

8 Conclusions

In this paper, we formulate the task of personalized response selection as a dual matching problem, searching for a response that can properly match the given context and persona simultaneously. A new model named dually interactive matching network (DIM) is proposed, which performs interactive matching between the context and response as well as between the persona and response in parallel, in order to derive the final matching features for personalized response selection. Experimental results show that DIM improves over the IMN models with context-level or utterance-level persona fusion, outperforming previous methods and achieving a new state-of-the-art performance on the PERSONA-CHAT dataset. In the future, we will explore models that make better use of dialogue partners' personas for response selection.

Acknowledgments

We thank the anonymous reviewers for their valuable comments.

References