Deep latent variable models have been shown to facilitate response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework that allows conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states of the two speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experimental results demonstrate the potential of our model: meaningful responses can be generated in accordance with the specified attributes.
Seq2seq neural networks, ever since their successful application in machine translation Sutskever et al. (2014), have demonstrated impressive results on dialog generation and spawned a great number of variants Vinyals and Le (2015); Yao et al. (2015); Sordoni et al. (2015); Shang et al. (2015). Vanilla seq2seq models suffer from the problem of generating too many generic responses (generic denotes safe, commonplace responses like “I don’t know”). One major reason is that the element-wise prediction models stochastic variation only at the token level, tempting the system to chase immediate short-term rewards and neglect the long-term structure. To cope with this problem, Serban et al. (2017) proposed a variational hierarchical encoder-decoder model (VHRED) that brought the idea of variational auto-encoders (VAE) Kingma and Welling (2013); Rezende et al. (2014) into dialog generation. For each utterance, VHRED samples a latent variable as a holistic representation so that the generative process will learn to maintain a coherent global sentence structure. However, the latent variable is learned purely in an unsupervised way and can only be explained vaguely as a higher-level decision such as topic or sentiment. Though effective at generating utterances with more information content, VHRED lacks the ability to explicitly control the generating process.
This paper presents a conditional variational framework for generating specific responses, inspired by the semi-supervised deep generative model of Kingma et al. (2014). The principal idea is to generate the next response based on the dialog context, a stochastic latent variable and an external label. Furthermore, the dialog context of each speaker is modeled separately because the speakers have different talking styles, personality and sentiment. The whole network functions like a conditional VAE Sohn et al. (2015); Yan et al. (2016). We test our framework in two scenarios. In the first scenario, the label serves as a signal to indicate whether the response is generic or not. By assigning different values to the label, either generic or non-generic responses can be generated. In the second scenario, the label represents an imitated sentiment tag. Before generating the next response, the appropriate sentiment tag is predicted to direct the generating process.
Our framework is expressive and extensible. The generated responses agree with the predefined labels while remaining meaningful. By changing the definition of the label, our framework can easily be applied to other specific areas.
To provide a better dialog context, we build a hierarchical recurrent encoder-decoder with separated context models (SPHRED). This section first introduces the concept of SPHRED, then explains the conditional variational framework and two application scenarios.
We decompose a dialog into two levels: sequences of utterances and sub-sequences of words, as in Serban et al. (2016). Let $D = (U_1, \dots, U_M)$ be a dialog with $M$ utterances, where $U_m = (w_{m,1}, \dots, w_{m,N_m})$ is the $m$-th utterance. The probability distribution of the utterance sequence factorizes as:
$$P_\theta(U_1, \dots, U_M) = \prod_{m=1}^{M} \prod_{n=1}^{N_m} P_\theta(w_{m,n} \mid w_{m,<n}, c_{m-1}),$$
where $\theta$ represents the model parameters and $c_{m-1}$ encodes the dialog context up to step $m-1$.
If we model the dialog context through a single recurrent neural network (RNN), it can only represent a general shared dialog state but fails to capture the respective status of each speaker. This is inapplicable when we want to infer implicit personal attributes from it and use them to influence the sampling process of the latent variable, as we will see in Section 2.4. Therefore, we model the dialog status of the two speakers separately. As displayed in Figure 1, SPHRED contains an encoder RNN over tokens and two status RNNs over utterances, one for each speaker. When a turn is processed, the status RNN of the corresponding speaker is updated with the last encoder RNN state of that turn. The higher-level context vector is the concatenation of both status vectors.
We will show later that SPHRED not only preserves individual speaker features well, but also provides a better holistic representation for the response decoder than the normal HRED.
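The routing of encoder states into the two speaker-specific status RNNs can be sketched as follows. This is an illustrative toy only: the real model uses GRU networks, which we replace here with a hypothetical linear update rule (`new_state = 0.5 * state + 0.5 * input`); the dimension and update function are assumptions, not the paper's actual parameterization.

```python
def toy_rnn_update(state, x):
    """Stand-in for a GRU status-RNN step (hypothetical linear rule)."""
    return [0.5 * s + 0.5 * v for s, v in zip(state, x)]

def sphred_context(utterance_encodings, dim=4):
    """utterance_encodings: per-turn encoder states, speakers alternating.

    Keeps one status state per speaker; each turn updates only the current
    speaker's state. The dialog context returned for every turn is the
    concatenation of both speakers' status vectors.
    """
    status = [[0.0] * dim, [0.0] * dim]       # speaker A, speaker B
    contexts = []
    for turn, enc in enumerate(utterance_encodings):
        speaker = turn % 2                    # speakers alternate turns
        status[speaker] = toy_rnn_update(status[speaker], enc)
        contexts.append(status[0] + status[1])  # concatenated context vector
    return contexts

# Three turns (A-B-A): speaker A's state is untouched during B's turn.
ctx = sphred_context([[1.0] * 4, [2.0] * 4, [3.0] * 4])
```

Note how, after the second turn, speaker A's half of the context is unchanged; a single shared context RNN would have mixed both speakers' states.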
VAEs have been used for text generation in Bowman et al. (2015); Semeniuta et al. (2017), where texts are synthesized from latent variables. Starting from this idea, we assume every utterance $U_t$ comes with a corresponding label $y_t$ and latent variable $z_t$. The generation of $U_t$ and $z_t$ is conditioned on the dialog context $c_t$ provided by SPHRED and on the additional class label $y_t$. This covers two situations: the label of the next sequence is either known (as in Scenario 1, Section 2.3) or unknown (Section 2.4). For each utterance, the latent variable is first sampled from a prior distribution. The whole dialog can be explained by the generative process:
$$p_\theta(U_t \mid c_t, y_t) = \int_{z_t} p_\theta(U_t \mid z_t, c_t, y_t)\, p_\theta(z_t \mid c_t, y_t)\, dz_t.$$
When the label is unknown, a suitable classifier is implemented to first predict it from the context vector. This classifier can be designed as, but is not restricted to, a multilayer perceptron (MLP) or support vector machine (SVM).
The training objective is derived as in Formula 5, which is a lower bound on the logarithm of the sequence probability. When the label is to be predicted, an additional classification loss (the first term) is added so that the label distribution can be learned together with the other parameters.
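Formula 5 does not survive in this extraction; as a sketch, the per-utterance bound follows the standard semi-supervised VAE objective of Kingma et al. (2014), adapted to our notation (context $c_t$, label $y_t$, latent $z_t$, utterance $U_t$) — the weight $\alpha$ and the exact conditioning are assumptions here, not the paper's verbatim formula:

```latex
\mathcal{L}(U_t) =
    \underbrace{-\alpha \log q_\phi(y_t \mid c_t)}_{\text{classification loss, only when } y_t \text{ is predicted}}
  \;+\; \mathbb{E}_{q_\phi(z_t \mid U_t, c_t, y_t)}\big[\log p_\theta(U_t \mid z_t, c_t, y_t)\big]
  \;-\; \mathrm{KL}\big(q_\phi(z_t \mid U_t, c_t, y_t)\,\big\|\,p_\theta(z_t \mid c_t, y_t)\big)
```

The reconstruction and KL terms form the usual variational lower bound; the first term is dropped when the label is observed, matching the two situations described above.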
A major focus of current research is to avoid generating generic responses, so in the first scenario we let the label indicate whether the corresponding sequence is a generic response, with $y = 1$ if the sequence is generic and $y = 0$ otherwise. To acquire these labels, we manually constructed a list of generic phrases like “I have no idea”, “I don’t know”, etc. Sequences containing any of these phrases are defined as generic, which in total constitute around 2 percent of the whole corpus. At test time, if the label is fixed as 0, we expect the generated responses to mostly belong to the non-generic class.
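The labeling rule above can be sketched as a simple substring match. Note that the actual phrase list is longer than shown here; the two phrases below are only the examples quoted in the text.

```python
# Hypothetical reconstruction of the rule-based genericness labeling:
# y = 1 if the utterance contains any listed generic phrase, else y = 0.
GENERIC_PHRASES = ["i have no idea", "i don't know"]

def generic_label(utterance):
    """Return 1 for a generic response, 0 for a non-generic one."""
    text = utterance.lower()
    return 1 if any(phrase in text for phrase in GENERIC_PHRASES) else 0
```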
No prediction is needed, so the training cost does not contain the first term of Formula 5. This scenario is designed to demonstrate that our framework can explicitly control which class of responses to generate by assigning the corresponding value to the label.
In the second scenario, we experiment with assigning imitated sentiment tags to generated responses. Personal sentiment is simulated by appending :), :( or :P at the end of each utterance, representing positive, negative or neutral sentiment respectively. For example, if we append “:)” to the original “OK”, the resulting “OK :)” becomes positive. The initial utterance of every speaker is randomly tagged. We consider two rules for the tags of subsequent utterances. Rule 1 keeps the sentiment tag constant for each speaker. Rule 2 assigns the sentiment tag of the next utterance as the average of the two preceding ones. Namely, if one is positive and the other is negative, the next response will be neutral.
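The two tag-transition rules can be sketched as below. The numeric encoding (:) = +1, :( = -1, :P = 0) and the rounding of half-values toward neutral are our assumptions for illustration; the paper only specifies that a positive-negative pair averages to neutral.

```python
# Hypothetical numeric encoding of the sentiment tags.
VAL = {":)": 1, ":(": -1, ":P": 0}
TAG = {1: ":)", -1: ":(", 0: ":P"}

def next_tag_rule1(own_previous_tag):
    """Rule 1: each speaker's sentiment stays constant across the dialog."""
    return own_previous_tag

def next_tag_rule2(tag_a, tag_b):
    """Rule 2: the next tag is the average of the two preceding tags,
    with half-values rounded toward neutral (an assumption)."""
    avg = (VAL[tag_a] + VAL[tag_b]) / 2
    return TAG[int(avg)]
```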
The label represents the sentiment tag, which is unknown at test time and must be predicted from the context. This prediction probability is modeled by a feedforward neural network. This scenario is designed to demonstrate that our framework can learn the manually defined rules, predict the proper label, and decode responses conforming to that label.
We conducted our experiments on the Ubuntu Dialog Corpus Lowe et al. (2015), which contains about 500,000 multi-turn dialogs. The vocabulary was set to the 20,000 most frequent words. All letters were converted to lowercase, and out-of-vocabulary (OOV) words were replaced with unk tokens.
Model hyperparameters were set as in the VHRED model, except that we halved the context RNN dimension. The encoder, context and decoder RNNs all use the Gated Recurrent Unit (GRU) structure Cho et al. (2014). Labels were mapped to embeddings of size 100, and word vectors were initialized with the public Word2Vec embeddings trained on the Google News Corpus (https://code.google.com/archive/p/word2vec/). Following Bowman et al. (2015), a fraction of the words in the decoder were randomly dropped. We multiplied the KL divergence and classification error by a scalar that starts from zero and gradually increases, so that training initially focuses on the stochastic latent variables. At test time, we generated responses using beam search with beam size 5 Graves (2012), and unk tokens were prevented from being generated. We implemented all models with the open-source Python library Tensorflow Abadi et al. (2016) and optimized them with the Adam optimizer Kingma and Ba (2014). Dialogs were cut into slices of 80 words each before being fed into GPU memory. All models were trained with batch size 128. We used a learning rate of 0.0001 for our framework and 0.0002 for the other models. Every model was evaluated on the validation dataset once per epoch, and training stopped when no improvement was obtained within 5 further epochs.
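The gradually increasing scalar applied to the KL and classification terms can be sketched as a linear warm-up. The exact schedule and the number of warm-up steps are not specified in the text, so both are assumptions here.

```python
# Minimal sketch of the annealing scalar described above: the weight on the
# KL divergence and classification error ramps linearly from 0 to 1 over a
# hypothetical number of warm-up steps, so early training focuses on the
# stochastic latent variables (reconstruction term).
def anneal_weight(step, warmup_steps=10000):
    """Linearly increase the auxiliary-loss weight from 0 to 1."""
    return min(1.0, step / warmup_steps)
```

The total loss at a given step would then be, e.g., `reconstruction + anneal_weight(step) * (kl + classification)`.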
Accurate automatic evaluation of dialog generation is difficult Galley et al. (2015); Pietquin and Hastie (2013). In our experiments, we conducted three embedding-based evaluations (average, greedy and extrema) Liu et al. (2016) on all our models; these map responses into a vector space and compute the cosine similarity. Though not necessarily accurate, the embedding-based metrics can to a large extent measure semantic similarity and test a model's ability to generate a response sharing a similar topic with the gold answer. The results of a GRU language model (LM), HRED and VHRED are also provided for comparison. For the two scenarios of our framework, we further measured the percentage of generated responses matching the correct labels (accuracy). In Liu et al. (2016), current popular metrics are shown to correlate poorly with human judgements. Therefore, we also carried out a human evaluation. 100 examples were randomly sampled from the test dataset. The generated responses from the models were shuffled and randomly distributed to 5 volunteers (all well-educated students holding at least a Bachelor's degree in computer science). Volunteers were asked to give each response a binary score along three aspects: grammaticality, coherence with the dialog history, and diversity. Every response was evaluated 3 times, and the majority judgement was adopted.
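The embedding-average metric can be sketched as follows: average the word vectors of each sentence and compare the two sentence vectors by cosine similarity. The toy 2-d vectors below stand in for real Word2Vec embeddings; the greedy and extrema variants aggregate word vectors differently but follow the same pattern.

```python
import math

def average_vec(tokens, emb):
    """Mean of the embeddings of the in-vocabulary tokens."""
    dim = len(next(iter(emb.values())))
    known = [t for t in tokens if t in emb]
    acc = [0.0] * dim
    for t in known:
        acc = [a + v for a, v in zip(acc, emb[t])]
    n = max(len(known), 1)          # avoid division by zero for OOV-only input
    return [a / n for a in acc]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def embedding_average(response, reference, emb):
    """Embedding-average score between a response and a gold reference."""
    return cosine(average_vec(response, emb), average_vec(reference, emb))
```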
As can be seen from Table 1, SPHRED outperforms both HRED and the LM on all three embedding-based metrics. This implies that separating the single context RNN into two independent parts leads to a better context representation. It is worth mentioning that the context RNN hidden state in SPHRED is only half the size of that in HRED, yet SPHRED still performs better with fewer parameters. Hence it is reasonable to feed this context information to our framework.
The last 4 rows of Table 1 display the results of our framework applied to the two scenarios described in Sections 2.3 and 2.4. SCENE1-A and SCENE1-B correspond to Scenario 1 with the label fixed as 1 and 0 respectively. 90.9% of the generated responses in SCENE1-A are generic and 86.9% in SCENE1-B are non-generic according to the manually built rule, which verifies the intended effect of the label. SCENE2-A and SCENE2-B correspond to rules 1 and 2 of Scenario 2. Both successfully predict the sentiment with very few mismatches (0.2% and 0.8%). This high accuracy further demonstrates SPHRED's capability of maintaining individual context information. We also experimented with substituting the encoder with a normal HRED; the resulting model could not predict the correct sentiment at all, because the context information of the two speakers is thoroughly mingled. The embedding-based scores of our framework remain comparable with SPHRED and are even better than VHRED. Imposing an external label did not cause any significant decline in quality.
Context: anybody in the house ????? __eou__ how to change the default ubuntu wall paper ? __eou__ how to change the default ubuntu wallpaper ? __eou__ __eot__ Is there an echo in your head ? Is there an echo in your head ? __eou__ __eot__ what do you mean ? __eou__ __eot__ Repeating = Bad . __eou__ __eot__ no body is answering me __eou__ __eot__
LM: What do you want to do with it ?
HRED: I don’t know .
SPHRED: If you want to change the default wallpaper , you can change the default theme
Context: How can I install seamonkey ? __eou__ To save me the pastebin __eou__ I am looking to install seamonkey , anyone ? __eou__ __eot__ http://www.seamonkey-project.org/ __eou__ __eot__ It is not in the ubuntu repository any more __eou__ __eot__
SCENE1-A: sorry i have no idea .
SCENE1-B: you need to find the package that you can use .
Context: hey guys , how can I add an extra xsession to ubuntu 10.04 ? :) __eou__ that is , I dont want GNOME :) __eou__ __eot__ try this : https://wiki.ubuntu.com/CustomXSession :( __eou__ __eot__
SCENE2-A: ok thanks :)
Context: hey guys , how can I add an extra xsession to ubuntu 10.04 ? :( __eou__ that is , I dont want GNOME :( __eou__ __eot__ try this : https://wiki.ubuntu.com/CustomXSession :) __eou__ __eot__
SCENE2-B: thank you for the help ! :P
We conducted human evaluations of VHRED and our framework (Table 3). All models received similar scores, except that SCENE1-A scored lower on coherence. This can be explained by the fact that SCENE1-A is trained to generate only generic responses, which limits its ability to take coherence into account. VHRED and Scenario 2 perform close to each other. Scenario 1, due to the effect of the label, receives extreme scores on diversity.
Table 3: Human judgements. G refers to Grammaticality; the last four columns form the confusion matrix with respect to coherence and diversity.
In general, the human-evaluation statistics on sentence quality are very similar for the VHRED model and our framework. This agrees with the metric-based results and supports the conclusion drawn in Section 3.3. Though the sample size is relatively small and human judgements are inevitably affected by subjective factors, we believe these results shed some light on the behavior of our framework.
A snippet of the generated responses can be seen in Table 2. Generally speaking, SPHRED better captures the intentions of both speakers, while HRED updates a common context state in which the main topic may gradually vanish due to the speakers' different talking styles. SCENE1-A and SCENE1-B are designed to reply to a given context in two different ways. We can see that both responses are reasonable and fall into the correct class. The third and fourth rows show the same context with different appended sentiment tags and rules; in both cases the model generates a suitable response and appends the correct tag at the end.
In this work, we propose a conditional variational framework for dialog generation and verify it in two scenarios. To model the dialog states of the two speakers separately, we first devised the SPHRED structure to provide the context vector for our framework. Our evaluation results show that SPHRED by itself provides a better context representation than HRED and helps generate higher-quality responses. In both scenarios, our framework successfully learns to generate responses in accordance with the predefined labels. Despite the restriction of an external label, the scores of the generated responses did not significantly decrease, meaning that we can constrain the generation to a specific class while still maintaining quality.
The manually defined rules, though primitive, represent two of the most common sentiment-shift conditions in reality. The results demonstrate the potential of our model. To apply it to real-world scenarios, we only need to adapt the classifier to detect more complex sentiments, which we leave to future research. External models can be used for detecting generic responses or classifying sentiment categories in place of rule- or symbol-based approximations. We focused on the controlling ability of our framework; future research can also experiment with bringing in external knowledge to improve the overall quality of the generated responses.
This work was supported by the National Natural Science Foundation of China under Grant Nos. 61602451 and 61672445, and by JSPS KAKENHI Grant Numbers 15H02754 and 16K12546.