Following the success of neural machine translation systems (Bahdanau et al., 2015; Sutskever et al., 2014; Cho et al., 2014), there has been growing interest in adapting encoder-decoder models to open-domain conversations (Sordoni et al., 2015; Serban et al., 2016a,b; Vinyals and Le, 2015). This is done by framing next-utterance generation as a machine translation problem: the dialog history is treated as the source sequence and the next utterance as the target sequence. The models are then trained end-to-end with a Maximum Likelihood (MLE) objective, without any hand-crafted structures such as the slot-value pairs or dialog manager used in conventional dialog modeling (Lagus and Kuusisto, 2002). Such data-driven approaches are worth pursuing for open-domain conversations because the next-utterance distribution exhibits high entropy, which makes it impractical to manually craft good features.
While encoder-decoder approaches are promising, lack of specificity has been one of the many challenges (Wei et al., 2017) in modelling non-goal-oriented dialogs. Recent encoder-decoder models tend to generate generic or dull responses such as "I don't know.". One of the main causes is the implicit imbalance present in dialog datasets, which can handicap the models into generating uninteresting responses.
Imbalances in a dialog dataset can be broadly divided into two categories: many-to-one and one-to-many. A many-to-one imbalance occurs when the dataset contains very similar responses to several different dialog contexts. In such scenarios, the decoder learns to ignore the context (treating it as noise) and behaves like a regular language model. Such a decoder does not generalize to new contexts and ends up predicting generic responses for all of them. In the one-to-many case, a certain type of generic response may be present in abundance compared to other plausible, interesting responses for the same dialog context (Wei et al., 2017). When trained with the maximum-likelihood (MLE) objective, generative models tend to place most of their probability mass on the most commonly observed responses for a given context, so we observe little variance in the generated responses. While these two imbalances are problematic for training a dialog model, they are inherent characteristics of dialog datasets and cannot simply be removed.
Several approaches have been proposed in the literature to address the generic response generation issue. Li et al. (2016) propose modifying the loss function to increase the diversity of the generated responses. The Multi-resolution RNN (Serban et al., 2017) addresses the issue by additionally conditioning on entity information from the previous utterances. Alternatively, Song et al. (2016) use external knowledge from a retrieval model to condition the response generation. Latent variable models inspired by Conditional Variational Autoencoders (CVAEs) are explored in (Shen et al., 2017; Zhao et al., 2017). While models with continuous latent variables tend to be uninterpretable, discrete latent variable models exhibit high variance during inference. Shen et al. (2017) append discrete attributes such as sentiment to the latent representation to generate the next utterance.
New Conditional Dialog Generation Model. Drawing insights from (Shen et al., 2017; Zhou et al., 2017), we propose a conditional utterance generation model in which the next utterance is conditioned on the dialog attributes of that utterance. We first predict the higher-level dialog attributes of the next response, and then generate the next utterance conditioned on the dialog context and the predicted attributes. A dialog attribute of an utterance refers to a discrete feature or aspect associated with it; examples include dialog-acts, sentiment, emotion, speaker id, speaker personality, or other user-defined discrete features. While previous works mainly view the next utterance's attribute as a control variable and lack a framework to learn to predict it, our method learns to predict the attributes in an end-to-end manner. This removes the need for attribute-annotated utterances at inference time.
RL for Dialog Attribute Selection. This further enables us to formulate dialog attribute selection as a reinforcement learning (RL) problem and to optimize the policy, initialized by supervised training, using REINFORCE (Williams, 1992). While supervised pre-training helps the model generate utterances coherent with the dialog history, the RL formulation encourages the model to generate utterances optimized for long-term rewards such as diversity or user-satisfaction scores. Optimizing the policy over the discrete dialog attribute space is more practical because the action space is low-dimensional, instead of spanning the entire vocabulary (as is common in policies that predict the next token to generate).
By using REINFORCE (Williams, 1992) to further optimize the dialog attribute selection process, we show improvements in the specificity of the generated responses both qualitatively (based on human evaluations) and quantitatively (with respect to diversity measures). The diversity scores distinct-1 and distinct-2 are computed as the number of distinct unigrams and bigrams divided by the total number of generated tokens, as described in (Li et al., 2016).
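As a minimal sketch of how these diversity scores can be computed (the function name and toy responses are ours, not from the paper), distinct-n counts the distinct n-grams across all generated responses and divides by the total number of generated tokens:

```python
# Hypothetical sketch of the distinct-1 / distinct-2 metrics (Li et al., 2016):
# number of distinct n-grams divided by the total number of generated tokens.

def distinct_n(responses, n):
    """Fraction of distinct n-grams over all generated tokens."""
    ngrams = set()
    total_tokens = 0
    for response in responses:
        tokens = response.split()
        total_tokens += len(tokens)
        for i in range(len(tokens) - n + 1):
            ngrams.add(tuple(tokens[i:i + n]))
    return len(ngrams) / total_tokens if total_tokens else 0.0

# Repeated generic responses drive the scores down.
responses = ["i dont know", "i dont know", "what a great goal"]
d1 = distinct_n(responses, 1)  # 7 distinct unigrams / 10 tokens = 0.7
d2 = distinct_n(responses, 2)  # 5 distinct bigrams / 10 tokens = 0.5
```

Note that, following Li et al. (2016), distinct-2 is also normalized by the total token count rather than by the number of bigrams.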
Improvements on Dialog Datasets Demonstrated through Quantitative & Qualitative Evaluations.
Additionally, we annotate an existing open-domain dialog dataset using dialog attribute classifiers trained on tagged datasets such as Switchboard (Godfrey et al., 1992; Jurafsky et al., 1997) and Frames (Schulz et al., 2017), and demonstrate both quantitative improvements (in terms of token perplexity and embedding metrics (Rus and Lintean, 2012; Mitchell and Lapata, 2008)) and qualitative improvements (based on human evaluations) in generating interesting responses. In this work, we show results with two types of dialog attributes: sentiment and dialog-acts. This approach is worth investigating because we need not invest heavily in training high-accuracy classifiers; we show empirically that annotations from classifiers with modest accuracy still improve token perplexity. We conjecture that irregularities in the auto-annotated dialog attributes induce a regularization effect during training, analogous to the dropout mechanism. Moreover, annotating utterances with many types of dialog attributes could increase this regularization effect and potentially tip utterance generation in favor of certain low-frequency but interesting responses.
In this work, we are mainly interested in exploring the impact of jointly modelling extra discrete dialog attributes along with the dialog history for next-utterance generation, and their contribution to addressing the generic response problem. Although our approach is flexible enough to additionally include latent variables, we focus here on the contribution of dialog attributes to addressing the "generic" response issue.
2 Attribute Conditional HRED
In this paper, we extend the HRED (Serban et al., 2016a) model (elaborated in the Appendix) by jointly modelling the utterances and the dialog attributes of each utterance. HRED is an encoder-decoder model consisting of a token-level RNN encoder and an utterance-level RNN encoder that summarize the dialog context, followed by a token-level RNN decoder that generates the next utterance. The joint probability factorizes into dialog attribute prediction, followed by next-utterance generation conditioned on the predicted dialog attributes, as shown in equation 1:

P(A_t, u_t | u_1, ..., u_{t-1}) = P(A_t | u_1, ..., u_{t-1}) P(u_t | A_t, u_1, ..., u_{t-1})    (1)
where A_t = {a_t^1, ..., a_t^n} denotes the n dialog attributes corresponding to the utterance u_t; u_t is the t-th utterance and u_1, ..., u_{t-1} are the past utterances. For instance, if we condition on three dialog attributes (sentiment, dialog-acts and emotion), we would have n = 3. Further, we assume that the dialog attributes are conditionally independent given the dialog context. Put simply, we predict the attributes of the next utterance and then condition on the previous context and the predicted attributes to generate the next utterance.
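The two-stage factorization can be illustrated with a toy sketch. The probability tables below are illustrative stand-ins for the learned networks, not the paper's model: we first sample the next utterance's attribute given the context, then sample the utterance conditioned on that attribute.

```python
import random

# Toy illustration of the factorization: P(a, u | context) =
# P(a | context) * P(u | a, context). Tables are made-up stand-ins
# for the attribute prediction network and the conditional decoder.
p_attr = {"question": 0.6, "statement": 0.4}            # P(a | context)
p_utt = {                                               # P(u | a, context)
    "question": {"why do you ask ?": 0.7, "who , me ?": 0.3},
    "statement": {"i am 25 .": 0.8, "old enough .": 0.2},
}

def sample(dist, rng):
    """Draw one item from a categorical distribution given as a dict."""
    r, acc = rng.random(), 0.0
    for item, p in dist.items():
        acc += p
        if r < acc:
            return item
    return item  # guard against floating-point round-off

rng = random.Random(0)
a_t = sample(p_attr, rng)        # stage 1: dialog attribute prediction
u_t = sample(p_utt[a_t], rng)    # stage 2: attribute-conditioned generation
```

Because the utterance table is indexed by the sampled attribute, the same context can yield qualitatively different responses depending on the predicted attribute.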
2.1 Dialog Attribute Prediction
We predict the dialog attribute of the next utterance conditioned on the context vector (a summary of the previous utterances) and the dialog attributes of the previous utterances. We first pass the attributes of all the previous utterances through an RNN. We then combine the last hidden state of this RNN with the context vector to predict the dialog attribute of the next utterance, as shown in Figure 1.
If the dialog dataset is not annotated with dialog attributes, we build a classifier (trained on a manually tagged dataset) to annotate them. This classifier is a simple MLP. We show empirically that this classifier need not have high accuracy to improve dialog modeling; we hypothesize that a few misclassified attributes may provide a regularization effect similar to the dropout mechanism (Srivastava et al., 2014).
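A 2-layer MLP classifier of this kind can be sketched as follows; the dimensions and random weights are purely illustrative (the paper does not specify them here), with 11 output classes matching the dialog-act label sets used later:

```python
import math, random

# Illustrative 2-layer MLP attribute classifier: maps an utterance
# representation to a distribution over attribute classes.
def mlp_classify(x, W1, b1, W2, b2):
    # Hidden layer with ReLU activation.
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output logits, then softmax over attribute classes.
    logits = [sum(wi * hi for wi, hi in zip(row, h)) + b
              for row, b in zip(W2, b2)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

rng = random.Random(0)
d_in, d_h, n_classes = 8, 16, 11   # toy sizes; 11 = top-10 acts + "others"
W1 = [[rng.gauss(0, 0.1) for _ in range(d_in)] for _ in range(d_h)]
b1 = [0.0] * d_h
W2 = [[rng.gauss(0, 0.1) for _ in range(d_h)] for _ in range(n_classes)]
b2 = [0.0] * n_classes
probs = mlp_classify([rng.gauss(0, 1) for _ in range(d_in)], W1, b1, W2, b2)
```

In practice the input x would be an utterance embedding, and the argmax over probs would serve as the annotated attribute.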
2.2 Conditional Response Generation
After dialog attribute prediction, we generate the next utterance conditioned on the dialog context and the predicted attributes, as shown in Figure 2. Token generation for the next utterance is modelled as in equation 2; the context and attributes are combined by concatenating their corresponding hidden states:

h_{m,n} = f(h_{m,n-1}, w_{m,n}, c_m)    (2)
where h_{m,n} is the recurrent hidden state of the decoder after seeing n words of the m-th utterance, w_{m,n} is the n-th token of the m-th utterance, f is the token-level response decoder, and

c_m = [s_{m-1}; e_m^1; ...; e_m^k]    (3)

where s_{m-1} is the summary of the previous utterances (the recurrent hidden state of the utterance-level encoder), and e_m^1, ..., e_m^k are the dialog attribute embeddings corresponding to the m-th utterance.
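The conditioning vector is a plain concatenation, which the following sketch makes concrete (the vector sizes and values are illustrative, not the model's actual dimensions):

```python
# Forming the decoder's conditioning vector c_m: the utterance-level
# context summary concatenated with the embedding of each predicted
# dialog attribute. All numbers below are illustrative.
context_summary = [0.1, -0.3, 0.5]   # s_{m-1}: utterance-level encoder state
sentiment_emb = [0.2, 0.4]           # embedding of the predicted sentiment
act_emb = [-0.1, 0.0]                # embedding of the predicted dialog-act

c_m = context_summary + sentiment_emb + act_emb  # concatenation
```

The decoder then receives c_m at every step, so every generated token is conditioned on both the context and the predicted attributes.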
During inference, we first predict the dialog attributes of the utterances in the dialog context. We then predict the dialog attribute of the next utterance conditioned on the predicted attributes and the hierarchical utterance representations. We combine the predicted attribute's embedding vector with the context representation to generate the next utterance. Viewed from another perspective, this formulates conditional utterance generation as a multi-task problem in which we jointly learn to predict the dialog attributes and the tokens of the next utterance.
2.3 RL for Dialog Attribute Prediction
Often the MLE objective does not capture the true goal of the conversation and offers no way to take developer-defined rewards into account for modelling such goals. MLE-based seq2seq models also fail to model the long-term influence of utterances on the dialog flow, causing coherency issues. This calls for a reinforcement learning (RL) framework, which can optimize policies to maximize long-term rewards. At its core, the MLE objective increases the conditional utterance probabilities and pushes the model to place higher probability on commonly occurring utterances. RL-based methods circumvent this by shifting the optimization problem to maximizing long-term rewards that can promote diversity, coherency, etc.
Previous approaches (Li et al., 2016; Kottur et al., 2017; Lewis et al., 2017) model the token prediction of the next utterance as a reinforcement learning problem and optimize the models to maximize hand-crafted rewards for diversity, coherency, and ease of answering. These approaches involve pre-training the encoder-decoder models with supervised learning and then refining utterance generation with RL using the hand-engineered rewards. Their state space consists of the dialog context representation (encoder hidden states), and their action space at a given time step includes all possible words the decoder can generate, which is very large.
While this approach is appealing, policy gradient methods are known to suffer from high variance with large action spaces. This makes training extremely unstable and requires significant engineering effort to train successfully.
Another potential drawback of acting directly over the vocabulary space is that the RL optimization procedure tends to strip away the linguistic aspects learned during supervised pre-training, as observed in (Kottur et al., 2017; Lewis et al., 2017). Since the primary focus of the RL objective is the final reward (e.g., diversity scores, which need not emphasize the linguistic quality of the generated responses), the optimization can lead the decoder to generate unnatural responses. We avoid both issues by reducing the action space to a higher-level abstraction: the dialog attributes. Our action space comprises the discrete dialog attributes and the state space is the dialog context. Intuitively, this lets the RL policy treat the dialog attributes as control variables for improving dialog flow and modelling long-term influence. For instance, if the input was "how old are you?", an RL policy optimized to maximize conversation length and engagement could set one of the next utterance's attributes to a question type and generate "why do you ask?" instead of a straightforward answer, keeping the conversation engaging. We believe this approach enables the model to predict such rare but interesting utterances, to which the MLE objective fails to give attention.
Our policy network comprises the encoders and the attribute prediction network. Given the previous utterances, the policy network first encodes them with the encoders; this encoded representation is passed to the attribute prediction network, whose output is the action. While there are many ways to design the reward function, we adopt the ease-of-answering reward introduced by Li et al. (2016): the negative log-likelihood of a set of manually constructed dull utterances (usually the most commonly occurring phrases in the dataset) in response to the generated utterance. Let S denote the set of dull utterances. With the dialog-acts sampled from the policy network, we generate the next utterance using the decoder. We then add this generated utterance to the context and predict the probability of seeing one of the dull utterances in the following step. This is used to compute the reward as follows:

r = - (1/|S|) * sum_{s in S} (1/N_s) log P(s | context, generated utterance)    (4)
where N_s is the number of tokens in the dull utterance s and |S| is the number of dull utterances. The normalization prevents the reward function from attending only to the longer dull responses. We use REINFORCE (Williams, 1992) to optimize our policy pi_theta. The expected reward is given by equation 5:

J(theta) = E_{a ~ pi_theta(a | context)} [ r ]    (5)
The gradient is estimated as in equation 6:

grad_theta J(theta) ~= (r - b) grad_theta log pi_theta(a | context)    (6)
where b is the reward baseline (computed as the running average of the rewards during training). We initialize the policy with supervised training and add an L2 loss to penalize the network weights for moving away from the supervised weights.
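A single update step of this scheme can be sketched numerically; all quantities below (logits, log-probabilities of dull responses, baseline) are made-up illustrations, and a greedy action stands in for sampling:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Policy over a small discrete attribute space (3 toy dialog-acts).
logits = [0.2, -0.1, 0.4]
pi = softmax(logits)
a = max(range(3), key=lambda i: pi[i])  # greedy stand-in for sampling

# Ease-of-answering reward: negative mean per-token log-likelihood of
# the dull responses given the generated utterance (log-probs made up).
dull_logprobs = [(-12.0, 4), (-9.0, 3)]  # (log P(s | context), N_s)
r = -sum(lp / n for lp, n in dull_logprobs) / len(dull_logprobs)

b = 2.0  # running-average reward baseline
# REINFORCE gradient w.r.t. the logits:
# d/d logit_i of log pi[a] = 1{i == a} - pi[i], scaled by (r - b).
grad = [(r - b) * ((1.0 if i == a else 0.0) - pi[i]) for i in range(3)]
```

Because the action space has only a handful of attributes rather than the whole vocabulary, the gradient estimate has far lower variance than token-level RL.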
3 Training Setup
Datasets: We first use the Reddit discourse dataset (Zhang et al., 2017) for training dialog attribute classifiers and modelling utterance generation.
Reddit: The Reddit discourse dataset (Zhang et al., 2017) is manually annotated with dialog-acts via crowdsourcing. The dialog-acts comprise answer, question, humor, agreement, disagreement, appreciation, negative reaction, elaboration, and announcement. It consists of conversations from randomly sampled Reddit threads.
Open-Subtitles: Additionally, we show results with the unannotated Open-Subtitles dataset (Tiedemann, 2009), from which we randomly sample up to 2 million dialogs for training and validation. We tag the dataset with dialog attributes using pre-trained classifiers.
We experiment with two types of dialog attributes in this paper: sentiment and dialog-acts. We annotate the utterances with sentiment tags (positive, negative, neutral) using the Stanford CoreNLP tool (Manning et al., 2014). We adopt the dialog-acts from two annotated dialog corpora: Switchboard (Godfrey et al., 1992) and Frames (Schulz et al., 2017).
Switchboard: The Switchboard corpus (Godfrey et al., 1992) is a collection of 1155 chit-chat-style telephone conversations on 70 topics. Jurafsky et al. (1997) revised the original tags into 42 dialog-acts. In our experiments, we restrict the dialog-acts to the 10 most frequently annotated tags in the corpus: Statement-non-opinion, Acknowledge, Statement-opinion, Agree/Accept, Abandoned or Turn-Exit, Appreciation, Yes-No-Question, Non-verbal, Yes answers, and Conventional-closing. Using the top-10 tags is a simple way to avoid the class-imbalance problem when training the dialog attribute classifiers (the Statement-non-opinion act is tagged 72824 times, while Thanking is tagged only 67 times).
Frames: Frames (Schulz et al., 2017) is a task-oriented dialog corpus collected in the Wizard-of-Oz fashion. It comprises 1369 human-human dialogues with an average of 15 turns per dialog. The wizards had access to a database of hotel and flight information and conversed with users to help finalize vacation plans. The dataset has 20 different types of dialog-act annotations. As with the Switchboard corpus, we adopt the 10 most frequently occurring acts for our experiments: inform, offer, request, suggest, switch-frame, no result, thank you, sorry, greeting, and affirm.
Model Details: We use two-layer GRUs (Chung et al., 2014) for both the encoder and the decoder, with hidden sizes of 512. We restrict the vocabulary for both datasets to the most frequently occurring tokens. The dialog attribute classifier for each dialog attribute is a simple 2-layer MLP.
4 Experimental Results
In this section, we present the experimental results along with qualitative analysis.
In Section 4.1, we discuss the dialog attribute classification results for different model architectures trained on the Reddit, Switchboard and Frames datasets.
In Section 4.2, we first demonstrate quantitative improvements (token perplexity/embedding based metrics) for the Attribute conditional HRED model with the manually annotated Reddit dataset. Further, we discuss the model perplexity improvements along with sample conversations and human evaluation results on the Open-Subtitles dataset. We annotate it with sentiment and dialog-acts (from Switchboard/Frames datasets) using pre-trained classifiers described in Section 4.1.
Finally, in Section 4.3, we analyze the quality of the generated responses after RL fine-tuning using diversity scores (distinct-1, distinct-2), sample conversations and human evaluation results for diversity and relevance.
4.1 Dialog Attribute Prediction
In this section, we present experiments with model architectures for dialog attribute prediction, using dialog-acts from the Reddit, Switchboard and Frames datasets. First, we show the performance of the dialog-act classifiers on the Reddit dataset in Table 1.
The first model predicts the dialog-acts from the current utterance alone: the tokens of the current utterance are fed through a two-layer GRU and the final hidden state is used to predict the dialog-acts. The second model predicts the current utterance's dialog-acts from the dialog-acts of the previous two utterances: we treat dialog-act prediction as a sequence modelling problem, feeding the dialog-acts into a single-layer GRU and predicting the current dialog-act conditioned on the previous ones. We settled on conditioning on the dialog-acts of the previous two utterances alone, as we did not observe any boost in classifier performance from older dialog-acts. As seen in Table 1, additionally conditioning on the dialog attributes improves classifier performance.
Next, we train classifiers to predict the dialog-acts of utterances in the Switchboard and Frames corpora. In our experiments, the number of act types is 11: the 10 most frequently occurring acts in the corpus plus an "others" category covering the remaining tags.
As seen in Table 2, classifier performance is not especially high, yet these classifiers still contribute to perplexity improvements for the conditional Seq2Seq models (discussed in Section 4.2). While we aim for better classifier performance, the primary objective of these dialog attribute classifiers is to tag unannotated open-domain dialog datasets. As future work, we will study how classification errors influence response generation.
4.2 Utterance Evaluation
Reddit: First, we evaluate Seq2Seq models trained on the manually annotated Reddit corpus, as shown in Table 3. Seq2Seq+Attr refers to our model, which additionally conditions on the dialog-acts. Note that we use the notation "Attr" for generality, as it may also refer to other dialog attributes such as sentiment later in this section. For both the baseline and the conditional Seq2Seq models, we use a dialog context of the previous two turns, as we did not observe significant improvement with three or more turns. We use a 2-layer GRU language model as a baseline for comparison. As seen in Table 3, Seq2Seq+Attr fares well both in perplexity and in the embedding metrics. The relatively high perplexity on the Reddit corpus could be due to the presence of several topics in the dataset (high entropy) and fewer dialogs compared to other open-domain dialog datasets.
Open-Subtitles: With promising results on the manually tagged Reddit corpus, we now evaluate our attribute conditional HRED model on the unannotated Open-Subtitles dataset. We tag the Open-Subtitles dataset with sentiment tags using the Stanford CoreNLP tool (Manning et al., 2014) and with dialog-acts from the Frames and Switchboard corpora using the pre-trained classifiers described in Section 4.1.
[Table 4: model perplexity for varying numbers of training dialogs (0.2 M, 0.5 M, 1 M, 2 M), per model and attribute combination]
In Table 4, we compare model perplexity when trained on varying dialog corpus sizes. In most cases, we observe that conditioning on acts from both Frames and Switchboard yields the lowest perplexity. The perplexity improvement is substantial for smaller datasets, which is also corroborated by the experiments on the Reddit dataset.
Human Evaluation: Following the human evaluation setting of Li et al. (2016), we randomly sample 200 input messages and the outputs generated by the Seq2Seq+Attr and Seq2Seq models. We present each pair to 3 judges and ask them to decide which of the two outputs is (1) more relevant and (2) more diverse or interesting. Ties are permitted. Results for the human evaluation are shown in Table 8. We observe that Seq2Seq+Attr performs better than the Seq2Seq model in both diversity and relevance.
[Table: human evaluation results, Seq2Seq+Attr vs Seq2Seq]
Note that the Seq2Seq+Attr model improves diversity more than relevance. This is in line with our expectations, as the purpose of the dialog attribute annotations is to help the model focus on less-frequent responses.
| Input | i wish i was home watching tv. |
| Seq2Seq | i dont know what i was thinking about |
| Seq2Seq+Attr | i cant wait to see it. |

| Input | He used from his charity to settle legal problems. |
| Seq2Seq | i have no idea what youre talking about |
| Seq2Seq+Attr | i dont think he is going to be a president. |

| Input | tell us how you really feel |
| Seq2Seq | i dont understand why |
| Seq2Seq+Attr | lmao i could hella picture your reaction |
Additionally, we present a few sample conversations in Table 6, where we observe that the Seq2Seq+Attr model generates more interesting responses.
4.3 RL For Dialog Attribute Prediction
For the RL fine-tuning, we report the diversity scores of responses generated by models trained on the Open-Subtitles dataset in Table 7. The diversity scores distinct-1 and distinct-2 are computed as the number of distinct unigrams and bigrams divided by the total number of generated tokens, following Li et al. (2015).
We use the model conditioned on acts from both Switchboard and Frames for the Seq2Seq+Attr and RL settings. The action space of the policy covers the 10 acts from Switchboard and the 10 acts from Frames. We choose a collection of commonly occurring phrases in the Open-Subtitles dataset as the set of dull responses for the reward computation in equation 4. We observe that RL fine-tuning improves over the conditional Seq2Seq in terms of the diversity scores.
Human Evaluation: As described in Section 4.2, we present 200 randomly sampled input-response pairs from the RL and Seq2Seq+Attr models to 3 judges and ask them to rate each sample for diversity and relevance. From Table 8, we see that the RL model performs significantly better in both diversity and relevance.
[Table 8: human evaluation results, RL vs Seq2Seq+Attr]
Qualitative Analysis: In Table 9, we report the percentage of commonly occurring generic responses from the Open-Subtitles dataset among the validation-set samples of the RL and Seq2Seq+Attr models. We observe much lower percentages of such generic responses after RL fine-tuning. It is interesting to note that the RL model has learned to minimize the generation of other dull responses like "i would love to be", "i would love to see", and "i dont want to", in addition to the expected dull responses used in the reward computation. At the same time, the RL model scores higher on the relevance metric, as seen in Table 8, which indicates that RL fine-tuning explores interesting responses while avoiding the generic ones.
| Generic Responses | RL (%) | Seq2Seq + Attr (%) |
| --- | --- | --- |
| thank you so much | 7.56 | 7.32 |
| i dont understand why | 0.0 | 15.64 |
| i would love to see | 0.66 | 5.65 |
| i dont know how | 0.0 | 13.97 |
| i dont want to | 1.66 | 3.99 |
| i dont know why | 0.0 | 3.66 |
| i would love to be | 0.99 | 2.21 |
| i have no idea | 4.31 | 3.33 |
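The percentages in the table above can be computed with a simple count of how many sampled responses begin with a known generic phrase. The phrase list and samples below are illustrative, not the paper's actual validation data:

```python
# Rough sketch of the qualitative analysis: the fraction of sampled
# responses that start with a known generic phrase (illustrative data).
generic = ["i dont know", "i dont understand", "i would love to"]

def generic_rate(samples):
    """Percentage of samples starting with any generic phrase."""
    hits = sum(any(s.startswith(g) for g in generic) for s in samples)
    return 100.0 * hits / len(samples)

samples = [
    "i dont know how",
    "this is the best thing ive ever seen",
    "i would love to see",
    "i was just thinking about the same thing",
]
rate = generic_rate(samples)  # 2 of 4 samples are generic -> 50.0
```

Running this per phrase (rather than over the whole list) yields per-response percentages like those reported in Table 9.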
Additionally, we present a few sample conversations in Table 10, where we observe that the RL model generates more diverse and relevant responses.
|Input:||i’m honestly a bit confused why no one has brought me or my books any cake|
|Seq2Seq+Attr:||i dont think i dont think anything|
|RL:||i cant wait to see you in the city.|
|Input:||ive been playing spaceship with my year old niece for the past few days|
|Seq2Seq+Attr:||i dont even know what i was talking about.|
|RL:||this is the best thing ive ever seen.|
|Input:||it makes me so happy that you like them|
|Seq2Seq+Attr:||i dont know what i was thinking about it|
|RL:||i was just thinking about the same thing|
5 Related Work
There are several works on dialog-act classification and clustering-based analysis (Reithinger and Klesen, 1997; Liu, 2006; Khanpour et al., 2016; Ang et al., 2005; Crook et al., 2009; Stolcke et al., 2000; Ezen-Can and Boyer, 2013). Shen et al. (2017) additionally add a sentiment feature to the latent variables in a VAE setting for utterance generation. In our work, we use dialog attributes from different sources (the Switchboard and Frames corpora) to model utterance generation in a more realistic setting. In the RL setting, existing efforts include Li et al. (2016); Dhingra et al. (2016); Jaques et al. (2016), which formulate token prediction as an RL policy in Seq2Seq models. However, searching over a huge vocabulary space typically requires a huge number of training samples and careful fine-tuning of the policy optimization algorithms. Additionally, as discussed in Section 2.3, it requires precautionary measures to prevent the RL algorithm from stripping away the linguistic aspects of the generated utterances. In related work, Serban et al. (2017) use dialog-acts as one of their hand-crafted features to select responses from an ensemble of dialog systems. They use dialog-acts in their RL policy, but their action space comprises responses from an ensemble of dialog models, with dialog-acts included in their distributed state representation.
6 Conclusion
In this work, we address the dialog utterance generation problem by jointly modeling the previous dialog context and discrete dialog attributes. We validate both quantitatively (model perplexity and embedding-based metrics) and qualitatively (human evaluation, sample conversations) that conditioning on dialog attributes helps generate interesting responses. Further, we formulate dialog attribute prediction as a reinforcement learning problem: we fine-tune the attribute selection policy network, initially trained with supervised learning, using REINFORCE and demonstrate improvements in diversity scores over the Seq2Seq model. In the future, we plan to extend the model to additional dialog attributes such as emotion and speaker persona, and to evaluate the controllability of the responses based on the dialog attributes.
References
- Ang et al. (2005) Jeremy Ang, Yang Liu, and Elizabeth Shriberg. 2005. Automatic dialog act segmentation and classification in multiparty meetings. In ICASSP (1), pages 1061–1064.
- Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings Of The International Conference on Representation Learning (ICLR 2015).
- Cho et al. (2014) K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. ArXiv e-prints.
- Chung et al. (2014) J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. ArXiv e-prints.
- Crook et al. (2009) Nigel Crook, Ramón Granell, and Stephen G. Pulman. 2009. Unsupervised classification of dialogue acts using a dirichlet process mixture model. In Proceedings of the SIGDIAL 2009 Conference, The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 11-12 September 2009, London, UK, pages 341–348.
- Dhingra et al. (2016) B. Dhingra, L. Li, X. Li, J. Gao, Y.-N. Chen, F. Ahmed, and L. Deng. 2016. Towards End-to-End Reinforcement Learning of Dialogue Agents for Information Access. ArXiv e-prints.
- Ezen-Can and Boyer (2013) Aysu Ezen-Can and Kristy Elizabeth Boyer. 2013. Unsupervised classification of student dialogue acts with query-likelihood clustering. In Proceedings of the 6th International Conference on Educational Data Mining, Memphis, Tennessee, USA, July 6-9, 2013, pages 20–27.
- Godfrey et al. (1992) John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Proceedings of the 1992 IEEE International Conference on Acoustics, Speech and Signal Processing - Volume 1, ICASSP’92, pages 517–520, Washington, DC, USA. IEEE Computer Society.
- Jaques et al. (2016) N. Jaques, S. Gu, D. Bahdanau, J. M. Hernández-Lobato, R. E. Turner, and D. Eck. 2016. Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control. ArXiv e-prints.
- Jurafsky et al. (1997) D. Jurafsky, R. Bates, N. Coccaro, R. Martin, M. Meteer, K. Ries, E. Shriberg, A. Stolcke, P. Taylor, and C. Van Ess-Dykema. 1997. Automatic detection of discourse structure for speech recognition and understanding. In 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings, pages 88–95.
- Khanpour et al. (2016) Hamed Khanpour, Nishitha Guntakandla, and Rodney D. Nielsen. 2016. Dialogue act classification in domain-independent conversations using a deep recurrent neural network. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 2012–2021.
- Kottur et al. (2017) S. Kottur, J. M. F. Moura, S. Lee, and D. Batra. 2017. Natural Language Does Not Emerge ’Naturally’ in Multi-Agent Dialog. ArXiv e-prints.
- Lagus and Kuusisto (2002) Krista Lagus and Jukka Kuusisto. 2002. Topic identification in natural language dialogues using neural networks. In Proceedings of the SIGDIAL 2002 Workshop, The 3rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Thursday, July 11, 2002 to Friday, July 12, 2002, Philadelphia, PA, USA, pages 95–102.
- Lewis et al. (2017) M. Lewis, D. Yarats, Y. N. Dauphin, D. Parikh, and D. Batra. 2017. Deal or No Deal? End-to-End Learning for Negotiation Dialogues. ArXiv e-prints.
- Li et al. (2015) J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. 2015. A Diversity-Promoting Objective Function for Neural Conversation Models. ArXiv e-prints.
- Li et al. (2016) J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. 2016. A diversity-promoting objective function for neural conversation models. In The North American Chapter of the Association for Computational Linguistics (NAACL), pages 110–119.
- Li et al. (2016) J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. Jurafsky. 2016. Deep Reinforcement Learning for Dialogue Generation. ArXiv e-prints.
- Liu (2006) Yang Liu. 2006. Using SVM and error-correcting codes for multiclass dialog act classification in meeting corpus. In INTERSPEECH 2006 - ICSLP, Ninth International Conference on Spoken Language Processing, Pittsburgh, PA, USA, September 17-21, 2006.
- Manning et al. (2014) Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60.
- Merity et al. (2017) S. Merity, N. Shirish Keskar, and R. Socher. 2017. Regularizing and Optimizing LSTM Language Models. ArXiv e-prints.
- Merity et al. (2018) S. Merity, N. Shirish Keskar, and R. Socher. 2018. An Analysis of Neural Language Modeling at Multiple Scales. ArXiv e-prints.
- Mitchell and Lapata (2008) Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In ACL, pages 236–244.
- Reithinger and Klesen (1997) Norbert Reithinger and Martin Klesen. 1997. Dialogue act classification using language models. In EuroSpeech.
- Rus and Lintean (2012) Vasile Rus and Mihai Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 157–162. Association for Computational Linguistics.
- Schulz et al. (2017) Hannes Schulz, Jeremie Zumer, Layla El Asri, and Shikhar Sharma. 2017. A frame tracking model for memory-enhanced dialogue systems. CoRR, abs/1706.01690.
- Serban et al. (2017) I. V. Serban, C. Sankar, M. Germain, S. Zhang, Z. Lin, S. Subramanian, T. Kim, M. Pieper, S. Chandar, N. R. Ke, S. Rajeshwar, A. de Brebisson, J. M. R. Sotelo, D. Suhubdy, V. Michalski, A. Nguyen, J. Pineau, and Y. Bengio. 2017. A Deep Reinforcement Learning Chatbot. ArXiv e-prints.
- Serban et al. (2017) Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron C. Courville. 2017. Multiresolution recurrent neural networks: An application to dialogue response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3288–3294.
- Serban et al. (2016a) Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016a. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of AAAI.
- Serban et al. (2016b) Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2016b. A hierarchical latent variable encoder-decoder model for generating dialogues. CoRR, abs/1605.06069.
- Shen et al. (2017) X. Shen, H. Su, Y. Li, W. Li, S. Niu, Y. Zhao, A. Aizawa, and G. Long. 2017. A Conditional Variational Framework for Dialog Generation. ArXiv e-prints.
- Song et al. (2016) Y. Song, R. Yan, X. Li, D. Zhao, and M. Zhang. 2016. Two are Better than One: An Ensemble of Retrieval- and Generation-Based Dialog Systems. ArXiv e-prints.
- Sordoni et al. (2015) Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714.
- Srivastava et al. (2014) Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958.
- Stolcke et al. (2000) Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339–373.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112.
- Tiedemann (2009) Jörg Tiedemann. 2009. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria.
- Vinyals and Le (2015) Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.
- Wei et al. (2017) B. Wei, S. Lu, L. Mou, H. Zhou, P. Poupart, G. Li, and Z. Jin. 2017. Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A Comparison between Dialog and Translation. ArXiv e-prints.
- Williams (1992) Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning.
- Zhang et al. (2017) Amy X. Zhang, Bryan Culbertson, and Praveen Paritosh. 2017. Characterizing online discussion using coarse discourse sequences. In Proceedings of the 11th International AAAI Conference on Weblogs and Social Media, ICWSM ’17.
- Zhao et al. (2017) T. Zhao, R. Zhao, and M. Eskenazi. 2017. Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders. ArXiv e-prints.
- Zhou et al. (2017) H. Zhou, M. Huang, T. Zhang, X. Zhu, and B. Liu. 2017. Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory. ArXiv e-prints.