Incorporating a specific persona is vital to making chatbots appear more human-like, and focusing on a bot's linguistic style is one way to extend its personality. We examine various methods for building a chatbot that attempts to capture the characteristic tone of Star Trek characters.
In this work, we incorporate this aspect of persona by using multiple Encoder-Decoder models to handle different types of dialog. Our Enterprise to Computer bot (E2Cbot) consists of two sequence to sequence (SEQ2SEQ) models Sutskever et al. (2014); Bahdanau et al. (2014), one to handle Star Trek style input and another for everyday conversations. We use a Word Graph Filippova (2010); Banerjee et al. (2016) trained on Star Trek dialogs for shifting everyday conversation to the Star Trek domain. Sentences are ranked using a Bigram Language Model trained on Star Trek dialogs to find the most likely response.
The responses of the different models are evaluated using the relative perplexity of the utterance compared to Star Trek dialogs and replaced with alternates if they are too different. This ensures that the chatbot does not give an ungrammatical or incoherent response if the model confidence is below a certain threshold.
A key challenge with SEQ2SEQ models is that they frequently give uninteresting responses such as "I don't know" or "I don't understand." To keep the conversation engaging, we define rules that output standard responses when model confidence is low. Unlike prior work, E2Cbot has independent Encoder-Decoder models for the Star Trek and non-Star Trek domains. This structure gives promising results for Star Trek input as-is; for non-Star Trek input, we insert words into the SEQ2SEQ output to shift it towards the Star Trek domain.
We foresee a wide variety of applications for models like ours. The resulting chatbot can be used in various social situations as it can mimic different personae. For example, better emotional connection is established when an interlocutor matches the style and interests of the speaker. One could imagine sports-lover bots or extravert bots. Similarly, an ideal counselor would be more understanding and would encourage a person to talk whereas a general conversation bot would be more chatty and take a similar number of conversation turns as the user. Also, the utterance length of a counselor would be smaller compared to that of a general conversation bot.
2 Related work
This work is similar in intent to the rule-based 'Fake Spock' bot Pandora (2009) that imitates Spock. However, for a rule-based bot, the rules quickly become too complex to create and maintain. Fake Spock bot's outputs are illustrated in Table 1. E2Cbot is data-driven and can be trained to encompass multiple topics. It does not imitate any specific Star Trek character, but adds a Star Trek 'touch' to its replies about day-to-day topics.
| User | Fake Spock Bot |
| --- | --- |
| I am hungry captain. | What will you be eating? |
| You can do it. | I shall attempt it. |
| Mister Spock. | I am an AI chatting on the Internet. |
| Maximum power. | I see no logic in preferring Stonn over me. |
In the work of Li et al. (2016), personality is embedded into the SEQ2SEQ model to handle speaker inconsistency in response generation. They modified the LSTM cell to encode speaker information and inject it into the hidden layer at each time step. This Speaker Model captures the personality of the speaker and helps predict personalized responses throughout the generation process.
Our work is similar to the Neural-Storyteller model Kiros et al. (2015), which involves 'style-shifting', i.e., transferring standard image captions to the style of passages from novels. Each passage from a novel is mapped to a skip-thought vector. The RNN conditions on the skip-thought vector and generates the passage that it has encoded. A linear vector transformation F(x) shifts the input x from the caption-style vector c to the book-style vector b using the equation:
F(x) = x - c + b
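The style shift itself is plain vector arithmetic, so it can be illustrated directly; the sketch below applies F to toy three-dimensional vectors whose values are made up for illustration:

```python
import numpy as np

def style_shift(x, c, b):
    """Shift an encoding from caption style to book style: F(x) = x - c + b."""
    return x - c + b

# Toy vectors standing in for skip-thought encodings (values are illustrative).
x = np.array([1.0, 2.0, 3.0])   # encoding of the input caption
c = np.array([0.5, 0.5, 0.5])   # mean caption-style vector
b = np.array([0.0, 1.0, 0.0])   # mean book-style vector
shifted = style_shift(x, c, b)  # [0.5, 2.5, 2.5]
```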
3 Datasets
We used three different datasets to train individual parts of our model.
3.1 Star Trek Dataset
To train the Star Trek SEQ2SEQ component, we created our own dataset of dialogs pulled from various Star Trek T.V. episodes and movies Dialogs (n.d.). The initial cleaning was done using an open-source GitHub repository Hogervorst (2016), followed by rule-based cleaning to remove stage directions. Post-response pairs were created using a method similar to that outlined by Lowe et al. (2015) for the Ubuntu dialog corpus: the same exchange between characters was used to generate multiple pairs by including the context as well. The final dataset consisted of 100,990 post-response pairs with an average utterance length of 14.3 words.
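The context-augmented pair construction can be sketched as follows; the `make_pairs` helper and its two-turn context window are illustrative assumptions, not details taken from the actual dataset build:

```python
def make_pairs(exchange, max_context=2):
    """Turn one multi-turn exchange into (post, response) pairs, where
    each post includes up to max_context preceding turns as context
    (the window size here is an illustrative assumption)."""
    pairs = []
    for i in range(1, len(exchange)):
        start = max(0, i - max_context)
        post = " ".join(exchange[start:i])
        pairs.append((post, exchange[i]))
    return pairs

turns = ["Red alert.", "Shields up.", "Aye, captain."]
pairs = make_pairs(turns)
# [("Red alert.", "Shields up."),
#  ("Red alert. Shields up.", "Aye, captain.")]
```

Each later turn thus becomes a response to the concatenation of the turns that preceded it, yielding multiple training pairs per exchange.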
3.2 Cornell Movie-Dialog Corpus
We used the Cornell Movie-Dialog Corpus by Danescu-Niculescu-Mizil and Lee (2011) to train a SEQ2SEQ component that handles general, non-Star Trek conversations. It contains 199,455 post-response pairs with an average utterance length of 12.82 words.
3.3 Twitter Dataset
We used an open-source Twitter dataset Ma (2017) to train a binary classifier to better detect non-Star Trek style inputs. This dataset is meant to capture the regular, non-Star Trek conversation that a user might attempt to have with E2Cbot. We used 50,000 post-response tweet pairs with an average utterance length of 16.18 words.
4 Model
Figure 1 shows the pipeline of our model (our code is available at https://github.com/GJena/CIS-700-7_Chatbot-Project). We use multiple SEQ2SEQ models to cover both Star Trek-like dialogs and normal dialogs; the Cornell movie dataset handles everyday conversation. We discuss the model in detail in the following subsections.
4.1 Binary Classifier
A logistic regression-based binary classifier routes the user utterance to either the Star Trek SEQ2SEQ model or the Cornell Movie Data SEQ2SEQ model. The classifier was trained on 200,000 Star Trek dialogs, 100,000 Cornell Movie Dialog Corpus dialogs and 100,000 tweets from the Twitter dataset. From them, we randomly sampled 80% dialogs as training and 20% dialogs as test data. The feature space was constructed using top 10,000 term frequency-inverse document frequency (TF-IDF) unigrams and bigrams after removing stop words. The classifier had a 95% accuracy on the test set.
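A minimal sketch of such a router using scikit-learn; the toy training sentences below are invented stand-ins for the real corpora, and hyperparameters beyond those stated above are library defaults:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the Star Trek and everyday-conversation corpora.
posts = [
    "set a course for the neutral zone warp factor five",
    "energize the transporter and beam us aboard",
    "raise shields and arm photon torpedoes",
    "what did you have for lunch today",
    "the weather is lovely this afternoon",
    "did you watch the game last night",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = Star Trek style, 0 = everyday

# TF-IDF unigrams and bigrams (stop words removed, capped vocabulary)
# feeding a logistic regression, mirroring the setup described above.
router = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english",
                    max_features=10000),
    LogisticRegression(),
)
router.fit(posts, labels)

# Route a new utterance to the Star Trek or Cornell SEQ2SEQ model.
route = router.predict(["engage warp drive and hail the klingon vessel"])[0]
```

On the real data the classifier reached 95% test accuracy; this toy version only demonstrates the routing interface.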
4.2 Star Trek SEQ2SEQ
The first SEQ2SEQ model was trained on the Star Trek data. It is an encoder-decoder with 3 hidden layers of 1024 units each. To compensate for the lack of data, we augmented the dataset by adding context as described in Section 3.1.
4.3 Cornell Movie Data SEQ2SEQ
The second SEQ2SEQ model was trained on Cornell Movie data. Its architecture is the same as the Star Trek SEQ2SEQ model.
4.4 Word Graph
We used a modified implementation of the Word Graph algorithm by Banerjee et al. (2016) for domain-specific linguistic styles. Star Trek dialogs were used to construct a word graph that stores words and their POS tags as nodes and adjacency as edges. The output generated by the normal-conversation SEQ2SEQ model is tokenized and tagged with the NLTK POS tagger Loper and Bird (2002), so each node represents a (<word>, <POS>) pair; words with multiple POS tags are added as separate nodes. The algorithm scans the input and consults the word graph for candidate words that can be added between any two adjacent words in the input, or at its start or end.
This implementation is especially good at inserting words such as 'Doctor' or 'Jim' at the start or end of sentences, owing to their high frequency in the Star Trek dialogs. A few examples of Word Graph output are shown in Figure 2.
Apart from adding names of characters to the sentence, the Word Graph algorithm can also append other words to make sentences grammatical. Figure 3 shows relevant examples.
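A simplified sketch of the candidate lookup: POS tags are omitted for brevity (the full algorithm keys nodes on (word, POS) pairs), and the toy corpus is invented:

```python
from collections import defaultdict

def build_word_graph(sentences):
    """Map each token to the set of tokens seen immediately after it,
    with <s>/</s> marking sentence start and end."""
    nxt = defaultdict(set)
    for sent in sentences:
        toks = ["<s>"] + sent.lower().split() + ["</s>"]
        for a, b in zip(toks, toks[1:]):
            nxt[a].add(b)
    return nxt

def candidate_insertions(graph, sentence):
    """Return (gap_index, word) pairs: words the corpus licenses between
    adjacent input tokens, including the start and end of the sentence."""
    toks = ["<s>"] + sentence.lower().split() + ["</s>"]
    cands = []
    for i, (a, b) in enumerate(zip(toks, toks[1:])):
        for w in graph.get(a, ()):
            if b in graph.get(w, ()):
                cands.append((i, w))
    return cands

graph = build_word_graph(
    ["fascinating , captain", "yes , doctor", "yes captain"])
cands = candidate_insertions(graph, "yes")
# The toy corpus licenses appending "captain" after "yes".
```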
4.5 Filtering using a Language Model
Since the Word Graph produces some ungrammatical outputs, a Bigram Language Model trained on the augmented Star Trek dialog corpus is used to rank the candidates. The sentence with the highest probability is chosen; if multiple sentences tie for the highest probability, we choose the one containing words from a handcrafted keyword list.
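The ranking step can be sketched with a small bigram model; the add-one smoothing here is an illustrative choice, and the toy corpus stands in for the augmented Star Trek dialogs:

```python
import math
from collections import Counter

class BigramLM:
    """Add-one-smoothed bigram language model (a sketch of the ranker)."""

    def __init__(self, sentences):
        self.unigrams, self.bigrams = Counter(), Counter()
        for sent in sentences:
            toks = ["<s>"] + sent.lower().split() + ["</s>"]
            self.unigrams.update(toks[:-1])          # bigram contexts
            self.bigrams.update(zip(toks, toks[1:]))
        self.vocab = len(self.unigrams) + 1          # +1 for unseen words

    def logprob(self, sentence):
        toks = ["<s>"] + sentence.lower().split() + ["</s>"]
        return sum(
            math.log((self.bigrams[(a, b)] + 1)
                     / (self.unigrams[a] + self.vocab))
            for a, b in zip(toks, toks[1:]))

lm = BigramLM(["make it so", "so say we all", "make it so number one"])
candidates = ["make it so", "so it make"]
best = max(candidates, key=lm.logprob)  # the fluent candidate wins
```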
4.6 Filtering Unlikely Response Candidates
If the perplexity of the response is very low or very high compared to the perplexity of Star Trek dialogs, a response from a standard response set or a reply in Klingon is output.
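This fallback can be sketched as below; the acceptance band around the reference perplexity and the canned responses are illustrative assumptions (65.69 is the reference perplexity of Star Trek dialogs reported in the evaluation):

```python
import random

# Canned fallbacks (illustrative; the actual standard-response set and
# Klingon replies are not listed in the paper).
STANDARD_RESPONSES = ["Fascinating.", "Highly illogical.", "Make it so."]
KLINGON_REPLIES = ["Qapla'!", "nuqneH?"]

# Reference perplexity from the evaluation; the band is an assumption.
REFERENCE_PPL, LOW_RATIO, HIGH_RATIO = 65.69, 0.2, 5.0

def filter_response(response, perplexity):
    """Keep a candidate whose perplexity is close to the Star Trek
    reference; otherwise fall back to a canned (or Klingon) reply."""
    ratio = perplexity / REFERENCE_PPL
    if LOW_RATIO <= ratio <= HIGH_RATIO:
        return response
    return random.choice(STANDARD_RESPONSES + KLINGON_REPLIES)

kept = filter_response("Shields at maximum, captain.", 70.0)
dropped = filter_response("asdf qwerty", 10000.0)
```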
Both filtering techniques dispose of ungrammatical and non-Star Trek response candidates, ensuring high-quality output.
5 Evaluation
There is little consensus on the best evaluation metrics for free-form chatbots. We therefore used a fixed set of standard input sentences against which we evaluate both bots.
Our evaluation dataset consists of 20 sentences of which 50% are normal conversation and 50% are Star Trek specific dialogs. Quantitative metrics used include perplexity, overlap with Star Trek vocabulary and human evaluation. Perplexity of the response is compared with the perplexity of Star Trek dialogs. Overlap with the Star Trek vocabulary is measured with the rationale that a higher overlap would better capture the Star Trek style.
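One plausible reading of the vocabulary-overlap metric, sketched below; its exact definition is not spelled out above, so this token-fraction version is an assumption:

```python
def vocab_overlap(response, star_trek_vocab):
    """Fraction of response tokens that appear in the Star Trek vocabulary."""
    toks = response.lower().split()
    if not toks:
        return 0.0
    return sum(t in star_trek_vocab for t in toks) / len(toks)

# Tiny illustrative vocabulary; the real one is built from the corpus.
vocab = {"captain", "warp", "shields", "engage"}
score = vocab_overlap("Engage the warp drive captain", vocab)  # 3/5 = 0.6
```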
In addition, ten human annotators rated the responses on three properties: correct grammar, coherence/relevance, and Star Trek relatedness. Annotators gave a score of 0 if the response did not exhibit the property and 1 if it did. The scorers comprised six people who are Star Trek fans and four who are not familiar with Star Trek.
Figure 4 shows some sample responses of the bots on our evaluation dataset. Both output valid responses, except for "Engage", a command generally given to activate the warp drive of the spaceship: E2Cbot gives a coherent response whereas Fake Spock bot's reply is irrelevant. This illustrates the shortcoming of a purely rule-based system, since it is difficult to cover all cases. In the last example, E2Cbot adds 'Spock' to the sentence to give it a Star Trek touch, while the Pandora bot gives a generic response. Table 2 shows the scores given by the annotators for the different metrics. Table 3 shows the perplexity of responses on the Star Trek dialog data and the vocabulary overlap of the responses with the Star Trek vocabulary. The perplexity of Star Trek dialogs was found to be 65.69.
| Star Trek style | 64% | 86% |

| Model | Average Perplexity | Vocabulary Overlap |
The overall performance of E2Cbot is better than that of the Fake Spock Pandora bot. However, the Pandora bot does a better job of generating grammatically correct responses, since it is rule-based. Our data-driven model produces more coherent responses, including responses to out-of-domain input. Additionally, the responses generated by E2Cbot have more Star Trek style.
6 Conclusion
Our model is able to automatically generate text in Star Trek style, even for out-of-domain input. It is, in general, an important advance beyond rule-based systems like the Fake Spock bot. Since we mainly use a data-driven approach, the model can easily be expanded to other domains such as news or sports, and can be extended to emulate specific fictional characters. Further exploration could combine the two models to achieve a superior model.
Future work involves experimenting with MemN2N by Sukhbaatar et al. (2015) and the Joint Attention mechanism by Xing et al. (2016) in place of SEQ2SEQ. MemN2N has been shown to retain information over long periods, and the Joint Attention mechanism allows the encoder to focus on multiple things at once. Additionally, the model might be augmented by explicitly adding a personality vector, along with tone and mood, as input.
References
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473.
- Banerjee et al. (2016) Siddhartha Banerjee, Prakhar Biyani, and Kostas Tsioutsiouliklis. 2016. Transforming chatbot responses to mimic domain-specific linguistic styles. In Second Workshop on Chatbots and Conversational Agent Technologies.
- Danescu-Niculescu-Mizil and Lee (2011) Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011.
- Dialogs (n.d.) Star Trek Dialogs. n.d. Accessed: 2017-04-22. http://www.chakoteya.net/StarTrek/.
- Filippova (2010) Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguistics, pages 322–330.
- Hogervorst (2016) Roel M. Hogervorst. 2016. Star trek dialog cleaning. Accessed: 2017-04-22. https://github.com/RTrek/TNG.
- Kiros et al. (2015) Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. arXiv preprint arXiv:1506.06726 .
- Li et al. (2016) Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. CoRR abs/1603.06155. http://arxiv.org/abs/1603.06155.
- Loper and Bird (2002) Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics - Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, ETMTNLP '02, pages 63–70. https://doi.org/10.3115/1118108.1118117.
- Lowe et al. (2015) Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909 .
- Ma (2017) Marsan Ma. 2017. Twitter dataset. Accessed: 2017-04-22. https://github.com/Marsan-Ma/chat_corpus.
- Pandora (2009) Pandora. 2009. Fake spock pandora bot. Accessed: 2017-04-22. https://www.chatbots.org/chat_bot/mr_spock.
- Sukhbaatar et al. (2015) Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. Weakly supervised memory networks. CoRR abs/1503.08895. http://arxiv.org/abs/1503.08895.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR abs/1409.3215. http://arxiv.org/abs/1409.3215.
- Xing et al. (2016) Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2016. Topic augmented neural response generation with a joint attention mechanism. CoRR abs/1606.08340. http://arxiv.org/abs/1606.08340.