Conversational bots are becoming increasingly popular in a wide range of business areas. To support rapid bot development, a number of bot building platforms have been launched, for example, by Microsoft and Amazon. Despite this progress, developing a business-critical bot still requires serious effort, from design to the actual implementation of several components such as language understanding, state tracking, action selection, and language generation. Not only does this complexity prevent casual developers from building quality bots, but it also introduces an unavoidable degradation in performance due to non-trivial problems including unclear state representation design, insufficient labeled data, and error propagation down the pipeline.
Recently, end-to-end (E2E) approaches using Deep Neural Networks (DNNs) have shown the potential to solve such problems: the DNNs induce a latent representation in the course of the joint optimization of all components, without requiring any labeling of internal state. Despite these appealing aspects, neural E2E approaches also have major challenges to overcome. State-of-the-art systems require numerous dialogs just to learn simple behaviors, and it is generally expensive to collect a sufficient amount of dialogs for a target task. One possible solution to this data-intensiveness problem would be to repurpose already built models through fine-tuning with a few additional dialogs from the target task. For example, one can add a payment handling capability to a new bot by repurposing any model that is already trained on payment-related conversations. It has been shown, however, that neural models tend to forget what they previously learned when they continue to train on a new task, a phenomenon known as Catastrophic Forgetting French (1999). Another possible approach would be to compose only the relevant parts of each pretrained model for the target task. Unfortunately, it is also unclear how to compose neural models due to their lack of modularity.
There have been prior studies that partly address the data-intensiveness problem outside the neural model Wen et al. (2016a, b); Williams et al. (2017); Zhao et al. (2017); Eshghi et al. (2017). In this work, we instead propose to directly address the problem with recent developments in continual learning for neural models. Specifically, we adopt a domain-independent neural conversational model and introduce a novel Adaptive Elastic Weight Consolidation (AEWC) algorithm to continuously learn a new task without forgetting valuable skills that have already been learned. To test our method, we first continuously train a model on synthetic data that consists only of general conversation patterns, such as opening/closing turns, and then on a corpus of human-computer (H-C) dialogs in a customer support domain that contains almost no general conversation. We show that the resulting model is able to handle both general and task-specific conversations without forgetting general conversational skills. Second, we continuously train a model on a large amount of human-human (H-H) dialogs and then on a small number of H-C dialogs in the same customer support domain. As H-H conversations typically cover various out-of-domain topics, including general conversations, this allows us to show that the resulting model can carry out the target task while handling general conversations that do not occur in the H-C dialogs.
2 Related Work
Traditionally, a dialog system has a pipeline architecture, typically consisting of language understanding, dialog state tracking, dialog control policy, and language generation Jokinen and McTear (2009); Young et al. (2013). With this architecture, developing dialog systems requires designing the input and output representations of multiple components. It also often involves writing a large number of handcrafted rules and laboriously labeling dialog datasets to train statistical components. In order to avoid this costly manual endeavor, a line of research has emerged to introduce end-to-end trainable neural models Sordoni et al. (2015); Vinyals and Le (2015); Serban et al. (2016); Wen et al. (2016b); Bordes and Weston (2016). But the E2E neural approach is data-intensive, requiring thousands of dialogs to learn even simple behaviors. Broadly, there are two lines of work addressing the data-intensiveness problem. The first makes use of domain-specific information and linguistic knowledge to abstract out the data Wen et al. (2016b); Williams et al. (2017); Zhao et al. (2017); Eshghi et al. (2017). The second adopts data recombination approaches to generate synthetic data that mimics target-domain dialogs Wen et al. (2016a); Zhao et al. (2017). These prior approaches, however, partly bring back the downsides of traditional approaches: maintenance becomes harder as the number of rules grows; system quality depends on external expertise; and it is hard to scale out over different domains. Furthermore, the increased size of training data resulting from data recombination leads to a significant increase in training time. Unlike prior work, we approach the problem from the standpoint of continual learning, where a single neural network accumulates task-relevant knowledge over time and exploits this knowledge to rapidly learn a new task from a small number of examples.
Continual Learning for Neural Networks
Previous studies that address the catastrophic forgetting problem broadly fall into three groups. First, architectural approaches reduce interference between tasks by altering the architecture of the network. The simplest form is to copy the entire network for the previous task and add new features for a new task Rusu et al. (2016); Lee et al. (2016). Though this prevents forgetting on earlier tasks, the architectural complexity grows with the number of tasks. Fernando et al. (2017) propose a single network in which a subset of modules is picked for each task based on an evolutionary algorithm, alleviating the complexity issue. Second, functional approaches encourage similar predictions for the previous and new tasks. Li and Hoiem (2016) apply the old network to the training data of the new task and use the output as pseudo-labels, and Jung et al. (2016) perform a similar regularization on the distance between the final hidden activations. But the additional computation using the old network makes functional approaches computationally expensive. Lastly, implicitly distributed knowledge approaches use neural networks of large capacity to distribute knowledge for each task using dropout, maxout, or local winner-take-all Goodfellow et al. (2013); Srivastava et al. (2013). These earlier approaches, however, had limited success and failed to preserve performance on the old task when an extreme change to the environment occurred. Recently, Kirkpatrick et al. (2017) proposed elastic weight consolidation (EWC), which uses a point estimate of the Fisher information metric as a weighting factor for a distance penalty between the parameters of the new and old tasks. To alleviate the cost of exactly computing the diagonal of the Fisher metric, Zenke et al. (2017) presented an online method that accumulates the importance of individual weights over the entire parameter trajectory during training. This method, however, can yield an inaccurate importance measure when the loss function is not convex. Thus, we propose an adaptive version of the online method that applies exponential decay to the cumulative quantities.
3 Continual Learning for Conversational Agents
In this section, we describe a task-independent conversation model and an adaptive online algorithm for continual learning which together allow us to sequentially train a conversation model over multiple tasks without forgetting earlier tasks.
3.1 Task-Independent Conversation Model
As we need to use the same model structure across different tasks, including open-domain as well as task-oriented dialogs, we adopt a task-independent model. Our model should thus be able to induce a meaningful representation from raw observations, without access to hand-crafted task-specific features.[1] In order to achieve this goal, we employ a neural encoder for state tracking and a neural ranking model for action selection.

[1] Inspired by Williams et al. (2017), one can still make use of action masks without hurting task independence.
Due to the sequential nature of conversation, variants of Recurrent Neural Networks (RNNs) Medsker and Jain (1999) have been widely adopted for modeling conversation. State tracking models with an RNN architecture, however, tend to become less effective as the length of a sequence increases, due to gradual information loss along the recurring RNN units. To address this problem, we use a hierarchical recurrent encoder architecture, which has recently been adopted for generative conversation models Serban et al. (2016). Specifically, our model mimics the natural structure of language: a conversation consists of a sequence of utterances, an utterance is a sequence of words, and a word, in turn, is composed of characters.
In order to capture character-level patterns in word embeddings, we concatenate the word embeddings with the output of a bidirectional character-level RNN encoder. We use Long Short-Term Memory (LSTM) Hochreiter and Schmidhuber (1997) as the RNN unit, which takes an input vector $x_t$ and a state vector $h_{t-1}$ and outputs a new state vector $h_t$. Let $\mathcal{C}$ denote the set of characters and $\mathcal{W}$ the set of words. A word $w \in \mathcal{W}$ is a sequence of characters $(c_1, \ldots, c_n)$. We compute the embedding of a word, $e_w$, as follows:

$e_w = \overrightarrow{h}_n \oplus \overleftarrow{h}_1 \oplus E_W(w)$

where $\oplus$ denotes the vector concatenation operation, $\overrightarrow{h}_n$ and $\overleftarrow{h}_1$ are the final states of the forward and backward character-level LSTMs, and $E_C$ and $E_W$ are character embeddings and word embeddings, respectively.
Next, another bidirectional LSTM-RNN layer takes the word embeddings $e_{w_1}, \ldots, e_{w_m}$ of an utterance to generate an utterance embedding $e_u$:

$e_u = \overrightarrow{h}_m \oplus \overleftarrow{h}_1$
After that, a unidirectional LSTM-RNN layer takes as input a sequence of pairs of user utterance embeddings, $e_{u_t}$, and previous system action embeddings, $e_{a_{t-1}}$,[2] to induce state embeddings $s_t$:

$s_t = \mathrm{LSTM}(e_{u_t} \oplus e_{a_{t-1}}, s_{t-1})$

[2] We treat system actions the same as user utterances and encode both with a common LSTM-RNN encoder.
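As an illustration, the character-word-utterance-state hierarchy can be sketched as follows. This is a minimal NumPy sketch, not our actual implementation: it substitutes a plain tanh RNN cell for LSTM, uses tiny illustrative dimensions, and all names (`rnn_step`, `embed_word`, `track_state`, and so on) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(W, x, h):
    # one recurrent step: new state from input vector x and previous state h
    return np.tanh(W @ np.concatenate([x, h]))

def encode(W, xs, dim):
    # run the cell over a sequence and return the final state
    h = np.zeros(dim)
    for x in xs:
        h = rnn_step(W, x, h)
    return h

C, WD = 4, 8                                  # char / word embedding sizes
char_emb = {c: rng.normal(size=C) for c in "abcdefghijklmnopqrstuvwxyz'"}
word_emb = {}                                 # lazily created word embeddings
Wc_f = rng.normal(size=(C, 2 * C)) * 0.1      # forward char-level RNN
Wc_b = rng.normal(size=(C, 2 * C)) * 0.1      # backward char-level RNN
Wu_f = rng.normal(size=(8, 16 + 8)) * 0.1     # forward word-level RNN
Wu_b = rng.normal(size=(8, 16 + 8)) * 0.1     # backward word-level RNN
Ws = rng.normal(size=(32, 32 + 32)) * 0.1     # unidirectional state RNN

def embed_word(w):
    # char-level BiRNN final states concatenated with a word-level embedding
    if w not in word_emb:
        word_emb[w] = rng.normal(size=WD)
    chars = [char_emb[c] for c in w]
    fwd = encode(Wc_f, chars, C)
    bwd = encode(Wc_b, chars[::-1], C)
    return np.concatenate([fwd, bwd, word_emb[w]])   # 4 + 4 + 8 = 16 dims

def embed_utterance(utt):
    # word-level BiRNN over the word embeddings of one utterance
    words = [embed_word(w) for w in utt.split()]
    return np.concatenate([encode(Wu_f, words, 8),
                           encode(Wu_b, words[::-1], 8)])  # 16 dims

def track_state(turns):
    # state RNN over (user utterance, previous system action) pairs
    s = np.zeros(32)
    for user, prev_action in turns:
        x = np.concatenate([embed_utterance(user), embed_utterance(prev_action)])
        s = rnn_step(Ws, x, s)
    return s
```

For example, `track_state([("i forgot my password", "")])` yields a 32-dimensional state embedding that the ranker below can score candidate actions against.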
In a continual learning setting, there is no predefined action set from which a classifier selects an action across different tasks. Thus, we cast action selection as a ranking problem, where we consider the affinity strength between a state embedding $s_t$ (Eq. 4) and a set of candidate system action embeddings $e_a$ (Eq. 3):[3]

[3] We employ the same utterance-level encoder as in state tracking to yield the embeddings of the candidate system actions.
where the matrix $W$ projects the state embedding onto the action space. In order to optimize the projection matrix $W$, we adopt the Plackett-Luce model Plackett (1975). The Plackett-Luce model normalizes the ranking scores by transforming the real-valued scores into a probability distribution:

$p(a \mid s_t) = \frac{\exp(f(s_t, a))}{\sum_{a'} \exp(f(s_t, a'))}$

With the transformed probability distribution, we minimize the cross-entropy loss against the true label distribution. Thanks to the normalization, our ranking model penalizes negative candidates more effectively.
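Concretely, with a single true action per turn, the Plackett-Luce objective reduces to a softmax cross-entropy over the candidate scores. A small sketch under that assumption, with an illustrative bilinear scoring form and hypothetical names:

```python
import numpy as np

def scores(W, state, actions):
    # affinity between the state embedding and each candidate action
    # embedding; W projects the state onto the action space
    return np.array([a @ (W @ state) for a in actions])

def plackett_luce_loss(W, state, actions, true_idx):
    # normalize real-valued scores into a probability distribution, then
    # take cross-entropy against the true action; the normalization
    # penalizes every negative candidate, not just the top-scoring one
    s = scores(W, state, actions)
    s = s - s.max()                      # numerical stability
    probs = np.exp(s) / np.exp(s).sum()
    return -np.log(probs[true_idx])

rng = np.random.default_rng(1)
state = rng.normal(size=32)
actions = [rng.normal(size=16) for _ in range(10)]   # 1 true + 9 distractors
W = rng.normal(size=(16, 32)) * 0.1
loss = plackett_luce_loss(W, state, actions, true_idx=0)
```

A scorer that cannot separate the candidates at all gives exactly -log(1/10) with ten candidates; training pushes the loss below that baseline.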
3.2 Adaptive Elastic Weight Consolidation
In order to achieve continual learning, we need to minimize the total loss function summed over all tasks, $L = \sum_k L_k$, without access to the true loss functions of prior tasks. Catastrophic forgetting arises when minimizing the loss $L_k$ of the current task $k$ leads to an undesirable increase of the loss on prior tasks $L_l$ with $l < k$. Variants of the EWC algorithm tackle this problem by optimizing a modified loss function:

$\tilde{L}_k = L_k + \lambda \sum_i \Omega_i \left(\theta_i - \theta_i^{k-1}\right)^2$

where $\lambda$ represents a weighting factor between prior and current tasks, $\theta_i$ are the model parameters introduced in Section 3.1, $\theta_i^{k-1}$ are the parameters at the end of the previous task, and $\Omega_i$ is the regularization strength for parameter $i$. The bigger $\Omega_i$, the more influential the parameter. EWC, for example, defines $\Omega_i$ to be a point estimate equal to the diagonal entries of the Fisher information matrix at the final parameter values. Since EWC relies on a point estimate, we empirically noticed that it sometimes fails to capture the parameter importance when the loss surface is relatively flat around the final parameter values, as $\Omega_i$ essentially decreases to zero.
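The modified objective can be sketched directly from the formula above (names are ours; `task_loss` stands for the current task's loss $L_k$):

```python
import numpy as np

def surrogate_loss(task_loss, theta, theta_prev, Omega, lam):
    # quadratic penalty pulls each parameter toward its value at the end
    # of the previous task, weighted by its per-parameter importance
    return task_loss + lam * np.sum(Omega * (theta - theta_prev) ** 2)

theta_prev = np.array([1.0, -0.5])
Omega = np.array([10.0, 0.0])     # first parameter mattered for prior tasks
theta = np.array([0.8, 0.5])
loss = surrogate_loss(0.3, theta, theta_prev, Omega, lam=0.5)
```

Moving the unimportant second parameter is free, while moving the first costs 0.5 * 10 * (0.8 - 1.0)^2 = 0.2 on top of the task loss.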
In contrast to EWC, Zenke et al. (2017) compute an importance measure online by taking the path integral of the change in loss along the entire trajectory through parameter space. Specifically, the per-parameter contribution $\omega_i^k$ to changes in the total loss is defined as follows:

$\omega_i^k = -\int_{t^{k-1}}^{t^k} g_i(\theta(t))\, \theta_i'(t)\, dt$

where $\theta(t)$ is the parameter trajectory as a function of time $t$, $g_i = \partial L / \partial \theta_i$, and $\theta_i'(t) = \partial \theta_i / \partial t$. Note that the minus sign indicates that we are interested in decreasing the loss. In practice, we can approximate $\omega_i^k$ as the sum of the products of the gradient $g_i$ with the parameter updates $\Delta\theta_i$. Having defined $\omega_i^k$, $\Omega_i$ is defined such that the regularization term carries the same units as the loss, by dividing by the total displacement in parameter space:

$\Omega_i = \sum_{l < k} \frac{\omega_i^l}{\left(\Delta_i^l\right)^2 + \xi}$

where $\Delta_i^l = \theta_i^l - \theta_i^{l-1}$ quantifies how far the parameter moved during the training process for task $l$, and the damping term $\xi$ keeps the expression from exploding in cases where the denominator gets close to zero. Note that, with this definition, the quadratic surrogate loss in Eq. 7 yields the same change in loss over the parameter displacement as the loss function of the previous tasks.
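On a toy convex loss, the discrete approximation of the path integral and the normalized importance behave as described; a hedged sketch with hypothetical names:

```python
import numpy as np

def run_task(theta, lr=0.1, steps=100):
    # gradient descent on L(theta) = (theta - 1)^2, accumulating the
    # per-parameter contribution omega = -sum g * dtheta along the path
    omega, start = 0.0, theta
    for _ in range(steps):
        g = 2.0 * (theta - 1.0)
        dtheta = -lr * g
        omega += -g * dtheta          # equals +lr * g^2, so never negative
        theta += dtheta
    return theta, omega, theta - start

def importance(omega, displacement, xi=1e-3):
    # dividing by squared displacement gives the regularizer the same
    # units as the loss; xi damps near-zero denominators
    return omega / (displacement ** 2 + xi)

theta_end, omega, disp = run_task(0.0)
Omega = importance(omega, disp)
```

Here the parameter moves from 0 to roughly 1 while the loss drops by 1, so both the accumulated contribution and the resulting importance come out positive and of order one.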
The path integral over the entire trajectory of parameters during training, however, can yield an inaccurate importance measure when the loss function is not convex. As the loss function of neural networks is generally not convex, we propose to apply exponential decay to the cumulative quantities $\omega_i$ and $\Delta_i$:

$\omega_i \leftarrow \gamma\, \omega_i - g_i\, \Delta\theta_i, \qquad \Delta_i \leftarrow \gamma\, \Delta_i + \Delta\theta_i$

where $\gamma$ is a decay factor.
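One plausible form of the decayed updates, read as "decay the running quantities before adding each new contribution" (the constants below are illustrative):

```python
def aewc_step(omega, delta, grad, update, gamma=0.9):
    # decay the running quantities before adding each new contribution,
    # so early trajectory segments on a non-convex loss fade away
    omega = gamma * omega - grad * update     # decayed path-integral term
    delta = gamma * delta + update            # decayed displacement
    return omega, delta

omega = delta = 0.0
for _ in range(1000):
    # constant downhill step: grad = -1, parameter update = +0.1
    omega, delta = aewc_step(omega, delta, grad=-1.0, update=0.1)
```

Unlike the undecayed sums, which would grow without bound in this toy run, both accumulators converge to a finite value, 0.1 / (1 - gamma) = 1.0.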
[Table 1: # Dialogs, Avg. Dialog Len, Avg. User Len, and Avg. System Len for the train and test splits of each dataset.]
4 Experiments

In order to test our method, we used four dialog datasets: open_close, HH_reset_password, HC_reset_password, and HC_reset_password+.[4] Basic statistics of the datasets are shown in Table 1. Example dialogs are provided in the Appendix.

[4] The datasets are not publicly available, but we are unaware of suitable H-H and H-C paired data in the public domain.
open_close: A synthetic corpus that we created to clearly demonstrate the phenomenon of catastrophic forgetting and the impact of our method. In open_close, every conversation has only opening/closing utterances, without any task-related exchanges.
HH_reset_password: A real H-H conversation dataset that we obtained from our company’s text-based customer support chat system. All conversation logs are anonymized and filtered for a particular problem, which is “reset password” in this study. Data from this system is interesting in that H-H conversations typically cover not only task-related topics but also various out-of-domain topics, including general conversations. If we can transfer such conversational skills to an H-C conversation model, the H-C model will be able to handle many situations even without seeing relevant examples in its training data. Using H-H dialogs to bootstrap a task-oriented dialog system, however, has been shown to be difficult even with serious annotation effort Bangalore et al. (2008).
HC_reset_password: A real corpus of H-C conversations that we obtained from our company’s text-based customer support dialog system. This data is distinctive in that it comes from real users, and the system was developed by customer support professionals at our company with sophisticated rules. As the data was obtained from a live dialog system, however, it includes mistakes that the system made. Thus, we modified the dialog data to correct the system’s mistakes by replacing each mistake with the most appropriate system action given the context and then discarding the rest of the dialog, since we do not know how the user would have replied to the new system action. The resulting dataset is a mixture of complete and partial conversations, containing only correct system actions.
HC_reset_password+: The HC_reset_password dataset barely has general conversations, since the dialog system for a particular problem only starts after customers are routed to it from elsewhere based on a brief problem description. For a standalone system, however, it is natural to have opening and closing utterances. Thus, we extended the HC_reset_password dataset with opening and closing utterances randomly sampled from the open_close dataset. This allows us to test whether our method can retain the conversation skills learned from either open_close or HH_reset_password after we further train the model on HC_reset_password.
To implement the state tracker, we use three LSTM-RNNs: 25 hidden units for each direction of the word-encoding RNN, 128 hidden units for each direction of the utterance-encoding RNN, and 256 hidden units for the state-encoding RNN. We share the RNNs for word and utterance encoding between user utterances and (candidate) system actions. We initialized all LSTM-RNNs with orthogonal weights Saxe et al. (2013). We initialized the character embeddings with 8-dimensional vectors randomly drawn from a uniform distribution, while the word embedding weight matrix was initialized with 100-dimensional GloVe embeddings Pennington et al. (2014). We use the Adam optimizer Kingma and Ba (2015), with gradients computed on mini-batches of size 1 and clipped at norm value 5. The learning rate was held fixed throughout training, and all other hyperparameters were left as suggested in Kingma and Ba (2015). To create training sets for the Plackett-Luce model, we performed negative sampling on each dataset to generate 9 distractors for each true action.
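The negative-sampling step can be sketched as follows. This is a toy illustration; the function and variable names are ours, and distractors are drawn uniformly from the pool of system actions seen in the dataset.

```python
import random

def make_ranking_examples(dialogs, num_distractors=9, seed=0):
    # dialogs: list of turns, each turn a (context, true_system_action) pair;
    # distractors are sampled from all system actions seen in the dataset
    rng = random.Random(seed)
    all_actions = sorted({a for dialog in dialogs for _, a in dialog})
    examples = []
    for dialog in dialogs:
        for context, action in dialog:
            pool = [a for a in all_actions if a != action]
            k = min(num_distractors, len(pool))
            candidates = [action] + rng.sample(pool, k)
            examples.append((context, candidates, 0))   # index 0 = true action
    return examples

dialogs = [[("i forgot my password", "offer_reset"),
            ("yes please", "send_reset_link")],
           [("hello", "greet")]]
examples = make_ranking_examples(dialogs)
```

Each resulting example pairs a context with up to ten candidates (one true action plus at most nine distractors) for the ranking loss.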
Continual Learning Details
As a simple transfer mechanism, we initialize all weight parameters with the prior weight parameters when a prior model exists. Given that our focus is few-shot learning, we do not assume the existence of development data with which to decide when to stop training. Instead, training terminates after 100 epochs, which is long enough to reconstruct all training examples. The quantities $\omega_i$ and $\Delta_i$ are updated continuously during training, whereas the importance measure $\Omega_i$ and the prior weights are only updated at the end of each task. We set the trade-off parameter $\lambda$ to a value smaller than one: if the path integral (Eq. 10) were exact, $\lambda = 1$ would mean an equal weighting among tasks. In practice, however, the evaluation of the integral involves various sources of noise, leading to an overestimate of $\Omega_i$; to compensate for this overestimation, $\lambda$ generally has to be chosen smaller than one.
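Putting the pieces together, the schedule described above (continuous $\omega$/$\Delta$ updates, end-of-task $\Omega$ and anchor refresh) can be sketched on toy quadratic tasks. Plain gradient descent, the specific constants, and all names are illustrative assumptions, not our experimental setup:

```python
import numpy as np

def train_tasks(tasks, theta, lam=0.5, lr=0.1, gamma=0.999, xi=1e-3, epochs=100):
    # tasks: one gradient function per task. omega/delta are updated at
    # every step; Omega and the anchor theta_star only at task boundaries.
    Omega = np.zeros_like(theta)
    theta_star = theta.copy()
    omega = np.zeros_like(theta)
    delta = np.zeros_like(theta)
    for grad_fn in tasks:
        for _ in range(epochs):
            g_task = grad_fn(theta)
            g = g_task + 2.0 * lam * Omega * (theta - theta_star)
            step = -lr * g
            omega = gamma * omega - g_task * step     # decayed contribution
            delta = gamma * delta + step              # decayed displacement
            theta = theta + step
        Omega = Omega + omega / (delta ** 2 + xi)     # consolidate importance
        theta_star = theta.copy()                     # new anchor
        omega = np.zeros_like(theta)
        delta = np.zeros_like(theta)
    return theta

def grad_task_a(th):
    # quadratic loss on the first parameter only: L_A = (theta_0 - 1)^2
    return np.array([2.0 * (th[0] - 1.0), 0.0])

def grad_task_b(th):
    # task B pulls the whole vector toward (0, 1), threatening to
    # overwrite what task A learned for the first parameter
    return 2.0 * (th - np.array([0.0, 1.0]))

theta_aewc = train_tasks([grad_task_a, grad_task_b], np.zeros(2), lam=5.0)
theta_wt = train_tasks([grad_task_a, grad_task_b], np.zeros(2), lam=0.0)
```

With `lam=0.0` (weight transfer only), task B drags the first parameter back to zero; with the consolidation penalty active, it stays close to the value task A learned while the second parameter is still fit to task B.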
Results on Synthetic Dialog to H-C Dialog
In order to clearly demonstrate the catastrophic forgetting problem, we compare three models trained with different training schemes: 1) No Transfer (NT): we train a model on HC_reset_password from scratch; 2) Weight Transfer (WT): we train a model on open_close and then continue training it on HC_reset_password; 3) AEWC: the same as 2) except that AEWC is applied. We compare accuracy on the evaluation data from both HC_reset_password and HC_reset_password+. The results are shown in Table 2. All models perform well on HC_reset_password due to the similarity between the training and evaluation data. But the performance of NT and WT on HC_reset_password+ drops significantly. This surprisingly poor result confirms Shalyminov et al. (2017), who found that neural conversation models can be badly affected by systemic noise; in this case, we systemically introduced unseen turns into dialogs. In contrast, AEWC outperforms the others by seeking parameters that are optimal not only for the previous task but also for the new task. A key observation is that the performance of WT on HC_reset_password+ starts strong but keeps decreasing as more training examples are given. This indicates that weight transfer alone cannot carry prior knowledge over to a new task; rather, it may lead to poor local optima if the prior knowledge is not general enough.
Results on H-H Dialog to H-C Dialog
Now we turn to a more challenging problem: bootstrapping a conversation model by first learning from H-H dialogs without any manual labeling. We again compare three models trained with different training schemes: 1) No Transfer: we train a model on HC_reset_password from scratch; 2) Weight Transfer: we train a model on HH_reset_password and then continue training it on HC_reset_password; 3) AEWC: the same as 2) except that AEWC is applied. The characteristics of H-H dialogs are vastly different from those of H-C dialogs. For example, the average utterance and conversation lengths are considerably longer than in H-C dialogs, introducing complex long-range dependencies. H-H dialogs also cover much broader topics that do not exactly match the conversation goal, introducing much vocabulary that does not occur in the corresponding H-C dialogs. To make the knowledge transfer process more robust to such non-trivial differences, dropout regularization Srivastava et al. (2014) was applied to both utterance and state embeddings with a ratio of 0.4. We also limited the maximum utterance length to 20 words. The results are shown in Table 3, and generally agree with the previous experimental results obtained with synthetic data. All three models work well on HC_reset_password. NT performs poorly on HC_reset_password+ as expected, whereas WT, interestingly, shows a much higher performance than NT. This can be attributed to the fact that HH_reset_password contains a broader array of conversation skills, including some knowledge of the target task, compared to open_close, which leads to a more general and transferable model that is not necessarily forgotten when faced with a new task. But the increasing performance gap between WT and AEWC on HC_reset_password+ clearly shows that WT gradually forgets the prior knowledge as it trains on more examples from the new task.
Another noteworthy observation is that, when just one training example is given, the performances of WT and AEWC are much higher than that of NT. This result demonstrates that our method can capture meaningful knowledge from prior tasks in such a way that it partly works on a related new task.
5 Conclusion

We have presented a novel approach to the data-intensiveness problem arising from the lack of compositionality of end-to-end neural conversation models. We tackled the problem with a continual learning technique for neural networks. Specifically, we proposed a universal conversation model that can be used across different tasks and a novel Adaptive Elastic Weight Consolidation method, which together allow us to continuously train on a sequence of tasks without forgetting earlier tasks. We tested our method in two experimental configurations: conversation skill transfer from synthetic dialogs and from H-H dialogs to H-C dialogs in a customer support domain. Future work includes an in-depth analysis of how our algorithm distributes different pieces of knowledge across the network. It would also be interesting to apply the continual learning method to composing multiple tasks in order to handle a more complex task.
- Bangalore et al. (2008) Srinivas Bangalore, Giuseppe Di Fabbrizio, and Amanda Stent. 2008. Learning the structure of task-driven human–human dialogs. IEEE Transactions on Audio, Speech, and Language Processing 16(7):1249–1259.
- Bordes and Weston (2016) Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683 .
- Eshghi et al. (2017) Arash Eshghi, Igor Shalyminov, and Oliver Lemon. 2017. Bootstrapping incremental dialogue systems from minimal data: the generalisation power of dialogue grammars. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pages 2210–2220.
- Fernando et al. (2017) Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. 2017. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734 .
- French (1999) Robert M French. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences 3(4):128–135.
- Goodfellow et al. (2013) Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211 .
- Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735–1780.
- Jokinen and McTear (2009) Kristiina Jokinen and Michael McTear. 2009. Spoken dialogue systems. Synthesis Lectures on Human Language Technologies 2(1):1–151.
- Jung et al. (2016) Heechul Jung, Jeongwoo Ju, Minju Jung, and Junmo Kim. 2016. Less-forgetting learning in deep neural networks. arXiv preprint arXiv:1607.00122 .
- Kingma and Ba (2015) Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In The International Conference on Learning Representations (ICLR).
- Kirkpatrick et al. (2017) James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences page 201611835.
- Lee et al. (2016) Sang-Woo Lee, Chung-Yeon Lee, Dong-Hyun Kwak, Jiwon Kim, Jeonghee Kim, and Byoung-Tak Zhang. 2016. Dual-memory deep learning architectures for lifelong learning of everyday human behaviors. In IJCAI. pages 1669–1675.
- Li and Hoiem (2016) Zhizhong Li and Derek Hoiem. 2016. Learning without forgetting. In European Conference on Computer Vision. Springer, pages 614–629.
- Medsker and Jain (1999) Larry Medsker and Lakhmi C Jain. 1999. Recurrent neural networks: design and applications. CRC press.
- Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014) 12:1532–1543.
- Plackett (1975) Robin L Plackett. 1975. The analysis of permutations. Applied Statistics pages 193–202.
- Rusu et al. (2016) Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671 .
- Saxe et al. (2013) Andrew M Saxe, James L McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120 .
- Serban et al. (2016) Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI. pages 3776–3784.
- Shalyminov et al. (2017) Igor Shalyminov, Arash Eshghi, and Oliver Lemon. 2017. Challenging neural dialogue models with natural data: Memory networks fail on incremental phenomena. In Proc. SEMDIAL 2017 (SaarDial) Workshop on the Semantics and Pragmatics of Dialogue. pages 98–106.
- Sordoni et al. (2015) Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714 .
- Srivastava et al. (2014) Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929–1958.
- Srivastava et al. (2013) Rupesh K Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and Jürgen Schmidhuber. 2013. Compete to compute. In Advances in neural information processing systems. pages 2310–2318.
- Vinyals and Le (2015) Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869 .
- Wen et al. (2016a) Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016a. Multi-domain neural network language generation for spoken dialogue systems. In Proceedings of the 2016 Conference on North American Chapter of the Association for Computational Linguistics (NAACL).
- Wen et al. (2016b) Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016b. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562 .
- Williams et al. (2017) Jason D Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. arXiv preprint arXiv:1702.03274 .
- Young et al. (2013) Steve Young, Milica Gašić, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE 101(5):1160–1179.
- Zenke et al. (2017) Friedemann Zenke, Ben Poole, and Surya Ganguli. 2017. Continual learning through synaptic intelligence. In International Conference on Machine Learning. pages 3987–3995.
- Zhao et al. (2017) Tiancheng Zhao, Allen Lu, Kyusong Lee, and Maxine Eskenazi. 2017. Generative encoder-decoder models for task-oriented spoken dialog systems with chatting capability. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue. pages 27–36.
Appendix A open_close example dialog
Hello. What can I help you?
Okay. Thank you.
Sure thing! Have a great day.
Appendix B HC_reset_password example dialog
I forgot my password and now I have locked my account for 30 days
Okay, you don’t need to remember your password, we can reset it. Would you like to try that?
SOLUTION: To reset your password, go to xx_url_xx. Was that helpful?
Okay that’s also another problem I have. I added extra security & the number I have on fine is no longer active. I chnsged my number and because I can’t access it I can’t long into my account & it won’t let me reset my password
Let’s connect you to a person who can help you.
Appendix C HC_reset_password+ example dialog
Hi. What can I help you?
Okay, you don’t need to remember your password, we can reset it. Would you like to try that?
already tried that
Let’s connect you to a person who can help you.
Okay thank you.
Sure thing! Have a great day.
Appendix D HH_reset_password example dialog
hi, thanks for visiting answer desk! i’m xx_firstname_xx q.
hello i am having trouble accessing my laptop
hi, how may i help you today?
i forgot the password i changed the pw on my account, but the computer is still not able to be accessed
oh that’s bad , that might be very important to you but no worries i will help you out with your issue. to start with may i have your complete name please?
my name is xx_firstname_xx xx_firstname_xx, i am contacting you on behalf of xx_firstname_xx mine
that’s okay xx_firstname_xx. may i also know your email address and phone number please?
my email, xx_email_xx, xx_phonenumber_xx
thank you for the information. xx_firstname_xx what is your current operating system?
may i know what is the error message you received upon trying to unlock your computer?
the password is not working i forgot the pw, i tried to reset the pw from the account
is it a local account or microsoft account?
i am not sure i thought it was a microsoft account but the pw didnt change
can you send me the email so that i can check if it is microsoft account?
xx_email_xx though i think i may have created one by accident setting up the computer it might be xx_email_xx or some variation of that i chnaged the pw on the xx_email_xx but it had no effect
since we’re unable to know what exactly happening to your computer. i will provide you our technical phone support so that you will be well instructed on what you are going to do to get your computer work again. would that be okay with you?
one moment please.