Spoken language understanding (SLU) for dialogue systems is designed to correctly interpret a user utterance, a unit of communication given a specific context (Grice, 1968). For successful interpretation, the system may classify user intent (the action the user would like the system to perform) and resolve slot values (particular attributes associated with intents). Task-oriented dialogues currently most often consist of a one-shot utterance spoken by the user for the digital assistant to act on (e.g., "What is the weather like?"). In some cases, however, a dialogue may involve multiple turns, each turn a sequence of user and system utterances. Reasoning over multi-turn dialogues, in which user and agent add information incrementally to specify the user's intent, is a challenge that only increases in difficulty when the agent must resolve referring expressions across turns, whether explicit (e.g., pronominal and nominal anaphora) or implicit (e.g., zero anaphora).
In this paper, we approach the referring expression resolution task in multi-turn discourse as a query reformulation task: the utterance containing the referring expression is rewritten to contain all relevant slot values from the context, a process we call contextual query rewrite (CQR). To be feasible, query reformulation must be able to leverage multi-turn context, and it must be intuitive and learnable. We apply these principles in the creation of our corpus, which is extended via crowd-sourcing and then used to train an end-to-end dialogue system without the need for explicit state trackers. Our main contributions are:
We introduce a new task, contextual query rewrite, for resolving referring expressions without the explicit need to track dialogue state.
We release a publicly available corpus consisting of gold standard and crowd-sourced rewrites as an extension to an existing task-oriented dialogue corpus.
We motivate contextual query rewriting (CQR) with an example shown in Table 1. Typically, we would expect a digital assistant to understand the second user utterance (U2) as referring to traffic near the coffee shop, rather than defaulting to the user's current location, although here this is not explicitly stated. Our correct interpretation of this utterance is possible via an implicit reference to the location using zero anaphora (e.g., "What's the address (of the coffee shop)?"), recognized as the predominant anaphora type observed cross-linguistically (Givón, 2017). The user could also refer to the same coffee shop using a nominal anaphoric reference ("that coffee shop"), a locative form ("there"), or a pronominal form (e.g., "it").
| Speaker | Domain | Utterance | Resolved referenced slots (key=value) | Reformulated query |
|---|---|---|---|---|
| U1 | POI | any coffee shops nearby | poi_type=coffee shop | |
| V1 | POI | found a coffeeworks 2 miles away | poi=coffeeworks | |
| U2 | Traffic | how's the traffic | location=coffeeworks | how's the traffic to coffeeworks |
More generally, the task of referring expression resolution can be solved as a carryover task, where the relevant slots from the context are carried over to the current turn (Naik et al., 2018). For our working example described in Table 1, the result of the carryover task shows up as additional slots associated with the utterance, as shown in Figure 1(a). A challenge is dealing with domain-specific schemas, which require accurate transformations even for emergent slots for which there is very little data to train these mappings correctly. Another solution is to make the natural language understanding system contextually aware (Gupta et al., 2018). However, updating the domain-specific NLU sub-systems is more complex, as it requires re-training the production sub-systems, often a time-consuming and laborious task. Moreover, this approach does not work for systems that model meaning using frameworks other than intents and slots.
In this paper, we propose query reformulation, where we take an otherwise ambiguous utterance such as "how's the traffic" in Table 1 and add the relevant slot values from the context, here the name of the place ("coffeeworks"), to make a reformulated query: "how's the traffic to coffeeworks?", as shown in Figure 1(b). We call this approach to resolving referring expressions contextual query rewrite (CQR). The main advantage of this approach is that it does not require updating the domain-specific NLUs, and takes advantage of the fact that these systems are optimized for single-shot performance. Resolving referring expressions is now equivalent to generating a single-shot natural language query, thereby making this process invariant to the meaning representation and to domain-specific schema changes.
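To make the contrast concrete, here is a toy sketch (the data structures and field names are ours, not from any released code) of what the two approaches would produce for the working example in Table 1:

```python
# Slot carryover: the utterance is left as-is and the relevant slot from
# context is attached to the current turn's interpretation frame.
carryover_output = {
    "utterance": "how's the traffic",
    "domain": "Traffic",
    "slots": {"location": "coffeeworks"},  # carried over from turn V1
}

# Contextual query rewrite (CQR): the utterance itself is rewritten into an
# unambiguous single-shot query, so downstream NLU needs no dialogue context.
cqr_output = {"utterance": "how's the traffic to coffeeworks"}
```

In the carryover case the domain-specific NLU must understand the extra `slots` field; in the CQR case it receives an ordinary single-shot query.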
2.2 CQR Task Formulation
We can now formally define the CQR task. We define a sequence of dialogue turns $d_1, \dots, d_{t-1}$ and the current user utterance $u_t$ (for simplicity we assume a turn-taking model: user and system turns alternate), where $u_t$ is a sequence of tokens $w_1, \dots, w_n$. Associated with the dialogue turns and the current turn is a set of slot values $S$. The CQR task is to learn a mapping

$$f(d_1, \dots, d_{t-1}, u_t, S) \rightarrow \hat{u}_t$$

where the reformulated user turn $\hat{u}_t$ contains tokens copied either from the vocabulary or from a subset of relevant slot values in $S$. The challenge is to learn a reformulated user utterance that implicitly selects the subset of slots that are relevant at turn $t$, while retaining the semantics associated with turn $t$.
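As a minimal illustration of the mapping's interface, the following is a deliberately naive stand-in (function name and logic are ours), not the learned model trained later: it simply appends every unmentioned context slot value, whereas a real model must select the relevant subset and place values grammatically.

```python
from typing import Dict, List

def cqr_naive(context: List[str], utterance: str,
              slots: Dict[str, str]) -> str:
    """Naive stand-in for the CQR mapping: given prior turns, the current
    utterance, and the set of slot values, append every slot value not
    already mentioned in the utterance."""
    extra = [v for v in slots.values() if v.lower() not in utterance.lower()]
    return utterance if not extra else f"{utterance} {' '.join(extra)}"

rewritten = cqr_naive(
    ["any coffee shops nearby", "found a coffeeworks 2 miles away"],
    "how's the traffic",
    {"location": "coffeeworks"},
)
# A learned model would additionally insert connectives ("to") and drop
# irrelevant slots; this sketch only shows the input/output shape of f.
```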
3 Related Work
Dialogue corpora: There are various dialogue corpora and collection methodologies. Weston et al. (2015) aim to improve algorithms for text understanding by modularizing each reasoning task; two of their tasks involve coreference, but these appear to involve the resolution of pronominal forms exclusively, forms which in our data represent only a small fraction of all anaphoric references. Bordes et al. (2017) released a corpus of 18,000 synthetic dialogues for a single domain (restaurant reservations); however, these do not reflect real user behavior. Human efforts may also be directed in a Wizard-of-Oz setup, using the interactions of crowd-sourced workers to develop corpora. For example, Wen et al. (2017) create a corpus of approximately 680 dialogues for a single domain (restaurants); like them, we set out to avoid handcrafted, labeled datasets by representing slot-value pairs explicitly. Eric et al. (2017a) use the same approach to create dialogues for three domains (weather, navigation, and calendar scheduling), a corpus particularly rich in anaphoric references.
Dialogue state tracking (DST) is considered a higher-level module, as it must combine information from previous user utterances and system responses with the current utterance to infer the full meaning. Many deep-learning based methods have recently been proposed for DST, such as the neural belief tracker (Mrkšić et al., 2017) and the self-attentive dialogue state tracker (Zhong et al., 2018), which are suitable for small-scale domain-specific dialogue systems; as well as more scalable approaches such as Rastogi et al. (2017) and Xu and Hu (2018), which solve the problem of an infinite number of slot values, and Naik et al. (2018), who additionally solve the problem of a large number of disparate schemas in each domain. Unfortunately, all of the above approaches fail to address the problem that as the number of domain-specific chatbots on a dialogue platform grows, the DST module becomes increasingly complex as it tries to handle the interactions between different chatbots and their different schemas.
Text generation: Seq2seq models with attention (Sutskever et al., 2014; Bahdanau et al., 2014) have seen rapid adoption in automatic summarization (See et al., 2017; Rush et al., 2015). Madotto et al. (2018) and Eric et al. (2017b) propose end-to-end generative approaches in which a copy mechanism is used to copy entities from a knowledge base when generating the response. Closest to this work is the copy-mechanism-based user query reformulation for search-based systems of Dehghani et al. (2017). Exploring black-box methods like query rewriting allows us to benefit from the progress made in these fields and apply it to state tracking and reference resolution tasks in dialogue.
4 Contextual Query Rewrite Dataset
4.1 Query reformulation methodology
To build a corpus of query reformulations, we begin with the principles that guided our decision-making process.
Multi-turn: We expect human-computer interaction will soon more often involve multiple turns (where each turn consists of a single user and agent utterance pair). Anaphoric references are likely to occur more often in multi-turn discourse, and, cross-linguistically, anaphora is a standard linguistic strategy for referring to the same entity and increasing discourse cohesiveness (Hobbs, 1979), signaling what is new versus known information. Within-sentence anaphoric references fall outside the scope of the present research framework;
Intuitive: Deciding which slot values are relevant given a particular dialogue history should be intuitive, assessed as the agreement among individuals;
Interpretable: Evaluating the output of a model should be straightforward, i.e., given the guidelines for query reformulation (below), an analyst should be able to quickly assess performance;
Learnable: An end-to-end dialogue system should be able to resolve anaphoric references to increase user satisfaction; the extent to which a model learns to identify which slot values are relevant can be examined, explored in Section 5.
The guidelines for the task of query reformulation are summarized here:
Identify the utterance which most closely matches the user’s intent or request; we call this the basis utterance;
Reformulate the basis utterance to be a one-shot utterance, making the user’s intent unambiguous by including all relevant slot values from the previous context; determining what is ‘relevant’, however, is dependent on how much we assume a model may automatically be able to infer given a specific utterance;
When a place is not referred to directly by poi_type (e.g., “I want some pizza”), the poi_type is assumed to be inferable, e.g., as “pizza restaurant”.
Some cases are not subject to reformulation, e.g., confirmation of the agent’s decision (e.g., Agent: “Would you like directions?”, User: “Yes please”), or when giving thanks or otherwise signaling the end of the dialogue;
Slot values from the context replace an anaphoric reference (whether nominal, pronominal, or zero) in the basis utterance;
Only certain types of anaphora need to be attended to, specifically references to slot values from the given domain; we ignore non-slot values (e.g., “We want coffee” does not need to be resolved to “you and I”), as well as anaphoric references to propositions or events (e.g., “That sounds good”);
When multiple values for a specific slot are available from context or in the current utterance, only use the most specific slot values, e.g., in "Take me via the fastest route", specifying route conditions precludes the need to further specify traffic information (e.g., "avoid all heavy traffic");
For utterances with multiple anaphora (e.g., "Give me the address and let's go there"), resolve both references, e.g., "Give me the address of the coffee shop Coupa and let's go to the coffee shop Coupa"; this is not strictly enforced;
Intent may need to be carried over in addition to slots, e.g., “How about another coffee shop?” is reformulated as “Give me directions to another coffee shop…”; this is rare;
4.2 Corpus selection and first modifications
With the principles above (multi-turn dialogue with cross-sentential anaphora), we selected a publicly available corpus (Eric et al., 2017a) composed of approximately three thousand dialogues over three domains (weather, scheduling, and navigation). For additional statistics regarding the original corpus, we refer the reader to Eric et al. (2017a). Applying the guidelines above to the first task of query reformulation, we arrived at a modified corpus to begin our study and later experiments, noticing primarily the anaphora types described in Section 1. In the released corpus, we include flags for each anaphora type (we also include flags for other interesting linguistic forms, including either, as in "Let's go to either…", and besides, used to exclude an option, e.g., "Let's go to another coffee shop besides…"). As an initial estimate of how much more information the gold reformulations contain, we count slot values in the basis utterances before and after reformulation, presented in Table 3.
| # slots (before) | # slots (after) |
4.3 Crowdsourcing reformulations
Upon completion of the first step of corpus modification, we manually identified the basis utterance with relevant slot values for each dialogue and produced a set of gold reformulations. For scalability, we then extended the corpus by gathering five crowd-sourced reformulations for each dialogue, referred to here as rewrites to distinguish them from the gold reformulations we composed ourselves.
We presented crowd-sourced workers with dialogues in which the basis utterance was highlighted, along with the list of slot values that we judged relevant. Workers were encouraged to use their own strategies to rewrite the basis utterance in order to increase linguistic variation. Examples, instructions, and feedback to the workers helped ensure that semantic similarity (regarding which slot values to include) was as high as possible. This process of collecting rewrites took two weeks, with further details shown in Table 4.
| Reformulations per gold | 5 |
| Avg # submissions | 57.5 |
| Work time per submission | 120.9 s |
Table 5 shows an example of crowd-sourced reformulations in this extended dialogue corpus.
| Basis | What is the address? |
| Gold | What is the address of the gas station Chevron? |
| Crowd | (1) What is the address of the Chevron gas station? |
| | (2) Tell me the address for the Chevron gas station please. |
| | (3) What is the address to Chevron gas station? |
| | (4) Give me the address for Chevron gas station. |
| | (5) What is the address of the Chevron gas station? |
We also assessed each crowd-sourced rewrite quantitatively by computing F1 and BLEU (Papineni et al., 2002) scores between the gold reformulation and each rewrite. To do this, we de-lexicalize slot values using labels from the original corpus; we provide an example in Table 6 that results in an F1 of 1.0 (all slot values are the same, meaning semantic similarity is high) with a BLEU score of 0.30 (low n-gram similarity, indicative of syntactic/lexical variation).
| Original | give me the address. |
| Gold | give me the address of the mall Ravenswood 5 miles away. |
| Gold slots | give me the address of the [poi_type0] [poi0] [distance0] |
| Crowd slots | give me the address to [poi0] [poi_type0] that is [distance0] |
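The de-lexicalization step can be sketched as follows (the `[slot_type0]` label format follows Table 6; the replacement logic itself is our assumption, not the authors' script):

```python
def delexicalize(utterance, slots):
    """Replace slot values in an utterance with typed, indexed
    placeholders like [poi0] or [distance0].

    slots: list of (slot_type, value) pairs from the annotation.
    Longer values are replaced first so that a short value which is a
    substring of a longer one cannot clobber it."""
    counts = {}
    out = utterance
    for stype, value in sorted(slots, key=lambda sv: -len(sv[1])):
        n = counts.get(stype, 0)
        counts[stype] = n + 1
        out = out.replace(value, f"[{stype}{n}]")
    return out

delexicalize(
    "give me the address of the mall Ravenswood 5 miles away",
    [("poi_type", "mall"), ("poi", "Ravenswood"), ("distance", "5 miles away")],
)
# -> "give me the address of the [poi_type0] [poi0] [distance0]"
```

Applying the same slot list to a crowd rewrite yields the "Crowd slots" row, which is what the F1 and BLEU comparisons operate on.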
For the entire extended corpus of rewrites, we arrived at the similarity metrics presented in Table 7. In addition to F1 and BLEU scores, we also counted how many slots each gold reformulation has on average compared to rewrites; we see, for example, that the gold reformulations have on average almost one more slot value per utterance than rewrites. This is indicative of how unnatural it may be to compose utterances that specify an entity so unambiguously (e.g., "Take me to the gas station Chevron two miles away.").
| Mean # slots in each gold | 4.03 |
| Mean # slots in each rewrite | 3.20 |
| Difference # slots (gold vs rewrites) | 0.823 |
We also compared the five rewrites as a group to their corresponding gold reformulation. First, we grouped the rewrites for each dialogue (2,042 groups in total); next, we computed mean F1, BLEU, and difference in number of slots for each group, pairwise against the gold reformulation; each group's F1 and BLEU scores were then binned as shown in Figure 3. Mean F1 scores are indicative of high in-group semantic similarity, while low BLEU scores indicate syntactic and lexical variation within each group.
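The slot-based F1 used in these comparisons can be computed over sets of de-lexicalized placeholders; the following is our reading of the metric (a sketch, not the authors' evaluation script):

```python
def slot_f1(gold_slots, rewrite_slots):
    """F1 between two sets of de-lexicalized slot placeholders,
    e.g. {'[poi0]', '[distance0]'}. Because inputs are sets, word order
    and surface wording differences are ignored; only slot agreement
    counts, which is why it serves as a semantic-similarity measure."""
    gold, pred = set(gold_slots), set(rewrite_slots)
    if not gold and not pred:
        return 1.0
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Table 6 example: gold and crowd rewrite share all three placeholders,
# so F1 = 1.0 even though their BLEU score is low (0.30).
f1 = slot_f1({"[poi_type0]", "[poi0]", "[distance0]"},
             {"[poi0]", "[poi_type0]", "[distance0]"})
```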
5 Experiments and Analysis
5.1 Establishing the baseline
Spoken language understanding usually consists of two tasks: domain classification/intent determination (e.g., Weather, Navigation, etc.) and slot-filling, which identifies the spans of text in the utterance assigned to a slot value-attribute pair given the domain. In the dialogue corpus used here, the three domains and their corresponding slot values are: Weather (location, date, weather attribute); Navigation (point of interest type, point of interest, address, traffic information, distance); and Calendar scheduling (date, time, location, party, agenda). As a baseline, we first compare the original dialogues and the gold reformulations on the domain classification and slot-filling tasks.
To assess this, we use a joint classifier for both tasks, an attention-based RNN for domain classification and slot filling (Liu and Lane, 2016; Mesnil et al., 2015). We evaluate performance on two different inputs from the proposed dataset:
- Original: the input is the original dialogue (concatenated user plus system utterances) and the output is the relevant slot values and domain at the user turn.
- Gold CQR: the input is the gold reformulation for the above user turn and the output is the relevant slot values and domain, i.e., we treat the dialogue as a single-shot utterance.
For pre-processing, we encode the data using BIO tags. We perform the classification task on the two datasets and then compare the accuracy of the semantic labeler on the slots that both setups share (i.e., ignoring how the classifier does on the newly added slots in the gold reformulations).
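A sketch of the BIO encoding and of recovering slot spans from it (tokenization and the exact tag inventory here are illustrative, not taken from the released data):

```python
# A tagged utterance: B- marks the beginning of a slot span, I- its
# continuation, O tokens outside any slot.
tokens = ["how's", "the", "traffic", "to", "coffeeworks"]
tags   = ["O",     "O",   "O",       "O",  "B-poi"]

def spans_from_bio(tokens, tags):
    """Recover (slot_type, text) spans from a BIO tag sequence."""
    spans, cur_type, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_type:
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_type:
            cur_toks.append(tok)
        else:  # "O" tag, or a stray I- with no open span
            if cur_type:
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = None, []
    if cur_type:
        spans.append((cur_type, " ".join(cur_toks)))
    return spans

# spans_from_bio(tokens, tags) -> [("poi", "coffeeworks")]
```

Slot F1 in Table 8 is then computed over such recovered spans, restricted to the slots both setups share.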
| Input Type | Domain Classification | Slot F1 |
Results in Table 8 indicate that while there is no significant gain in domain classification when comparing gold reformulations against the original dataset, for the slot-filling task the gold reformulations show increased prediction accuracy. This suggests that query reformulation could lead to better downstream language understanding performance, as it is easier to train and optimize NLU systems for single-shot utterances than for multi-turn utterances.
5.2 Query Rewriting Experiments
In a second set of experiments, we test query rewriting more directly. As described in Section 2.2, we view this as summarizing a dialogue into a single utterance that unambiguously specifies user intent. For the experiment, we delexicalize slot values using the canonical entity type from the original corpus (e.g., poi_type = point of interest type); an example is given in Table 9.
| (input) | USER i need directions to a poi_type0 |
| | SYSTEM i have a poi_type0 that is distance0 |
| | USER give me the address |
| (output) | give me the address of poi_type0 distance0 |
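Constructing the source/target pair in Table 9 amounts to flattening the delexicalized turns with speaker prefixes; a sketch (the separator convention is our assumption from the table's layout):

```python
def build_example(turns, target):
    """Flatten a delexicalized dialogue into the single source string fed
    to the seq2seq model, prefixing each turn with its speaker token, and
    pair it with the gold reformulation as the target."""
    src = " ".join(f"{speaker} {utt}" for speaker, utt in turns)
    return src, target

src, tgt = build_example(
    [("USER", "i need directions to a poi_type0"),
     ("SYSTEM", "i have a poi_type0 that is distance0"),
     ("USER", "give me the address")],
    "give me the address of poi_type0 distance0",
)
# src == "USER i need directions to a poi_type0 SYSTEM i have a poi_type0
#         that is distance0 USER give me the address"
```

The copy mechanism in the model can then copy placeholder tokens such as poi_type0 directly from this source sequence into the output.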
We train two separate models drawing from different distributions: one on only the gold reformulation data and the other also including the crowd-sourced reformulations. To quantify task complexity, we note that over the entire corpus 67% of slots are carried over from dialogue to reformulation, an indication of the non-triviality of the task. For reproducibility, we use the open-source neural sequence modeling system OpenNMT (Klein et al., 2017). The only hyper-parameter changed from the initial settings is removing copy_loss_by_seqlength, which improves overall accuracy. Error rates for the training and dev sets are presented in Figure 4, with observed accuracy metrics presented in Table 10. The dev-set error for the MTurk extensions is higher, indicating lexical diversity in the reformulations, which is also seen in the BLEU scores in Table 10. The entity F1 score is higher, as the model has seen many more variations around the carryover entities.
| Data | Train/Dev/Test | Entity F1 | BLEU |
|---|---|---|---|
| Gold + MTurk | 10045/1279/1279 | 0.897 | 0.397 |
We show that anaphora is quite common in a single human-created corpus (Eric et al., 2017a) of multi-turn dialogues used to train task-oriented spoken dialogue systems. We introduce contextual query rewrite (CQR), in which the referring expression resolution task is defined as query reformulation given the dialogue history. We show a principled approach to creating a corpus of query reformulations, and how this can be extended via crowd-sourcing. Two experiments demonstrate that query reformulations can be used to train high-accuracy models for the task of generating fully unambiguous single-shot utterances as well as for the more standard tasks of domain classification and slot filling, indicating that this approach may be suitable for anaphora resolution at larger scales.
In future work, we intend to extend query reformulation for multiple languages, as well as to assess if anaphora resolution using query reformulation is also possible for longer dialogues. As a step towards improving dialogue systems in general and encouraging work on anaphora resolution specifically, we make our corpus publicly available.
- Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
- Bordes et al. (2017) Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In ICLR.
- Dehghani et al. (2017) Mostafa Dehghani, Sascha Rothe, Enrique Alfonseca, and Pascal Fleury. 2017. Learning to attend, copy, and generate for session-based query suggestion. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 1747–1756. ACM.
- Eric et al. (2017a) Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017a. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49, Saarbrücken, Germany. Association for Computational Linguistics.
- Eric et al. (2017b) Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D Manning. 2017b. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49.
- Givón (2017) T. Givón. 2017. The Story of Zero. John Benjamins Publishing Company.
- Grice (1968) H.P. Grice. 1968. Utterer's meaning, sentence-meaning, and word-meaning. In J. Kulas, J.H. Fetzer, and T.L. Rankin, editors, Philosophy, Language, and Artificial Intelligence. Studies in Cognitive Systems, vol. 2, pages 49–66. Springer, Dordrecht.
- Gupta et al. (2018) Raghav Gupta, Abhinav Rastogi, and Dilek Z. Hakkani-Tür. 2018. An efficient approach to encoding context for spoken language understanding. In Interspeech.
- Hobbs (1979) Jerry R. Hobbs. 1979. Coherence and coreference. Cognitive Science, 3(1):67–90.
- Klein et al. (2017) Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. CoRR, abs/1701.02810.
- Liu and Lane (2016) Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. In Interspeech 2016, pages 685–689.
- Madotto et al. (2018) Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. ACL.
- Mesnil et al. (2015) Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Z. Hakkani-Tür, Xiaodong He, Larry P. Heck, Gökhan Tür, Dong Yu, and Geoffrey Zweig. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing.
- Mrkšić et al. (2017) Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788, Vancouver, Canada. Association for Computational Linguistics.
- Naik et al. (2018) Chetan Naik, Arpit Gupta, Hancheng Ge, Lambert Mathias, and Ruhi Sarikaya. 2018. Contextual slot carryover for disparate schemas. In Interspeech 2018.
- Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics.
- Rastogi et al. (2017) Abhinav Rastogi, Dilek Hakkani-Tür, and Larry Heck. 2017. Scalable multi-domain dialogue state tracking. In Automatic Speech Recognition and Understanding Workshop (ASRU), 2017 IEEE, pages 561–568. IEEE.
- Rush et al. (2015) Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics.
- See et al. (2017) Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083. Association for Computational Linguistics.
- Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS.
- Wen et al. (2017) Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gasic, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438–449. Association for Computational Linguistics.
- Weston et al. (2015) Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698.
- Xu and Hu (2018) Puyang Xu and Qi Hu. 2018. An end-to-end approach for handling unknown slot values in dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1448–1457, Melbourne, Australia. Association for Computational Linguistics.
- Zhong et al. (2018) Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1458–1467, Melbourne, Australia. Association for Computational Linguistics.