Constraint based Knowledge Base Distillation in End-to-End Task Oriented Dialogs

09/15/2021 ∙ by Dinesh Raghu, et al.

End-to-end task-oriented dialogue systems generate responses based on dialog history and an accompanying knowledge base (KB). Inferring the KB entities that are most relevant for an utterance is crucial for response generation. Existing state of the art scales to large KBs by softly filtering out irrelevant KB information. In this paper, we propose a novel filtering technique that consists of (1) a pairwise similarity based filter that identifies relevant information by respecting the n-ary structure in a KB record, and (2) an auxiliary loss that helps in separating contextually unrelated KB information. We also propose a new metric, multiset entity F1, which fixes a correctness issue in the existing entity F1 metric. Experimental results on three publicly available task-oriented dialog datasets show that our proposed approach outperforms existing state-of-the-art models.


1 Introduction

* D. Raghu and A. Jain contributed equally to this work.
† D. Raghu is an employee at IBM Research. This work was carried out as part of PhD research at IIT Delhi.

Task oriented dialog systems interact with users to achieve specific goals such as restaurant reservation or calendar enquiry. To satisfy a user goal, the system is expected to retrieve necessary information from a knowledge base and convey it using natural language. Recently several end-to-end approaches Bordes and Weston (2017); Wu et al. (2018); He et al. (2020b); Madotto et al. (2018) have been proposed for learning these dialog systems.

Inferring the most relevant KB entities necessary for generating the response is crucial for achieving task success. To effectively scale to large KBs, existing approaches Wen et al. (2018); Wu et al. (2018) distill the KB by softly filtering out irrelevant KB information based on the dialog history. For example, in Figure 1 the ideal filtering technique is expected to retain just row 1, as the driver is requesting information about the dinner with Alex. But existing techniques often retain some irrelevant KB information along with the relevant information. For example, in Figure 1 row 3 may also be retained along with row 1.

Figure 1: An example dialog between a driver and a system along with the associated knowledge base.

Our analysis of the best performing distillation technique Wu et al. (2018) revealed that the embeddings learnt for entities of the same type are quite close to each other. This may be because entities of the same type often appear in similar contexts in the dialog history and the KB. Such embeddings hurt overall performance as they reduce the gap between relevant and irrelevant KB records. For example, in Figure 1 row 3 may not get distilled out if Alex and Ana have similar embeddings.

In this paper, we propose Constraint based knowledge base Distillation NETwork (CDNet), which uses (1) a novel pairwise similarity based distillation computation that distills the KB at a record level, and (2) an auxiliary loss that helps distill contextually unrelated KB records by enforcing constraints on embeddings of entities of the same type.

We noticed that the popular entity F1 evaluation metric has a correctness issue when the response contains multiple instances of the same entity value. To fix this issue, we propose a new metric called multiset entity F1.

We empirically show that CDNet performs either significantly better than or comparable to existing approaches on three publicly available task oriented dialog datasets.

Figure 2: Architecture of CDNet model.

2 Related Work

We first discuss approaches that are closely related to our work. Wu et al. (2018) perform KB distillation but fail to capture the relationship across attributes in KB records. They represent a KB record with multiple attributes as a set of (subject, predicate, object) triples. This breaks the direct connection between record attributes and requires the system to reason over longer inference chains. In Figure 1, if the event field is used as the key to break the record into triples, then the distillation has to infer that (dinner, invitee, Alex), (dinner, date, 1 Feb) and (dinner, time, 10am) are connected. In contrast, CDNet performs KB distillation while maintaining the attribute relationships. Wen et al. (2018) perform distillation using the similarity between the dialog history representation and each attribute representation in a KB record, whereas CDNet uses word based pairwise similarity for distillation.

We now briefly discuss approaches that improve other aspects of task oriented dialogs. He et al. (2020c) and He et al. (2020b) model KBs using Relational GCNs Schlichtkrull et al. (2018). Raghu et al. (2019) provide support for entities unseen during training. Reddy et al. (2019) improve the ability to reason over the KB by respecting the relationships between connected attributes. Qin et al. (2019) restrict the response to contain entities from a single KB record. Qin et al. (2020) handle multiple domains using shared-private networks, and He et al. (2020a) optimize their network on both F1 and BLEU. We are the first to propose a pairwise similarity score for KB distillation and an embedding constraint loss to distill irrelevant KB records.

3 CDNet

CDNet (code available at https://github.com/dair-iitd/CDNet) has an encoder-decoder architecture that takes as input (1) the dialog history, modelled as a sequence of utterances with each utterance a sequence of words, and (2) a knowledge base whose records each consist of key-value attribute pairs. The network generates the system response one word at a time.

3.1 CDNet Encoder

Context Encoder: The dialog history is encoded using a hierarchical encoder Sordoni et al. (2015). Each utterance representation is computed using a Bi-GRU Schuster and Paliwal (1997), whose hidden states also serve as word-level representations. The context representation is generated by passing the utterance representations through a GRU.

KB Encoder: We encode the KB using the multi-level memory proposed by Reddy et al. (2019), as its structure allows us to perform distillation over KB records. The KB memory has two levels. The first level is a set of KB records; each KB record is represented as the sum of its attribute embeddings taken from the embedding matrix. In the second level, each record is represented as a set of attributes, where each attribute is a key-value pair: the key is the attribute type embedding and the value is the attribute embedding.
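As a rough illustration of this two-level memory, the sketch below (Python/NumPy, toy embeddings, hypothetical names not taken from the released code) represents a record at the first level as the sum of its attribute embeddings and keeps per-attribute (key, value) embedding pairs for the second level.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8

# Toy embedding table: one vector per vocabulary item (hypothetical values).
vocab = ["dinner", "alex", "ana", "1_feb", "10am", "event", "invitee", "date", "time"]
emb = {w: rng.normal(size=EMB_DIM) for w in vocab}

# One KB record as key-value attribute pairs (as in Figure 1).
record = {"event": "dinner", "invitee": "alex", "date": "1_feb", "time": "10am"}

# Level 1: the record is represented as the sum of its attribute (value) embeddings.
record_repr = np.sum([emb[v] for v in record.values()], axis=0)

# Level 2: each attribute is kept as a (key embedding, value embedding) pair.
attribute_memory = [(emb[k], emb[v]) for k, v in record.items()]

print(record_repr.shape, len(attribute_memory))
```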

3.2 KB Distillation

The KB distillation module softly filters out irrelevant KB records based on the dialog history by computing a distillation distribution over the KB records. To compute this distribution, we first score each KB record against the dialog history as follows:

(1)

where CosSim is the cosine similarity between two vectors. The distillation likelihood of each record is then obtained by normalizing these scores over all KB records.
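For illustration, the sketch below (Python/NumPy; all names hypothetical and not taken from the released code) shows one plausible form of record-level distillation: each record is scored by pairwise cosine similarities between its attribute embeddings and the context word embeddings, and the scores are normalized into a distribution. The particular aggregation (max over context words, mean over attributes) and the softmax normalization are assumptions rather than the paper's exact Equation 1.

```python
import numpy as np

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def distill_distribution(records, context_words):
    """records: list of lists of attribute embeddings; context_words: list of word embeddings."""
    scores = []
    for attrs in records:
        # For each attribute, take its best match against the context, then average over attributes.
        per_attr = [max(cos_sim(a, w) for w in context_words) for a in attrs]
        scores.append(np.mean(per_attr))
    scores = np.array(scores)
    exp = np.exp(scores - scores.max())  # softmax over KB records (assumed normalization)
    return exp / exp.sum()
```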

Defining the distillation distribution over KB records rather than KB triples has two main advantages: (1) attributes (such as invitee, event, time and date in Figure 1) in a KB record are directly connected and thus easy to distill, and (2) it helps distill the right records even when the record keys are not unique. In Figure 1, row 3 would still be filtered out even though it shares the same event name as row 1.

3.3 CDNet Decoder

Following Wu et al. (2018), we first generate a sketch response which uses an entity type (or sketch) tag in place of each entity. For example, The @meeting with @invitee is at @time is generated instead of The dinner with Alex is at 10pm. When an entity tag is generated, we choose an entity suggested by the context and KB memory pointers.

Sketch RNN: We use a GRU to generate the sketch response. At each time step, a generate distribution is computed using the decoder hidden state and an attended summary of the dialog context. The summary is a weighted sum of the context word representations, with weights given by Luong attention Luong et al. (2015).

Context Memory Pointer: At each time step, we generate the copy distribution over the context by performing multi-hop Luong attention over the context memory. The initial query is set to the decoder hidden state. The query is attended over the context to generate an attention distribution and a summarized context, and in the next hop the same process is repeated with the query updated using this summary. The attention weights after the final hop are used for computing the context pointer as follows:

(2)
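A rough sketch of the multi-hop attention loop described above, assuming simple dot-product scores and an additive query update; CDNet's actual scoring function (Luong attention with trainable weights) and query-update rule may differ, so treat this purely as an illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_attention(query, memory, hops=3):
    """memory: (num_words, dim) context word representations; query: (dim,) initial decoder state."""
    alpha = None
    for _ in range(hops):
        alpha = softmax(memory @ query)   # attention distribution over context words
        summary = alpha @ memory          # summarized context
        query = query + summary           # hypothetical query update for the next hop
    return alpha, query                   # weights after the final hop give the context pointer
```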

KB Memory Pointer: At each time step, we generate the copy distribution over the KB using (1) the Luong attention weight over each KB record, (2) the Luong attention weight over the attribute keys within a record, and (3) the distillation weight of the KB record. The KB pointer is computed as follows:

(3)

The two copy pointers are combined using a soft gate See et al. (2017) to get the final copy distribution as follows,

(4)
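A minimal sketch of combining the two copy pointers with a scalar soft gate in the spirit of See et al. (2017); computing the gate as a sigmoid over the decoder state and treating both pointers as distributions over a shared output space are assumptions made only for this illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def combine_copy_distributions(p_context, p_kb, decoder_state, w_gate):
    """p_context / p_kb: copy distributions mapped onto a shared output vocabulary."""
    g = sigmoid(float(w_gate @ decoder_state))   # soft gate in [0, 1]
    return g * p_context + (1.0 - g) * p_kb      # final copy distribution
```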

3.4 Loss

We guide the distillation module using two auxiliary loss terms: an entity constraint loss and a distillation loss. Often, entities of the same type (e.g., Ana and Alex) have embeddings similar to each other. As a result, records with similar but unrelated entities are incorrectly assigned a high distillation likelihood. To alleviate this problem, we push the cosine similarity between two entities of the same type to be as low as possible. This is captured by the constraint loss given by,

(5)

where the sum is taken over the set of entity pairs in the KB that belong to the same entity type.
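A minimal sketch of the entity constraint loss in Equation 5, under the assumption that it aggregates the cosine similarities of all same-type entity pairs so that minimizing it pushes those similarities down. The helper below uses PyTorch and hypothetical names.

```python
import torch
import torch.nn.functional as F

def entity_constraint_loss(embedding, same_type_pairs):
    """embedding: nn.Embedding over KB entities; same_type_pairs: list of (i, j) index pairs."""
    if not same_type_pairs:
        return torch.tensor(0.0)
    idx_a = torch.tensor([i for i, _ in same_type_pairs])
    idx_b = torch.tensor([j for _, j in same_type_pairs])
    sims = F.cosine_similarity(embedding(idx_a), embedding(idx_b), dim=-1)
    return sims.mean()   # lower similarity between same-type entities gives a lower loss
```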

The distillation likelihood of a KB record depends on the similarity between the entities in the record and the words mentioned in the dialog context. We compute the distillation loss by defining a reference distillation distribution in which each record's weight is proportional to the number of times any of its attributes occurs in the dialog context or in the gold response. The distillation loss is given by,

(6)
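A sketch of how such a reference distillation distribution could be built from occurrence counts; matching attribute values as whole tokens and falling back to a uniform distribution when nothing matches are assumptions made only for this illustration.

```python
from collections import Counter

def reference_distillation(records, context_tokens, gold_tokens):
    """records: list of dicts of attribute values; returns a normalized distribution over records."""
    mentioned = Counter(context_tokens) + Counter(gold_tokens)
    counts = [sum(mentioned[v] for v in rec.values()) for rec in records]
    total = sum(counts)
    if total == 0:                        # fallback: no attribute mentioned anywhere (assumption)
        return [1.0 / len(records)] * len(records)
    return [c / total for c in counts]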

The overall loss combines the cross entropy losses on the generate and final copy distributions with the entity constraint loss and the distillation loss. Detailed equations are described in Appendix B.

4 Experimental Setup

Datasets: We evaluate our model on three datasets – CamRest Wen et al. (2017), Multi-WOZ 2.1 (WOZ) Budzianowski et al. (2018) and Stanford Multi-Domain (SMD) Dataset Eric et al. (2017).

Utterance | Set | MultiSet
Gold: for which one? I have two, one on the 8th at 11am and one on wednesday at 11am | {8th, 11am, wednesday} | {8th, 11am, wednesday, 11am}
Pred-1: your appointment is on 8th at 11am | {8th, 11am} | {8th, 11am}
Pred-2: your appointment is on 8th on 8th on 8th and on 8th | {8th} | {8th, 8th, 8th, 8th}
Table 1: An example to demonstrate the correctness issue with the Entity F1 metric.
CamRest SMD WOZ 2.1
Model BLEU Ent. F1 MSE F1 BLEU Ent. F1 MSE F1 BLEU Ent. F1 MSE F1
DSR Wen et al. (2018) 18.3 53.6 - 12.7 51.9 - 9.1 30.0 -
GLMP Wu et al. (2018) 15.1 58.9 57.5 13.9 59.6 59.6 6.9 32.4 -
MLM Reddy et al. (2019) 15.5 62.1 - 17 54.6 - - - -
Ent. Const. Qin et al. (2019) 18.5 58.6 - 13.9 53.7 - - - -
TTOS He et al. (2020a) 20.5 61.5 - 17.4 55.4 - - - -
DFNet Qin et al. (2020) - - - 14.4 62.7 56.7 9.4 35.1 34.8
EER He et al. (2020c) 19.2 65.7 65.5 17.2 59.0 55.1 13.6 35.6 35.0
FG2Seq He et al. (2020b) 20.2 66.4 65.4 16.8 61.1 59.1 14.6 36.5 36.0
CDNet 21.8 68.6 68.4 17.8 62.9 62.9 11.9 38.7 38.6
Table 2: Performance of CDNet and baselines on the CamRest, SMD and Multi-WOZ 2.1 datasets.

Baselines: We compare CDNet against the following baselines: MLM Reddy et al. (2019), DSR Wen et al. (2018), GLMP Wu et al. (2018), Entity Consistent Qin et al. (2019), EER He et al. (2020c), FG2Seq He et al. (2020b), TTOS He et al. (2020a) and DFNet Qin et al. (2020).

Training Details: CDNet is trained end to end using the Adam optimizer Kingma and Ba (2014). The dimensions of the hidden states of the encoder and decoder GRUs are set to 200 and 100 respectively. Word embeddings are initialized with pre-trained 200d GloVe embeddings Pennington et al. (2014); words not in GloVe are initialized using the Glorot uniform distribution Glorot and Bengio (2010). The dropout rate is set to 0.2 and the teacher forcing ratio to 0.9. The best hyper-parameter setting for each dataset and other training details are reported in Appendix A.
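The embedding initialization described above can be sketched roughly as follows; the function and variable names are hypothetical, and using the full matrix shape for the Glorot bound is an assumption.

```python
import numpy as np

def build_embedding_matrix(vocab, glove, dim=200, seed=0):
    """vocab: list of words; glove: dict mapping word -> np.ndarray of size dim (pre-loaded)."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (len(vocab) + dim))   # Glorot uniform bound
    matrix = np.empty((len(vocab), dim), dtype=np.float32)
    for i, word in enumerate(vocab):
        if word in glove:
            matrix[i] = glove[word]             # copy the pre-trained GloVe vector
        else:
            matrix[i] = rng.uniform(-limit, limit, size=dim)  # Glorot uniform for OOV words
    return matrix
```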

Evaluation Metrics: We measure the performance of all the models using BLEU Papineni et al. (2002), our proposed multiset entity F1 and for completeness the previously used entity F1 Wu et al. (2018).

MultiSet Entity F1 (MSE F1): The entity F1 metric is used to measure the model’s ability to predict relevant entities from the KB. It is computed by micro averaging over the set of entities in the gold responses and the set of entities in the predicted responses. This metric suffers from two main problems. First, when the gold response has multiple instances of the same entity value, the value is counted just once in the set representation. For example, in Table 1 the entity value 11am occurs twice in the gold response but is counted just once in the set representation. As a result, the recall computation does not penalize the prediction Pred-1 for missing an instance of 11am. Second, the existing metric fails to penalize models that stutter. For example, in Table 1 the precision of Pred-2 is not penalized for repeating the entity value 8th. We propose a simple modification to the entity F1 metric to fix these correctness issues. The modified metric, named MultiSet Entity F1, is computed by micro averaging over multisets of entities rather than sets. As multisets allow multiple instances of the same entity value, the metric (1) accounts for an entity value mentioned more than once in the gold response by penalizing recall for missing any instance, and (2) accounts for models that stutter by penalizing precision.
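A minimal sketch of the proposed metric: precision and recall are micro averaged over multisets of entities, so repeated entity values are counted as many times as they occur. The helper below is hypothetical (not from the released code) and uses Counter intersection for the multiset overlap.

```python
from collections import Counter

def multiset_entity_f1(gold_entity_lists, pred_entity_lists):
    """Micro-averaged F1 over multisets of entities, one list of entities per response."""
    tp = gold_total = pred_total = 0
    for gold, pred in zip(gold_entity_lists, pred_entity_lists):
        gold_c, pred_c = Counter(gold), Counter(pred)
        tp += sum((gold_c & pred_c).values())   # multiset intersection = matched mentions
        gold_total += sum(gold_c.values())
        pred_total += sum(pred_c.values())
    precision = tp / pred_total if pred_total else 0.0
    recall = tp / gold_total if gold_total else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

On the Table 1 example, Pred-2 now receives precision 1/4 rather than 1/1, since three of its four mentions of 8th have no counterpart in the gold multiset.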

5 Results

The results are shown in Table 2. On CamRest and SMD, CDNet outperforms existing models in both MSE F1 and BLEU. On WOZ, CDNet achieves the best score only on MSE F1. We observed that the responses generated by CDNet on WOZ were appropriate, but did not have high lexical overlap with the gold responses. To investigate this further, we perform a human evaluation of the responses predicted by CDNet, FG2Seq and EER.

Figure 3: t-SNE plots of entity embeddings from SMD for (a) CDNet and (b) GLMP.

Human Evaluation: We conduct a human evaluation to assess two dimensions of the generated responses: (1) Appropriateness: how useful the responses are for the given dialog context and KB, and (2) Naturalness: how human-like the predicted responses are. We randomly sampled 75 dialogs from each of the three datasets and requested two judges to evaluate the responses on a Likert scale Likert (1932). The results are summarized in Table 3. CDNet outperforms both FG2Seq and EER on appropriateness across all three datasets. Despite having a lower BLEU score on WOZ, CDNet performs on par with the other two baselines on naturalness.

Appropriateness Naturalness
Model SMD Cam WoZ SMD Cam WoZ
EER 2.9 3.8 3.4 3.6 4.2 4.0
FG2Seq 3.1 3.7 3.7 3.9 4.3 4.0
CDNet 3.6 4.1 3.9 3.7 4.3 4.1
Table 3: Human Evaluation of CDNet on the CamRest, SMD and Multi-WOZ 2.1 datasets.

Ablation Study: We perform an ablation study by defining three variants. Table 4 shows the MSE F1 and BLEU for these settings on the CamRest and SMD datasets. (1) We remove the entity constraint loss from the overall loss. (2) We replace our pairwise similarity based score used for KB distillation with the global pointer score proposed by Wu et al. (2018); we refer to this setting as naive distillation. (3) We replace our pairwise similarity based score with the entry-level attention proposed by Wen et al. (2018). We see that both of our contributions, the pairwise similarity scorer for computing the distillation distribution and the entity constraint loss, contribute to the overall performance.

CamRest SMD
Model BLEU MSE F1 BLEU MSE F1
CDNet 21.8 68.4 17.8 62.9
No Ent. Constraint Loss 19.2 65.4 17.4 62.2
Naive Dist. 15.0 64.2 16.9 60.6
Entry-Level Attn. 16.2 62.0 17.1 59.4
Table 4: Ablation study of CDNet on the CamRest and SMD datasets.

Discussion: We now discuss the effect of the entity constraint loss on the KB entity embeddings. Figure 3 shows the t-SNE plots Van der Maaten and Hinton (2008) of the entity embeddings of CDNet and GLMP, where entities of the same type are shown in the same colour. We see that entities of the same type (e.g. father and boss of the type invitee) are clustered together in the embedding space of GLMP, while they are distributed across the space in CDNet. This shows that the entity constraint loss has helped reduce the embedding similarity between entities of the same type and ensures that KB records with similar but unrelated entities are filtered out by the KB distillation. A visualization of how the distillation distribution helps identify relevant KB entities is shown in Appendix C.
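A plot in the style of Figure 3 can be produced along these lines with scikit-learn's t-SNE; the function below is a hypothetical sketch, and the perplexity and marker size are arbitrary choices.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_entity_embeddings(embeddings, type_labels):
    """embeddings: (num_entities, dim) array; type_labels: list of entity-type strings."""
    coords = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)
    for t in sorted(set(type_labels)):
        idx = [i for i, lab in enumerate(type_labels) if lab == t]
        plt.scatter(coords[idx, 0], coords[idx, 1], label=t, s=12)  # one colour per entity type
    plt.legend()
    plt.show()
```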

6 Conclusion

We propose CDNet for learning end-to-end task oriented dialog systems. CDNet performs KB distillation at the level of KB records, thereby respecting the relationships between connected attributes. It uses a pairwise similarity based score function to better distill the relevant KB records. By defining constraints over embeddings of entities of the same type, CDNet filters out contextually unrelated KB records. We also propose a simple modification to the entity F1 metric that fixes a correctness issue; we refer to the new metric as multiset entity F1. CDNet significantly outperforms existing approaches on multiset entity F1 and appropriateness, while being comparable on naturalness and BLEU. We release our code for further research.

Acknowledgments

This work is supported by IBM AI Horizons Network grant, an IBM SUR award, grants by Google, Bloomberg and 1MG, a Visvesvaraya faculty award by Govt. of India, and the Jai Gupta chair fellowship by IIT Delhi. We thank the IIT Delhi HPC facility for computational resources.

References

  • Bordes and Weston (2017) Antoine Bordes and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In International Conference on Learning Representations.
  • Budzianowski et al. (2018) Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026.
  • Eric et al. (2017) Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Dialog System Technology Challenges, Saarbrücken, Germany, August 15-17, 2017, pages 37–49.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249–256. JMLR Workshop and Conference Proceedings.
  • He et al. (2020a) Wanwei He, Min Yang, Rui Yan, Chengming Li, Ying Shen, and Ruifeng Xu. 2020a. Amalgamating knowledge from two teachers for task-oriented dialogue system with adversarial training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3498–3507, Online. Association for Computational Linguistics.
  • He et al. (2020b) Zhenhao He, Yuhong He, Qingyao Wu, and Jian Chen. 2020b. Fg2seq: Effectively encoding knowledge for end-to-end task-oriented dialog. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8029–8033. IEEE.
  • He et al. (2020c) Zhenhao He, Jiachun Wang, and Jian Chen. 2020c. Task-oriented dialog generation with enhanced entity representation. Proc. Interspeech 2020, pages 3905–3909.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Lei Ba. 2014. Adam: A method for stochastic optimization. In Proc. 3rd Int. Conf. Learn. Representations.
  • Likert (1932) Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of psychology.
  • Luong et al. (2015) Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics.
  • Van der Maaten and Hinton (2008) Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).
  • Madotto et al. (2018) A. Madotto, CS. Wu, and P. Fung. 2018. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of 56th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543.
  • Qin et al. (2019) Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, and Ting Liu. 2019. Entity-consistent end-to-end task-oriented dialogue system with kb retriever. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 133–142.
  • Qin et al. (2020) Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, and Ting Liu. 2020. Dynamic fusion network for multi-domain end-to-end task-oriented dialog. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6344–6354, Online. Association for Computational Linguistics.
  • Raghu et al. (2019) Dinesh Raghu, Nikhil Gupta, and Mausam. 2019. Disentangling Language and Knowledge in Task-Oriented Dialogs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1239–1255, Minneapolis, Minnesota. Association for Computational Linguistics.
  • Reddy et al. (2019) Revanth Gangi Reddy, Danish Contractor, Dinesh Raghu, and Sachindra Joshi. 2019. Multi-level memory for task oriented dialogs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3744–3754.
  • Schlichtkrull et al. (2018) Michael Sejr Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In ESWC.
  • Schuster and Paliwal (1997) Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE transactions on Signal Processing, 45(11):2673–2681.
  • See et al. (2017) Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083.
  • Sordoni et al. (2015) Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 553–562.
  • Wen et al. (2018) Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, and Ting Liu. 2018. Sequence-to-sequence learning for task-oriented dialogue with dialogue state representation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3781–3792, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
  • Wen et al. (2017) TH Wen, D Vandyke, N Mrkšić, M Gašić, LM Rojas-Barahona, PH Su, S Ultes, and S Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of Conference, volume 1, pages 438–449.
  • Wu et al. (2018) Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2018. Global-to-local memory pointer networks for task-oriented dialogue. In International Conference on Learning Representations.

Appendix A Training Details:

All the hyper parameters are finalised after a grid search over the dev set. We grid search over the learning rate (LR), the Disentangle Label Dropout (DLD) rate Raghu et al. (2019), and the number of hops H in the response decoder. We run each hyperparameter setting 10 times and use the setting with the best validation entity F1. The best performing hyperparameters for all datasets are listed in Table 5.

Dataset Hops DLD LR Val MSE F1
CamRest 1 0% 0.0005 68.6
SMD 3 5% 0.00025 60.4
WoZ 2.1 3 0% 0.00025 34.3
Table 5: Best performing hyperparameters along with the best validation MSE F1 (Val MSE F1) achieved for the three datasets.

All experiments were run on a single Nvidia V100 GPU with 32GB of memory. CDNet has an average runtime of 3 hours (6 min per epoch), 10 hours (20 min per epoch) and 24 hours (36 min per epoch) on CamRest, SMD and WOZ respectively.

CDNet has a total of 2.8M trainable parameters (400K for the embedding matrix, 720K for the context encoder, 240K for the sketch RNN and 1,440K for the memory pointers).

Appendix B Detailed Equations

In this section, we describe the details of the context encoder, the CDNet decoder and the loss.

B.1 Context Encoder

Given a dialog history, we compute the utterance representations and the context representation as follows:

(7)
(8)

where the indices range over the utterances in the dialog history and the words within each utterance.

B.2 CDNet Decoder

The decoder hidden state at each time step is computed from the previous hidden state and the previously predicted word as follows,

(9)

Now, we compute multi-hop Luong attention over the word representations in the context memory. We set the initial query to the decoder hidden state and then apply Luong attention as follows:

(10)

where the attention weight matrices are trainable parameters. We then compute the summarized context representation and the next hop query as follows:

(11)
(12)

We repeat this for a fixed number of hops. The generate distribution is computed from the attention vector after the final hop and is given by:

(13)

where the projection matrices are trainable parameters. The context copy distribution is computed as follows:

(14)

The KB copy distribution is given by,

(15)
(16)
(17)

where the listed weight matrices are trainable parameters. Now we compute the gate that combines the context and KB copy distributions into the final copy distribution as follows:

(18)
(19)
(20)

where the gate parameters are trainable.

B.3 Loss

We compute the cross entropy losses over the generate and copy distributions as follows:

(21)
(22)

Appendix C Distillation Visualisation

We show the visualisation of how the KB distillation distribution helps the decoder rectify the incorrect KB memory pointer inference in Figure 4. Figure 5 shows how the KB distillation distribution helps increase the confidence associated with the correct entity in the KB.

Appendix D Datasets

We present statistics of SMD, CamRest and WOZ in Table 6.

SMD CamRest WOZ
Train Dialogs 2425 406 1839
Val Dialogs 302 135 117
Test Dialogs 304 135 141
Table 6: Statistics of the three datasets.

Appendix E Domain-Wise Results

Table 7 and Table 8 show the domain wise entity F1 scores on the SMD and WOZ datasets respectively. We note that CDNet has either the best or the second-best performance in the domain wise scores.

Model BLEU F1 MSE F1 Cal Wea Nav
MLM 17.0 54.6 - 66.7 56 46.9
DSR 12.7 51.9 - 52.1 50.4 52.0
Ent. Const. 13.9 53.7 - 55.6 52.2 54.5
TTOS 17.4 55.4 - 63.5 64.1 45.9
DFNet 14.4 62.7 - 73.1 57.6 57.9
GLMP 13.9 59.6 59.6 70.2 58.0 54.3
EER 17.2 59.0 55.1 71.8 57.8 52.5
FG2Seq 16.8 61.1 59.1 73.3 57.4 56.1
CDNet 17.8 62.9 62.9 75.4 61.3 56.7
Table 7: Domain wise Entity F1 performance of CDNet and baselines on the SMD dataset.
Model BLEU F1 MSE F1 Hot Att Res
DSR 9.1 30.0 - 27.1 28.0 33.4
DFNet 9.4 35.1 34.8 30.6 28.1 40.9
GLMP 6.9 32.4 - 28.1 24.4 38.4
EER 13.6 35.6 35.0 35.7 43.0 34.2
FG2Seq 14.6 36.5 36.0 34.4 37.2 38.9
CDNet 11.9 38.7 38.6 36.3 38.9 41.7
Table 8: Domain wise Entity F1 performance of CDNet and baselines on WOZ dataset.

Appendix F Qualitative Example

Table 9 shows responses predicted by CDNet, EER and FG2Seq for an example from the WOZ dataset.

Appendix G Human Evaluation

Figure 6 shows a screenshot of the task used for collecting human judgements.

Figure 4: Attention visualization of a decode time step of an example from the SMD dataset. The predicted sketch tag is @poi. The final KB copy distribution is computed by combining the output of the KB memory pointer and the distillation distribution.
Figure 5: Attention visualization of a decode time step of an example from the SMD dataset. The predicted sketch tag is @address. The final KB copy distribution is computed by combining the output of the KB memory pointer and the distillation distribution.
Address Area Choice Name Postcode Pricerange Ref Food
regent_street_city_centre centre a_lot pizza_hut_city_centre cb21ab cheap oog9kq5v italian
4_-_6_rose_crescent centre a_lot la_raza cb23ll cheap oog9kq5v spanish
finders_corner_newmarket_road east a_lot the_missing_sock cb259aq cheap oog9kq5v international
106_regent_street_city_centre centre a_lot curry_garden cb21dp expensive oog9kq5v indian
usr-1 i need a cheap spanish restaurant . are there any around ?
sys-1 yes , la_raza is a restaurant serving spanish food in the cheap price range . it is located at 4_-_6_rose_crescent . would you like a reservation ?
usr-2 i would like a restaurant in the centre .
sys-2 la_raza is a cheap restaurant serving spanish food in the centre part of town . it ’ s address is 4_-_6_rose_crescent and it ’ s post code is cb2311 .
usr-3 great . can you book me a table for 7 on tuesday at 11:45 and provide me with a reference number ?
Gold all right , i ’ ve made a reservation . your reference number is oog9kq5v . what else can i do for you ?
EER i ’ m sorry , but i was unable to book that for you . would you like to try another restaurant ?
FG2Seq i ’ m sorry , but the la_raza restaurant restaurants . would you like me to find another restaurant ?
CDNet i have booked you at la_raza . your reference number is oog9kq5v . is there anything else i can help you with ?
Table 9: Example dialog from WOZ along with responses generated by EER, FG2Seq and CDNet.
Figure 6: A sample human evaluation task used for collecting appropriateness and naturalness of responses generated by three (anonymized) models.