Reference-Aware Language Models

11/05/2016 ∙ by Zichao Yang, et al. ∙ Google ∙ Carnegie Mellon University

We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models that are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show that our model variants outperform models based on deterministic attention.


1 Introduction

Referring expressions (REs) in natural language are noun phrases (proper nouns, common nouns, and pronouns) that identify objects, entities, and events in an environment. REs occur frequently and play a key role in communicating information efficiently. While REs are common in natural language, most previous work does not model them explicitly, either treating REs as ordinary words in the model or replacing them with special tokens that are filled in with a post-processing step (wen:2016; LuongSLVZ15). Here we propose a language modeling framework that explicitly incorporates reference decisions. In part, this is based on the principle of pointer networks in which copies are made from another source (GulcehreANZB16; GuLLL16; ling:2016; ptrnets; ahn:2016; merity2016pointer). However, in the full version of our model, we go beyond simple copying and enable coreferent mentions to have different forms, a key characteristic of natural language reference.


Figure 1: Reference-aware language models.

Figure 1 depicts examples of REs in the context of the three tasks that we consider in this work. First, many models need to refer to a list of items (kiddon:2016; wen:2016). In the task of recipe generation from a list of ingredients (kiddon:2016), the generation of the recipe will frequently refer to these items. As shown in Figure 1, in the recipe “Blend soy milk and …”, soy milk refers to the ingredient summaries. Second, reference to a database is crucial in many applications. One example is task-oriented dialogue, where access to a database is necessary to answer a user’s query (young2013pomdp; li:2016; vinyals:2015; wen:2016; sordoni:2015; serban2016building; bordes2016learning; williams2016end; shang2015neural; wen2016network). Here we consider the domain of restaurant recommendation, where a system refers to restaurants (name) and their attributes (address, phone number, etc.) in its responses. When the system says “the nirala is a nice restaurant”, it refers to the restaurant name the nirala from the database. Finally, we address references within a document (mikolov2010recurrent; ji2015document; wang2015larger), as the generation of words will often refer to previously generated words; for instance, the same entity will often be referred to throughout a document. In Figure 1, the entity you refers to I in a previous utterance. In this case, copying is insufficient: although the referent is the same, the form of the mention is different.

In this work we develop a language model that has a specific module for generating REs. A series of decisions (should I generate an RE? If yes, which entity in the context should I refer to? How should the RE be rendered?) augment a traditional recurrent neural network language model, and the two components are combined as a mixture model. Selecting an entity in context is similar to familiar models of attention (BahdanauCB14), but rather than being a soft decision that reweights representations of elements in the context, it is treated as a hard decision over contextual elements which are stochastically selected and then copied or, if the task warrants it, transformed (e.g., a pronoun rather than a proper name is produced as output). In cases where the stochastic decision is not available in training, we treat it as a latent variable and marginalize it out. For each of the three tasks, we pick one representative application and demonstrate our reference-aware model’s efficacy in evaluations against models that do not explicitly include a reference operation.

Our contributions are as follows:

  • We propose a general framework to model reference in language. We consider reference to entries in lists, tables, and document context, and instantiate the framework in three specific applications: recipe generation, dialogue modeling, and coreference-based language models.

  • We develop the first neural model of reference that goes beyond copying and can model (conditional on context) how to form the mention.

  • We perform a comprehensive evaluation of our models on the three data sets and verify that our proposed models perform better than strong baselines.

2 Reference-aware language models

Here we propose a general framework for reference-aware language models.

We denote each document as a series of tokens $x_1, \ldots, x_L$, where $L$ is the number of tokens. Our goal is to maximize $p(x_i \mid c_i)$, the probability of each word $x_i$ given its previous context $c_i = x_1, \ldots, x_{i-1}$. In contrast to traditional neural language models, we introduce a variable $z_i$ at each position, which controls the decision on which source $x_i$ is generated from. The conditional probability is then given by:

$$p(x_i, z_i \mid c_i) = p(x_i \mid z_i, c_i)\, p(z_i \mid c_i), \qquad (1)$$

where $z_i$ has different meanings in different contexts. If $x_i$ is generated from a reference list or a database, then $z_i$ is one-dimensional and denotes whether $x_i$ is generated as a reference. $z_i$ can also model more complex decisions. In the coreference-based language model, $z_i$ denotes a series of sequential decisions, such as whether $x_i$ is an entity mention and, if so, which entity $x_i$ refers to. When $z_i$ is not observed, we train our model to maximize the marginal probability over $z_i$, i.e., $p(x_i \mid c_i) = \sum_{z_i} p(x_i \mid z_i, c_i)\, p(z_i \mid c_i)$.
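The two-branch factorization in Eq. (1) can be made concrete with a minimal sketch. The probabilities below are made up for illustration, and the names (p_z_ref, p_word_ref, p_word_vocab) are ours, not the paper's: a word's marginal probability mixes a reference branch and a vocabulary branch, weighted by the latent switch.

```python
import math

def marginal_word_prob(p_z_ref, p_word_ref, p_word_vocab):
    """Marginal over the latent source decision z:
    p(x|c) = p(z=1|c) * p(x|z=1,c) + p(z=0|c) * p(x|z=0,c)."""
    return p_z_ref * p_word_ref + (1.0 - p_z_ref) * p_word_vocab

# Toy numbers: the switch puts 30% mass on "reference"; the token is
# easy to copy (0.9) but rare under the vocabulary softmax (0.001).
p = marginal_word_prob(0.3, 0.9, 0.001)
loss = -math.log(p)  # training maximizes the log marginal
```

Because the latent decision is summed out, a rare reference target can receive substantial probability through the copy branch even when the vocabulary softmax assigns it almost none.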

2.1 Reference to lists

Ingredients:
  • 1 cup plain soy milk
  • 3/4 cup packed fresh spinach leaves
  • 1 large banana, sliced

Recipe: Blend soy milk and spinach leaves together in a blender until smooth. Add banana and pulse until thoroughly blended.

Table 1: Ingredients and recipe for Spinach and Banana Power Smoothie.

We begin to instantiate the framework by considering reference to a list of items. Referring to a list of items has broad applications, such as generating documents based on summaries. Here we specifically consider the application of recipe generation conditioned on an ingredient list. Table 1 illustrates the ingredient list and recipe for Spinach and Banana Power Smoothie. We can see that the ingredients soy milk, spinach leaves, and banana occur in the recipe.


Figure 2: Recipe pointer

Let the ingredients of a recipe be $d_1, \ldots, d_T$, where each ingredient $d_j$ contains a sequence of tokens $d_{j,1}, \ldots, d_{j,K_j}$. The corresponding recipe is a sequence of tokens $y = y_1, \ldots, y_L$. We would like to model $p(y \mid d_1, \ldots, d_T)$.

We first use an LSTM (hochreiter1997long) to encode each ingredient: $h_j = \mathrm{LSTM}(d_{j,1}, \ldots, d_{j,K_j})$. Then, we sum the resulting final states of the ingredients to obtain the starting LSTM state of the decoder. We use an attention-based decoder, where $\mathrm{attend}(s_t, \{h_j\})$ is the attention function that returns a probability distribution over the set of vectors $\{h_j\}$, conditioned on the input representation $s_t$. A full description of this operation is given in (BahdanauCB14). The decision to copy from the ingredient list or generate a new word from the softmax is performed using a switch, denoted as $z_t$. We can obtain a probability distribution for copying each of the words in the ingredients from the weights computed in the attention mechanism.
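The attention function can be sketched as a softmax over dot-product similarity scores. This is a simplified stand-in for the (BahdanauCB14) attention, with toy two-dimensional vectors of our own choosing:

```python
import math

def attend(q, vectors):
    """Return a probability distribution over `vectors` (e.g. the
    ingredient encodings h_j), conditioned on the query q, via a
    softmax over dot-product scores."""
    scores = [sum(qi * vi for qi, vi in zip(q, v)) for v in vectors]
    m = max(scores)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# The ingredient encoding most aligned with the decoder state
# receives the most probability mass.
weights = attend([1.0, 0.0], [[2.0, 0.0], [0.0, 2.0]])
```

The returned weights are exactly the copy distribution described above: nonnegative and summing to one over the candidate reference targets.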

Objective: We can obtain the value of $z_t$ through a string match of tokens in recipes with tokens in ingredients: if a token $y_t$ appears in the ingredients, we set $z_t = 1$, and $z_t = 0$ otherwise. We can then train the model in a fully supervised fashion, treating these string-match values of $z_t$ as gold decisions.

However, this supervision may not be accurate. In many cases, tokens that appear in the ingredients do not actually refer to ingredient tokens. For example, a recipe may start with “Prepare a cup of water”; the token “cup” does not refer to the “cup” in the ingredient “1 cup plain soy milk”. To solve this problem, we treat $z_t$ as a latent variable and maximize the marginal probability of $y_t$ over all possible values of $z_t$. In this way, the model can automatically learn when to refer to tokens in the ingredients. Thus, the probability of generating token $y_t$ is defined as the marginal $p(y_t) = p(z_t = 1)\, p_{\mathrm{copy}}(y_t) + p(z_t = 0)\, p_{\mathrm{vocab}}(y_t)$.

If no string match is found for $y_t$, we simply fix $z_t = 0$ in the above objective.

2.2 Reference to databases

name price range food area address post code phone
ali baba moderate lebanese centre 59 Hills Road City Centre CB 2, 1 NT 01462 432565
the nirala moderate indian north 7 Milton Road Chesterton CB 4, 1 UY 01223 360966
Table 2: Fragment of database for dialogue system.

We then consider the more complicated task of reference to database entries. Referring to databases is common in question answering and dialogue systems, where databases serve as external knowledge that is consulted to answer users’ queries. In this paper, we consider the application of task-oriented dialogue systems in the domain of restaurant recommendations. Unlike lists, which are one-dimensional, databases are two-dimensional, and referring to table entries requires more sophisticated model design.

To better understand the model, we first briefly introduce the data set. We use dialogues from the second Dialogue State Tracking Challenge (DSTC2) (henderson2014dialog). Table 3 shows one example dialogue from this dataset.

M: Hello, welcome to the Cambridge restaurant system? You can ask for restaurants by area, price range or food type . How may I help you?
U: i want a moderately priced restaurant in the north town
M: the nirala is a nice restaurant in the north of town in the moderate price range
U: what is the address
M: Sure , the nirala is on 7 Milton Road Chesterton
Table 3: Example dialogue, M stands for Machine and U stands for User

We can observe from this example that users get recommendations of restaurants based on queries specifying the area, price and food type of the restaurant. We can support the system’s decisions by incorporating a mechanism that allows the model to query the database for restaurants that satisfy the users’ queries. A sample of our database (see the data preparation section for how we construct it) is shown in Table 2. Each restaurant has 6 attributes that are commonly referred to in the dialogue dataset. As such, if the user requests a restaurant that serves “indian” food, we wish to train a model that can search for entries whose “food” column contains “indian”. We now describe how we build a model that fulfills these requirements, starting with the basic dialogue framework into which we incorporate the table reference module.


Figure 3: Hierarchical RNN Seq2Seq model. The red box denotes attention mechanism over the utterances in the previous turn.

Basic Dialogue Framework: We build a basic dialogue model based on the hierarchical RNN model described in (serban2016building): in dialogues, the generation of a response depends not only on the previous sentence, but on all sentences leading up to the response. We assume that the dialogue alternates between a machine and a user. An illustration of the model is shown in Figure 3.

Consider a dialogue with $T$ turns. The utterances from the user and the machine are denoted $u_1, \ldots, u_T$ and $m_1, \ldots, m_T$ respectively, where $u_{t,i}$ ($m_{t,i}$) denotes the $i$-th token in the $t$-th utterance from the user (the machine). The dialogue sequence starts with a machine utterance and is given by $m_1, u_1, m_2, u_2, \ldots, m_T, u_T$. We would like to model the utterances from the machine, i.e., $p(m_t \mid m_{<t}, u_{<t})$.

We encode the utterances into continuous space in a hierarchical way with LSTMs. Sentence Encoder: For a given utterance $u_t$, we encode it token by token with an LSTM; the representation of $u_t$ is given by the final hidden state $h^u_t$. The same process is applied to obtain the machine utterance representation $h^m_t$. Turn Encoder: We further encode the sequence $(h^m_1, h^u_1, \ldots, h^m_t, h^u_t)$ with another LSTM encoder. We refer to its last hidden state as $s_t$, which can be seen as the hierarchical encoding of all previous utterances. Decoder: We use $s_t$ as the initial state of a decoder LSTM and decode each token in the next machine utterance $m_{t+1}$.
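The two-level hierarchy above can be sketched with a toy recurrence standing in for the LSTM cells (the tanh update and all names are our simplification, not the paper's parameterization): each utterance is folded into one vector, and a second recurrence folds the utterance vectors into the turn-level state s_t.

```python
import math

def toy_rnn(vectors, dim):
    """Stand-in for an LSTM encoder: fold a sequence of vectors into
    a final hidden state with an elementwise tanh recurrence."""
    h = [0.0] * dim
    for x in vectors:
        h = [math.tanh(hi + xi) for hi, xi in zip(h, x)]
    return h

def encode_dialogue(utterances, emb, dim=4):
    """Sentence encoder then turn encoder, as in the hierarchical
    model: each utterance -> one vector; their sequence -> s_t."""
    sent_vecs = [toy_rnn([emb[tok] for tok in utt], dim)
                 for utt in utterances]
    return toy_rnn(sent_vecs, dim)  # s_t, the initial decoder state

# Toy embeddings for a two-utterance history.
emb = {"hello": [0.5, 0.1, -0.2, 0.3],
       "world": [0.2, -0.4, 0.1, 0.0],
       "hi":    [0.1, 0.1, 0.1, 0.1]}
s_t = encode_dialogue([["hello", "world"], ["hi"]], emb)
```

The design point is that the decoder never sees raw tokens from earlier turns, only the turn encoder's state, which summarizes the whole history.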

We can also incorporate an attention mechanism in the decoder. As shown in Figure 3, we use attention over the utterances in the previous turn. Due to space limits, we do not present the attention-based decoder mathematically; readers can refer to (BahdanauCB14) for details.

2.2.1 Incorporating Table Reference

We now extend the decoder in order to allow the model to condition the generation on a database.

Pointer Switch: We use $z_t \in \{0, 1\}$ to denote the decision of whether to copy one cell from the table, and compute this probability from the decoder state, i.e., $p(z_t \mid s_t)$. Thus, if $z_t = 1$, the next token is generated from the database, whereas if $z_t = 0$, the following token is generated from the vocabulary softmax. We now describe how we generate tokens from the database.

We denote a table with $R$ rows and $C$ columns as $\{t_{ij}\}$, $i \in [1, R]$, $j \in [1, C]$, where $t_{ij}$ is the cell in row $i$ and column $j$. The attribute of each column is denoted as $a_j$; both $t_{ij}$ and $a_j$ are one-hot vectors.
Table Encoding: To encode the table, we first build an attribute vector and then an encoding vector for each cell. The attribute vector $g_j$ is simply an embedding lookup of $a_j$. For the encoding of each cell, we look up the embedding $c_{ij}$ of the cell, concatenate it with the corresponding attribute vector, and feed the result through a one-layer MLP: $e_{ij} = \tanh(W [c_{ij}; g_j])$.
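The cell encoding can be sketched as follows; the weight matrix W and the example embedding values are made up for illustration (a real model would learn them):

```python
import math

def encode_cell(cell_emb, attr_emb, W):
    """e_ij = tanh(W [c_ij ; g_j]): concatenate the cell embedding with
    its column's attribute vector, then apply a one-layer tanh MLP.
    W has one row per output dimension and
    len(cell_emb) + len(attr_emb) columns."""
    x = cell_emb + attr_emb  # list concatenation = vector concatenation
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

# Toy 2-d cell and attribute embeddings, and a 2 x 4 weight matrix.
e = encode_cell([0.1, 0.2], [0.3, -0.1],
                [[0.1, 0.2, 0.3, 0.4],
                 [0.5, -0.5, 0.25, 0.0]])
```

Concatenating the attribute vector lets the same surface token (e.g. "moderate") be encoded differently depending on which column it sits in.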
Table Pointer: The detailed process of calculating the probability distribution over the table is shown in Figure 4. The attention over cells in the table is conditioned on a given query vector $q$, similarly to the attention model for sequences. However, rather than a sequence of vectors, we now operate over a table.


Figure 4: Decoder with table pointer.

Step 1: Attention over the attributes to find out which attributes a user asks about: $p^a = \mathrm{attend}(q, \{g_j\})$. Suppose a user says cheap; then we should focus on the price attribute.

Step 2: Conditional row representation calculation: $r_i = \sum_j p^a_j e_{ij}$, so that $r_i$ contains the price information of the restaurant in row $i$.

Step 3: Attention over the rows $\{r_i\}$ to find out which restaurants satisfy the user's query: $p^r = \mathrm{attend}(q, \{r_i\})$. Restaurants with cheap prices will be picked.

Step 4: Using the probabilities $p^r$, we compute the weighted average over all rows for each column: $c_j = \sum_i p^r_i e_{ij}$; $c_j$ contains the information of the cheap restaurants.

Step 5: Attention over the columns $\{c_j\}$ to compute the probability of copying each column: $p^c = \mathrm{attend}(q, \{c_j\})$.

Step 6: To get the probability matrix of copying each cell, we simply compute the outer product $p^{\mathrm{cell}} = p^r \otimes p^c$, i.e., $p^{\mathrm{cell}}_{ij} = p^r_i\, p^c_j$.

If $z_t = 1$, we embed the above attention process in the decoder by using the current decoder state $s_t$ as the query vector $q$.
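Steps 1-6 fit together in a short sketch (pure Python with toy vectors; attend is the usual dot-product softmax, and all names are ours). The key property to notice is that the outer product in Step 6 yields a proper probability distribution over cells:

```python
import math

def attend(q, vectors):
    """Softmax over dot-product scores: a distribution over `vectors`."""
    scores = [sum(a * b for a, b in zip(q, v)) for v in vectors]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [x / total for x in exps]

def table_pointer(q, attr_vecs, cell_vecs):
    """cell_vecs[i][j] encodes cell (i, j); attr_vecs[j] encodes the
    attribute of column j; q is the query (decoder state)."""
    dim, R, C = len(q), len(cell_vecs), len(attr_vecs)
    p_attr = attend(q, attr_vecs)                       # Step 1
    rows = [[sum(p_attr[j] * cell_vecs[i][j][d] for j in range(C))
             for d in range(dim)] for i in range(R)]    # Step 2
    p_row = attend(q, rows)                             # Step 3
    cols = [[sum(p_row[i] * cell_vecs[i][j][d] for i in range(R))
             for d in range(dim)] for j in range(C)]    # Step 4
    p_col = attend(q, cols)                             # Step 5
    return [[pr * pc for pc in p_col] for pr in p_row]  # Step 6

# A toy 2 x 2 table with 2-d encodings.
q = [1.0, 0.0]
attrs = [[1.0, 0.0], [0.0, 1.0]]
cells = [[[0.5, 0.2], [0.1, 0.9]],
         [[0.7, 0.1], [0.3, 0.3]]]
p_cell = table_pointer(q, attrs, cells)
```

Since $p^r$ and $p^c$ each sum to one, the cell probabilities sum to one, so the output can be used directly as the copy distribution when $z_t = 1$.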

Objective: As in the previous task, we can train the model in a fully supervised fashion, or we can treat the decision as a latent variable. We can obtain supervision for $z_t$ in a similar way, by string matching against the table.

2.3 Reference to document context

Finally, we address the references that occur within a document itself and build a language model that uses coreference links to point to previous entities. Before generating a word, we first decide whether it is an entity mention. If so, we decide which entity this mention belongs to, then we generate the word based on that entity. Denote the document as $x_1, \ldots, x_L$ and the entities as $e_1, \ldots, e_N$; each entity has a set of mentions that all refer to the same entity. We use an LSTM to model the document; the hidden state at token $x_t$ is $h_t$. We use a set $S = \{s_1, \ldots, s_N\}$ to keep track of the entity states, where $s_i$ is the state of entity $e_i$.

um and [I]1 think that is what's - Go ahead [Linda]2. Well and thanks goes to [you]1 and to [the media]3 to help [us]4…So [our]4 hat is off to all of [you]5

Figure 5: Coreference-based language model, example taken from wiseman2016learning.

Word generation: At each time step, before generating the next word, we predict whether the word is an entity mention:

$$p(z_t, e_t \mid h_{t-1}) = p(z_t \mid h_{t-1})\, p(e_t \mid z_t, h_{t-1}),$$

where $z_t$ denotes whether the next word is an entity mention and, if so, $e_t$ denotes which entity the next word corefers to. If the next word is an entity mention, then $x_t$ is generated conditioned on the corresponding entity state, $p(x_t \mid s_{e_t}, h_{t-1})$; else it is generated from the ordinary softmax, $p(x_t \mid h_{t-1})$. Hence, the joint probability factorizes as $p(x_t, z_t, e_t \mid h_{t-1}) = p(x_t \mid z_t, e_t, h_{t-1})\, p(z_t, e_t \mid h_{t-1})$.

Entity state update: Since there are multiple mentions for each entity and the mentions appear dynamically, we need to keep track of the entity states in order to use coreference information in entity mention prediction, so we update the entity states at each time step. In the beginning, $S = \{s_0\}$, where $s_0$ denotes the state of a virtual empty entity and is a learnable variable. If $z_t = 1$ and $e_t$ points to the empty entity, the next word is a new entity mention, so in the next step we append its hidden state to $S$, i.e., $S = S \cup \{h_t\}$. If $z_t = 1$ and $e_t$ points to an existing entity, we update the corresponding entity state with the new hidden state, $s_{e_t} = h_t$. Another way to update the entity state would be to use an LSTM to encode the sequence of mention states; here we use the latest entity mention state as the new entity state for simplicity. The detailed update process is shown in Figure 5.
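The bookkeeping above amounts to a small update rule. Here is a sketch, using index 0 for the virtual empty entity and plain strings standing in for hidden-state vectors (both are our simplifications):

```python
def update_entities(states, z, e, h):
    """states[0] is the virtual empty entity. If the next word is a
    mention (z == 1): pointing at the empty entity starts a new entity
    (append h); pointing at an existing one refreshes its state."""
    if z == 1:
        if e == 0:
            states.append(h)   # new entity: its state is this mention's h
        else:
            states[e] = h      # existing entity: latest mention wins
    return states

S = ["s0"]
S = update_entities(S, 1, 0, "h1")  # new entity introduced
S = update_entities(S, 1, 1, "h2")  # same entity mentioned again
S = update_entities(S, 0, 0, "h3")  # not a mention: no change
```

After these three steps, entity 1's state reflects its most recent mention, which is exactly the "latest mention state" simplification described above.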

Note that the stochastic decisions in this task are more complicated than in the previous two tasks: we must make two sequential decisions, whether the next word is an entity mention, and if so, which entity the mention corefers to. It is intractable to marginalize over these decisions, so we train this model in a supervised fashion (see the data preparation section for how we obtain coreference annotations).

3 Experiments

3.1 Data sets and preprocessing

Recipes: We crawled all recipes from www.allrecipes.com. There are about recipes in total, and every recipe has an ingredient list and corresponding instructions. We exclude recipes that have fewer than 10 tokens or more than 500 tokens; these account for about 0.1% of the data set. On average each recipe has 118 tokens and 9 ingredients. We randomly shuffle the whole data set and take 80% for training and 10% each for validation and test. We use a vocabulary size of 10,000 in the model.

Dialogue: We use the DSTC2 data set, from which we use only the dialogue transcripts. There are about 3,200 dialogues in total. The table is not available from DSTC2, so to reconstruct it we crawled TripAdvisor for restaurants in the Cambridge area, where the dialogue dataset was collected. We then removed restaurants that do not appear in the data set, creating a database of 109 restaurants and their attributes (e.g., food type). Since this is a small data set, we use 5-fold cross validation and report the average result over the 5 partitions. A table cell may contain multiple tokens; for example in Table 2, the name, address, post code and phone number have multiple tokens, and we replace each such cell with a single special token indexed by its row. If a table cell is empty, we replace it with the token EMPTY. We string match in the transcripts and replace the corresponding tokens from the table with these special tokens. Each dialogue on average has 8 turns (16 sentences). We use a vocabulary size of 900, including about 400 table tokens and 500 words.

Coref LM: We use the Xinhua News data set from Gigaword Fifth Edition and sample 100,000 documents with lengths ranging from 100 to 500 tokens. Each document has on average 234 tokens, so there are 23 million tokens in total. We process the documents to obtain coreference annotations and use these annotations, i.e., the values of $z_t$ and $e_t$, in training. We take 80% for training and 10% each for validation and test. We ignore entities that have only one mention, and for mentions with multiple tokens, we take the token that is most frequent across all mentions of that entity. After preprocessing, tokens that are entity mentions account for about 10% of all tokens. We use a vocabulary size of 50,000 in the model.

Model | val: PPL All / PPL Ing / PPL Word / BLEU | test: PPL All / PPL Ing / PPL Word / BLEU
Seq2Seq | 5.60 / 11.26 / 5.00 / 14.07 | 5.52 / 11.26 / 4.91 / 14.39
Attn | 5.25 / 6.86 / 5.03 / 14.84 | 5.19 / 6.92 / 4.95 / 15.15
Pointer | 5.15 / 5.86 / 5.04 / 15.06 | 5.11 / 6.04 / 4.98 / 15.29
Latent | 5.02 / 5.10 / 5.01 / 14.87 | 4.97 / 5.19 / 4.94 / 15.41
Table 4: Recipe results, evaluated in perplexity and BLEU score. All means all tokens; Ing denotes tokens from recipes that appear in ingredients; Word means non-ingredient tokens. Pointer and Latent differ in that for Pointer we provide a supervised signal on when to generate a reference token, while in Latent it is a latent decision.
Model | All | Table | Table OOV | Word
Seq2Seq | 1.35±0.01 | 4.98±0.38 | 1.99E7±7.75E6 | 1.23±0.01
Table Attn | 1.37±0.01 | 5.09±0.64 | 7.91E7±1.39E8 | 1.24±0.01
Table Pointer | 1.33±0.01 | 3.99±0.36 | 1360±2600 | 1.23±0.01
Table Latent | 1.36±0.01 | 4.99±0.20 | 3.78E7±6.08E7 | 1.24±0.01
+ Sentence Attn
Seq2Seq | 1.28±0.01 | 3.31±0.21 | 2.83E9±4.69E9 | 1.19±0.01
Table Attn | 1.28±0.01 | 3.17±0.21 | 1.67E7±9.5E6 | 1.20±0.01
Table Pointer | 1.27±0.01 | 2.99±0.19 | 82.86±110 | 1.20±0.01
Table Latent | 1.28±0.01 | 3.26±0.25 | 1.27E7±1.41E7 | 1.20±0.01
Table 5: Dialogue perplexity results. Table means tokens from the table; Table OOV denotes table tokens that do not appear in the training set; Sentence Attn denotes the use of an attention mechanism over tokens in utterances from the previous turn.

3.2 Baselines, model training and evaluation

We compare our model with baselines that do not model reference explicitly. For recipe generation and dialogue modeling, we compare our model with basic seq2seq and attention models. We also apply an attention mechanism over the table as a baseline for dialogue modeling. For the coreference-based language model, we compare our model with a simple RNN language model.

We train all models with simple stochastic gradient descent with gradient clipping. We use a one-layer LSTM for all RNN components. Hyper-parameters are selected using grid search based on the validation set.

Evaluation of our model is challenging since it involves three rather different applications. We focus on evaluating the accuracy of predicting the reference tokens, which is the goal of our model. Specifically, we report the perplexity of all words, of words that can be generated from reference, and of non-reference words. The perplexity is calculated by multiplying together the probabilities of the decisions at each step. Note that non-reference words also appear in the vocabulary, so this is a fair comparison to models that do not model reference explicitly. For the recipe task, we also generate recipes using a beam size of 10 and evaluate them with BLEU. We did not use BLEU for dialogue generation, since database entries make up only a very small fraction of the tokens in the utterances.
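The perplexity computation described here is standard. As a sketch (the token probabilities below are toy values), each token's probability already has the source-decision probability multiplied in, and perplexity is the exponentiated average negative log-probability over the chosen token subset:

```python
import math

def perplexity(token_probs):
    """exp of the mean negative log-probability over a token subset
    (e.g. all tokens, reference tokens only, or non-reference tokens)."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

ppl = perplexity([0.25, 0.25])  # uniform over 4 outcomes
```

Restricting `token_probs` to reference tokens only gives the "Table" and "Entity" columns reported in the result tables; restricting to the rest gives the "Word" columns.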

3.3 Results and analysis

The results for recipe generation, dialogue, and the coreference-based language model are shown in Tables 4, 5, and 6, respectively. The recipe results in Table 4 verify that modeling reference explicitly improves performance: Latent and Pointer perform better than the Seq2Seq and Attn models. The Latent model performs better than the Pointer model, since tokens in the ingredients that match tokens in the recipe do not necessarily come from the ingredients; imposing a supervised signal thus gives wrong information to the model and makes the result worse. With a latent decision, the model learns when to copy and when to generate from the vocabulary.

The findings for dialogue, shown in Table 5, largely follow those for recipe generation. Conditioning on the table performs better for predicting table tokens in general, and Table Pointer has the lowest perplexity for tokens in the table. Since table tokens appear rarely in the dialogue transcripts, the overall perplexities do not differ much and the non-table token perplexities are similar. With an attention mechanism over the table, the perplexity of table tokens improves over the basic Seq2Seq model, but is still not as good as directly pointing to cells in the table, which shows the advantage of modeling reference explicitly. As expected, using sentence attention improves results significantly over models without sentence attention. Surprisingly, Table Latent performs much worse than Table Pointer. We also measure the perplexity of table tokens that appear only in the test set. For models other than Table Pointer, because these tokens never appear in the training set, the perplexity is quite high, while Table Pointer predicts them much more accurately. This supports our conjecture that our model can learn to reason over databases.

The coreference-based LM results are shown in Table 6. We find that the coreference-based LM performs much better on entity perplexity, but is slightly worse for non-entity words. We found this to be an optimization issue: the model was stuck in a local optimum. When we instead initialize the Pointer model with the weights learned from the LM, the Pointer model performs better than the LM on both entity and non-entity word perplexity.

In Appendix A, we also visualize the heat map of table reference and list items reference. The visualization shows that our model can correctly predict when to refer to which entries according to context.

Model | val: All / Entity / Word | test: All / Entity / Word
LM | 33.08 / 44.52 / 32.04 | 33.08 / 43.86 / 32.10
Pointer | 32.57 / 32.07 / 32.62 | 32.62 / 32.07 / 32.69
Pointer + init | 30.43 / 28.56 / 30.63 | 30.42 / 28.56 / 30.66
Table 6: Coreference-based LM perplexity. Pointer + init means we initialize the model with the LM weights.

4 Related Work

In terms of methodology, our work is closely related to previous work that incorporates a copying mechanism into neural models (GulcehreANZB16; GuLLL16; ling:2016; ptrnets). Our models are similar to those proposed in (ahn:2016; merity2016pointer), where the generation of each word can be conditioned on a particular entry in a knowledge list or on previous words. In our work, we describe a model with broader applications, allowing us to condition on databases, lists, and dynamic lists.

In terms of applications, our work is related to chit-chat dialogue (li:2016; vinyals:2015; sordoni:2015; serban2016building; shang2015neural) and task-oriented dialogue (wen:2016; bordes2016learning; williams2016end; wen2016network). Most previous work on task-oriented dialogue embeds a seq2seq model in a traditional dialogue system, in which the table query component is not differentiable, while our model queries the database directly. Recipe generation was proposed in (kiddon:2016); they use an attention mechanism over the checklist, whereas our work models explicit references to it. Context-dependent language models (mikolov2010recurrent; jozefowicz2016exploring; ji2015document; wang2015larger) have been proposed to capture long-term dependencies in text. There is also a large body of work on coreference resolution (haghighi2010coreference; wiseman2016learning). To the best of our knowledge, we are the first to combine coreference with language modeling.

5 Conclusion

We introduce reference-aware language models which explicitly model the decision of where to generate each token from. Our model can also learn this decision by treating it as a latent variable. We demonstrate on three applications, table-based dialogue modeling, recipe generation, and coreference-based language modeling, that our model performs better than attention-based models, which do not incorporate this decision explicitly. There are several directions to explore further within our framework. The current evaluation is based on perplexity and BLEU; in task-oriented dialogue, we could also use human evaluation to see whether the model answers users' queries accurately. It would also be interesting to use reinforcement learning to learn the decisions at each step of the coreference-based LM.

References

Appendix A Visualization

M: Hello , welcome to the Cambridge restaurant system ? You can ask for restaurants by area , price range or food type . How may I help you ?
U: and expensive restaurant that serves tuscan food
M: Sorry there is no tuscan restaurant in the expensive price range
U: french food
M: cote is a great restaurant serving french food and it is in the expensive price range
U: may i have the phone number
M: The phone number of cote is 01223 311053 .
U: thank you good bye
(a) Dialogue script
name price range food area address post code phone
charlie chan cheap chinese east Regent Street City Centre C.B 2, 1 D.B 01223 361763
chiquito restaurant bar expensive mexican south 2G Cambridge Leisure Park Cherry Hinton Road Cherry Hinton C.B 1, 7 D.Y 01223 400170
city stop expensive food north Cambridge City Football Club Milton Road Chesterton EMPTY 01223 363270
clowns cafe expensive italian centre EMPTY C.B 1, 1 L.N 01223 355711
cocum expensive indian west 71 Castle Street City Centre C.B 3, 0 A.H 01223 366668
cote expensive french centre Bridge Street City Centre C.B 2, 1 U.F 01223 311053
curry garden expensive indian centre 106 Regent Street City Centre EMPTY 01223 302330
curry king expensive indian centre 5 Jordans Yard Bridge Street City Centre C.B 1, 2 B.D 01223 324351
curry prince moderate indian east 451 Newmarket Road Fen Ditton C.B 5, 8 J.J 01223 566388
(b) Attention heat map: cote is a great restaurant serving french food and it is in the expensive price range.
name price range food area address post code phone
charlie chan cheap chinese east Regent Street City Centre C.B 2, 1 D.B 01223 361763
chiquito restaurant bar expensive mexican south 2G Cambridge Leisure Park Cherry Hinton Road Cherry Hinton C.B 1, 7 D.Y 01223 400170
city stop expensive food north Cambridge City Football Club Milton Road Chesterton EMPTY 01223 363270
clowns cafe expensive italian centre EMPTY C.B 1, 1 L.N 01223 355711
cocum expensive indian west 71 Castle Street City Centre C.B 3, 0 A.H 01223 366668
cote expensive french centre Bridge Street City Centre C.B 2, 1 U.F 01223 311053
curry garden expensive indian centre 106 Regent Street City Centre EMPTY 01223 302330
curry king expensive indian centre 5 Jordans Yard Bridge Street City Centre C.B 1, 2 B.D 01223 324351
curry prince moderate indian east 451 Newmarket Road Fen Ditton C.B 5, 8 J.J 01223 566388
(c) Attention heat map: The phone number of cote is 01223 311053 .
Table 7: Dialogue visualization 1
M: Hello , welcome to the Cambridge restaurant system ? You can ask for restaurants by area , price range or food type . How may I help you ?
U: cheap restaurant
M: What kind of food would you like ?
U: in the west part of town
M: la margherita is a nice restaurant in the west of town in the cheap price range
U: address
M: Sure , la margherita is on 15 Magdalene Street City Centre
U: thank you
M: la margherita is a nice restaurant in the west of town in the cheap price range
U: good bye
(a) Dialogue script
name price range food area address post code phone
india house expensive indian west 31 Newnham Road Newnham EMPTY 01223 461661
j restaurant cheap oriental centre 86 Regent Street City Centre C.B 2, 1 D.P 01223 307581
jinling noodle bar moderate chinese centre 11 Peas Hill City Centre C.B 2, 3 P.P 01223 566188
kohinoor cheap indian centre 74 Mill Road City Centre EMPTY 01223 323639
kymmoy expensive oriental centre 52 Mill Road City Centre C.B 1, 2 A.S 01223 311911
la margherita cheap italian west 15 Magdalene Street City Centre C.B 3, 0 A.F 01223 315232
la mimosa expensive mediterranean centre Thompsons Lane Fen Ditton C.B 5, 8 A.Q 01223 362525
la raza cheap spanish centre 4 - 6 Rose Crescent C.B 2, 3 L.L 01223 464550
la tasca moderate spanish centre 14 -16 Bridge Street C.B 2, 1 U.F 01223 464630
lan hong house moderate chinese centre 12 Norfolk Street City Centre EMPTY 01223 350420
(b) Attention heat map: la margherita is a nice restaurant in the west of town in the cheap price range
name price range food area address post code phone
india house expensive indian west 31 Newnham Road Newnham EMPTY 01223 461661
j restaurant cheap oriental centre 86 Regent Street City Centre C.B 2, 1 D.P 01223 307581
jinling noodle bar moderate chinese centre 11 Peas Hill City Centre C.B 2, 3 P.P 01223 566188
kohinoor cheap indian centre 74 Mill Road City Centre EMPTY 01223 323639
kymmoy expensive oriental centre 52 Mill Road City Centre C.B 1, 2 A.S 01223 311911
la margherita cheap italian west 15 Magdalene Street City Centre C.B 3, 0 A.F 01223 315232
la mimosa expensive mediterranean centre Thompsons Lane Fen Ditton C.B 5, 8 A.Q 01223 362525
la raza cheap spanish centre 4 - 6 Rose Crescent C.B 2, 3 L.L 01223 464550
la tasca moderate spanish centre 14 -16 Bridge Street C.B 2, 1 U.F 01223 464630
lan hong house moderate chinese centre 12 Norfolk Street City Centre EMPTY 01223 350420
(c) Attention heat map: Sure , la margherita is on 15 Magdalene Street City Centre.
Table 8: Dialogue visualization 2

Figure 6: Recipe heat map example 1 (two parts). The ingredient tokens appear on the left while the recipe tokens appear on the top.

Figure 7: Recipe heat map example 2 (three parts).