Neural Belief Tracker: Data-Driven Dialogue State Tracking

by Nikola Mrkšić et al.
University of Cambridge
Apple Inc.

One of the core components of modern spoken dialogue systems is the belief tracker, which estimates the user's goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: a) Spoken Language Understanding models that require large amounts of annotated training data; or b) hand-crafted lexicons for capturing some of the linguistic variation in users' language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided.




1 Introduction

Spoken dialogue systems (SDS) allow users to interact with computer applications through conversation. Task-based systems help users achieve goals such as finding restaurants or booking flights. The dialogue state tracking (DST) component of an SDS serves to interpret user input and update the belief state, which is the system's internal representation of the state of the conversation Young et al. (2010). This belief state is a probability distribution over dialogue states used by the downstream dialogue manager to decide which action the system should perform next Su et al. (2016a, b); the system action is then verbalised by the natural language generator Wen et al. (2015a, b); Dušek and Jurčíček (2015).

User: I’m looking for a cheaper restaurant
System: Sure. What kind - and where?
User: Thai food, somewhere downtown
inform(price=cheap, food=Thai, area=centre)
System: The House serves cheap Thai food
User: Where is it?
inform(price=cheap, food=Thai, area=centre); request(address)
System: The House is at 106 Regent Street
Figure 1: Annotated dialogue states in a sample dialogue. Underlined words show rephrasings which are typically handled using semantic dictionaries.

The Dialogue State Tracking Challenge (DSTC) series of shared tasks has provided a common evaluation framework accompanied by labelled datasets Williams et al. (2016). In this framework, the dialogue system is supported by a domain ontology which describes the range of user intents the system can process. The ontology defines a collection of slots and the values that each slot can take. The system must track the search constraints expressed by users (goals or informable slots) and questions the users ask about search results (requests), taking into account each user utterance (input via a speech recogniser) and the dialogue context (e.g., what the system just said). The example in Figure 1 shows the true state after each user utterance in a three-turn conversation. As can be seen in this example, DST models depend on identifying mentions of ontology items in user utterances. This becomes a non-trivial task when confronted with lexical variation, the dynamics of context and noisy automated speech recognition (ASR) output.

Traditional statistical approaches use separate Spoken Language Understanding (SLU) modules to address lexical variability within a single dialogue turn. However, training such models requires substantial amounts of domain-specific annotation. Alternatively, turn-level SLU and cross-turn DST can be coalesced into a single model to achieve superior belief tracking performance, as shown by Henderson et al. (2014b). Such coupled models typically rely on manually constructed semantic dictionaries to identify alternative mentions of ontology items that vary lexically or morphologically. Figure 2 gives an example of such a dictionary for three slot-value pairs. This approach, which we term delexicalisation, is clearly not scalable to larger, more complex dialogue domains. Importantly, the focus on English in DST research understates the considerable challenges that morphology poses to systems based on exact matching in morphologically richer languages such as Italian or German (see Vulić et al. (2017)).

Food=Cheap: [affordable, budget, low-cost,
low-priced, inexpensive, cheaper, economic, …]
Rating=High: [best, high-rated, highly rated,
top-rated, cool, chic, popular, trendy, …]
Area=Centre: [center, downtown, central,
city centre, midtown, town centre, …]
Figure 2: An example semantic dictionary with rephrasings for three ontology values in a restaurant search domain.
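To make the delexicalisation idea concrete, here is a minimal Python sketch. The dictionary entries mirror Figure 2, and the naive exact-phrase matching is purely illustrative; real systems must also handle tokenisation, overlaps and morphological variants:

```python
# Minimal delexicalisation sketch: replace any listed rephrasing of an
# ontology value with a generic tag, so one n-gram template covers them all.
# The dictionary entries follow Figure 2; matching is naive whole-phrase
# substitution, for illustration only.
SEMANTIC_DICTIONARY = {
    ("price", "cheap"): ["cheap", "affordable", "inexpensive", "cheaper", "low-priced"],
    ("area", "centre"): ["centre", "center", "downtown", "central"],
}

def delexicalise(utterance: str) -> str:
    padded = " " + utterance.lower() + " "
    for (slot, value), phrases in SEMANTIC_DICTIONARY.items():
        # try longer rephrasings first so they are not shadowed by substrings
        for phrase in sorted(phrases, key=len, reverse=True):
            padded = padded.replace(" " + phrase + " ", " <value:%s> " % slot)
    return padded.strip()

print(delexicalise("I want an affordable restaurant downtown"))
# -> "i want an <value:price> restaurant <value:area>"
```

A model trained on the delexicalised text then only ever sees the generic `<value:...>` tags, which is exactly why its coverage is bounded by the dictionary's coverage.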

In this paper, we present two new models, collectively called the Neural Belief Tracker (NBT) family. The proposed models couple SLU and DST, efficiently learning to handle variation without requiring any hand-crafted resources. To do that, NBT models move away from exact matching and instead reason entirely over pre-trained word vectors. The vectors making up the user utterance and preceding system output are first composed into intermediate representations. These representations are then used to decide which of the ontology-defined intents have been expressed by the user up to that point in the conversation.

To the best of our knowledge, NBT models are the first to successfully use pre-trained word vector spaces to improve the language understanding capability of belief tracking models. In evaluation on two datasets, we show that: a) NBT models match the performance of delexicalisation-based models which make use of hand-crafted semantic lexicons; and b) the NBT models significantly outperform those models when such resources are not available. Consequently, we believe this work proposes a framework better-suited to scaling belief tracking models for deployment in real-world dialogue systems operating over sophisticated application domains where the creation of such domain-specific lexicons would be infeasible.

Figure 3: Architecture of the NBT Model. The implementation of the three representation learning subcomponents can be modified, as long as these produce adequate vector representations which the downstream model components can use to decide whether the current candidate slot-value pair was expressed in the user utterance (taking into account the preceding system act).

2 Background

Models for probabilistic dialogue state tracking, or belief tracking, were introduced as components of spoken dialogue systems in order to better handle noisy speech recognition and other sources of uncertainty in understanding a user's goals Bohus and Rudnicky (2006); Williams and Young (2007); Young et al. (2010). Modern dialogue management policies can learn to use a tracker's distribution over intents to decide whether to execute an action or request clarification from the user. As mentioned above, the DSTC shared tasks have spurred research on this problem and established a standard evaluation paradigm Williams et al. (2013); Henderson et al. (2014b, a). In this setting, the task is defined by an ontology that enumerates the goals a user can specify and the attributes of entities that the user can request information about. Many different belief tracking models have been proposed in the literature, from generative Thomson and Young (2010) and discriminative Henderson et al. (2014d) statistical models to rule-based systems Wang and Lemon (2013). To motivate the work presented here, we categorise prior research according to their reliance (or otherwise) on a separate SLU module for interpreting user utterances.[1]

[1] The best-performing models in DSTC2 all used both raw ASR output and the output of (potentially more than one) SLU decoder Williams (2014); Williams et al. (2016). This does not mean that those models are immune to the drawbacks identified here for the two model categories; in fact, they share the drawbacks of both.

Separate SLU

Traditional SDS pipelines use Spoken Language Understanding (SLU) decoders to detect slot-value pairs expressed in the Automatic Speech Recognition (ASR) output. The downstream DST model then combines this information with the past dialogue context to update the belief state Thomson and Young (2010); Wang and Lemon (2013); Lee and Kim (2016); Perez (2016); Perez and Liu (2017); Sun et al. (2016); Jang et al. (2016); Shi et al. (2016); Dernoncourt et al. (2016); Liu and Perez (2017); Vodolán et al. (2017). In the DSTC challenges, some systems used the output of template-based matching systems such as Phoenix Wang (1994). However, more robust and accurate statistical SLU systems are available. Many discriminative approaches to spoken dialogue SLU train independent binary models that decide whether each slot-value pair was expressed in the user utterance. Given enough data, these models can learn which lexical features are good indicators for a given value and can capture elements of paraphrasing Mairesse et al. (2009). This line of work later shifted focus to robust handling of rich ASR output Henderson et al. (2012); Tur et al. (2013). SLU has also been treated as a sequence labelling problem, where each word in an utterance is labelled according to its role in the user's intent; standard labelling models such as CRFs or Recurrent Neural Networks can then be used (Raymond and Riccardi, 2007; Yao et al., 2014; Celikyilmaz and Hakkani-Tur, 2015; Mesnil et al., 2015; Peng et al., 2015; Zhang and Wang, 2016; Liu and Lane, 2016b; Vu et al., 2016; Liu and Lane, 2016a, i.a.). Other approaches adopt a more complex modelling structure inspired by semantic parsing Saleh et al. (2014); Vlachos and Clark (2014). One drawback shared by these methods is their resource requirements, either because they need to learn independent parameters for each slot and value or because they need fine-grained manual annotation at the word level. This hinders scaling to larger, more realistic application domains.


Joint SLU/DST

Research on belief tracking has found it advantageous to reason about SLU and DST jointly, taking ASR predictions as input and generating belief states as output Henderson et al. (2014d); Sun et al. (2014); Zilka and Jurcicek (2015); Mrkšić et al. (2015). In DSTC2, systems which used no external SLU module outperformed all systems that only used external SLU features. Joint models typically rely on a strategy known as delexicalisation, whereby slots and values mentioned in the text are replaced with generic labels. Once the dataset is transformed in this manner, one can extract a collection of template-like n-gram features such as [want tagged-value food]. To perform belief tracking, the shared model iterates over all slot-value pairs, extracting delexicalised feature vectors and making a separate binary decision regarding each pair. Delexicalisation introduces a hidden dependency that is rarely discussed: how do we identify slot/value mentions in text? For toy domains, one can manually construct semantic dictionaries which list the potential rephrasings for all slot values. As shown by Mrkšić et al. (2016), the use of such dictionaries is essential for the performance of current delexicalisation-based models. Again though, this will not scale to the rich variety of user language or to general domains.
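The template-like n-gram feature extraction described above can be sketched as follows; the tokenised, already-delexicalised utterance and the `tagged-value` placeholder are illustrative:

```python
# Extract all n-grams (up to length n) from a delexicalised utterance.
# Each tuple acts as a one-hot template feature such as
# ("want", "tagged-value", "food").
def ngram_features(tokens, n=3):
    feats = []
    for size in range(1, n + 1):
        for i in range(len(tokens) - size + 1):
            feats.append(tuple(tokens[i:i + size]))
    return feats

tokens = ["want", "tagged-value", "food"]
print(ngram_features(tokens))
# includes ('want', 'tagged-value', 'food') among the 6 features
```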

The primary motivation for the work presented in this paper is to overcome the limitations that affect previous belief tracking models. The NBT model efficiently learns from the available data by: 1) leveraging semantic information from pre-trained word vectors to resolve lexical/morphological ambiguity; 2) maximising the number of parameters shared across ontology values; and 3) having the flexibility to learn domain-specific paraphrasings and other kinds of variation that make it infeasible to rely on exact matching and delexicalisation as a robust strategy.

3 Neural Belief Tracker

The Neural Belief Tracker (NBT) is a model designed to detect the slot-value pairs that make up the user’s goal at a given turn during the flow of dialogue. Its input consists of the system dialogue acts preceding the user input, the user utterance itself, and a single candidate slot-value pair that it needs to make a decision about. For instance, the model might have to decide whether the goal food=Italian has been expressed in ‘I’m looking for good pizza’. To perform belief tracking, the NBT model iterates over all candidate slot-value pairs (defined by the ontology), and decides which ones have just been expressed by the user.

Figure 3 presents the flow of information in the model. The first layer in the NBT hierarchy performs representation learning given the three model inputs, producing vector representations for the user utterance ($\mathbf{r}$), the current candidate slot-value pair ($\mathbf{c}$) and the system dialogue acts ($\mathbf{t}_q, \mathbf{t}_s, \mathbf{t}_v$). Subsequently, the learned vector representations interact through the semantic decoding and context modelling submodules to obtain the intermediate interaction summary vectors $\mathbf{d}$, $\mathbf{m}_r$ and $\mathbf{m}_c$. These are used as input to the final decision-making module which decides whether the user expressed the intent represented by the candidate slot-value pair.

Figure 4: NBT-DNN Model. Word vectors of $n$-grams ($n = 1, 2, 3$) are summed to obtain cumulative $n$-grams, then passed through another hidden layer and summed to obtain the utterance representation $\mathbf{r}$.

Figure 5: NBT-CNN Model. $L$ convolutional filters of window sizes $1, 2, 3$ are applied to the word vectors of the given utterance ($L = 3$ in the diagram, but larger in the system). The convolutions are followed by the ReLU activation function and max-pooling to produce summary $n$-gram representations. These are summed to obtain the utterance representation $\mathbf{r}$.

3.1 Representation Learning

For any given user utterance, system act(s) and candidate slot-value pair, the representation learning submodules produce vector representations which act as input for the downstream components of the model. All representation learning subcomponents make use of pre-trained collections of word vectors. As shown by Mrkšić et al. (2016), specialising word vectors to express semantic similarity rather than relatedness is essential for improving belief tracking performance. For this reason, we use the semantically-specialised Paragram-SL999 word vectors Wieting et al. (2015) throughout this work. The NBT training procedure keeps these vectors fixed: that way, at test time, unseen words semantically related to familiar slot values (e.g. inexpensive to cheap) will be recognised purely by their position in the original vector space (see also Rocktäschel et al. (2016)). This means that the NBT model parameters can be shared across all values of the given slot, or even across all slots.
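A toy illustration of why fixed, semantically specialised vectors let the model recognise unseen words: the three-dimensional vectors below are invented for the example (they are not Paragram-SL999 vectors), but they show how an unseen word like inexpensive sits closer to the known value cheap than to an unrelated value:

```python
import numpy as np

# Made-up 3-d word vectors, purely illustrative of the geometry the NBT
# exploits: a semantically specialised space keeps rephrasings close.
vecs = {
    "cheap":       np.array([0.90, 0.10, 0.00]),
    "inexpensive": np.array([0.85, 0.15, 0.05]),
    "turkish":     np.array([0.00, 0.20, 0.95]),
}

def cos(a, b):
    # cosine similarity between two vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(vecs["inexpensive"], vecs["cheap"]) >
      cos(vecs["inexpensive"], vecs["turkish"]))   # True
```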

Let $u$ represent a user utterance consisting of $k_u$ words $u_1, u_2, \ldots, u_{k_u}$. Each word has an associated word vector $\mathbf{u}_1, \ldots, \mathbf{u}_{k_u}$. We propose two model variants which differ in the method used to produce vector representations of $u$: NBT-DNN and NBT-CNN. Both act over the constituent $n$-grams of the utterance. Let $\mathbf{v}_i^n$ be the concatenation of the $n$ word vectors starting at index $i$, so that:

$$\mathbf{v}_i^n = \mathbf{u}_i \oplus \ldots \oplus \mathbf{u}_{i+n-1}$$

where $\oplus$ denotes vector concatenation. The simpler of our two models, which we term NBT-DNN, is shown in Figure 4. This model computes cumulative $n$-gram representation vectors $\mathbf{r}_1$, $\mathbf{r}_2$ and $\mathbf{r}_3$, which are the $n$-gram 'summaries' of the unigrams, bigrams and trigrams in the user utterance:

$$\mathbf{r}_n = \sum_{i=1}^{k_u - n + 1} \mathbf{v}_i^n$$

Each of these vectors is then non-linearly mapped to intermediate representations of the same size:

$$\mathbf{r}_n' = \sigma\left(W_n^s \mathbf{r}_n + b_n^s\right)$$

where the weight matrices $W_n^s$ and bias terms $b_n^s$ map the cumulative $n$-grams to vectors of the same dimensionality and $\sigma$ denotes the sigmoid activation function. We maintain a separate set of parameters for each slot (indicated by superscript $s$). The three vectors are then summed to obtain a single representation for the user utterance:

$$\mathbf{r} = \mathbf{r}_1' + \mathbf{r}_2' + \mathbf{r}_3' \tag{4}$$
The cumulative -gram representations used by this model are just unweighted sums of all word vectors in the utterance. Ideally, the model should learn to recognise which parts of the utterance are more relevant for the subsequent classification task. For instance, it could learn to ignore verbs or stop words and pay more attention to adjectives and nouns which are more likely to express slot values.
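A minimal numpy sketch of the NBT-DNN utterance encoder, with illustrative dimensions and randomly initialised (untrained) weights:

```python
import numpy as np

# NBT-DNN utterance encoder sketch. Dimensions are illustrative and the
# weights are random stand-ins for trained, per-slot parameters.
rng = np.random.default_rng(0)
d = 4            # word vector dimensionality
k = 5            # utterance length
U = rng.normal(size=(k, d))   # one row per word vector u_i

def cumulative_ngram(U, n):
    # sum of concatenations of n consecutive word vectors -> size n*d
    num = len(U) - n + 1
    return sum(np.concatenate(U[i:i + n]) for i in range(num))

# a separate weight matrix per n maps each cumulative n-gram to size d
W = {n: rng.normal(scale=0.1, size=(d, n * d)) for n in (1, 2, 3)}
b = {n: np.zeros(d) for n in (1, 2, 3)}
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# r = r'_1 + r'_2 + r'_3, as in Equation 4
r = sum(sigmoid(W[n] @ cumulative_ngram(U, n) + b[n]) for n in (1, 2, 3))
print(r.shape)   # (4,)
```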


Our second model draws inspiration from successful applications of Convolutional Neural Networks (CNNs) for language understanding Collobert et al. (2011); Kalchbrenner et al. (2014); Kim (2014). These models typically apply a number of convolutional filters to $n$-grams in the input sentence, followed by non-linear activation functions and max-pooling. Following this approach, the NBT-CNN model applies $L$ different filters for $n$-gram lengths of $1$, $2$ and $3$ (Figure 5). Let $F_n^s \in \mathbb{R}^{L \times nd}$ denote the collection of filters for each value of $n$, where $d$ is the word vector dimensionality. If $\mathbf{v}_i^n$ denotes the concatenation of $n$ word vectors starting at index $i$, let $V_n = [\mathbf{v}_1^n; \mathbf{v}_2^n; \ldots; \mathbf{v}_{k_u - n + 1}^n]$ be the list of $n$-grams that convolutional filters of length $n$ run over. The three intermediate representations are then given by:

$$R_n = F_n^s V_n$$

Each column of the intermediate matrices $R_n$ is produced by a single convolutional filter of length $n$. We obtain summary $n$-gram representations by pushing these representations through a rectified linear unit (ReLU) activation function Nair and Hinton (2010) and max-pooling over time (i.e. columns of the matrix) to get a single feature for each of the $L$ filters applied to the utterance:

$$\mathbf{r}_n' = \text{maxpool}\left(\text{ReLU}\left(R_n + b_n^s\right)\right)$$

where $b_n^s$ is a bias term broadcast across all filters. Finally, the three summary $n$-gram representations are summed to obtain the final utterance representation vector $\mathbf{r}$ (as in Equation 4). The NBT-CNN model is (by design) better suited to longer utterances, as its convolutional filters interact directly with subsequences of the utterance, and not just their noisy summaries given by the NBT-DNN's cumulative $n$-grams.
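The NBT-CNN encoder can be sketched similarly; the dimensions and filters below are illustrative, and the convolutions are implemented naively as matrix products over stacked n-gram windows:

```python
import numpy as np

# NBT-CNN utterance encoder sketch: L filters per n-gram size, ReLU,
# then max-pooling over time. Weights are random, untrained stand-ins.
rng = np.random.default_rng(1)
d, k, L = 4, 6, 3        # vector dim, utterance length, filters per size
U = rng.normal(size=(k, d))

def conv_summary(U, n, F, bias):
    # stack all n-gram windows, apply the L filters, ReLU, max-pool over time
    windows = np.stack([np.concatenate(U[i:i + n])
                        for i in range(len(U) - n + 1)])   # (k-n+1, n*d)
    R = windows @ F.T + bias                               # (k-n+1, L)
    return np.max(np.maximum(R, 0.0), axis=0)              # (L,)

r = sum(conv_summary(U, n, rng.normal(scale=0.1, size=(L, n * d)), np.zeros(L))
        for n in (1, 2, 3))
print(r.shape)   # (3,)
```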

3.2 Semantic Decoding

The NBT diagram in Figure 3 shows that the utterance representation and the candidate slot-value pair representation directly interact through the semantic decoding module. This component decides whether the user explicitly expressed an intent matching the current candidate pair (i.e. without taking the dialogue context into account). Examples of such matches would be ‘I want Thai food’ with food=Thai or more demanding ones such as ‘a pricey restaurant’ with price=expensive. This is where the use of high-quality pre-trained word vectors comes into play: a delexicalisation-based model could deal with the former example but would be helpless in the latter case, unless a human expert had provided a semantic dictionary listing all potential rephrasings for each value in the domain ontology.

Let the vector space representations of a candidate pair's slot name and value be given by $\mathbf{c}_s$ and $\mathbf{c}_v$ (with vectors of multi-word slot names/values summed together). The NBT model learns to map this tuple into a single vector $\mathbf{c}$ of the same dimensionality as the utterance representation $\mathbf{r}$:

$$\mathbf{c} = \sigma\left(W_c^s (\mathbf{c}_s + \mathbf{c}_v) + b_c^s\right)$$

These two representations are then forced to interact in order to learn a similarity metric which discriminates between interactions of utterances with slot-value pairs that they either do or do not express:

$$\mathbf{d} = \mathbf{r} \otimes \mathbf{c}$$

where $\otimes$ denotes element-wise vector multiplication. The dot product, which may seem like the more intuitive similarity metric, would reduce the rich set of features in $\mathbf{d}$ to a single scalar. The element-wise multiplication allows the downstream network to make better use of its parameters by learning non-linear interactions between sets of features in $\mathbf{r}$ and $\mathbf{c}$.[2]

[2] We also tried to concatenate $\mathbf{r}$ and $\mathbf{c}$ and pass that vector to the downstream decision-making neural network. However, this set-up led to very weak performance since our relatively small datasets did not suffice for the network to learn to model the interaction between the two feature vectors.
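The semantic decoding step can be sketched with the same toy dimensions (random, untrained weights):

```python
import numpy as np

# Semantic decoding sketch: map the candidate slot/value vectors into the
# utterance space, then interact element-wise. Weights are illustrative.
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
rng = np.random.default_rng(2)
d = 4
r = rng.normal(size=d)                                   # utterance representation
c_slot, c_val = rng.normal(size=d), rng.normal(size=d)   # slot / value word vectors
W, bias = rng.normal(scale=0.1, size=(d, d)), np.zeros(d)

c = sigmoid(W @ (c_slot + c_val) + bias)   # candidate pair representation
dvec = r * c                               # element-wise interaction, not a dot product
print(dvec.shape)                          # (4,) - a feature vector, not a scalar
```

Note that `dvec` keeps one feature per dimension, which is exactly what a dot product would collapse into a single scalar.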

3.3 Context Modelling

This ‘decoder’ does not yet suffice to extract intents from utterances in human-machine dialogue. To understand some queries, the belief tracker must be aware of context, i.e. the flow of dialogue leading up to the latest user utterance. While all previous system and user utterances are important, the most relevant one is the last system utterance, in which the dialogue system could have performed (among others) one of the following two system acts:

  1. System Request: The system asks the user about the value of a specific slot $s_q$. If the system utterance is: 'what price range would you like?' and the user answers with any, the model must infer the reference to price range, and not to other slots such as area or food.

  2. System Confirm: The system asks the user to confirm whether a specific slot-value pair is part of their desired constraints. For example, if the user responds to ‘how about Turkish food?’ with ‘yes’, the model must be aware of the system act in order to correctly update the belief state.

If we make the Markovian decision to only consider the last set of system acts, we can incorporate context modelling into the NBT. Let $\mathbf{t}_q$ and $(\mathbf{t}_s, \mathbf{t}_v)$ be the word vectors of the arguments for the system request and confirm acts (zero vectors if none). The model computes the following measures of similarity between the system acts, the candidate pair $(\mathbf{c}_s, \mathbf{c}_v)$ and the utterance representation $\mathbf{r}$:

$$\mathbf{m}_r = (\mathbf{c}_s \cdot \mathbf{t}_q)\, \mathbf{r} \qquad \mathbf{m}_c = (\mathbf{c}_s \cdot \mathbf{t}_s)(\mathbf{c}_v \cdot \mathbf{t}_v)\, \mathbf{r}$$

where $\cdot$ denotes dot product. The computed similarity terms act as gating mechanisms which only pass the utterance representation through if the system asked about the current candidate slot or slot-value pair. This type of interaction is particularly useful for the confirm system act: if the system asks the user to confirm, the user is likely not to mention any slot values, but to just respond affirmatively or negatively. This means that the model must consider the three-way interaction between the utterance, the candidate slot-value pair and the slot-value pair offered by the system. If (and only if) the latter two are the same should the model consider the affirmative or negative polarity of the user utterance when making the subsequent binary decision.
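The gating behaviour of these similarity terms can be sketched directly; note how a zero request vector closes the gate entirely:

```python
import numpy as np

# Context gating sketch: the dot products scale the utterance representation,
# passing it through only when the system act matches the candidate.
# All vectors here are random, illustrative stand-ins.
rng = np.random.default_rng(3)
d = 4
r = rng.normal(size=d)                              # utterance representation
c_s, c_v = rng.normal(size=d), rng.normal(size=d)   # candidate slot / value vectors
t_q = rng.normal(size=d)                            # system request argument
t_s, t_v = rng.normal(size=d), rng.normal(size=d)   # system confirm slot / value

m_r = (c_s @ t_q) * r                   # request gate scales r
m_c = (c_s @ t_s) * (c_v @ t_v) * r     # confirm gate needs both matches

# with no system request (zero vector argument), the gate closes:
assert np.allclose((c_s @ np.zeros(d)) * r, 0.0)
print(m_r.shape, m_c.shape)
```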

Binary Decision Maker

The intermediate representations $\mathbf{d}$, $\mathbf{m}_r$ and $\mathbf{m}_c$ are passed through another hidden layer and then combined. If $\phi_{dim}(\mathbf{x}) = \sigma(W\mathbf{x} + b)$ is a layer which maps input vector $\mathbf{x}$ to a vector of size $dim$, the input to the final binary softmax (which represents the decision) is given by:

$$\mathbf{y} = \phi_2\left(\phi_{100}(\mathbf{d}) + \phi_{100}(\mathbf{m}_r) + \phi_{100}(\mathbf{m}_c)\right)$$
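Putting the pieces together, a sketch of the decision layer with illustrative hidden sizes (100 and 2) and random, untrained weights:

```python
import numpy as np

# Binary decision maker sketch: three hidden projections are summed,
# projected to two units, and normalised by a softmax. All sizes and
# weights are illustrative stand-ins for trained parameters.
rng = np.random.default_rng(4)
d = 4
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def phi(x, out_dim, rng):
    # a hidden layer mapping x to a vector of size out_dim
    W = rng.normal(scale=0.1, size=(out_dim, x.size))
    return sigmoid(W @ x)

d_vec, m_r, m_c = (rng.normal(size=d) for _ in range(3))
hidden = phi(d_vec, 100, rng) + phi(m_r, 100, rng) + phi(m_c, 100, rng)
out = phi(hidden, 2, rng)                 # final two-unit layer
probs = np.exp(out) / np.exp(out).sum()   # binary softmax decision
print(probs)
```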

4 Belief State Update Mechanism

In spoken dialogue systems, belief tracking models operate over the output of automatic speech recognition (ASR). Despite improvements to speech recognition, the need to make the most out of imperfect ASR will persist as dialogue systems are used in increasingly noisy environments.

In this work, we define a simple rule-based belief state update mechanism which can be applied to ASR $N$-best lists. For dialogue turn $t$, let $sys^{t-1}$ denote the preceding system output, and let $h^t$ denote the list of $N$ ASR hypotheses $h_i^t$ with posterior probabilities $p_i^t$. For any hypothesis $h_i^t$, slot $s$ and slot value $v \in V_s$, NBT models estimate $\mathbb{P}(s, v \mid h_i^t, sys^{t-1})$, which is the (turn-level) probability that $(s, v)$ was expressed in the given hypothesis. The predictions for the $N$ hypotheses are then combined as:

$$\mathbb{P}(s, v \mid h^t, sys^{t-1}) = \sum_{i=1}^{N} p_i^t \; \mathbb{P}(s, v \mid h_i^t, sys^{t-1})$$

This turn-level belief state estimate is then combined with the (cumulative) belief state up to time $t-1$ to get the updated belief state estimate:

$$\mathbb{P}(s, v \mid h^{1:t}, sys^{1:t-1}) = \lambda\, \mathbb{P}(s, v \mid h^t, sys^{t-1}) + (1 - \lambda)\, \mathbb{P}(s, v \mid h^{1:t-1}, sys^{1:t-2})$$

where $\lambda$ is the coefficient which determines the relative weight of the turn-level and previous turns' belief state estimates.[3] For slot $s$, the set of its detected values at turn $t$ is then given by:

$$V_s^t = \left\{ v \in V_s \;\middle|\; \mathbb{P}(s, v \mid h^{1:t}, sys^{1:t-1}) \geq 0.5 \right\}$$

For informable (i.e. goal-tracking) slots, the value in $V_s^t$ with the highest probability is chosen as the current goal (if $V_s^t \neq \emptyset$). For requests, all slots in $V_s^t$ are deemed to have been requested. As requestable slots serve to model single-turn user queries, they require no belief tracking across turns.

[3] This coefficient was tuned on the DSTC2 development set.
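The update rule can be sketched in a few lines of Python; the hypotheses, posteriors and the λ value below are illustrative placeholders, not the tuned coefficient:

```python
# Rule-based belief update sketch over an ASR N-best list.
LAMBDA = 0.5   # relative weight of the turn-level estimate (a placeholder;
               # the paper tunes this coefficient on the DSTC2 dev set)

def turn_level(hypotheses, nbt_estimate):
    # hypotheses: list of (text, posterior); nbt_estimate: text -> P(s,v | text)
    return sum(p * nbt_estimate(h) for h, p in hypotheses)

def update(prev_belief, turn_belief, lam=LAMBDA):
    # interpolate the new turn-level estimate with the cumulative belief
    return lam * turn_belief + (1.0 - lam) * prev_belief

hyps = [("cheap thai food", 0.7), ("keep tie food", 0.3)]
estimate = lambda h: 0.9 if "cheap" in h else 0.1      # stand-in NBT output
turn = turn_level(hyps, estimate)                      # 0.7*0.9 + 0.3*0.1 = 0.66
belief = update(prev_belief=0.2, turn_belief=turn)     # 0.5*0.66 + 0.5*0.2 = 0.43
print(round(turn, 2), round(belief, 2))
```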

5 Experiments

5.1 Datasets

Two datasets were used for training and evaluation. Both consist of user conversations with task-oriented dialogue systems designed to help users find suitable restaurants around Cambridge, UK. The two corpora share the same domain ontology, which contains three informable (i.e. goal-tracking) slots: food, area and price. The users can specify values for these slots in order to find restaurants which best meet their criteria. Once the system suggests a restaurant, the users can ask about the values of up to eight requestable slots (phone number, address, etc.). The two datasets are:

  1. DSTC2: We use the transcriptions, ASR hypotheses and turn-level semantic labels provided for the Dialogue State Tracking Challenge 2 Henderson et al. (2014a). The official transcriptions contain various spelling errors which we corrected manually; the cleaned version of the dataset is available online. The training data contains 2207 dialogues and the test set consists of 1117 dialogues. We train NBT models on transcriptions but report belief tracking performance on test set ASR hypotheses provided in the original challenge.

  2. WOZ 2.0: Wen et al. (2017) performed a Wizard of Oz style experiment in which Amazon Mechanical Turk users assumed the role of the system or the user of a task-oriented dialogue system based on the DSTC2 ontology. Users typed instead of using speech, which means performance in the WOZ experiments is more indicative of the model's capacity for semantic understanding than of its robustness to ASR errors. Whereas in the DSTC2 dialogues users would quickly adapt to the system's (lack of) language understanding capability, the WOZ experimental design gave them the freedom to use more sophisticated language. We expanded the original WOZ dataset from Wen et al. (2017) using the same data collection procedure, yielding a total of 1200 dialogues. We divided these into 600 training, 200 validation and 400 test set dialogues. The WOZ 2.0 dataset is also available online.

Training Examples

The two corpora are used to create training data for two separate experiments. For each dataset, we iterate over all train set utterances, generating one example for each of the slot-value pairs in the ontology. An example consists of a transcription, its context (i.e. list of preceding system acts) and a candidate slot-value pair. The binary label for each example indicates whether or not its utterance and context express the example’s candidate pair. For instance, ‘I would like Irish food’ would generate a positive example for candidate pair food=Irish, and a negative example for every other slot-value pair in the ontology.
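Training-example generation can be sketched with a toy ontology (the slots and values below are illustrative):

```python
# One training example per ontology slot-value pair for each utterance;
# the label marks whether the utterance+context express that pair.
ONTOLOGY = {"food": ["irish", "thai"], "area": ["centre", "north"]}

def make_examples(utterance, context, true_pairs):
    examples = []
    for slot, values in ONTOLOGY.items():
        for value in values:
            label = (slot, value) in true_pairs
            examples.append((utterance, context, (slot, value), label))
    return examples

ex = make_examples("i would like irish food", ["request(food)"],
                   {("food", "irish")})
print(sum(label for *_, label in ex), len(ex))   # 1 positive out of 4
```

This construction makes the heavy class imbalance obvious: one positive example is accompanied by a negative example for every other pair in the ontology.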

Evaluation

We focus on two key evaluation metrics introduced in Henderson et al. (2014a):

  1. Goals (‘joint goal accuracy’): the proportion of dialogue turns where all the user’s search goal constraints were correctly identified;

  2. Requests: similarly, the proportion of dialogue turns where the user's requests for information were identified correctly.
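Joint goal accuracy can be sketched as follows; the per-turn goal dictionaries are illustrative. A turn only counts as correct when every constraint matches:

```python
# Joint goal accuracy sketch: a turn counts only if *all* goal
# constraints are identified correctly.
def joint_goal_accuracy(predicted, gold):
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

gold = [{"food": "thai"}, {"food": "thai", "area": "centre"}]
pred = [{"food": "thai"}, {"food": "thai", "area": "north"}]
print(joint_goal_accuracy(pred, gold))   # 0.5 - second turn misses area
```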

5.2 Models

We evaluate two NBT model variants: NBT-DNN and NBT-CNN. To train the models, we use the Adam optimizer Kingma and Ba (2015) with cross-entropy loss, backpropagating through all the NBT subcomponents while keeping the pre-trained word vectors fixed (in order to allow the model to deal with unseen words at test time). The model is trained separately for each slot. Due to the high class bias (most of the constructed examples are negative), we incorporate a fixed number of positive examples in each mini-batch.
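The class-balanced mini-batching described above might be sketched like this (the batch size and positive count are illustrative, not the tuned values):

```python
import random

# Class-balanced mini-batching sketch: each batch holds a fixed number of
# positive examples, the rest drawn from the much larger pool of negatives.
def balanced_batch(positives, negatives, batch_size=8, n_pos=2,
                   rng=random.Random(0)):
    batch = rng.sample(positives, n_pos) + rng.sample(negatives, batch_size - n_pos)
    rng.shuffle(batch)
    return batch

pos = [("utt", ("food", "thai"), True)] * 10
neg = [("utt", ("food", "irish"), False)] * 100
b = balanced_batch(pos, neg)
print(len(b), sum(ex[-1] for ex in b))   # 8 2
```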


Model hyperparameters were tuned on the respective validation sets; these included the initial Adam learning rate and the fraction of positive examples included in each mini-batch. The batch size did not affect performance: it was set to 256 in all experiments. Gradient clipping was used to handle exploding gradients. Dropout Srivastava et al. (2014) was used for regularisation (with a 50% dropout rate on all intermediate representations). Both NBT models were implemented in TensorFlow Abadi et al. (2015).

Baseline Models

For each of the two datasets, we compare the NBT models to:

  1. A baseline system that implements a well-known competitive delexicalisation-based model for that dataset. For DSTC2, the model is that of Henderson et al. (2014c, d). This is an n-gram based neural network model with recurrent connections between turns (but not inside utterances) which replaces occurrences of slot names and values with generic delexicalised features. For WOZ 2.0, we compare the NBT models to the more sophisticated belief tracking model presented in Wen et al. (2017). This model uses an RNN for belief state updates and a CNN for turn-level feature extraction. Unlike NBT-CNN, their CNN operates not over vectors, but over delexicalised features akin to those used by Henderson et al. (2014d).

  2. The same baseline model supplemented with a task-specific semantic dictionary (produced by the baseline system creators); the two dictionaries are available online. The DSTC2 dictionary contains only three rephrasings. Nonetheless, the use of these rephrasings translates to substantial gains in DST performance (see Sect. 6.1). We believe this result supports our claim that the vocabulary used by Mechanical Turkers in DSTC2 was constrained by the system's inability to cope with lexical variation and ASR noise. The WOZ dictionary includes 38 rephrasings, showing that the unconstrained language used by Mechanical Turkers in the Wizard-of-Oz setup requires more elaborate lexicons.

Both baseline models map exact matches of ontology-defined intents (and their lexicon-specified rephrasings) to one-hot delexicalised n-gram features. This means that pre-trained word vectors cannot be incorporated directly into these models.

6 Results

6.1 Belief Tracking Performance

Table 1 shows the performance of NBT models trained and evaluated on the DSTC2 and WOZ 2.0 datasets. The NBT models outperformed the baseline models in terms of both joint goal and request accuracies. For goals, the gains are always statistically significant (paired t-test). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries.

DST Model | DSTC2 Goals | DSTC2 Requests | WOZ 2.0 Goals | WOZ 2.0 Requests
Delexicalisation-Based Model | 69.1 | 95.7 | 70.8 | 87.1
Delexicalisation-Based Model + Semantic Dictionary | 72.9* | 95.7 | 83.7* | 87.6
Neural Belief Tracker: NBT-DNN | 72.6* | 96.4 | 84.4* | 91.2*
Neural Belief Tracker: NBT-CNN | 73.4* | 96.5 | 84.2* | 91.6*

Table 1: DSTC2 and WOZ 2.0 test set accuracies for: a) joint goals; and b) turn-level requests. The asterisk indicates statistically significant improvement over the baseline trackers (paired t-test).

While the NBT performs well across the board, we can compare its performance on the two datasets to understand its strengths. The improvement over the baseline is greater on WOZ 2.0, which corroborates our intuition that the NBT’s ability to learn linguistic variation is vital for this dataset containing longer sentences, richer vocabulary and no ASR errors. By comparison, the language of the subjects in the DSTC2 dataset is less rich, and compensating for ASR errors is the main hurdle: given access to the DSTC2 test set transcriptions, the NBT models’ goal accuracy rises to 0.96. This indicates that future work should focus on better ASR compensation if the model is to be deployed in environments with challenging acoustics.

6.2 The Importance of Word Vector Spaces

The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. Table 2 shows the performance of NBT-CNN models (the NBT-DNN model showed the same trends; for brevity, Table 2 presents only the NBT-CNN figures) making use of three different word vector collections: 1) ‘random’ word vectors initialised using the xavier initialisation Glorot and Bengio (2010); 2) distributional GloVe vectors Pennington et al. (2014), trained using co-occurrence information in large textual corpora; and 3) semantically specialised Paragram-SL999 vectors Wieting et al. (2015), obtained by injecting semantic similarity constraints from the Paraphrase Database Ganitkevitch et al. (2013) into the distributional GloVe vectors in order to improve their semantic content.
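The ‘random’ baseline can be sketched as follows. Xavier (Glorot) initialisation draws values uniformly from [-sqrt(6/(fan_in+fan_out)), +sqrt(6/(fan_in+fan_out))]; the choice fan_in = fan_out = dim for a word embedding row is an assumption here, since the convention varies by implementation.

```python
# Sketch of xavier (Glorot) uniform initialisation for a word vector.
# Such vectors carry no semantic information before training, which is
# why this collection is labelled '(No Info.)' in Table 2.
import math
import random

def xavier_vector(dim, seed=None):
    # Assumed convention for an embedding row: fan_in = fan_out = dim.
    bound = math.sqrt(6.0 / (dim + dim))
    rng = random.Random(seed)
    return [rng.uniform(-bound, bound) for _ in range(dim)]

vec = xavier_vector(300, seed=0)
# Every component lies in [-0.1, 0.1] for dim = 300, since
# sqrt(6 / 600) = 0.1.
```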

The results in Table 2 show that the use of semantically specialised word vectors leads to considerable performance gains: Paragram-SL999 vectors (significantly) outperformed GloVe and xavier vectors for goal tracking on both datasets. The gains are particularly robust for the noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. We believe this happens because distributional models keep related yet antonymous words close together (e.g. north and south, expensive and inexpensive), offsetting the useful semantic content embedded in these vector spaces.

Word Vectors        DSTC2              WOZ 2.0
                    Goals   Requests   Goals   Requests
xavier (No Info.)   64.2    81.2       81.2    90.7
GloVe               69.0*   96.4*      80.1    91.4
Paragram-SL999      73.4*   96.5*      84.2*   91.6

Table 2: DSTC2 and WOZ 2.0 test set performance (joint goals and requests) of the NBT-CNN model making use of three different word vector collections. The asterisk indicates statistically significant improvement over the baseline xavier (random) word vectors (paired t-test; p < 0.05).
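The antonym problem described above can be made concrete with cosine similarity. The vectors below are small synthetic stand-ins, not real GloVe or Paragram-SL999 vectors; they merely mimic the pattern that semantic specialisation pushes antonym pairs apart while distributional training leaves them nearly parallel.

```python
# Toy illustration of the antonym problem (synthetic vectors, not real
# GloVe/Paragram-SL999 embeddings).
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# 'Distributional-style' vectors: antonyms share contexts, so their
# vectors end up nearly parallel (high cosine similarity).
cheap_dist     = [0.9, 0.1, 0.3]
expensive_dist = [0.8, 0.2, 0.3]

# 'Specialised-style' vectors: similarity constraints from a paraphrase
# resource push the antonym pair apart.
cheap_spec     = [0.9, -0.4, 0.1]
expensive_spec = [-0.5, 0.8, 0.2]

sim_dist = cosine(cheap_dist, expensive_dist)   # close to 1.0
sim_spec = cosine(cheap_spec, expensive_spec)   # low, here negative
```

For a belief tracker, a high similarity between "cheap" and "expensive" is actively harmful: it makes the model likely to confuse the two opposing goal values.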

7 Conclusion

In this paper, we have proposed a novel neural belief tracking (NBT) framework designed to overcome current obstacles to deploying dialogue systems in real-world dialogue domains. The NBT models offer the known advantages of coupling Spoken Language Understanding and Dialogue State Tracking, without relying on hand-crafted semantic lexicons to achieve state-of-the-art performance. Our evaluation demonstrated these benefits: the NBT models match the performance of models which make use of such lexicons and vastly outperform them when these are not available. Finally, we have shown that the performance of NBT models improves with the semantic quality of the underlying word vectors. To the best of our knowledge, we are the first to move past intrinsic evaluation and show that semantic specialisation boosts performance in downstream tasks.

In future work, we intend to explore applications of the NBT for multi-domain dialogue systems, as well as in languages other than English that require handling of complex morphological variation.


The authors would like to thank Ivan Vulić, Ulrich Paquet, the Cambridge Dialogue Systems Group and the anonymous ACL reviewers for their constructive feedback and helpful discussions.


  • Abadi et al. (2015) Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems.

  • Bohus and Rudnicky (2006) Dan Bohus and Alex Rudnicky. 2006. A “k hypotheses + other” belief updating model. In Proceedings of the AAAI Workshop on Statistical and Empirical Methods in Spoken Dialogue Systems.
  • Celikyilmaz and Hakkani-Tur (2015) Asli Celikyilmaz and Dilek Hakkani-Tur. 2015. Convolutional Neural Network Based Semantic Tagging with Entity Embeddings. In Proceedings of NIPS Workshop on Machine Learning for Spoken Language Understanding and Interaction.
  • Collobert et al. (2011) Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493–2537.
  • Dernoncourt et al. (2016) Franck Dernoncourt, Ji Young Lee, Trung H. Bui, and Hung H. Bui. 2016. Robust dialog state tracking for large ontologies. In Proceedings of IWSDS.
  • Dušek and Jurčíček (2015) Ondřej Dušek and Filip Jurčíček. 2015. Training a Natural Language Generator From Unaligned Data. In Proceedings of ACL.
  • Ganitkevitch et al. (2013) Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of NAACL HLT.
  • Glorot and Bengio (2010) Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS.
  • Henderson et al. (2012) Matthew Henderson, Milica Gašić, Blaise Thomson, Pirros Tsiakoulis, Kai Yu, and Steve Young. 2012. Discriminative Spoken Language Understanding Using Word Confusion Networks. In Spoken Language Technology Workshop, 2012. IEEE.
  • Henderson et al. (2014a) Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014a. The Second Dialog State Tracking Challenge. In Proceedings of SIGDIAL.
  • Henderson et al. (2014b) Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014b. The Third Dialog State Tracking Challenge. In Proceedings of IEEE SLT.
  • Henderson et al. (2014c) Matthew Henderson, Blaise Thomson, and Steve Young. 2014c. Robust Dialog State Tracking using Delexicalised Recurrent Neural Networks and Unsupervised Adaptation. In Proceedings of IEEE SLT.
  • Henderson et al. (2014d) Matthew Henderson, Blaise Thomson, and Steve Young. 2014d. Word-Based Dialog State Tracking with Recurrent Neural Networks. In Proceedings of SIGDIAL.
  • Jang et al. (2016) Youngsoo Jang, Jiyeon Ham, Byung-Jun Lee, Youngjae Chang, and Kee-Eung Kim. 2016. Neural dialog state tracker for large ontologies by attention mechanism. In Proceedings of IEEE SLT.
  • Kalchbrenner et al. (2014) Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A Convolutional Neural Network for Modelling Sentences. In Proceedings of ACL.
  • Kim (2014) Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of EMNLP.
  • Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of ICLR.
  • Lee and Kim (2016) Byung-Jun Lee and Kee-Eung Kim. 2016. Dialog History Construction with Long-Short Term Memory for Robust Generative Dialog State Tracking. Dialogue & Discourse 7(3):47–64.
  • Liu and Lane (2016a) Bing Liu and Ian Lane. 2016a. Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling. In Proceedings of Interspeech.
  • Liu and Lane (2016b) Bing Liu and Ian Lane. 2016b. Joint Online Spoken Language Understanding and Language Modeling with Recurrent Neural Networks. In Proceedings of SIGDIAL.
  • Liu and Perez (2017) Fei Liu and Julien Perez. 2017. Gated End-to-End Memory Networks. In Proceedings of EACL.
  • Mairesse et al. (2009) F. Mairesse, M. Gasic, F. Jurcicek, S. Keizer, B. Thomson, K. Yu, and S. Young. 2009. Spoken Language Understanding from Unaligned Data using Discriminative Classification Models. In Proceedings of ICASSP.
  • Mesnil et al. (2015) Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Dong Yu, and Geoffrey Zweig. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing 23(3):530–539.
  • Mrkšić et al. (2016) Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting Word Vectors to Linguistic Constraints. In Proceedings of HLT-NAACL.
  • Mrkšić et al. (2015) Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multi-domain Dialog State Tracking using Recurrent Neural Networks. In Proceedings of ACL.
  • Nair and Hinton (2010) Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of ICML.
  • Peng et al. (2015) Baolin Peng, Kaisheng Yao, Li Jing, and Kam-Fai Wong. 2015. Recurrent Neural Networks with External Memory for Language Understanding. In Proceedings of the National CCF Conference on Natural Language Processing and Chinese Computing.
  • Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of EMNLP.
  • Perez (2016) Julien Perez. 2016. Spectral decomposition method of dialog state tracking via collective matrix factorization. Dialogue & Discourse 7(3):34–46.
  • Perez and Liu (2017) Julien Perez and Fei Liu. 2017. Dialog state tracking, a machine reading approach using Memory Network. In Proceedings of EACL.
  • Raymond and Riccardi (2007) Christian Raymond and Giuseppe Riccardi. 2007. Generative and discriminative algorithms for spoken language understanding. In Proceedings of Interspeech.
  • Rocktäschel et al. (2016) Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. 2016. Reasoning about entailment with neural attention. In ICLR.
  • Saleh et al. (2014) Iman Saleh, Shafiq Joty, Lluís Màrquez, Alessandro Moschitti, Preslav Nakov, Scott Cyphers, and Jim Glass. 2014. A study of using syntactic and semantic structures for concept segmentation and labeling. In Proceedings of COLING.
  • Shi et al. (2016) Hongjie Shi, Takashi Ushio, Mitsuru Endo, Katsuyoshi Yamagami, and Noriaki Horii. 2016. Convolutional Neural Networks for Multi-topic Dialog State Tracking. In Proceedings of IWSDS.
  • Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research .
  • Su et al. (2016a) Pei-Hao Su, Milica Gašić, Nikola Mrkšić, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016a. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689.
  • Su et al. (2016b) Pei-Hao Su, Milica Gašić, Nikola Mrkšić, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016b. On-line active reward learning for policy optimisation in spoken dialogue systems. In Proceedings of ACL.
  • Sun et al. (2014) Kai Sun, Lu Chen, Su Zhu, and Kai Yu. 2014. The SJTU System for Dialog State Tracking Challenge 2. In Proceedings of SIGDIAL.
  • Sun et al. (2016) Kai Sun, Qizhe Xie, and Kai Yu. 2016. Recurrent Polynomial Network for Dialogue State Tracking. Dialogue & Discourse 7(3):65–88.
  • Thomson and Young (2010) Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech and Language .
  • Tur et al. (2013) Gokhan Tur, Anoop Deoras, and Dilek Hakkani-Tur. 2013. Semantic Parsing Using Word Confusion Networks With Conditional Random Fields. In Proceedings of Interspeech.
  • Vlachos and Clark (2014) Andreas Vlachos and Stephen Clark. 2014. A new corpus and imitation learning framework for context-dependent semantic parsing. TACL 2:547–559.
  • Vodolán et al. (2017) Miroslav Vodolán, Rudolf Kadlec, and Jan Kleindienst. 2017. Hybrid Dialog State Tracker with ASR Features. In Proceedings of EACL.
  • Vu et al. (2016) Ngoc Thang Vu, Pankaj Gupta, Heike Adel, and Hinrich Schütze. 2016. Bi-directional recurrent neural network with ranking loss for spoken language understanding. In Proceedings of ICASSP.
  • Vulić et al. (2017) Ivan Vulić, Nikola Mrkšić, Roi Reichart, Diarmuid Ó Séaghdha, Steve Young, and Anna Korhonen. 2017. Morph-fitting: Fine-tuning word vector spaces with simple language-specific rules. In Proceedings of ACL.
  • Ward (1994) Wayne Ward. 1994. Extracting Information From Spontaneous Speech. In Proceedings of Interspeech.
  • Wang and Lemon (2013) Zhuoran Wang and Oliver Lemon. 2013. A Simple and Generic Belief Tracking Mechanism for the Dialog State Tracking Challenge: On the believability of observed information. In Proceedings of SIGDIAL.
  • Wen et al. (2015a) Tsung-Hsien Wen, Milica Gašić, Dongho Kim, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015a. Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking. In Proceedings of SIGDIAL.
  • Wen et al. (2015b) Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke, and Steve Young. 2015b. Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems. In Proceedings of EMNLP.
  • Wen et al. (2017) Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of EACL.
  • Wieting et al. (2015) John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. TACL 3:345–358.
  • Williams (2014) Jason D. Williams. 2014. Web-style ranking and SLU combination for dialog state tracking. In Proceedings of SIGDIAL.
  • Williams et al. (2016) Jason D. Williams, Antoine Raux, and Matthew Henderson. 2016. The Dialog State Tracking Challenge series: A review. Dialogue & Discourse 7(3):4–33.
  • Williams et al. (2013) Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan W. Black. 2013. The Dialogue State Tracking Challenge. In Proceedings of SIGDIAL.
  • Williams and Young (2007) Jason D. Williams and Steve Young. 2007. Partially observable Markov decision processes for spoken dialog systems. Computer Speech and Language 21:393–422.
  • Yao et al. (2014) Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In Proceedings of ASRU.
  • Young et al. (2010) Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech and Language 24:150–174.
  • Zhang and Wang (2016) Xiaodong Zhang and Houfeng Wang. 2016. A Joint Model of Intent Determination and Slot Filling for Spoken Language Understanding. In Proceedings of IJCAI.
  • Zilka and Jurcicek (2015) Lukas Zilka and Filip Jurcicek. 2015. Incremental LSTM-based dialog state tracker. In Proceedings of ASRU.