The Dialog State Tracking Challenge with Bayesian Approach

02/20/2017 ∙ by Quan Nguyen, et al.

Generative models have been among the most common approaches to the Dialog State Tracking Problem, thanks to their ability to model dialog hypotheses in an explicit manner. The most important task in such Bayesian network models is constructing reliable user models by learning from the training data and reflecting it in the probability distribution of user actions conditioned on network states. This paper provides an overall picture of the learning process in a Bayesian framework, with an emphasis on state-of-the-art theoretical analyses of the Expectation Maximization learning algorithm.




1 Introduction

The problem of understanding users’ intentions has long been pursued by engineers and scientists in speech processing [Thomson et al.(2010)Thomson, Jurcícek, Gašić, Keizer, Mairesse, Yu, and Young]. Why is it such a hard problem? One main reason is that, for a long time, there were no effective dialog models that could match speech signals to proper hypotheses about what the speakers intend to do.

Figure 1 illustrates one effective model suggested by [Williams et al.(2016)Williams, Raux, and Henderson]. Basically any dialog model needs to be capable of handling the following three tasks:

  • Understanding the meaning of users’ utterance given the current state of the dialog.

  • Understanding the changes of dialog’s states given the meaning of users’ utterance.

  • Taking appropriate actions based on the new states of the dialog.

It can be observed that the model in figure 1 has dedicated modules to fulfill all those requirements. Since the analysis results of one task are the input for the subsequent tasks, the accuracy of the first two tasks is crucial for a dialog system to derive suitable actions every time the system takes the initiative.

Figure 1: End-to-End model of a dialog system [Williams et al.(2016)Williams, Raux, and Henderson]

Assume that we already have a functioning system and we want to find its bottleneck. In other words, we want to know what the source of errors would be. Consider how the speech signal is transformed throughout the system: first the Automatic Speech Recognition (ASR) module transcribes the sound into words, then the Spoken Language Understanding unit (SLU) composes the perceived words into small semantic units. The Dialog Manager (DM) takes in these semantic units and tries to fill in a number of slots. If the list of words the ASR churns out is highly incorrect, the subsequent modules (SLU and DM) are often misled into erroneous understanding and actions.

Unfortunately, high error rates in the ASR module are still common [Williams et al.(2016)Williams, Raux, and Henderson]. Consequently, in order to achieve a certain level of robustness, the SLU and especially the DM need measures to overcome the non-optimal output of the ASR.

To address the above issues of ambiguity and misunderstanding, one main principle is implemented in the dynamics of the above three modules: maintaining multiple hypotheses in a probabilistic manner. Each hypothesis has an associated weight that indicates the system’s level of certainty in that hypothesis. For example, in Figure 1 the ASR considers it most likely that the speaker said something about leaving downtown, with a probability of 60 percent. Similarly, multiple instances of flight reservations (the most important semantic information) are stored in the DM to decide which dialog policy to follow in the next step.
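This principle, keeping a set of weighted hypotheses and renormalizing their weights as new evidence arrives, can be sketched in a few lines of Python. The slot values and likelihood numbers below are invented for illustration; they are not taken from the paper or any real system:

```python
# Illustrative sketch: maintaining weighted dialog hypotheses.
# All slot values and likelihoods below are made up.

def normalize(beliefs):
    """Rescale weights so they form a probability distribution."""
    total = sum(beliefs.values())
    return {h: w / total for h, w in beliefs.items()}

def update(beliefs, likelihoods):
    """Bayesian update: multiply each hypothesis weight by the
    likelihood of the new observation under that hypothesis."""
    return normalize({h: w * likelihoods.get(h, 1e-6)
                      for h, w in beliefs.items()})

# Initial hypotheses about a departure slot, e.g. from an ASR N-best list.
beliefs = normalize({"downtown": 0.6, "uptown": 0.3, "midtown": 0.1})

# A later utterance makes "downtown" even more likely.
beliefs = update(beliefs, {"downtown": 0.8, "uptown": 0.1, "midtown": 0.1})
print(max(beliefs, key=beliefs.get))
```

Each turn of the dialog multiplies in new evidence and renormalizes, so unlikely hypotheses fade without being discarded outright.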

2 The Dialog State Tracking Challenge

The Dialog State Tracking Challenge (DSTC) provides a common testing framework for dialog state trackers. The main idea behind this contest-format framework is that, on the same training and testing data, trackers built upon different models compete to find the best model (i.e. the highest performance score) across different scenarios and testing schemes (e.g. mismatched distributions between training and test data, changes of the user’s goals, open versus closed dictionary, etc.) [Williams et al.(2016)Williams, Raux, and Henderson].

Figure 2: Based on the output of the ASR and SLU modules, the dialog state tracker establishes and maintains multiple hypotheses of the user’s intention. The probability distribution over these hypotheses is subsequently used for deciding the agent’s actions [Williams et al.(2016)Williams, Raux, and Henderson]

Figure 2 demonstrates the typical output of a tracker. A number of different (both contradicting and complementary) states are maintained and scored in the tracker. These scores are effectively constructed not from a single speaking phase but from multiple phases. As the dialog progresses, the DM is expected to gradually update the scores of the states, and the most probable states become more and more obvious. In this setting, although the output of the ASR is not very accurate, the system is capable of combining multiple outputs and selectively eliminating the most unlikely states. This behavior is very similar to that of a particle filter, where the current position of the robot becomes more and more apparent as the robot gets more information about the environment.

As mentioned previously, the system mainly operates in a probabilistic manner, with stochastic models describing how the dynamics of one module lead to the states of the subsequent modules. Two popular families of models emerge in this setting: discriminative models and generative models. This paper only presents the learning algorithm in the generative model employed by [Williams and Young(2007)].

3 The Bayesian Method

In generative models, the probability of the observations is formulated via a stochastic sequence-generation mechanism driven by some latent variables. One simple yet effective model is the Hidden Markov Model, where the observations and state transitions are ruled by the Markov property: the probability of encountering the current state or observation depends only on the immediately previous state.

Figure 3: A Markov chain with z_1, …, z_N being the hidden states and x_1, …, x_N being the observable states. The Markov property indicates that the likelihood of the observation at time k can be attributed to the single hidden state at time k. [Sridharan(2014)]

Figure 3 illustrates the graphical model of a simple Hidden Markov Model (HMM). The sequence of hidden states and observations is called a Markov chain. There are two main objectives of such a model: the first is to find the most likely hidden states that result in a given sequence of observations; the second is to construct the best transition and observation probabilities from a randomly initialized setting. The accuracy of the former objective depends entirely on the accuracy of the latter. Therefore many systems put a strong emphasis on evaluating and estimating these probabilities in the HMM.

In the context of a spoken dialog system, the speech signal is the ”observation” and the underlying hypotheses and states of the dialog are the ”latent variables”. Since the sequence of events is generated based on hidden variables, the system wants to explicitly formulate the probability that it will ”hear” something given the current understanding of the dialog.
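The generative view can be made concrete with a toy HMM that samples dialog states and then "utterances" from them. The states, vocabulary, and all probabilities below are invented purely for illustration:

```python
import random

# Toy generative HMM: hidden dialog states emit observable utterances.
# All states, words, and probabilities are invented for illustration.
random.seed(0)

states = ["greeting", "request", "confirm"]
observations = ["hello", "book_flight", "yes"]

# p(z_k | z_{k-1}): each row sums to 1.
transition = {
    "greeting": {"greeting": 0.1, "request": 0.8, "confirm": 0.1},
    "request":  {"greeting": 0.1, "request": 0.3, "confirm": 0.6},
    "confirm":  {"greeting": 0.2, "request": 0.4, "confirm": 0.4},
}
# p(x_k | z_k): probability of "hearing" an utterance given the state.
emission = {
    "greeting": {"hello": 0.8, "book_flight": 0.1, "yes": 0.1},
    "request":  {"hello": 0.1, "book_flight": 0.8, "yes": 0.1},
    "confirm":  {"hello": 0.1, "book_flight": 0.1, "yes": 0.8},
}

def sample(dist):
    """Draw one key from a {value: probability} dictionary."""
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(n, start="greeting"):
    """Generate a length-n sequence of (hidden state, observation) pairs."""
    z, seq = start, []
    for _ in range(n):
        seq.append((z, sample(emission[z])))
        z = sample(transition[z])
    return seq

for z, x in generate(3):
    print(z, "->", x)
```

Running the generator forward mirrors the modeling assumption: the dialog state evolves by the Markov property, and each utterance depends only on the current hidden state.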

In section 4, we will discuss in detail the Expectation Maximization algorithm as the main learning method in a generative model for the DSTC.

4 Expectation Maximization Algorithm

The Expectation Maximization algorithm (EM) is one of the most commonly used optimization algorithms [Syed and Williams(2008)]. The main idea of EM is to find an appropriate probability distribution over the latent variables and, based on that distribution, repeatedly estimate the parameters with better values than the previous ones. The objective function of EM is the log-likelihood of the observed data.

4.1 Jensen Inequality

Generally, Jensen’s Inequality states that if a function f is convex, then the function of the expectation is always smaller than or equal to the expectation of that function [Borman(2004)]:

f(E[X]) ≤ E[f(X)]

The same property holds for a concave function with the inequality reversed (greater than or equal instead of smaller than or equal). This comparison of the two terms, the expected value of a function and that function’s value at the expected value of its domain, can be proved based on the convexity of the function [Syed and Williams(2008)].

Figure 4: An example of a convex function. A function is guaranteed to be convex if the expected value of the function is larger than or equal to its value at the expected value of its domain. [Borman(2004)]

Figure 4 illustrates the relative comparison between the two quantities for a parabolic convex function. It is observable (and mathematically provable) that equality in Jensen’s inequality holds only when the variable is identical to its expected value.

Why is this inequality useful? Consider the logarithm function

f(x) = log x

Its second-order derivative is

f′′(x) = −1/x² < 0

This indicates that the logarithm function is concave, and so is the objective log-likelihood function. This fact enables us to apply Jensen’s inequality to the objective function and attain a tractable lower bound on it.
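A quick Monte Carlo check makes the concave form of the inequality, log(E[X]) ≥ E[log X], tangible. The sample range below is arbitrary:

```python
import math
import random

# Numerical check of Jensen's inequality for the concave log function:
# log(E[X]) >= E[log X]. The positive sample range is arbitrary.
random.seed(1)
xs = [random.uniform(0.1, 10.0) for _ in range(100_000)]

mean_x = sum(xs) / len(xs)
log_of_mean = math.log(mean_x)                        # f(E[X])
mean_of_log = sum(math.log(x) for x in xs) / len(xs)  # E[f(X)]

print(log_of_mean >= mean_of_log)  # True: concavity reverses the bound
```

Because X is not constant here, the inequality is in fact strict, matching the equality condition discussed above.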

4.2 Expectation Maximization Algorithm

The reason for finding a lower-bound estimate of the objective function lies in the intractability of the objective function itself. Since the distribution of the latent variables is unknown in most cases, directly maximizing the objective function by traversing all possible configurations of hidden states is downright infeasible. A better method is to find a strict lower-bound function and maximize that instead [Borman(2004)]. The most crucial property this method needs to possess is convergence. In other words, it is absolutely required that after each optimization step the new parameters are no worse than the previous ones.

Figure 5: Optimizing an (intractable) objective function by optimizing its lower bound. The next parameter θ_{t+1} has to be chosen selectively to ensure the convergence of the algorithm [Borman(2004)]

Figure 5 demonstrates one optimization step of the EM algorithm. Notice that when the algorithm moves from the current parameter θ_t to the new one θ_{t+1}, the actual likelihood function increases along with the increase of the lower-bound function. Once the lower-bound function can no longer be improved, the EM algorithm stops, even though the actual likelihood might still admit higher values elsewhere. This stopping condition ensures that the likelihood sequence is monotonic and guarantees convergence to a (local) maximum.

Require: Domain space of the parameters is well-defined

Require: A random initialization procedure for the parameter vector θ is available

  T ← maximum iterations
  θ⁰ ← randomly initialized
  for t = 1, …, T do
    E-step: compute Q(θ | θ^{t−1}) = E_{z ∼ p(z | x, θ^{t−1})}[log p(x, z | θ)]
    M-step: θ^t ← argmax_θ Q(θ | θ^{t−1})
  end for
  return θ^T and the log-likelihood L(θ^T)
Algorithm 1 Find MLE by EM

Algorithm 1 is a step-by-step illustration of the EM algorithm. Formally, the algorithm incrementally looks for an optimal configuration of the parameters. The use of Jensen’s Inequality allows us to transform the initial intractable optimization problem into a tractable one, at the cost of losing the generality of finding the global maximum. Nevertheless, it has been shown empirically that the EM algorithm achieves very good performance in the DSTC [Williams et al.(2016)Williams, Raux, and Henderson].
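To see Algorithm 1 in action, the sketch below runs EM on a standard textbook instance, a two-component one-dimensional Gaussian mixture, rather than the paper's dialog model. The component locations and the decision to fix unit variances are assumptions made for brevity:

```python
import math
import random

# EM sketch on a two-component 1-D Gaussian mixture (textbook instance
# of Algorithm 1, not the dialog model). Data parameters are invented.
random.seed(2)
data = ([random.gauss(-2.0, 1.0) for _ in range(200)] +
        [random.gauss(3.0, 1.0) for _ in range(200)])

def pdf(x, mu, sigma):
    """Density of a 1-D Gaussian."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu = [-1.0, 1.0]    # initial means (deliberately wrong)
sigma = [1.0, 1.0]  # variances fixed at 1 for simplicity
pi = [0.5, 0.5]     # mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    resp = []
    for x in data:
        w = [pi[k] * pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: re-estimate means and mixing weights from responsibilities.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        pi[k] = nk / len(data)

print(sorted(round(m, 1) for m in mu))  # means near the true -2.0 and 3.0
```

Each iteration provably does not decrease the data log-likelihood, which is exactly the monotonicity property discussed around Figure 5.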

4.3 Forward-Backward Algorithm

In the process of finding the optimal parameters with the EM algorithm, one frequent sub-procedure is to calculate the probability of obtaining a subset of the events, or all of them, given some values of the states. This computation is non-trivial, since it requires manipulation of the graphical model depicted by the HMM above. This section provides an overview of how to perform such a computation, not only on the HMM itself but on general acyclic graphs. The main principle behind this computation is Dynamic Programming, a tractable method to calculate the value of any state given all the causal transition states before it [Sridharan(2014)].

Require: Probability distribution of the initial unobserved state is well-defined

  x_{1:N} ← set of all observed events
  z_{1:N} ← set of all latent variables
  Run Forward algorithm to obtain α_k(z_k) = p(z_k, x_{1:k})
  Run Backward algorithm to obtain β_k(z_k) = p(x_{k+1:N} | z_k)
  for k = 1, …, N do
    compute all p(z_k | x_{1:N}) ∝ α_k(z_k) · β_k(z_k)
  end for
Algorithm 2 Forward-Backward algorithm

Algorithm 2 demonstrates how the Forward-Backward algorithm is implemented in general. As its name suggests, the algorithm requires three runs over all states, with the last run combining the results of the previous two. The aim of the final run is to compute the posterior probability of each hidden state given the whole observation sequence. To obtain that result, the algorithm needs two pieces of information: the joint probability of a hidden state and the observations up to it (the forward quantities), and the likelihood of the remaining observations given that hidden state (the backward quantities).

Figure 6: Markov chain with Forward and Backward flow of probabilities. [Sridharan(2014)]

An illustration of the Forward-Backward algorithm can be found in Figure 6. Since the graph has no cycles, any node on the graph contains all the information of its ancestors (depending on which direction the algorithm is running, the ancestors may be the previous or the subsequent hidden states and observations). The algorithm basically considers one node at a time and never goes in the reverse direction. This key observation is crucial in making the algorithm tractable.

Require: Transition probabilities are well-defined
Require: Prior distribution is well-defined
Require: Probability distribution of the initial unobserved state is well-defined

  x_{1:N} ← set of all observed events
  z_{1:N} ← set of all unobserved events
  for k = 1, …, N do
    Recursively compute α_k(z_k) = p(x_k | z_k) Σ_{z_{k−1}} α_{k−1}(z_{k−1}) p(z_k | z_{k−1}) by Dynamic Programming
  end for
Algorithm 3 Forward: compute the joint probability α_k(z_k) = p(z_k, x_{1:k}) of both observed and unobserved states

Require: Transition probabilities are well-defined
Require: Prior distribution is well-defined
Require: Probability distribution of the initial unobserved state is well-defined

  x_{1:N} ← set of all observed events
  z_{1:N} ← set of all unobserved events
  for k = N−1, …, 1 do
    Recursively compute β_k(z_k) = Σ_{z_{k+1}} p(z_{k+1} | z_k) p(x_{k+1} | z_{k+1}) β_{k+1}(z_{k+1}) by Dynamic Programming
  end for
Algorithm 4 Backward: compute the likelihood β_k(z_k) = p(x_{k+1:N} | z_k) of a range of observed states given a single prior unobserved state

Algorithm 3 and Algorithm 4 illustrate the simplicity of implementing the two procedures, the forward run and the backward run. Indeed, this simplicity and efficiency are among the reasons that make the algorithm popular in every circumstance where the probabilistic graphical model is an acyclic graph.
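The three runs of Algorithms 2, 3 and 4 fit in a compact, generic Python function for discrete HMMs. The two-state weather-style model at the bottom is an illustrative toy, not taken from the paper:

```python
# Generic forward-backward for a discrete HMM (Algorithms 2-4).
# The weather-style model below is purely illustrative.

def forward_backward(obs, states, start, trans, emit):
    """Return the smoothed posteriors p(z_k | x_{1:N}) for each position k."""
    N = len(obs)
    # Forward pass: alpha[k][z] = p(z_k = z, x_1..x_k)
    alpha = [{z: start[z] * emit[z][obs[0]] for z in states}]
    for k in range(1, N):
        alpha.append({z: emit[z][obs[k]] *
                      sum(alpha[k - 1][zp] * trans[zp][z] for zp in states)
                      for z in states})
    # Backward pass: beta[k][z] = p(x_{k+1}..x_N | z_k = z), beta[N-1] = 1
    beta = [dict.fromkeys(states, 1.0) for _ in range(N)]
    for k in range(N - 2, -1, -1):
        beta[k] = {z: sum(trans[z][zn] * emit[zn][obs[k + 1]] * beta[k + 1][zn]
                          for zn in states)
                   for z in states}
    # Combine: p(z_k | x_{1:N}) proportional to alpha[k][z] * beta[k][z]
    post = []
    for k in range(N):
        w = {z: alpha[k][z] * beta[k][z] for z in states}
        s = sum(w.values())
        post.append({z: w[z] / s for z in states})
    return post

states = ("rain", "sun")
start = {"rain": 0.5, "sun": 0.5}
trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
emit = {"rain": {"umbrella": 0.9, "none": 0.1},
        "sun": {"umbrella": 0.2, "none": 0.8}}

post = forward_backward(["umbrella", "umbrella", "none"], states, start, trans, emit)
print([max(p, key=p.get) for p in post])  # ['rain', 'rain', 'sun']
```

Note how each position is visited once per pass and the final combination is a per-position normalization, which is what keeps the procedure tractable on acyclic models.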

5 Empirical results

The empirical performance of the EM algorithm in comparison with the two methods based on transcribed dialogs can be found in Figure 7 and Figure 8. In general, it can be observed that EM works better than learning from automatically transcribed logs but worse than learning from manually transcribed logs.

Figure 7: Performance of EM is improved with increasing size of training set [Syed and Williams(2008)]

The learning curve depicted in Figure 7 indicates a monotonically increasing relationship between the performance of an algorithm and the number of dialogs in the training set. The justification is obvious: the more data in the training set, the closer the estimated model is to the optimal setting. The exact log-likelihood value of each method can be found in Figure 8.

Figure 8: The normalized log-likelihood of the three different methods. EM shows an improvement over automatically transcribed dialogs but still lags behind manually transcribed dialogs [Syed and Williams(2008)].

The discrepancy between manually and automatically transcribed logs can be explained by the errors of the ASR module. Since the ASR is not optimal, a system trained directly on its raw output performs worse than one with an optimization step such as the Bayesian method. For the same reason, manual transcription effectively eliminates all possible errors from the ASR and thus achieves the best result among the three. The aim of research in the field is to bring the performance of generative models closer and closer to that of the manually transcribed method.

6 Conclusion and Discussion

The convergence of the EM algorithm has been proved in [Collins(1997)]; another proof can be found in [Yihua Chen(2010)]. However, the gradual optimization in EM is only as good as gradient descent, which makes it prone to saddle points [Collins(1997)]. It should be noted that by gradient descent we are referring to optimization performed on the original likelihood function by calculating the derivative of the log-likelihood and adding the derivatives to the parameters, similar to the back-propagation learning algorithm in neural networks.

The inherent weakness of the generative model is the necessity of modeling the prior distribution of the latent variables. While in some circumstances modeling this prior distribution can be beneficial, in the sense that it tells us how the latent variables span their domain space, we can hardly have enough data and computational resources to estimate this distribution accurately. Indeed, it has been shown in [Williams et al.(2016)Williams, Raux, and Henderson] that in all three DSTCs the discriminative models consistently outperform the generative models by a large margin. However, it should be noted that the superiority of discriminative models is conditional on a sufficient volume of data. When the data is insufficient to build a good model, discriminative models easily overfit while being unable to tell us anything meaningful about the nature of the system.

As shown in section 5, the performance of generative models is still far from that of manually transcribed dialogs and the absolute truth. While a better ASR will certainly increase the performance of the whole system, building a better model to exploit the output of the ASR and SLU is still an active research field. We have seen above that the performance of the model increases by training on more and more data, so incorporating the system into a big-data architecture with proper scaling could be one promising measure on the way to achieving human-like performance of dialog models. Another method, which involves rigorous mathematical analysis, is to find tighter lower-bound estimates of the likelihood. While the Jensen inequality has proven able to achieve reasonable results, a stricter evaluation of the lower bound will certainly benefit the optimization process by increasing the optimal values of the converged states and allowing longer training for better use of the increasing amounts of data and computational power.


  • [Borman(2004)] Sean Borman. The Expectation Maximization algorithm: A short tutorial. Technical report, 2004.
  • [Collins(1997)] Michael Collins. The EM algorithm. In fulfillment of Written Preliminary Exam II requirement, 1997.
  • [Sridharan(2014)] Ramesh Sridharan. HMMs and the Forward-Backward algorithm, 2014.
  • [Syed and Williams(2008)] Umar Syed and Jason D. Williams. Using Automatically Transcribed Dialogs to Learn User Models in a Spoken Dialog System. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, HLT-Short ’08, pages 121–124, Stroudsburg, PA, USA, 2008. Association for Computational Linguistics.
  • [Thomson et al.(2010)Thomson, Jurcícek, Gašić, Keizer, Mairesse, Yu, and Young] B. Thomson, F. Jurcícek, M. Gašić, S. Keizer, F. Mairesse, K. Yu, and S. Young. Parameter learning for POMDP spoken dialogue models. In 2010 IEEE Workshop on Spoken Language Technology, SLT 2010 - Proceedings, pages 271–276, 2010.
  • [Williams et al.(2016)Williams, Raux, and Henderson] Jason Williams, Antoine Raux, and Matthew Henderson. The dialog state tracking challenge series: A review. Dialogue & Discourse, 7(3):4–33, 2016.
  • [Williams and Young(2007)] Jason D. Williams and Steve Young. Partially Observable Markov Decision Processes for Spoken Dialog Systems. Comput. Speech Lang., 21(2):393–422, April 2007. ISSN 0885-2308. doi: 10.1016/j.csl.2006.06.008.
  • [Yihua Chen(2010)] Yihua Chen and Maya R. Gupta. EM demystified: An expectation-maximization tutorial. Technical Report UWEETR-2010-0002, Department of Electrical Engineering, University of Washington, Seattle, WA 98195, February 2010.