1 Introduction
A task-oriented spoken dialogue system (SDS) is a system that can interact with a user to accomplish a predefined task through speech. It usually has three modules: input, output and control, as shown in Figure 1. The input module consists of automatic speech recognition (ASR) and spoken language understanding (SLU), with which the user's speech is converted into text and semantics-level user dialogue acts are extracted. Once the user dialogue acts are received, the control module, also called dialogue management, accomplishes two missions. One mission is dialogue state tracking (DST), the process of estimating the distribution of the dialogue states, an encoding of the machine's understanding of the conversation as the dialogue progresses. The other mission is to choose semantics-level machine dialogue acts to direct the dialogue given the dialogue state, referred to as dialogue decision making. The output module converts the machine acts into text via natural language generation and generates speech from the text via text-to-speech synthesis.
Dialogue management is the core of an SDS. Traditionally, dialogue states are assumed to be observable, and handcrafted rules are employed for dialogue management in most commercial SDSs. However, because of unpredictable user behaviour and inevitable ASR and SLU errors, dialogue state tracking and decision making are difficult (Williams and Young, 2007). Consequently, in recent years, there has been a research trend from rule-based dialogue management towards statistical dialogue management. The partially observable Markov decision process (POMDP) framework offers a well-founded theory for both dialogue state tracking and decision making in statistical dialogue management (Roy et al., 2000; Zhang et al., 2001; Williams and Young, 2005, 2007; Thomson and Young, 2010; Gašić and Young, 2011; Young et al., 2010). In previous studies of POMDP, dialogue state tracking and decision making were usually investigated together. In recent years, to advance the research of statistical dialogue management, the DST problem has been separated out of the statistical dialogue management framework so that a variety of models can be investigated for DST.

Most early studies of POMDP-based DST were devoted to generative models (Young et al., 2010)
, which learn the joint probability distribution over observations and labels. Fundamental weaknesses of generative models were revealed by the results of Williams (2012). In contrast, discriminative state tracking models have been successfully used for SDSs (Deng et al., 2013). Compared to generative models, where assumptions about the probabilistic dependencies of features are usually needed, discriminative models directly model the probability distribution of labels given observations, enabling rich features to be incorporated. The results of the Dialog State Tracking Challenge (DSTC) (Williams et al., 2013; Henderson et al., 2014b,a) further demonstrated the power of discriminative statistical models, such as Maximum Entropy (MaxEnt) (Lee and Eskenazi, 2013), Conditional Random Field (Lee, 2013), Deep Neural Network (DNN)
(Sun et al., 2014a), and Recurrent Neural Network (RNN)
(Henderson et al., 2014d). In addition to discriminative statistical models, discriminative rule-based models have also been investigated for DST due to their efficiency, portability and interpretability, and some of them showed good performance and generalisation ability in the DSTCs (Zilka et al., 2013; Wang and Lemon, 2013). However, both rule-based and statistical approaches have disadvantages. Statistical approaches have shown large variation in performance and poor generalisation ability due to the lack of data (Williams, 2012). Moreover, statistical models usually have more complex structures and features than rule-based models, and thus can hardly achieve the efficiency, portability and interpretability of rule-based models. As for rule-based models, their performance is usually not competitive with the best statistical approaches. Additionally, since they require much expert knowledge and there is no general way to design rule-based models with prior knowledge, they are typically difficult to design and maintain. Furthermore, there is no general way to improve their performance when training data are available.

Recent studies on the constrained Markov Bayesian polynomial (CMBP) framework take the first step towards bridging the gap between rule-based and statistical approaches for DST (Sun et al., 2014b; Yu et al., 2015). CMBP formulates rule-based DST in a general way and allows data-driven rules to be generated. Concretely, in the CMBP framework, DST models are defined as polynomial functions of a set of features whose coefficients are integers and satisfy a set of constraints in which prior knowledge is encoded. The optimal DST model is selected by evaluating each candidate on training data. Yu et al. (2015) further extended CMBP to real-coefficient polynomials, where the real coefficients can be estimated by optimizing the DST performance on training data using grid search.
CMBP offers a way to improve performance when training data are available and achieves performance competitive with state-of-the-art statistical approaches, while at the same time keeping most of the advantages of rule-based models. Nevertheless, adding features to CMBP is not as easy as with most statistical approaches: on the one hand, the features usually need to be probability-related; on the other hand, additional prior knowledge is needed to constrain the search space. For the same reason, increasing the model complexity, for example by using a higher-order polynomial or by introducing hidden variables, also requires additional suitable prior knowledge to keep the search space from becoming too large. Moreover, CMBP can hardly fully utilize the labelled data, because in practice its polynomial coefficients are set by grid search.
In this paper, a novel hybrid framework, referred to as recurrent polynomial network (RPN), is proposed to further bridge the gap between rule-based and statistical approaches for DST. Although the basic idea of transforming rules into neural networks has existed for many years (Cloete and Zurada, 2000), little work has been done on dialogue state tracking. RPN can be regarded as a kind of human-interpretable computational network, and its unique structure enables the framework to retain all the advantages of CMBP, including efficiency, portability and interpretability. Additionally, RPN achieves more properties of statistical approaches than CMBP. In general, RPN has neither restrictions on feature type, nor search space issues to be concerned about, so adding features and increasing the model complexity are much easier in RPN. Furthermore, with labelled data, RPN can explore the parameter space better than CMBP.
The DSTCs have provided the first common testbed in a standard format, along with a suite of evaluation metrics to facilitate direct comparisons among DST models
(Williams et al., 2013). To evaluate the effectiveness of RPN for DST, both the dataset from the second Dialog State Tracking Challenge (DSTC2), which is in the restaurant domain (Henderson et al., 2014b), and the dataset from the third Dialog State Tracking Challenge (DSTC3), which is in the tourist domain (Henderson et al., 2014a), are used. For both datasets, the dialogue state tracker receives the best SLU hypotheses for each user turn, each hypothesis having a set of act-slot-value tuples with a confidence score. The dialogue state tracker is supposed to output a set of distributions over the dialogue state. In this paper, only joint goal tracking, which is the most difficult and general task of DSTC2/3, is of interest.

2 Bridging Rule-based and Statistical Approaches
Broadly, there are two straightforward ways to bridge rule-based and statistical approaches: one starts from rule-based models, while the other starts from statistical models. CMBP takes the first way, being derived as an extension of a rule-based model (Sun et al., 2014b; Yu et al., 2015). Inspired by the observation that many rule-based models, such as those proposed by Wang and Lemon (2013) and Zilka et al. (2013), are based on Bayes' theorem, the CMBP framework defines a DST rule as a polynomial function of a set of probabilities, since Bayes' theorem is essentially summation and multiplication of probabilities. Here, the polynomial coefficients can be seen as parameters. To make the model perform well on DST, prior knowledge or intuition is encoded into the polynomial functions by setting certain constraints on the polynomial coefficients, and the coefficients can further be optimized in a data-driven way. Therefore, starting from rule-based models, CMBP can directly incorporate prior knowledge or intuition into DST, while at the same time the model is allowed to be data-driven.
More concretely, assuming both slot independence and value independence, a CMBP model can be defined as

$b_{t+1}(v) = \mathcal{P}\big(b_t(v),\, P_{t+1}^{+}(v),\, P_{t+1}^{-}(v),\, \tilde{P}_{t+1}^{+}(v),\, \tilde{P}_{t+1}^{-}(v),\, b_t^r\big)$  (1)

where $b_t(v)$, $P_{t+1}^{+}(v)$, $P_{t+1}^{-}(v)$, $\tilde{P}_{t+1}^{+}(v)$, $\tilde{P}_{t+1}^{-}(v)$, $b_t^r$ are all probabilistic features, defined as below:

$b_t(v)$: belief of "the value being $v$" at turn $t$

$P_{t+1}^{+}(v)$: sum of scores of SLU hypotheses informing or affirming value $v$ at turn $t+1$

$P_{t+1}^{-}(v)$: sum of scores of SLU hypotheses denying or negating value $v$ at turn $t+1$

$\tilde{P}_{t+1}^{+}(v)$: sum of scores of SLU hypotheses informing or affirming values other than $v$ at turn $t+1$

$\tilde{P}_{t+1}^{-}(v)$: sum of scores of SLU hypotheses denying or negating values other than $v$ at turn $t+1$

$b_t^r$: belief of the value being 'None' (the value not mentioned) at turn $t$, i.e. $b_t^r = 1 - \sum_{v'} b_t(v')$

and $\mathcal{P}(\cdot)$ is a multivariate polynomial function^{1}

$\mathcal{P}(x_1, \ldots, x_n) = \sum_{0 \le k_1 \le \cdots \le k_g \le n} c_{k_1 \cdots k_g}\, x_{k_1} \cdots x_{k_g}, \quad x_0 = 1$  (2)

where $n$ is the number of input variables, $g$ is the order of the polynomial, and the $c_{k_1 \cdots k_g}$ are the parameters of CMBP. Order 3 gives a good trade-off between complexity and performance; hence order 3 is used in our previous work (Sun et al., 2014b; Yu et al., 2015) and in this paper.

^{1}The notation $\sum_{0 \le k_1 \le \cdots \le k_g \le n}$ is shorthand for a series of nested sums over bounded ranges, i.e. $\sum_{k_1=0}^{n} \sum_{k_2=k_1}^{n} \cdots \sum_{k_g=k_{g-1}}^{n}$.
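As a concrete illustration, the polynomial of equation (2) can be evaluated by enumerating all non-decreasing index tuples, with index 0 standing for the constant input $x_0 = 1$ so that lower-order monomials appear. The function name and dictionary layout below are illustrative choices, not from the paper:

```python
from itertools import combinations_with_replacement


def eval_poly(coeffs, x, order=3):
    """Evaluate P(x_1, ..., x_n) of the given order.

    `coeffs` maps a non-decreasing index tuple (k_1, ..., k_g), with
    0 <= k_i <= n, to its coefficient c_{k_1...k_g}; `x` holds the n
    feature values, and index 0 denotes the constant feature x_0 = 1.
    """
    xs = [1.0] + list(x)  # prepend x_0 = 1 so lower-order monomials appear
    total = 0.0
    for idx in combinations_with_replacement(range(len(xs)), order):
        c = coeffs.get(idx, 0.0)
        if c:
            prod = 1.0
            for k in idx:
                prod *= xs[k]
            total += c * prod
    return total
```

For instance, with coefficients {(0, 0, 1): 0.5, (1, 2, 3): 1.0} and features (0.2, 0.5, 2.0), the value is 0.5·x₁ + x₁x₂x₃ = 0.3.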
The constraints in equation (1) encode all necessary probabilistic conditions (Yu et al., 2015). For instance, whenever the input features are valid probabilities,

$0 \le b_{t+1}(v) \le 1$  (3)

$\sum_{v} b_{t+1}(v) + b_{t+1}^{r} = 1$  (4)

The constraints in equation (1) also encode prior knowledge or intuition (Yu et al., 2015). For example, the rule "goal belief should be unchanged or positively correlated with the positive scores from SLU" can be represented by

$\dfrac{\partial\, b_{t+1}(v)}{\partial\, P_{t+1}^{+}(v)} \ge 0$  (5)
The definition of CMBP formulates a search space of rule-based models, in which it is easy to employ a data-driven criterion to find a rule-based model with good performance. Since CMBP is originally motivated by Bayesian probability operations, which leads to the natural use of integer polynomial coefficients, the data-driven optimization can be formulated as an integer programming problem (Sun et al., 2014b; Yu et al., 2015). Additionally, CMBP can also be viewed as a statistical approach. Hence, the polynomial coefficients can be extended to real numbers. The optimization of real coefficients can be done by first obtaining an integer-coefficient CMBP and then performing hill-climbing search (Yu et al., 2015).

3 Recurrent Polynomial Network
Recurrent polynomial network, proposed in this paper, takes the other way to bridge rule-based and statistical approaches. The basic idea of RPN is to enable a statistical model to take advantage of prior knowledge or intuition by using the parameters of a rule-based model to initialize the parameters of the statistical model.
Computational networks have been researched for decades, from very basic architectures such as the perceptron (Rosenblatt, 1958) to today's various kinds of deep neural networks. Recurrent computational networks, a class of computational networks with recurrent connections, have also been researched for a long time, from fully recurrent networks to networks with relatively complex structures such as long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997). Like common neural networks, RPN is a statistical approach, so it is as easy to add features and try complex structures in RPN as in neural networks. However, compared with common neural networks, which are "black boxes", an RPN can essentially be seen as a polynomial function. Hence, considering that a CMBP is also a polynomial function, the prior knowledge and intuition encoded in CMBP can be transferred to RPN by using the parameters of CMBP to initialize RPN. In this way, it bridges rule-based models and statistical models.

A recurrent polynomial network is a computational network. The network contains multiple edges and loops. Each node is either an input node, which is used to represent an input value, or a computation node. Each node is assigned an initial value at time 0, and its value is updated at each time $t$. Both the type of edges and the type of nodes determine how the nodes' values are updated. There are two types of edges. One type, referred to as type-1, indicates that the value update at time $t$ takes the value of a node at time $t-1$, i.e. type-1 edges are recurrent edges, while the other type, referred to as type-2, indicates that the value update at time $t$ takes another node's value at time $t$. Except for loops made of type-1 edges, the network should not have loops. For simplicity, let $\mathcal{R}_1(i)$ be the set of nodes which are linked to node $i$ by a type-1 edge, and $\mathcal{R}_2(i)$ be the set of nodes which are linked to node $i$ by a type-2 edge. Based on these definitions, two types of computation nodes, sum and product, are introduced. Specifically, at time $t$, if node $i$ is a sum node, its value $v_i(t)$ is updated by
$v_i(t) = \sum_{j \in \mathcal{R}_1(i)} w_{ij}\, v_j(t-1) + \sum_{j \in \mathcal{R}_2(i)} w_{ij}\, v_j(t)$  (6)

where the $w_{ij}$ are the weights of the edges.

Similarly, if node $i$ is a product node, its value is updated by

$v_i(t) = \prod_{j \in \mathcal{R}_1(i)} v_j(t-1)^{a_{ij}} \prod_{j \in \mathcal{R}_2(i)} v_j(t)^{b_{ij}}$  (7)

where $a_{ij}$ and $b_{ij}$ are integers, denoting the multiplicity of the type-1 edge $(j, i)$ and the multiplicity of the type-2 edge $(j, i)$ respectively. It is noted that only the $w_{ij}$ are parameters of RPN, while the $a_{ij}$ and $b_{ij}$ are constants given the structure of an RPN.
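The two node-update rules, equations (6) and (7), can be sketched directly. The dictionary-based edge representation here (mapping an in-neighbour's index to its weight or multiplicity) is an illustrative choice, not the paper's:

```python
def sum_node_value(w_rec, w_cur, prev_vals, cur_vals):
    """Sum node update (equation (6)): weighted sum over type-1
    (recurrent, reading time t-1) and type-2 (same-time) in-edges."""
    return (sum(w * prev_vals[j] for j, w in w_rec.items())
            + sum(w * cur_vals[j] for j, w in w_cur.items()))


def product_node_value(a_rec, b_cur, prev_vals, cur_vals):
    """Product node update (equation (7)): product of in-neighbour values
    raised to integer edge multiplicities, which are constants of the
    structure rather than trainable parameters."""
    out = 1.0
    for j, a in a_rec.items():
        out *= prev_vals[j] ** a
    for j, b in b_cur.items():
        out *= cur_vals[j] ** b
    return out
```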
Let $\mathbf{v}(t)$ and $\mathbf{u}(t)$ denote the vector of computation nodes' values and the vector of input nodes' values at time $t$ respectively; then a well-defined RPN can be seen as a polynomial function as below:

$\mathbf{v}(t) = \mathcal{P}(\mathbf{v}(t-1), \mathbf{u}(t))$  (8)

where $\mathcal{P}$ is defined by equation (2). For example, for the RPN in figure 2, its corresponding polynomial function is
(9) 
Each computation node can be regarded as an output node; for example, in the RPN in figure 2, the computation nodes can be set as output nodes.
4 RPN for Dialogue State Tracking
As introduced in section 1, in this paper the dialogue state tracker receives the best SLU hypotheses for each user turn, each hypothesis having a set of act-slot-value tuples with a confidence score. The dialogue state tracker is supposed to output a set of distributions over the joint user goal, i.e., the value for each slot. For simplicity and consistency with the work of Sun et al. (2014b) and Yu et al. (2015), slot and value independence are assumed in the RPN model for dialogue state tracking^{2}, though neither CMBP nor RPN is limited to these assumptions. In the rest of the paper, the features are written without their slot, value and turn arguments where there is no ambiguity.

^{2}For the DSTC2/3 tasks, one slot can have at most one value. Since value independence is assumed, to strictly maintain that relation, the belief is rescaled to ensure that the sum of the beliefs of the values plus the belief of 'None' is 1 when the belief is output. Actually, to enable RPN to strictly maintain that relation, our original design of RPN had a "normalization" step when passing the belief from turn to turn, which rescales the belief so that the sum of the beliefs of the values plus the belief of 'None' is 1. Our later experiments, however, demonstrated that there was no significant performance difference between the RPN with and without the "normalization" step. Therefore, in practice, for simplicity, the "normalization" step can be omitted, value independence can be assumed, and the only thing needed is to rescale the belief when it is output.
4.1 Structure
Before describing details of the structure used in real situations, to help understand the correspondence between RPN and CMBP, let us first look at a simplified case with a smaller feature set and a smaller order: the correspondence between the RPN shown in figure 3 and the order-2 polynomial (10) with three features:

(10)

Recall that a CMBP of polynomial order 2 with 3 features has the following form (refer to equation (2)):

$\mathcal{P}(x_1, x_2, x_3) = \sum_{0 \le k_1 \le k_2 \le 3} c_{k_1 k_2}\, x_{k_1} x_{k_2}, \quad x_0 = 1$  (11)

The RPN in figure 3 has three layers. The first layer contains only input nodes. The second layer contains only product nodes. The third layer contains only sum nodes. Every product node in the second layer denotes a monomial of order 2, such as $x_{k_1} x_{k_2}$. Every product node in the second layer is linked to the sum node in the third layer, whose value is a weighted sum of the values of the product nodes. With weights set according to the coefficients in equation (10), the value of the sum node in the third layer is essentially the output of equation (10).
Like the above simplified case, the layered RPN structure shown in figure 4 is used for dialogue state tracking in our first trial, which essentially corresponds to an order-3 CMBP, though the RPN framework is not limited to the layered topology. Recall that a CMBP of polynomial order 3 is used, as shown in the following equation (refer to equation (2)):

$\mathcal{P}(x_1, \ldots, x_n) = \sum_{0 \le k_1 \le k_2 \le k_3 \le n} c_{k_1 k_2 k_3}\, x_{k_1} x_{k_2} x_{k_3}, \quad x_0 = 1$  (12)
Index the nodes by their layer and their position within the layer. The detailed definitions of each layer are as follows:

First layer / Input layer:
The input nodes are the features at turn $t$, which correspond to the variables of CMBP in section 2.
Of the features used in previous work on CMBP (Sun et al., 2014b; Yu et al., 2015), all but one are used in RPN: the feature $b^r$ is removed^{3}. Since our experiments showed that the performance of CMBP would not become worse without $b^r$, it is not used for RPN in this paper, to make the structure more compact. Accordingly, the CMBP mentioned in the rest of the paper does not use this feature either.
^{3}$b^r$ is the belief of the value being 'None', whose precise definition is given in section 2.


Second layer:
The value of every product node in the second layer is a monomial, as in the simplified case, and every product node has in-degree 3, corresponding to the order of the CMBP.
Every monomial in the CMBP is the product of three repeatable features. Correspondingly, the value of every product node in the second layer is the product of the values of three repeatable nodes in the first layer. Every such triple of first-layer nodes is enumerated to create a product node in the second layer to which those nodes are linked.
Different nodes in the second layer are created by distinct triples. So, given the 6 input features, there are 56 nodes in the second layer.
To simplify the notation, a bijection from nodes to monomials is defined as:
(13) (14)
where $n$ is the number of nodes in the first layer, i.e. the input feature dimension.

Third layer:
The value of the sum node in the third layer corresponds to the output value of the CMBP.
Every product node in the second layer is linked to it. The sum node's value is a weighted sum of the values of the product nodes, where the weights correspond to the coefficients in equation (12).
With only sum and product operations involved, every node's value is essentially a polynomial of the input features. And, just as in a recurrent neural network, a node at time $t-1$ can be linked to a node at time $t$. That is why this model is called a recurrent polynomial network.
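The size of the second layer can be checked directly: each product node corresponds to an unordered triple of (repeatable) first-layer nodes, i.e. a multiset of size 3, so 6 first-layer inputs give 56 product nodes and 10 inputs give 220, matching the counts reported in this paper. A quick sketch:

```python
from itertools import combinations_with_replacement


def num_product_nodes(n_inputs, order=3):
    """Count second-layer product nodes: one per multiset of `order`
    first-layer nodes, i.e. C(n_inputs + order - 1, order)."""
    return sum(1 for _ in combinations_with_replacement(range(n_inputs), order))
```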
4.2 Activation Function
In DST, the output value is a belief, which should lie in $[0, 1]$, while the values of computation nodes in an RPN are not bounded by any interval. Experiments showed that if the weights are not properly set and a belief output by the RPN is larger than 1, subsequent beliefs may grow much larger, because each is a weighted sum of monomials of earlier beliefs; beliefs at later turns will then tend to infinity.
Therefore, an activation function is needed to map the raw output to a legal belief value in $[0, 1]$. Three kinds of functions have been considered: the logistic function, the clip function, and the soft-clip function. A logistic function is defined as

$f(x) = \dfrac{1}{1 + e^{-k(x - x_0)}}$  (15)

It can map any real value to $(0, 1)$. However, since the RPN designed for dialogue state tracking basically performs a similar operation to CMBP, which is motivated by Bayesian probability operations (Yu et al., 2015), intuitively we expect the activation function to be linear on $[0, 1]$ so that little distortion is added to the belief.
As an alternative, a clip function is defined as

$\mathrm{clip}(x) = \max(0, \min(1, x))$  (16)

It is linear on $[0, 1]$. However, if $x \notin [0, 1]$ and $E$ is the loss function, then

$\dfrac{\partial E}{\partial x} = \dfrac{\partial E}{\partial\, \mathrm{clip}(x)} \cdot \dfrac{\partial\, \mathrm{clip}(x)}{\partial x} = 0$  (17)

Thus, the gradient would be 0 whatever the error is. This gradient-vanishing phenomenon may affect the effectiveness of the backpropagation training in section 4.5.

So a soft-clip activation function is introduced, which is a combination of the logistic function and the clip function. Let $\varepsilon$ denote a small value such as 0.01, and let $\delta$
denote the offset of the sigmoid function such that $\sigma(\delta) = \varepsilon$. Here the sigmoid function refers to the special case of the logistic function defined by the formula

$\sigma(x) = \dfrac{1}{1 + e^{-x}}$  (18)
The soft-clip function is defined as

(19)

$\mathrm{softclip}(x)$ is a non-decreasing, continuous function. However, it is not differentiable at $x = \varepsilon$ or $x = 1 - \varepsilon$, so we define its derivative as follows:

(20)

It behaves like a clip function; however, its derivative may be small for some inputs but is never zero. Figure 5 shows the comparison among the clip function, the logistic function, and the soft-clip function. In practice, the soft-clip function has demonstrated better performance than both the clip and logistic functions^{4}, and it is used in the rest of the paper.

^{4}An experiment was done on the DSTC2 dataset where RPNs using different activation functions were trained on dstc2trn and tested on dstc2dev. The accuracy and L2 of RPNs with the clip, logistic, and soft-clip functions were (0.779, 0.329), (0.789, 0.352), and (0.790, 0.317) respectively. In particular, logistic functions with several different parameters were evaluated, and the result reported here is the best one.
With the activation function, a new type of computation node, referred to as an activation node, is introduced. An activation node takes only one input and has only one input edge, of type-2, i.e. $|\mathcal{R}_1(i)| = 0$ and $|\mathcal{R}_2(i)| = 1$. The value of an activation node $i$ is calculated as

$v_i(t) = \mathrm{softclip}(v_j(t))$  (21)

where $j$ denotes the input node of node $i$, i.e. $\mathcal{R}_2(i) = \{j\}$.
4.3 Further Exploration on Structure
Adding features to CMBP is not easy, because additional prior knowledge is needed to keep the search space from becoming too large. Concretely, adding features introduces new monomials. Since the naive search space grows exponentially with the number of monomials, the search space tends to become too large to explore when new features are added. Hence, to reduce the search space, additional prior knowledge is needed, which introduces new constraints on the polynomial coefficients. For the same reason, increasing the model complexity of CMBP also requires additional suitable prior knowledge to keep the search space manageable.
In contrast, since RPN can be seen as a statistical model, it is as easy to add new features and use more complex structures in RPN as in most statistical approaches such as RNN. At the same time, no matter what new features are used and how complex the structure is, RPN can always take advantage of prior knowledge and intuition, as discussed in section 4.4. In this paper, both new features and more complex structures are explored.
Adding new features can be done by simply adding input nodes corresponding to the new features, and then adding product nodes corresponding to the new possible monomials introduced by those features. In this paper, for slot $s$ and value $v$ at turn $t$, in addition to the probabilistic features defined in section 2, 4 new features are investigated. The first two are features of the system acts at the last turn: for slot $s$ and value $v$ at turn $t$,

the first feature takes value 1 if the system cannot offer a venue with the constraint, or the value of slot $s$ is not known for the selected venue, and 0 otherwise;

the second feature takes value 1 if the system asks the user to pick a suggested value for slot $s$, and 0 otherwise.

These two features are introduced because the user is likely to change their goal when given such machine acts. The other two are features of the user acts at the current turn: for slot $s$ and value $v$ at turn $t$,

the third feature takes value 1 if one of the SLU hypotheses from the user informs that slot $s$ is $v$, and 0 otherwise;

the fourth feature takes value 1 if one of the SLU hypotheses from the user denies that slot $s$ is $v$, and 0 otherwise.

The latter two features are about SLU act type, introduced to make the system robust when the confidence scores of the SLU hypotheses are not reliable.
In this paper, the complexity of evaluating and training RPN for DST does not increase sharply, because a constant order of 3 is used and the number of product nodes in the second layer grows only from 56 to 220 when the number of features grows from 6 to 10.
In addition to new features, an RPN with a more complex structure is also investigated in this paper. To capture properties of the dialogue process beyond the belief, a new sum node in the third layer is introduced. Its connections are the same as those of the original sum node, so it introduces a new recurrent connection. The exact meaning of its value is unknown; however, apart from the belief, it is the only value used to record information about previous turns, since every other input feature describes only the current turn $t$. Compared with the belief node, there are fewer restrictions on the new node's value, since it is not directly supervised by the labels. Hence, introducing it may help to reduce the effect of inaccurate labels.
4.4 RPN Initialization
Like most neural network models such as RNN, an RPN can be initialized by setting each weight, i.e. the $w_{ij}$, to a small random value. However, thanks to its unique structure, the initialization can be done much better by taking advantage of the relationship between CMBP and RPN introduced in section 4.1.

When an RPN is initialized according to a CMBP, prior knowledge and constraints are used to place the RPN's initial parameters at a good, if suboptimal, point in the parameter space. RPN, as a statistical model, can then fully utilize the advantages of statistical approaches. Moreover, RPN makes better use of the data than real-coefficient CMBP, although both use data samples to train their parameters. In the work of Yu et al. (2015), real-coefficient CMBP uses hill climbing to adjust only the parameters that are initially non-zero, and the change of a parameter is always a multiple of 0.1. RPN can adjust all parameters concurrently, including those initialized to 0, while the complexity of adjusting all parameters concurrently is nearly the same as that of adjusting one parameter in CMBP. Besides, the change of a parameter can be large or small, depending on the learning rate. Thus, RPN and CMBP both bridge rule-based and statistical models, but RPN is a statistical model utilizing rule-based advantages, while CMBP is a rule-based model utilizing statistical advantages.
In fact, given a CMBP, an RPN can achieve the same performance as the CMBP simply by setting its weights according to the coefficients of the CMBP. To illustrate this, the steps of initializing the RPN in figure 7 with a CMBP are described below.

First, to ensure that the newly added sum node does not influence the output of the RPN with the initial parameters, all of its incoming weights are set to 0, so its value is always 0.

Next, since the RPN in figure 7 has more features than the CMBP does, the weights related to the new features should be set to 0. Specifically, let the sum node in the third layer denote the belief before activation, and consider a product node in the second layer denoting a monomial: if the product node involves any of the new features or the added sum node, then its value is not a monomial of the CMBP, and the corresponding weight should be set to 0.

Finally, if a product node is the product of CMBP features only, the weight linking it to the sum node should be initialized to the coefficient of the corresponding monomial in the CMBP. Thus,

(22)

For RPNs with other structures, the initialization can be done by following similar steps.
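The initialization steps above can be sketched as follows, using a multiset-of-indices weight layout; the index convention (0 for the constant input, 1..n_old for CMBP features, the rest for new inputs) is a hypothetical choice for illustration:

```python
from itertools import combinations_with_replacement


def init_from_cmbp(cmbp_coeffs, n_old, n_new, order=3):
    """Third-layer weights of the RPN, one per product node (a multiset of
    `order` first-layer indices). Nodes built purely from CMBP features
    (indices <= n_old) inherit the CMBP coefficient; any node touching a
    new input starts at 0, so the initial RPN computes exactly the CMBP."""
    n_total = 1 + n_old + n_new  # index 0 is the constant input x_0 = 1
    weights = {}
    for idx in combinations_with_replacement(range(n_total), order):
        if all(k <= n_old for k in idx):
            weights[idx] = cmbp_coeffs.get(idx, 0.0)
        else:
            weights[idx] = 0.0
    return weights
```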
Experiments show that after training, only a few weights are larger than 0.1, whether CMBP or random initialization is used.
4.5 Training RPN
Suppose $T_d$ is the number of turns in dialogue $d$, $V_{t,d}$ is the set of values corresponding to slot $s$ appearing in the SLU hypotheses at turn $t$ of dialogue $d$, $b_{t,d}(v)$ is the output belief of value $v$ at turn $t$ of dialogue $d$, and $l_{t,d}(v)$ is the indicator of goal $v$ being part of the joint goal at turn $t$ in the label of dialogue $d$. The cost function is defined as

$E = \sum_{d} \sum_{t=1}^{T_d} \sum_{v \in V_{t,d}} \big(b_{t,d}(v) - l_{t,d}(v)\big)^2$  (23)
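The cost of equation (23) sums squared belief errors over dialogues, turns, and the values mentioned in the SLU hypotheses. A minimal sketch, with a nested-dictionary layout chosen purely for illustration:

```python
def dst_cost(beliefs, labels):
    """MSE-style cost in the spirit of equation (23): beliefs[d][t][v] is
    the output belief of value v at turn t of dialogue d; labels[d][t] maps
    a value to 1 when it is part of the labelled joint goal. Only values
    present in `beliefs` (i.e. mentioned in SLU hypotheses) contribute."""
    cost = 0.0
    for d, dialogue in beliefs.items():
        for t, turn in dialogue.items():
            for v, b in turn.items():
                y = labels[d][t].get(v, 0)
                cost += (b - y) ** 2
    return cost
```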
The training process for a mini-batch can be divided into two parts: a forward pass and a backward pass.
Forward Pass
For each training sample, every node's value at every time step is evaluated first. Before evaluating node $i$'s value, the values of the nodes in $\mathcal{R}_1(i)$ and $\mathcal{R}_2(i)$ must be evaluated. The computation formula depends on the type of node $i$. In particular, for a layered RPN structure, we can simply evaluate node $j$ earlier than node $i$ if $j \in \mathcal{R}_1(i)$, or if $j \in \mathcal{R}_2(i)$ and $j$'s layer number is smaller than $i$'s.
Backward Pass
Backpropagation through time (BPTT) is used in training RPN. Let the error of node $i$ at time $t$ be $\delta_i(t)$. If a node is an output node, its error should be initialized according to its label and output value; otherwise its error should be initialized to 0. After a node's error is determined, it can be passed to the nodes in $\mathcal{R}_1(i)$ and $\mathcal{R}_2(i)$. Error passing follows the reversed direction of the edges, so the order in which nodes pass their errors can follow the reverse of the order in which the nodes' values were evaluated.
When every has been evaluated, the increment on weight can be calculated by
(24)  
where is the learning rate. can be evaluated similarly.
Note that only and are parameters of RPN.
The complete formulas for evaluating node values and passing errors can be found in the appendix.
In this paper, mini-batch training is used for RPN for DST. In each training epoch, the gradients are calculated for every training sample and accumulated, and the weights are updated by

(25)

(26)
The pseudocode of training is shown in algorithm 1.
5 Experiment
As introduced in section 1, the DSTC2 and DSTC3 tasks are used in this paper to evaluate the proposed approach. Both tasks provide training dialogues with turn-level ASR hypotheses, SLU hypotheses and user goal labels. The DSTC2 task provides 2118 training dialogues in the restaurant domain (Henderson et al., 2014b), while in DSTC3, only 10 in-domain training dialogues in the tourist domain are provided, because the DSTC3 task is to adapt a tracker trained on DSTC2 data to the new domain with very few dialogues (Henderson et al., 2014a). Table 1 summarizes the sizes of the DSTC2 and DSTC3 datasets.
Task   | Dataset   | #Dialogues | Usage
-------|-----------|------------|---------
DSTC2  | dstc2trn  | 1612       | Training
       | dstc2dev  | 506        | Training
       | dstc2eval | 1117       | Test
DSTC3  | dstc3seed | 10         | Not used
       | dstc3eval | 2265       | Test
The DST evaluation criteria are the joint goal accuracy and L2 (Henderson et al., 2014b,a). Accuracy is defined as the fraction of turns in which the tracker's 1-best joint goal hypothesis is correct; the larger, the better. L2 is the L2 norm between the distribution over all hypotheses output by the tracker and the correct goal distribution (a delta function); the smaller, the better. Besides, schedule 2 and labelling scheme A defined in Henderson et al. (2013) are used in both tasks. Specifically, schedule 2 only counts the turns in which new information about some slot is observed, either in a system confirmation action or in the SLU list. Under labelling scheme A, the labelled state is accumulated forwards through the whole dialogue: for example, the goal for a slot is "None" until a value is informed by the user, and from then on it is labelled with that value until it is informed otherwise.
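The two metrics can be sketched as follows; the data layout (a list of (hypothesis-distribution, correct-goal) pairs, one per scored turn) is an illustrative assumption:

```python
def joint_goal_metrics(turns):
    """Per-turn joint goal accuracy and average L2 distance between the
    tracker's output distribution and the delta function on the label."""
    correct, l2_total = 0, 0.0
    for hyp_dist, true_goal in turns:
        # accuracy: is the 1-best hypothesis the labelled joint goal?
        if max(hyp_dist, key=hyp_dist.get) == true_goal:
            correct += 1
        # L2 norm against the delta distribution on the correct goal
        sq = sum((p - (1.0 if h == true_goal else 0.0)) ** 2
                 for h, p in hyp_dist.items())
        if true_goal not in hyp_dist:
            sq += 1.0  # the delta mass falls on an unlisted hypothesis
        l2_total += sq ** 0.5
    n = len(turns)
    return correct / n, l2_total / n
```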
It has been shown that the organiser-provided live SLU confidence was not good enough (Zhu et al., 2014; Sun et al., 2014a). Hence, most of the state-of-the-art results on DSTC2 and DSTC3 used refined SLU: they either explicitly rebuilt an SLU component or took the ASR hypotheses into the trackers (Williams, 2014; Sun et al., 2014a; Henderson et al., 2014d,c; Kadlec et al., 2014; Sun et al., 2014b). In accordance with this, except for the results taken directly from other papers (shown in tables 5 and 6), all experiments in this paper used the output of a refined semantic parser (Zhu et al., 2014; Sun et al., 2014a) instead of the live SLU provided by the organizers.
For all experiments, MSE^{5} is used as the training criterion and full-batch training is used. For both the DSTC2 and DSTC3 tasks, dstc2trn and dstc2dev are used, with 60% of the data used for training and 40% for validation, unless otherwise stated. Validation is performed every 5 epochs. The learning rate is set to 0.6 initially. During training, the learning rate is halved each time the validation performance does not increase. Training is stopped when the learning rate is sufficiently small or when the maximum number of training epochs is reached; here, the maximum number of training epochs is set to 40. L2 regularization is used for all the experiments^{6}.

^{5}MSE is chosen for two reasons: (i) MSE directly reflects the L2 performance, which is one of the main metrics in the DSTCs; (ii) experiments have shown that other criteria, such as the cross-entropy loss, do not lead to better performance.

^{6}The parameter of the L2 regularization is set to the one leading to the best performance on dstc2dev when trained on dstc2trn. L1 regularization is not used since it does not yield better performance.
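The schedule described above (validate every 5 epochs, halve the learning rate when the validation score stops improving, stop when the rate is tiny or 40 epochs are reached) can be sketched generically; all names here are illustrative, with grad_fn and dev_eval standing in for the full-batch gradient and the validation score:

```python
def train_schedule(params, grad_fn, dev_eval,
                   max_epochs=40, lr0=0.6, min_lr=1e-3, val_every=5):
    """Full-batch gradient descent with validation-driven rate halving."""
    params = dict(params)
    lr = lr0
    best = float("-inf")
    for epoch in range(1, max_epochs + 1):
        grad = grad_fn(params)
        params = {k: v - lr * grad.get(k, 0.0) for k, v in params.items()}
        if epoch % val_every == 0:
            score = dev_eval(params)
            if score <= best:
                lr /= 2.0  # performance did not increase: halve the rate
            else:
                best = score
        if lr < min_lr:  # learning rate sufficiently small: stop
            break
    return params
```

As a design note, halving rather than a fixed decay lets the schedule adapt to how quickly the validation score saturates.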
5.1 Investigation on RPN Configurations
This section describes the experiments comparing different configurations of RPN. All experiments were performed on both the DSTC2 and DSTC3 tasks.
As indicated in section 4.4, an RPN can be initialized by a CMBP. Table 2 compares initialization with a CMBP against initialization with random values; the structure shown in figure 6 is used in this experiment. The random initialization scheme reported is the one with the best performance on dstc2dev when trained on dstc2trn, among the various random schemes tried; its performance is averaged over 10 different random seeds, with standard deviations given in parentheses.
Initialization | dstc2eval Acc | dstc2eval L2 | dstc3eval Acc | dstc3eval L2
Random         | 0.753 (0.008) | 0.468 (0.020) | 0.633 (0.005) | 0.667 (0.026)
CMBP           | 0.756         | 0.373         | 0.648         | 0.553

Table 2: Performance of RPN with random initialization and with CMBP initialization.
The performance of the RPN initialized with random values is compared with that of the RPN initialized with the integer-coefficient CMBP. Here, the CMBP used is the one with the best performance on dstc2dev when trained on dstc2trn. It can be seen from table 2 that the RPN initialized with the CMBP coefficients outperforms the RPN initialized with random values, moderately on dstc2eval and significantly on dstc3eval. This demonstrates that the prior knowledge and intuition encoded in the CMBP can be transferred to the RPN to improve its performance, which is one of RPN's advantages: bridging rule-based and statistical models. In the rest of the experiments, all RPNs use CMBP coefficients for initialization.
Since section 4.3 shows that it is convenient to add features and to try more complex structures, it is interesting to investigate RPNs with different feature sets and structures, as shown in table 3. While no obvious correlation between performance and the different configurations can be observed on dstc2eval and dstc3eval, RPNs with the new features and new recurrent connections achieved slightly better performance. Thus, in the rest of the paper, both the new features and the new recurrent connections are used in RPN, unless otherwise stated.
Feature Set | New Recurrent Connections | dstc2eval Acc | dstc2eval L2 | dstc3eval Acc | dstc3eval L2
Original    | No  | 0.756 | 0.373 | 0.648 | 0.553
New         | No  | 0.757 | 0.374 | 0.650 | 0.557
Original    | Yes | 0.756 | 0.373 | 0.648 | 0.553
New         | Yes | 0.757 | 0.374 | 0.650 | 0.549

Table 3: Performance of RPN with different feature sets and structures.
5.2 Comparison with Other DST Approaches
The previous subsection investigated how to obtain the RPN with the best configuration. In this subsection, the performance of RPN is compared to both rule-based and statistical approaches. To make a fair comparison, all statistical models, together with RPN, use similar feature sets in this subsection. Altogether, 2 rule-based trackers and 3 statistical trackers were built for the performance comparison.
Type        | System  | dstc2eval Acc | dstc2eval L2 | dstc3eval Acc | dstc3eval L2
Rule        | MaxConf | 0.668 | 0.647 | 0.548 | 0.861
Rule        | HWU     | 0.720 | 0.445 | 0.594 | 0.570
Statistical | DNN     | 0.719 | 0.469 | 0.628 | 0.556
Statistical | MaxEnt  | 0.710 | 0.431 | 0.607 | 0.563
Statistical | LSTM    | 0.736 | 0.418 | 0.632 | 0.549
Mixed       | CMBP    | 0.755 | 0.372 | 0.627 | 0.546
Mixed       | RPN     | 0.756 | 0.373 | 0.648 | 0.553

Table 4: Performance of RPN compared with rule-based and statistical trackers.
MaxConf is a rule-based model commonly used in spoken dialogue systems, which always selects the value with the highest confidence score observed from the first turn to the current turn. It was used as one of the primary baselines in DSTC2 and DSTC3.

HWU is a rule-based model proposed by Wang and Lemon (2013). It is regarded as a simple yet competitive baseline in DSTC2 and DSTC3.

DNN is a statistical model using a deep neural network (Sun et al., 2014a) with the same probability features as RPN. Since the DNN has no recurrent structure while RPN does, to account for this fairly, the DNN feature set at each turn additionally includes the highest confidence score accumulated over the preceding turns. The DNN has 3 hidden layers with 64 nodes per layer.

MaxEnt is also a statistical model, using a maximum entropy model (Sun et al., 2014a) with the same input features as DNN.

LSTM is another statistical model, using the long short-term memory model (Hochreiter and Schmidhuber, 1997) with the same input features as RPN. It has a similar structure to the DNN model (Sun et al., 2014a), except that its hidden layers use LSTM blocks. The LSTM used here has 3 hidden layers with 100 LSTM blocks per layer.
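The MaxConf rule described above is simple enough to sketch directly. The per-turn input format (a list of (slot, value, score) SLU hypotheses) is an illustrative assumption, not the DSTC data format.

```python
# Hedged sketch of the MaxConf baseline: for each slot, keep whichever value
# has carried the single highest SLU confidence score in any turn so far.
# The (slot, value, score) tuple format is illustrative.

def maxconf_track(turns):
    """Yield the joint goal (slot -> value) after each turn."""
    best_score = {}   # slot -> highest confidence seen so far
    goal = {}         # slot -> value attached to that score
    for slu_hyps in turns:                   # one SLU hypothesis list per turn
        for slot, value, score in slu_hyps:
            if score > best_score.get(slot, 0.0):
                best_score[slot] = score
                goal[slot] = value
        yield dict(goal)
```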
It can be observed that, with similar feature sets, RPN outperforms both the rule-based and the statistical approaches in terms of joint goal accuracy. Statistical significance tests were also performed, assuming a binomial distribution for each turn; RPN was shown to significantly outperform both the rule-based and the statistical approaches at the 95% confidence level. In terms of L2, RPN is competitive with both.
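A minimal sketch of such a significance test follows, assuming both trackers are evaluated on the same n turns and using the normal approximation to the binomial (a two-proportion z-test); the function name and the one-sided 1.96 threshold are illustrative choices, not necessarily the exact test used in the paper.

```python
import math

# Each turn's correctness is treated as a Bernoulli trial; two trackers'
# accuracies over the same n turns are compared with a two-proportion z-test
# (normal approximation to the binomial). The threshold 1.96 corresponds to
# the 95% confidence level.

def significantly_better(correct_a, correct_b, n):
    p_a, p_b = correct_a / n, correct_b / n
    p_pool = (correct_a + correct_b) / (2 * n)          # pooled proportion
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)       # standard error
    z = (p_a - p_b) / se
    return z > 1.96   # one-sided test at the 95% confidence level
```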
5.3 Comparison with State-of-the-art DSTC Trackers
In the DSTCs, the state-of-the-art trackers mostly employed statistical approaches, usually with richer feature sets and more complicated model structures than the statistical models in section 5.2. In this section, the proposed RPN approach is compared to the best submitted trackers in DSTC2/3 and the best CMBP trackers, regardless of the fairness of feature selection and the SLU refinement approach. The results are shown in tables 5 and 6. Note that the structure shown in figure 7, with a richer feature set and a new recurrent connection, is used here.

System                   | Approach   | Rank | Acc   | L2
Baseline*                | Rule       | 5    | 0.719 | 0.464
Williams (2014)          | LambdaMART | 1    | 0.784 | 0.735
Henderson et al. (2014d) | RNN        | 2    | 0.768 | 0.346
Sun et al. (2014a)       | DNN        | 3    | 0.750 | 0.416
Yu et al. (2015)         | Real CMBP  | 2.5  | 0.762 | 0.436
RPN                      | RPN        | 2.5  | 0.757 | 0.374

Table 5: Comparison with the best submitted trackers in DSTC2 (dstc2eval).
Note that, in DSTC2, the system of Williams (2014) employed batch ASR hypothesis information (i.e. offline re-decoded ASR results) and cannot be used as a normal online model in practice. Hence, the best practical tracker is that of Henderson et al. (2014d). It can be observed from table 5 that RPN ranks second only to the best practical tracker among the submitted DSTC2 trackers in both accuracy and L2. Considering that RPN uses only probabilistic features plus very limited added features, and can operate very efficiently, it is quite competitive.
System                   | Approach  | Rank | Acc   | L2
Baseline*                | Rule      | 6    | 0.575 | 0.691
Henderson et al. (2014c) | RNN       | 1    | 0.646 | 0.538
Kadlec et al. (2014)     | Rule      | 2    | 0.630 | 0.627
Sun et al. (2014b)       | Int CMBP  | 3    | 0.610 | 0.556
Yu et al. (2015)         | Real CMBP | 1.5  | 0.634 | 0.579
RPN                      | RPN       | 0.5  | 0.650 | 0.549

Table 6: Comparison with the best submitted trackers in DSTC3 (dstc3eval).
It can be seen from table 6 that RPN trained on DSTC2 achieves state-of-the-art performance on DSTC3 without modifying the tracking method (only the semantic parser is refined for DSTC3 (Zhu et al., 2014)), outperforming all the submitted trackers in DSTC3, including the RNN system. This demonstrates that RPN successfully inherits the good generalization ability of rule-based models. Considering that the feature set and structure of the RPN in this paper are relatively simple, future work will investigate richer features and more complex structures.
6 Conclusion
This paper proposes a novel hybrid framework, referred to as the recurrent polynomial network (RPN), to bridge rule-based models and statistical approaches. With the ability to incorporate prior knowledge into a statistical framework, RPN has the advantages of both rule-based and statistical approaches. Experiments on two DSTC tasks showed that the proposed approach is not only more stable than many major statistical approaches, but also competitive in performance, outperforming many state-of-the-art trackers.
Since the RPN in this paper only used probabilistic features and very limited added features, its performance can be influenced by how reliable the SLU confidence scores are. Therefore, future work will investigate the influence of the SLU on RPN's performance, as well as richer features for RPN. Moreover, future work will also address applying RPN to other domains, such as the bus timetable domain in DSTC1, and a theoretical analysis of RPN.
References
 Cloete and Zurada (2000) Ian Cloete and Jacek M Zurada. Knowledge-based neurocomputing. MIT Press, 2000.

 Deng et al. (2013) Li Deng, Jinyu Li, Jui-Ting Huang, Kaisheng Yao, Dong Yu, Frank Seide, Michael Seltzer, Geoff Zweig, Xiaodong He, Jason Williams, et al. Recent advances in deep learning for speech research at Microsoft. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8604–8608. IEEE, 2013.
 Gašić and Young (2011) Milica Gašić and Steve Young. Effective handling of dialogue state in the hidden information state POMDP-based dialogue manager. ACM Transactions on Speech and Language Processing (TSLP), 7(3):4, 2011.
 Henderson et al. (2013) Matthew Henderson, Blaise Thomson, and Jason Williams. Dialog state tracking challenge 2 & 3. 2013.
 Henderson et al. (2014a) Matthew Henderson, Blaise Thomson, and Jason D. Williams. The third dialog state tracking challenge. In Proceedings of IEEE Spoken Language Technology Workshop (SLT), December 2014a.
 Henderson et al. (2014b) Matthew Henderson, Blaise Thomson, and Jason D Williams. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272, Philadelphia, PA, U.S.A., June 2014b. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W144337.
 Henderson et al. (2014c) Matthew Henderson, Blaise Thomson, and Steve Young. Robust dialog state tracking using delexicalised recurrent neural networks and unsupervised adaptation. In Proceedings of IEEE Spoken Language Technology Workshop (SLT), December 2014c.
 Henderson et al. (2014d) Matthew Henderson, Blaise Thomson, and Steve Young. Wordbased dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292–299, Philadelphia, PA, U.S.A., June 2014d. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W144340.
 Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
 Kadlec et al. (2014) Rudolf Kadlec, Miroslav Vodolán, Jindrich Libovický, Jan Macek, and Jan Kleindienst. Knowledge-based dialog state tracking. In Proceedings 2014 IEEE Spoken Language Technology Workshop, South Lake Tahoe, USA, December 2014.
 Lee (2013) Sungjin Lee. Structured discriminative model for dialog state tracking. In Proceedings of the SIGDIAL 2013 Conference, pages 442–451, Metz, France, August 2013. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W13/W134069.
 Lee and Eskenazi (2013) Sungjin Lee and Maxine Eskenazi. Recipe for building robust spoken dialog state trackers: Dialog state tracking challenge system description. In Proceedings of the SIGDIAL 2013 Conference, pages 414–422, Metz, France, August 2013. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W13/W134066.
 Rosenblatt (1958) Frank Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386, 1958.
 Roy et al. (2000) Nicholas Roy, Joelle Pineau, and Sebastian Thrun. Spoken dialogue management using probabilistic reasoning. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 93–100. Association for Computational Linguistics, 2000.
 Sun et al. (2014a) Kai Sun, Lu Chen, Su Zhu, and Kai Yu. The SJTU system for dialog state tracking challenge 2. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 318–326, Philadelphia, PA, U.S.A., June 2014a. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W144343.
 Sun et al. (2014b) Kai Sun, Lu Chen, Su Zhu, and Kai Yu. A generalized rule based tracker for dialogue state tracking. In Proceedings of IEEE Spoken Language Technology Workshop (SLT), December 2014b.
 Thomson and Young (2010) Blaise Thomson and Steve Young. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech & Language, 24(4):562–588, 2010.
 Wang and Lemon (2013) Zhuoran Wang and Oliver Lemon. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference, pages 423–432, Metz, France, August 2013. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W13/W134067.
 Williams et al. (2013) Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404–413, Metz, France, August 2013. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W13/W134065.
 Williams (2012) Jason D Williams. Challenges and opportunities for state tracking in statistical spoken dialog systems: Results from two public deployments. Selected Topics in Signal Processing, IEEE Journal of, 6(8):959–970, 2012.
 Williams (2014) Jason D Williams. Web-style ranking and SLU combination for dialog state tracking. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 282–291, Philadelphia, PA, U.S.A., June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W144339.
 Williams and Young (2005) Jason D Williams and Steve Young. Scaling up POMDPs for dialog management: The "summary POMDP" method. In Automatic Speech Recognition and Understanding, 2005 IEEE Workshop on, pages 177–182. IEEE, 2005.
 Williams and Young (2007) Jason D Williams and Steve Young. Partially observable Markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393–422, 2007.
 Young et al. (2010) Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. The hidden information state model: A practical framework for POMDPbased spoken dialogue management. Computer Speech & Language, 24(2):150–174, 2010.
 Yu et al. (2015) Kai Yu, Kai Sun, Lu Chen, and Su Zhu. Constrained Markov Bayesian polynomial for efficient dialogue state tracking. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(12):2177–2188, December 2015.
 Zhang et al. (2001) Bo Zhang, Qingsheng Cai, Jianfeng Mao, Eric Chang, and Baining Guo. Spoken dialogue management as planning and acting under uncertainty. In INTERSPEECH, pages 2169–2172, 2001.
 Zhu et al. (2014) Su Zhu, Lu Chen, Kai Sun, Da Zheng, and Kai Yu. Semantic parser enhancement for dialogue domain extension with little data. In Proceedings of IEEE Spoken Language Technology Workshop (SLT), December 2014.
 Zilka et al. (2013) Lukas Zilka, David Marek, Matej Korvas, and Filip Jurcicek. Comparison of bayesian discriminative and generative models for dialogue state tracking. In Proceedings of the SIGDIAL 2013 Conference, pages 452–456, Metz, France, August 2013. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W13/W134070.
Appendix
Derivative calculation
Using MSE as the criterion, the error of the output node $o$ is initialized as follows (the symbols used in this section, such as $v$, $u$, $w$ and $\delta$, follow the definitions in section 3; for notational simplicity, each sum and product node is taken to have two inputs $i$ and $j$):

$\delta_o = 2(v_o - r)$,  (27)

where $r$ is the reference value. Suppose node $k$ is an activation node with $v_k = f(u_k)$; then the error is propagated through the activation as

$\delta_{u_k} = \delta_k \, f'(u_k)$.  (28)

Suppose node $k$ is a sum node with $v_k = w_i v_i + w_j v_j$. When node $k$ passes its error back, the error of node $i$ is updated as

$\delta_i \leftarrow \delta_i + w_i \delta_k$.  (29)

Similarly, the error of node $j$ is updated as

$\delta_j \leftarrow \delta_j + w_j \delta_k$.  (30)

Suppose node $k$ is a product node with $v_k = v_i v_j$. When node $k$ passes its error back, the error of node $i$ is updated as

$\delta_i \leftarrow \delta_i + v_j \delta_k$.  (31)

Similarly, the error of node $j$ is updated as

$\delta_j \leftarrow \delta_j + v_i \delta_k$.  (32)
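The error-initialisation and product-node update rules above can be checked numerically against finite differences. The sketch below assumes a single binary-input product node feeding the MSE loss, with generic names (`v_i`, `delta_k`); it is an illustration of the derivation, not the paper's implementation.

```python
# Numeric sanity check of the backpropagation rules for a product node
# v_k = v_i * v_j under the MSE criterion: the analytic updates
# delta_i = v_j * delta_k and delta_j = v_i * delta_k (eqs. (31), (32)),
# with delta_k = 2 * (v_k - target) (eq. (27)), are compared against
# central finite differences of the loss.

def loss(v_i, v_j, target):
    v_k = v_i * v_j                 # product node
    return (v_k - target) ** 2      # MSE at the output

def grads(v_i, v_j, target):
    delta_k = 2 * (v_i * v_j - target)   # error initialisation
    return v_j * delta_k, v_i * delta_k  # product-node error updates

def finite_diff(v_i, v_j, target, eps=1e-6):
    gi = (loss(v_i + eps, v_j, target) - loss(v_i - eps, v_j, target)) / (2 * eps)
    gj = (loss(v_i, v_j + eps, target) - loss(v_i, v_j - eps, target)) / (2 * eps)
    return gi, gj
```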