Over the past decade, goal-oriented spoken dialogue systems (SDS), such as the virtual personal assistants Microsoft’s Cortana and Apple’s Siri, have been incorporated into various devices, allowing users to speak to systems freely in order to finish tasks more efficiently. A key component of these conversational systems is the natural language understanding (NLU) module, which performs the targeted understanding of human speech directed at machines [Tur and De Mori2011]. The goal of such “targeted” understanding is to convert the recognized user speech, at each turn, into a task-specific semantic representation of the user’s intention that aligns with the back-end knowledge and action sources for task completion. The dialogue manager then interprets the semantics of the user’s request and the associated back-end results, and decides the most appropriate system action by exploiting semantic context and user-specific meta-information, such as geo-location and personal preferences [McTear2004, Rudnicky and Xu1999].
A typical NLU pipeline includes domain classification, intent determination, and slot filling [Tur and De Mori2011]. NLU first decides the domain of the user’s request given the input utterance and then, based on the domain, predicts the intent and fills the associated slots corresponding to a domain-specific semantic template. For example, Figure 1 shows a user utterance, “show me the flights from seattle to san francisco”, and its semantic frame, find_flight(origin=“seattle”, dest=“san francisco”). It is easy to see the relationship between the origin city and the destination city in this example, even though they do not appear next to each other. Traditionally, domain detection and intent prediction are framed as utterance classification problems, for which classifiers such as support vector machines and maximum entropy models have been employed [Haffner et al.2003, Chelba et al.2003, Chen et al.2014]. Slot filling is then framed as a word sequence tagging task, where the IOB (in-out-begin) format is applied for representing slot tags as illustrated in Figure 1, and hidden Markov models (HMMs) or conditional random fields (CRFs) have been employed for slot tagging [Pieraccini et al.1992, Wang et al.2005].
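To make the IOB scheme concrete, the following is a small hypothetical sketch for the example utterance above; the slot names are chosen to match the find_flight frame, and the decoding helper is ours, not part of any NLU toolkit:

```python
# Hypothetical IOB (in-out-begin) tagging of the Figure 1 example utterance.
# "B-" marks the beginning of a slot, "I-" its continuation, and "O" non-slot words.
words = "show me the flights from seattle to san francisco".split()
tags = ["O", "O", "O", "O", "O", "B-origin", "O", "B-dest", "I-dest"]

def decode_slots(words, tags):
    """Collect (slot, value) pairs from an IOB tag sequence."""
    slots, current = [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            current = [tag[2:], [word]]
            slots.append(current)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(word)
        else:
            current = None
    return [(name, " ".join(ws)) for name, ws in slots]

print(decode_slots(words, tags))  # [('origin', 'seattle'), ('dest', 'san francisco')]
```

Note how the multi-word value “san francisco” is recovered from the B-/I- pair, which is exactly why sequence tagging rather than per-word classification is used for slots.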
Hierarchical structures and semantic relationships capture linguistic characteristics of the input word sequences, and such information may help interpret their meaning. Furthermore, prior knowledge can help in tagging sequences, especially when dealing with previously unseen ones [Tur et al.2010, Deoras and Sarikaya2013]. Prior work exploited external web-scale knowledge graphs such as Freebase and Wikipedia for improving NLU [Heck et al.2013, Ma et al.2015b, Chen et al.2014]. Liu et al. [2013] and Chen et al. [2015] proposed approaches that leverage linguistic knowledge encoded in parse trees for language understanding, where the extracted syntactic structural features and semantic dependency features enhance inference model learning, and the resulting models achieve better language understanding performance in various domains.
Even with the emerging paradigm of integrating deep learning and linguistic knowledge for different NLP tasks [Socher et al.2014], most previous work utilized such linguistic knowledge and knowledge bases as additional input features to neural networks, and then learned tagging models on top of them. These feature-enrichment approaches have two possible limitations: 1) poor generalization and 2) error propagation. Poor generalization comes from the mismatch between the knowledge bases and the input data, and features incorrectly extracted due to errors in earlier processing propagate errors into the neural models. To address these issues and better learn sequence tagging models, this paper proposes knowledge-guided structural attention networks (K-SAN), a generalization of RNNs that automatically learns the attention guided by external or prior knowledge and generates sentence-based representations specifically for sequence tagging. The main difference between K-SAN and previous approaches is that knowledge plays the role of a teacher, guiding the network where and how much to focus attention while considering the whole linguistic structure simultaneously. Our main contributions are three-fold:
End-to-end learning
To our knowledge, this is the first neural network approach that utilizes general knowledge as guidance in an end-to-end fashion, where the model automatically learns important substructures with an attention mechanism.
Generalization for different knowledge
There is no required schema of knowledge, and different types of parsing results, such as dependency relations, knowledge graph-specific relations, and parsing output of hand-crafted grammars, can serve as the knowledge guidance in this model.
Efficiency and parallelizability
Because the substructures from the input utterance are modeled separately, modeling time may not increase linearly with respect to the number of words in the input sentence.
In the following sections, we empirically show the benefit of K-SAN on the targeted NLU task.
2 Related Work
There is an emerging trend of learning representations at different levels, such as word embeddings [Mikolov et al.2013], character embeddings [Ling et al.2015], and sentence embeddings [Le and Mikolov2014, Huang et al.2013]. In addition to fully unsupervised embedding learning, knowledge bases have been widely utilized to learn entity embeddings with specific functions or relations [Celikyilmaz and Hakkani-Tur2015, Yang et al.2014]. Different from prior work, this paper focuses on learning composable substructure embeddings that are informative for understanding.
Recently, linguistic structures have been taken into account in the deep learning framework. Ma et al. [2015a] and Tai et al. [2015] both proposed dependency-based approaches to combine deep learning with linguistic structures, where the models used tree-based n-grams instead of surface ones to capture knowledge-guided relations for sentence modeling and classification. Roth and Lapata [2016] utilized lexicalized dependency paths to learn embedding representations for semantic role labeling. However, the performance of these approaches highly depends on the quality of parsing the “whole” sentence, and there is no control over the degree of attention on different substructures. Learning robust representations that incorporate whole structures thus remains unsolved. In this paper, we address this limitation by proposing K-SAN to learn robust representations of whole sentences, where the whole representation is composed of the salient substructures in order to avoid error propagation.
Neural Attention and Memory Model
One of the earliest works with a memory component applied to language processing is memory networks [Weston et al.2015, Sukhbaatar et al.2015], which encode facts into vectors and store them in the memory for question answering (QA). Following their success, Xiong et al. [2016] proposed dynamic memory networks (DMN) to additionally capture the position and temporality of transitive reasoning steps for different QA tasks. The idea is to encode important knowledge and store it in memory for future usage with attention mechanisms. Attention mechanisms allow neural network models to selectively pay attention to specific parts of the input, and various tasks have shown their effectiveness.
However, most previous work focused on the classification or prediction tasks (predicting a single word given a question), and there are few studies for NLU tasks (slot tagging). Based on the fact that the linguistic or knowledge-based substructures can be treated as prior knowledge to benefit language understanding, this work borrows the idea from memory models to improve NLU. Unlike the prior NLU work that utilized representations learned from knowledge bases to enrich features of the current sentence, this paper directly learns a sentence representation incorporating memorized substructures with an automatically decided attention mechanism in an end-to-end manner.
3 Knowledge-Guided Structural Attention Networks (K-SAN)
For the NLU task, given an utterance with a sequence of words/tokens $s = w_1, \dots, w_T$, our model predicts the corresponding semantic tags $y = y_1, \dots, y_T$ by incorporating knowledge-guided structures. The proposed model is illustrated in Figure 2. The knowledge encoding module first leverages external knowledge to generate a linguistic structure for the utterance, where a discrete set of knowledge-guided substructures is encoded into a set of vector representations (§ 3.1). The model then learns the representation of the whole sentence by paying different attention to the substructures (§ 3.2). Finally, the learned vector encoding the knowledge-guided structure is used to improve the semantic tagger (§ 4).
3.1 Knowledge Encoding Module
The prior knowledge obtained from external resources, such as dependency relations, knowledge bases, etc., provides richer information to help decide the semantic tags given an input utterance. This paper takes dependency relations as an example for knowledge encoding; other structured relations can be applied in the same way. The input utterance is parsed by a dependency parser, and the substructures are built according to the paths from the root to all leaves [Chen and Manning2014]. For example, the dependency parse of the utterance “show me the flights from seattle to san francisco” is shown in Figure 3, where the associated substructures are obtained from the parse tree for knowledge encoding. Here we do not utilize the dependency relation labels in the experiments, for better generalization, because such labels may not always be available for different knowledge resources. Note that the number of substructures may be less than the number of words in the utterance, because non-leaf nodes do not have corresponding substructures, which reduces duplicated information in the model. The top-left component of Figure 2 illustrates the module for modeling knowledge-guided substructures.
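The root-to-leaf construction can be sketched as follows. The head indices below are a hand-made parse of the example utterance, chosen to be consistent with the substructures described for Figure 3; they are not actual parser output:

```python
# Build knowledge-guided substructures as root-to-leaf paths of a dependency tree.
# heads[i] is the 1-based index of the head of word i+1; 0 marks the root ("show").
words = "show me the flights from seattle to san francisco".split()
heads = [0, 1, 4, 1, 6, 4, 9, 9, 4]  # hand-made parse for illustration

def substructures(words, heads):
    """Return one substructure (root-to-leaf path) per leaf of the tree."""
    n = len(words)
    children = {i: [] for i in range(n + 1)}
    for i, h in enumerate(heads, start=1):
        children[h].append(i)
    leaves = [i for i in range(1, n + 1) if not children[i]]
    paths = []
    for leaf in leaves:
        path, node = [], leaf
        while node != 0:                 # climb from the leaf up to the root
            path.append(words[node - 1])
            node = heads[node - 1]
        paths.append(" ".join(reversed(path)))
    return paths

print(substructures(words, heads))
```

With this parse, one of the five resulting substructures is “show flights seattle from”, matching the example encoded in § 3.2; there are fewer substructures (5) than words (9) because non-leaf nodes contribute no path of their own.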
3.2 Model Architecture
The model embeds all knowledge-guided substructures into a continuous space and stores the embeddings of all substructures $x_i$ in the knowledge memory. The representation of the input utterance is then compared with the encoded knowledge representations to integrate the structure carried by the knowledge via an attention mechanism. Then the knowledge-guided representation of the sentence is taken together with the word sequence for estimating the semantic tags. The four main procedures are described below.
Encoded Knowledge Representation
To store the knowledge-guided structure, we convert each substructure (e.g., a path from the root to a leaf in the dependency tree), $x_i$, into a structure vector $m_i$ with dimension $d$ by embedding the substructure in a continuous space through the knowledge encoding model $M_{kg}$. The input utterance $s$ is also embedded into a vector $u$ with the same dimension through the model $M_{in}$.
We apply three types of knowledge encoding models for $M_{kg}$ and $M_{in}$, in order to encode the multiple words of a substructure or an input sentence into a vector representation: 1) fully-connected neural networks (NN) with linear activation, 2) recurrent neural networks (RNN), and 3) convolutional neural networks (CNN) with a window size of 3 and a max-pooling operation. For example, one of the substructures shown in Figure 3, “show flights seattle from”, is encoded into a vector embedding. In the experiments, the weights of $M_{kg}$ and $M_{in}$ are tied together, based on their consistent role of sequence encoding.
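A minimal numpy sketch of the CNN encoding variant (window size 3 followed by max-pooling over time); the dimensions and random weights here are illustrative placeholders, not the trained model:

```python
import numpy as np

# Sketch: convolve a window of 3 word vectors at each position, apply tanh,
# then max-pool over time to get one fixed-size vector per word sequence.
rng = np.random.default_rng(0)
emb_dim, hid_dim = 100, 50
W = rng.standard_normal((3 * emb_dim, hid_dim)) * 0.01  # illustrative filter weights

def cnn_encode(word_vectors):
    """Encode a (T x emb_dim) sequence of word vectors into a hid_dim vector."""
    x = np.pad(word_vectors, ((1, 1), (0, 0)))  # pad so every word gets a full window
    windows = np.stack([np.concatenate(x[t:t + 3]) for t in range(len(word_vectors))])
    return np.tanh(windows @ W).max(axis=0)     # max-pooling over time

seq = rng.standard_normal((4, emb_dim))         # e.g. the path "show flights seattle from"
m = cnn_encode(seq)
assert m.shape == (hid_dim,)
```

Because the same encoder maps both substructures and the whole utterance to vectors of dimension hid_dim, tying the weights of the two encoders (as the text describes) is straightforward.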
Knowledge Attention Distribution
In the embedding space, we compute the match between the current utterance vector $u$ and each substructure vector $m_i$ by taking their inner product followed by a softmax:

$$p_i = \mathrm{softmax}(u^{\top} m_i),$$

where $\mathrm{softmax}(z_i) = e^{z_i} / \sum_j e^{z_j}$, and $p_i$ can be viewed as the attention distribution for modeling important substructures from external knowledge in order to understand the current utterance.
In order to encode the knowledge-guided structure, a vector $h$ is computed as a sum over the encoded knowledge embeddings weighted by the attention distribution:

$$h = \sum_i p_i m_i,$$

which indicates that the sentence pays different amounts of attention to different substructures guided by external knowledge. Because the function from input to output is smooth, we can easily compute gradients and back-propagate through it. The sum of the substructure vector $h$ and the current input embedding $u$ is then passed through a model $M_{out}$ to generate the output knowledge-guided representation $o$:

$$o = M_{out}(h + u),$$

where we employ a fully-connected dense network for $M_{out}$.
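The attention step can be sketched end-to-end in a few lines of numpy. The symbols follow the description above (utterance vector u, substructure vectors m_i, dense output model M_out); the shapes, the random weights, and the tanh nonlinearity in the output layer are illustrative assumptions:

```python
import numpy as np

# Sketch of the knowledge attention step: inner product + softmax gives the
# attention p over substructures, h is the attention-weighted sum, and the
# knowledge-guided representation o comes from a dense layer over h + u.
rng = np.random.default_rng(1)
d = 50
M = rng.standard_normal((5, d))            # encoded substructures m_1..m_5
u = rng.standard_normal(d)                 # encoded utterance vector
W_out = rng.standard_normal((d, d)) * 0.1  # illustrative dense-layer weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(M @ u)            # attention distribution over substructures
h = p @ M                     # knowledge-guided vector: sum_i p_i * m_i
o = np.tanh(W_out @ (h + u))  # output knowledge-guided representation

assert np.isclose(p.sum(), 1.0) and o.shape == (d,)
```

Every operation here (inner product, softmax, weighted sum, dense layer) is differentiable, which is what makes the end-to-end backpropagation described in the text possible.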
To estimate the tag sequence $y$ corresponding to an input word sequence $s$, we use an RNN module to train a slot tagger, where the knowledge-guided representation $o$ is fed into the input of the model in order to incorporate the structure information.
4 Recurrent Neural Network Tagger
4.1 Chain-Based RNN Tagger
Given $s = w_1, \dots, w_T$, the model predicts $y = y_1, \dots, y_T$, where the tag $y_t$ is aligned with the word $w_t$. We use the Elman RNN architecture, consisting of an input layer, a hidden layer, and an output layer [Elman1990]. The input, hidden, and output layers consist of sets of neurons representing the input, hidden state, and output at each time step $t$: $w_t$, $h_t$, and $y_t$, respectively:

$$h_t = \phi(W w_t + U h_{t-1}),$$
$$\hat{y}_t = \mathrm{softmax}(W_{hy} h_t),$$

where $\phi$ is a smooth bounded function such as tanh, and $\hat{y}_t$ is the probability distribution over semantic tags given the current hidden state $h_t$. The sequence probability can be formulated as

$$p(y \mid s) = \prod_t p(y_t \mid w_1, \dots, w_t).$$

The model can be trained using backpropagation to maximize the conditional likelihood of the training set labels.
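The forward pass of this chain-based tagger can be sketched directly from the Elman recurrence; all dimensions, weights, and inputs below are toy placeholders:

```python
import numpy as np

# Minimal Elman RNN tagger forward pass: h_t = tanh(W x_t + U h_{t-1}),
# followed by a softmax output layer giving a tag distribution per step.
rng = np.random.default_rng(2)
emb, hid, n_tags = 8, 6, 4                    # toy dimensions
W = rng.standard_normal((hid, emb)) * 0.1     # input-to-hidden weights
U = rng.standard_normal((hid, hid)) * 0.1     # hidden-to-hidden weights
V = rng.standard_normal((n_tags, hid)) * 0.1  # hidden-to-output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tag_probs(xs):
    """Return one tag distribution per time step for a sequence of input vectors."""
    h = np.zeros(hid)
    out = []
    for x in xs:
        h = np.tanh(W @ x + U @ h)  # Elman recurrence
        out.append(softmax(V @ h))  # per-step tag distribution
    return out

probs = tag_probs(rng.standard_normal((5, emb)))
assert len(probs) == 5 and all(np.isclose(p.sum(), 1.0) for p in probs)
```

Training would maximize the log-probability of the gold tag at each step, i.e. the conditional likelihood the text describes.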
To overcome the frequent vanishing gradient issue when modeling long-term dependencies, gated RNNs were designed to use a more sophisticated activation function than the usual one (an affine transformation followed by a simple element-wise nonlinearity), by introducing gating units [Hochreiter and Schmidhuber1997, Cho et al.2014, Chung et al.2014]. RNNs employing such recurrent units have been shown to perform well in tasks that require capturing long-term dependencies [Mesnil et al.2015, Yao et al.2014, Graves et al.2013, Sutskever et al.2014]. In this paper, we use an RNN with GRU cells to allow each recurrent unit to adaptively capture dependencies at different time scales [Cho et al.2014, Chung et al.2014], because RNN-GRU can yield performance comparable to RNN-LSTM while needing fewer parameters and less data for generalization [Chung et al.2014].
A GRU has two gates, a reset gate $r_t$ and an update gate $z_t$ [Cho et al.2014, Chung et al.2014]. The reset gate determines the combination of the new input and the previous memory, and the update gate decides how much the unit updates its activation, or content:

$$r_t = \sigma(W_r w_t + U_r h_{t-1}),$$
$$z_t = \sigma(W_z w_t + U_z h_{t-1}),$$

where $\sigma$ is a logistic sigmoid function. The final activation of the GRU at time $t$, $h_t$, is a linear interpolation between the previous activation $h_{t-1}$ and the candidate activation $\tilde{h}_t$:

$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t,$$

where $\odot$ is an element-wise multiplication and the candidate activation is computed as

$$\tilde{h}_t = \phi(W w_t + U(r_t \odot h_{t-1})).$$

When the reset gate is off, it effectively makes the unit act as if it is reading the first symbol of an input sequence, allowing it to forget the previously computed state.
4.2 Knowledge-Guided RNN Tagger
In order to model the encoded knowledge, at each time step $t$ the knowledge-guided sentence representation $o$ is fed into the RNN model together with the word $w_t$. For the plain RNN, the hidden layer can be formulated as

$$h_t = \phi(W w_t + U h_{t-1} + V o),$$

replacing the original recurrence, as illustrated in the right block of Figure 2. RNN-GRU can incorporate the encoded knowledge in a similar way, where $o$ can be added into the gating mechanisms for modeling contextual knowledge.
4.3 Joint RNN Tagger
Because the chain-based tagger and the knowledge-guided tagger carry different information, the joint RNN tagger is proposed to balance the information between the two model architectures. Figure 4 presents the architecture of the joint RNN tagger, where $\alpha$ is the weight for balancing the chain-based and knowledge-guided information. By jointly considering the chain-based information and the knowledge-guided information, the joint RNN tagger is expected to achieve better generalization, and its performance may be less sensitive to poor structures from external knowledge. In the experiments, $\alpha$ is set to 0.5 to balance the two sides. The objective of the proposed model is to maximize the sequence probability $p(y \mid s)$, and the model can be trained in an end-to-end manner, where the error is back-propagated through the whole architecture.
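The effect of the balancing weight can be illustrated with a toy interpolation. This is one plausible reading of the combination (an equal-weight mix of the two taggers' per-step tag distributions); the exact combination in the model happens inside the network, so the distributions below are hypothetical:

```python
import numpy as np

# Toy illustration of balancing two taggers with weight alpha = 0.5:
# an equal-weight interpolation of their per-step tag distributions.
alpha = 0.5
chain = np.array([0.7, 0.2, 0.1])   # hypothetical chain-based tag distribution
guided = np.array([0.2, 0.7, 0.1])  # hypothetical knowledge-guided distribution
joint = alpha * chain + (1.0 - alpha) * guided
assert np.isclose(joint.sum(), 1.0)
```

With alpha = 0.5, a confident but wrong vote from one side can be overruled by the other, which is the robustness to poor external structures that the text motivates.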
5 Experiments
5.1 Experimental Setup
The dataset for the experiments is the benchmark ATIS corpus, which is extensively used by the NLU community [Mesnil et al.2015]. There are 4978 training utterances selected from Class A (context-independent) data in ATIS-2 and ATIS-3, and 893 test utterances selected from the ATIS-3 Nov93 and Dec94 datasets. In the experiments, we only use lexical features. In order to show robustness to data scarcity, we conduct the experiments with 3 different sizes of training data (Small, Medium, and Large), where Small is 1/40 of the original set, Medium is 1/10, and Large is the full set. The evaluation metric for NLU is the F-measure on the predicted slots, computed with the conlleval script.
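As a rough sketch of how the conlleval-style slot F-measure works: a predicted slot counts as correct only if both its boundary and its label exactly match a gold slot. The simplified chunker below is ours (conlleval itself also handles stray I- tags as chunk starts, which this sketch does not):

```python
# Simplified slot-level F-measure in the conlleval style.
def slots(tags):
    """Extract (label, start, end) chunks from an IOB tag sequence."""
    chunks, start, label = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" closes a trailing chunk
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != label):
            if label is not None:
                chunks.add((label, start, i))
            start, label = ((i, tag[2:]) if tag.startswith("B-") else (None, None))
    return chunks

def slot_f1(gold, pred):
    """F-measure over exactly-matching (label, boundary) slot chunks."""
    g, p = slots(gold), slots(pred)
    tp = len(g & p)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)

gold = ["O", "B-origin", "O", "B-dest", "I-dest"]
pred = ["O", "B-origin", "O", "B-dest", "O"]
print(slot_f1(gold, pred))  # 0.5: the truncated dest chunk counts as a full miss
```

This strictness is why slot F1 is a harder metric than per-token tagging accuracy: clipping one word off a multi-word slot loses the whole chunk.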
For the experiments with K-SAN, we parse all data with the Stanford dependency parser [Chen and Manning2014], which is pre-trained on PTB, and represent words as embeddings trained on the in-domain data. The loss function is cross-entropy, and the optimizer is adam with the default setting [Kingma and Ba2014]: learning rate 0.001, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. The maximum number of training iterations for our K-SAN models is set to 300. The dimensionality of the input word embeddings is 100; the hidden layer sizes and dropout rates, like all other hyperparameters, are tuned on the dev set for all experiments. All reported results are from the joint RNN tagger.
To validate the effectiveness of the proposed model, we compare the performance with the following baselines.
Structural: NLU models that utilize linguistic information when tagging slots, where DCNN and Tree-RNN are state-of-the-art approaches for embedding sentences with linguistic structures.
CRF Tagger [Tur et al.2010]: predicts slots based on the lexical (5-word window) and syntactic (dependent head in the parsing tree) features.
DCNN [Ma et al.2015a]: predicts slots by incorporating sentence embeddings learned by a convolutional model with consideration of dependency tree structures.
Tree-RNN [Tai et al.2015]: predicts slots with sentence embeddings learned by an RNN model based on the tree structures of sentences.
5.3 Slot Filling Results
Table 1 shows the performance of slot filling on different sizes of training data (Small, Medium, and Large use 1/40, 1/10, and the whole training set, respectively). Among the baselines (models without knowledge features), the CNN Encoder-Tagger achieves the best performance on all datasets.

Among the structural models (models with knowledge encoding), the Tree-RNN Encoder-Tagger performs better on the Small data but slightly worse than the DCNN Encoder-Tagger on the others.

The CNN encoder [Kim2014] performs better than DCNN [Ma et al.2015a] and Tree-RNN [Tai et al.2015], even though it does not leverage external knowledge when encoding sentences. Comparing the NLU performance of the baselines with these state-of-the-art structural models, there is no significant difference. This suggests that encoding sentence information without distinguishing substructures may not capture the salient semantics needed to improve understanding performance.
Among the proposed K-SAN models, the CNN encoder performs best on Small (75% F1) and Medium (88% F1), and the RNN encoder performs best on the Large set (95% F1). Moreover, most of the proposed models outperform all baselines, and the improvement is more significant on the small dataset. This suggests that the proposed models generalize better and are less sensitive to unseen data. For example, given the utterance “which flights leave on monday from montreal and arrive in chicago in the morning”, “morning” is correctly tagged with the semantic tag B-arrive_time.period_of_day by K-SAN but incorrectly tagged as B-depart_time.period_of_day by the baselines, because the knowledge guides the model to pay attention to the salient substructures. The proposed model also outperforms the best baseline (RNN-BLSTM) on the large dataset, achieving state-of-the-art performance, which shows the effectiveness of leveraging knowledge-guided structures for learning task-specific embeddings as well as robustness to data scarcity and mismatch.
5.4 Attention Analysis
In order to show how the proposed model boosts performance by learning correct attention from much smaller training data, we visualize the attention for both words and relations decoded by K-SAN with the CNN encoder in Figure 5. Darker blocks and lines indicate higher attention for words and relations, respectively. From the figure, the words and relations with higher attention are the most crucial parts for predicting the correct slots, e.g., origin, destination, and time. Furthermore, the difference in attention distributions between the three datasets is not significant; this suggests that our proposed model is able to pay attention to the important substructures guided by the external knowledge even when the training data is scarce.
|Approach|Knowledge|Parser|Max #Substructure|Small|Medium|Large|
|K-SAN (CNN)|Dependency Tree|Stanford|53|74.60|87.99|94.86|
5.5 Knowledge Generalization
In order to show the capacity to generalize to different knowledge resources, we apply the K-SAN model to different knowledge bases. Below we compare two types of knowledge formats: dependency trees and Abstract Meaning Representation (AMR). AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph [Banarescu et al.2013], where nodes represent concepts and labeled directed edges represent the relations between two concepts. The formalism is based on propositional logic and neo-Davidsonian event representations [Parsons1990, Davidson1967]. The semantic concepts in AMR have been leveraged to benefit multiple NLP tasks [Liu et al.2015]. Unlike the syntactic information from dependency trees, the AMR graph contains semantic information, which may offer more specific conceptual relations. Figure 6 compares a dependency tree and an AMR graph for the same example utterance and shows how the knowledge-guided substructures are constructed from each.
Table 2 presents the performance of CRF and K-SAN with CNN taggers that utilize dependency relations and AMR edges as knowledge guidance on the same datasets, where CRF takes the head words from either dependency trees or AMR graphs as additional features and K-SAN incorporates knowledge-guided substructures as illustrated in Figure 6. The dependency trees are obtained from the Stanford dependency parser or the SyntaxNet parser222https://github.com/tensorflow/models/tree/master/syntaxnet, and AMR graphs are generated by a rule-based AMR parser or JAMR333https://github.com/jflanigan/jamr.
Across the four knowledge resources (two knowledge types, each obtained from two different parsers), all results show similar performance on the three dataset sizes. The maximum number of substructures from the dependency trees is larger than that from the AMR graphs (53 and 25 vs. 19 and 8), because syntax is more general and may provide richer cues that guide more attention, while semantics is more specific and may offer stronger guidance. In sum, the models applying the four different resources achieve similar performance, and all significantly outperform the state-of-the-art NLU tagger, showing the effectiveness, generalization, and robustness of the proposed K-SAN model.
6 Conclusion
This paper proposes a novel model, knowledge-guided structural attention networks (K-SAN), which leverages prior knowledge as guidance to incorporate non-flat topologies and learn suitable attention for the substructures that are salient for specific tasks. The structured information can be captured from small amounts of training data, so the model has better generalization and robustness. The experiments show the benefits and effectiveness of the proposed model on the language understanding task, where the knowledge-guided substructures captured from all of the different resources help tagging performance, and state-of-the-art performance is achieved on the ATIS benchmark dataset.
- [Banarescu et al.2013] Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the Linguistic Annotation Workshop and Interoperability with Discourse.
- [Celikyilmaz and Hakkani-Tur2015] Asli Celikyilmaz and Dilek Hakkani-Tur. 2015. Convolutional neural network based semantic tagging with entity embeddings. In NIPS Workshop on Machine Learning for SLU and Interaction.
- [Chelba et al.2003] Ciprian Chelba, Monika Mahajan, and Alex Acero. 2003. Speech utterance classification. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP), volume 1, pages I–280. IEEE.
- [Chen and Manning2014] Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740–750.
- [Chen et al.2014] Yun-Nung Chen, Dilek Hakkani-Tur, and Gokhan Tur. 2014. Deriving local relational surface forms from dependency-based entity embeddings for unsupervised spoken language understanding. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 242–247. IEEE.
- [Chen et al.2015] Yun-Nung Chen, William Yang Wang, Anatole Gershman, and Alexander I Rudnicky. 2015. Matrix factorization with knowledge graph propagation for unsupervised spoken language understanding. Proceedings of ACL-IJCNLP.
- [Cho et al.2014] Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
- [Chung et al.2014] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
- [Davidson1967] Donald Davidson. 1967. The logical form of action sentences.
- [Deoras and Sarikaya2013] Anoop Deoras and Ruhi Sarikaya. 2013. Deep belief network based semantic taggers for spoken language understanding. In INTERSPEECH, pages 2713–2717.
- [Elman1990] Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211.
- [Graves et al.2013] Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6645–6649. IEEE.
- [Haffner et al.2003] Patrick Haffner, Gokhan Tur, and Jerry H Wright. 2003. Optimizing svms for complex call classification. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP), volume 1, pages I–632. IEEE.
- [Heck et al.2013] Larry P Heck, Dilek Hakkani-Tür, and Gokhan Tur. 2013. Leveraging knowledge graphs for web-scale unsupervised semantic parsing. In INTERSPEECH, pages 1594–1598.
- [Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780.
- [Huang et al.2013] Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2333–2338. ACM.
- [Kim2014] Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
- [Kingma and Ba2014] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- [Le and Mikolov2014] Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053.
- [Ling et al.2015] Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096.
- [Liu et al.2013] Jingjing Liu, Panupong Pasupat, Yining Wang, Scott Cyphers, and James Glass. 2013. Query understanding enhanced by hierarchical parsing structures. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 72–77. IEEE.
- [Liu et al.2015] Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A Smith. 2015. Toward abstractive summarization using semantic representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1077–1086.
- [Ma et al.2015a] Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. 2015a. Dependency-based convolutional neural networks for sentence embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 174–179.
- [Ma et al.2015b] Yi Ma, Paul A Crook, Ruhi Sarikaya, and Eric Fosler-Lussier. 2015b. Knowledge graph inference for spoken dialog systems. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5346–5350. IEEE.
- [McTear2004] Michael F McTear. 2004. Spoken dialogue technology: toward the conversational user interface. Springer Science & Business Media.
- [Mesnil et al.2015] Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530–539.
- [Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119.
- [Parsons1990] Terence Parsons. 1990. Events in the semantics of english: A study in subatomic semantics.
- [Pieraccini et al.1992] Roberto Pieraccini, Evelyne Tzoukermann, Zakhar Gorelov, Jean-Luc Gauvain, Esther Levin, Chin-Hui Lee, and Jay G Wilpon. 1992. A speech understanding system based on statistical representation of semantics. In 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 193–196. IEEE.
- [Ravuri and Stolcke2015] Suman Ravuri and Andreas Stolcke. 2015. Recurrent neural network and lstm models for lexical utterance classification. In Sixteenth Annual Conference of the International Speech Communication Association.
- [Roth and Lapata2016] Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. arXiv preprint arXiv:1605.07515.
- [Rudnicky and Xu1999] Alexander Rudnicky and Wei Xu. 1999. An agenda-based dialog management architecture for spoken language systems. In IEEE Automatic Speech Recognition and Understanding Workshop, volume 13, page 17.
- [Sarikaya et al.2011] Ruhi Sarikaya, Geoffrey E Hinton, and Bhuvana Ramabhadran. 2011. Deep belief nets for natural language call-routing. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5680–5683. IEEE.
- [Sarikaya et al.2014] Ruhi Sarikaya, Geoffrey E Hinton, and Anoop Deoras. 2014. Application of deep belief networks for natural language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(4):778–784.
- [Socher et al.2014] Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207–218.
- [Sukhbaatar et al.2015] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439.
- [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112.
- [Tai et al.2015] Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075.
- [Tur and De Mori2011] Gokhan Tur and Renato De Mori. 2011. Spoken language understanding: Systems for extracting semantic information from speech. John Wiley & Sons.
- [Tur et al.2010] Gokhan Tur, Dilek Hakkani-Tür, and Larry Heck. 2010. What is left to be understood in atis? In Spoken Language Technology Workshop (SLT), 2010 IEEE, pages 19–24. IEEE.
- [Tur et al.2012] Gokhan Tur, Li Deng, Dilek Hakkani-Tür, and Xiaodong He. 2012. Towards deeper understanding: Deep convex networks for semantic utterance classification. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5045–5048. IEEE.
- [Wang et al.2005] Ye-Yi Wang, Li Deng, and Alex Acero. 2005. Spoken language understanding. IEEE Signal Processing Magazine, 22(5):16–31.
- [Weston et al.2015] Jason Weston, Sumit Chopra, and Antoine Bordes. 2015. Memory networks. In International Conference on Learning Representations (ICLR).
- [Xiong et al.2016] Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. arXiv preprint arXiv:1603.01417.
- [Xu and Sarikaya2013] Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular CRF for joint intent detection and slot filling. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 78–83. IEEE.
- [Yang et al.2014] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575.
- [Yao et al.2013] Kaisheng Yao, Geoffrey Zweig, Mei-Yuh Hwang, Yangyang Shi, and Dong Yu. 2013. Recurrent neural networks for language understanding. In INTERSPEECH, pages 2524–2528.
- [Yao et al.2014] Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 189–194. IEEE.