CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding

09/16/2019 · Yijin Liu et al. · Beijing Jiaotong University and Tencent

Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works. However, most existing models fail to fully utilize the co-occurrence relations between slots and intents, which restricts their potential performance. To address this issue, in this paper we propose a novel Collaborative Memory Network (CM-Net) built from a well-designed block named the CM-block. The CM-block firstly captures slot-specific and intent-specific features from memories in a collaborative manner, and then uses these enriched features to enhance local context representations, based on which the sequential information flow leads to more specific (slot and intent) global utterance representations. By stacking multiple CM-blocks, our CM-Net is able to alternately perform information exchange among the specific memories, local contexts and the global utterance, and thus incrementally enrich each of them. We evaluate the CM-Net on two standard benchmarks (ATIS and SNIPS) and a self-collected corpus (CAIS). Experimental results show that the CM-Net achieves state-of-the-art results on ATIS and SNIPS in most criteria, and significantly outperforms the baseline models on CAIS. Additionally, we make the CAIS dataset publicly available for the research community.


1 Introduction

Spoken Language Understanding (SLU) is a core component in dialogue systems. It typically aims to identify the intent and semantic constituents of a given utterance, which are referred to as intent detection and slot filling, respectively. Past years have witnessed rapid developments in diverse deep learning models Haffner et al. (2003); Sarikaya et al. (2011) for SLU. To take full advantage of the supervised signals of slots and intents, and to share knowledge between them, most existing works apply joint models mainly based on CNNs Xu and Sarikaya (2013); Gupta et al. (2019), RNNs Guo et al. (2014b); Liu and Lane (2016), and the asynchronous bi-model Wang et al. (2018). Generally, these joint models encode words convolutionally or sequentially, and then aggregate hidden states into an utterance-level representation for the intent prediction, without interactions between the representations of slots and intents.

Intuitively, slots and intents from similar fields tend to occur simultaneously, which can be observed from Figure 1 and Table 1. Therefore, it is beneficial to generate the representations of slots and intents with guidance from each other. Some works explore enhancing the slot filling task unidirectionally with guidance from intent representations via gating mechanisms Goo et al. (2018); Li et al. (2018), while the predictions of intents lack guidance from slots. Moreover, a capsule network with dynamic routing algorithms Zhang et al. (2018a) has been proposed to perform interactions in both directions. However, there are still two limitations in this model. One is that the information flows from words to slots, slots to intents, and intents to words in a pipeline manner, which is to some extent limited in capturing complicated correlations among words, slots and intents. The other is that the local context information, which has been shown highly useful for slot filling Mesnil et al. (2014), is not explicitly modeled.

In this paper, we try to address these issues, and thus propose a novel Collaborative Memory Network, named CM-Net. The main idea is to directly capture semantic relationships among words, slots and intents, which is conducted simultaneously at each word position in a collaborative manner. Specifically, we alternately perform information exchange among the task-specific features retrieved from memories, local context representations and global sequential information via the well-designed block, named CM-block, which consists of three computational components:

  • Deliberate Attention: Obtaining slot-specific and intent-specific representations from memories in a collaborative manner.

  • Local Calculation: Updating local context representations with the guidance of the slot and intent representations retrieved by the preceding Deliberate Attention.

  • Global Recurrence: Generating specific (slot and intent) global sequential representations based on local context representations from the previous Local Calculation.

The above components in each CM-block are executed consecutively and are responsible for encoding information from different perspectives. Finally, multiple CM-blocks are stacked together to construct our CM-Net.

We first conduct experiments on two popular benchmarks, SNIPS Coucke et al. (2018) and ATIS Hemphill et al. (1990); Tur et al. (2010). Experimental results show that the CM-Net achieves state-of-the-art results in 3 of the 4 criteria (e.g., intent detection accuracy on ATIS) across the two benchmarks. Additionally, trials on our self-collected dataset, named CAIS, demonstrate the effectiveness and generalizability of the CM-Net.

Our main contributions are as follows:

  • We propose a novel CM-Net for SLU, which explicitly captures semantic correlations among words, slots and intents in a collaborative manner, and incrementally enriches the specific features, local context representations and global sequential representations through stacked CM-blocks.

  • Our CM-Net achieves state-of-the-art results on two major SLU benchmarks (ATIS and SNIPS) in most criteria.

  • We contribute a new corpus CAIS with manual annotations of slot tags and intent labels to the research community.

2 Background

In principle, slot filling is treated as a sequence labeling task, and intent detection is a classification problem. Formally, given an utterance $X = \{x_1, x_2, \dots, x_N\}$ with $N$ words and its corresponding slot tags $Y^{slot} = \{y_1, y_2, \dots, y_N\}$, the slot filling task aims to learn a parameterized mapping function from input words to slot tags. For the intent detection, it is designed to predict the intent label $\hat{y}^{int}$ for the entire utterance $X$ from the predefined label set $S^{int}$.

Typically, the input utterance is firstly encoded into a sequence of distributed representations $\mathbf{E} = \{e_1, e_2, \dots, e_N\}$ by character-aware and pre-trained word embeddings. Afterwards, bidirectional RNNs are applied to encode the embeddings into context-sensitive representations $\mathbf{H} = \{h_1, h_2, \dots, h_N\}$. An external CRF layer Lafferty et al. (2001) is widely utilized to calculate conditional probabilities of slot tags:

$$p(\hat{y} \mid X) = \frac{\exp\big(score(X, \hat{y})\big)}{\sum_{y' \in Y_X} \exp\big(score(X, y')\big)} \qquad (1)$$

Here $Y_X$ is the set of all possible sequences of tags, and $score(\cdot)$ is the score function calculated by:

$$score(X, y) = \sum_{i=0}^{N} A_{y_i, y_{i+1}} + \sum_{i=1}^{N} P_{i, y_i} \qquad (2)$$

where $\mathbf{A}$ is the transition matrix in which $A_{i,j}$ indicates the score of a transition from tag $i$ to tag $j$ ($y_0$ and $y_{N+1}$ denote the special start and end tags), and $\mathbf{P}$ is the score matrix output by the RNNs; $P_{i, y_i}$ indicates the score of the $y_i$-th tag of the $i$-th word in the sentence Lample et al. (2016).

When testing, the Viterbi algorithm Forney (1973) is used to search for the sequence of slot tags with the maximum score:

$$\hat{y}^{slot} = \operatorname*{arg\,max}_{y' \in Y_X} score(X, y') \qquad (3)$$
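For concreteness, the following NumPy sketch runs the Viterbi search of Equation (3) over a transition matrix `A` and emission scores `P`, accumulating the score of Equation (2) along the way; the special start/end transition scores are ignored for brevity, and the code is an illustration rather than the implementation used in the paper.

```python
import numpy as np

def viterbi_decode(P, A):
    """Find the tag sequence maximizing Eq. (2): transition plus emission scores.

    P: emission scores output by the RNN, shape (N, K) for N words and K tags.
    A: transition scores, shape (K, K), A[i, j] = score of moving from tag i to tag j.
    """
    N, K = P.shape
    dp = np.zeros((N, K))                 # best score of any path ending at word t with tag k
    backptr = np.zeros((N, K), dtype=int)

    dp[0] = P[0]                          # start with the emission scores of the first word
    for t in range(1, N):
        # candidate scores for every (previous tag, current tag) pair
        scores = dp[t - 1][:, None] + A + P[t][None, :]
        backptr[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0)

    # follow back-pointers from the best final tag
    best_path = [int(dp[-1].argmax())]
    for t in range(N - 1, 0, -1):
        best_path.append(int(backptr[t, best_path[-1]]))
    return best_path[::-1], float(dp[-1].max())
```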

As to the prediction of the intent, the word-level hidden states are firstly summarized into an utterance-level representation $v$ via mean pooling (or max pooling, self-attention, etc.):

$$v = \frac{1}{N} \sum_{i=1}^{N} h_i \qquad (4)$$

The most probable intent label $\hat{y}^{int}$ is then predicted by softmax normalization over the intent label set:

$$\hat{y}^{int} = \operatorname*{arg\,max}\ \mathrm{softmax}\big(W^{int} v + b^{int}\big) \qquad (5)$$

Generally, both tasks are trained jointly to minimize the sum of the cross-entropy losses of the individual tasks. Formally, the loss function of the joint model is computed as follows:

$$\mathcal{L} = -\lambda \sum_{i=1}^{N} \sum_{j=1}^{|S^{slot}|} \hat{y}_i^{\,j,slot} \log\big(y_i^{\,j,slot}\big) \;-\; (1-\lambda) \sum_{j=1}^{|S^{int}|} \hat{y}^{\,j,int} \log\big(y^{\,j,int}\big) \qquad (6)$$

where $\hat{y}_i^{\,j,slot}$ and $\hat{y}^{\,j,int}$ are the golden labels, $\lambda$ is a hyperparameter balancing the two tasks, $|S^{int}|$ is the size of the intent label set, and similarly for $|S^{slot}|$.
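To make Equation (6) concrete, here is a minimal PyTorch sketch of the λ-weighted joint objective; it substitutes a token-level cross-entropy for the CRF negative log-likelihood of the slot tags, so it is a simplification of the loss described above, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def joint_loss(slot_logits, slot_gold, intent_logits, intent_gold, lam=0.5):
    """Weighted sum of the two task losses (Eq. 6), with plain cross-entropy
    standing in for the CRF term of the slot filling task.

    slot_logits:   (N, num_slot_tags) scores for each word
    slot_gold:     (N,) gold slot tag ids (LongTensor)
    intent_logits: (num_intents,) scores for the whole utterance
    intent_gold:   gold intent id as a scalar tensor
    """
    slot_loss = F.cross_entropy(slot_logits, slot_gold)
    intent_loss = F.cross_entropy(intent_logits.unsqueeze(0), intent_gold.view(1))
    return lam * slot_loss + (1.0 - lam) * intent_loss
```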

3 CM-Net

Figure 2: Overview of our proposed CM-Net. The input utterance is firstly encoded with the Embedding Layer (bottom), and then is transformed by multiple CM-blocks with the assistance of both slot and intent memories (on both sides). Finally we make predictions of slots and the intent in the Inference Layer (top).

3.1 Overview

In this section, we start with a brief overview of our CM-Net and then proceed to introduce each module. As shown in Figure 2, the input utterance is firstly encoded with the Embedding Layer, then transformed by multiple CM-blocks with the assistance of the slot and intent memories, and finally predictions are made in the Inference Layer.

3.2 Embedding Layers

Pre-trained Word Embedding

Pre-trained word embeddings have become a de facto standard in neural network architectures for various NLP tasks. We adopt the cased, 300d GloVe embeddings (https://nlp.stanford.edu/projects/glove/) Pennington et al. (2014) to initialize word embeddings, and keep them frozen.

Character-aware Word Embedding

It has been demonstrated that character-level information (e.g., capitalization and prefixes) Collobert et al. (2011) is crucial for sequence labeling. We use one layer of CNN followed by max pooling to generate character-aware word embeddings.
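A minimal PyTorch sketch of such a character-aware encoder is given below; the character embedding size is an assumption, while the filter size of 3 and the 100d output follow the implementation details in Section 4.2.

```python
import torch
import torch.nn as nn

class CharCNNEmbedding(nn.Module):
    """One convolutional layer over character embeddings followed by max pooling,
    producing a fixed-size character-aware vector per word."""

    def __init__(self, num_chars, char_dim=30, out_dim=100, kernel_size=3):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, out_dim, kernel_size, padding=kernel_size // 2)

    def forward(self, char_ids):
        # char_ids: (num_words, max_word_len) character indices for each word
        x = self.char_emb(char_ids)        # (num_words, max_word_len, char_dim)
        x = x.transpose(1, 2)              # Conv1d expects (batch, channels, length)
        x = torch.relu(self.conv(x))       # (num_words, out_dim, max_word_len)
        return x.max(dim=2).values         # max pooling over character positions
```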

3.3 CM-block

The CM-block is the core module of our CM-Net, which is designed with three computational components: Deliberate Attention, Local Calculation and Global Recurrence.

Deliberate Attention

To fully model semantic relations between slots and intents, we build a slot memory $M^{slot}$ and an intent memory $M^{int}$, and further devise a collaborative retrieval approach. The slot memory keeps $|S^{slot}|$ slot cells which are randomly initialized and updated as model parameters, and similarly for the intent memory. At each word position, we take the hidden state $h_t$ as the query, and obtain the slot feature $h_t^{slot}$ and the intent feature $h_t^{int}$ from the two memories by the deliberate attention mechanism, which is illustrated in the following.

Specifically for the slot feature $h_t^{slot}$, we firstly get a rough intent representation $\tilde{h}_t^{int}$ by word-aware attention with the hidden state $h_t$ over the intent memory $M^{int}$, and then obtain the final slot feature $h_t^{slot}$ by intent-aware attention over the slot memory $M^{slot}$ with the intent-enhanced representation $[h_t; \tilde{h}_t^{int}]$. Formally, the above-mentioned procedures are computed as follows:

$$\tilde{h}_t^{int} = ATT\big(h_t, M^{int}\big), \qquad h_t^{slot} = ATT\big([h_t; \tilde{h}_t^{int}], M^{slot}\big) \qquad (7)$$

where $ATT(\cdot)$ is the query function calculated by a weighted sum of all cells $m_i$ in memory $M$ ($M^{slot}$ or $M^{int}$):

$$ATT(q, M) = \sum_i \mathrm{softmax}\big(q^\top W m_i + b\big)\, m_i \qquad (8)$$

Here $W$ and $b$ are model parameters. We name the above calculation of two-round attentions (Equation 7) the "deliberate attention".

The intent representation $h_t^{int}$ is computed by the deliberate attention as well:

$$\tilde{h}_t^{slot} = ATT\big(h_t, M^{slot}\big), \qquad h_t^{int} = ATT\big([h_t; \tilde{h}_t^{slot}], M^{int}\big) \qquad (9)$$

These two deliberate attentions are conducted simultaneously at each word position in such a collaborative manner, which guarantees adequate knowledge diffusion between slots and intents. The retrieved slot features $h_t^{slot}$ and intent features $h_t^{int}$ are utilized to provide guidance for the next local calculation layer.
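The sketch below illustrates this collaborative retrieval over the two memories for a single word position. Since the exact form of the query function ATT(·) is only specified as a parameterized weighted sum over memory cells, the dot-product scoring and the linear query projections used here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeliberateAttention(nn.Module):
    """Two-round ("deliberate") attention over the slot and intent memories (Eqs. 7 and 9)."""

    def __init__(self, hidden_dim, num_slots, num_intents, mem_dim):
        super().__init__()
        self.slot_mem = nn.Parameter(torch.randn(num_slots, mem_dim))    # M^slot
        self.intent_mem = nn.Parameter(torch.randn(num_intents, mem_dim))  # M^int
        # project the plain or enriched query into the memory space
        self.q_word = nn.Linear(hidden_dim, mem_dim, bias=False)
        self.q_enriched = nn.Linear(hidden_dim + mem_dim, mem_dim, bias=False)

    @staticmethod
    def attend(query, memory):
        # weighted sum of memory cells; dot-product scoring is an assumed form of ATT(.)
        weights = F.softmax(memory @ query, dim=0)   # (num_cells,)
        return weights @ memory                      # (mem_dim,)

    def forward(self, h_t):
        # slot feature: rough intent first, then intent-aware attention over slots (Eq. 7)
        rough_int = self.attend(self.q_word(h_t), self.intent_mem)
        h_slot = self.attend(self.q_enriched(torch.cat([h_t, rough_int])), self.slot_mem)
        # intent feature: rough slot first, then slot-aware attention over intents (Eq. 9)
        rough_slot = self.attend(self.q_word(h_t), self.slot_mem)
        h_int = self.attend(self.q_enriched(torch.cat([h_t, rough_slot])), self.intent_mem)
        return h_slot, h_int
```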

Figure 3: The internal structure of our CM-block, which is composed of the deliberate attention, local calculation and global recurrence layers.

Local Calculation

Local context information is highly useful for sequence modeling Kurata et al. (2016); Wang et al. (2016). Zhang et al. (2018b) propose the S-LSTM to encode both local and sentence-level information simultaneously, and it has been shown to be more powerful for text representation than conventional BiLSTMs. We extend the S-LSTM with the slot-specific and intent-specific features retrieved from the memories.

Specifically, at each input position $t$, we take the local window context $\xi_t$, the word embedding $e_t$, the slot feature $h_t^{slot}$ and the intent feature $h_t^{int}$ as inputs to conduct the combinatorial calculation simultaneously. Formally, in the $l$-th layer, the hidden state $h_t^{l}$ is updated as follows:

(10)

where $\xi_t$ is the concatenation of hidden states in a local window, the gate vectors control the information flows among these inputs, and the corresponding weight matrices are model parameters. More details about the state transition can be found in Zhang et al. (2018b). In the first CM-block, the hidden state $h_t$ is initialized with the corresponding word embedding. In the other CM-blocks, $h_t$ is inherited from the output of the adjacent lower CM-block.

Through the above procedure, the hidden state at each word position is updated with abundant information from different perspectives, namely word embeddings, local contexts, and slot and intent representations. The local calculation layer in each CM-block has been shown to be highly useful for both tasks, and especially for the slot filling task, which will be validated in our experiments in Section 5.2.
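The following is a deliberately simplified stand-in for the extended S-LSTM update, showing only how the window context, word embedding and the retrieved slot/intent features enter the local calculation through a gate; the real CM-block uses the full set of S-LSTM gates from Zhang et al. (2018b).

```python
import torch
import torch.nn as nn

class LocalCalculation(nn.Module):
    """Simplified gated fusion of windowed states, word embeddings and the
    slot/intent features retrieved by the deliberate attention."""

    def __init__(self, hidden_dim, emb_dim, mem_dim, window=1):
        super().__init__()
        self.window = window
        in_dim = hidden_dim * (2 * window + 1) + emb_dim + 2 * mem_dim
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.gate = nn.Linear(in_dim, hidden_dim)

    def forward(self, H, E, H_slot, H_int):
        # H: (N, hidden) states of the previous layer; E: (N, emb) word embeddings
        # H_slot, H_int: (N, mem) features retrieved by the deliberate attention
        pad = torch.zeros(self.window, H.size(1))
        padded = torch.cat([pad, H, pad], dim=0)
        windows = [padded[i:i + 2 * self.window + 1].reshape(-1)
                   for i in range(H.size(0))]
        xi = torch.stack(windows)                     # local window contexts
        inputs = torch.cat([xi, E, H_slot, H_int], dim=1)
        g = torch.sigmoid(self.gate(inputs))          # gate controlling information flow
        return g * torch.tanh(self.proj(inputs)) + (1 - g) * H
```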

Global Recurrence

Bi-directional RNNs, especially BiLSTMs Hochreiter and Schmidhuber (1997), are able to encode both past and future information of a sentence, and have become a dominant method in various sequence modeling tasks Hammerton (2003); Sundermeyer et al. (2012). This makes BiLSTMs well suited to supplement global sequential information, which is insufficiently modeled in the previous local calculation layer. Thus we apply an additional BiLSTM layer upon the local calculation layer in each CM-block. By taking the slot- and intent-specific local context representations as inputs, we can obtain more specific global sequential representations. Formally, it takes the hidden state $h_t^{l}$ inherited from the local calculation layer as input, and conducts recurrent steps as follows:

$$\overrightarrow{h}_t = \overrightarrow{\mathrm{LSTM}}\big(h_t^{l},\ \overrightarrow{h}_{t-1}\big), \qquad \overleftarrow{h}_t = \overleftarrow{\mathrm{LSTM}}\big(h_t^{l},\ \overleftarrow{h}_{t+1}\big), \qquad h_t^{l} \leftarrow \big[\overrightarrow{h}_t;\ \overleftarrow{h}_t\big] \qquad (11)$$

The output "states" of the BiLSTMs are taken as the "states" input of the local calculation in the next CM-block. The global sequential information encoded by the BiLSTMs is shown to be necessary and effective for both tasks in our experiments in Section 5.2.
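A minimal sketch of this layer is shown below; splitting the hidden size between the two directions so that the output matches the input width is an assumption rather than a detail given in the paper.

```python
import torch
import torch.nn as nn

class GlobalRecurrence(nn.Module):
    """BiLSTM over the slot/intent-enriched local states of one CM-block."""

    def __init__(self, hidden_dim):
        super().__init__()
        # assumes an even hidden size so that forward + backward halves sum to hidden_dim
        self.bilstm = nn.LSTM(hidden_dim, hidden_dim // 2,
                              bidirectional=True, batch_first=True)

    def forward(self, H_local):
        # H_local: (N, hidden_dim) outputs of the local calculation layer
        out, _ = self.bilstm(H_local.unsqueeze(0))   # add a batch dimension
        return out.squeeze(0)                        # (N, hidden_dim), fed to the next CM-block
```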

3.4 Inference Layer

After multiple rounds of interactions among local context representations, global sequential information, and slot and intent features, we conduct predictions upon the final CM-block. For the predictions of slots, we take the hidden states $h_t$ along with the retrieved slot representations $h_t^{slot}$ (both from the final CM-block) as input features, and then conduct predictions of slots similarly to Equation (3) in Section 2:

$$\hat{y}^{slot} = \operatorname*{arg\,max}_{y' \in Y_X} score\big([\mathbf{H}; \mathbf{H}^{slot}],\ y'\big) \qquad (12)$$

For the prediction of the intent label, we firstly aggregate the hidden state $h_t$ and the retrieved intent representation $h_t^{int}$ at each word position (from the final CM-block as well) via mean pooling:

$$v^{int} = \frac{1}{N} \sum_{t=1}^{N} \big[h_t;\ h_t^{int}\big] \qquad (13)$$

and then take the summarized vector $v^{int}$ as the input feature to conduct the prediction of the intent, consistently with Equation (5) in Section 2.
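A small sketch of this inference step for the intent, assuming the per-word states and retrieved intent features are concatenated before mean pooling and fed to a linear-plus-softmax classifier:

```python
import torch
import torch.nn as nn

def predict_intent(H, H_int, classifier):
    """H: (N, hidden) states and H_int: (N, mem) retrieved intent features from the
    final CM-block; `classifier` is a linear layer over the intent label set."""
    v = torch.cat([H, H_int], dim=1).mean(dim=0)    # mean pooling over word positions (Eq. 13)
    return torch.softmax(classifier(v), dim=-1)     # distribution over intent labels

# usage sketch: classifier = nn.Linear(hidden_dim + mem_dim, num_intents)
```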

Dataset | SNIPS | ATIS | CAIS
Vocab Size | 11241 | 722 | 2146
Average Length | 9.15 | 11.28 | 8.65
# Intents | 7 | 18 | 11
# Slots | 72 | 128 | 75
# Train Set | 13084 | 4478 | 7995
# Validation Set | 700 | 500 | 994
# Test Set | 700 | 893 | 1012

Table 2: Dataset statistics.
Models | SNIPS Slot (F1) | SNIPS Intent (Acc) | ATIS Slot (F1) | ATIS Intent (Acc)
Joint GRU Zhang and Wang (2016) | – | – | 95.49 | 98.10
Self-Attention, Intent Gate Li et al. (2018) | – | – | 96.52 | 98.77
Bi-model Wang et al. (2018) | – | – | 96.89 | 98.99
Attention Bi-RNN Liu and Lane (2016) * | 87.80 | 96.70 | 95.98 | 98.21
Joint Seq2Seq Hakkani-Tür et al. (2016) * | 87.30 | 96.90 | 94.20 | 92.60
Slot-Gated (Intent Atten.) Goo et al. (2018) | 88.30 | 96.80 | 95.20 | 94.10
Slot-Gated (Full Atten.) Goo et al. (2018) | 88.80 | 97.00 | 94.80 | 93.60
CAPSULE-NLU Zhang et al. (2018a) | 91.80 | 97.70 | 95.20 | 95.00
Dilated CNN, Label-Recurrent Gupta et al. (2019) | 93.11 | 98.29 | 95.54 | 98.10
Sentence-State LSTM Zhang et al. (2018b) | 95.80 | 98.30 | 95.65 | 98.21
BiLSTMs + ELMoL Siddhant et al. (2018) | 93.29 | 98.83 | 95.62 | 97.42
BiLSTMs + ELMo Siddhant et al. (2018) | 93.90 | 99.29 | 95.42 | 97.30
Joint BERT Chen et al. (2019) | 97.00 | 98.60 | 96.10 | 97.50
CM-Net (Ours) | 97.15 | 99.29 | 96.20 | 99.10

Table 3: Results on the test sets of SNIPS and ATIS, where our CM-Net achieves state-of-the-art performance in most cases. "*" indicates that results are retrieved from Slot-Gated Goo et al. (2018), and "–" marks results not reported for that dataset; the Sentence-State LSTM results are from our implementation.

4 Experiments

4.1 Datasets and Metrics

We evaluate our proposed CM-Net on three real-world datasets, and their statistics are listed in Table 2.

Atis

The Airline Travel Information Systems (ATIS) corpus Hemphill et al. (1990) is the most widely used benchmark for SLU research. Please note that there are extra named entity features in the ATIS, which almost determine the slot tags. Since these hand-crafted features are not generally available in open domains Zhang and Wang (2016); Guo et al. (2014a), we train our model purely on the training set without additional hand-crafted features.

Snips

The SNIPS Natural Language Understanding benchmark (https://github.com/snipsco/nlu-benchmark/tree/master/2017-06-custom-intent-engines) Coucke et al. (2018) was collected in a crowdsourced fashion by Snips. The intents of this dataset are more balanced than those of the ATIS. We split off another 700 utterances as the validation set, following previous works Goo et al. (2018); Zhang et al. (2018a).

Cais

We collect utterances from Chinese Artificial Intelligence Speakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split according to the distribution of intents, and detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, the intent labels are biased toward the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 scheme used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme Ratinov and Roth (2009) in the sequence labeling field.

Metrics

Slot filling is typically treated as a sequence labeling problem, and thus we take the F1 score computed by the conlleval script (https://www.clips.uantwerpen.be/conll2000/chunking/conlleval.txt) as the token-level metric. The intent detection is evaluated with the classification accuracy. Specially, several utterances in the ATIS are tagged with more than one label. Following previous works Tur et al. (2010); Zhang and Wang (2016), we count an utterance as a correct classification if any ground truth label is predicted.
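As an illustration of this evaluation protocol, the sketch below uses seqeval (a Python reimplementation of the CoNLL chunking scorer) in place of the conlleval script for the slot F1, and implements the "any gold label counts" rule for ATIS intent accuracy.

```python
from seqeval.metrics import f1_score  # span-level F1, equivalent in spirit to conlleval

def slot_f1(gold_tags, pred_tags):
    # gold_tags / pred_tags: list of tag sequences, e.g. [["B-artist", "I-artist", "O"], ...]
    return f1_score(gold_tags, pred_tags)

def intent_accuracy(gold_labels, pred_labels):
    # gold_labels: list of sets of gold intents (ATIS utterances may carry several);
    # an utterance counts as correct if the predicted intent matches any gold label.
    correct = sum(1 for gold, pred in zip(gold_labels, pred_labels) if pred in gold)
    return correct / len(gold_labels)
```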

4.2 Implementation Details

All trainable parameters in our model are initialized by the method described in Glorot and Bengio (2010). We apply dropout Srivastava et al. (2014) to the embedding layer and hidden states with a rate of 0.5. All models are optimized by the Adam optimizer Kingma and Ba (2014) with gradient clipping of 3 Pascanu et al. (2013). The initial learning rate is set to 0.001 and decreases as training proceeds. We monitor the training process on the validation set and report the final result on the test set. A one-layer CNN with a filter of size 3 followed by max pooling is utilized to generate 100d character-aware word embeddings. The cased 300d GloVe is adopted to initialize word embeddings and kept fixed during training. In auxiliary experiments, the output hidden states of BERT are taken as additional word embeddings and kept fixed as well. We share the parameters of both memories with the parameter matrices in the corresponding softmax layers, which can be seen as introducing supervised signals into the memories to some extent. We tune the hyper-parameters for the number of CM-block layers (finally set to 3) and the loss weight $\lambda$ (finally set to 0.5), and empirically set other parameters to the values listed in the supplementary material.
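The optimization setup described above can be summarized in a few lines of PyTorch; the placeholder model and the exponential decay schedule are illustrative, since the paper only states that the learning rate decreases during training.

```python
import torch
import torch.nn as nn

# Reported hyper-parameters: Adam, initial lr 0.001, dropout 0.5, gradient clipping at 3.
model = nn.Sequential(nn.Linear(300, 300), nn.Dropout(0.5), nn.Linear(300, 128))  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)  # assumed decay schedule

def training_step(loss):
    """One optimization step with the reported gradient clipping."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=3.0)
    optimizer.step()

def end_of_epoch():
    scheduler.step()  # decrease the learning rate as training proceeds
```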

4.3 Main Results

Main results of our CM-Net on SNIPS and ATIS are shown in Table 3. Our CM-Net achieves state-of-the-art results on both datasets in terms of the slot filling F1 score and intent detection accuracy, except for the slot filling F1 score on the ATIS. We conjecture that the named entity features in the ATIS have a great impact on the slot filling result, as illustrated in Section 4.1. Since the SNIPS is collected from multiple domains with more balanced labels than the ATIS, the slot filling F1 score on the SNIPS is better able to demonstrate the superiority of our CM-Net.

It is noteworthy that the CM-Net achieves comparable results even when compared with models that exploit additional language models Siddhant et al. (2018); Chen et al. (2019). For a relatively fair comparison with those models, we conduct auxiliary experiments by leveraging the well-known BERT Devlin et al. (2018) as an external resource, and report the details in Section 5.3.

Figure 4: Investigations of the collaborative retrieval approach on slot filling (left) and intent detection (right), where "no slot2int" indicates removing the slot-aware attention for the intent representation, and similarly for "no int2slot" and "neither".
# | Models | Slot (F1) | Intent (Acc)
0 | CM-Net | 97.15 | 99.29
1 | – slot memory | 96.64 | 99.14
2 | – intent memory | 96.95 | 98.84
3 | – local calculation | 96.73 | 99.00
4 | – global recurrence | 96.80 | 98.57

Table 4: Ablation experiments on the SNIPS to investigate the impacts of various components, where "– slot memory" indicates removing the slot memory and its interactions with other components correspondingly, and similarly for the other rows.

5 Analysis

Since the SNIPS corpus is collected from multiple domains and its label distributions are more balanced when compared with the ATIS, we choose the SNIPS to elucidate properties of our CM-Net and conduct several additional experiments.

5.1 Do the Memories Promote Each Other?

In the CM-Net, the deliberate attention mechanism is proposed to perform information exchange between slots and intents in a collaborative manner. We conduct experiments to verify whether this kind of knowledge diffusion between the two memories helps them promote each other. More specifically, in each experimental setup we remove one direction of the diffusion (e.g., from slot to intent) or both. The results are illustrated in Figure 4.

We can observe obvious drops on both tasks when both directions of knowledge diffusion are removed (CM-Net vs. "neither"). For the slot filling task (left part of Figure 4), the F1 scores decrease slightly when the knowledge from slot to intent is blocked (CM-Net vs. "no slot2int"), and a more evident drop occurs when the knowledge from intent to slot is blocked (CM-Net vs. "no int2slot"). Similar observations can be made for the intent detection task (right part of Figure 4).

In conclusion, the bidirectional knowledge diffusion between slots and intents is necessary and effective for them to promote each other.

5.2 Ablation Experiments

We conduct ablation experiments to investigate the impacts of various components in our CM-Net. In particular, we remove one component among slot memory, intent memory, local calculation and global recurrence. Results of different combinations are presented in Table 4.

Once the slot memory and its corresponding interactions with other components are removed, the scores on both tasks decrease to some extent, with a more obvious decline on the slot filling task (row 1 vs. row 0), which is consistent with the conclusion of Section 5.1. Similar observations hold for the intent memory (row 2). The local calculation layer is designed to capture better local context representations, and it has an evident impact on the slot filling and a slighter effect on the intent detection (row 3 vs. row 0). The opposite holds for the global recurrence, which is supposed to model global sequential information and thus has a larger effect on the intent detection (row 4 vs. row 0).

Models | SNIPS Slot (F1) | SNIPS Intent (Acc)
BiLSTMs + ELMoL | 93.29 | 98.83
BiLSTMs + ELMo | 93.90 | 99.29
Joint BERT | 97.00 | 98.60
CM-Net + BERT | 97.31 | 99.32

Table 5: Results on the SNIPS benchmark with the assistance of a pre-trained language model, where we establish new state-of-the-art results on the SNIPS.

5.3 Effects of Pre-trained Language Models

Recently, there has been a growing body of work exploring neural language models trained on massive corpora to learn contextual representations (e.g., BERT Devlin et al. (2018) and ELMo Peters et al. (2018)). Inspired by the effectiveness of language model embeddings, we conduct experiments by leveraging BERT as an additional feature. The results in Table 5 show that we establish new state-of-the-art results on both tasks of the SNIPS.

5.4 Evaluation on the CAIS

We conduct experiments on our self-collected CAIS dataset to evaluate the generalizability of the CM-Net in a different language. We apply two baseline models for comparison: one is the popular BiLSTM + CRF architecture Huang et al. (2015) for sequence labeling tasks, and the other is the more powerful sentence-state LSTM Zhang et al. (2018b). The results listed in Table 6 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages.

Models | CAIS Slot (F1) | CAIS Intent (Acc)
BiLSTMs + CRF | 85.32 | 93.25
S-LSTM + CRF | 85.74 | 94.36
CM-Net | 86.16 | 94.56

Table 6: Results on our CAIS dataset, where the S-LSTM + CRF results are from our implementation of the S-LSTM.

6 Related Work

Memory Network

The memory network is a general machine learning framework introduced by Weston et al. (2014), which has been shown effective in question answering Weston et al. (2014); Sukhbaatar et al. (2015), machine translation Wang et al. (2016); Feng et al. (2017), aspect-level sentiment classification Tang et al. (2016), etc. For spoken language understanding, Chen et al. (2016) introduce memory mechanisms to encode historical utterances. In this paper, we propose two memories to explicitly capture the semantic correlations between slots and the intent in a given utterance, and devise a novel collaborative retrieval approach.

Interactions between slots and intents

Considering the semantic proximity between slots and intents, some works propose to enhance the slot filling task unidirectionally with the guidance of intent representations via gating mechanisms Goo et al. (2018); Li et al. (2018). Intuitively, the slot representations are also instructive to the intent detection task, and thus bidirectional interactions between slots and intents are beneficial for each other. Zhang et al. (2018a) propose a hierarchical capsule network to perform interactions from words to slots, slots to intents and intents to words in a pipeline manner, which is relatively limited in capturing the complicated correlations among them. In our CM-Net, information exchange is performed simultaneously, with knowledge diffusion in both directions. The experiments demonstrate the superiority of our CM-Net in capturing the semantic correlations between slots and intents.

Sentence-State LSTM

Zhang et al. (2018b) propose a novel graph RNN named the S-LSTM, which models the whole sentence and its words simultaneously. Inspired by the new perspective of state transition in the S-LSTM, we further extend it with task-specific (i.e., slot and intent) representations via our collaborative memories. In addition, the global information in the S-LSTM is modeled by aggregating local features with gating mechanisms, which may lose sight of the sequential information of the whole sentence. Therefore, we apply external BiLSTMs to supply global sequential features, which are shown to be highly necessary for both tasks in our experiments.

7 Conclusion

We propose a novel Collaborative Memory Network (CM-Net) for jointly modeling slot filling and intent detection. The CM-Net is able to explicitly capture the semantic correlations among words, slots and intents in a collaborative manner, and to incrementally enrich the information flows with local context and global sequential information. Experiments on two standard benchmarks and our CAIS corpus demonstrate the effectiveness and generalizability of our proposed CM-Net. In addition, we contribute the new corpus (CAIS) to the research community.

Acknowledgments

Liu, Chen and Xu are supported by the National Natural Science Foundation of China (Contracts 61370130, 61976015, 61473294 and 61876198), the Beijing Municipal Natural Science Foundation (Contract 4172047), and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010). We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions.

References

  • Q. Chen, Z. Zhuo, and W. Wang (2019) BERT for joint intent classification and slot filling. External Links: 1902.10909 Cited by: Table 3, §4.3.
  • Y. Chen, D. Hakkani-Tür, G. Tür, J. Gao, and L. Deng (2016) End-to-end memory networks with knowledge carryover for multi-turn spoken language understanding.. In Interspeech, pp. 3245–3249. Cited by: §6.
  • R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa (2011) Natural language processing (almost) from scratch. Journal of Machine Learning Research 12 (Aug), pp. 2493–2537. Cited by: §3.2.
  • A. Coucke, A. Saade, A. Ball, T. Bluche, A. Caulier, D. Leroy, C. Doumouro, T. Gisselbrecht, F. Caltagirone, T. Lavril, M. Primet, and J. Dureau (2018) Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. External Links: 1805.10190 Cited by: §1, §4.1.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §4.3, §5.3.
  • Y. Feng, S. Zhang, A. Zhang, D. Wang, and A. Abel (2017) Memory-augmented neural machine translation. External Links: 1708.02005 Cited by: §6.
  • G. D. Forney (1973) The viterbi algorithm. Proceedings of the IEEE 61 (3), pp. 268–278. Cited by: §2.
  • X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. Cited by: §4.2.
  • C. Goo, G. Gao, Y. Hsu, C. Huo, T. Chen, K. Hsu, and Y. Chen (2018) Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, pp. 753–757. External Links: Link, Document Cited by: §1, Table 3, §4.1, §6.
  • D. Guo, G. Tur, W. Yih, and G. Zweig (2014a) Joint semantic utterance classification and slot filling with recursive neural networks. In 2014 IEEE Spoken Language Technology Workshop (SLT), pp. 554–559. Cited by: §4.1.
  • D. Guo, G. Tur, W. Yih, and G. Zweig (2014b) Joint semantic utterance classification and slot filling with recursive neural networks. In 2014 IEEE Spoken Language Technology Workshop (SLT), pp. 554–559. Cited by: §1.
  • A. Gupta, J. Hewitt, and K. Kirchhoff (2019) Simple, fast, accurate intent classification and slot labeling. External Links: 1903.08268 Cited by: §1, Table 3.
  • P. Haffner, G. Tur, and J. H. Wright (2003) Optimizing svms for complex call classification. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP’03)., Vol. 1, pp. I–I. Cited by: §1.
  • D. Hakkani-Tür, G. Tür, A. Celikyilmaz, Y. Chen, J. Gao, L. Deng, and Y. Wang (2016) Multi-domain joint semantic frame parsing using bi-directional rnn-lstm.. In Interspeech, pp. 715–719. Cited by: Table 3.
  • J. Hammerton (2003) Named entity recognition with long short-term memory. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pp. 172–175. Cited by: §3.3.
  • C. T. Hemphill, J. J. Godfrey, and G. R. Doddington (1990) The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990, External Links: Link Cited by: §1, §4.1.
  • S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780. External Links: Document, Link Cited by: §3.3.
  • Z. Huang, W. Xu, and K. Yu (2015) Bidirectional LSTM-CRF models for sequence tagging. CoRR abs/1508.01991. External Links: Link, 1508.01991 Cited by: §5.4.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.2.
  • G. Kurata, B. Xiang, B. Zhou, and M. Yu (2016) Leveraging sentence-level information with encoder lstm for semantic slot filling. arXiv preprint arXiv:1601.01530. Cited by: §3.3.
  • J. Lafferty, A. McCallum, and F. C. Pereira (2001) Conditional random fields: probabilistic models for segmenting and labeling sequence data. Cited by: §2.
  • G. Lample, M. Ballesteros, S. Subramanian, K. Kawakami, and C. Dyer (2016) Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 260–270. External Links: Document, Link Cited by: §2.
  • C. Li, L. Li, and J. Qi (2018) A self-attentive model with gate mechanism for spoken language understanding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 3824–3833. External Links: Link Cited by: §1, Table 3, §6.
  • B. Liu and I. Lane (2016) Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454. Cited by: §1, Table 3.
  • G. Mesnil, Y. Dauphin, K. Yao, Y. Bengio, L. Deng, D. Hakkani-Tur, X. He, L. Heck, G. Tur, D. Yu, et al. (2014) Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing 23 (3), pp. 530–539. Cited by: §1.
  • R. Pascanu, T. Mikolov, and Y. Bengio (2013) On the difficulty of training recurrent neural networks. In International conference on machine learning, pp. 1310–1318. Cited by: §4.2.
  • J. Pennington, R. Socher, and C. D. Manning (2014) GloVe: global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. External Links: Link Cited by: §3.2.
  • M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. In NAACL, pp. 2227–2237. External Links: Link Cited by: §5.3.
  • L. Ratinov and D. Roth (2009) Design challenges and misconceptions in named entity recognition. In Proceedings of the thirteenth conference on computational natural language learning, pp. 147–155. Cited by: §4.1.
  • R. Sarikaya, G. E. Hinton, and B. Ramabhadran (2011) Deep belief nets for natural language call-routing. In 2011 IEEE International conference on acoustics, speech and signal processing (ICASSP), pp. 5680–5683. Cited by: §1.
  • A. Siddhant, A. Goyal, and A. Metallinou (2018) Unsupervised transfer learning for spoken language understanding in intelligent agents. AAAI. Cited by: Table 3, §4.3.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, pp. 1929–1958. External Links: Link Cited by: §4.2.
  • S. Sukhbaatar, J. Weston, R. Fergus, et al. (2015) End-to-end memory networks. In Advances in neural information processing systems, pp. 2440–2448. Cited by: §6.
  • M. Sundermeyer, R. Schlüter, and H. Ney (2012) LSTM neural networks for language modeling. In Thirteenth annual conference of the international speech communication association, Cited by: §3.3.
  • D. Tang, B. Qin, and T. Liu (2016) Aspect level sentiment classification with deep memory network. arXiv preprint arXiv:1605.08900. Cited by: §6.
  • G. Tur, D. Hakkani-Tür, and L. Heck (2010) What is left to be understood in atis?. In 2010 IEEE Spoken Language Technology Workshop, pp. 19–24. Cited by: §1, §4.1.
  • M. Wang, Z. Lu, H. Li, and Q. Liu (2016) Memory-enhanced decoder for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pp. 278–286. External Links: Link, Document Cited by: §6.
  • X. Wang, W. Jiang, and Z. Luo (2016) Combination of convolutional and recurrent neural network for sentiment analysis of short texts. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pp. 2428–2437. Cited by: §3.3.
  • Y. Wang, Y. Shen, and H. Jin (2018) A bi-model based RNN semantic frame parsing model for intent detection and slot filling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), New Orleans, Louisiana, pp. 309–314. External Links: Link, Document Cited by: §1, Table 3.
  • J. Weston, S. Chopra, and A. Bordes (2014) Memory networks. External Links: 1410.3916 Cited by: §6.
  • P. Xu and R. Sarikaya (2013) Convolutional neural network based triangular crf for joint intent detection and slot filling. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pp. 78–83. Cited by: §1.
  • C. Zhang, Y. Li, N. Du, W. Fan, and P. S. Yu (2018a) Joint slot filling and intent detection via capsule neural networks. arXiv preprint arXiv:1812.09471. Cited by: §1, Table 3, §4.1, §6.
  • X. Zhang and H. Wang (2016) A joint model of intent determination and slot filling for spoken language understanding.. In IJCAI, pp. 2993–2999. Cited by: Table 3, §4.1, §4.1.
  • Y. Zhang, Q. Liu, and L. Song (2018b) Sentence-state lstm for text representation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 317–327. External Links: Link Cited by: §3.3, §3.3, Table 3, §5.4, §6.