Concept tagging is a type of structured learning needed for natural language understanding (NLU) systems. In this task, meaning labels from a domain ontology are assigned to word sequences. In this paper, we review the algorithms developed over the last twenty-five years. We perform a comparative evaluation of generative, discriminative and deep learning methods on two public datasets, and we report on the statistical variability of the performance measurements. The third contribution is the release of a repository of the algorithms, datasets and recipes for NLU evaluation.
The NLU component of a conversational system requires the automatic extraction of concept tags, dialogue acts, domain labels and entities. In this paper we describe and review the algorithm development of the concept tagging (a.k.a. slot filling or entity extraction) task. It aims at computing a sequence of concept units, c_1, ..., c_M, from a sequence of words in natural language, w_1, ..., w_N. The task can be seen as a structured learning problem where words are the input and concepts are the output labels. In other words, the objective is to map a sentence (utterance) such as "I want to go from Boston to Atlanta on Monday" to the sequence of domain labels "null null null null null fromloc.city null toloc.city null depart_date.day_name", which would allow one to identify, for instance, that Boston is a departure city. Difficulties may arise from different factors, such as the variable token span of concepts, long-distance word dependencies, a large and ever-changing vocabulary, or subtle semantic implications that might be hard to capture at the surface level or without some prior context knowledge.
Since the early nineties [Pieraccini and Levin1992], the task has been designed as a core component of the natural language understanding process in domain-limited conversational systems. Over the years, algorithms have been developed for generative, discriminative and, more recently, deep learning frameworks. In this paper, we provide a comprehensive review of the algorithms, their parameters and their respective state-of-the-art performances. We discuss the relative advantages of and differences amongst the algorithms in terms of performance, statistical variability and optimal parameter settings. Last but not least, we have designed and provided a repository of the data, algorithms, implementations and parameter settings on two public datasets. The GitHub repository (www.github.com/fruttasecca/concept-tagging-with-neural-networks) is intended as a reference both for practitioners and for researchers developing algorithms.
With conversational AI gaining popularity, the area of NLU is too vast to mention all relevant, or even all recent, studies. Moreover, the objective of this paper is to benchmark an important subtask of NLU, concept tagging, as used by advanced conversational systems. We benchmark generative, discriminative and deep learning approaches to NLU; this work is in line with [Raymond and Riccardi2007, Mesnil et al.2015, Bechet and Raymond2018]. Unlike the previous comparative performance analyses mentioned above, in this paper we also benchmark deep learning architectures and compare them to generative and traditional discriminative algorithms. To the best of our knowledge, this is the first comprehensive comparison of concept tagging algorithms at this scale on public datasets with shared algorithm implementations (and their parameter settings).
Among the algorithms considered for benchmarking, we include a representative of the generative class, weighted finite state transducers (WFSTs); two discriminative algorithms, Support Vector Machines (SVMs) and Conditional Random Fields (CRFs); and a set of base neural network architectures and their combinations.
Weighted Finite State Transducers (we use the OpenFST, http://www.openfst.org, and OpenGRM, http://www.opengrm.org, libraries) cast concept tagging as a translation problem from words to concepts [Raymond and Riccardi2007], and usually consist of two components. The first component transduces words to concepts based on a score that can be either induced from data or manually designed; the second component is a stochastic conceptual language model, which re-scores concept sequences. The two components are composed to perform sequence-to-sequence translation and infer the best sequence using the Viterbi algorithm.
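Conceptually, the composition of the two components amounts to a Viterbi search over the sum of log-space scores: per-word word-to-concept scores from the transducer plus concept-bigram scores from the conceptual language model. A minimal sketch (the tag names and scores in the usage below are illustrative toy values, not taken from our experiments, and start/end transitions are omitted for brevity):

```python
import math

def viterbi(words, tags, emit, trans):
    """Find the best concept sequence for `words`.

    emit[(word, tag)]  -- log-score of tagging `word` with `tag`
                          (word-to-concept transducer component)
    trans[(prev, tag)] -- log-score of the concept bigram
                          (conceptual language-model component)
    """
    # best[t] = (score, backpointer) for paths ending in tag t
    best = {t: (emit.get((words[0], t), -math.inf), None) for t in tags}
    history = [best]
    for w in words[1:]:
        nxt = {}
        for t in tags:
            score, prev = max(
                (best[p][0] + trans.get((p, t), -math.inf)
                 + emit.get((w, t), -math.inf), p)
                for p in tags)
            nxt[t] = (score, prev)
        history.append(nxt)
        best = nxt
    # backtrace from the best final tag
    tag = max(best, key=lambda t: best[t][0])
    path = [tag]
    for layer in reversed(history[1:]):
        tag = layer[tag][1]
        path.append(tag)
    return list(reversed(path))
```

For example, with toy scores favoring "Boston" as a city name, `viterbi(["from", "Boston"], ...)` recovers the sequence `["null", "fromloc.city"]`.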
Support Vector Machines (SVMs) are used within the Yamcha tool [Kudo and Matsumoto2001], which performs sequence labeling using forward- and backward-moving classifiers. Labels automatically assigned to preceding tokens are used as dynamic features for the current token's label decision.
Conditional Random Fields (CRFs) [Lafferty et al.2001] (we use the CRFSUITE [Okazaki2007] implementation in our experiments) are a discriminative model based on a dependency graph and a set of features, each with an associated weight. Features are generally hand-crafted and their weights are learned from the training data. Additionally, we experiment with word embeddings as additional features for CRFs (CRF+EMB).
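A window-based feature template of the kind used for the CRF (and SVM) baselines might be sketched as follows; the window size, feature names, and example tokens are illustrative, not the exact templates from our experiments:

```python
def token_features(tokens, pos_tags, i, window=2):
    """Extract features for token i: the current word's lowercase form,
    prefix, suffix and POS tag, plus a window of surrounding tokens."""
    feats = {
        "bias": 1.0,
        "word.lower": tokens[i].lower(),
        "word.prefix3": tokens[i][:3],
        "word.suffix3": tokens[i][-3:],
        "pos": pos_tags[i],
    }
    for d in range(-window, window + 1):
        if d != 0 and 0 <= i + d < len(tokens):
            feats[f"word[{d:+d}]"] = tokens[i + d].lower()
    return feats
```

One such feature dict per token (for every token in the sentence) is what a CRFSUITE-style toolkit consumes; the toolkit then learns a weight for each observed (feature, label) pair.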
Recurrent Neural Networks (RNN). The first neural network architecture we have considered is an Elman RNN [Elman1990, Übeyli and Übeyli2012]; all neural architectures are implemented within the PyTorch framework (https://pytorch.org). In an RNN, the hidden state depends on the current input and the previous hidden state. The output (label), on the other hand, depends on the new hidden state.
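An Elman-style tagger of this form can be sketched in PyTorch as below; the class name and all dimensions are illustrative, not the settings used in our experiments:

```python
import torch
import torch.nn as nn

class ElmanTagger(nn.Module):
    """Embed words, run an Elman RNN over the sentence, and map each
    hidden state to a vector of concept-tag scores."""
    def __init__(self, vocab_size, tagset_size, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.RNN(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tagset_size)

    def forward(self, word_ids):                 # (batch, seq_len)
        h, _ = self.rnn(self.emb(word_ids))      # (batch, seq_len, hidden_dim)
        return self.out(h)                       # (batch, seq_len, tagset_size)
```

Training would proceed with a per-token cross-entropy loss over the tag scores; note that, unlike the CRF, each token's decision is made locally.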
Long Short-Term Memory (LSTM) networks try to tackle the vanishing gradient problem by introducing more complex mechanisms for information propagation and deletion, at the cost of a more complex model with more parameters to train, due to the system of gates it uses. The memory of the model is represented by the cell state and the hidden state, the latter of which also serves as the output for the current token. We experimented with a simple LSTM; an LSTM which receives as input the word embedding concatenated with character embeddings obtained through a convolutional layer [Józefowicz et al.2016] (LSTM-CHAR-REP); and an LSTM with both pre-trained embeddings and dynamic embeddings learned from the training data (LSTM-2CH). In LSTM-2CH, two separate LSTM modules run in parallel and their outputs are concatenated for each word. As in the rest of the deep learning models, the output is then fed to a fully connected layer that maps every token to the concept tag space.
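The character-level representation used in LSTM-CHAR-REP can be sketched as a convolution over character embeddings, max-pooled over the characters of each word; the dimensions below are assumptions, not the settings used in the experiments:

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character embeddings -> 1-D convolution -> max-pool over characters,
    producing one fixed-size vector per word."""
    def __init__(self, n_chars, char_dim=30, n_filters=50, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel, padding=1)

    def forward(self, char_ids):                 # (n_words, max_word_len)
        x = self.emb(char_ids).transpose(1, 2)   # (n_words, char_dim, len)
        return self.conv(x).max(dim=2).values    # (n_words, n_filters)
```

Each per-word vector would then be concatenated to the corresponding word embedding before being fed to the LSTM.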
Gated Recurrent Units (GRU) [Cho et al.2014] use a reset and an update gate, two vectors of weights that decide what information is deleted (or re-scaled) in the current hidden state and how it will contribute to the new hidden state, which is also the output for the current input. Compared to the LSTM model, this means fewer parameters to train, but it introduces a constraint on memory, since the hidden state is also used as the output.
Convolutional Neural Networks (CONV) [Majumder et al.2017, Kim2014] treat each sentence as a matrix of shape (number of words in the sentence, embedding size) and convolve it with kernels of different sizes, passing over the input sequence token by token, bigram by bigram and trigram by trigram. The result of the convolution is used as the initial hidden state of a GRU RNN, which is then run on the embedded tokens; the GRU thus starts with information about the sequence at a global level.
FC-INIT is similar to CONV; the difference is that the initial hidden state is instead pre-computed by fully connected layers operating on the whole sequence.
The ENCODER architecture [Cho et al.2014] casts the problem as sequence-to-sequence translation and consists of two GRU RNNs. The encoder, the first GRU RNN, encodes the input sequence into a fixed-size vector (its final hidden state). The decoder, another GRU RNN, uses the output of the encoder as its initial hidden state. At each step, the decoder receives as input the label predicted at the previous step, starting with a special start token.
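A greedy-decoding sketch of this encoder-decoder scheme follows; the class name, hyperparameters and start-token handling are illustrative assumptions (batch size 1, no teacher forcing):

```python
import torch
import torch.nn as nn

class Seq2SeqTagger(nn.Module):
    """Encoder GRU summarizes the sentence; decoder GRU emits one tag per
    word, re-feeding its own previous prediction as input."""
    def __init__(self, vocab_size, tagset_size, emb_dim=64, hidden=128, sos=0):
        super().__init__()
        self.sos = sos
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.tag_emb = nn.Embedding(tagset_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tagset_size)

    def forward(self, word_ids):                      # (1, seq_len)
        _, h = self.encoder(self.word_emb(word_ids))  # final hidden state
        prev = torch.tensor([[self.sos]])             # special start token
        tags = []
        for _ in range(word_ids.size(1)):
            o, h = self.decoder(self.tag_emb(prev), h)
            prev = self.out(o).argmax(dim=2)          # feed back prediction
            tags.append(prev.item())
        return tags
```

This makes the dependency of each predicted label on the previous one explicit, at the price of conditioning the whole output on a single fixed-size encoding of the input.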
The ATTENTION architecture is similar to ENCODER, with the addition of an attention mechanism [Bahdanau et al.2014] over the outputs of the encoder. This allows the network to focus on specific parts of the input sequence. The attention weights are computed by a single fully connected layer that receives as input the embedding of the current word concatenated with the last hidden state.
LSTM-CRF is an architecture where the LSTM provides class scores for each token, and the Viterbi algorithm decides on the labels of the sequence at a global level, using bigram transition probabilities that are trained jointly with the rest of the parameters. We also experimented with a variant that considers character-level information (LSTM-CRF-CHAR-REP).
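The global score the CRF layer assigns to a candidate tag sequence is simply the sum of the per-token emission scores produced by the LSTM and the learned bigram transition scores; since the transition table has only tagset-size-squared entries, the added parameter cost is small. A minimal sketch (the scores in the test usage are toy values):

```python
def sequence_score(emissions, transitions, tags):
    """Global score of a tag sequence.

    emissions[i][t]   -- score of tag t for token i (from the LSTM)
    transitions[p][t] -- learned score of the bigram (p, t);
                         a tagset_size x tagset_size table
    """
    score = emissions[0][tags[0]]
    for i in range(1, len(tags)):
        score += transitions[tags[i - 1]][tags[i]] + emissions[i][tags[i]]
    return score
```

At inference time, the Viterbi algorithm maximizes this score over all tag sequences, which is what makes the optimization sequence-level rather than per-token.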
The evaluation of the algorithms is performed on two datasets. The Air Travel Information System (ATIS) dataset consists of sentences from users querying for information about flights, departure dates, arrivals, etc. The training set consists of 4,978 sentences and the test set of 893 sentences. The average sentence length is around 11 tokens, and there are a total of 127 unique tags (with IOB prefixes). The large majority of tokens missing an embedding are either numbers or airport/basis/aircraft codes; the training set has a total of 18 token types missing an embedding, and the test set has 9.
The second corpus (MOVIES) was produced from the NL-SPARQL corpus [Chen et al.2014] by semi-automatically aligning SPARQL query values to utterance tokens. The dataset follows the split of the original corpus, with 3,338 sentences (1,728 unique tokens) in the training set and 1,084 sentences (1,039 unique tokens) in the test set. The average sentence length is 6.50 tokens and the OOV rate is 0.24. There are 43 concept tags in the dataset. Given the Google embeddings, once every number is treated as a class number, 66 token types in the training set and 26 in the test set lack an embedding.
| Model | Settings | # of params | F1 |
| --- | --- | --- | --- |
| WFST | order 4, Kneser-Ney | 7,907 states, 842,178 arcs | 82.96 |
| | order 4, Kneser-Ney | 4,124 states, 76,000 arcs | 93.08 |
| SVM | (−4, 4) window of tokens; (−1, 0) of POS tag and prefix; postfix and lemma of current word; previous two labels | 10,364 | 83.74 |
| | (−6, 4) window of tokens; (−1, 0) of prefix and postfix; previous two labels | 16,361 | 92.91 |
| CRF | (−4, 4) window of tokens; (−1, 0) of POS tag and prefix; postfix and lemma of current word; previous+current and current+next word conjunctions; bigram model | 1,200K | 83.80 |
| | (−6, 4) window of tokens; (−1, 0) of prefix; postfix of current word; previous+current word conjunction; bigram model | 2,201K | 93.98 |
| CRF+EMB | all above + (−4, 4) word embeddings + current token character embeddings | 1,390K | 85.85 |
| | all above + (−6, 4) word embeddings + current token character embeddings | 3,185K | 94.00 |
| Model | hidden | epochs | batch size | lr | drop rate | emb norm | # of params | min | avg | best |
Min, average and best scores are obtained by training the same model with the same hyperparameters but different parameter initializations. Averages are over 50 runs for MOVIES and 25 for ATIS. For each architecture, the first row reports the F1 score for the MOVIES dataset and the second for ATIS. Hyperparameter search was done randomly over ranges of values taken from published work. The number of parameters refers to the network parameters plus the embeddings, when the latter are unfrozen. Given a hidden layer size reported in the hidden column, each component in a bidirectional architecture has a hidden layer size of half that value. Similarly, each of the two LSTM components in the LSTM-2CH model has half the reported value as its hidden layer size; each of their bidirectional components thus has a hidden layer size equal to a quarter of the reported value.
One of our first observations is that models such as WFST, SVM and CRF yield competitive results with simple setups and few hyperparameters to tune. Training our deep learning models and searching their hyperparameters would have been unfeasible without dedicated hardware, while it took a fraction of the effort for WFST, SVM and CRF. Moreover, adding word embeddings as features to the CRF allowed it to outperform most of the deep neural networks.
We attribute this to two factors: (1) since these models, unlike neural networks, do not learn feature representations from data, they are simpler and faster to train; and, most importantly, (2) these models usually perform global optimization over the label sequence, while neural networks usually do not. Augmenting a neural network with a CRF layer is not expensive in terms of parameters: a CRF component on top of an LSTM adds at most the square of the tag-set size in parameters (about 2,500 for the MOVIES dataset), and it provides the best performing model.
There seems to be no strong correlation between the number of parameters of a model and the variance of its performance with respect to the random initialization of those parameters. This is surprising, given the intuition that more parameters could lower the probability of getting stuck in a poor local minimum. It may be that different initializations lead to different training times required to reach good local minima.
One of the main outcomes of our experiments is that sequence-level optimization is key to achieving the best performance. Moreover, augmenting any neural architecture with a CRF layer on top has a very low cost in terms of parameters and a very good return in terms of performance. Our best performing models (in terms of average F1) are LSTM-CRF and LSTM-CRF-CHAR-REP. In general, adding sequence-level control to different types of NN architectures leads to very good model performance. Another important observation is the variance in performance of NN models with respect to parameter initialization. Consequently, we strongly believe that this variability should be taken into consideration and reported (with the lowest and highest performances) to improve the reliability and replicability of published results.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746–1751.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV ’15, pages 1529–1537, Washington, DC, USA. IEEE Computer Society.