State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- the Penn Treebank WSJ corpus for part-of-speech (POS) tagging and the CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both datasets --- 97.55% accuracy for POS tagging and 91.21% F1 for NER.
Linguistic sequence labeling, such as part-of-speech (POS) tagging and named entity recognition (NER), is one of the first stages in deep language understanding, and its importance has been well recognized in the natural language processing community. Natural language processing (NLP) systems, like syntactic parsing [Nivre and Scholz2004, McDonald et al.2005, Koo and Collins2010, Ma and Zhao2012a, Ma and Zhao2012b, Chen and Manning2014, Ma and Hovy2015] and entity coreference resolution [Ng2010, Ma et al.2016], are becoming more sophisticated, in part because of utilizing the output information of POS tagging or NER systems.

Most traditional high-performance sequence labeling models are linear statistical models, including Hidden Markov Models (HMM) and Conditional Random Fields (CRF) [Ratinov and Roth2009, Passos et al.2014, Luo et al.2015], which rely heavily on hand-crafted features and task-specific resources. For example, English POS taggers benefit from carefully designed word spelling features; orthographic features and external resources such as gazetteers are widely used in NER. However, such task-specific knowledge is costly to develop [Ma and Xia2014], making sequence labeling models difficult to adapt to new tasks or new domains.

In the past few years, non-linear neural networks with distributed word representations, also known as word embeddings, as input have been broadly applied to NLP problems with great success. collobert2011natural proposed a simple but effective feed-forward neural network that independently classifies labels for each word by using contexts within a fixed-size window. Recently, recurrent neural networks (RNN)
[Goller and Kuchler1996], together with their variants such as long short-term memory (LSTM) [Hochreiter and Schmidhuber1997, Gers et al.2000] and gated recurrent unit (GRU)
[Cho et al.2014], have shown great success in modeling sequential data. Several RNN-based neural network models have been proposed to solve sequence labeling tasks like speech recognition [Graves et al.2013], POS tagging [Huang et al.2015] and NER [Chiu and Nichols2015, Hu et al.2016], achieving competitive performance against traditional models. However, even systems that have utilized distributed representations as inputs have used these to augment, rather than replace, hand-crafted features (e.g. word spelling and capitalization patterns). Their performance drops rapidly when the models solely depend on neural embeddings.
In this paper, we propose a neural network architecture for sequence labeling. It is a truly end-to-end model requiring no task-specific resources, feature engineering, or data pre-processing beyond pre-trained word embeddings on unlabeled corpora. Thus, our model can be easily applied to a wide range of sequence labeling tasks on different languages and domains. We first use convolutional neural networks (CNNs)
[LeCun et al.1989] to encode character-level information of a word into its character-level representation. Then we combine character- and word-level representations and feed them into a bi-directional LSTM (BLSTM) to model context information of each word. On top of BLSTM, we use a sequential CRF to jointly decode labels for the whole sentence. We evaluate our model on two linguistic sequence labeling tasks — POS tagging on the Penn Treebank WSJ [Marcus et al.1993], and NER on English data from the CoNLL 2003 shared task [Tjong Kim Sang and De Meulder2003]. Our end-to-end model outperforms previous state-of-the-art systems, obtaining 97.55% accuracy for POS tagging and 91.21% F1 for NER. The contributions of this work are (i) proposing a novel neural network architecture for linguistic sequence labeling; (ii) giving empirical evaluations of this model on benchmark data sets for two classic NLP tasks; and (iii) achieving state-of-the-art performance with this truly end-to-end system.

In this section, we describe the components (layers) of our neural network architecture. We introduce the neural layers in our network one by one from bottom to top.
Previous studies [Santos and Zadrozny2014, Chiu and Nichols2015] have shown that CNN is an effective approach to extract morphological information (like the prefix or suffix of a word) from characters of words and encode it into neural representations. Figure 1 shows the CNN we use to extract character-level representation of a given word. The CNN is similar to the one in chiu2015named, except that we use only character embeddings as the inputs to CNN, without character type features. A dropout layer [Srivastava et al.2014] is applied before character embeddings are input to CNN.
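As a rough illustration of this layer (not the authors' Theano implementation), the sketch below convolves a width-3 filter bank over one word's character embeddings and max-pools over time. The 30-dimensional character embeddings and 30 filters follow Table 1; the tanh non-linearity and zero padding are assumptions.

```python
import numpy as np

def char_cnn_representation(char_embs, filters, bias, window=3):
    """Character-level CNN: convolve over a word's character embeddings
    and max-pool over time to get a fixed-size vector.

    char_embs: (num_chars, char_dim)  embeddings of one word's characters
    filters:   (num_filters, window * char_dim)
    bias:      (num_filters,)
    returns:   (num_filters,) character-level representation
    """
    num_chars, char_dim = char_embs.shape
    pad = window // 2
    padded = np.vstack([np.zeros((pad, char_dim)), char_embs, np.zeros((pad, char_dim))])
    # One convolution window per original character position.
    conv = np.stack([
        filters @ padded[i:i + window].reshape(-1) + bias
        for i in range(num_chars)
    ])                                   # (num_chars, num_filters)
    return np.tanh(conv).max(axis=0)     # max-over-time pooling

# Toy usage with hyper-parameters from Table 1 (30-dim characters, 30 filters).
rng = np.random.default_rng(0)
word_chars = rng.normal(size=(7, 30))        # e.g. a 7-character word
W = rng.normal(scale=0.1, size=(30, 3 * 30))
b = np.zeros(30)
print(char_cnn_representation(word_chars, W, b).shape)  # (30,)
```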
Recurrent neural networks (RNNs) are a powerful family of connectionist models that capture time dynamics via cycles in the graph. Though, in theory, RNNs are capable of capturing long-distance dependencies, in practice they fail due to the gradient vanishing/exploding problems [Bengio et al.1994, Pascanu et al.2012].
LSTMs [Hochreiter and Schmidhuber1997] are variants of RNNs designed to cope with these gradient vanishing problems. Basically, an LSTM unit is composed of three multiplicative gates which control the proportions of information to forget and to pass on to the next time step. Figure 2 gives the basic structure of an LSTM unit.
Formally, the formulas to update an LSTM unit at time $t$ are:

$$
\begin{aligned}
\mathbf{i}_t &= \sigma(\mathbf{W}_i \mathbf{h}_{t-1} + \mathbf{U}_i \mathbf{x}_t + \mathbf{b}_i) \\
\mathbf{f}_t &= \sigma(\mathbf{W}_f \mathbf{h}_{t-1} + \mathbf{U}_f \mathbf{x}_t + \mathbf{b}_f) \\
\tilde{\mathbf{c}}_t &= \tanh(\mathbf{W}_c \mathbf{h}_{t-1} + \mathbf{U}_c \mathbf{x}_t + \mathbf{b}_c) \\
\mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \tilde{\mathbf{c}}_t \\
\mathbf{o}_t &= \sigma(\mathbf{W}_o \mathbf{h}_{t-1} + \mathbf{U}_o \mathbf{x}_t + \mathbf{b}_o) \\
\mathbf{h}_t &= \mathbf{o}_t \odot \tanh(\mathbf{c}_t)
\end{aligned}
$$

where $\sigma$ is the element-wise sigmoid function and $\odot$ is the element-wise product. $\mathbf{x}_t$ is the input vector (e.g. word embedding) at time $t$, and $\mathbf{h}_t$ is the hidden state (also called output) vector storing all the useful information at (and before) time $t$. $\mathbf{U}_i, \mathbf{U}_f, \mathbf{U}_c, \mathbf{U}_o$ denote the weight matrices of the different gates for input $\mathbf{x}_t$, and $\mathbf{W}_i, \mathbf{W}_f, \mathbf{W}_c, \mathbf{W}_o$ are the weight matrices for hidden state $\mathbf{h}_t$. $\mathbf{b}_i, \mathbf{b}_f, \mathbf{b}_c, \mathbf{b}_o$ denote the bias vectors. It should be noted that we do not include peephole connections [Gers et al.2003] in our LSTM formulation.

For many sequence labeling tasks it is beneficial to have access to both past (left) and future (right) contexts. However, the LSTM's hidden state takes information only from the past, knowing nothing about the future. An elegant solution whose effectiveness has been proven by previous work [Dyer et al.2015] is the bi-directional LSTM (BLSTM). The basic idea is to present each sequence forwards and backwards to two separate hidden states to capture past and future information, respectively. Then the two hidden states are concatenated to form the final output.
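A minimal NumPy sketch of the update equations and of the forward/backward concatenation may help make the BLSTM layer concrete; the gate stacking order, parameter shapes, and toy dimensions are illustrative choices, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, U, W, b):
    """One LSTM step following the update equations above (no peepholes).
    U, W, b stack the parameters for the input, forget, cell and output
    gates in that order: U is (4H, D), W is (4H, H), b is (4H,)."""
    H = h_prev.shape[0]
    z = U @ x_t + W @ h_prev + b
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    c_tilde = np.tanh(z[2 * H:3 * H])
    o = sigmoid(z[3 * H:4 * H])    # output gate
    c = f * c_prev + i * c_tilde
    h = o * np.tanh(c)
    return h, c

def blstm(xs, params_fwd, params_bwd, H):
    """Run the sequence forwards and backwards, concatenate hidden states."""
    def run(seq, params):
        h, c, outs = np.zeros(H), np.zeros(H), []
        for x in seq:
            h, c = lstm_step(x, h, c, *params)
            outs.append(h)
        return outs
    fwd = run(xs, params_fwd)
    bwd = run(xs[::-1], params_bwd)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Toy usage: input dim 130 (100-dim word emb + 30-dim char rep), state size 200.
rng = np.random.default_rng(1)
D, H = 130, 200
def rand_params():
    return (rng.normal(scale=0.05, size=(4 * H, D)),
            rng.normal(scale=0.05, size=(4 * H, H)),
            np.zeros(4 * H))
outputs = blstm([rng.normal(size=D) for _ in range(5)], rand_params(), rand_params(), H)
print(len(outputs), outputs[0].shape)   # 5 (400,)
```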
For sequence labeling (or general structured prediction) tasks, it is beneficial to consider the correlations between neighboring labels and to jointly decode the best chain of labels for a given input sentence. For example, in POS tagging an adjective is more likely to be followed by a noun than a verb, and in NER with standard BIO2 annotation [Tjong Kim Sang and Veenstra1999] I-ORG cannot follow I-PER. Therefore, we model label sequences jointly using a conditional random field (CRF) [Lafferty et al.2001], instead of decoding each label independently.
Formally, we use $\mathbf{z} = \{\mathbf{z}_1, \ldots, \mathbf{z}_n\}$ to represent a generic input sequence, where $\mathbf{z}_i$ is the input vector of the $i$th word. $\mathbf{y} = \{y_1, \ldots, y_n\}$ represents a generic sequence of labels for $\mathbf{z}$, and $\mathcal{Y}(\mathbf{z})$ denotes the set of possible label sequences for $\mathbf{z}$. The probabilistic model for sequence CRF defines a family of conditional probability $p(\mathbf{y} \mid \mathbf{z}; \mathbf{W}, \mathbf{b})$ over all possible label sequences $\mathbf{y}$ given $\mathbf{z}$ with the following form:

$$
p(\mathbf{y} \mid \mathbf{z}; \mathbf{W}, \mathbf{b}) = \frac{\prod_{i=1}^{n} \psi_i(y_{i-1}, y_i, \mathbf{z})}{\sum_{\mathbf{y}' \in \mathcal{Y}(\mathbf{z})} \prod_{i=1}^{n} \psi_i(y'_{i-1}, y'_i, \mathbf{z})}
$$

where $\psi_i(y', y, \mathbf{z}) = \exp(\mathbf{W}_{y',y}^{\top} \mathbf{z}_i + b_{y',y})$ are potential functions, and $\mathbf{W}_{y',y}$ and $b_{y',y}$ are the weight vector and bias corresponding to label pair $(y', y)$, respectively.
For CRF training, we use maximum conditional likelihood estimation. For a training set $\{(\mathbf{z}^{(i)}, \mathbf{y}^{(i)})\}$, the logarithm of the likelihood (a.k.a. the log-likelihood) is given by:

$$
L(\mathbf{W}, \mathbf{b}) = \sum_i \log p(\mathbf{y}^{(i)} \mid \mathbf{z}^{(i)}; \mathbf{W}, \mathbf{b})
$$

Maximum likelihood training chooses parameters such that the log-likelihood $L(\mathbf{W}, \mathbf{b})$ is maximized.
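For intuition, the sketch below computes this log-likelihood for one sentence with the forward algorithm in log space. It assumes the common factorization into per-position emission scores plus a label-pair transition matrix, a slight simplification of the pairwise potentials defined above, and the function names are hypothetical.

```python
import numpy as np

def log_sum_exp(a, axis=None):
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def crf_log_likelihood(emissions, transitions, labels):
    """Log-likelihood of one gold label sequence under a linear-chain CRF.

    emissions:   (n, L) unary scores per position and label (e.g. from the BLSTM output)
    transitions: (L, L) score of moving from label y' to label y
    labels:      (n,)   gold label indices
    """
    n, L = emissions.shape
    # Score of the gold path: emissions along the path plus transitions between them.
    gold = emissions[np.arange(n), labels].sum()
    gold += transitions[labels[:-1], labels[1:]].sum()
    # Partition function via the forward algorithm in log space.
    alpha = emissions[0]
    for t in range(1, n):
        alpha = log_sum_exp(alpha[:, None] + transitions, axis=0) + emissions[t]
    log_Z = log_sum_exp(alpha)
    return gold - log_Z
```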
Decoding is to search for the label sequence $\mathbf{y}^*$ with the highest conditional probability:

$$
\mathbf{y}^* = \underset{\mathbf{y} \in \mathcal{Y}(\mathbf{z})}{\operatorname{argmax}} \; p(\mathbf{y} \mid \mathbf{z}; \mathbf{W}, \mathbf{b})
$$

For a sequence CRF model, in which only interactions between two successive labels are considered, training and decoding can be solved efficiently by the Viterbi algorithm.
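A corresponding Viterbi sketch, under the same assumed emission/transition parameterization as above, recovers the highest-scoring label sequence by dynamic programming:

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find the highest-scoring label sequence for a linear-chain CRF.

    emissions:   (n, L) per-position label scores
    transitions: (L, L) label-pair scores (from label y' to label y)
    returns:     list of n label indices
    """
    n, L = emissions.shape
    score = emissions[0]                    # best score ending in each label
    backpointers = []
    for t in range(1, n):
        # score[y'] + transitions[y', y] for every label pair (y', y)
        total = score[:, None] + transitions
        backpointers.append(total.argmax(axis=0))
        score = total.max(axis=0) + emissions[t]
    # Follow back-pointers from the best final label.
    best = [int(score.argmax())]
    for bp in reversed(backpointers):
        best.append(int(bp[best[-1]]))
    return best[::-1]

# Toy example: 4 positions, 3 labels.
rng = np.random.default_rng(2)
print(viterbi_decode(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))
```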
Finally, we construct our neural network model by feeding the output vectors of BLSTM into a CRF layer. Figure 3 illustrates the architecture of our network in detail.
For each word, the character-level representation is computed by the CNN in Figure 1 with character embeddings as inputs. Then the character-level representation vector is concatenated with the word embedding vector and fed into the BLSTM network. Finally, the output vectors of BLSTM are fed to the CRF layer to jointly decode the best label sequence. As shown in Figure 3, dropout layers are applied on both the input and output vectors of BLSTM. Experimental results show that using dropout significantly improves the performance of our model (see Section 4.5 for details).
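Since the original system was implemented in Theano, the following PyTorch-flavored sketch (with a hypothetical module name) is only meant to show how the layers are wired — character CNN, concatenation with word embeddings, dropout, BLSTM, and a linear projection to per-label emission scores that a CRF layer would then decode jointly. Hyper-parameters follow Table 1; the 30-dimensional character embeddings are an assumption.

```python
import torch
import torch.nn as nn

class BLSTMCNNEncoder(nn.Module):
    """Sketch of the BLSTM-CNN encoder; it emits per-token label scores
    that a CRF layer would decode jointly."""
    def __init__(self, word_vocab, char_vocab, num_labels,
                 word_dim=100, char_dim=30, num_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, num_filters, kernel_size=3, padding=1)
        self.dropout = nn.Dropout(0.5)
        self.blstm = nn.LSTM(word_dim + num_filters, hidden,
                             bidirectional=True, batch_first=True)
        self.to_labels = nn.Linear(2 * hidden, num_labels)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        B, T, C = char_ids.shape
        # Dropout on character embeddings before the CNN, then max-over-time pooling.
        chars = self.char_emb(char_ids).view(B * T, C, -1).transpose(1, 2)
        char_rep = self.char_cnn(self.dropout(chars)).max(dim=2).values.view(B, T, -1)
        # Concatenate word embedding and character representation per token.
        x = torch.cat([self.word_emb(word_ids), char_rep], dim=-1)
        # Dropout on BLSTM input and output vectors, as in Figure 3.
        out, _ = self.blstm(self.dropout(x))
        return self.to_labels(self.dropout(out))   # (batch, seq_len, num_labels)
```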
In this section, we provide details about training the neural network. We implement the neural network using the Theano library
[Bergstra et al.2010]. The computations for a single model are run on a GeForce GTX TITAN X GPU. Using the settings discussed in this section, model training requires about 12 hours for POS tagging and 8 hours for NER.

Word Embeddings. We use Stanford's publicly available GloVe 100-dimensional embeddings (http://nlp.stanford.edu/projects/glove/) trained on 6 billion words from Wikipedia and web text [Pennington et al.2014]. We also run experiments on two other sets of published embeddings, namely Senna 50-dimensional embeddings (http://ronan.collobert.com/senna/) trained on Wikipedia and the Reuters RCV-1 corpus [Collobert et al.2011], and Google's Word2Vec 300-dimensional embeddings (https://code.google.com/archive/p/word2vec/) trained on 100 billion words from Google News [Mikolov et al.2013]. To test the effectiveness of pretrained word embeddings, we also experimented with randomly initialized 100-dimensional embeddings, where embeddings are uniformly sampled from the range $[-\sqrt{3/\mathrm{dim}}, +\sqrt{3/\mathrm{dim}}]$, where $\mathrm{dim}$ is the dimension of the embeddings [He et al.2015]. The performance of the different word embeddings is discussed in Section 4.4.
Character Embeddings. Character embeddings are initialized with uniform samples from $[-\sqrt{3/\mathrm{dim}}, +\sqrt{3/\mathrm{dim}}]$, where we set $\mathrm{dim} = 30$.
Weight Matrices and Bias Vectors. Matrix parameters are randomly initialized with uniform samples from $[-\sqrt{6/(r+c)}, +\sqrt{6/(r+c)}]$, where $r$ and $c$ are the number of rows and columns in the structure [Glorot and Bengio2010]. Bias vectors are initialized to zero, except the bias $\mathbf{b}_f$ for the forget gate in LSTM, which is initialized to 1.0 [Jozefowicz et al.2015].
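The initialization scheme described above can be summarized in a short sketch; the gate stacking order and the toy dimensions are assumptions consistent with the earlier LSTM sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_embedding(vocab_size, dim):
    """Embeddings sampled uniformly from [-sqrt(3/dim), +sqrt(3/dim)]."""
    bound = np.sqrt(3.0 / dim)
    return rng.uniform(-bound, bound, size=(vocab_size, dim))

def glorot_uniform(rows, cols):
    """Weight matrices sampled from [-sqrt(6/(r+c)), +sqrt(6/(r+c))]."""
    bound = np.sqrt(6.0 / (rows + cols))
    return rng.uniform(-bound, bound, size=(rows, cols))

H, D = 200, 130
W_gates = glorot_uniform(4 * H, H)      # hidden-to-gate weights
U_gates = glorot_uniform(4 * H, D)      # input-to-gate weights
b_gates = np.zeros(4 * H)
b_gates[H:2 * H] = 1.0                  # forget-gate bias initialized to 1.0
char_emb = uniform_embedding(85, 30)    # e.g. a hypothetical 85-character vocabulary
```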
Optimization Algorithm. Parameter optimization is performed with mini-batch stochastic gradient descent (SGD) with batch size 10 and momentum 0.9. We choose an initial learning rate of $\eta_0$ ($\eta_0 = 0.01$ for POS tagging and $\eta_0 = 0.015$ for NER, see Section 3.3), and the learning rate is updated on each epoch of training as $\eta_t = \eta_0 / (1 + \rho t)$, with decay rate $\rho = 0.05$ and $t$ the number of epochs completed. To reduce the effects of “gradient exploding”, we use gradient clipping of 5.0 [Pascanu et al.2012]. We explored more sophisticated optimization algorithms such as AdaDelta [Zeiler2012], Adam [Kingma and Ba2014] and RMSProp [Dauphin et al.2015], but none of them meaningfully improved upon SGD with momentum and gradient clipping in our preliminary experiments.

Early Stopping. We use early stopping [Giles2001, Graves et al.2013] based on performance on validation sets. The “best” parameters appear at around 50 epochs, according to our experiments.
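A schematic training loop tying these pieces together — SGD with momentum 0.9, the epoch-wise decay $\eta_t = \eta_0 / (1 + \rho t)$, gradient clipping at 5.0, and early stopping on the development set. The callables `compute_gradients` and `dev_score` and the structure of `batches` are placeholders, not the paper's code.

```python
import numpy as np

def clip_gradients(grads, threshold=5.0):
    """Rescale gradients if their global L2 norm exceeds the threshold."""
    norm = np.sqrt(sum(float((g ** 2).sum()) for g in grads.values()))
    scale = threshold / norm if norm > threshold else 1.0
    return {k: g * scale for k, g in grads.items()}

def train(params, batches, compute_gradients, dev_score,
          eta0=0.01, decay=0.05, momentum=0.9, max_epochs=50):
    """params: dict of NumPy arrays; batches: list of mini-batches;
    compute_gradients(params, batch) -> dict of gradients;
    dev_score(params) -> validation metric (higher is better)."""
    velocity = {k: np.zeros_like(v) for k, v in params.items()}
    best, best_params = -np.inf, None
    for epoch in range(max_epochs):
        eta = eta0 / (1.0 + decay * epoch)          # learning-rate decay per epoch
        for batch in batches:
            grads = clip_gradients(compute_gradients(params, batch))
            for k in params:
                velocity[k] = momentum * velocity[k] - eta * grads[k]
                params[k] += velocity[k]
        score = dev_score(params)                   # early stopping on the dev set
        if score > best:
            best, best_params = score, {k: v.copy() for k, v in params.items()}
    return best_params
```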
Fine Tuning. For each of the embeddings, we fine-tune initial embeddings, modifying them during gradient updates of the neural network model by back-propagating gradients. The effectiveness of this method has been previously explored in sequential and structured prediction problems [Collobert et al.2011, Peng and Dredze2015].
Dropout Training. To mitigate overfitting, we apply the dropout method [Srivastava et al.2014] to regularize our model. As shown in Figures 1 and 3, we apply dropout on character embeddings before they are input to the CNN, and on both the input and output vectors of BLSTM. We fix the dropout rate at 0.5 for all dropout layers in all experiments. We obtain significant improvements in model performance after using dropout (see Section 4.5).
Layer | Hyper-parameter | POS | NER |
---|---|---|---|
CNN | window size | 3 | 3 |
 | number of filters | 30 | 30 |
LSTM | state size | 200 | 200 |
 | initial state | 0.0 | 0.0 |
 | peepholes | no | no |
Dropout | dropout rate | 0.5 | 0.5 |
 | batch size | 10 | 10 |
 | initial learning rate | 0.01 | 0.015 |
 | decay rate | 0.05 | 0.05 |
 | gradient clipping | 5.0 | 5.0 |
Table 1 summarizes the chosen hyper-parameters for all experiments. We tune the hyper-parameters on the development sets by random search. Due to time constraints it is infeasible to do a random search across the full hyper-parameter space. Thus, for the tasks of POS tagging and NER we try to share as many hyper-parameters as possible. Note that the final hyper-parameters for these two tasks are almost the same, except for the initial learning rate. We set the state size of the LSTM to 200; tuning this parameter did not significantly impact the performance of our model. For the CNN, we use 30 filters with window length 3.
As mentioned before, we evaluate our neural network model on two sequence labeling tasks: POS tagging and NER.
POS Tagging. For English POS tagging, we use the Wall Street Journal (WSJ) portion of the Penn Treebank (PTB) [Marcus et al.1993], which contains 45 different POS tags. In order to compare with previous work, we adopt the standard splits — sections 0–18 as training data, sections 19–21 as development data and sections 22–24 as test data [Manning2011, Søgaard2011].
NER. For NER, we perform experiments on the English data from the CoNLL 2003 shared task [Tjong Kim Sang and De Meulder2003]. This data set contains four different types of named entities: PERSON, LOCATION, ORGANIZATION, and MISC. We use the BIOES tagging scheme instead of standard BIO2, as previous studies have reported meaningful improvements with this scheme [Ratinov and Roth2009, Dai et al.2015, Lample et al.2016].
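The BIO2-to-BIOES conversion is a small deterministic relabeling; a sketch (not the authors' script) is shown below for concreteness.

```python
def bio2_to_bioes(tags):
    """Convert a BIO2 tag sequence (B-X / I-X / O) to BIOES:
    single-token entities become S-X and entity-final tokens become E-X."""
    bioes = []
    for i, tag in enumerate(tags):
        if tag == "O":
            bioes.append(tag)
            continue
        prefix, etype = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        continues = nxt == "I-" + etype
        if prefix == "B":
            bioes.append(("B-" if continues else "S-") + etype)
        else:  # prefix == "I"
            bioes.append(("I-" if continues else "E-") + etype)
    return bioes

print(bio2_to_bioes(["B-PER", "I-PER", "O", "B-ORG"]))
# ['B-PER', 'E-PER', 'O', 'S-ORG']
```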
The corpora statistics are shown in Table 2. We did not perform any pre-processing on the data sets, leaving our system truly end-to-end.
Dataset | | WSJ | CoNLL-2003 |
---|---|---|---|
Train | SENT | 38,219 | 14,987 |
Train | TOKEN | 912,344 | 204,567 |
Dev | SENT | 5,527 | 3,466 |
Dev | TOKEN | 131,768 | 51,578 |
Test | SENT | 5,462 | 3,684 |
Test | TOKEN | 129,654 | 46,666 |
Model | POS Dev (Acc.) | POS Test (Acc.) | NER Dev (Prec.) | NER Dev (Recall) | NER Dev (F1) | NER Test (Prec.) | NER Test (Recall) | NER Test (F1) |
---|---|---|---|---|---|---|---|---|
BRNN | 96.56 | 96.76 | 92.04 | 89.13 | 90.56 | 87.05 | 83.88 | 85.44 |
BLSTM | 96.88 | 96.93 | 92.31 | 90.85 | 91.57 | 87.77 | 86.23 | 87.00 |
BLSTM-CNN | 97.34 | 97.33 | 92.52 | 93.64 | 93.07 | 88.53 | 90.21 | 89.36 |
BLSTM-CNN-CRF | 97.46 | 97.55 | 94.85 | 94.63 | 94.74 | 91.35 | 91.06 | 91.21 |
We first run ablation studies to dissect the effectiveness of each component (layer) of our neural network architecture. We compare the performance with three baseline systems — BRNN, the bidirectional RNN; BLSTM, the bidirectional LSTM; and BLSTM-CNN, the combination of BLSTM with CNN to model character-level information. All these models are run using Stanford's GloVe 100-dimensional word embeddings and the same hyper-parameters shown in Table 1. According to the results shown in Table 3, BLSTM obtains better performance than BRNN on all evaluation metrics of both tasks. The BLSTM-CNN model significantly outperforms the BLSTM model, showing that character-level representations are important for linguistic sequence labeling tasks. This is consistent with results reported by previous work [Santos and Zadrozny2014, Chiu and Nichols2015]. Finally, by adding a CRF layer for joint decoding we achieve significant improvements over the BLSTM-CNN model for both POS tagging and NER on all metrics. This demonstrates that jointly decoding label sequences can significantly benefit the final performance of neural network models.

Model | Acc. |
---|---|
gimenez2004svmtool | 97.16 |
toutanova2003feature | 97.27 |
manning2011part | 97.28 |
collobert2011natural | 97.29 |
santos2014learning | 97.32 |
shen2007guided | 97.33 |
sun2014structure | 97.36 |
sogaard:2011:ACL-HLT20111 | 97.50 |
This paper | 97.55 |
Table 4 illustrates the results of our model for POS tagging, together with seven previous top-performing systems for comparison. Our model significantly outperforms Senna [Collobert et al.2011], which is a feed-forward neural network model using capitalization and discrete suffix features, as well as data pre-processing. Moreover, our model achieves a 0.23% improvement in accuracy over “CharWNN” [Santos and Zadrozny2014], which is a neural network model based on Senna that also uses CNNs to model character-level representations. This demonstrates the effectiveness of BLSTM for modeling sequential data and the importance of joint decoding with a structured prediction model.

Compared with traditional statistical models, our system achieves state-of-the-art accuracy, obtaining a 0.05% improvement over the previously best reported result by sogaard:2011:ACL-HLT20111. It should be noted that huang2015bidirectional also evaluated their BLSTM-CRF model for POS tagging on the WSJ corpus, but they used a different split of the training/dev/test data sets. Thus, their results are not directly comparable with ours.
Model | F1 |
---|---|
chieu2002named | 88.31 |
florian2003named | 88.76 |
ando2005framework | 89.31 |
collobert2011natural | 89.59 |
huang2015bidirectional | 90.10 |
chiu2015named | 90.77 |
ratinov2009design | 90.80 |
lin2009phrase | 90.90 |
passos-kumar-mccallum:2014:W14-16 | 90.90 |
lample:2016:NAACL | 90.94 |
luo-EtAl:2015:EMNLP2 | 91.20 |
This paper | 91.21 |
Table 5 shows the F1 scores of previous models for NER on the test data set from the CoNLL 2003 shared task; for the purpose of comparison, we list their results together with ours. Similar to the observations for POS tagging, our model achieves significant improvements over Senna and the other three neural models, namely the BLSTM-CRF proposed by huang2015bidirectional, the LSTM-CNNs model proposed by chiu2015named, and the BLSTM-CRF by lample:2016:NAACL. huang2015bidirectional utilized discrete spelling, POS and context features, chiu2015named used character-type, capitalization, and lexicon features, and all three models used some task-specific data pre-processing, while our model does not require any carefully designed features or data pre-processing. We should point out that the result (90.77%) reported by chiu2015named is not directly comparable with ours, because their final model was trained on the combination of the training and development data sets (running our model in the same setting, we obtain a 91.37% F1 score).

To our knowledge, the previous best F1 score (91.20) reported on the CoNLL 2003 data set is by the joint NER and entity linking model of [Luo et al.2015] (the numbers are taken from Table 3 of the original paper; while there is a clear inconsistency among the reported precision (91.5%), recall (91.4%) and F1 (91.2%), it is unclear in which way they are incorrect). This model used many hand-crafted features including stemming and spelling features, POS and chunk tags, WordNet clusters, Brown clusters, as well as external knowledge bases such as Freebase and Wikipedia. Our end-to-end model slightly improves over this model, by 0.01%, yielding state-of-the-art performance.
Embedding | Dimension | POS | NER |
---|---|---|---|
Random | 100 | 97.13 | 80.76 |
Senna | 50 | 97.44 | 90.28 |
Word2Vec | 300 | 97.40 | 84.91 |
GloVe | 100 | 97.55 | 91.21 |
As mentioned in Section 3.1, in order to test the importance of pretrained word embeddings, we performed experiments with different sets of publicly available word embeddings, as well as a random sampling method, to initialize our model. Table 6 gives the performance of the three pretrained word embeddings, together with the randomly sampled one. According to the results in Table 6, models using pretrained word embeddings obtain a significant improvement over the ones using random embeddings. Comparing the two tasks, NER relies more heavily on pretrained embeddings than POS tagging. This is consistent with results reported by previous work [Collobert et al.2011, Huang et al.2015, Chiu and Nichols2015].

Among the pretrained embeddings, Stanford's GloVe 100-dimensional embeddings achieve the best results on both tasks, about 0.1% better in POS accuracy and 0.9% better in NER F1 score than the Senna 50-dimensional embeddings. This is different from the results reported by chiu2015named, where Senna achieved slightly better performance on NER than other embeddings. Google's Word2Vec 300-dimensional embeddings obtain performance similar to Senna on POS tagging, still slightly behind GloVe. For NER, however, the performance with Word2Vec is far behind GloVe and Senna. One possible reason that Word2Vec is not as good as the other two embeddings on NER is vocabulary mismatch — the Word2Vec embeddings were trained in a case-sensitive manner and exclude many common symbols such as punctuation and digits. Since we do not use any data pre-processing to deal with such common symbols or rare words, this may be an issue when using Word2Vec.
Dropout | POS Train | POS Dev | POS Test | NER Train | NER Dev | NER Test |
---|---|---|---|---|---|---|
No | 98.46 | 97.06 | 97.11 | 99.97 | 93.51 | 89.25 |
Yes | 97.86 | 97.46 | 97.55 | 99.63 | 94.74 | 91.21 |
Subset | POS Dev | POS Test | NER Dev | NER Test |
---|---|---|---|---|
IV | 127,247 | 125,826 | 4,616 | 3,773 |
OOTV | 2,960 | 2,412 | 1,087 | 1,597 |
OOEV | 659 | 588 | 44 | 8 |
OOBV | 902 | 828 | 195 | 270 |
POS | Dev IV | Dev OOTV | Dev OOEV | Dev OOBV | Test IV | Test OOTV | Test OOEV | Test OOBV |
---|---|---|---|---|---|---|---|---|
LSTM-CNN | 97.57 | 93.75 | 90.29 | 80.27 | 97.55 | 93.45 | 90.14 | 80.07 |
LSTM-CNN-CRF | 97.68 | 93.65 | 91.05 | 82.71 | 97.77 | 93.16 | 90.65 | 82.49 |

NER | Dev IV | Dev OOTV | Dev OOEV | Dev OOBV | Test IV | Test OOTV | Test OOEV | Test OOBV |
---|---|---|---|---|---|---|---|---|
LSTM-CNN | 94.83 | 87.28 | 96.55 | 82.90 | 90.07 | 89.45 | 100.00 | 78.44 |
LSTM-CNN-CRF | 96.49 | 88.63 | 97.67 | 86.91 | 92.14 | 90.73 | 100.00 | 80.60 |
To better understand the behavior of our model, we perform error analysis on out-of-vocabulary (OOV) words. Specifically, we partition each data set into four subsets — in-vocabulary words (IV), out-of-training-vocabulary words (OOTV), out-of-embedding-vocabulary words (OOEV) and out-of-both-vocabulary words (OOBV). A word is considered IV if it appears in both the training and embedding vocabulary, and OOBV if it appears in neither. OOTV words are those that do not appear in the training set but do appear in the embedding vocabulary, while OOEV words are those that appear in the training set but not in the embedding vocabulary. For NER, an entity is considered OOBV if it contains at least one word not in the training set and at least one word not in the embedding vocabulary; the other three subsets are defined in a similar manner. Table 8 presents the statistics of this partition on each corpus. The embedding we used is Stanford's GloVe with dimension 100, the same as in Section 4.2.
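The word-level partition can be expressed as a simple membership test (a hypothetical helper; the entity-level rule for NER described above would be layered on top):

```python
def partition(word, train_vocab, emb_vocab):
    """Assign a word to IV / OOTV / OOEV / OOBV based on vocabulary membership."""
    in_train, in_emb = word in train_vocab, word in emb_vocab
    if in_train and in_emb:
        return "IV"
    if in_emb:           # in the embedding vocabulary only
        return "OOTV"
    if in_train:         # in the training vocabulary only
        return "OOEV"
    return "OOBV"        # in neither

# Toy usage with hypothetical vocabularies.
print(partition("bank", {"bank", "the"}, {"bank", "river"}))   # IV
print(partition("river", {"bank", "the"}, {"bank", "river"}))  # OOTV
```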
Table 9 illustrates the performance of our model on the different subsets of words, together with the baseline LSTM-CNN model for comparison. The largest improvements appear on the OOBV subsets of both corpora. This demonstrates that by adding a CRF layer for joint decoding, our model is more powerful on words that are outside both the training and embedding vocabularies.
In recent years, several different neural network architectures have been proposed and successfully applied to linguistic sequence labeling such as POS tagging, chunking and NER. Among these neural architectures, the three approaches most similar to our model are the BLSTM-CRF model proposed by huang2015bidirectional, the LSTM-CNNs model by chiu2015named and the BLSTM-CRF by lample:2016:NAACL.
huang2015bidirectional used a BLSTM for word-level representations and a CRF for joint label decoding, which is similar to our model. But there are two main differences between their model and ours. First, they did not employ CNNs to model character-level information. Second, they combined their neural network model with hand-crafted features to improve performance, making their model not an end-to-end system. chiu2015named proposed a hybrid of BLSTM and CNNs to model both character- and word-level representations, which is similar to the first two layers of our model. They evaluated their model on NER and achieved competitive performance. Our model differs mainly from theirs in using a CRF for joint decoding. Moreover, their model is not truly end-to-end either, as it utilizes external knowledge such as character-type, capitalization and lexicon features, and some data pre-processing specific to NER (e.g. replacing all sequences of digits 0-9 with a single “0”). Recently, lample:2016:NAACL proposed a BLSTM-CRF model for NER, which utilizes a BLSTM to model both character- and word-level information, and uses the same data pre-processing as chiu2015named. Instead, we use a CNN to model character-level information, achieving better NER performance without any data pre-processing.
There are several other neural networks previously proposed for sequence labeling. labeau2015non proposed an RNN-CNNs model for German POS tagging. This model is similar to the LSTM-CNNs model of chiu2015named, with the difference of using vanilla RNNs instead of LSTMs. Another neural architecture employing CNNs to model character-level information is the “CharWNN” architecture [Santos and Zadrozny2014], which is inspired by the feed-forward network of [Collobert et al.2011]. CharWNN obtained near state-of-the-art accuracy on English POS tagging (see Section 4.3 for details). A similar model has also been applied to Spanish and Portuguese NER [dos Santos et al.2015]. ling-EtAl:2015:EMNLP2 and YangSC:arxiv16 also used BLSTMs to compose character embeddings into word representations, which is similar to lample:2016:NAACL. peng-dredze:2016:ACL improved NER for Chinese social media with word segmentation.
In this paper, we proposed a neural network architecture for sequence labeling. It is a truly end-to-end model relying on no task-specific resources, feature engineering or data pre-processing. Compared with previous state-of-the-art systems, we achieved state-of-the-art performance on two linguistic sequence labeling tasks.

There are several potential directions for future work. First, our model can be further improved by exploring multi-task learning approaches to combine more useful and correlated information. For example, we could jointly train a neural network model with both POS and NER tags to improve the intermediate representations learned in our network. Another interesting direction is to apply our model to data from other domains such as social media (Twitter and Weibo). Since our model does not require any domain- or task-specific knowledge, it should be straightforward to apply it to these domains.
This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853.

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Syntax, Semantics and Structure in Statistical Translation, page 103.

Rich Caruana, Steve Lawrence, and Lee Giles. 2001. Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping. In Advances in Neural Information Processing Systems 13, volume 13, page 402. MIT Press.

Jesús Giménez and Lluís Màrquez. 2004. SVMTool: A general POS tagger generator based on support vector machines. In Proceedings of LREC-2004.

Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249–256.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034.