We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.
Deep learning models have achieved remarkable results in computer vision (Krizhevsky et al., 2012) and speech recognition (Graves et al., 2013) in recent years. Within natural language processing, much of the work with deep learning methods has involved learning word vector representations through neural language models (Bengio et al., 2003; Yih et al., 2011; Mikolov et al., 2013) and performing composition over the learned word vectors for classification [Collobert et al. 2011]. Word vectors, wherein words are projected from a sparse, 1-of-$V$ encoding (here $V$ is the vocabulary size) onto a lower dimensional vector space via a hidden layer, are essentially feature extractors that encode semantic features of words in their dimensions. In such dense representations, semantically close words are likewise close—in euclidean or cosine distance—in the lower dimensional vector space.
Convolutional neural networks (CNN) utilize layers with convolving filters that are applied to local features [LeCun et al. 1998]. Originally invented for computer vision, CNN models have subsequently been shown to be effective for NLP and have achieved excellent results in semantic parsing [Yih et al. 2014], search query retrieval [Shen et al. 2014], sentence modeling [Kalchbrenner et al. 2014], and other traditional NLP tasks [Collobert et al. 2011].
In the present work, we train a simple CNN with one layer of convolution on top of word vectors obtained from an unsupervised neural language model. These vectors were trained by Mikolov et al. (2013) on 100 billion words of Google News, and are publicly available (https://code.google.com/p/word2vec/). We initially keep the word vectors static and learn only the other parameters of the model. Despite little tuning of hyperparameters, this simple model achieves excellent results on multiple benchmarks, suggesting that the pre-trained vectors are ‘universal’ feature extractors that can be utilized for various classification tasks. Learning task-specific vectors through fine-tuning results in further improvements. We finally describe a simple modification to the architecture to allow for the use of both pre-trained and task-specific vectors by having multiple channels.
Our work is philosophically similar to Razavian et al. (2014), which showed that for image classification, feature extractors obtained from a pre-trained deep learning model perform well on a variety of tasks—including tasks that are very different from the original task for which the feature extractors were trained.
The model architecture, shown in figure 1, is a slight variant of the CNN architecture of Collobert et al. (2011). Let $\mathbf{x}_i \in \mathbb{R}^k$ be the $k$-dimensional word vector corresponding to the $i$-th word in the sentence. A sentence of length $n$ (padded where necessary) is represented as

$$\mathbf{x}_{1:n} = \mathbf{x}_1 \oplus \mathbf{x}_2 \oplus \dots \oplus \mathbf{x}_n, \qquad (1)$$

where $\oplus$ is the concatenation operator. In general, let $\mathbf{x}_{i:i+j}$ refer to the concatenation of words $\mathbf{x}_i, \mathbf{x}_{i+1}, \dots, \mathbf{x}_{i+j}$. A convolution operation involves a filter $\mathbf{w} \in \mathbb{R}^{hk}$, which is applied to a window of $h$ words to produce a new feature. For example, a feature $c_i$ is generated from a window of words $\mathbf{x}_{i:i+h-1}$ by

$$c_i = f(\mathbf{w} \cdot \mathbf{x}_{i:i+h-1} + b). \qquad (2)$$

Here $b \in \mathbb{R}$ is a bias term and $f$ is a non-linear function such as the hyperbolic tangent. This filter is applied to each possible window of words in the sentence $\{\mathbf{x}_{1:h}, \mathbf{x}_{2:h+1}, \dots, \mathbf{x}_{n-h+1:n}\}$ to produce a feature map

$$\mathbf{c} = [c_1, c_2, \dots, c_{n-h+1}], \qquad (3)$$

with $\mathbf{c} \in \mathbb{R}^{n-h+1}$. We then apply a max-over-time pooling operation [Collobert et al. 2011] over the feature map and take the maximum value $\hat{c} = \max\{\mathbf{c}\}$ as the feature corresponding to this particular filter. The idea is to capture the most important feature—one with the highest value—for each feature map. This pooling scheme naturally deals with variable sentence lengths.
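To make the single-filter computation concrete, here is a minimal NumPy sketch of equations (2)–(3) followed by max-over-time pooling; the shapes and variable names are illustrative and not taken from any released implementation.

```python
# Single-filter convolution over word windows plus max-over-time pooling.
import numpy as np

k, n, h = 300, 7, 3                      # word vector dim, sentence length, window size
sentence = np.random.randn(n, k)         # x_1, ..., x_n stacked row-wise
w = np.random.randn(h * k)               # filter w in R^{hk}
b = 0.0                                  # bias term

def feature_map(sentence, w, b, h):
    """Apply one filter to every window of h words (equation 2) and
    return the feature map c = [c_1, ..., c_{n-h+1}] (equation 3)."""
    n = sentence.shape[0]
    c = np.empty(n - h + 1)
    for i in range(n - h + 1):
        window = sentence[i:i + h].reshape(-1)   # x_{i:i+h-1}, concatenated
        c[i] = np.tanh(np.dot(w, window) + b)    # c_i = f(w . x_{i:i+h-1} + b)
    return c

c = feature_map(sentence, w, b, h)
c_hat = c.max()   # max-over-time pooling: one feature per filter
```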
We have described the process by which one feature is extracted from one filter. The model uses multiple filters (with varying window sizes) to obtain multiple features. These features form the penultimate layer and are passed to a fully connected softmax layer whose output is the probability distribution over labels.
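A hedged PyTorch sketch of the single-channel architecture just described (several filter widths, max-over-time pooling, dropout on the penultimate layer as discussed in the regularization section below, and a final softmax layer); the class name and defaults are our own choices, and sentences are assumed to be padded to at least the largest filter width.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, num_classes=2,
                 filter_widths=(3, 4, 5), feature_maps=100, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-d convolution per filter width h, each producing `feature_maps` filters.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, feature_maps, kernel_size=h) for h in filter_widths
        )
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(feature_maps * len(filter_widths), num_classes)

    def forward(self, token_ids):                      # token_ids: (batch, n), padded
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, n)
        pooled = [conv(x).relu().max(dim=2).values     # max-over-time per filter
                  for conv in self.convs]
        z = torch.cat(pooled, dim=1)                   # penultimate layer
        return self.fc(self.dropout(z))                # logits; softmax applied in the loss
```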
In one of the model variants, we experiment with having two ‘channels’ of word vectors—one that is kept static throughout training and one that is fine-tuned via backpropagation (section 3.2). (We employ language from computer vision where a color image has red, green, and blue channels.) In the multichannel architecture, illustrated in figure 1, each filter is applied to both channels and the results are added to calculate $c_i$ in equation (2). The model is otherwise equivalent to the single channel architecture.
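The two-channel variant can be sketched along the same lines: one frozen and one trainable copy of the word2vec weights, with each filter applied to both channels and the responses summed before the non-linearity. The stand-in weight tensor and dimensions below are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

vocab_size, k = 20000, 300
w2v_weights = torch.randn(vocab_size, k)   # stand-in for vectors loaded from word2vec

static_emb = nn.Embedding.from_pretrained(w2v_weights, freeze=True)           # static channel
nonstatic_emb = nn.Embedding.from_pretrained(w2v_weights.clone(), freeze=False)  # fine-tuned channel

conv = nn.Conv1d(k, 100, kernel_size=3, bias=False)  # shared filter weights w
bias = torch.zeros(100, 1)                            # shared bias b (learnable inside a real module)

def conv_both_channels(token_ids):
    """Apply each filter to both channels and add the results (equation 2)."""
    x_static = static_emb(token_ids).transpose(1, 2)       # (batch, k, n)
    x_nonstatic = nonstatic_emb(token_ids).transpose(1, 2)
    return torch.tanh(conv(x_static) + conv(x_nonstatic) + bias)
```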
For regularization we employ dropout on the penultimate layer with a constraint on $l_2$-norms of the weight vectors [Hinton et al. 2012]. Dropout prevents co-adaptation of hidden units by randomly dropping out—i.e., setting to zero—a proportion $p$ of the hidden units during forward-backpropagation. That is, given the penultimate layer $\mathbf{z} = [\hat{c}_1, \dots, \hat{c}_m]$ (note that here we have $m$ filters), instead of using

$$y = \mathbf{w} \cdot \mathbf{z} + b \qquad (4)$$

for output unit $y$ in forward propagation, dropout uses

$$y = \mathbf{w} \cdot (\mathbf{z} \circ \mathbf{r}) + b, \qquad (5)$$

where $\circ$ is the element-wise multiplication operator and $\mathbf{r} \in \mathbb{R}^m$ is a ‘masking’ vector of Bernoulli random variables with probability $p$ of being 1. Gradients are backpropagated only through the unmasked units. At test time, the learned weight vectors are scaled by $p$ such that $\hat{\mathbf{w}} = p\mathbf{w}$, and $\hat{\mathbf{w}}$ is used (without dropout) to score unseen sentences. We additionally constrain $l_2$-norms of the weight vectors by rescaling $\mathbf{w}$ to have $\|\mathbf{w}\|_2 = s$ whenever $\|\mathbf{w}\|_2 > s$ after a gradient descent step.
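A NumPy sketch of the dropout masking in equation (5), the test-time rescaling by $p$, and the $l_2$-norm constraint; the values of $p$ and $s$ match those used in our experiments, and everything else is illustrative.

```python
import numpy as np

m, p, s = 300, 0.5, 3.0
rng = np.random.default_rng(0)

z = rng.standard_normal(m)           # penultimate layer z = [c_hat_1, ..., c_hat_m]
w = rng.standard_normal(m)           # weight vector for one output unit

# Training: mask units with a Bernoulli vector r (1 = keep, with probability p).
r = rng.binomial(1, p, size=m)
y_train = w @ (z * r)                # y = w . (z o r) + b, bias omitted for brevity

# Test time: use the scaled weights w_hat = p * w, without any mask.
y_test = (p * w) @ z

# After a gradient step: rescale w so that ||w||_2 = s whenever ||w||_2 > s.
norm = np.linalg.norm(w)
if norm > s:
    w = w * (s / norm)
```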
We test our model on various benchmarks. Summary statistics of the datasets are in table 1.
MR: Movie reviews with one sentence per review. Classification involves detecting positive/negative reviews [Pang and Lee 2005] (https://www.cs.cornell.edu/people/pabo/movie-review-data/).
SST-1: Stanford Sentiment Treebank—an extension of MR but with train/dev/test splits provided and fine-grained labels (very positive, positive, neutral, negative, very negative), re-labeled by Socher et al. (2013) (http://nlp.stanford.edu/sentiment/). Data is actually provided at the phrase level and hence we train the model on both phrases and sentences but only score on sentences at test time, as in Socher et al. (2013), Kalchbrenner et al. (2014), and Le and Mikolov (2014). Thus the training set is an order of magnitude larger than listed in table 1.
SST-2: Same as SST-1 but with neutral reviews removed and binary labels.
CR: Customer reviews of various products (cameras, MP3s etc.). Task is to predict positive/negative reviews [Hu and Liu 2004] (http://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html).
For all datasets we use: rectified linear units, filter windows ($h$) of 3, 4, 5 with 100 feature maps each, dropout rate ($p$) of 0.5, $l_2$ constraint ($s$) of 3, and mini-batch size of 50. These values were chosen via a grid search on the SST-2 dev set.
We do not otherwise perform any dataset-specific tuning other than early stopping on dev sets. For datasets without a standard dev set we randomly select 10% of the training data as the dev set. Training is done through stochastic gradient descent over shuffled mini-batches with the Adadelta update rule [Zeiler 2012].
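As a rough illustration of this training regime (shuffled mini-batches of 50, the Adadelta update rule, early stopping on the dev set), the sketch below assumes the `TextCNN` class from the earlier example and torch-style datasets yielding `(token_ids, label)` pairs; `max_epochs` and `patience` are our own choices, not values from the paper.

```python
import torch
from torch.utils.data import DataLoader

@torch.no_grad()
def evaluate(model, dataset):
    """Accuracy on a held-out dev set."""
    model.eval()
    correct = total = 0
    for token_ids, labels in DataLoader(dataset, batch_size=50):
        correct += (model(token_ids).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return correct / total

def train(model, train_set, dev_set, max_epochs=25, patience=5):
    loader = DataLoader(train_set, batch_size=50, shuffle=True)  # shuffled mini-batches of 50
    optimizer = torch.optim.Adadelta(model.parameters())         # Adadelta update rule
    loss_fn = torch.nn.CrossEntropyLoss()
    best_acc, stale_epochs = 0.0, 0

    for epoch in range(max_epochs):
        model.train()
        for token_ids, labels in loader:
            optimizer.zero_grad()
            loss_fn(model(token_ids), labels).backward()
            optimizer.step()

        acc = evaluate(model, dev_set)
        if acc > best_acc:
            best_acc, stale_epochs = acc, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:      # early stopping on the dev set
                break
    return best_acc
```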
Initializing word vectors with those obtained from an unsupervised neural language model is a popular method to improve performance in the absence of a large supervised training set (Collobert et al., 2011; Socher et al., 2011; Iyyer et al., 2014). We use the publicly available word2vec vectors that were trained on 100 billion words from Google News. The vectors have dimensionality of 300 and were trained using the continuous bag-of-words architecture [Mikolov et al. 2013]. Words not present in the set of pre-trained words are initialized randomly.
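One way this initialization might look in code is sketched below. The gensim call is a common way to read the GoogleNews binary; the file name, the vocabulary handling, and the [-0.25, 0.25] range for unknown words are assumptions rather than details given in the paper (see also the variance-matching observation in section 5).

```python
import numpy as np
from gensim.models import KeyedVectors

# Load the pre-trained 300-dimensional word2vec vectors (file name assumed).
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def build_embedding_matrix(vocab, dim=300, rng=np.random.default_rng(0)):
    """Copy pre-trained vectors for known words; random init for the rest."""
    weights = np.empty((len(vocab), dim), dtype=np.float32)
    for idx, word in enumerate(vocab):
        if word in w2v:
            weights[idx] = w2v[word]                       # pre-trained vector
        else:
            weights[idx] = rng.uniform(-0.25, 0.25, dim)   # random init for unknown words
    return weights
```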
[Table 2: Results of our CNN models against other methods on the benchmark datasets; numeric results omitted here.]
RAE: Recursive Autoencoders with pre-trained word vectors from Wikipedia [Socher et al. 2011]. MV-RNN: Matrix-Vector Recursive Neural Network with parse trees [Socher et al. 2012]. RNTN: Recursive Neural Tensor Network with tensor-based feature function and parse trees [Socher et al. 2013]. DCNN: Dynamic Convolutional Neural Network with k-max pooling [Kalchbrenner et al. 2014]. Paragraph-Vec: Logistic regression on top of paragraph vectors [Le and Mikolov 2014]. CCAE: Combinatorial Category Autoencoders with combinatorial category grammar operators [Hermann and Blunsom 2013]. Sent-Parser: Sentiment analysis-specific parser [Dong et al. 2014]. NBSVM, MNB: Naive Bayes SVM and Multinomial Naive Bayes with uni-bigrams from Wang and Manning (2012). G-Dropout, F-Dropout: Gaussian Dropout and Fast Dropout from Wang and Manning (2013). Tree-CRF: Dependency tree with Conditional Random Fields [Nakagawa et al. 2010]. CRF-PR: Conditional Random Fields with Posterior Regularization [Yang and Cardie 2014]. SVM: SVM with uni-bi-trigrams, wh word, head word, POS, parser, hypernyms, and 60 hand-coded rules as features from Silva et al. (2011).
We experiment with several variants of the model.
CNN-rand: Our baseline model where all words are randomly initialized and then modified during training.
CNN-static: A model with pre-trained vectors from word2vec. All words—including the unknown ones that are randomly initialized—are kept static and only the other parameters of the model are learned.
CNN-non-static: Same as above but the pre-trained vectors are fine-tuned for each task.
CNN-multichannel: A model with two sets of word vectors. Each set of vectors is treated as a ‘channel’ and each filter is applied to both channels, but gradients are backpropagated only through one of the channels. Hence the model is able to fine-tune one set of vectors while keeping the other static. Both channels are initialized with word2vec.
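The four variants differ only in how the word-vector channel(s) are initialized and whether they are updated during training; one compact way to encode this configuration, reusing the assumptions of the earlier sketches, is shown below.

```python
# Illustrative configuration table for the four model variants.
variants = {
    "CNN-rand":         dict(pretrained=False, fine_tune=True,  channels=1),
    "CNN-static":       dict(pretrained=True,  fine_tune=False, channels=1),
    "CNN-non-static":   dict(pretrained=True,  fine_tune=True,  channels=1),
    "CNN-multichannel": dict(pretrained=True,  fine_tune=True,  channels=2),
}
```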
In order to disentangle the effect of the above variations versus other random factors, we eliminate other sources of randomness—CV-fold assignment, initialization of unknown word vectors, initialization of CNN parameters—by keeping them uniform within each dataset.
Results of our models against other methods are listed in table 2. Our baseline model with all randomly initialized words (CNN-rand) does not perform well on its own. While we had expected performance gains through the use of pre-trained vectors, we were surprised at the magnitude of the gains. Even a simple model with static vectors (CNN-static) performs remarkably well, giving competitive results against the more sophisticated deep learning models that utilize complex pooling schemes [Kalchbrenner et al. 2014] or require parse trees to be computed beforehand [Socher et al. 2013]. These results suggest that the pre-trained vectors are good, ‘universal’ feature extractors and can be utilized across datasets. Fine-tuning the pre-trained vectors for each task gives still further improvements (CNN-non-static).
We had initially hoped that the multichannel architecture would prevent overfitting (by ensuring that the learned vectors do not deviate too far from the original values) and thus work better than the single channel model, especially on smaller datasets. The results, however, are mixed, and further work on regularizing the fine-tuning process is warranted. For instance, instead of using an additional channel for the non-static portion, one could maintain a single channel but employ extra dimensions that are allowed to be modified during training.
As is the case with the single channel non-static model, the multichannel model is able to fine-tune the non-static channel to make it more specific to the task-at-hand. For example, good is most similar to bad in word2vec, presumably because they are (almost) syntactically equivalent. But for vectors in the non-static channel that were fine-tuned on the SST-2 dataset, this is no longer the case (table 3). Similarly, good is arguably closer to nice than it is to great for expressing sentiment, and this is indeed reflected in the learned vectors.
For (randomly initialized) tokens not in the set of pre-trained vectors, fine-tuning allows them to learn more meaningful representations: the network learns that exclamation marks are associated with effusive expressions and that commas are conjunctive (table 3).
[Table 3: Top 4 neighboring words—based on cosine similarity—for vectors in the static channel (left) and fine-tuned vectors in the non-static channel (right) from the multichannel model on the SST-2 dataset after training.]
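A small sketch of how such neighbor lists can be computed: cosine similarity of a query word against every row of an embedding matrix, done once for the static weights and once for the fine-tuned ones. The function and variable names are illustrative.

```python
import numpy as np

def top_neighbors(query_idx, weights, idx2word, k=4):
    """Return the k nearest words to weights[query_idx] by cosine similarity."""
    normed = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]          # cosine similarity to every word
    order = np.argsort(-sims)                  # most similar first
    return [idx2word[i] for i in order if i != query_idx][:k]
```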
We report on some further experiments and observations:
Kalchbrenner et al. (2014) report much worse results with a CNN that has essentially the same architecture as our single channel model. For example, their Max-TDNN (Time Delay Neural Network) with randomly initialized words obtains 37.4% on the SST-1 dataset, compared to 45.0% for our model. We attribute such discrepancy to our CNN having much more capacity (multiple filter widths and feature maps).
Dropout proved to be such a good regularizer that it was fine to use a larger than necessary network and simply let dropout regularize it. Dropout consistently added 2%–4% relative performance.
When randomly initializing words not in word2vec, we obtained slight improvements by sampling each dimension from $U[-a, a]$ where $a$ was chosen such that the randomly initialized vectors have the same variance as the pre-trained ones. It would be interesting to see if employing more sophisticated methods to mirror the distribution of pre-trained vectors in the initialization process gives further improvements.
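For concreteness, the variance-matching choice of $a$ can be computed as below, using the fact that a uniform distribution on $[-a, a]$ has variance $a^2/3$; the code is a sketch with stand-in data, not the paper's implementation.

```python
import numpy as np

def matching_uniform_bound(pretrained):         # pretrained: (num_words, dim) array
    """Return a such that U[-a, a] has the same variance as the pre-trained entries."""
    var = pretrained.var()                      # empirical variance of pre-trained entries
    return float(np.sqrt(3.0 * var))            # solve a^2 / 3 == var

pretrained = np.random.randn(1000, 300) * 0.1   # stand-in for word2vec vectors
a = matching_uniform_bound(pretrained)
unknown_vec = np.random.uniform(-a, a, size=300)  # same variance as the pre-trained ones
```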
We briefly experimented with another set of publicly available word vectors trained by Collobert et al. (2011) on Wikipedia (http://ronan.collobert.com/senna/), and found that word2vec gave far superior performance. It is not clear whether this is due to the architecture of Mikolov et al. (2013) or the 100 billion word Google News dataset.
In the present work we have described a series of experiments with convolutional neural networks built on top of word2vec. Despite little tuning of hyperparameters, a simple CNN with one layer of convolution performs remarkably well. Our results add to the well-established evidence that unsupervised pre-training of word vectors is an important ingredient in deep learning for NLP.
We would like to thank Yann LeCun and the anonymous reviewers for their helpful feedback and suggestions.
Bengio, Y., R. Ducharme, P. Vincent, and C. Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155.
Graves, A., A. Mohamed, and G. Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of ICASSP 2013.