When using convolutional neural networks (CNNs) for computer vision (CV), the convolutional filters can be visualized as small image patches that maximize the response of the filter (Krizhevsky and Hinton, 2009; Krizhevsky et al., 2012). Intuitively, the more similar a window of the image is to the filter visualization, the higher the neuron activation.
In natural language processing (NLP), discrete network inputs are first embedded into a continuous vector space. The projection that follows the embedding can be interpreted similarly to the filters in CV: we can retrieve the words whose embeddings have the highest response to the projection. In this abstract, we use this principle to reconstruct a CNN for sentence classification using explicit rules, and present a case study of this approach on models for sentiment analysis.
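Retrieving the highest-responding words amounts to a dot-product between every embedding and the projection vector. A minimal sketch (all vocabulary, embeddings, and the filter vector below are illustrative toy values, not the trained model's weights):

```python
import numpy as np

# Hypothetical toy setup: a vocabulary of 5 words with 4-dimensional embeddings
# and a single projection (filter) vector.
vocab = ["good", "bad", "movie", "great", "boring"]
embeddings = np.array([
    [ 0.9, 0.1, 0.0, 0.2],   # good
    [-0.8, 0.2, 0.1, 0.0],   # bad
    [ 0.0, 0.0, 0.9, 0.1],   # movie
    [ 1.0, 0.2, 0.0, 0.1],   # great
    [-0.7, 0.1, 0.2, 0.0],   # boring
])
filter_vec = np.array([1.0, 0.0, 0.0, 0.0])  # responds to the first dimension

# Words whose embeddings have the highest dot-product with the filter
scores = embeddings @ filter_vec
top_words = [vocab[i] for i in np.argsort(-scores)[:2]]
print(top_words)  # -> ['great', 'good']
```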
2 CNN for Sentiment Analysis
The goal of sentiment analysis is to decide whether a snippet of text speaks positively or negatively about its subject. We train and evaluate our models on the IMDB dataset (Maas et al., 2011) of movie reviews, which contains 17k training, 7.5k validation, and 25k test examples with a balanced number of positive and negative labels.
For our experiments, we use a convolutional network with max-pooling (Kim, 2014), depicted in Figure 1. We use word embeddings of dimension $d$ and filters with kernel widths from 1 up to 5.
Formally, for a sequence of word embeddings $x_1, \dots, x_T$ of dimension $d$, the output of the network is:

$$y = w^\top \big[\, \max_{t} f\!\left(W_k\, x_{t:t+k-1} + b_k\right) \big]_{k=1}^{5} + b,$$

where $f$ is the activation function and $W_k$, $b_k$, $w$, and $b$ are trainable parameters. We apply the sigmoid function over the output and train the network with the cross-entropy loss.
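The forward pass for one sentence can be sketched in plain NumPy. All dimensions here are illustrative, the weights are random rather than trained, and ReLU stands in for the unspecified activation function:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, n_filters = 10, 8, 4      # sequence length, embedding dim, filters per width (illustrative)
widths = [1, 2, 3, 4, 5]

x = rng.standard_normal((T, d))  # word embeddings for one sentence

pooled = []
for k in widths:
    W = rng.standard_normal((n_filters, k * d))  # filters of width k
    b = rng.standard_normal(n_filters)
    # Apply each filter to every window of k consecutive embeddings
    windows = np.stack([x[t:t + k].ravel() for t in range(T - k + 1)])
    acts = np.maximum(windows @ W.T + b, 0.0)    # ReLU activations, shape (T-k+1, n_filters)
    pooled.append(acts.max(axis=0))              # max-pooling over time
h = np.concatenate(pooled)                       # sentence representation

w_out = rng.standard_normal(len(h))
b_out = 0.0
p = 1.0 / (1.0 + np.exp(-(h @ w_out + b_out)))   # sigmoid over the scalar output
print(h.shape, float(p))
```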
We trained the models until convergence and analyzed the learned weights. Our best model reaches 89% accuracy; the state-of-the-art result with pre-trained sentence representations is 95% (Howard and Ruder, 2018).
3 Model Interpretation
For each weight vector in each filter, we find the words whose embeddings have the highest dot-product with the weight vector. We interpret filters of size 1 as sets of these words. We interpret kernels of sizes larger than 1 either as conjunctions or as disjunctions of neighboring words. In the conjunction case, we interpret a filter as a set of n-grams consisting of all combinations of the words extracted from its weight vectors. In the disjunction case, we interpret the filter as multiple independent filters of size 1. Examples of the words extracted from the filters are shown in Table 1.
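The two readings of a wider filter can be sketched as follows. The vocabulary, embeddings, and the width-2 filter below are hypothetical toy values chosen so each position responds to different words:

```python
import itertools
import numpy as np

# Illustrative: a width-2 filter has two weight slices, one per position.
vocab = ["not", "very", "good", "bad"]
emb = np.array([
    [1.0,  0.0],   # not
    [0.0,  1.0],   # very
    [0.2,  0.9],   # good
    [0.1, -0.8],   # bad
])
W = np.array([[0.0, 1.0],    # position-1 slice
              [0.3, 1.0]])   # position-2 slice

def top_words(slice_vec, k=2):
    """Words with the highest dot-product response to one weight slice."""
    return [vocab[i] for i in np.argsort(-(emb @ slice_vec))[:k]]

per_position = [top_words(W[p]) for p in range(2)]

# Conjunction: the filter stands for all n-gram combinations of its words
conj = [" ".join(c) for c in itertools.product(*per_position)]
# Disjunction: each position is treated as an independent width-1 filter
disj = sorted(set(w for ws in per_position for w in ws))
print(conj, disj)
```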
We interpret the max-pooling over time as an existential quantifier and thus the whole sentence representation as asking for the presence of particular words or n-grams, i.e., as a set of binary features.
We conduct two experiments with the extracted features. First, based on the output weight vector, we sort the features into those contributing to positive and negative sentiment and label each sentence with the prevailing class. Second, we train a linear classifier based on the binary features.
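The first experiment reduces to a simple rule-based classifier: a feature fires if its word occurs anywhere in the sentence (the existential quantifier above), and the signs of the output weights decide the label. The words and weights below are invented for illustration:

```python
# Hypothetical extracted words, each carrying its output-layer weight
feature_weights = {"great": 1.2, "good": 0.8, "boring": -1.0, "bad": -1.5}

def classify(sentence):
    tokens = sentence.lower().split()
    # Existential quantifier: a binary feature fires if the word occurs anywhere
    score = sum(w for word, w in feature_weights.items() if word in tokens)
    return "positive" if score > 0 else "negative"

print(classify("a great movie , not boring at all"))  # -> positive
print(classify("bad and boring"))                     # -> negative
```

Note that this reading deliberately ignores word order and negation; how much accuracy that costs is exactly what the comparison in Table 2 measures.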
The quantitative results of the experiments are shown in Table 2. There is only a minor difference between interpreting the filters as conjunctions and as disjunctions. This suggests that the filters of width 1 are the most important ones, and also that neither of our interpretations of the wider filters is entirely correct.
The experiments with the linear classifier show that when the filters are interpreted as simple feature extractors, the model performance can be fully recovered.
| CNN | Rules (∧) | Rules (∨) | Classifier |
4 Filter Analysis
We analyze the parts of speech of the extracted words. We compute the most frequent POS tag for each word based on the English Web Treebank (Silveira et al., 2014). We then compute statistics of these most frequent POS tags over the words extracted from the network filters.
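The per-word statistic can be sketched with a simple counter. The word-to-tag lookup below is a hypothetical stand-in for the table derived from the treebank:

```python
from collections import Counter

# Illustrative: most frequent POS tag per word, as if computed from a treebank
word_pos = {"great": "ADJ", "boring": "ADJ", "movie": "NOUN", "very": "ADV"}
extracted = ["great", "boring", "movie", "great"]  # words pulled from the filters

pos_counts = Counter(word_pos[w] for w in extracted)
print(pos_counts.most_common(1))  # -> [('ADJ', 3)]
```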
The statistics are shown in Table 3. The most frequent POS tag among the extracted words is adjective. With increasing network capacity, the model becomes more sensitive to nouns and proper nouns. The proportion of function words decreases with increasing kernel size, which suggests that it is unlikely that filters with large kernel sizes capture more complex phrases.
We also compare the words extracted from the filters with the Opinion Lexicon (Hu and Liu, 2004), which contains 4.8k words contributing to negative and 2.0k words contributing to positive sentiment. Regardless of the model, approximately 60% of the extracted words appear in the lexicon. If we label the words by the sign of the corresponding output weight, we get a precision of over 99% with respect to the lexicon, for words contributing to both negative and positive sentiment.
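The coverage and precision numbers can be computed as follows; the lexicon fragments and weighted word list are illustrative toy data:

```python
# Hypothetical lexicon fragments and extracted words with their output weights
positive_lex = {"good", "great", "fine"}
negative_lex = {"bad", "boring", "awful"}
extracted = {"great": 1.0, "good": 0.7, "bad": -1.2, "awful": -0.9, "the": 0.1}

# Coverage: fraction of extracted words that appear in the lexicon
in_lex = {w: s for w, s in extracted.items() if w in positive_lex | negative_lex}
coverage = len(in_lex) / len(extracted)

# Precision: label each word by the sign of its weight, check against the lexicon
correct = sum(1 for w, s in in_lex.items() if (s > 0) == (w in positive_lex))
precision = correct / len(in_lex)
print(coverage, precision)  # -> 0.8 1.0
```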
5 Conclusions & Future Work
We showed that the first layer of a CNN for sentiment analysis can be interpreted as responding to particular words on the input. Using these rules, we can fully reconstruct a model for sentiment classification.
As future work, we would like to extend this approach to more complex architectures and other NLP tasks.
This work has been supported by the grant 18-02196S of the Czech Science Foundation.
- Howard and Ruder (2018) Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics.
- Hu and Liu (2004) Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. ACM.
- Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics.
- Krizhevsky and Hinton (2009) Alex Krizhevsky and Geoffrey Hinton. 2009. Learning multiple layers of features from tiny images. Technical report, University of Toronto.
- Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097–1105, Red Hook, NY, USA. Curran Associates, Inc.
- Maas et al. (2011) Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
- Silveira et al. (2014) Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014).