
Understanding Convolutional Neural Networks for Text Classification

by Alon Jacovi, et al.

We present an analysis of the inner workings of Convolutional Neural Networks (CNNs) for processing text. CNNs used for computer vision can be interpreted by projecting filters into image space, but for discrete sequence inputs CNNs remain a mystery. We aim to understand the method by which the networks process and classify text. We examine a common hypothesis about this mechanism: that filters, accompanied by global max-pooling, serve as ngram detectors. We show that filters may capture several different semantic classes of ngrams by using different activation patterns, and that global max-pooling induces behavior which separates important ngrams from the rest. Finally, we show practical use cases derived from our findings in the form of model interpretability (explaining a trained model by deriving a concrete identity for each filter, bridging the gap between visualization tools in vision tasks and NLP) and prediction interpretability (explaining predictions).
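To make the filters-as-ngram-detectors hypothesis concrete, here is a minimal NumPy sketch (not the paper's code) of the core operation: one convolutional filter of width n slides over a sequence of token embeddings, scoring every ngram window, and global max-pooling keeps only the single strongest activation. The function name and the toy embedding/filter values are illustrative assumptions.

```python
import numpy as np

def conv_maxpool(embeddings, filt):
    """Apply one convolutional filter of width n over a sequence of
    token embeddings and global-max-pool the resulting activations.

    embeddings: (seq_len, emb_dim) array, one row per token
    filt:       (n, emb_dim) array, acting as an ngram detector

    Returns (max_score, argmax_index): the strongest activation and
    the position of the ngram window that produced it.
    """
    n = filt.shape[0]
    seq_len = embeddings.shape[0]
    # Score every ngram window: elementwise product with the filter,
    # summed (equivalently, a dot product with the flattened window).
    scores = np.array([
        np.sum(embeddings[i:i + n] * filt)
        for i in range(seq_len - n + 1)
    ])
    # Global max-pooling discards all but the strongest ngram,
    # which is what separates "important" ngrams from the rest.
    return scores.max(), int(scores.argmax())

# Toy example: a 5-token sentence with 4-dim embeddings and one
# bigram (width-2) filter. Values are random placeholders.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 4))
filt = rng.normal(size=(2, 4))
score, pos = conv_maxpool(emb, filt)
```

Because only the argmax window survives pooling, inspecting which ngrams maximally activate each filter (the `pos` returned above) is exactly the handle the paper's interpretability use cases rely on.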
