DoLFIn: Distributions over Latent Features for Interpretability

by Phong Le, et al.

Interpreting the inner workings of neural models is a key step in ensuring their robustness and trustworthiness, but work on neural network interpretability typically faces a trade-off: either the models are too constrained to be very useful, or the solutions found by the models are too complex to interpret. We propose a novel strategy for achieving interpretability that – in our experiments – avoids this trade-off. Our approach builds on the success of using probability as the central quantity, as for instance in the attention mechanism. In our architecture, DoLFIn (Distributions over Latent Features for Interpretability), we do not determine beforehand what each feature represents, and features go altogether into an unordered set. Each feature has an associated probability ranging from 0 to 1, weighing its importance for further processing. We show that, unlike attention and saliency map approaches, this set-up makes it straightforward to compute the probability with which an input component supports the decision the neural model makes. To demonstrate the usefulness of the approach, we apply DoLFIn to text classification, and show that DoLFIn not only provides interpretable solutions but even slightly outperforms the classical CNN and BiLSTM text classifiers on the SST2 and AG-news datasets.
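The core idea of the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the feature count, dimensions, and the use of a sigmoid to produce per-feature probabilities are all assumptions made for the example. The key point it shows is that each latent feature gets an independent probability in [0, 1], rather than a softmax-normalised attention weight.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical setup: n latent features, each a d-dimensional vector,
# treated as an unordered set (no predetermined meaning per feature).
n_features, d = 8, 16
features = rng.normal(size=(n_features, d))
scores = rng.normal(size=n_features)  # unnormalised importance scores

# Each feature gets its own probability in [0, 1]. Unlike softmax
# attention, these probabilities need not sum to 1.
probs = sigmoid(scores)

# Downstream representation: probability-weighted combination of features.
pooled = (probs[:, None] * features).sum(axis=0)

assert np.all((probs >= 0) & (probs <= 1))
assert pooled.shape == (d,)
```

Because each probability is an independent quantity rather than a share of a fixed attention budget, it can be read directly as the weight with which a feature (and, by extension, the input components behind it) supports the model's decision.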


Interpretable Text Classification Using CNN and Max-pooling

Deep neural networks have been widely used in text classification. Howev...

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers

To build an interpretable neural text classifier, most of the prior work...

MDM: Visual Explanations for Neural Networks via Multiple Dynamic Mask

The active region lookup of a neural network tells us which regions the ...

Interpretable Neural Predictions with Differentiable Binary Variables

The success of neural networks comes hand in hand with a desire for more...

From text saliency to linguistic objects: learning linguistic interpretable markers with a multi-channels convolutional architecture

A lot of effort is currently made to provide methods to analyze and unde...

Generalizing Backpropagation for Gradient-Based Interpretability

Many popular feature-attribution methods for interpreting deep neural ne...

Is Sparse Attention more Interpretable?

Sparse attention has been claimed to increase model interpretability und...
