DoLFIn: Distributions over Latent Features for Interpretability

11/10/2020
by Phong Le, et al.

Interpreting the inner workings of neural models is a key step in ensuring their robustness and trustworthiness, but work on neural network interpretability typically faces a trade-off: either the models are too constrained to be very useful, or the solutions they find are too complex to interpret. We propose a novel strategy for achieving interpretability that, in our experiments, avoids this trade-off. Our approach builds on the success of using probability as the central quantity of interest, as in, for instance, the attention mechanism. In our architecture, DoLFIn (Distributions over Latent Features for Interpretability), we do not determine beforehand what each feature represents; instead, the features together form an unordered set. Each feature has an associated probability between 0 and 1, weighing its importance for further processing. We show that, unlike attention and saliency-map approaches, this set-up makes it straightforward to compute the probability with which an input component supports the decision the model makes. To demonstrate the usefulness of the approach, we apply DoLFIn to text classification and show that it not only provides interpretable solutions, but even slightly outperforms classical CNN and BiLSTM text classifiers on the SST2 and AG-news datasets.
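As an illustration of the idea sketched in the abstract, the code below is a minimal DoLFIn-style classification head, not the authors' implementation. Everything beyond what the abstract states is an assumption: the names (DoLFInSketch, to_features, to_probs), the use of a pooled sentence encoding as input, the sigmoid gate that assigns each latent feature a probability between 0 and 1, and the probability-weighted sum that is fed to the classifier.

# Minimal, illustrative sketch of a DoLFIn-style layer (assumptions noted above).
import torch
import torch.nn as nn

class DoLFInSketch(nn.Module):
    def __init__(self, input_dim: int, num_features: int, feature_dim: int, num_classes: int):
        super().__init__()
        # Project the pooled encoding onto a set of latent features (unordered set).
        self.to_features = nn.Linear(input_dim, num_features * feature_dim)
        # One probability in [0, 1] per latent feature, weighing its importance.
        self.to_probs = nn.Linear(input_dim, num_features)
        self.classifier = nn.Linear(feature_dim, num_classes)
        self.num_features = num_features
        self.feature_dim = feature_dim

    def forward(self, pooled: torch.Tensor):
        # pooled: (batch, input_dim), e.g. a mean-pooled sentence encoding.
        feats = self.to_features(pooled).view(-1, self.num_features, self.feature_dim)
        probs = torch.sigmoid(self.to_probs(pooled))          # (batch, num_features)
        weighted = (probs.unsqueeze(-1) * feats).sum(dim=1)   # probability-weighted sum
        return self.classifier(weighted), probs               # probs are exposed for inspection

# Usage: logits and per-feature probabilities for a batch of sentence encodings.
model = DoLFInSketch(input_dim=256, num_features=32, feature_dim=64, num_classes=2)
logits, feature_probs = model(torch.randn(4, 256))

Returning the per-feature probabilities alongside the logits is what would make it possible, in the spirit of the paper, to inspect how strongly each latent feature contributes to a given prediction.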

