Improving the Interpretability of Neural Sentiment Classifiers via Data Augmentation

09/10/2019
by Hanjie Chen, et al.

Recent neural network models have achieved remarkable performance on sentiment classification, but their lack of interpretable predictions raises trustworthiness and other concerns in practice. In this work, we study the problem of improving the interpretability of existing sentiment classifiers. We propose two data augmentation methods that create additional training examples to improve model interpretability: one uses a predefined sentiment word list as external knowledge, and the other uses adversarial examples. We test the proposed methods on both CNN and RNN classifiers with three benchmark sentiment datasets. Model interpretability is assessed both by human evaluators and by a simple automatic measure. Experiments show that the proposed data augmentation methods significantly improve the interpretability of both neural classifiers.
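The abstract does not spell out how the augmented examples are built, so the following is only a rough sketch of one plausible reading of the lexicon-based method: each training sentence is paired with a copy in which every token outside a predefined sentiment word list is masked. The function name `augment_with_lexicon`, the toy lexicon, the mask token, and the decision to reuse the original label are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the paper's implementation) of lexicon-based
# data augmentation for a sentiment classifier. Names, the mask token,
# and the label choice are assumptions for illustration only.

# Toy stand-in for a real predefined sentiment word list.
SENTIMENT_LEXICON = {"great", "terrible", "boring", "wonderful", "awful"}

def augment_with_lexicon(tokens, label, lexicon=SENTIMENT_LEXICON, mask="<unk>"):
    """Return the original example plus one augmented example in which
    every token not found in the sentiment lexicon is replaced by a mask.

    Reusing the original label for the masked copy is an assumption;
    the paper may construct its extra examples differently.
    """
    masked = [tok if tok.lower() in lexicon else mask for tok in tokens]
    return [(tokens, label), (masked, label)]

if __name__ == "__main__":
    example = "the plot was boring but the acting was wonderful".split()
    for toks, y in augment_with_lexicon(example, label=1):
        print(y, " ".join(toks))
```

If this reading is right, training on such pairs would push the classifier to rest its decisions on the lexicon words, which is the behavior the human and automatic interpretability evaluations then measure.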


Related research

03/26/2023 · Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability
Data augmentation strategies are actively used when training deep neural...

10/01/2020 · Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
To build an interpretable neural text classifier, most of the prior work...

02/25/2021 · Retrieval Augmentation to Improve Robustness and Interpretability of Deep Neural Networks
Deep neural network models have achieved state-of-the-art results in var...
04/11/2020 · DeepSentiPers: Novel Deep Learning Models Trained Over Proposed Augmented Persian Sentiment Corpus
This paper focuses on how to extract opinions over each Persian sentence...
02/24/2021 · On the Impact of Interpretability Methods in Active Image Augmentation Method
Robustness is a significant constraint in machine learning models. The p...

12/10/2017 · Inducing Interpretability in Knowledge Graph Embeddings
We study the problem of inducing interpretability in KG embeddings. Spec...
