Towards Explainable NLP: A Generative Explanation Framework for Text Classification

11/01/2018
by   Hui Liu, et al.

Building explainable systems is a critical problem in Natural Language Processing (NLP), since most machine learning models provide no explanations for their predictions. Existing approaches to explainable machine learning tend to focus on interpreting the outputs or the connections between inputs and outputs. However, fine-grained information is often ignored, and such systems do not explicitly generate human-readable explanations. To alleviate this problem, we propose a novel generative explanation framework that learns to make classification decisions and generate fine-grained explanations at the same time. More specifically, we introduce an explainable factor and a minimum risk training approach that learn to generate more reasonable explanations. We construct two new datasets that contain summaries, rating scores, and fine-grained reasons. We conduct experiments on both datasets, comparing against several strong neural network baselines. Experimental results show that our method surpasses all baselines on both datasets while generating concise explanations.
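The abstract describes jointly optimizing a classification objective and an explanation-generation objective, with an explainable factor guiding the generation. The paper's exact formulation is not given here, so the sketch below is only an illustration of one plausible combination: a cross-entropy classification loss plus a factor-weighted negative log-likelihood for the explanation tokens. All function names, the weighting scheme, and the `lam` hyperparameter are assumptions.

```python
import math

def classification_loss(probs, label):
    # Cross-entropy of the predicted class distribution (illustrative).
    return -math.log(probs[label])

def explanation_loss(token_probs):
    # Negative log-likelihood of the gold explanation tokens (illustrative).
    return -sum(math.log(p) for p in token_probs)

def joint_loss(probs, label, token_probs, explain_factor, lam=1.0):
    # Hypothetical joint objective: the "explainable factor" here simply
    # re-weights the generation term; this is an assumption, not the
    # paper's actual definition.
    return (classification_loss(probs, label)
            + lam * explain_factor * explanation_loss(token_probs))

loss = joint_loss([0.1, 0.7, 0.2], label=1,
                  token_probs=[0.9, 0.8], explain_factor=0.5)
```

In this toy setup, a larger `explain_factor` pushes the optimizer to spend more capacity on producing likely explanations rather than on classification alone.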


