You Can Do Better! If You Elaborate the Reason When Making Prediction

03/27/2021
by   Dongfang Li, et al.

Neural predictive models have achieved groundbreaking performance improvements on various natural language processing tasks. However, most neural predictive models offer little explainability of their predictions, limiting their practical utility, especially in the medical domain. This paper proposes a novel neural predictive framework, coupled with large pre-trained language models, that makes a prediction and generates its corresponding explanation simultaneously. We conducted a preliminary empirical study on Chinese medical multiple-choice question answering, English natural language inference, and commonsense question answering tasks. The experimental results show that the proposed approach can generate reasonable explanations for its predictions even with small-scale explanation training data. The proposed method also achieves improved prediction accuracy on three datasets, indicating that prediction can benefit from generating an explanation during the decision process.
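One common way to have a single model predict and explain simultaneously is a text-to-text formulation, where the target string carries both the label and the explanation (as in WT5-style training). The serialization format below is an illustrative assumption, not the paper's actual scheme:

```python
# Minimal sketch of a joint label-plus-explanation target format.
# The "label: ... explanation: ..." template is a hypothetical choice;
# the paper's exact input/output format may differ.

def build_target(label: str, explanation: str) -> str:
    """Serialize a label and its explanation into one target string
    that a seq2seq model can be trained to generate."""
    return f"label: {label} explanation: {explanation}"

def parse_target(generated: str) -> tuple[str, str]:
    """Recover (label, explanation) from a generated target string."""
    head, _, explanation = generated.partition(" explanation: ")
    label = head.removeprefix("label: ")
    return label, explanation

# Example round trip for a natural language inference instance.
target = build_target("entailment", "A dog is a kind of animal.")
label, explanation = parse_target(target)
```

With such a format, prediction accuracy and explanation quality can be evaluated separately by parsing the generated string back into its two parts.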


