WT5?! Training Text-to-Text Models to Explain their Predictions

04/30/2020
by   Sharan Narang, et al.

Neural networks have recently achieved human-level performance on various challenging natural language processing (NLP) tasks, but it is notoriously difficult to understand why a neural network produced a particular prediction. In this paper, we leverage the text-to-text framework proposed by Raffel et al. (2019) to train language models to output a natural-text explanation alongside their prediction. Crucially, this requires no modifications to the loss function or to the training and decoding procedures: we simply train the model to output the explanation after generating the (natural-text) prediction. We show that this approach not only obtains state-of-the-art results on explainability benchmarks, but also permits learning from a limited set of labeled explanations and transferring rationalization abilities across datasets. To facilitate reproducibility and future work, we release the code used to train our models.
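The core idea is a pure data-formatting change: prepend a word such as "explain" to the task prefix on the input side, and append the rationale after the label on the target side, so a standard text-to-text model learns to emit both with its usual training and decoding. A minimal sketch of such a formatter is below; the exact prefix and separator wording are assumptions for illustration, not the paper's verbatim format.

```python
def make_example(task, text, label, explanation=None):
    """Build a WT5-style (input, target) string pair.

    With no explanation, this is ordinary text-to-text training.
    With one, "explain" is prepended to the task prefix and the
    rationale follows the label in the target, so no loss or
    decoding changes are needed. Prefix/separator wording here is
    an illustrative assumption.
    """
    if explanation is None:
        return f"{task}: {text}", label
    return (
        f"explain {task}: {text}",
        f"{label} explanation: {explanation}",
    )

# Labeled-explanation example (sentiment classification).
inp, tgt = make_example(
    "sentiment",
    "The movie was a delight from start to finish.",
    "positive",
    "the reviewer calls it a delight",
)
```

Because explanations are optional per example, the same formatter supports training with only a limited subset of labeled explanations, as the abstract describes.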


