
Explaining Prediction Uncertainty of Pre-trained Language Models by Detecting Uncertain Words in Inputs

01/11/2022
by Hanjie Chen, et al.

Estimating the predictive uncertainty of pre-trained language models is important for increasing their trustworthiness in NLP. Although many previous works focus on quantifying prediction uncertainty, there is little work on explaining that uncertainty. This paper goes a step further by explaining the uncertain predictions of post-calibrated pre-trained language models. We adapt two perturbation-based post-hoc interpretation methods, Leave-one-out and Sampling Shapley, to identify the words in an input that cause the uncertainty in its prediction. We test the proposed methods on BERT and RoBERTa with three tasks: sentiment classification, natural language inference, and paraphrase identification, in both in-domain and out-of-domain settings. Experiments show that both methods consistently capture words in inputs that cause prediction uncertainty.
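The abstract only names the attribution methods, so the snippet below is a minimal sketch of how the leave-one-out idea can be applied to an uncertainty score rather than to a class probability: each word is removed in turn and the change in predictive entropy is attributed to it. The wrapper predict_probs (text in, class probabilities out, e.g. from a calibrated BERT classifier) is a hypothetical placeholder, and the paper's exact uncertainty measure, calibration step, and Sampling Shapley variant may differ.

import math
from typing import Callable, List, Sequence


def entropy(probs: Sequence[float]) -> float:
    """Predictive entropy of a class-probability vector (a common uncertainty measure)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)


def leave_one_out_uncertainty(
    words: List[str],
    predict_probs: Callable[[str], Sequence[float]],  # hypothetical model wrapper
) -> List[float]:
    """Attribute predictive uncertainty to each word via leave-one-out.

    score_i = H(p(full input)) - H(p(input without word i)).
    A large positive score means removing the word lowers the entropy,
    i.e. the word contributes to the model's prediction uncertainty.
    """
    full_entropy = entropy(predict_probs(" ".join(words)))
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append(full_entropy - entropy(predict_probs(reduced)))
    return scores


# Example usage with the hypothetical predict_probs wrapper:
# words = "the plot is muddled but the acting is superb".split()
# scores = leave_one_out_uncertainty(words, predict_probs)
# top_uncertain = [w for w, s in sorted(zip(words, scores), key=lambda x: -x[1])[:3]]

Sampling Shapley attribution follows the same pattern but averages the entropy change over random subsets of the remaining words instead of removing a single word from the full input.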


Related research

04/19/2021  Probing for Bridging Inference in Transformer Language Models
We probe pre-trained transformer language models for bridging inference....

06/06/2023  CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models
Text classifiers built on Pre-trained Language Models (PLMs) have achiev...

04/09/2021  Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks
Explaining neural network models is important for increasing their trust...

12/14/2020  Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
The black-box nature of neural models has motivated a line of research t...

04/27/2022  Probing Simile Knowledge from Pre-trained Language Models
Simile interpretation (SI) and simile generation (SG) are challenging ta...

06/23/2016  Explaining Predictions of Non-Linear Classifiers in NLP
Layer-wise relevance propagation (LRP) is a recently proposed technique ...

04/20/2023  Multi-aspect Repetition Suppression and Content Moderation of Large Language Models
Natural language generation is one of the most impactful fields in NLP, ...