Analyzing Semantic Faithfulness of Language Models via Input Intervention on Conversational Question Answering

12/21/2022
by Akshay Chaturvedi, et al.

Transformer-based language models have been shown to be highly effective for several NLP tasks. In this paper, we consider three transformer models, BERT, RoBERTa, and XLNet, in both small and large versions, and investigate how faithful their representations are with respect to the semantic content of texts. We formalize a notion of semantic faithfulness, in which the semantic content of a text should causally figure in a model's inferences in question answering. We then test this notion by observing a model's behavior on answering questions about a story after performing two novel semantic interventions: deletion intervention and negation intervention. While transformer models achieve high performance on standard question answering tasks, we show that they fail to be semantically faithful in a significant number of cases once we perform these interventions (roughly 50% of cases for deletion intervention and roughly 20% for negation intervention). We then propose an intervention-based training regime that mitigates the undesirable effects of deletion intervention by a significant margin, bringing the failure rate well below the original roughly 50%. We analyze the inner workings of the models to better understand the effectiveness of intervention-based training for deletion intervention. However, we show that this training does not attenuate other aspects of semantic unfaithfulness, such as the models' inability to deal with negation intervention or to capture the predicate-argument structure of texts. We also test InstructGPT, via prompting, for its ability to handle the two interventions and to capture predicate-argument structure. While InstructGPT models achieve very high performance on the predicate-argument structure task, they fail to respond adequately to our deletion and negation interventions.
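To make the intervention setup concrete, the following is a minimal sketch (not the authors' released code) of how a deletion intervention and a negation intervention could be applied to a short story before querying an extractive QA model. It assumes the Hugging Face transformers library and the public deepset/roberta-base-squad2 checkpoint; the story, question, and the simple string-replacement negation are illustrative stand-ins for the paper's conversational question-answering setup.

# Minimal sketch (not the authors' code): apply a deletion and a negation
# intervention to a story and compare an extractive QA model's answers.
# Assumes the Hugging Face `transformers` library and the public
# "deepset/roberta-base-squad2" checkpoint; the story, question, and the
# rule-based negation below are illustrative stand-ins for the paper's setup.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

story = (
    "Anna walked to the library in the morning. "
    "She borrowed a book about whales. "
    "Later she met her brother at the cafe."
)
question = "What did Anna borrow?"

# Deletion intervention: remove the sentence that supports the answer.
deleted_story = story.replace("She borrowed a book about whales. ", "")

# Negation intervention: negate the supporting sentence instead of deleting it.
negated_story = story.replace(
    "She borrowed a book about whales.",
    "She did not borrow a book about whales.",
)

for name, context in [
    ("original", story),
    ("deletion", deleted_story),
    ("negation", negated_story),
]:
    pred = qa(question=question, context=context)
    print(f"{name:9s} -> {pred['answer']!r} (score={pred['score']:.3f})")

# A semantically faithful model should change its answer (or abstain) after
# either intervention; the paper reports that transformer QA models often
# still return the original answer.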


