Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference

11/21/2022
by Eric Mitchell, et al.

While large pre-trained language models are powerful, their predictions often lack logical consistency across test inputs. For example, a state-of-the-art Macaw question-answering (QA) model answers 'Yes' to 'Is a sparrow a bird?' and 'Does a bird have feet?' but answers 'No' to 'Does a sparrow have feet?'. To address this failure mode, we propose a framework, Consistency Correction through Relation Detection, or ConCoRD, for boosting the consistency and accuracy of pre-trained NLP models using pre-trained natural language inference (NLI) models, without fine-tuning or re-training. Given a batch of test inputs, ConCoRD samples several candidate outputs for each input and instantiates a factor graph that accounts for both the model's belief about the likelihood of each answer choice in isolation and the NLI model's beliefs about pair-wise answer choice compatibility. We show that a weighted MaxSAT solver can efficiently compute high-quality answer choices under this factor graph, improving over the raw model's predictions. Our experiments demonstrate that ConCoRD consistently boosts accuracy and consistency of off-the-shelf closed-book QA and VQA models using off-the-shelf NLI models, notably increasing the accuracy of LXMERT on ConVQA by 5%. See https://ericmitchell.ai/emnlp-2022-concord/ for code and data.
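To make the pipeline concrete, here is a minimal, self-contained sketch of ConCoRD-style inference. It is not the authors' implementation: the questions, candidate probabilities, and pairwise compatibility scores are hypothetical placeholder numbers standing in for a QA model's answer likelihoods and an NLI model's entailment/contradiction estimates, and the weighted MaxSAT solver is replaced by brute-force search over the toy factor graph.

```python
# Minimal sketch of ConCoRD-style inference (illustrative only, not the paper's code).
# Unary factors would come from a QA model and pairwise factors from an NLI model;
# all numbers below are hypothetical placeholders.
from itertools import product

# Candidate answers and the base QA model's probability for each (hypothetical).
candidates = {
    "Is a sparrow a bird?": {"yes": 0.9, "no": 0.1},
    "Does a bird have feet?": {"yes": 0.8, "no": 0.2},
    "Does a sparrow have feet?": {"yes": 0.4, "no": 0.6},
}

# NLI-style pairwise compatibility factors (hypothetical): how compatible one
# (question, answer) pair is with another, for pairs where the NLI model detects
# a strong relation. Unlisted pairs are treated as neutral (factor of 1).
pairwise = {
    (("Is a sparrow a bird?", "yes"), ("Does a sparrow have feet?", "no")): 0.05,
    (("Is a sparrow a bird?", "yes"), ("Does a sparrow have feet?", "yes")): 0.95,
}

def joint_score(assignment):
    """Product of unary (QA) factors and pairwise (NLI) factors for one assignment."""
    score = 1.0
    for question, answer in assignment.items():
        score *= candidates[question][answer]
    for (first, second), compat in pairwise.items():
        if assignment.get(first[0]) == first[1] and assignment.get(second[0]) == second[1]:
            score *= compat
    return score

# Exhaustive search over joint assignments; the paper uses a weighted MaxSAT
# solver to do this efficiently, but brute force suffices at toy scale.
questions = list(candidates)
best = max(
    (dict(zip(questions, choice)) for choice in product(*(candidates[q] for q in questions))),
    key=joint_score,
)
print(best)  # with these numbers, all three questions get 'yes', restoring consistency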
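At realistic batch sizes, the unary and pairwise factors would instead be encoded as weighted clauses and handed to an off-the-shelf weighted MaxSAT solver (for example, the RC2 solver in the python-sat package), which is the role the solver plays in the paper.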


Related research

- Pre-trained Transformer-Based Approach for Arabic Question Answering: A Comparative Study (11/10/2021)
- Self-consistency for open-ended generations (07/11/2023)
- Modular Visual Question Answering via Code Generation (06/08/2023)
- Generate then Select: Open-ended Visual Question Answering Guided by World Knowledge (05/30/2023)
- Language Models with Rationality (05/23/2023)
- BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief (09/29/2021)
- Enriching a Model's Notion of Belief using a Persistent Memory (04/16/2021)
