Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models

06/05/2023
by   Fengzhu Zeng, et al.

Few-shot or zero-shot fact verification relies on only a few or no labeled training examples. In this paper, we propose a novel method called ProToCo, to Prompt pre-trained language models (PLMs) To be Consistent, for improving the factuality assessment capability of PLMs in the few-shot and zero-shot settings. Given a claim-evidence pair, ProToCo generates multiple variants of the claim with different relations and frames a simple consistency mechanism as constraints for making compatible predictions across these variants. We update PLMs using parameter-efficient fine-tuning (PEFT), leading to more accurate predictions in few-shot and zero-shot fact verification tasks. Our experiments on three public verification datasets show that ProToCo significantly outperforms state-of-the-art few-shot fact verification baselines. With a small number of unlabeled instances, ProToCo also outperforms the strong zero-shot learner T0 on zero-shot verification. Compared to large PLMs using the in-context learning (ICL) method, ProToCo outperforms OPT-30B and the Self-Consistency-enabled OPT-6.7B model in both few- and zero-shot settings.
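The consistency mechanism described above can be illustrated with a toy sketch. This is not the paper's exact formulation: the three-way label set, the choice of variant relations (paraphrase and negation), and the negation rule below are all illustrative assumptions about how predictions across claim variants might be constrained to be compatible.

```python
# Toy sketch of a cross-variant consistency constraint for fact
# verification. Labels and variant relations are assumptions, not
# the paper's exact design.
SUPPORTED, REFUTED, NEI = "SUPPORTED", "REFUTED", "NEI"

# Assumed rule: negating a claim swaps SUPPORTED and REFUTED,
# while "not enough info" (NEI) is unchanged.
NEGATE = {SUPPORTED: REFUTED, REFUTED: SUPPORTED, NEI: NEI}

def consistency_violations(pred_claim, pred_paraphrase, pred_negation):
    """Count violated constraints among a model's predictions for a
    claim, a paraphrase of it, and its negation (given fixed evidence)."""
    violations = 0
    if pred_paraphrase != pred_claim:        # a paraphrase must agree
        violations += 1
    if pred_negation != NEGATE[pred_claim]:  # a negation must flip
        violations += 1
    return violations
```

A count like this could serve as a training signal: predictions on the variants that contradict the prediction on the original claim are penalized, pushing the PLM toward mutually compatible outputs.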


