Towards Few-Shot Fact-Checking via Perplexity

03/17/2021
by   Nayeon Lee, et al.

Few-shot learning has drawn researchers' attention as a way to overcome data scarcity. Recently, large pre-trained language models have shown strong few-shot performance on various downstream tasks, such as question answering and machine translation. Nevertheless, little exploration has been made toward few-shot learning for fact-checking, even though fact-checking is an increasingly important problem as the amount of information online grows exponentially every day. In this paper, we propose a new way of utilizing the powerful transfer learning ability of a language model via a perplexity score. The most notable strength of our methodology lies in its capability in few-shot learning: with only two training samples, it can already outperform the Major Class baseline by more than an absolute 10%. Through experiments, we empirically verify the plausibility of the rather surprising usage of the perplexity score in the context of fact-checking and highlight the strength of our few-shot methodology by comparing it to strong fine-tuning-based baseline models. Moreover, we construct and publicly release two new fact-checking datasets related to COVID-19.
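The core idea of perplexity-based fact-checking can be illustrated with a minimal sketch. This is not the authors' implementation: the helper names and the midpoint-threshold rule below are assumptions for illustration, and a real system would obtain token log-probabilities from a pre-trained language model such as GPT-2 rather than take them as inputs.

```python
import math

def perplexity(token_log_probs):
    # Perplexity is the exponential of the negative mean token
    # log-probability; lower means the LM finds the text less surprising.
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

def fit_threshold(ppl_supported, ppl_refuted):
    # Few-shot "training" under this sketch: given the perplexity of one
    # supported and one refuted example (the two training samples), place
    # the decision boundary at their midpoint. (Hypothetical rule, not
    # necessarily the paper's exact procedure.)
    return (ppl_supported + ppl_refuted) / 2.0

def classify(claim_log_probs, threshold):
    # Claims the LM assigns low perplexity are labeled SUPPORTED,
    # high-perplexity claims REFUTED.
    ppl = perplexity(claim_log_probs)
    return "SUPPORTED" if ppl < threshold else "REFUTED"

# Usage with made-up numbers: suppose the supported training example had
# perplexity 20 and the refuted one 80, so the threshold is 50.
threshold = fit_threshold(20.0, 80.0)
label = classify([-2.0, -2.0], threshold)  # perplexity = e^2 ~ 7.4
```

The only trainable quantity here is a single scalar threshold, which is why the approach remains usable with as few as two labeled samples.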


Related research:

- Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models (05/30/2022): Pre-trained masked language models successfully perform few-shot learnin...
- Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models (05/24/2023): Fact-checking is an essential task in NLP that is commonly utilized for ...
- Interpretable Unified Language Checking (04/07/2023): Despite recent concerns about undesirable behaviors generated by large l...
- Revisiting Automated Prompting: Are We Actually Doing Better? (04/07/2023): Current literature demonstrates that Large Language Models (LLMs) are gr...
- Few-shot Multimodal Multitask Multilingual Learning (02/19/2023): While few-shot learning as a transfer learning paradigm has gained signi...
- Task Affinity with Maximum Bipartite Matching in Few-Shot Learning (10/05/2021): We propose an asymmetric affinity score for representing the complexity ...
- Multi-Level Fine-Tuning, Data Augmentation, and Few-Shot Learning for Specialized Cyber Threat Intelligence (07/22/2022): Gathering cyber threat intelligence from open sources is becoming increa...
