Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice

08/14/2023
by Alexandra Sasha Luccioni, et al.

Much of the recent discourse within the NLP research community has been centered around Large Language Models (LLMs), their functionality and potential – yet not only do we not have a working definition of LLMs, but much of this discourse relies on claims and assumptions that are worth re-examining. This position paper contributes a definition of LLMs, explicates some of the assumptions made regarding their functionality, and outlines the existing evidence for and against them. We conclude with suggestions for research directions and their framing in future work.


Related research

A PhD Student's Perspective on Research in NLP in the Era of Very Large Language Models (05/21/2023)
Recent progress in large language models has enabled the deployment of m...

Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation (10/25/2022)
Misinformation emerges in times of uncertainty when credible information...

Deanthropomorphising NLP: Can a Language Model Be Conscious? (11/21/2022)
This work is intended as a voice in the discussion over the recent claim...

A Survey for In-context Learning (12/31/2022)
With the increasing ability of large language models (LLMs), in-context ...

When Combating Hype, Proceed with Caution (10/15/2021)
In an effort to avoid reinforcing widespread hype about the capabilities...

Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI (08/14/2023)
A key challenge in professional fact-checking is its limited scalability...

Towards AGI in Computer Vision: Lessons Learned from GPT and Large Language Models (06/14/2023)
The AI community has been pursuing algorithms known as artificial genera...
