Still No Lie Detector for Language Models: Probing Empirical and Conceptual Roadblocks

06/30/2023
by B. A. Levinstein, et al.

We consider the questions of whether large language models (LLMs) have beliefs and, if they do, how we might measure them. First, we evaluate two existing approaches, one due to Azaria and Mitchell (2023) and the other to Burns et al. (2022). We provide empirical results showing that these methods fail to generalize in very basic ways. We then argue that, even if LLMs have beliefs, these methods are unlikely to succeed for conceptual reasons. Thus, there is still no lie detector for LLMs. After describing our empirical results, we take a step back and consider whether we should expect LLMs to have something like beliefs in the first place. We examine some recent arguments aiming to show that LLMs cannot have beliefs and show that these arguments are misguided. We then provide a more productive framing of questions surrounding the status of beliefs in LLMs and highlight the empirical nature of the problem. We conclude by suggesting some concrete paths for future work.
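To make concrete the kind of method the abstract refers to, the sketch below fits a supervised linear probe on (synthetic stand-ins for) hidden activations, in the spirit of Azaria and Mitchell's classifier approach, and then evaluates it on a distribution-shifted set of statements, which is the generalization failure mode at issue; Burns et al.'s CCS differs in being unsupervised, fitting a probe from consistency constraints on statement/negation pairs rather than truth labels. Everything in the code (the dimensions, the make_topic generator, the topic-shift construction) is a hypothetical illustration, not the authors' code, data, or models.

```python
# Illustrative sketch only (not the authors' code): a supervised linear probe
# of the kind evaluated in the paper, trained to separate true from false
# statements using a model's hidden activations, then tested on statements
# from a shifted distribution. All activations here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64   # hypothetical hidden-state dimensionality
n = 200  # statements per topic

def make_topic(truth_direction):
    """Synthetic stand-in for LLM activations: true statements are shifted
    along a topic-specific direction. In practice each row would be a
    hidden-layer activation vector extracted from the model for a statement."""
    labels = rng.integers(0, 2, n)
    shift = np.outer((2 * labels - 1) * 1.5, truth_direction)
    return rng.normal(size=(n, d)) + shift, labels

# Two "topics" whose truth-related directions only partially overlap,
# mimicking the distribution shift a probe must survive to count as a
# general-purpose lie detector.
dir_a = rng.normal(size=d)
dir_a /= np.linalg.norm(dir_a)
dir_b = 0.3 * dir_a + 0.5 * rng.normal(size=d)
dir_b /= np.linalg.norm(dir_b)

X_train, y_train = make_topic(dir_a)   # probe is fit on topic A
X_in, y_in = make_topic(dir_a)         # held-out statements from topic A
X_out, y_out = make_topic(dir_b)       # statements from the shifted topic B

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy, same topic:   ", probe.score(X_in, y_in))
print("held-out accuracy, shifted topic:", probe.score(X_out, y_out))
```

With real activations, this same train-on-one-distribution, test-on-another protocol is what distinguishes a probe that tracks something truth-like from one that latches onto dataset-specific features.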


Related research

04/11/2019 - Assessing Developer Beliefs: A Reply to "Perceptions, Expectations, and Challenges in Defect Prediction"
It can be insightful to extend qualitative studies with a secondary quan...

04/10/2020 - On the Existence of Tacit Assumptions in Contextualized Language Models
Humans carry stereotypic tacit assumptions (STAs) (Prince, 1978), or pro...

09/06/2023 - Framework-Based Qualitative Analysis of Free Responses of Large Language Models: Algorithmic Fidelity
Today, using Large-scale generative Language Models (LLMs) it is possibl...

11/26/2021 - Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs
Do language models have beliefs about the world? Dennett (1995) famously...

01/24/2021 - Belief-based Generation of Argumentative Claims
When engaging in argumentative discourse, skilled human debaters tailor ...

03/28/2022 - The Moral Debater: A Study on the Computational Generation of Morally Framed Arguments
An audience's prior beliefs and morals are strong indicators of how like...

09/29/2021 - BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
Although pretrained language models (PTLMs) contain significant amounts ...
