In-Context Impersonation Reveals Large Language Models' Strengths and Biases

05/24/2023
by Leonard Salewski, et al.

In everyday conversations, humans can take on different roles and adapt their vocabulary to their chosen roles. We explore whether LLMs can take on, that is, impersonate, different roles when they generate text in-context. We ask LLMs to assume different personas before solving vision and language tasks. We do this by prefixing the prompt with a persona that is associated with either a social identity or domain expertise. In a multi-armed bandit task, we find that LLMs pretending to be children of different ages recover human-like developmental stages of exploration. In a language-based reasoning task, we find that LLMs impersonating domain experts perform better than LLMs impersonating non-domain experts. Finally, we test whether LLMs' impersonations are complementary to visual information when describing different categories. We find that impersonation can improve performance: an LLM prompted to be a bird expert describes birds better than one prompted to be a car expert. However, impersonation can also uncover LLMs' biases: an LLM prompted to be a man describes cars better than one prompted to be a woman. These findings demonstrate that LLMs are capable of taking on diverse roles and that this in-context impersonation can be used to uncover their hidden strengths and biases.
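The only intervention the abstract describes is prefixing the prompt with a persona, so a minimal sketch is enough to make the setup concrete. The template wording and the `query_llm` helper below are illustrative assumptions, not the paper's verbatim prompt or API:

```python
# Minimal sketch of in-context impersonation via a persona prefix.
# `query_llm` is a hypothetical stand-in for any chat-completion client;
# the prompt template is an assumption, not the paper's exact wording.

def impersonation_prompt(persona: str, task: str) -> str:
    """Prefix the task with a persona so the LLM answers in that role."""
    return f"If you were a {persona}, how would you answer the following?\n{task}"

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; wire up your client of choice here."""
    raise NotImplementedError("replace with a chat-completion client")

personas = ["2-year-old child", "ornithologist", "car mechanic"]
task = "Describe the bird in one sentence so it can be told apart from similar species."

for persona in personas:
    prompt = impersonation_prompt(persona, task)
    print(f"--- {persona} ---\n{prompt}\n")
    # answer = query_llm(prompt)  # compare answers across personas
```

Holding the task fixed while varying only the persona is what lets differences in the answers be attributed to the impersonation itself.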
