Susceptibility to Influence of Large Language Models

03/10/2023
by Lewis D. Griffin, et al.

Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence - the Illusory Truth Effect (ITE) - where earlier exposure to a statement (through, for example, rating its interest) boosts a later truthfulness rating. Data were collected from 1000 human participants via an online experiment, and from 1000 simulated participants via engineered prompts and LLM completion. Sixty-four ratings were collected per participant, covering all exposure-test combinations of the attributes truth, interest, sentiment and importance. The results for human participants reconfirmed the ITE and showed no effect either for attributes other than truth or when the same attribute was used for both exposure and test. The same pattern of effects was found for LLM-simulated participants. The second study concerned a specific mode of influence - populist framing of news to increase its persuasion and political mobilization. Data from LLM-simulated participants were collected and compared with previously published data from a 15-country experiment on 7286 human participants. Several effects previously demonstrated in the human study were replicated in the simulated study, including effects that surprised the authors of the human study by contradicting their theoretical expectations (anti-immigrant framing of news decreases its persuasion and mobilization). However, some significant relationships found in the human data (modulation of the effectiveness of populist framing according to the relative deprivation of the participant) were absent from the LLM data. Together, the two studies support the view that LLMs have the potential to act as models of the effect of influence.
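As a rough illustration of the simulated-participant procedure in the first study, the sketch below shows how an exposure rating followed by a test rating could be collected from an LLM for each exposure-test attribute pair. The prompt wording, the rating scale, and the `statements` and `complete` arguments are assumptions made for illustration, not the paper's actual materials.

```python
from itertools import product

# Illustrative sketch only: the attribute list follows the abstract, but the
# prompt wording, rating scale, and data structures are assumptions.
ATTRIBUTES = ["truth", "interest", "sentiment", "importance"]


def rating_prompt(attribute: str, statement: str) -> str:
    """Build a single-attribute rating question for a statement (wording assumed)."""
    return (
        f"On a numeric scale, how would you rate the {attribute} of this statement?\n"
        f'Statement: "{statement}"\n'
        "Rating:"
    )


def simulate_participant(statements, complete):
    """Simulate one participant.

    `statements` is assumed to map each (exposure_attr, test_attr) pair to the
    statements used in that condition; `complete` is any callable that sends a
    prompt to an LLM and returns its completion (e.g. a thin API wrapper).
    """
    ratings = []
    for exposure_attr, test_attr in product(ATTRIBUTES, ATTRIBUTES):
        for statement in statements[(exposure_attr, test_attr)]:
            # Exposure phase: rate the statement on the exposure attribute.
            exposure_rating = complete(rating_prompt(exposure_attr, statement))
            # Test phase: rate the same statement on the test attribute; the ITE
            # predicts a boost here when the test attribute is truth.
            test_rating = complete(rating_prompt(test_attr, statement))
            ratings.append(
                (exposure_attr, test_attr, statement, exposure_rating, test_rating)
            )
    return ratings
```

Keeping the LLM call behind a generic `complete` callable is just a convenience for the sketch; any completion endpoint and any parsing of the returned rating could be substituted without changing the exposure-then-test structure.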
