Investigating Emergent Goal-Like Behaviour in Large Language Models Using Experimental Economics

05/13/2023
by Steve Phelps, et al.

In this study, we investigate the capacity of large language models (LLMs), specifically GPT-3.5, to operationalise natural language descriptions of cooperative, competitive, altruistic, and self-interested behaviour in social dilemmas. Our focus is the iterated Prisoner's Dilemma, a classic example of a non-zero-sum interaction, but our broader research program encompasses a range of experimental economics scenarios, including the ultimatum game, dictator game, and public goods game. Using a within-subject experimental design, we instantiated LLM-generated agents with various prompts conveying different cooperative and competitive stances. We then assessed the agents' level of cooperation in the iterated Prisoner's Dilemma, taking into account their responsiveness to cooperation and defection by their partners. Our results provide evidence that LLMs can, to some extent, translate natural language descriptions of altruism and selfishness into appropriate behaviour, but exhibit limitations in adapting their behaviour based on conditioned reciprocity. The observed pattern of increased cooperation with defectors and decreased cooperation with cooperators highlights potential constraints on the LLM's ability to generalise its knowledge about human behaviour in social dilemmas. We call upon the research community to further explore the factors contributing to the emergent behaviour of LLM-generated agents in a wider array of social dilemmas, examining the impact of model architecture, training parameters, and various partner strategies on agent behaviour. As more advanced LLMs like GPT-4 become available, it is crucial to investigate whether they exhibit similar limitations or are capable of more nuanced cooperative behaviours, ultimately fostering the development of AI systems that better align with human values and social norms.
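
To make the experimental protocol concrete, the sketch below shows one way a prompted GPT-3.5 agent could be instantiated and played against fixed partner strategies in an iterated Prisoner's Dilemma, recording its cooperation rate per stance and partner. The prompt wording, the gpt-3.5-turbo model name, the payoff values, the partner strategies, and the use of the openai Python client are illustrative assumptions for this sketch, not the authors' exact materials or protocol.

```python
"""Sketch of a within-subject iterated Prisoner's Dilemma experiment with a
prompted LLM agent. Prompts, payoffs, model name, and partner strategies are
illustrative assumptions, not the paper's exact materials."""
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical system prompts conveying the stances studied in the paper.
STANCES = {
    "altruistic": "You care only about maximising your partner's payoff.",
    "self-interested": "You care only about maximising your own payoff.",
    "cooperative": "You value mutual benefit and long-term cooperation.",
    "competitive": "You want to earn strictly more than your partner.",
}

GAME_RULES = (
    "You are playing an iterated Prisoner's Dilemma. Payoffs per round: "
    "both cooperate 3/3, both defect 1/1, a lone defector gets 5 and the "
    "lone cooperator gets 0."
)


def llm_move(stance: str, history: list[tuple[str, str]]) -> str:
    """Ask the model for one move: 'C' (cooperate) or 'D' (defect)."""
    transcript = "\n".join(
        f"Round {i + 1}: you played {mine}, partner played {theirs}"
        for i, (mine, theirs) in enumerate(history)
    ) or "No rounds have been played yet."
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system", "content": STANCES[stance]},
            {"role": "user", "content": f"{GAME_RULES}\n{transcript}\n"
                                        "Reply with exactly one letter: C or D."},
        ],
    )
    answer = response.choices[0].message.content.strip().upper()
    return "C" if answer.startswith("C") else "D"


def partner_move(strategy: str, history: list[tuple[str, str]]) -> str:
    """Fixed partner strategies used to probe conditioned reciprocity."""
    if strategy == "always_defect":
        return "D"
    if strategy == "tit_for_tat":
        return history[-1][0] if history else "C"  # copy the agent's last move
    return "C"  # unconditional cooperator


def cooperation_rate(stance: str, strategy: str, rounds: int = 5) -> float:
    """Play `rounds` rounds and return the agent's frequency of cooperation."""
    history: list[tuple[str, str]] = []
    for _ in range(rounds):
        agent = llm_move(stance, history)
        partner = partner_move(strategy, history)
        history.append((agent, partner))
    return sum(move == "C" for move, _ in history) / rounds


if __name__ == "__main__":
    for stance in STANCES:
        for strategy in ("always_cooperate", "always_defect", "tit_for_tat"):
            rate = cooperation_rate(stance, strategy)
            print(f"{stance:15s} vs {strategy:16s}: cooperation rate {rate:.2f}")
```

Under a design like this, comparing cooperation rates against the tit-for-tat and always-defect partners gives a simple measure of conditioned reciprocity: an agent that reciprocates should cooperate more with the tit-for-tat partner than with the unconditional defector.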

