Using Artificial Populations to Study Psychological Phenomena in Neural Models

08/15/2023
by Jesse Roberts et al.

The recent proliferation of research into transformer-based natural language processing has led to a number of studies that attempt to detect the presence of human-like cognitive behavior in the models. We contend that, as is true of human psychology, the investigation of cognitive behavior in language models must be conducted in an appropriate population of an appropriate size for the results to be meaningful. We leverage work in uncertainty estimation in a novel approach to efficiently construct experimental populations. The resulting tool, PopulationLM, has been made open source. We provide theoretical grounding in the uncertainty estimation literature and motivation from current cognitive work on language models. We discuss methodological lessons from other scientific communities and demonstrate their application in two artificial population studies. Through population-based experimentation we find that language models exhibit behavior consistent with typicality effects among categories highly represented in training. However, we find that language models do not tend to exhibit structural priming effects. Generally, our results show that single models tend to overestimate the presence of cognitive behaviors in neural models.
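The core idea of constructing an experimental population can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the authors' PopulationLM code), assuming the population is formed with Monte Carlo dropout, a standard uncertainty-estimation technique: dropout is left active at inference time, and each stochastic forward pass is treated as one "individual". The model name, prompt, and population size are illustrative assumptions.

# Minimal sketch (assumed setup, not the authors' PopulationLM implementation):
# form an artificial population by keeping dropout active at inference time
# (Monte Carlo dropout) and sampling many stochastic forward passes.
import torch
from collections import Counter
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"   # illustrative model choice
POPULATION_SIZE = 50               # number of stochastic "individuals"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.train()  # keep dropout layers active so each forward pass differs

prompt = f"A robin is a kind of {tokenizer.mask_token}."
inputs = tokenizer(prompt, return_tensors="pt")
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

predictions = Counter()
with torch.no_grad():
    for _ in range(POPULATION_SIZE):
        logits = model(**inputs).logits              # dropout varies per pass
        top_id = logits[0, mask_index].argmax().item()
        predictions[tokenizer.decode([top_id]).strip()] += 1

# Summarize behavior at the population level rather than from a single model.
for token, count in predictions.most_common(5):
    print(f"{token}: {count / POPULATION_SIZE:.2f}")

Behavioral claims (for example, about typicality effects) are then made from the distribution of responses across the population rather than from a single deterministic model output.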
