LLM Cognitive Judgements Differ From Human

07/20/2023
by Sotiris Lamprinidis, et al.

Large Language Models (LLMs) have lately been in the spotlight of researchers, businesses, and consumers alike. While the linguistic capabilities of such models have been studied extensively, there is growing interest in investigating them as cognitive subjects. In the present work I examine the capabilities of GPT-3 and ChatGPT on a limited-data inductive reasoning task from the cognitive science literature. The results suggest that these models' cognitive judgements are not human-like.


Related research

06/06/2023 — Turning large language models into cognitive models
Large language models are powerful systems that excel at many tasks, ran...

09/11/2023 — Evaluating the Deductive Competence of Large Language Models
The development of highly fluent large language models (LLMs) has prompt...

06/11/2023 — A blind spot for large language models: Supradiegetic linguistic information
Large Language Models (LLMs) like ChatGPT reflect profound changes in th...

09/14/2023 — Assessing the nature of large language models: A caution against anthropocentrism
Generative AI models garnered a large amount of public attention and spe...

01/16/2023 — Dissociating language and thought in large language models: a cognitive perspective
Today's large language models (LLMs) routinely generate coherent, gramma...

10/15/2018 — Bringing Order to the Cognitive Fallacy Zoo
In the eyes of a rationalist like Descartes or Spinoza, human reasoning ...

10/28/2021 — Distill: Domain-Specific Compilation for Cognitive Models
This paper discusses our proposal and implementation of Distill, a domai...
