Context Limitations Make Neural Language Models More Human-Like

05/23/2022
by Tatsuki Kuribayashi, et al.

Do modern natural language processing (NLP) models exhibit human-like language processing? How can they be made more human-like? These questions are motivated by psycholinguistic studies of human language processing as well as by engineering efforts. In this study, we demonstrate discrepancies in context access between modern neural language models (LMs) and humans in incremental sentence processing. An additional limitation on context access was needed for LMs to better simulate human reading behavior. Our analyses also showed that human-LM gaps in memory access are associated with specific syntactic constructions; incorporating additional syntactic factors into LMs' context access could enhance their cognitive plausibility.
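To make the idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of computing per-token surprisal from a causal LM while truncating the left context to only the most recent few tokens. The checkpoint ("gpt2" via Hugging Face transformers), the window size, and the example sentence are all assumptions chosen for illustration.

# Illustrative sketch only: limited-context surprisal with a causal LM.
# Assumptions: Hugging Face transformers, the "gpt2" checkpoint, and a
# 5-token context window; none of these come from the paper itself.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"      # assumed checkpoint; any causal LM would do
CONTEXT_WINDOW = 5       # keep only the 5 most recent context tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def limited_context_surprisals(sentence, window=CONTEXT_WINDOW):
    """Return (token, surprisal in bits) pairs under a truncated left context."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    results = []
    for i in range(1, len(ids)):
        # Truncate the left context to the `window` most recent tokens.
        context = ids[max(0, i - window):i].unsqueeze(0)
        with torch.no_grad():
            logits = model(context).logits[0, -1]
        log_probs = torch.log_softmax(logits, dim=-1)
        surprisal_bits = -log_probs[ids[i]].item() / math.log(2)
        results.append((tokenizer.decode([int(ids[i])]), surprisal_bits))
    return results

for token, s in limited_context_surprisals("The horse raced past the barn fell."):
    print(f"{token!r}\t{s:.2f} bits")

Varying CONTEXT_WINDOW and correlating the resulting surprisals with human reading times is the kind of comparison the abstract alludes to; the specific models, corpora, and syntactic analyses are described in the full paper.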


