Is the Computation of Abstract Sameness Relations Human-Like in Neural Language Models?

05/12/2022
by Lukas Thoma, et al.

In recent years, deep neural language models have made strong progress on various NLP tasks. This work explores one facet of the question of whether state-of-the-art NLP models exhibit elementary mechanisms known from human cognition. The exploration focuses on a relatively primitive mechanism for which there is substantial evidence from psycholinguistic experiments with infants: the computation of "abstract sameness relations", which is assumed to play an important role in human language acquisition and processing, especially in learning more complex grammar rules. To investigate this mechanism in BERT and other pre-trained language models (PLMs), we took the experimental designs from studies with infants as our starting point. On this basis, we designed experimental settings in which each element of the original studies was mapped to a component of a language model. Even though the task in our experiments was relatively simple, the results suggest that the cognitive faculty of computing abstract sameness relations is stronger in infants than in any of the investigated PLMs.
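To make the experimental mapping concrete, here is a minimal sketch of how stimuli in such studies are typically constructed. Infant studies of abstract sameness relations (in the tradition of Marcus et al.) expose infants to syllable triples that follow an abstract pattern such as ABA ("ga ti ga") or ABB ("ga ti ti"). The syllable inventories below are hypothetical placeholders, and the `make_stimuli` helper is an illustrative assumption, not the paper's actual code:

```python
from itertools import product

def make_stimuli(a_syllables, b_syllables, pattern):
    """Build syllable triples that instantiate an abstract
    identity pattern ("ABA" or "ABB")."""
    stimuli = []
    for a, b in product(a_syllables, b_syllables):
        if pattern == "ABA":
            stimuli.append((a, b, a))
        elif pattern == "ABB":
            stimuli.append((a, b, b))
        else:
            raise ValueError(f"unknown pattern: {pattern}")
    return stimuli

# Hypothetical syllable inventories; the actual items differ per study.
A = ["ga", "li"]
B = ["ti", "na"]

aba_items = make_stimuli(A, B, "ABA")  # e.g. ('ga', 'ti', 'ga')
abb_items = make_stimuli(A, B, "ABB")  # e.g. ('ga', 'ti', 'ti')
```

A PLM-based analogue of the infant experiment could then mask the final syllable (e.g. "ga ti [MASK]") and compare the model's probability for the pattern-consistent completion against alternatives, mirroring how infants' looking times distinguish familiar from novel patterns.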


Related research

- 05/11/2021: BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies? "Analogies play a central role in human commonsense reasoning. The abilit..."
- 04/19/2021: Probing for Bridging Inference in Transformer Language Models. "We probe pre-trained transformer language models for bridging inference...."
- 05/31/2023: Large Language Models Are Not Abstract Reasoners. "Large Language Models have shown tremendous performance on a large varie..."
- 04/26/2022: Testing the Ability of Language Models to Interpret Figurative Language. "Figurative and metaphorical language are commonplace in discourse, and f..."
- 12/13/2021: Do Data-based Curricula Work? "Current state-of-the-art NLP systems use large neural networks that requ..."
- 04/22/2021: Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? "Language models trained on billions of tokens have recently led to unpre..."
- 11/02/2018: Progress and Tradeoffs in Neural Language Models. "In recent years, we have witnessed a dramatic shift towards techniques d..."
