Is the Computation of Abstract Sameness Relations Human-Like in Neural Language Models?

05/12/2022
by Lukas Thoma, et al.

In recent years, deep neural language models have made strong progress on various NLP tasks. This work explores one facet of the question of whether state-of-the-art NLP models exhibit elementary mechanisms known from human cognition. The exploration focuses on a comparatively primitive mechanism for which there is substantial evidence from psycholinguistic experiments with infants. The computation of "abstract sameness relations" is assumed to play an important role in human language acquisition and processing, particularly in learning more complex grammar rules. To investigate this mechanism in BERT and other pre-trained language models (PLMs), we took the designs of experiments with infants as our starting point and derived experimental settings in which each element of the original studies is mapped to a component of the language models. Even though the task in our experiments was relatively simple, the results suggest that the cognitive faculty of computing abstract sameness relations is stronger in infants than in any of the investigated PLMs.
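The abstract does not spell out the probing procedure, but one way to picture the mapping is to present a PLM with an A-B-? triplet, mask the final slot, and compare the probability the model assigns to the pattern-consistent completions (ABA vs. ABB). The sketch below, in Python with the Hugging Face transformers library, is a minimal illustration of that idea; the model name, stimuli, and scoring are assumptions made here for illustration, not the paper's actual materials.

```python
# Illustrative sketch (not the paper's code): mask the final element of an
# A-B-? triplet and compare the PLM's preference for ABA vs. ABB completions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # assumed model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def pattern_preference(a: str, b: str) -> dict:
    # Build an "A B [MASK]" triplet and read off the probabilities of the two
    # pattern-consistent completions: A (-> ABA) and B (-> ABB).
    text = f"{a} {b} {tokenizer.mask_token}"
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    # Works only if a and b are single tokens in the model's vocabulary.
    id_a = tokenizer.convert_tokens_to_ids(a)
    id_b = tokenizer.convert_tokens_to_ids(b)
    return {"ABA": probs[id_a].item(), "ABB": probs[id_b].item()}

# Common single-token words stand in for the nonce syllables used with infants.
print(pattern_preference("blue", "red"))
```

Under this kind of probe, an infant-like sensitivity to abstract sameness would show up as a systematic preference for the completion that matches the habituated pattern, regardless of which concrete tokens fill the A and B slots.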

