From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli?

10/14/2020
by Maryam Hashemzadeh, et al.

The representations generated by many models of language (word embeddings, recurrent neural networks, and transformers) correlate with brain activity recorded while people read. However, these decoding results are usually based on the brain's reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short-term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain's reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain's activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
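
The abstract describes testing for a relationship between LSTM representations and brain recordings. Below is a minimal, hypothetical sketch (not the authors' code) of one common way such a relationship is assessed: fit a cross-validated ridge regression from LSTM hidden states to per-word brain responses and check whether held-out predictions correlate with the data. Everything here is assumed for illustration: the LSTM is untrained, the "brain responses" are simulated noise, and the dimensions and ridge penalty are placeholders.

```python
# Illustrative sketch: relate LSTM hidden states for a word sequence to
# (simulated) brain recordings via cross-validated ridge regression.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

torch.manual_seed(0)
rng = np.random.default_rng(0)

n_words, vocab_size, emb_dim, hidden_dim, n_voxels = 200, 5000, 64, 128, 300

# Toy word sequence; an untrained embedding + LSTM stands in for a trained
# language model (purely for demonstration).
word_ids = torch.randint(0, vocab_size, (1, n_words))
embed = nn.Embedding(vocab_size, emb_dim)
lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

with torch.no_grad():
    hidden_states, _ = lstm(embed(word_ids))      # shape (1, n_words, hidden_dim)
X = hidden_states.squeeze(0).numpy()              # one feature vector per word

# Simulated brain responses (e.g. one fMRI/MEG feature vector per word).
Y = rng.standard_normal((n_words, n_voxels))

# Cross-validated ridge regression from LSTM features to brain responses;
# score each fold by the mean correlation between predicted and held-out data.
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=10.0).fit(X[train_idx], Y[train_idx])
    pred = model.predict(X[test_idx])
    r = [np.corrcoef(pred[:, v], Y[test_idx, v])[0, 1] for v in range(n_voxels)]
    scores.append(np.nanmean(r))

print(f"mean held-out correlation: {np.mean(scores):.3f}")  # near 0 for random data
```

With real data, the significance of such correlations is typically established against a permutation baseline rather than against zero; the sketch above omits that step.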

Related research

- Bridging LSTM Architecture and the Neural Dynamics during Reading (04/22/2016): Recently, the long short-term memory neural network (LSTM) has attracted...
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses (06/09/2021): How related are the representations learned by neural language models, t...
- Brain decoding from functional MRI using long short-term memory recurrent neural networks (09/14/2018): Decoding brain functional states underlying different cognitive processe...
- Characterizing Verbatim Short-Term Memory in Neural Language Models (10/24/2022): When a language model is trained to predict natural language sequences, ...
- Brain2Word: Decoding Brain Activity for Language Generation (09/10/2020): Brain decoding, understood as the process of mapping brain activities to...
- Word Ordering Without Syntax (04/28/2016): Recent work on word ordering has argued that syntactic structure is impo...
- Contextual Minimum-Norm Estimates (CMNE): A Deep Learning Method for Source Estimation in Neuronal Networks (09/05/2019): Magnetoencephalography (MEG) and Electroencephalography (EEG) source est...
