Which Neural Network Architecture matches Human Behavior in Artificial Grammar Learning?

02/13/2019
by Andrea Alamia, et al.

In recent years, artificial neural networks have achieved performance close to or better than humans in several domains: tasks that were previously human prerogatives, such as language processing, have seen remarkable improvements in state-of-the-art models. One advantage of this technological boost is that it facilitates comparisons between different neural networks and human performance, deepening our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) best matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies have shown that human subjects can learn artificial grammars after little exposure, often without explicit knowledge of the underlying rules. We tested four grammars of different complexity levels in humans and in feedforward and recurrent networks. Our results show that both architectures can 'learn' the grammars (via error back-propagation) after the same number of training sequences as humans, but recurrent networks perform closer to humans than feedforward ones, irrespective of grammar complexity. Moreover, by analogy with visual processing, in which feedforward and recurrent architectures have been related to unconscious and conscious processes respectively, our results suggest that explicit learning is best modeled by recurrent architectures, whereas feedforward networks better capture the dynamics involved in implicit learning.
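
The paper does not specify its four grammars here, but artificial grammar learning stimuli are typically strings sampled from a finite-state grammar. As an illustration only (not the authors' materials), the sketch below generates and checks strings from the classic Reber grammar, a common AGL benchmark; the transition table and function names are assumptions for this example.

```python
import random

# Transition table for the classic Reber grammar (illustrative; the
# paper's own four grammars are not described on this page).
# Each state maps to the (symbol, next_state) choices available from it;
# state 5 is the exit state.
TRANSITIONS = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}

def generate(rng=random):
    """Sample one grammatical string (begins with B, ends with E)."""
    state, out = 0, ["B"]
    while state != 5:
        symbol, state = rng.choice(TRANSITIONS[state])
        out.append(symbol)
    out.append("E")
    return "".join(out)

def is_grammatical(s):
    """Check a string against the same transition table."""
    if not (s.startswith("B") and s.endswith("E")):
        return False
    state = 0
    for ch in s[1:-1]:
        # At most one transition per (state, symbol) pair in this grammar.
        nxt = [n for sym, n in TRANSITIONS.get(state, []) if sym == ch]
        if not nxt:
            return False
        state = nxt[0]
    return state == 5
```

In an experiment of this kind, grammatical strings from `generate` (and matched ungrammatical foils) would serve as the training and test sequences for both the human subjects and the networks.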


Related research

09/12/2019 - Recurrent Connectivity Aids Recognition of Partly Occluded Objects
  Feedforward convolutional neural networks are the prevalent model of cor...

05/21/2018 - Learning long-range spatial dependencies with horizontal gated-recurrent units
  Progress in deep learning has spawned great successes in many engineerin...

03/23/2017 - Sequential Recurrent Neural Networks for Language Modeling
  Feedforward Neural Network (FNN)-based language models estimate the prob...

02/24/2019 - Learning to Apply Schematic Knowledge to Novel Instances
  Humans have schematic knowledge of how certain types of events unfold (e...

02/20/2019 - Emulating Human Developmental Stages with Bayesian Neural Networks
  We compare the acquisition of knowledge in humans and machines. Research...

02/27/2020 - Deep Randomized Neural Networks
  Randomized Neural Networks explore the behavior of neural systems where ...

03/05/2019 - Distinguishing mirror from glass: A 'big data' approach to material perception
  Visually identifying materials is crucial for many tasks, yet material p...
