Regularised neural networks mimic human insight

02/22/2023
by Anika T. Löwe, et al.

Humans sometimes show sudden improvements in task performance that have been linked to moments of insight. Such insight-related performance improvements appear special because they are preceded by an extended period of impasse, are unusually abrupt, and occur only in some, but not all, learners. Here, we ask whether insight-like behaviour also occurs in artificial neural networks trained with gradient descent algorithms. We compared learning dynamics in humans and regularised neural networks in a perceptual decision task that contained a hidden regularity which allowed the task to be solved more efficiently. We show that humans tend to discover this regularity through insight, rather than gradually. Notably, neural networks with regularised gate modulation closely mimicked behavioural characteristics of human insights, exhibiting delayed onset, suddenness and selective occurrence. Analyses of network learning dynamics revealed that insight-like behaviour crucially depended on noise added to gradient updates, and was preceded by “silent knowledge” that is initially suppressed by regularised (attentional) gating. This suggests that insights can arise naturally from gradual learning, where they reflect the combined influences of noise, attentional gating and regularisation.
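
For readers who want a concrete picture of the setup, the sketch below illustrates, under loose assumptions, the kind of model the abstract describes: a network whose inputs pass through regularised, attention-like gates and whose parameters are updated by gradient descent with Gaussian noise added to the updates. The two-channel task, the gate initialisation, the L1 penalty and all hyperparameters are illustrative choices, not the authors' actual architecture or training details.

```python
# Minimal sketch, not the authors' actual model: a linear readout over two
# gated input channels, trained with noisy SGD and an L1 penalty on the gates.
import numpy as np

rng = np.random.default_rng(0)
n_steps, lr, noise_sd, l1 = 5000, 0.05, 0.05, 0.02

# Channel 0: the instructed but noisy cue. Channel 1: the hidden, highly
# reliable cue whose gate starts near zero (suppressed by regularisation).
w = rng.normal(0.0, 0.1, size=2)      # readout weights
g = np.array([1.0, 0.05])             # attention-like gates

for t in range(n_steps):
    y = rng.choice([-1.0, 1.0])                      # correct response
    x = np.array([y + rng.normal(0.0, 1.0),          # instructed cue (noisy)
                  y + rng.normal(0.0, 0.1)])         # hidden cue (reliable)
    h = g * x                                        # gated input
    err = w @ h - y                                  # squared-error loss 0.5*err**2

    # Gradients of the loss plus the L1 gate penalty
    grad_w = err * h
    grad_g = err * w * x + l1 * np.sign(g)

    # Gaussian noise on the gradient updates; on the abstract's account, this
    # noise is what lets the suppressed gate escape abruptly ("insight").
    w -= lr * (grad_w + rng.normal(0.0, noise_sd, size=2))
    g -= lr * (grad_g + rng.normal(0.0, noise_sd, size=2))
    g = np.clip(g, 0.0, None)                        # keep gates non-negative

    if t % 1000 == 0:
        print(f"step {t:5d}  gates = {np.round(g, 3)}")
```

Tracking the gate on the hidden channel over training is where, on this account, a delayed and abrupt insight-like transition would show up in some, but not all, random seeds.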

Related research

09/10/2023 · Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? An analysis using stochastic processes
In recent years, there has been an intense debate about how learning in ...

03/22/2022 · Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals
Humans can learn several tasks in succession with minimal mutual interfe...

05/25/2022 · On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity
Most complex machine learning and modelling techniques are prone to over...

11/07/2019 · How implicit regularization of Neural Networks affects the learned function – Part I
Today, various forms of neural networks are trained to perform approxima...

05/25/2017 · Diagonal Rescaling For Neural Networks
We define a second-order neural network stochastic gradient training alg...

07/13/2023 · What Exactly is an Insight? A Literature Review
Insights are often considered the ideal outcome of visual analysis sessi...
