A mathematical theory of semantic development in deep neural networks

10/23/2018
by Andrew M. Saxe, et al.

An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: what are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to these learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep learning dynamics to give rise to these regularities.
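The dynamics described in the abstract can be illustrated numerically: a deep linear network trained by gradient descent learns each singular mode of the input-output correlation structure in a rapid, stage-like transition, with stronger (more general) modes acquired first. The sketch below, assuming a hypothetical four-item hierarchical toy domain and illustrative hyperparameters (not the paper's exact dataset or training setup), shows this mode-by-mode learning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hierarchical domain: one-hot items -> binary features
# (canary, salmon, oak, rose vs. grow/move/fly/swim/leaves/petals).
X = np.eye(4)
Y = np.array([
    [1, 1, 1, 0, 0, 0],   # canary
    [1, 1, 0, 1, 0, 0],   # salmon
    [1, 0, 0, 0, 1, 0],   # oak
    [1, 0, 0, 0, 0, 1],   # rose
], dtype=float).T          # shape (6 features, 4 items)

# Two-layer deep linear network y_hat = W2 @ W1 @ x, small random init.
H = 4
W1 = 1e-3 * rng.standard_normal((H, 4))
W2 = 1e-3 * rng.standard_normal((6, H))

lr, epochs = 0.05, 3000
sv_history = []
for _ in range(epochs):
    E = Y - W2 @ W1 @ X            # full-batch error
    gW2 = E @ (W1 @ X).T           # gradient descent on squared error
    gW1 = W2.T @ E @ X.T
    W2 += lr * gW2
    W1 += lr * gW1
    sv_history.append(np.linalg.svd(W2 @ W1, compute_uv=False))

sv_history = np.array(sv_history)
# Singular values of the learned input-output map rise sigmoidally,
# largest mode first, approaching the singular values of Y.
print(np.round(sv_history[::500], 2))
print(np.round(np.linalg.svd(Y, compute_uv=False), 2))
```

Tracking `sv_history` over training reproduces the qualitative picture from the paper: long plateaus punctuated by abrupt transitions, ordered by the strength of each correlational mode of the environment.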


Related research

- Controlling Recurrent Neural Networks by Conceptors (03/13/2014)
- The semantic landscape paradigm for neural networks (07/18/2023)
- Exact solutions to the nonlinear dynamics of learning in deep linear neural networks (12/20/2013)
- Introducing the structural bases of typicality effects in deep learning (07/07/2021)
- The Neural Race Reduction: Dynamics of Abstraction in Gated Networks (07/21/2022)
- Criticality & Deep Learning I: Generally Weighted Nets (02/26/2017)
- The Relativity of Induction (09/22/2020)
