Critical Learning Periods Emerge Even in Deep Linear Networks

08/23/2023
by Michael Kleinman, et al.

Critical learning periods are periods early in development during which temporary sensory deficits can have a permanent effect on behavior and learned representations. Despite the radical differences between biological and artificial networks, critical learning periods have been empirically observed in both systems, suggesting that critical periods may be fundamental to learning rather than an accident of biology. Yet why exactly critical periods emerge in deep networks remains an open question; in particular, it is unclear whether the critical periods observed in both systems depend on particular architectural or optimization details. To isolate the key underlying factors, we focus on deep linear network models and show that, surprisingly, such networks also display much of the behavior seen in biological and artificial networks, while remaining amenable to analytical treatment. We show that critical periods depend on the depth of the model and the structure of the data distribution. We also show, analytically and in simulations, that the learning of features is tied to competition between sources. Finally, we extend our analysis to multi-task learning, showing that pre-training on certain tasks can damage transfer performance on new tasks, and that this effect depends on the relationship between tasks and the duration of the pre-training stage. To the best of our knowledge, our work provides the first analytically tractable model that sheds light on why critical learning periods emerge in biological and artificial networks.
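
To make the kind of experiment described above concrete, here is a minimal NumPy sketch (an illustration only, not the paper's actual code or protocol): a depth-2 linear network is trained on a regression task whose input contains two "sources", and the first source is replaced by noise during an early deficit window. All layer sizes, the learning rate, the schedule, and the initialization scale are assumptions made for the sketch.

# Minimal sketch of a temporary-deficit experiment in a deep linear network.
# Dimensions, learning rate, schedule, and init scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d_in, d_hidden, d_out = 20, 20, 10      # assumed layer sizes
n_steps, lr = 4000, 1e-2                # assumed training schedule
deficit_end = 1500                      # deficit covers the first 1500 steps

# Teacher map: targets depend on both halves ("sources") of the input.
W_teacher = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)

def batch(corrupt_first_source, n=256):
    x = rng.standard_normal((n, d_in))
    y = x @ W_teacher.T
    if corrupt_first_source:
        # Temporary "sensory deficit": the first source carries no
        # information about the target during the deficit window.
        x[:, : d_in // 2] = rng.standard_normal((n, d_in // 2))
    return x, y

# Student: y_hat = x W1^T W2^T, with small initial weights so that learning
# shows the competitive, sigmoidal dynamics typical of deep linear networks.
W1 = 1e-2 * rng.standard_normal((d_hidden, d_in))
W2 = 1e-2 * rng.standard_normal((d_out, d_hidden))

for step in range(n_steps):
    x, y = batch(corrupt_first_source=(step < deficit_end))
    h = x @ W1.T                        # hidden activations
    err = h @ W2.T - y                  # prediction error
    n = x.shape[0]
    W2 -= lr * (err.T @ h) / n          # gradients of 0.5 * mean squared error
    W1 -= lr * ((err @ W2).T @ x) / n

# Compare how strongly the trained end-to-end map reads each source.
W_eff = W2 @ W1
print("weight norm on source 1 (deprived early):", np.linalg.norm(W_eff[:, : d_in // 2]))
print("weight norm on source 2 (intact):        ", np.linalg.norm(W_eff[:, d_in // 2:]))

Varying the depth, the initialization scale, and the length of the deficit window in a sketch like this is one way to probe, in simulation, the dependence of the critical window on model depth and data structure that the paper analyzes.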
