Statistical mechanics of continual learning: variational principle and mean-field potential

12/06/2022
by Chan Li, et al.

Continual learning of multiple tasks of a different nature poses an obstacle to artificial general intelligence. Various heuristic tricks, drawn from both machine learning and neuroscience, have recently been proposed, but they lack a unified theoretical foundation. Here, we focus on continual learning in single- and multi-layered neural networks with binary weights. We propose a variational Bayesian learning setting in which the network is trained in a field space, rather than the discrete-weight space where gradients are ill-defined; weight uncertainty is thereby incorporated naturally and modulates the synaptic resources among tasks. From a physics perspective, we translate variational continual learning into the Franz-Parisi thermodynamic-potential framework, where knowledge of the previous task acts as both a prior and a reference. The learning performance can then be studied analytically through mean-field order parameters, whose predictions agree with numerical experiments using stochastic gradient descent. Our principled framework also connects to elastic weight consolidation and neuroscience-inspired metaplasticity, providing a theory-grounded method for real-world multi-task learning with deep networks.
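As a rough illustration of the field-space idea described in the abstract (not the paper's exact formulation), the sketch below trains a single-layer binary-weight perceptron by gradient descent on continuous fields h, whose means m = tanh(h) act as the effective weights; after each task, the learned fields serve as an elastic anchor (an EWC-like quadratic penalty) for the next task. The task generator, loss, and hyperparameters (`make_task`, `lambda_reg`, `eta`) are illustrative assumptions, not quantities taken from the paper.

```python
# Minimal sketch, under the assumptions stated above: variational-style
# continual learning of a single-layer binary-weight perceptron in field space.
# Each binary weight w_i in {-1, +1} is parameterized by a continuous field h_i;
# its mean m_i = tanh(h_i) plays the role of a mean-field order parameter.
# After each task, the learned fields anchor the next task via a quadratic
# penalty, loosely mimicking a previous-task prior / reference configuration.

import numpy as np

rng = np.random.default_rng(0)
N = 100           # input dimension / number of binary synapses
eta = 0.5         # learning rate on the fields (illustrative)
lambda_reg = 0.1  # strength of the coupling to the previous-task fields (illustrative)

def make_task(n_samples=500):
    """Toy teacher-student task: labels from a random binary-weight teacher."""
    teacher = rng.choice([-1.0, 1.0], size=N)
    X = rng.standard_normal((n_samples, N))
    y = np.sign(X @ teacher)
    return X, y

def task_gradient(h, X, y):
    """Gradient of a logistic-type loss with respect to the fields h.

    Training uses the smooth margin y * (X @ m) / sqrt(N) with m = tanh(h);
    prediction at test time uses sign(X @ m).
    """
    m = np.tanh(h)
    margin = y * (X @ m) / np.sqrt(N)
    # d/dm of -log sigmoid(margin), then chain rule through m = tanh(h)
    grad_m = -(1.0 / (1.0 + np.exp(margin)))[:, None] * (y[:, None] * X) / np.sqrt(N)
    return grad_m.mean(axis=0) * (1.0 - m**2)

def train_task(h, h_prev, X, y, epochs=200):
    """Gradient descent in field space, elastically coupled to the previous task."""
    for _ in range(epochs):
        grad = task_gradient(h, X, y)
        if h_prev is not None:
            grad = grad + lambda_reg * (h - h_prev)  # EWC-like anchor on the fields
        h = h - eta * grad
    return h

def accuracy(h, X, y):
    m = np.tanh(h)
    return np.mean(np.sign(X @ m) == y)

# Sequential training on two toy tasks, reporting accuracy on all tasks seen so far.
h = np.zeros(N)
h_prev = None
tasks = [make_task(), make_task()]
for t, (X, y) in enumerate(tasks):
    h = train_task(h, h_prev, X, y)
    h_prev = h.copy()
    for s, (Xs, ys) in enumerate(tasks[: t + 1]):
        print(f"after task {t}: accuracy on task {s} = {accuracy(h, Xs, ys):.2f}")
```

Raising `lambda_reg` pulls the fields toward the previous-task configuration (less forgetting, less plasticity); lowering it lets the new task dominate, the usual stability-plasticity trade-off that the paper analyzes with mean-field order parameters.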


