Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations

05/24/2022
by Ali Hummos, et al.

Animals thrive in constantly changing environments and leverage their temporal structure to learn well-factorized causal representations. In contrast, traditional neural networks suffer from forgetting in changing environments, and the many methods proposed to limit forgetting come with different trade-offs. Inspired by the thalamocortical circuit of the brain, we introduce a simple algorithm that uses optimization at inference time to generate internal representations of temporal context and to infer the current context dynamically, allowing the agent to parse the stream of temporal experience into discrete events and to organize learning about them. We show that a network trained on a series of tasks using traditional weight updates can infer tasks dynamically through gradient descent steps in the latent task-embedding space (latent updates). We then alternate between weight updates and latent updates to arrive at Thalamus, a task-agnostic algorithm capable of discovering disentangled representations in a stream of unlabeled tasks using simple gradient descent. On a continual learning benchmark, it achieves competitive end-average accuracy and demonstrates knowledge transfer. After learning a subset of the tasks, it can generalize to unseen tasks through one-shot latent updates, as those tasks become reachable within the well-factorized latent space. The algorithm meets many of the desiderata of an ideal continually learning agent in open-ended environments, and its simplicity suggests fundamental computations in circuits with abundant feedback control loops, such as the thalamocortical circuits of the brain.
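The alternation at the heart of the method can be sketched in a few lines: ordinary gradient descent on the weights while the current context fits the data, and gradient descent on a latent context embedding (with the weights untouched) when a spike in loss signals a context switch. The PyTorch sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the architecture, hyperparameters, the `SURPRISE` threshold, and the `ContextConditionedNet` / `thalamus_step` names are all hypothetical choices made for the example.

```python
# Minimal sketch of alternating weight updates and latent updates (assumed
# toy setup; not the paper's exact model or hyperparameters).
import torch
import torch.nn as nn

class ContextConditionedNet(nn.Module):
    """A network whose computation is modulated by a latent context embedding z."""
    def __init__(self, in_dim=10, hidden=64, out_dim=1, z_dim=8):
        super().__init__()
        self.z = nn.Parameter(torch.zeros(z_dim))  # latent task embedding
        self.body = nn.Sequential(
            nn.Linear(in_dim + z_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        z = self.z.expand(x.shape[0], -1)  # broadcast the context to the batch
        return self.body(torch.cat([x, z], dim=-1))

model = ContextConditionedNet()
weight_params = [p for n, p in model.named_parameters() if n != "z"]
weight_opt = torch.optim.Adam(weight_params, lr=1e-3)  # slow weight updates
latent_opt = torch.optim.Adam([model.z], lr=1e-1)      # fast latent updates
loss_fn = nn.MSELoss()
SURPRISE = 1.0  # hypothetical threshold: a loss spike signals a context switch

def thalamus_step(x, y):
    """One step of the alternation: infer context via latent descent when
    surprised, otherwise consolidate knowledge with a weight update."""
    loss = loss_fn(model(x), y)
    if loss.item() > SURPRISE:
        # Latent update: keep the weights fixed and descend in the
        # task-embedding space to re-interpret the incoming data
        # (dynamic task inference at inference time).
        latent_opt.zero_grad()
        loss.backward()
        latent_opt.step()
    else:
        # Weight update: ordinary gradient descent on the network weights
        # under the currently inferred context.
        weight_opt.zero_grad()
        loss.backward()
        weight_opt.step()
    return loss.item()

# Toy usage: stream two contexts; the loss spike at the switch at t=100
# triggers latent updates until the embedding re-fits the new context.
for t in range(200):
    sign = 1.0 if t < 100 else -1.0
    x = torch.randn(32, 10)
    y = sign * x.sum(dim=-1, keepdim=True)
    thalamus_step(x, y)
```

In this reading, one-shot generalization falls out of the same mechanism: for a new task, a single latent step can move the embedding to a point in the factorized latent space that already solves it, without modifying the weights.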
