Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments

12/31/2021
by   Abhiram Iyer, et al.

A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state-of-the-art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows. First, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training. Analysis of both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.
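To make the mechanism concrete, below is a minimal sketch (in PyTorch, not the authors' released code) of a feedforward layer augmented with active dendrites: each unit owns several dendritic segments whose weights are matched against a task/context vector, the best-matching segment gates the unit's feedforward activation, and a k-winner-take-all step keeps the resulting representation sparse. Layer sizes, the sigmoid gating, and the one-hot task context are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class ActiveDendritesLayer(nn.Module):
    """Sketch of a feedforward layer gated by dendritic segments and kWTA sparsity."""

    def __init__(self, in_dim, out_dim, context_dim, num_segments=10, sparsity=0.1):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # standard feedforward weights
        # One weight vector per (unit, dendritic segment), matched against the context.
        self.segments = nn.Parameter(torch.randn(out_dim, num_segments, context_dim) * 0.01)
        self.k = max(1, int(sparsity * out_dim))  # number of winning units kept active

    def forward(self, x, context):
        y = self.linear(x)  # (batch, out_dim)
        # Dendritic responses: dot product of every segment with the context vector.
        seg = torch.einsum("bc,osc->bos", context, self.segments)  # (batch, out_dim, segments)
        # Each unit is modulated by its strongest-responding segment.
        mod = torch.sigmoid(seg.max(dim=2).values)  # (batch, out_dim)
        y = y * mod
        # k-winner-take-all: zero out all but the top-k units per sample.
        kth = torch.topk(y, self.k, dim=1).values[:, -1:].detach()
        return y * (y >= kth).float()


# Hypothetical usage: a context vector (e.g. a one-hot task ID) routes each input
# through an overlapping but task-specific sparse subnetwork.
layer = ActiveDendritesLayer(in_dim=64, out_dim=128, context_dim=10)
x = torch.randn(32, 64)
context = torch.eye(10)[torch.randint(0, 10, (32,))]  # one-hot task contexts
out = layer(x, context)  # sparse, context-gated activations
```

Because different contexts select different dendritic segments and different winning units, gradients for one task touch only a small subset of the layer, which is the intuition behind the reduced interference and minimal forgetting reported in the abstract.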


