Beneficial Perturbation Network for designing general adaptive artificial intelligence systems

09/27/2020
by Shixian Wen, et al.

The human brain is the gold standard of adaptive learning. It can not only learn and benefit from experience, but also adapt to new situations. In contrast, deep neural networks learn only one sophisticated but fixed mapping from inputs to outputs, which limits their applicability to more dynamic situations where the input-to-output mapping may change with context. A salient example is continual learning: learning new independent tasks sequentially without forgetting previous tasks. Continual learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby the previously learned mapping for an old task is erased when new mappings are learned for new tasks. Here, we propose a new biologically plausible type of deep neural network with extra, out-of-network, task-dependent biasing units to accommodate these dynamic situations. This allows, for the first time, a single network to learn potentially unlimited parallel input-to-output mappings and to switch among them on the fly at runtime. The biasing units are programmed by leveraging beneficial perturbations (the opposite of well-known adversarial perturbations) for each task. Beneficial perturbations for a given task bias the network toward that task, essentially switching the network into a different mode to process that task. This largely eliminates catastrophic interference between tasks. Our approach is memory- and parameter-efficient, can accommodate many tasks, and achieves state-of-the-art performance across different tasks and domains.
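The core idea can be illustrated with a small sketch. The snippet below is a minimal, hypothetical PyTorch rendering of per-task biasing units: each layer carries one extra bias vector per task, and selecting a task's vector at runtime switches the network into that task's processing mode. For simplicity it trains the biasing units by ordinary gradient descent on the task loss; the paper's actual update derives the biases from beneficial perturbations, so this is a sketch of the architecture rather than the authors' method. Names such as BiasedLinear, BPNet, and train_task are illustrative, not from the paper's code.

```python
# A minimal sketch of per-task biasing units, assuming PyTorch.
# BiasedLinear / BPNet / train_task are illustrative names, not the
# authors' reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasedLinear(nn.Module):
    """Linear layer plus one extra out-of-network bias vector per task."""
    def __init__(self, in_features, out_features, n_tasks):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # One biasing-unit vector per task, initialized to zero.
        self.task_bias = nn.Parameter(torch.zeros(n_tasks, out_features))

    def forward(self, x, task_id):
        # Adding the selected task's bias shifts this layer's activations,
        # switching the network into that task's processing mode.
        return self.linear(x) + self.task_bias[task_id]

class BPNet(nn.Module):
    """Two-layer classifier with task-dependent biasing units."""
    def __init__(self, n_tasks, n_in=784, n_hidden=256, n_classes=10):
        super().__init__()
        self.fc1 = BiasedLinear(n_in, n_hidden, n_tasks)
        self.fc2 = BiasedLinear(n_hidden, n_classes, n_tasks)

    def forward(self, x, task_id):
        h = F.relu(self.fc1(x, task_id))
        return self.fc2(h, task_id)

def train_task(model, loader, task_id, epochs=1, lr=1e-3):
    # Simplified update: descend the task loss with respect to both the
    # shared weights and the current task's biasing units. (The paper's
    # beneficial-perturbation update differs in detail; in a continual
    # run one would protect shared weights learned on earlier tasks.)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            logits = model(x.view(x.size(0), -1), task_id)
            F.cross_entropy(logits, y).backward()
            opt.step()
```

At runtime, switching between learned tasks is just a matter of indexing: `model(x, task_id=0)` processes an input in task 0's mode, while `model(x, task_id=1)` processes the same input in task 1's mode, with no change to the shared weights.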


Related research

Beneficial perturbation network for continual learning (06/22/2019)
Sequential learning of multiple tasks in artificial neural networks usin...

Continuous Learning of Context-dependent Processing in Neural Networks (09/29/2018)
Deep artificial neural networks (DNNs) are powerful tools for recognitio...

Intriguing properties of neural networks (12/21/2013)
Deep neural networks are highly expressive models that have recently ach...

Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals (03/22/2022)
Humans can learn several tasks in succession with minimal mutual interfe...

Center Loss Regularization for Continual Learning (10/21/2021)
The ability to learn different tasks sequentially is essential to the de...

Active Long Term Memory Networks (06/07/2016)
Continual Learning in artificial neural networks suffers from interferen...

ImpressLearn: Continual Learning via Combined Task Impressions (10/05/2022)
This work proposes a new method to sequentially train a deep neural netw...
