Continual Learning with Deep Artificial Neurons

11/13/2020
by Blake Camp, et al.

Neurons in real brains are enormously complex computational units. Among other things, they are responsible for transforming inbound electro-chemical vectors into outbound action potentials, updating the strengths of intermediate synapses, regulating their own internal states, and modulating the behavior of other nearby neurons. One could argue that these cells are the only things exhibiting any semblance of real intelligence. It is odd, therefore, that the machine learning community has, for so long, relied upon the assumption that this complexity can be reduced to a simple sum-and-fire operation. We ask: might there be some benefit to substantially increasing the computational power of individual neurons in artificial systems? To answer this question, we introduce Deep Artificial Neurons (DANs), which are themselves realized as deep neural networks. Conceptually, we embed DANs inside each node of a traditional neural network, and we connect these neurons at multiple synaptic sites, thereby vectorizing the connections between pairs of cells. We demonstrate that it is possible to meta-learn a single parameter vector, which we dub a neuronal phenotype, that is shared by all DANs in the network and facilitates a meta-objective during deployment. Here, we isolate continual learning as our meta-objective, and we show that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting, using standard backpropagation, without experience replay or separate wake/sleep phases. We demonstrate this ability on sequential non-linear regression tasks.
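The architecture described in the abstract admits a compact sketch. Below is a minimal, illustrative PyTorch reading of the idea, not the authors' released code: the module names (DAN, DANLayer), the synapse dimension, the hidden width of the phenotype MLP, and the frozen-phenotype deployment step are all hypothetical choices made for this example.

```python
# A minimal sketch of Deep Artificial Neurons (DANs) with vectorized
# synapses, assuming PyTorch. All names and sizes here are illustrative.
import torch
import torch.nn as nn


class DAN(nn.Module):
    """A deep artificial neuron: a small MLP whose weights constitute the
    neuronal phenotype. A single instance is shared by every node."""

    def __init__(self, synapse_dim: int, fan_in: int, hidden: int = 32):
        super().__init__()
        # Maps a node's concatenated vector-valued inputs to its
        # vector-valued output (its "action potential").
        self.phenotype = nn.Sequential(
            nn.Linear(fan_in * synapse_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, synapse_dim),
        )

    def forward(self, x):
        return self.phenotype(x)


class DANLayer(nn.Module):
    """One layer of nodes. Each connection between a pair of nodes is a
    vector of synapse_dim scalar synapses rather than a single weight."""

    def __init__(self, in_nodes, out_nodes, synapse_dim, shared_dan):
        super().__init__()
        # Per-edge synaptic weights: these stay plastic during deployment.
        self.synapses = nn.Parameter(
            0.1 * torch.randn(out_nodes, in_nodes, synapse_dim))
        self.dan = shared_dan  # one phenotype shared across all nodes

    def forward(self, x):                      # x: (batch, in_nodes, d)
        pre = x.unsqueeze(1) * self.synapses   # (batch, out_nodes, in_nodes, d)
        return self.dan(pre.flatten(2))        # (batch, out_nodes, d)


synapse_dim, nodes = 4, 8
dan = DAN(synapse_dim, fan_in=nodes)
net = nn.Sequential(DANLayer(nodes, nodes, synapse_dim, dan),
                    DANLayer(nodes, nodes, synapse_dim, dan))

# At deployment, the meta-learned phenotype is frozen; only the synapses
# are updated, with standard backpropagation and no replay buffer.
for p in dan.parameters():
    p.requires_grad_(False)
opt = torch.optim.SGD([layer.synapses for layer in net], lr=1e-2)

x = torch.randn(16, nodes, synapse_dim)  # toy batch from the current task
y = torch.randn(16, nodes, synapse_dim)  # toy regression target
opt.zero_grad()
loss = ((net(x) - y) ** 2).mean()
loss.backward()
opt.step()
```

Note the two distinct parameter sets this separation creates: the phenotype inside DAN, which would be meta-learned across many tasks and then frozen, and the per-edge synapses, which remain plastic at deployment and are the only parameters touched by standard backpropagation.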

research
09/09/2022

Continual learning benefits from multiple sleep mechanisms: NREM, REM, and Synaptic Downscaling

Learning new tasks and skills in succession without losing prior learnin...
research
06/11/2018

Meta Continual Learning

Using neural networks in practical settings would benefit from the abili...
research
10/07/2019

Biologically-Inspired Spatial Neural Networks

We introduce bio-inspired artificial neural networks consisting of neuro...
research
08/09/2023

Enhancing Efficient Continual Learning with Dynamic Structure Development of Spiking Neural Networks

Children possess the ability to learn multiple cognitive tasks sequentia...
research
06/02/2023

GateON: an unsupervised method for large scale continual learning

The objective of continual learning (CL) is to learn tasks sequentially ...
research
04/22/2022

A Computational Theory of Learning Flexible Reward-Seeking Behavior with Place Cells

An important open question in computational neuroscience is how various ...
research
12/31/2018

Two "correlation games" for a nonlinear network with Hebbian excitatory neurons and anti-Hebbian inhibitory neurons

A companion paper introduces a nonlinear network with Hebbian excitatory...
