Increasing Depth of Neural Networks for Life-long Learning

02/22/2022
by   Jędrzej Kozal, et al.

Increasing neural network depth is a well-known method for improving performance. Modern deep architectures contain multiple mechanisms that allow networks with hundreds or even thousands of layers to be trained. This work investigates whether extending neural network depth can be beneficial in a life-long learning setting. In particular, we propose a novel method that adds new layers on top of existing ones, enabling forward transfer of knowledge and adapting previously learned representations to new tasks. To select the best location in the network for adding new nodes with trainable parameters, we identify the previously learned tasks most similar to the new one. This approach yields a tree-like model in which each node is a set of neural network parameters dedicated to a specific task. The proposed method is inspired by the Progressive Neural Network (PNN) concept: it is rehearsal-free and benefits from a dynamically changing network structure, yet it requires fewer parameters per task than PNN. Experiments on Permuted MNIST and SplitCIFAR show that the proposed algorithm is on par with other continual learning methods. We also perform ablation studies to clarify the contribution of each part of the system.
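The tree-growth idea described above can be sketched structurally as follows. This is a minimal illustration, not the authors' implementation: the `similarity` mapping is a hypothetical stand-in for the paper's task-similarity procedure, and the nodes here only record structure rather than holding real trainable layers.

```python
# Structural sketch of a tree-like continual-learning model:
# each node holds the parameters for one task, and a new task
# attaches fresh trainable layers on top of the most similar
# existing node, reusing its learned representation.

class TaskNode:
    def __init__(self, task_id, parent=None):
        self.task_id = task_id   # task this node's layers serve
        self.parent = parent     # node whose representation is extended
        self.children = []

    def path_to_root(self):
        """Chain of nodes applied for this task, root first."""
        node, path = self, []
        while node is not None:
            path.append(node.task_id)
            node = node.parent
        return list(reversed(path))


class TreeModel:
    def __init__(self):
        self.root = TaskNode("base")
        self.nodes = [self.root]

    def add_task(self, task_id, similarity):
        # `similarity` maps an existing task_id to a score for the
        # new task (hypothetical inputs here); the new node is
        # attached under the highest-scoring existing node, which
        # enables forward transfer of its representation.
        best = max(self.nodes,
                   key=lambda n: similarity.get(n.task_id, 0.0))
        node = TaskNode(task_id, parent=best)
        best.children.append(node)
        self.nodes.append(node)
        return node


model = TreeModel()
model.add_task("task1", {"base": 1.0})
t2 = model.add_task("task2", {"base": 0.2, "task1": 0.9})
print(t2.path_to_root())  # task2 builds on task1's layers
```

Because each new task adds only the layers of its own node while sharing the chain below it, the parameter cost per task stays below that of PNN, which adds a full column per task.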


Related research

02/27/2022  Robust Continual Learning through a Comprehensively Progressive Bayesian Neural Network
This work proposes a comprehensively progressive Bayesian neural network...

12/12/2020  Knowledge Capture and Replay for Continual Learning
Deep neural networks have shown promise in several domains, and the lear...

06/11/2021  A Novel Approach to Lifelong Learning: The Plastic Support Structure
We propose a novel approach to lifelong learning, introducing a compact ...

05/20/2019  Continual Learning in Deep Neural Networks by Using Kalman Optimiser
Learning and adapting to new distributions or learning new tasks sequent...

10/17/2018  Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments
The feasibility of deep neural networks (DNNs) to address data stream pr...

10/05/2022  ImpressLearn: Continual Learning via Combined Task Impressions
This work proposes a new method to sequentially train a deep neural netw...

04/16/2020  Continual Learning with Extended Kronecker-factored Approximate Curvature
We propose a quadratic penalty method for continual learning of neural n...
