Block Neural Network Avoids Catastrophic Forgetting When Learning Multiple Tasks

11/28/2017
by Guglielmo Montone, et al.

In the present work we propose a deep feed-forward network architecture that can be trained under a sequential learning paradigm, in which tasks of increasing difficulty are learned one after another while avoiding catastrophic forgetting. When a new task is related to previously learned ones, the proposed architecture reuses the features learned on those earlier tasks. As a result, it requires fewer computational resources (neurons and connections) and less data to learn the new task than a network trained from scratch.
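A minimal PyTorch sketch of one possible reading of this idea (not the authors' released code): a block trained on an earlier task is frozen, and a new, smaller block learns a later task by consuming both the raw input and the frozen blocks' features, so previously learned weights are never overwritten. All class names, layer sizes, and the single-hidden-layer block structure below are illustrative assumptions.

    import torch
    import torch.nn as nn


    class Block(nn.Module):
        """A small feed-forward block of fully connected layers (hypothetical)."""

        def __init__(self, in_dim, hidden_dim, out_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, out_dim),
            )

        def forward(self, x):
            return self.net(x)


    class BlockNetwork(nn.Module):
        """Grows by one block per task; earlier blocks are frozen and reused."""

        def __init__(self, in_dim, hidden_dim, feat_dim, n_classes):
            super().__init__()
            self.blocks = nn.ModuleList()
            self.heads = nn.ModuleList()
            self.in_dim, self.hidden_dim = in_dim, hidden_dim
            self.feat_dim, self.n_classes = feat_dim, n_classes

        def add_task(self):
            # Freeze all previously trained blocks and heads so that learning
            # the new task cannot overwrite them (no catastrophic forgetting).
            for module in list(self.blocks) + list(self.heads):
                for p in module.parameters():
                    p.requires_grad = False
            # The new block sees the raw input plus the features of all frozen
            # blocks, so it can reuse what was already learned and stay small.
            in_dim = self.in_dim + len(self.blocks) * self.feat_dim
            self.blocks.append(Block(in_dim, self.hidden_dim, self.feat_dim))
            self.heads.append(nn.Linear(self.feat_dim, self.n_classes))

        def forward(self, x, task_id):
            feats = []
            for block in self.blocks[: task_id + 1]:
                inp = torch.cat([x] + feats, dim=1)
                feats.append(block(inp))
            return self.heads[task_id](feats[-1])


    # Usage sketch: only the newest block's parameters receive gradients.
    model = BlockNetwork(in_dim=784, hidden_dim=64, feat_dim=32, n_classes=10)
    model.add_task()  # adds the block for the next task in the sequence
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=1e-3
    )

Because only the newly added block and its head are trainable, the parameter count and data requirement for each additional task stay small, which is the efficiency claim made in the abstract.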

