Continual Learning Using Task Conditional Neural Networks

05/08/2020
by   Honglin Li, et al.

Conventional deep learning models have limited capacity for learning multiple tasks sequentially. In continual learning, the problem of forgetting previously learned tasks is known as catastrophic forgetting or interference. When the input data or the goal of learning changes, a continual model learns and adapts to the new state. However, the model does not remember or recognise revisits to previous states, which leads to performance degradation and the need for re-training when dealing with periodic or irregularly recurring changes in the data or goals. Such changes in goals or data are referred to as new tasks in a continual learning model. Most continual learning methods assume a task-known setup in which the task identities are provided to the model in advance. We propose Task Conditional Neural Networks (TCNN), which do not require the recurring tasks to be known in advance. We evaluate our model on the standard MNIST and CIFAR10 datasets, as well as on a real-world dataset collected in a remote healthcare monitoring study (the TIHM dataset). The proposed model outperforms state-of-the-art continual learning solutions in adapting to new tasks that are not defined in advance.
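To make the task-conditional idea concrete, the sketch below shows one common way a model can handle recurring tasks without being told the task identity at test time: keep a separate output head per task and route each input to the head that makes the most confident (lowest-entropy) prediction. The per-task heads, the entropy-based routing, and all names (TaskConditionalNet, infer_task) are illustrative assumptions for this sketch, not the exact architecture proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskConditionalNet(nn.Module):
    """Illustrative sketch: a shared feature extractor with one output
    head per task. At test time the task identity is inferred from
    predictive confidence, so it does not need to be given in advance."""

    def __init__(self, in_dim=784, hidden=256, classes_per_task=2, n_tasks=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One classification head per task (assumption for illustration).
        self.heads = nn.ModuleList(
            nn.Linear(hidden, classes_per_task) for _ in range(n_tasks)
        )

    def forward(self, x, task_id):
        # Training: the current task id is known, so use its head directly.
        return self.heads[task_id](self.backbone(x))

    @torch.no_grad()
    def infer_task(self, x):
        # Test time: pick the head with the lowest predictive entropy,
        # i.e. the task the model is most confident about.
        feats = self.backbone(x)
        entropies = []
        for head in self.heads:
            probs = F.softmax(head(feats), dim=-1)
            entropies.append(-(probs * probs.clamp_min(1e-8).log()).sum(-1))
        # Stack to shape (n_tasks, batch) and choose a task per sample.
        return torch.stack(entropies).argmin(dim=0)
```

In this sketch one would call `model.infer_task(x)` per sample and then forward through the chosen head; a replay buffer or a regularisation term would still be needed to limit forgetting in the shared backbone, which this minimal example does not include.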

Related research

- Continual Learning Using Bayesian Neural Networks (10/09/2019): Continual learning models allow to learn and adapt to new changes and ta...
- Advancing continual lifelong learning in neural information retrieval: definition, dataset, framework, and empirical evaluation (08/16/2023): Continual learning refers to the capability of a machine learning model ...
- Create and Find Flatness: Building Flat Training Spaces in Advance for Continual Learning (09/20/2023): Catastrophic forgetting remains a critical challenge in the field of con...
- Attention Distraction: Watermark Removal Through Continual Learning with Selective Forgetting (04/05/2022): Fine-tuning attacks are effective in removing the embedded watermarks in...
- Lifelong Learning Without a Task Oracle (11/09/2020): Supervised deep neural networks are known to undergo a sharp decline in ...
- Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models (12/09/2022): In this paper, we present a simple yet surprisingly effective technique ...
- A Meta-Learned Neuron model for Continual Learning (11/03/2021): Continual learning is the ability to acquire new knowledge without forge...
