Progress & Compress: A scalable framework for continual learning

05/16/2018
by Jonathan Schwarz et al.

We introduce a conceptually simple and scalable framework for continual learning domains where tasks are learned sequentially. Our method is constant in the number of parameters and is designed to preserve performance on previously encountered tasks while accelerating learning progress on subsequent problems. This is achieved by training two neural networks: a knowledge base, capable of solving previously encountered problems, which is connected to an active column that is employed to efficiently learn the current task. After learning a new task, the active column is distilled into the knowledge base, taking care to protect any previously learnt tasks. This cycle of active learning (progression) followed by consolidation (compression) requires no architecture growth, no access to or storing of previous data or tasks, and no task-specific parameters. Thus, it is a learning process that may be sustained over a lifetime of tasks while supporting forward transfer and minimising forgetting. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
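As a toy illustration only (not the paper's implementation), the progress/compress cycle can be sketched with linear models on synthetic regression tasks. The "active column" is trained fresh on each task, then distilled into a "knowledge base" whose previously consolidated parameters are protected by a simple quadratic penalty; a diagonal E[x^2] term stands in for the paper's online-EWC importance estimate, and all task data, model sizes, and hyperparameters are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(w, X, y, lr=0.1, steps=300, anchor=None, fisher=None, lam=1.0):
    """Gradient descent on mean squared error. If anchor/fisher are given,
    add an EWC-style quadratic penalty lam * fisher * (w - anchor)^2 that
    discourages moving parameters important for earlier tasks."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        if anchor is not None:
            grad = grad + 2 * lam * fisher * (w - anchor)
        w = w - lr * grad
    return w

# Two toy regression tasks with conflicting target weights.
X1 = rng.normal(size=(200, 3)); w1 = np.array([1.0, 2.0, -1.0]); y1 = X1 @ w1
X2 = rng.normal(size=(200, 3)); w2 = np.array([0.0, 1.0, 3.0]);  y2 = X2 @ w2

kb = np.zeros(3)       # knowledge base parameters
fisher = np.zeros(3)   # diagonal importance estimate for kb parameters
naive = np.zeros(3)    # baseline: plain sequential fine-tuning, no protection

for X, y in [(X1, y1), (X2, y2)]:
    # Progress: a fresh active column learns the current task.
    active = train_linear(np.zeros(3), X, y)
    # Compress: distill the active column's predictions into the knowledge
    # base, while the penalty protects previously consolidated tasks.
    kb = train_linear(kb, X, X @ active, anchor=kb.copy(), fisher=fisher)
    # Update the importance estimate (E[x^2] proxy for a diagonal Fisher).
    fisher = fisher + np.mean(X**2, axis=0)
    # Baseline simply keeps fine-tuning on each new task.
    naive = train_linear(naive, X, y)

err_pc    = np.mean((X1 @ kb    - y1) ** 2)  # forgetting of task 1
err_naive = np.mean((X1 @ naive - y1) ** 2)
```

After both tasks, the protected knowledge base retains much lower error on the first task than the unprotected baseline, while still fitting the second task better than an untrained model, which is the qualitative behaviour the framework targets.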



Related research

- Progressive Prompts: Continual Learning for Language Models (01/29/2023)
- AFEC: Active Forgetting of Negative Transfer in Continual Learning (10/23/2021)
- Compacting, Picking and Growing for Unforgetting Continual Learning (10/15/2019)
- Active Continual Learning: Labelling Queries in a Sequence of Tasks (05/06/2023)
- Theoretical Understanding of the Information Flow on Continual Learning Performance (04/26/2022)
- Continual Learning for Task-oriented Dialogue System with Iterative Network Pruning, Expanding and Masking (07/17/2021)
