Selective Replay Enhances Learning in Online Continual Analogical Reasoning

03/06/2021
by Tyler L. Hayes, et al.

In continual learning, a system learns from non-stationary data streams or batches without catastrophically forgetting previous knowledge. While this problem has been heavily studied in supervised image classification and reinforcement learning, continual learning in neural networks designed for abstract reasoning has not yet been explored. Here, we study continual learning of analogical reasoning. Analogical reasoning tests, such as Raven's Progressive Matrices (RPMs), are commonly used to measure non-verbal abstract reasoning in humans, and offline neural networks for the RPM problem have recently been proposed. In this paper, we establish experimental baselines, protocols, and forward and backward transfer metrics to evaluate continual learners on RPMs. We employ experience replay to mitigate catastrophic forgetting. Prior work on replay for image classification tasks has found that selectively choosing which samples to replay offers little, if any, benefit over random selection. In contrast, we find that selective replay can significantly outperform random selection on the RPM task.
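The abstract contrasts selective replay with uniform random replay but does not spell out a selection policy here. The sketch below illustrates the general idea under one common assumption: each stored sample carries a score (e.g., the model's most recent loss on it), and selective replay prioritizes the highest-scoring samples. The `ReplayBuffer` class and its method names are illustrative, not the authors' implementation.

```python
import random


class ReplayBuffer:
    """Minimal replay buffer for continual learning.

    Each entry is a (sample, score) pair, where the score might be,
    for example, the model's most recent loss on that sample.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []  # list of (sample, score) pairs

    def add(self, sample, score):
        # Drop the oldest entry when the buffer is full (FIFO eviction).
        if len(self.entries) >= self.capacity:
            self.entries.pop(0)
        self.entries.append((sample, score))

    def sample_random(self, k):
        """Uniform random selection: the baseline replay policy."""
        chosen = random.sample(self.entries, min(k, len(self.entries)))
        return [s for s, _ in chosen]

    def sample_selective(self, k):
        """Selective replay: rank entries by score and replay the
        top-k, e.g., the samples the model currently finds hardest."""
        ranked = sorted(self.entries, key=lambda e: e[1], reverse=True)
        return [s for s, _ in ranked[:k]]
```

In a training loop, the learner would interleave each new RPM example with a small batch drawn via `sample_selective`, refreshing the stored scores as the model's losses change.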


Related research

Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping (02/22/2021)
Progressive Latent Replay for efficient Generative Rehearsal (07/04/2022)
GRASP: A Rehearsal Policy for Efficient Online Continual Learning (08/25/2023)
Sequential Learning Of Neural Networks for Prequential MDL (10/14/2022)
Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference (10/29/2018)
Partial Hypernetworks for Continual Learning (06/19/2023)
Lifelong Learning using Eigentasks: Task Separation, Skill Acquisition, and Selective Transfer (07/14/2020)
