Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator

10/17/2022
by Jaesung Yoo, et al.

When a deep learning model is trained sequentially on different datasets, it forgets the knowledge acquired from earlier data, a phenomenon known as catastrophic forgetting. This degrades the model's performance across diverse datasets, which is critical in privacy-preserving deep learning (PPDL) applications based on transfer learning (TL). To overcome this, we propose review learning (RL), a generative-replay-based continual learning technique that does not require a separate generator. Data samples are generated from the memory stored within the synaptic weights of the deep learning model and are used to review knowledge acquired from previous datasets. The performance of RL was validated through PPDL experiments. Simulations and real-world multi-institutional medical experiments were conducted using three types of binary-classification electronic health record data. In the real-world experiments, the global area under the receiver operating characteristic curve was 0.710 for RL and 0.655 for TL, showing that RL was highly effective in retaining previously learned knowledge.
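The abstract does not specify how pseudo-samples are recovered from the synaptic weights. The PyTorch sketch below illustrates one plausible reading, assuming a model-inversion step: random inputs are optimized until the frozen previous-task model classifies them confidently into chosen classes, and these pseudo-samples are then rehearsed alongside the new dataset. This is not the authors' exact algorithm; the function names and hyperparameters (synthesize_replay_batch, train_with_review, steps, lr) are illustrative.

```python
# Hypothetical sketch of generator-free replay via model inversion
# (an assumed reading of "review learning", not the paper's algorithm).
import torch
import torch.nn.functional as F

def synthesize_replay_batch(model, n_samples, n_features, n_classes,
                            steps=200, lr=0.1):
    """Recover pseudo-samples from the knowledge stored in the weights."""
    model.eval()
    # Freeze the previous-task model so only the inputs are optimized.
    for p in model.parameters():
        p.requires_grad_(False)
    x = torch.randn(n_samples, n_features, requires_grad=True)
    # Assign each pseudo-sample a target class to balance the batch.
    y = torch.arange(n_samples) % n_classes
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Push inputs toward regions the old model classifies confidently.
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return x.detach(), y

def train_with_review(model, new_loader, replay_x, replay_y,
                      epochs=1, lr=1e-3):
    """Interleave new-task batches with replayed pseudo-samples."""
    # Unfreeze the model for training on the new dataset.
    for p in model.parameters():
        p.requires_grad_(True)
    model.train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in new_loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(xb), yb)
            # Review step: rehearse knowledge from the previous dataset.
            loss = loss + F.cross_entropy(model(replay_x), replay_y)
            loss.backward()
            opt.step()
```

For the binary-classification electronic health record setting described above, n_classes would be 2 and n_features the number of tabular input features.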


Related research

08/28/2021 · Prototypes-Guided Memory Replay for Continual Learning
Continual learning (CL) refers to a machine learning paradigm that using...

05/07/2020 · Generative Feature Replay with Orthogonal Weight Modification for Continual Learning
The ability of intelligent agents to learn and remember multiple tasks s...

03/23/2023 · Adiabatic replay for continual learning
Conventional replay-based approaches to continual learning (CL) require,...

03/22/2021 · Catastrophic Forgetting in Deep Graph Networks: an Introductory Benchmark for Graph Classification
In this work, we study the phenomenon of catastrophic forgetting in the ...

05/05/2021 · Continual Learning on the Edge with TensorFlow Lite
Deploying sophisticated deep learning models on embedded devices with th...

07/14/2020 · Lifelong Learning using Eigentasks: Task Separation, Skill Acquisition, and Selective Transfer
We introduce the eigentask framework for lifelong learning. An eigentask...

03/27/2021 · Addressing catastrophic forgetting for medical domain expansion
Model brittleness is a key concern when deploying deep learning models i...
