Lifelong Learning Process: Self-Memory Supervising and Dynamically Growing Networks

04/27/2020
by Youcheng Huang, et al.

From childhood to youth, humans gradually come to know the world. For neural networks, however, this growing process seems difficult: trapped by catastrophic forgetting, current approaches feed data of all categories to a network whose structure stays fixed throughout training. We compare this training process with human learning patterns and find two major conflicts. In this paper, we study how to resolve these conflicts for generative models built on the conditional variational autoencoder (CVAE). To resolve the discontinuity conflict, we apply a memory playback strategy that maintains the model's ability to recognize and generate previously seen categories that are no longer visible, and we extend the traditional one-way CVAE to a circulatory mode that better supports memory playback. To resolve the 'dead'-structure conflict, we rewrite the CVAE formula, which yields a novel interpretation of the functions of the different parts of a CVAE model. Based on this new understanding, we find ways to dynamically extend the network structure when training on new categories. We verify the effectiveness of our methods on MNIST and Fashion-MNIST and present some very interesting results.
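To make the memory playback idea concrete, here is a minimal sketch, in PyTorch (the paper does not specify a framework), of a plain one-way CVAE trained with generative replay: pseudo-samples of previously seen categories are drawn from a frozen snapshot of the model and mixed into each new batch, so the network supervises itself with its own memory. The loss is the standard negative CVAE evidence lower bound, log p(x|y) >= E_{q(z|x,y)}[log p(x|z,y)] - KL(q(z|x,y) || p(z)), with a standard-normal prior. All shapes, hyperparameters, and helper names (CVAE, elbo_loss, replay_batch) are illustrative assumptions, not the authors' implementation; the circulatory extension and the dynamic structure growth described above are not shown.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Minimal one-way CVAE: encoder and decoder are both conditioned on y."""
    def __init__(self, x_dim=784, y_dim=10, z_dim=20, h_dim=400):
        super().__init__()
        self.z_dim = z_dim
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + y_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x, y):
        h = self.enc(torch.cat([x, y], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, y], dim=1)), mu, logvar

def elbo_loss(x_hat, x, mu, logvar):
    # Negative ELBO: reconstruction term plus KL(q(z|x,y) || N(0, I)).
    rec = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

def replay_batch(old_model, old_classes, n, y_dim=10):
    # Memory playback: instead of storing raw data for past categories,
    # sample pseudo-examples from a frozen snapshot of the generator.
    labels = old_classes[torch.randint(len(old_classes), (n,))]
    y = F.one_hot(labels, y_dim).float()
    z = torch.randn(n, old_model.z_dim)
    with torch.no_grad():
        x = old_model.dec(torch.cat([z, y], dim=1))
    return x, y

# One training step on a new category: mix real new-class data with
# replayed pseudo-data so old categories stay represented in the loss.
model = CVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
old_model = copy.deepcopy(model).eval()      # frozen snapshot of the past
old_classes = torch.tensor([0, 1, 2])        # categories trained so far
x_new = torch.rand(64, 784)                  # stand-in for a real data batch
y_new = F.one_hot(torch.full((64,), 3), 10).float()

x_old, y_old = replay_batch(old_model, old_classes, 64)
x, y = torch.cat([x_new, x_old]), torch.cat([y_new, y_old])
x_hat, mu, logvar = model(x, y)
loss = elbo_loss(x_hat, x, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
```

In a sketch like this, accommodating a brand-new category would mean widening y_dim and every layer it feeds; the paper's rewritten CVAE formula is what motivates which parts of the network to extend and how.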


