Generative Models from the perspective of Continual Learning

12/21/2018
by Timothée Lesort, et al.

Which generative model is the most suitable for Continual Learning? This paper aims to evaluate and compare generative models on disjoint sequential image generation tasks. We investigate how several models learn and forget, considering various strategies: rehearsal, regularization, generative replay and fine-tuning. We used two quantitative metrics to estimate generation quality and memory ability. We experiment with sequential tasks on three commonly used benchmarks for Continual Learning (MNIST, Fashion MNIST and CIFAR10). We found that, among all models, the original GAN performs best and that, among Continual Learning strategies, generative replay outperforms all other methods. Although we found satisfactory combinations on MNIST and Fashion MNIST, training generative models sequentially on CIFAR10 is particularly unstable and remains a challenge. Our code is available online [<https://github.com/TLESORT/Generative_Continual_Learning>].
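The best-performing combination reported above pairs a generative model with generative replay: when a new task arrives, samples drawn from a frozen copy of the previously trained generator are mixed into the current task's data, so earlier distributions keep being rehearsed. The sketch below illustrates that loop with a toy VAE generator on flattened 28x28 images; the class and function names are illustrative assumptions, not the authors' implementation from the linked repository.

```python
# Minimal sketch of generative replay for sequential tasks (illustrative only).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallVAE(nn.Module):
    """Tiny VAE on flattened 28x28 images, used here as a stand-in generator."""

    def __init__(self, latent_dim=32):
        super().__init__()
        self.latent_dim = latent_dim
        self.enc = nn.Linear(784, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

    @torch.no_grad()
    def sample(self, n):
        # Draw images by decoding latent codes from the prior.
        return self.dec(torch.randn(n, self.latent_dim))


def vae_loss(x, recon, mu, logvar):
    # Standard ELBO: reconstruction term + KL divergence to the unit Gaussian.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (rec + kld) / x.size(0)


def train_with_generative_replay(task_loaders, epochs=1):
    """Train one generator per task; replay samples from a frozen copy of the
    previous generator so earlier tasks are not forgotten."""
    prev_gen = None
    for loader in task_loaders:
        gen = SmallVAE()
        opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
        for _ in range(epochs):
            for x_real, _ in loader:
                x = x_real.view(x_real.size(0), -1)
                if prev_gen is not None:
                    # Mix current-task data with replayed samples of past tasks.
                    x = torch.cat([x, prev_gen.sample(x.size(0))], dim=0)
                recon, mu, logvar = gen(x)
                loss = vae_loss(x, recon, mu, logvar)
                opt.zero_grad()
                loss.backward()
                opt.step()
        prev_gen = copy.deepcopy(gen).eval()  # freeze for replay on the next task
    return prev_gen
```

The other strategies compared in the paper differ mainly in how (or whether) past data re-enters this loop: rehearsal stores a small buffer of real samples, regularization penalizes drift of the weights trained on earlier tasks, and fine-tuning simply omits the replay step.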


Related research

12/26/2021
Generative Kernel Continual learning
Kernel continual learning by <cit.> has recently emerged as a strong con...

01/22/2021
Continual Learning of Generative Models with Limited Data: From Wasserstein-1 Barycenter to Adaptive Coalescence
Learning generative models is challenging for a network edge node with l...

09/18/2023
Looking through the past: better knowledge retention for generative replay in continual learning
In this work, we improve the generative replay in a continual learning s...

10/27/2022
Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch?
Segmentation of Multiple Sclerosis (MS) lesions is a challenging problem...

05/17/2023
Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
The recent proliferation of large-scale text-to-image models has led to ...

08/28/2023
CLNeRF: Continual Learning Meets NeRF
Novel view synthesis aims to render unseen views given a set of calibrat...

08/30/2019
BooVAE: A scalable framework for continual VAE learning under boosting approach
Variational Auto Encoders (VAE) are capable of generating realistic imag...
