Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models

05/17/2023
by Alvin Heng et al.

The recent proliferation of large-scale text-to-image models has led to growing concerns that such models may be misused to generate harmful, misleading, and inappropriate content. Motivated by this issue, we derive a technique inspired by continual learning to selectively forget concepts in pretrained deep generative models. Our method, dubbed Selective Amnesia, enables controllable forgetting where a user can specify how a concept should be forgotten. Selective Amnesia can be applied to conditional variational likelihood models, which encompass a variety of popular deep generative frameworks, including variational autoencoders and large-scale text-to-image diffusion models. Experiments across different models demonstrate that our approach induces forgetting on a variety of concepts, from entire classes in standard datasets to celebrity and nudity prompts in text-to-image models. Our code is publicly available at https://github.com/clear-nus/selective-amnesia.
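The abstract describes a continual-learning-style objective: push the model's output for the targeted concept toward a user-chosen surrogate, while a regularizer keeps the parameters that matter for other concepts close to their pretrained values. A generic sketch of such a loss is below; it uses an elastic-weight-consolidation-style Fisher penalty as the continual-learning component. All names are illustrative and this is not the paper's actual objective, only a minimal toy of the idea.

```python
import numpy as np

def selective_amnesia_loss(theta, theta_star, fisher,
                           surrogate_nll, remember_nll, lam=1.0):
    """Toy continual-learning-style forgetting objective (illustrative only).

    theta         -- current model parameters (1-D array)
    theta_star    -- pretrained parameters the model should stay close to
    fisher        -- diagonal Fisher information estimate (importance of
                     each parameter to previously learned concepts)
    surrogate_nll -- negative log-likelihood of the surrogate data that
                     the forgotten concept should now map to
    remember_nll  -- negative log-likelihood of replayed data for
                     concepts that must be preserved
    lam           -- strength of the EWC-style penalty
    """
    # Quadratic penalty: parameters important to old concepts (high
    # Fisher value) are expensive to move away from theta_star.
    ewc_penalty = 0.5 * np.sum(fisher * (theta - theta_star) ** 2)
    return surrogate_nll + remember_nll + lam * ewc_penalty
```

Minimizing this drives the forgotten concept toward the surrogate (first term) while the replay and Fisher terms protect everything else; raising `fisher` or `lam` makes parameter drift costlier.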


Related research

- Continual Learning in Generative Adversarial Nets (05/23/2017)
- Generative Models from the perspective of Continual Learning (12/21/2018)
- FoCL: Feature-Oriented Continual Learning for Generative Models (03/09/2020)
- Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA (04/12/2023)
- Exploring Continual Learning for Code Generation Models (07/05/2023)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning (07/16/2023)
- Training Data Protection with Compositional Diffusion Models (08/02/2023)
