
On Memorization in Probabilistic Deep Generative Models

Recent advances in deep generative models have led to impressive results in a variety of application domains. Motivated by the possibility that deep learning models might memorize part of the input data, researchers have increased their efforts to understand how memorization can occur. In this work, we extend a recently proposed measure of memorization for supervised learning (Feldman, 2019) to the unsupervised density estimation problem and simplify the accompanying estimator. Next, we present an exploratory study demonstrating how memorization can arise in probabilistic deep generative models such as variational autoencoders. This reveals that the form of memorization to which these models are susceptible differs fundamentally from mode collapse and overfitting. Finally, we discuss several strategies that can be used to limit memorization in practice.
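The Feldman-style measure extended here can be illustrated with a toy sketch. The idea, restated informally, is that a point is memorized when the model assigns it much higher density when it is in the training set than when it is held out. The sketch below is an assumption of this note, not the paper's exact estimator: a univariate Gaussian fit by maximum likelihood stands in for a deep generative model, and `memorization_scores` is a hypothetical helper computing leave-one-out log-density gaps.

```python
import numpy as np

def log_density(x, mu, var):
    # Log-density of a univariate Gaussian N(mu, var).
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def fit_gaussian(data):
    # Maximum-likelihood fit; stands in for training a generative model.
    return data.mean(), data.var() + 1e-12

def memorization_scores(data):
    # Feldman-style memorization adapted to density estimation:
    #   score_i = log p_full(x_i) - log p_without_i(x_i),
    # i.e. how much the density assigned to x_i depends on x_i
    # having been in the training set.
    mu_full, var_full = fit_gaussian(data)
    scores = np.empty(len(data))
    for i, x in enumerate(data):
        rest = np.delete(data, i)
        mu, var = fit_gaussian(rest)
        scores[i] = log_density(x, mu_full, var_full) - log_density(x, mu, var)
    return scores

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 99), [8.0]])  # one outlier
scores = memorization_scores(data)
print(scores.argmax())  # the outlier should receive the highest score
```

Typical inliers barely change the fit when removed, so their scores stay near zero, while the outlier's score is large and positive. A real deep generative model would require retraining (or an approximation thereof) for each held-out point, which is what motivates the simplified estimator mentioned in the abstract.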




Related Research

- Auxiliary Deep Generative Models: "Deep generative models parameterized by neural networks have recently ac..."
- Performing Structured Improvisations with pre-trained Deep Learning Models: "The quality of outputs produced by deep generative models for music have..."
- Void Filling of Digital Elevation Models with Deep Generative Models: "In recent years, advances in machine learning algorithms, cheap computat..."
- Deep Generative Models with Learnable Knowledge Constraints: "The broad set of deep generative models (DGMs) has achieved remarkable a..."
- Bias and Generalization in Deep Generative Models: An Empirical Study: "In high dimensional settings, density estimation algorithms rely crucial..."
- One-Shot Generalization in Deep Generative Models: "Humans have an impressive ability to reason about new concepts and exper..."

Code Repositories


Code for "On Memorization in Probabilistic Deep Generative Models"
