RG-Flow: A hierarchical and explainable flow model based on renormalization group and sparse prior

09/30/2020
by Hong-Ye Hu, et al.

Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we incorporate the key ideas of the renormalization group (RG) and sparse prior distributions to design a hierarchical flow-based generative model, called RG-Flow, which separates the information in an image at different scales and extracts disentangled representations at each scale. We demonstrate our method mainly on the CelebA dataset and show that the disentangled representations at different scales enable semantic manipulation and style mixing of images. To visualize the latent representations, we introduce receptive fields for flow-based models and find that the receptive fields learned by RG-Flow resemble those of convolutional neural networks. In addition, we replace the widely adopted Gaussian prior distribution with sparse prior distributions to further enhance the disentanglement of representations. From a theoretical perspective, the proposed method has O(log L) complexity for image inpainting, where L is the linear size of the image, compared with the O(L^2) complexity of previous flow-based models.
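The abstract names two concrete ingredients: a hierarchy of invertible transformations that factors out latent variables scale by scale (the RG step), and a sparse prior in place of the usual Gaussian. The PyTorch sketch below is a minimal toy of that recipe, not the paper's actual MERA-inspired architecture; every class and parameter here (AffineCoupling, ToyRGFlow, the channel counts) is a hypothetical stand-in, and the Laplace distribution is used as one common choice of sparse prior.

```python
# Toy sketch of a hierarchical flow with a sparse prior (names hypothetical;
# RG-Flow's real architecture arranges bijectors in an MERA-like pattern,
# which this RealNVP-style version only gestures at).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Split channels in half; affinely transform one half conditioned on the other."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(half, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * half, 3, padding=1),
        )

    def forward(self, x):
        xa, xb = x.chunk(2, dim=1)
        log_s, t = self.net(xa).chunk(2, dim=1)
        log_s = torch.tanh(log_s)            # keep scales bounded for stability
        yb = xb * log_s.exp() + t
        log_det = log_s.flatten(1).sum(dim=1)
        return torch.cat([xa, yb], dim=1), log_det

def squeeze(x):
    """Trade 2x2 spatial blocks for channels: one coarse-graining (RG) step."""
    b, c, h, w = x.shape
    x = x.view(b, c, h // 2, 2, w // 2, 2)
    return x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * 4, h // 2, w // 2)

class ToyRGFlow(nn.Module):
    """At each scale: couple, squeeze, then factor out half the channels
    as latents, so latents kept at level k describe scale-2^k structure."""
    def __init__(self, channels=4, levels=3):
        super().__init__()
        couplings, c = [], channels
        for _ in range(levels):
            couplings.append(AffineCoupling(c))
            c = (c * 4) // 2                 # squeeze x4 channels, factor half out
        self.couplings = nn.ModuleList(couplings)
        self.prior = torch.distributions.Laplace(0.0, 1.0)  # sparse prior

    def forward(self, x):
        log_det, latents = x.new_zeros(x.shape[0]), []
        for coupling in self.couplings:
            x, ld = coupling(x)
            log_det = log_det + ld
            x = squeeze(x)
            z, x = x.chunk(2, dim=1)         # factor out this scale's latents
            latents.append(z)
        latents.append(x)                    # coarsest-scale latents
        log_pz = sum(self.prior.log_prob(z).flatten(1).sum(1) for z in latents)
        return latents, log_pz + log_det     # latents and image log-likelihood
```

Because each squeeze halves the spatial resolution before half the channels are factored out, a latent kept at level k can only depend on an O(2^k)-sized neighborhood of pixels, which is the structural property behind both the local receptive fields and the O(log L) inpainting cost claimed above.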
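The receptive field of a latent variable, which the paper introduces for visualization, can be estimated numerically by back-propagating a single latent unit to the input pixels. The helper below is an assumed implementation of that idea against the toy model above; the function name and signature are ours, not from any released code.

```python
import torch

def receptive_field(model, x, level, index):
    """Return |d z / d x| for one latent unit, as a (batch, H, W) map.

    Pixels with nonzero gradient are the ones the chosen latent can
    depend on; for RG-Flow these regions stay local and grow with the
    level, much like the receptive fields of a CNN.
    """
    x = x.clone().requires_grad_(True)
    latents, _ = model(x)
    z = latents[level].flatten(1)[:, index].sum()   # one unit, summed over batch
    (grad,) = torch.autograd.grad(z, x)
    return grad.abs().sum(dim=1)                    # aggregate over channels

# Example (shapes match the toy model above):
model = ToyRGFlow(channels=4, levels=3)
x = torch.randn(1, 4, 32, 32)                       # toy 4-channel "image"
rf = receptive_field(model, x, level=0, index=0)    # fine-scale latent -> small patch
```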



Related research

02/09/2020 · Out-of-Distribution Detection with Distance Guarantee in Deep Generative Models
Recent research has shown that it is challenging to detect out-of-distri...

03/19/2021 · GLOWin: A Flow-based Invertible Generative Framework for Learning Disentangled Feature Representations in Medical Images
Disentangled representations can be useful in many downstream tasks, hel...

05/13/2021 · PassFlow: Guessing Passwords with Generative Flows
Recent advances in generative machine learning models rekindled research...

09/16/2018 · f-VAEs: Improve VAEs with Conditional Flows
In this paper, we integrate VAEs and flow-based generative models succes...

06/07/2021 · Generative Flows with Invertible Attentions
Flow-based generative models have shown excellent ability to explicitly ...

01/20/2019 · Inducing Sparse Coding and And-Or Grammar from Generator Network
We introduce an explainable generative model by applying sparse operatio...

01/23/2021 · Improved Training of Sparse Coding Variational Autoencoder via Weight Normalization
Learning a generative model of visual information with sparse and compos...
