Concept Formation and Dynamics of Repeated Inference in Deep Generative Models

12/12/2017
by Yoshihiro Nagano, et al.

Deep generative models are reported to be useful in a broad range of applications, including image generation. Repeated inference between the data space and the latent space in these models can denoise cluttered images and improve the quality of inferred results. However, previous studies evaluated the image outputs in data space only qualitatively, and the mechanism behind the inference has not been investigated. The purpose of the present study is to numerically analyze changes in the activity patterns of neurons in the latent space of a deep generative model called a "variational auto-encoder" (VAE). We identify what kinds of inference dynamics the VAE exhibits when noise is added to the input data. The VAE embeds a dataset with a clear cluster structure in the latent space, and the center of each cluster of multiple correlated data points (memories) is referred to as a concept. Our study shows that the transient dynamics of inference first approaches a concept and then moves close to a memory. Moreover, the inference dynamics approaches a more abstract concept as the uncertainty of the input data increases due to noise. Finally, we show that increasing the number of latent variables enhances the tendency of the inference dynamics to approach a concept, thereby improving the generalization ability of the VAE.
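The repeated inference described above amounts to alternating the encoder and decoder on an image and tracking where the latent representation travels. The sketch below is a minimal illustration of that loop, not the authors' code; it assumes a trained PyTorch VAE with hypothetical `encoder(x) -> (mu, logvar)` and `decoder(z) -> x_hat` modules and simply records the latent means visited so they can be compared with memories and cluster centers (concepts).

```python
# Minimal sketch (not the authors' implementation) of a repeated-inference loop.
# Assumes a trained PyTorch VAE whose hypothetical `encoder(x)` returns (mu, logvar)
# and whose `decoder(z)` returns a reconstructed image.
import torch


def repeated_inference(encoder, decoder, x_noisy, n_steps=20):
    """Alternate encoding and decoding, recording the latent trajectory.

    The returned list of latent means can be compared with the latent codes of
    individual training points (memories) and with cluster centers (concepts).
    """
    trajectory = []
    x = x_noisy
    with torch.no_grad():
        for _ in range(n_steps):
            mu, logvar = encoder(x)        # activity pattern in latent space
            trajectory.append(mu.clone())  # record the current latent state
            x = decoder(mu)                # generate a cleaned image and re-infer
    return trajectory
```

Under this reading, a concept corresponds to the centroid of the latent codes of the correlated training points in a cluster, and the returned trajectory can be inspected for whether it first heads toward that centroid before settling near an individual memory.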

