Regularized Sequential Latent Variable Models with Adversarial Neural Networks

08/10/2021
by Jin Huang, et al.

Recurrent neural networks (RNNs), with their richly distributed internal states and flexible non-linear transition functions, have overtaken dynamic Bayesian networks such as hidden Markov models (HMMs) in the task of modeling highly structured sequential data. Such data, for example from speech and handwriting, often contain complex relationships between the underlying variational factors and the observed data. The standard RNN has very limited randomness or variability in its structure, arising only from the output conditional probability model. This paper presents different ways of using high-level latent random variables in an RNN to model the variability in sequential data, and a method for training such an RNN under the VAE (Variational Autoencoder) principle. We explore possible ways of using an adversarial method to train a variational RNN model. In contrast to competing approaches, our approach has a theoretical optimum in model training and provides better training stability. Our approach also improves the posterior approximation in the variational inference network through a separate adversarial training step. Numerical results on the TIMIT speech data show that the reconstruction loss and the evidence lower bound converge to the same level, and the adversarial training loss converges to 0.
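To make the setup concrete, below is a minimal sketch in PyTorch, assuming a VRNN-style construction (a conditional prior, an approximate posterior, a decoder, and a recurrence at each time step) with a GAN-style discriminator standing in for the separate adversarial step on the posterior. All names (VRNNSketch, adversarial_losses, disc), layer sizes, and the MSE reconstruction term are illustrative assumptions, not the authors' exact objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VRNNSketch(nn.Module):
    # Per step: prior p(z_t | h_{t-1}), posterior q(z_t | x_t, h_{t-1}),
    # decoder p(x_t | z_t, h_{t-1}), recurrence h_t = GRU([x_t, z_t], h_{t-1}).
    def __init__(self, x_dim=200, z_dim=64, h_dim=256):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)            # -> (mu, logvar)
        self.encoder = nn.Linear(x_dim + h_dim, 2 * z_dim)  # -> (mu, logvar)
        self.decoder = nn.Linear(z_dim + h_dim, x_dim)
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)
        self.h_dim = h_dim

    def forward(self, x):  # x: (T, B, x_dim)
        T, B, _ = x.shape
        h = x.new_zeros(B, self.h_dim)
        recon, kl, z_post, z_prior = 0.0, 0.0, [], []
        for t in range(T):
            pm, plv = self.prior(h).chunk(2, dim=-1)
            qm, qlv = self.encoder(torch.cat([x[t], h], dim=-1)).chunk(2, dim=-1)
            z = qm + torch.randn_like(qm) * (0.5 * qlv).exp()  # reparameterization
            # MSE stands in for the exact output likelihood here
            recon = recon + F.mse_loss(self.decoder(torch.cat([z, h], dim=-1)), x[t])
            # Closed-form KL(q || p) between two diagonal Gaussians
            kl = kl + 0.5 * (plv - qlv
                             + (qlv.exp() + (qm - pm) ** 2) / plv.exp()
                             - 1).sum(-1).mean()
            z_post.append(z)
            z_prior.append(pm + torch.randn_like(pm) * (0.5 * plv).exp())
            h = self.rnn(torch.cat([x[t], z], dim=-1), h)
        # recon + kl is the negative evidence lower bound (ELBO)
        return recon, kl, torch.stack(z_post), torch.stack(z_prior)

# Discriminator for the separate adversarial step: it learns to tell
# posterior samples from prior samples, and the encoder is then updated
# to fool it, regularizing q(z_t | x_t, h_{t-1}) toward the prior.
disc = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

def adversarial_losses(z_post, z_prior):
    real = disc(z_prior.detach())  # prior samples labeled 1
    fake = disc(z_post.detach())   # posterior samples labeled 0
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
              + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    fooled = disc(z_post)          # encoder gradient flows through z_post
    enc_loss = F.binary_cross_entropy_with_logits(fooled, torch.ones_like(fooled))
    return d_loss, enc_loss

# Usage sketch: alternate a discriminator step on d_loss with a model step
# on the negative ELBO plus enc_loss.
model = VRNNSketch()
x = torch.randn(20, 8, 200)        # (time, batch, features), e.g. speech frames
recon, kl, zq, zp = model(x)
elbo_loss = recon + kl             # minimize the negative ELBO
d_loss, enc_loss = adversarial_losses(zq, zp)

The design point this sketch illustrates is the separation of the two updates: the adversary, rather than the KL term alone, pulls the inference network's posterior samples toward the prior family, which is one plausible reading of the separate adversarial training step described in the abstract.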

Related research

Z-Forcing: Training Stochastic Recurrent Networks (11/15/2017)
Many efforts have been devoted to training generative latent variable mo...

Variational Graph Recurrent Neural Networks (08/26/2019)
Representation learning over graph structured data has been mostly studi...

Variational Recurrent Neural Machine Translation (01/16/2018)
Partially inspired by successful applications of variational recurrent n...

Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation (03/19/2021)
Sequential recommendation as an emerging topic has attracted increasing ...

Recurrence Boosts Diversity! Revisiting Recurrent Latent Variable in Transformer-Based Variational AutoEncoder for Diverse Text Generation (10/22/2022)
Variational Auto-Encoder (VAE) has been widely adopted in text generatio...

Learning robust speech representation with an articulatory-regularized variational autoencoder (04/07/2021)
It is increasingly considered that human speech perception and productio...

Latent State Models of Training Dynamics (08/18/2023)
The impact of randomness on model training is poorly understood. How do ...
