Stochastic WaveNet: A Generative Latent Variable Model for Sequential Data

06/15/2018
by Guokun Lai, et al.

How to model the distribution of sequential data, including but not limited to speech and human motions, is an important ongoing research problem. It has been demonstrated that model capacity can be significantly enhanced by introducing stochastic latent variables into the hidden states of recurrent neural networks. Meanwhile, WaveNet, built on dilated convolutions, has achieved remarkable empirical performance on natural speech generation. In this paper, we combine the ideas of stochastic latent variables and dilated convolutions and propose a new architecture for modeling sequential data, termed Stochastic WaveNet, in which stochastic latent variables are injected into the WaveNet structure. We argue that Stochastic WaveNet enjoys both powerful distribution modeling capacity and the parallel-training advantage of dilated convolutions. To infer the posterior distribution of the latent variables efficiently, a novel inference network is designed based on the characteristics of the WaveNet architecture. Stochastic WaveNet obtains state-of-the-art performance on benchmark natural speech modeling datasets and can also generate high-quality human handwriting samples.

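The sketch below is a minimal, illustrative PyTorch rendering (not the authors' released code) of the core idea in the abstract: a WaveNet-style stack of dilated causal convolutions in which each layer injects a per-timestep Gaussian latent variable into its hidden state via the reparameterization trick. Class and parameter names (StochasticWaveNetSketch, hidden, z_dim, n_layers) are illustrative assumptions; the paper's gated activations, skip connections, and dedicated inference network are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StochasticWaveNetSketch(nn.Module):
    def __init__(self, in_channels=1, hidden=64, z_dim=16, n_layers=4):
        super().__init__()
        self.input_proj = nn.Conv1d(in_channels, hidden, kernel_size=1)
        self.dilated = nn.ModuleList()
        self.prior = nn.ModuleList()
        self.inject = nn.ModuleList()
        for i in range(n_layers):
            dilation = 2 ** i  # dilation doubles per layer, as in WaveNet
            self.dilated.append(
                nn.Conv1d(hidden, hidden, kernel_size=2, dilation=dilation))
            # Prior network: hidden state -> mean and log-variance of z_t
            self.prior.append(nn.Conv1d(hidden, 2 * z_dim, kernel_size=1))
            # Projects the sampled z_t back into the deterministic path
            self.inject.append(nn.Conv1d(z_dim, hidden, kernel_size=1))
        self.output = nn.Conv1d(hidden, in_channels, kernel_size=1)

    def forward(self, x):
        h = self.input_proj(x)
        for conv, prior, inject in zip(self.dilated, self.prior, self.inject):
            d = conv.dilation[0]
            # Left-pad so the convolution stays causal (no access to the future)
            h = torch.tanh(conv(F.pad(h, (d, 0))))
            mu, logvar = prior(h).chunk(2, dim=1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            h = h + inject(z)  # stochastic latent injected into the hidden state
        return self.output(h)


# Usage: a batch of 8 univariate sequences of length 128.
model = StochasticWaveNetSketch()
x = torch.randn(8, 1, 128)
print(model(x).shape)  # torch.Size([8, 1, 128]) -- causal padding preserves length
```

Because every layer is convolutional, all timesteps are processed in parallel during training, which is the efficiency advantage over stochastic recurrent models noted in the abstract.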
research · 11/15/2017 · Z-Forcing: Training Stochastic Recurrent Networks
Many efforts have been devoted to training generative latent variable mo...

research · 02/04/2019 · Re-examination of the Role of Latent Variables in Sequence Modeling
With latent variables, stochastic recurrent models have achieved state-o...

research · 02/18/2019 · STCN: Stochastic Temporal Convolutional Networks
Convolutional architectures have recently been shown to be competitive o...

research · 05/19/2016 · A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues
Sequential data often possesses a hierarchical structure with complex de...

research · 06/02/2016 · Adversarially Learned Inference
We introduce the adversarially learned inference (ALI) model, which join...

research · 04/23/2022 · Learning and Inference in Sparse Coding Models with Langevin Dynamics
We describe a stochastic, dynamical system capable of inference and lear...

research · 11/11/2019 · Feedback Recurrent AutoEncoder
In this work, we propose a new recurrent autoencoder architecture, terme...
