Learning deep autoregressive models for hierarchical data

04/28/2021
by Carl R. Andersson, et al.

We propose a model for hierarchically structured data that extends the stochastic temporal convolutional network (STCN). The proposed model combines an autoregressive model with a hierarchical variational autoencoder and downsampling to achieve lower computational complexity. We evaluate the proposed model on two different types of sequential data: speech and handwritten text. The results are promising, with the proposed model achieving state-of-the-art performance.
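The combination described above, an autoregressive (causal) temporal convolution whose deeper levels operate on downsampled sequences and emit variational latents, can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the paper's architecture: the two-level hierarchy, filter sizes, and the toy latent parameterization are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_conv1d(x, w):
    """Causal convolution: output at step t depends only on inputs <= t."""
    k = len(w)
    x_pad = np.concatenate([np.zeros(k - 1), x])  # left-pad so no future leaks in
    return np.array([x_pad[t:t + k] @ w for t in range(len(x))])

def downsample(x, factor=2):
    """Halve the temporal resolution by striding (coarser level of the hierarchy)."""
    return x[::factor]

def gaussian_latent(h, rng):
    """Reparameterized sample z ~ N(mu(h), sigma(h)^2), one latent per time step.
    The mapping from features h to (mu, log_sigma) is a toy stand-in."""
    mu, log_sigma = h, -np.abs(h)
    eps = rng.standard_normal(len(h))
    return mu + np.exp(log_sigma) * eps

# Two-level hierarchy: each level convolves causally, samples a latent,
# then downsamples its features before passing them to the coarser level.
x = rng.standard_normal(16)        # toy input sequence of length 16
w = np.array([0.5, 0.3, 0.2])      # toy causal filter

h1 = causal_conv1d(x, w)
z1 = gaussian_latent(h1, rng)      # fine-scale latents, one per input step (16)
h2 = causal_conv1d(downsample(h1), w)
z2 = gaussian_latent(h2, rng)      # coarse-scale latents at half resolution (8)
```

The downsampling is what gives the model its computational advantage: the coarser levels process exponentially shorter sequences, while the causal padding preserves the autoregressive property at every level.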


research
04/08/2022

Hierarchical and Multi-Scale Variational Autoencoder for Diverse and Natural Non-Autoregressive Text-to-Speech

This paper proposes a hierarchical and multi-scale variational autoencod...
research
10/22/2020

Parallel Tacotron: Non-Autoregressive and Controllable TTS

Although neural end-to-end text-to-speech models can synthesize highly n...
research
04/06/2018

Expressive Speech Synthesis via Modeling Expressions with Variational Autoencoder

Recent advances in neural autoregressive models have improved the perform...
research
02/19/2018

Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement

We propose a conditional non-autoregressive neural sequence model based ...
research
04/09/2018

Scalable Factorized Hierarchical Variational Autoencoder Training

Deep generative models have achieved great success in unsupervised learn...
research
10/02/2019

Variational Temporal Abstraction

We introduce a variational approach to learning and inference of tempora...
research
05/22/2016

Factored Temporal Sigmoid Belief Networks for Sequence Learning

Deep conditional generative models are developed to simultaneously learn...
