Feedback Recurrent AutoEncoder

11/11/2019
by Yang Yang, et al.

In this work, we propose a new recurrent autoencoder architecture, termed Feedback Recurrent AutoEncoder (FRAE), for online compression of sequential data with temporal dependency. The recurrent structure of FRAE is designed to efficiently extract the redundancy along the time dimension and allows a compact discrete representation of the data to be learned. We demonstrate its effectiveness in speech spectrogram compression. Specifically, we show that the FRAE, paired with a powerful neural vocoder, can produce high-quality speech waveforms at a low, fixed bitrate. We further show that by adding a learned prior for the latent space and using an entropy coder, we can achieve an even lower variable bitrate.
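As a rough illustration of the feedback idea, the sketch below is a minimal, hypothetical PyTorch rendering, not the paper's exact model: the decoder's previous recurrent state is fed back into the encoder at each time step, so the encoder only needs to code what the decoder cannot already predict from its own history. The layer sizes, GRU cells, and the straight-through rounding quantizer are all placeholder assumptions standing in for the paper's discrete bottleneck.

```python
import torch
import torch.nn as nn

class FRAE(nn.Module):
    def __init__(self, x_dim=80, z_dim=8, h_dim=256):
        super().__init__()
        # The encoder sees the current frame together with the decoder's
        # previous state, so it only codes what the decoder still lacks.
        self.enc = nn.GRUCell(x_dim + h_dim, h_dim)
        self.to_z = nn.Linear(h_dim, z_dim)
        self.dec = nn.GRUCell(z_dim, h_dim)
        self.to_x = nn.Linear(h_dim, x_dim)

    def forward(self, x):  # x: (batch, time, x_dim) spectrogram frames
        B, T, _ = x.shape
        h_enc = x.new_zeros(B, self.enc.hidden_size)
        h_dec = x.new_zeros(B, self.dec.hidden_size)
        out = []
        for t in range(T):
            # Feedback: decoder state from step t-1 enters the encoder.
            h_enc = self.enc(torch.cat([x[:, t], h_dec], dim=-1), h_enc)
            z = self.to_z(h_enc)
            # Placeholder discrete bottleneck: rounding with a
            # straight-through gradient estimator.
            z_q = z + (z.round() - z).detach()
            h_dec = self.dec(z_q, h_dec)
            out.append(self.to_x(h_dec))
        return torch.stack(out, dim=1)  # reconstructed frames
```

For the variable-bitrate variant described above, one would additionally fit a learned prior over the quantized codes and drive an entropy coder with it; that stage is omitted from this sketch.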

