SampleRNN: An Unconditional End-to-End Neural Audio Generation Model

12/22/2016
by Soroush Mehri, et al.

In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which benefits from combining memory-less modules, namely autoregressive multilayer perceptrons, with stateful recurrent neural networks in a hierarchical structure, is able to capture the underlying sources of variation in temporal sequences over very long time spans, on three datasets of different natures. Human evaluation of the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
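The abstract describes the architecture at a high level: a stateful recurrent tier runs over frames of past samples, while a memory-less autoregressive multilayer perceptron predicts each quantized sample from the recurrent tier's conditioning vector and the samples immediately preceding it. Below is a minimal PyTorch sketch of that two-tier idea; the GRU choice, layer sizes, and one-prediction-per-frame-position simplification are illustrative assumptions rather than the paper's exact configuration (the original stacks additional tiers, upsamples the conditioning vectors, and embeds quantized samples).

```python
# Minimal sketch of a two-tier SampleRNN-style hierarchy (all sizes are
# illustrative assumptions, not the paper's exact hyperparameters).
import torch
import torch.nn as nn

class TwoTierSampleRNN(nn.Module):
    def __init__(self, frame_size=16, hidden_size=256, n_quant=256):
        super().__init__()
        self.frame_size = frame_size
        # Stateful tier: a GRU that summarizes one frame of past samples
        # per step, carrying long-range context across frames.
        self.frame_rnn = nn.GRU(frame_size, hidden_size, batch_first=True)
        # Memory-less tier: an MLP that predicts a distribution over
        # quantized amplitudes from the frame-level conditioning vector
        # plus the frame_size samples immediately before the target.
        self.sample_mlp = nn.Sequential(
            nn.Linear(hidden_size + frame_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, n_quant),
        )

    def forward(self, frames, prev_samples):
        # frames:       (batch, n_frames, frame_size) past audio in frames
        # prev_samples: (batch, n_frames, frame_size) samples just before
        #               each target sample
        cond, _ = self.frame_rnn(frames)            # (batch, n_frames, hidden)
        x = torch.cat([cond, prev_samples], dim=-1)
        return self.sample_mlp(x)                   # logits over n_quant levels

# Shape check on random inputs.
model = TwoTierSampleRNN()
logits = model(torch.randn(2, 8, 16), torch.randn(2, 8, 16))
print(logits.shape)  # torch.Size([2, 8, 256])
```

At generation time such a model proceeds one sample at a time: sample from the softmax over the logits, append the result to the waveform, and refresh the frame inputs. This is what makes sample-level autoregressive models slow to sample from but expressive over long time spans.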

Related research

- FastLTS: Non-Autoregressive End-to-End Unconstrained Lip-to-Speech Synthesis (07/08/2022)
  Unconstrained lip-to-speech synthesis aims to generate corresponding spe...
- MelNet: A Generative Model for Audio in the Frequency Domain (06/04/2019)
  Capturing high-level structure in audio waveforms is challenging because...
- Convolutional Gated Recurrent Neural Network Incorporating Spatial Features for Audio Tagging (02/24/2017)
  Environmental audio tagging is a newly proposed task to predict the pres...
- GIPFA: Generating IPA Pronunciation from Audio (06/13/2020)
  Transcribing spoken audio samples into International Phonetic Alphabet (...
- SoundStorm: Efficient Parallel Audio Generation (05/16/2023)
  We present SoundStorm, a model for efficient, non-autoregressive audio g...
- Conditional End-to-End Audio Transforms (03/30/2018)
  We present an end-to-end method for transforming audio from one style to...
- MTCRNN: A multi-scale RNN for directed audio texture synthesis (11/25/2020)
  Audio textures are a subset of environmental sounds, often defined as ha...
