Streamable Neural Audio Synthesis With Non-Causal Convolutions

04/14/2022
by Antoine Caillon, et al.

Deep learning models are mostly used for offline inference. However, this strongly limits their use inside audio generation setups, as most creative workflows are based on real-time digital signal processing. Although approaches based on recurrent networks can be naturally adapted to this buffer-based computation, the use of convolutions still poses serious challenges. To tackle this issue, causal streaming convolutions have been proposed. However, they require a specific, more complex training procedure and can degrade the resulting audio quality. In this paper, we introduce a new method for producing non-causal streaming models, which makes any convolutional model compatible with real-time buffer-based processing. Because our method is based on a post-training reconfiguration of the model, we show that it can transform models trained without causal constraints into streaming models. We also show how it can be adapted to complex architectures with parallel branches. To evaluate our method, we apply it to the recent RAVE model, which provides high-quality real-time audio synthesis. We test our approach on multiple music and speech datasets and show that it is faster than overlap-add methods while having no impact on generation quality. Finally, we introduce two open-source implementations of our work, as Max/MSP and PureData externals and as a VST audio plugin, which endow traditional digital audio workstations with real-time neural audio synthesis on a laptop CPU.

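To give a concrete sense of buffer-based convolutional inference, the sketch below illustrates the common cached-padding idea: a 1-D convolution keeps the last samples of the previous buffer as left context so that consecutive buffers produce the same samples as an offline pass, up to a constant delay. This is a minimal illustration only, not the paper's reconfiguration procedure; the class name StreamingConv1d, its interface, and the stride-1, batch-size-1 assumptions are illustrative choices.

```python
# Minimal sketch (illustrative, not the paper's implementation): a 1-D
# convolution made streamable by caching left context between buffers.
import torch
import torch.nn as nn


class StreamingConv1d(nn.Module):
    """Wraps a Conv1d trained offline (e.g. with "same" zero padding) so it
    can process consecutive audio buffers. The zero padding is replaced at
    inference time by a cache of past samples, so chunked outputs equal the
    offline outputs up to a constant delay."""

    def __init__(self, trained_conv: nn.Conv1d):
        super().__init__()
        k = trained_conv.kernel_size[0]
        d = trained_conv.dilation[0]
        # Re-instantiate the layer without internal padding and copy the
        # trained weights. Stride 1 is assumed for simplicity.
        self.conv = nn.Conv1d(
            trained_conv.in_channels,
            trained_conv.out_channels,
            k,
            dilation=d,
            padding=0,
            bias=trained_conv.bias is not None,
        )
        self.conv.load_state_dict(trained_conv.state_dict())
        # Number of past samples the kernel needs to see (batch size 1).
        self.cache_size = (k - 1) * d
        self.register_buffer(
            "cache", torch.zeros(1, trained_conv.in_channels, self.cache_size)
        )

    @torch.no_grad()
    def forward(self, buffer: torch.Tensor) -> torch.Tensor:
        # Prepend the cached samples, then keep the tail for the next call.
        x = torch.cat([self.cache, buffer], dim=-1)
        if self.cache_size > 0:
            self.cache = x[..., -self.cache_size:].detach().clone()
        return self.conv(x)


# Usage: feeding the signal buffer by buffer yields the offline output
# delayed by the layer's original padding (here 2 samples).
offline = nn.Conv1d(1, 1, kernel_size=5, padding=2)
streaming = StreamingConv1d(offline)
signal = torch.randn(1, 1, 4096)
chunks = [streaming(c) for c in signal.split(512, dim=-1)]
```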

Related research

08/09/2020  SpeedySpeech: Efficient Neural Speech Synthesis
While recent neural sequence-to-sequence models have greatly improved th...

11/09/2021  RAVE: A variational autoencoder for fast and high-quality neural audio synthesis
Deep generative models applied to audio have improved by a large margin ...

05/15/2020  WG-WaveNet: Real-Time High-Fidelity Speech Synthesis without GPU
In this paper, we propose WG-WaveNet, a fast, lightweight, and high-qual...

02/23/2018  Efficient Neural Audio Synthesis
Sequential models achieve state-of-the-art results in audio, visual and ...

10/05/2021  Neural Pitch-Shifting and Time-Stretching with Controllable LPCNet
Modifying the pitch and timing of an audio signal are fundamental audio ...

01/28/2023  Cross-domain Neural Pitch and Periodicity Estimation
Pitch is a foundational aspect of our perception of audio signals. Pitch...

03/01/2022  Real time spectrogram inversion on mobile phone
With the growth of computing power on mobile phones and privacy concerns...
