Real-valued parametric conditioning of an RNN for interactive sound synthesis

05/28/2018
by Lonce Wyse, et al.

A Recurrent Neural Network (RNN) for audio synthesis is trained by augmenting the audio input with information about signal characteristics such as pitch, amplitude, and instrument. The result after training is an audio synthesizer that is played like a musical instrument with the desired musical characteristics provided as continuous parametric control. The focus of this paper is on conditioning data-driven synthesis models with real-valued parameters, and in particular, on the ability of the system a) to generalize and b) to be responsive to parameter values and sequences not seen during training.
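The core idea of augmenting the audio input with conditioning information can be sketched as follows: at each timestep, the raw audio sample is concatenated with real-valued control parameters (here, a log-normalized pitch and an amplitude value) and a one-hot instrument label to form the RNN input vector. This is a minimal illustrative sketch, not the paper's exact parameterization; the function name, normalization ranges, and parameter choices are assumptions.

```python
import numpy as np

def make_conditioned_input(audio, pitch_hz, amp, n_instruments, instrument_id):
    """Augment each audio sample with real-valued conditioning parameters.

    Hypothetical sketch: names and normalization ranges are assumptions.
    audio:         1-D float array of samples in [-1, 1]
    pitch_hz:      per-sample pitch values (same length as audio)
    amp:           per-sample amplitude envelope in [0, 1]
    instrument_id: integer instrument label, one-hot encoded
    Returns an array of shape (len(audio), 3 + n_instruments), where each
    row is [sample, normalized pitch, amplitude, one-hot instrument].
    """
    # Normalize pitch logarithmically over the audible range (assumed 20 Hz - 20 kHz)
    pitch_norm = (np.log2(pitch_hz) - np.log2(20.0)) / (np.log2(20000.0) - np.log2(20.0))
    onehot = np.zeros((len(audio), n_instruments))
    onehot[:, instrument_id] = 1.0
    return np.column_stack([audio, pitch_norm, amp, onehot])

# One second of a 440 Hz sine at 16 kHz, conditioned on constant pitch and amplitude
T = 16000
audio = np.sin(2 * np.pi * 440.0 * np.arange(T) / 16000.0)
x = make_conditioned_input(audio, np.full(T, 440.0), np.full(T, 0.8), n_instruments=4, instrument_id=2)
print(x.shape)  # (16000, 7)
```

At training time, each row of `x` would be fed to the RNN alongside the sample-prediction target; at synthesis time, the conditioning columns become continuous performance controls that can be swept to values or sequences never seen during training.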

Related research

Conditioning a Recurrent Neural Network to synthesize musical instrument transients (03/26/2019)
A Recurrent Neural Network (RNN) is trained to predict sound samples bas...

DDX7: Differentiable FM Synthesis of Musical Instrument Sounds (08/12/2022)
FM Synthesis is a well-known algorithm used to generate complex timbre f...

GANStrument: Adversarial Instrument Sound Synthesis with Pitch-invariant Instance Conditioning (11/10/2022)
We propose GANStrument, a generative adversarial model for instrument so...

Sound Model Factory: An Integrated System Architecture for Generative Audio Modelling (06/27/2022)
We introduce a new system for data-driven audio sound model design built...

Neural Waveshaping Synthesis (07/11/2021)
We present the Neural Waveshaping Unit (NEWT): a novel, lightweight, ful...

Spectrogram Inpainting for Interactive Generation of Instrument Sounds (04/15/2021)
Modern approaches to sound synthesis using deep neural networks are hard...

MTCRNN: A multi-scale RNN for directed audio texture synthesis (11/25/2020)
Audio textures are a subset of environmental sounds, often defined as ha...
