Conditioning a Recurrent Neural Network to synthesize musical instrument transients

03/26/2019
by Lonce Wyse, et al.

A Recurrent Neural Network (RNN) is trained to predict sound samples based on audio input augmented by control parameter information for pitch, volume, and instrument identification. During the generative phase following training, the audio input is taken from the output of the previous time step, while the control parameters are supplied externally, allowing the network to be played as a musical instrument. Building on an architecture developed in previous work, we focus on the learning and synthesis of transients: the temporal response of the network during the short time (tens of milliseconds) following the onset and offset of a control signal. We find that the network learns the particular transient characteristics of two different synthetic instruments, and furthermore shows some ability to interpolate between the characteristics of the instruments used in training in response to novel parameter settings. We also study the behaviour of units in the hidden layers of the RNN using various visualisation techniques and find a variety of volume-specific response characteristics.
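
As a rough illustration of the conditioning scheme described in the abstract (not the authors' implementation), the sketch below assumes a sample-level GRU in PyTorch whose input at each time step is the previous audio sample concatenated with externally supplied pitch, volume, and one-hot instrument-ID parameters; all class, function, and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class ConditionedSampleRNN(nn.Module):
    """Sample-level RNN conditioned on pitch, volume, and instrument identity.

    Input at each step: previous audio sample + pitch + volume + one-hot
    instrument ID; output: the predicted next audio sample.
    """
    def __init__(self, n_instruments: int = 2, hidden_size: int = 128):
        super().__init__()
        input_size = 1 + 2 + n_instruments   # sample, pitch, volume, instrument one-hot
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, samples, params, hidden=None):
        # samples: (batch, time, 1); params: (batch, time, 2 + n_instruments)
        x = torch.cat([samples, params], dim=-1)
        h, hidden = self.rnn(x, hidden)
        return self.out(h), hidden

@torch.no_grad()
def generate(model, params, first_sample=0.0):
    """Generative phase: each predicted sample is fed back as the next audio
    input while the control parameters are driven externally."""
    model.eval()
    sample = torch.full((1, 1, 1), first_sample)
    hidden, audio = None, []
    for t in range(params.shape[1]):          # params: (1, time, 2 + n_instruments)
        sample, hidden = model(sample, params[:, t:t + 1, :], hidden)
        audio.append(sample.item())
    return audio
```

As a quick check, generate(ConditionedSampleRNN(), torch.zeros(1, 2205, 4)) would produce 50 ms of audio at 44.1 kHz from an (untrained) two-instrument model; during training, the audio input would instead come from the recorded signal (teacher forcing), with the prediction-feedback loop used only when the network is played.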


Related research

05/28/2018 - Real-valued parametric conditioning of an RNN for interactive sound synthesis
A Recurrent Neural Network (RNN) for audio synthesis is trained by augme...

06/16/2019 - Audio Transport: A Generalized Portamento via Optimal Transport
This paper proposes a new method to interpolate between two audio signal...

02/14/2021 - Parametric Optimization of Violin Top Plates using Machine Learning
We recently developed a neural network that receives as input the geomet...

05/14/2017 - Musical Instrument Recognition Using Their Distinctive Characteristics in Artificial Neural Networks
In this study an Artificial Neural Network was trained to classify music...

06/27/2022 - Sound Model Factory: An Integrated System Architecture for Generative Audio Modelling
We introduce a new system for data-driven audio sound model design built...

10/04/2020 - Resonant Processing of Instrumental Sound Controlled by Spatial Position
We present an acoustic musical instrument played through a resonance mod...

01/14/2022 - Multiphonic modeling using Impulse Pattern Formulation (IPF)
Multiphonics, the presence of multiple pitches within the sound, can be ...
