Neural Percussive Synthesis Parameterised by High-Level Timbral Features

11/25/2019 · António Ramires et al.

We present a deep neural network-based methodology for synthesising percussive sounds with control over high-level timbral characteristics of the sounds. This approach allows for intuitive control of a synthesizer, enabling the user to shape sounds without extensive knowledge of signal processing. We use a feedforward convolutional neural network-based architecture, which is able to map input parameters to the corresponding waveform. We propose two datasets to evaluate our approach on both a restrictive context, and in one covering a broader spectrum of sounds. The timbral features used as parameters are taken from recent literature in signal processing. We also use these features for evaluation and validation of the presented model, to ensure that changing the input parameters produces a congruent waveform with the desired characteristics. Finally, we evaluate the quality of the output sound using a subjective listening test. We provide sound examples and the system's source code for reproducibility.


1 Introduction

Percussion is one of the main components in music and is normally responsible for a song’s rhythm section. Classic percussion instruments create sound when struck or scraped; however, new electronic instruments were developed to generate these sounds, either by playing prerecorded samples or by synthesising them. These instruments are called drum machines and became very popular in electronic music [12]. However, these early drum machines did not provide much control over the generation of the sounds. With developments in digital audio technology and computer music, new drum machines were hand-designed using expert knowledge of synthesis techniques and electronic music production.

With the success of deep learning, several innovative generative methodologies have been proposed in recent years. These include Generative Adversarial Networks (GANs) [11], Variational Autoencoders (VAEs) [14] and autoregressive networks [20, 7]. In the audio domain, such methodologies have been applied to singing voice [4], instrumental sounds [7] and drum sound generation [1]. However, in the case of percussive sounds, the proposed methods only allow the user to navigate non-intuitive, high-dimensional latent spaces.

The aim of our research is to create a single-event percussive-sound synthesizer that can be intuitively controlled by users, regardless of their sound design knowledge. This requires both a back end, a generative model able to map the user controls to the output sound, and a front-end user interface. In this paper, we propose a generative methodology based on the Wave-U-Net architecture [18]. Our method maps high-level characteristics of sounds to the corresponding waveforms. The use of these features is aimed at giving the end user intuitive control over the sound generation process. We also present a dataset of percussive one-shot sounds collected from Freesound [10], curated specially for this study.

The source code for our model is available online (https://github.com/pc2752/percussive_synth), as are sound examples (https://pc2752.github.io/percussive_synth/) showcasing the robustness of the models.

2 Generative Models For Audio

In the audio domain, several generative models have been proposed over recent years. In the context of music, generative models have shown success especially in creating pitched instrumental sounds when conditioned on musical notes. A pioneering work in this field was NSynth [7]. This synthesizer is based on the WaveNet vocoder [17], an autoregressive architecture which, while capable of generating high-quality sounds, is very resource intensive. Several alternative architectures have been used for the generation of musical notes, based on GANs [5, 6], VAEs [8, 1, 3], adversarial autoencoders [2] and autoencoders with WaveNet [7].

For percussive sound synthesis, the most relevant work is the Neural Drum Machine [1], which uses a Conditional Wasserstein Autoencoder [19] trained on the magnitude component of the spectrogram of percussive sounds, coupled with a multi-head convolutional neural network for reconstructing the audio from the spectral representation. Principal Component Analysis is applied to the low-dimensional representation learned by the autoencoder to select the most influential dimensions of the embedding, which are exposed to the user through a control interface. However, the parameters controlled by the user are abstract and are not shown to be perceptually relevant or semantically meaningful.

In our case, we wish to directly map a chosen set of features to the output sound. The WaveNet [20] architecture has been shown to generate high-quality waveforms conditioned on input features. However, the autoregressive nature of the model makes it resource intensive, and the short duration of percussive sounds does not require a long temporal model. Therefore, for our study, we decided to use the Wave-U-Net [18] architecture, which has been shown to effectively model waveforms in the case of source separation and follows a feedforward convolutional architecture, making it resource efficient. The model takes as input the waveform of the mixture of sources, downsamples it through a series of convolutional operations to generate a low-dimensional representation, and then upsamples it through linear interpolation followed by convolution to the output dimensions. There are concatenative skip connections between the corresponding layers of the downsampling and upsampling blocks. In our work, we adapt this architecture to fit the desired use case.

3 Timbral features

For our end goal, we require semantically meaningful features that allow for intuitive control of the synthesizer. In the field of Music Information Retrieval, a strong effort has been put into developing hand-crafted features which can characterise sounds. These features enable users to retrieve sounds or music from large audio collections by automatically describing them according to their timbre, their mood, or other characteristics which are easy for users to understand. For our purpose, we need features pertaining to timbre, which we understand as the perceptual characteristics of a sound analogous to its colour or quality. Control over such features would enable the user to intuitively shape sounds.

A set of such features has been proposed in [15], where recurrent query terms related to timbral characteristics, used for searching sounds in large audio databases, were identified. Regression models were developed by mapping user-collected ratings to timbral characteristics, quantifying the semantic attributes hardness, depth, brightness, roughness, boominess, warmth and sharpness. The work proposes feature extractors for these query terms, and we use an open-source implementation of them (https://github.com/AudioCommons/ac-audio-extractor). For the rest of this paper, we refer to the features extracted by these extractors as timbral features.
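As a rough illustration, the summary features for one sound can be obtained through the timbral_models Python package, which the linked extractor builds on. The snippet assumes the package exposes a timbral_extractor helper returning a dictionary of attribute scores; the exact API and key names should be checked against the repositories.

```python
# Sketch: computing the summary timbral features for a single sound.
# Assumes the `timbral_models` package exposes `timbral_extractor`, which
# returns a dict of attribute scores (hardness, depth, brightness, roughness,
# boominess, warmth, sharpness, ...). Verify the API in the linked repositories.
from timbral_models import timbral_extractor

def extract_timbral_features(wav_path):
    """Return the dict of summary timbral scores for one audio file."""
    return timbral_extractor(wav_path)

# Hypothetical usage:
# scores = extract_timbral_features("kick_01.wav")
# print(sorted(scores.keys()))
```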

Another relevant characteristic, commonly present in drum synthesizers and familiar to music makers, is the temporal envelope of the sound. This feature describes the energy of the sound over time and is normally exposed to users in drum synthesizers as a set of attack and decay controls. We use an open-source implementation of the envelope algorithm described in [21], available in the Essentia library [9]. An attack time of and a release time of were used to generate a smooth curve that matched the sound energy over time. It must be noted that the timbral features described previously are summary features, i.e. they have a single value for each sound, whereas the envelope evolves over time and has the same dimensions as the waveform.
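As an illustration, the envelope can be computed with Essentia's Envelope algorithm roughly as follows; the sample rate and attack/release times shown are placeholders, not the values used in the paper.

```python
# Sketch: computing the time-domain envelope with Essentia's Envelope algorithm.
# The sample rate and attack/release times below are illustrative placeholders.
import essentia.standard as es

SAMPLE_RATE = 16000                 # assumed working sample rate (placeholder)
ATTACK_MS, RELEASE_MS = 5.0, 50.0   # hypothetical attack/release times in ms

def extract_envelope(wav_path):
    """Return an envelope with the same length as the loaded waveform."""
    audio = es.MonoLoader(filename=wav_path, sampleRate=SAMPLE_RATE)()
    env = es.Envelope(sampleRate=SAMPLE_RATE,
                      attackTime=ATTACK_MS,
                      releaseTime=RELEASE_MS)(audio)
    return env
```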

4 Dataset Curation

We curated two datasets in order to train our models in different scenarios. The first consists of sounds taken from Freesound, a website which hosts a collaborative collection of Creative Commons licensed sounds (https://freesound.org) [10]. We queried the database with the names of percussion instruments as keywords in order to retrieve a set of percussive sounds, with a limit on effective duration of . We then manually verified these sounds (using an annotation tool we developed, available at https://github.com/xavierfav/percussive-annotator) to select those containing a single event and of appreciable quality in the context of traditional electronic music production. This process created a dataset of around sounds, containing instruments such as kicks, snares, cymbals and bells. For the rest of this paper, we refer to this dataset as FREESOUND.
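As an illustration of this collection step, percussive one-shots can be retrieved with the freesound-python client roughly as follows; the query terms, duration filter, fields and API key are placeholders and do not reproduce the exact settings used to build the FREESOUND dataset.

```python
# Sketch: gathering candidate one-shot percussive sounds from Freesound using the
# freesound-python client (https://github.com/MTG/freesound-python).
# Query terms, duration filter and API key are illustrative placeholders.
import freesound

client = freesound.FreesoundClient()
client.set_token("YOUR_API_KEY", "token")

for instrument in ["kick", "snare", "cymbal", "bell"]:
    results = client.text_search(query=instrument,
                                 filter="duration:[0 TO 1]",   # placeholder limit
                                 fields="id,name,previews",
                                 page_size=50)
    for sound in results:
        # Download the preview of each hit for later manual verification.
        sound.retrieve_preview(".", f"{sound.id}_{instrument}.mp3")
```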

A second dataset was created by aggregating about kick drum one-shot samples from our personal collections, originating mostly from commercial libraries. These sounds are often of high quality, annotated, and contain only one event, which makes them well suited to constructing a dataset of isolated sounds and to training our model in a restricted context. We refer to this dataset as KICKS.

The aim of creating two datasets was to understand whether our method could be applied to synthesising a wide variety of percussion sounds, or whether it was more appropriate to focus on synthesising only one type of sound, in this case the kick drum.

The dataset will be made publicly available upon paper acceptance.

5 Methodology

We aim to model the probability distribution of the waveform as a function of the timbral features and the time-domain envelope. To this end, we use a feedforward convolutional neural network as a function approximator to model this distribution. We use a U-Net architecture, similar to the one used by [18], which has been shown to effectively model the waveform of an audio signal. Our network takes the envelope as input and concatenates to it the timbral features, broadcast to the input dimensions, as done by [20]. As shown in Figure 1, downsampling is done via a series of strided convolutions to produce a low-dimensional embedding. We use a filter length of and double the number of filters after each layers, starting with filters. A total of layers are used in the encoder, leading to an embedding of size . We upsample this low-dimensional embedding sequentially to the output, using linear interpolation followed by convolution. This mirrors the approach used by [4, 18] and has been shown to avoid the high-frequency artefacts which appear when upsampling with transposed convolutions. As with the U-Net, there are skip connections between the corresponding layers of the encoder and decoder, as shown in Figure 1.
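The sketch below illustrates this design with a scaled-down PyTorch model: the envelope and the broadcast timbral features are concatenated, encoded with strided 1-D convolutions, and decoded with linear interpolation followed by convolution, using concatenative skip connections. It is not the authors' implementation (which is available in the linked repository); the layer count, filter width, stride, channel sizes and output activation are placeholder choices.

```python
# Minimal sketch of a Wave-U-Net-style generator conditioned on an envelope and
# summary timbral features. All hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PercussiveUNet(nn.Module):
    def __init__(self, n_features=7, n_layers=5, base_channels=16,
                 kernel_size=5, stride=2):
        super().__init__()
        in_ch = 1 + n_features                        # envelope + broadcast features
        channels = [base_channels * 2 ** i for i in range(n_layers)]

        # Encoder: strided 1-D convolutions that reduce the time resolution.
        self.downs = nn.ModuleList()
        enc_in = in_ch
        for out_ch in channels:
            self.downs.append(nn.Conv1d(enc_in, out_ch, kernel_size,
                                        stride=stride, padding=kernel_size // 2))
            enc_in = out_ch

        # Decoder: linear interpolation (in forward) followed by convolution,
        # with concatenative skips from the mirrored encoder layers.
        self.ups = nn.ModuleList()
        dec_in = channels[-1]
        skip_chs = channels[:-1][::-1] + [in_ch]
        for skip_ch in skip_chs:
            out_ch = max(skip_ch, base_channels)
            self.ups.append(nn.Conv1d(dec_in + skip_ch, out_ch, kernel_size,
                                      padding=kernel_size // 2))
            dec_in = out_ch
        self.out_conv = nn.Conv1d(dec_in, 1, 1)       # project to a mono waveform

    def forward(self, envelope, features):
        # envelope: (batch, 1, T); features: (batch, n_features)
        cond = features.unsqueeze(-1).expand(-1, -1, envelope.shape[-1])
        x = torch.cat([envelope, cond], dim=1)

        skips = [x]
        for down in self.downs:
            x = F.leaky_relu(down(x))
            skips.append(x)
        skips.pop()                                   # the bottleneck is not a skip

        for up in self.ups:
            skip = skips.pop()
            x = F.interpolate(x, size=skip.shape[-1], mode="linear",
                              align_corners=False)
            x = F.leaky_relu(up(torch.cat([x, skip], dim=1)))
        return torch.tanh(self.out_conv(x))
```

For an envelope batch of shape (batch, 1, T) and a (batch, 7) feature matrix, the model returns a (batch, 1, T) waveform.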

Figure 1: The proposed architecture, with layers.

We initially used a simple reconstruction loss between the output waveform x̂ and the target waveform x, shown in equation (1), to optimise the network:

L_wave = ‖x̂ − x‖    (1)

While this resulted in a decent output, we noticed that the network was able to reproduce the low-frequency components of the desired sound but lacked detail in the high-frequency components. To rectify this, we added a short-time Fourier transform (STFT) based loss, similar to [16], shown in equation (2):

L_stft = ‖|STFT(x̂)| − |STFT(x)|‖    (2)

The final loss of the network is shown in equation (3):

L = L_wave + α · L_stft    (3)

where α is the weight given to the high-frequency (STFT) component of the reconstruction.
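A minimal PyTorch sketch of this combined objective follows, assuming an L1 distance for both the waveform and STFT-magnitude terms and illustrative STFT settings; the function names, default parameters and the optional bin restriction (used later for the HIGH variant) are our own.

```python
# Sketch of the combined loss of equations (1)-(3). L1 distances and the STFT
# parameters are assumptions for illustration, not the paper's exact settings.
import torch

def stft_mag(x, n_fft=512, hop=128):
    """Magnitude STFT of a batch of waveforms of shape (batch, samples)."""
    window = torch.hann_window(n_fft, device=x.device)
    spec = torch.stft(x, n_fft=n_fft, hop_length=hop, window=window,
                      return_complex=True)
    return spec.abs()                                  # (batch, bins, frames)

def combined_loss(pred, target, alpha=0.5, bin_range=None):
    """L = L_wave + alpha * L_stft (eq. 3). `bin_range` optionally restricts the
    STFT term to a band of frequency bins, e.g. only high bins."""
    l_wave = torch.mean(torch.abs(pred - target))      # eq. (1), L1 assumed
    p_mag, t_mag = stft_mag(pred), stft_mag(target)
    if bin_range is not None:
        lo, hi = bin_range
        p_mag, t_mag = p_mag[:, lo:hi, :], t_mag[:, lo:hi, :]
    l_stft = torch.mean(torch.abs(p_mag - t_mag))      # eq. (2), L1 assumed
    return l_wave + alpha * l_stft                     # alpha: STFT weight
```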

6 Experiments

6.1 Data Pre-processing

All sounds were downsampled to a sampling rate of and silences were removed from the beginning and end of each sound. Following this, we calculated the timbral features and envelope described in Section 3 and then zero-padded the end of each sound to samples. The features were normalised using min-max normalisation to ensure that the inputs were within the range of to .
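A minimal pre-processing sketch follows, assuming librosa for loading and silence trimming; the target sample rate and padded length are placeholder values rather than the paper's settings.

```python
# Sketch: resample, trim leading/trailing silence, zero-pad to a fixed length,
# and min-max normalise the summary features. Constants are placeholders.
import numpy as np
import librosa

TARGET_SR = 16000      # placeholder sample rate
TARGET_LEN = 16384     # placeholder number of samples

def preprocess_audio(wav_path):
    audio, _ = librosa.load(wav_path, sr=TARGET_SR)
    audio, _ = librosa.effects.trim(audio)         # remove leading/trailing silence
    audio = audio[:TARGET_LEN]                     # crop, then zero-pad at the end
    return np.pad(audio, (0, TARGET_LEN - len(audio)))

def minmax_normalise(features):
    # features: (n_sounds, n_features) array of summary timbral features
    f_min, f_max = features.min(axis=0), features.max(axis=0)
    return (features - f_min) / (f_max - f_min + 1e-9)
```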

6.2 Training the network

The network was trained using the Adam optimiser [13] for epochs with a batch size of . We use of the data for training and for evaluation. The STFT used for the loss function is calculated over samples with a hop size of . With the given sampling rate, this led to a frequency resolution of per bin. We evaluate the model trained with three losses: the L_wave loss, henceforth referred to as WAVE; the full loss L, referred to as FULL; and a version using only the high-frequency components of the STFT in L_stft, referred to as HIGH. This last model uses STFT components above or bins, as traditional kick synthesizers model a kick sound via a low-frequency sinusoid, generally below , with some high-frequency noise. We use for our experiments.
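For the HIGH variant, the cutoff frequency translates to an STFT bin index as in the small worked example below; the sample rate, FFT size and cutoff are placeholders, since the paper's exact values are not reproduced in this text.

```python
# Worked example: converting a cutoff frequency to an STFT bin index, so that only
# bins above the cutoff contribute to the HIGH loss. All values are placeholders.
SAMPLE_RATE = 16000        # assumed sampling rate
N_FFT = 512                # assumed FFT size used for the loss
CUTOFF_HZ = 200.0          # assumed low/high split frequency

hz_per_bin = SAMPLE_RATE / N_FFT                 # 31.25 Hz per bin here
cutoff_bin = int(CUTOFF_HZ / hz_per_bin)         # = 6 -> use bins [6:] in the loss
print(hz_per_bin, cutoff_bin)
```

With the combined_loss sketch above, this would correspond to passing bin_range=(cutoff_bin, None).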

6.3 Evaluation

Figure 2: A sample of the input envelope and features, and the output waveforms produced by the various models for the KICKS dataset.

The proposed models need to be evaluated in terms of both the perceived audio quality and the coherence of timbral features between the input and the output. A preliminary assessment of the reconstruction quality can be made by looking at the output waveforms, shown in Figure 2 for a sample from the test set of the KICKS dataset. Although the reconstruction appears visually accurate for the three models, the perceived quality of the audio is subjective and cannot be judged by simply looking at the plots. We can, however, objectively assess the coherence of the timbral features used as input to the model; more importantly, we want to verify that a change in these features leads to a corresponding change in the output.

To this end, we vary each individual timbral feature while keeping the other features constant, and check the coherence of the output waveform via the same feature extractors used to compute the input features. For each individual feature, we set a low, a mid and a high value over the normalised scale, generate the corresponding outputs, and extract the feature values from them. For coherent modelling, the extracted values should follow the same low-to-high order as the inputs. We assess the accuracy of this ordering with three tests, E1, E2 and E3, each checking one of the ordering conditions between the extracted feature values. The accuracy of the models over these tests is shown in Table 1 and a feature-wise summary is shown in Table 2.
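The ordering check can be sketched as below; the low/mid/high settings and the exact definitions of E1, E2 and E3 written into the code are our own illustrative assumptions rather than the paper's formal conditions.

```python
# Sketch of the feature-coherence check for one timbral feature: synthesise with
# low, mid and high input values (other features fixed), re-extract the feature
# from the outputs, and test whether the extracted values preserve the ordering.
# The concrete E1/E2/E3 conditions below are illustrative assumptions.
def coherence_tests(f_low, f_mid, f_high):
    """Return three boolean ordering checks for one feature."""
    e1 = f_low < f_mid             # low -> mid increases the extracted value
    e2 = f_mid < f_high            # mid -> high increases the extracted value
    e3 = f_low < f_mid < f_high    # the full ordering holds
    return e1, e2, e3

# Hypothetical usage with extracted feature values from three generated sounds:
print(coherence_tests(0.21, 0.48, 0.83))   # -> (True, True, True)
```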

Dataset     Model   E1      E2      E3
FREESOUND   WAVE    0.601   0.569   0.552
            HIGH    0.649   0.601   0.657
            FULL    0.825   0.758   0.780
KICKS       WAVE    0.805   0.722   0.722
            HIGH    0.876   0.789   0.769
            FULL    0.920   0.814   0.798

Table 1: Objective verification of feature coherence (accuracy) across models and datasets.

It can be seen that the FULL model, followed by HIGH, is the most effective at mapping the input features to the output waveform in terms of feature coherence, but all three models do maintain this coherence to a high degree.

             FREESOUND              KICKS
Feature      E1     E2     E3       E1     E2     E3
Boominess    0.98   0.82   0.98     0.96   0.86   0.95
Brightness   0.99   0.99   1.00     0.99   0.98   0.84
Depth        0.94   0.65   0.94     0.99   0.89   0.94
Hardness     0.64   0.66   0.59     0.85   0.61   0.79
Roughness    0.63   0.59   0.57     0.84   0.80   0.62
Sharpness    0.63   0.77   0.45     0.90   0.91   0.54
Warmth       0.92   0.79   0.91     0.88   0.61   0.87

Table 2: Objective verification of feature coherence for the best-performing model on each dataset, broken down by feature.

While feature coherence is well maintained for boominess, brightness, depth and warmth, the models are less consistent for hardness, roughness and sharpness, particularly on the FREESOUND dataset.

Given the absence of a suitable baseline system, we used an online AB listening test, comparing the models against each other and against a reference, for a subjective evaluation of quality. The participants were presented with examples from each of the two datasets. Each example offered two options, A and B, generated by two of the models trained on that dataset, along with a reference ground-truth audio clip, and the participant was asked to choose the clip closest in quality to the reference. There were examples from each of the model pairs. A total of participants took part in the listening test, the results of which are shown in Figure 3.

Figure 3: Results of the listening test, displaying the user preference between loss functions for each of the datasets.

A clear preference for the HIGH model can be seen, especially for the KICKS dataset. This can be attributed partly to the choice of cutoff frequency used in the model and partly to the diversity of sounds in the FREESOUND dataset. We note the difficulty of assessing audio quality in printed text and encourage the reader to visit our demo page and listen to the audio samples.

7 Conclusions And Future Work

In this work, we proposed a method using a feedforward convolutional neural network based on the Wave-U-Net [18] for synthesising percussive sounds conditioned on semantically meaningful features.

Our final aim is to create a system that can be controlled using high-level parameters: semantically meaningful characteristics that correspond to concepts casual music makers are familiar with. To this end, we use hand-crafted features designed by MIR experts, and we curate and present a dataset for the purpose of modelling percussive sounds. Via objective evaluation, we verified that the control features do indeed modify the output waveform as desired, and quality was assessed via an online listening test.

Future work will focus on developing an interface for interacting with the synthesizer, which will allow us to evaluate the approach in its context of use, with real users.

References

  • [1] C. Aouameur, P. Esling, and G. Hadjeres (2019) Neural drum machine: an interactive system for real-time synthesis. arXiv:1907.02637.
  • [2] A. Bitton, P. Esling, A. Caillon, and M. Fouilleul (2019) Assisted sound sample generation with musical conditioning in adversarial auto-encoders. arXiv:1904.06215.
  • [3] A. Bitton, P. Esling, and A. Chemla-Romeu-Santos (2018) Modulated variational auto-encoders for many-to-many musical timbre transfer. arXiv:1810.00222.
  • [4] P. Chandna, M. Blaauw, J. Bonada, and E. Gomez (2019) WGANSing: a multi-voice singing voice synthesizer based on the Wasserstein-GAN.
  • [5] C. Donahue, J. J. McAuley, and M. S. Puckette (2018) Synthesizing audio with generative adversarial networks. arXiv:1802.04208.
  • [6] J. Engel, K. K. Agrawal, S. Chen, I. Gulrajani, C. Donahue, and A. Roberts (2019) GANSynth: adversarial neural audio synthesis. In International Conference on Learning Representations (ICLR 2019).
  • [7] J. Engel, C. Resnick, A. Roberts, S. Dieleman, M. Norouzi, D. Eck, and K. Simonyan (2017) Neural audio synthesis of musical notes with WaveNet autoencoders. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017), pp. 1068–1077.
  • [8] P. Esling and A. Bitton (2018) Bridging audio analysis, perception and synthesis with perceptually-regularized variational timbre spaces. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR 2018).
  • [9] D. Bogdanov et al. (2013) Essentia: an audio analysis library for music information retrieval. In International Society for Music Information Retrieval Conference (ISMIR 2013), Curitiba, Brazil, pp. 493–498.
  • [10] F. Font, G. Roma, and X. Serra (2013) Freesound technical demo. In Proceedings of the 21st ACM International Conference on Multimedia, pp. 411–412.
  • [11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • [12] Z. Hasnain (2017) How the Roland TR-808 revolutionized music. The Verge.
  • [13] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR 2015).
  • [14] D. P. Kingma and M. Welling (2014) Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations (ICLR 2014).
  • [15] A. Pearce, T. Brookes, and R. Mason (2017) Timbral attributes for sound effect library searching. In Audio Engineering Society Conference: 2017 AES International Conference on Semantic Audio.
  • [16] A. Sahai, R. Weber, and B. McWilliams (2019) Spectrogram feature losses for music source separation.
  • [17] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerry-Ryan, et al. (2018) Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4779–4783.
  • [18] D. Stoller, S. Ewert, and S. Dixon (2018) Wave-U-Net: a multi-scale neural network for end-to-end audio source separation. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR 2018), Paris, France, pp. 334–340.
  • [19] I. O. Tolstikhin, O. Bousquet, S. Gelly, and B. Schölkopf (2018) Wasserstein auto-encoders. In 6th International Conference on Learning Representations (ICLR 2018).
  • [20] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. W. Senior, and K. Kavukcuoglu (2016) WaveNet: a generative model for raw audio. In The 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, USA, p. 125.
  • [21] U. Zölzer (2008) Digital Audio Signal Processing. Wiley.