OrchideaSOL: a dataset of extended instrumental techniques for computer-aided orchestration

07/01/2020
by Carmine-Emanuele Cella, et al.

This paper introduces OrchideaSOL, a free dataset of samples of extended instrumental playing techniques, designed to be used as the default dataset for the Orchidea framework for target-based computer-aided orchestration. OrchideaSOL is a reduced and modified subset of Studio On Line, or SOL for short, a dataset developed at Ircam between 1996 and 1998. We motivate the reasons behind OrchideaSOL, describe the differences between the original SOL and our dataset, and show the work done to improve the dynamic ranges of orchestral families and other aspects of the data.

1 Introduction

Target-based computer-aided orchestration is a set of techniques that help composers find combinations of orchestral sounds matching a given sound [maresz2013computer]. Typically, computer-aided orchestration systems consist of algorithms that compute a large number of combinations of audio samples in a dataset, corresponding to instrumental notes, trying to find the combination that is most similar to the target with respect to some metric. Solutions to this problem are proposed as orchestral scores, often ranked by the similarity between the target sound and the mixture of audio samples in the dataset. Figure 1 shows a typical workflow of a computer-aided orchestration system.

Figure 1: A diagram of a typical system for computer-aided orchestration.

In such a context, the quality of the proposed orchestrations heavily depends on the quality and on the overall consistency of the sounds in the dataset. An essential aspect of this consistency is given by the dynamic ranges across instrumental families. Suppose, for example, that a flute note that is tagged as pp has a waveform amplitude greater than a trumpet note tagged as ff. In this case, the orchestration solutions would work poorly in real-life orchestral scenarios.

Among the state-of-the-art systems for computer-aided orchestration there is the Orch* family: Orchidée (2008) [carpentier2010interacting], Orchids (2013) [caetano2020] and Orchidea (2017) [gillick2019estimating], developed at Ircam (Paris) and at the Haute École de Musique (Geneva). In this paper, we focus on the latest iteration, Orchidea (for more information on the Orchidea system for computer-aided orchestration, visit http://www.orch-idea.org).

Until now, Orchidea relied on the Studio On Line (SOL) dataset [ballet1999studio] in a specific distribution, called 0.9.2, originally created as the default dataset for Orchids. This version differs from the original SOL in several regards. First, the amplitude level of each audio sample has been normalized, which means that the genuine dynamic ranges of each instrument family have been lost. Moreover, the licensing of the dataset is not free and requires a Premium account on the Ircam Forum (see https://forum.ircam.fr/ for more information).

The effects of this normalization severely impacted the results of Orchidea. Thus, we decided to recover and process the original recordings of SOL to create three new datasets, one of which is OrchideaSOL. The three versions have different sizes and features and are distributed under different licenses:

  • TinySOL: a small subset of SOL including only ordinario sounds, free for non-commercial usage;

  • OrchideaSOL: the version documented in this paper, featuring a number of extended playing techniques, free for non-commercial usage after subscription to the Ircam Forum;

  • FullSOL: the full set of samples originally present in SOL, requiring a Premium account on the Ircam forum to be used.

Although in this paper we will specifically focus on OrchideaSOL, all three versions have been processed in the same way, in order to correct and improve the dynamic ranges and fix other minor problems.

2 Studio On Line: a brief history

Studio On Line [ballet1999studio] is a dataset of instrumental sounds featuring a rich set of extended techniques commonly found throughout 20th century Western music. SOL was recorded in the Espace de projection of Ircam between 1996 and 1998 [sollevyreport]. Over the past two decades, SOL has served as a reference dataset for many Ircam projects. The project also included software for sound processing and transformation. The head of the project was Guillaume Ballet, while the artistic managers were Joshua Fineberg (in 1997) and Fabien Lévy (in 1998).

The recordings were carried out in two phases. The first phase included the recordings of Flute, Oboe, Clarinet, Bassoon, Horn, Trumpet, Trombone, Violin, Viola, Cello and Double Bass. The second phase was planned to include more instruments and doublings [solfinebergreport]; however, only Tuba, Harp, Guitar, Alto Sax and Accordion were eventually recorded.

All instruments were recorded in a six-channel format (see Fig. 2):

  • a stereo pair (tracks 1 and 2) was used as a reference signal;

  • a proximity microphone (track 3), with minimal reverberation, was used to record the signal at a fairly high level even for very soft sounds;

  • a so-called ‘internal’ microphone (track 4), either an aerial microphone placed inside the instrument or a contact microphone, was used to record sounds mostly for the purpose of acoustics studies;

  • two bi-directional microphones with a figure-8 pattern (tracks 5 and 6), placed far from the musician, were used to capture the reverberation.

More than 125k samples (sampled at 48 kHz with a resolution of 24 bits) were recorded [sollevyreport]: ordinario sounds were sampled at at least three levels of dynamics (usually pp, mf and ff) and with semitonal resolution. Woodwind instruments also included quarter tones. Most of the other playing techniques were sampled with a coarser resolution in pitch and/or dynamics.

Figure 2: Spatial arrangement of the microphones during the recording sessions of SOL.

3 OrchideaSOL

We have built OrchideaSOL starting from the original version of SOL, properly cleaned and trimmed, resampled at 44.1 kHz (with a bit depth of 24 bits), and made monophonic by keeping only the third channel (proximity microphone). The rationale behind this choice was to avoid the normalization introduced in later versions of the dataset, such as the one used by Orchids, as well as some resampling issues. The proximity microphone provided the cleanest audio recording quality. We subsequently performed the operations described in the following subsections.

Figure 3: RMS volume curves of the Alto Saxophone ordinario notes at three different levels of dynamics, in three different SOL-related datasets: (a) the original SOL; (b) the Orchids version of SOL; (c) our OrchideaSOL. The volume is calculated by windowing the signal and weighting each window’s RMS according to a measure of loudness obtained via the pyloudnorm library [christian_steinmetz_2019_3551801]. Cubic least-squares regressions are shown with dashed lines.

3.1 Selection

We include 12253 sound files in the free OrchideaSOL dataset, amounting to 35% of the total number of playing techniques within SOL. The choice was motivated by the desire to provide an initial set of techniques rich enough to allow experimentation with the Orchidea tools. This initial set includes: ordinario, sforzati, artificial harmonics, pizzicati, vibrati, slaps, aeolian sounds, brassy sounds, stopped sounds, flatterzunge, discolored fingerings, harmonic fingerings, notes played col legno (tratto or battuto), tremoli and bisbigliando, pedal tones, whistle tones, key clicks, jet whistles, and a few other extended techniques.

Because Orchidea currently works with static targets, we exclude all techniques with time-varying dynamics, such as crescendo and decrescendo.

3.2 Retuning

A large part of the samples in the original SOL distribution were audibly out of tune, so a retuning pass was unavoidable. To avoid retuning all samples, we decided to process only those notes whose pitch error was above 10 cents and below 80 cents. The few samples with an error above 80 cents were flagged and inspected manually on an individual basis (see section 3.6). The resampling was done via the Python resampy module, with its kaiser_best filter (Resampy web page: http://ccrma.stanford.edu/~jos/resample/). The fundamental frequency was estimated via Essentia’s PitchYin descriptor [bogdanov2013essentia], and verified by ear for every resampled sound.
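As an illustration, the selection rule and the pitch-shift-by-resampling step can be sketched as follows. This is a simplified sketch, not the actual pipeline: it uses plain linear interpolation instead of resampy’s kaiser_best filter, and the function names are ours.

```python
import numpy as np

A4 = 440.0

def cents_from_nearest_semitone(f0):
    """Signed distance (in cents) from f0 to the nearest equal-tempered pitch."""
    midi = 69.0 + 12.0 * np.log2(f0 / A4)
    return 100.0 * (midi - np.round(midi))

def retune(x, f0, lo=10.0, hi=80.0):
    """Retune a sample whose pitch error lies in [lo, hi) cents.

    Resampling shifts pitch and duration together, matching the procedure
    described above. Linear interpolation stands in for kaiser_best here."""
    cents = cents_from_nearest_semitone(f0)
    if not (lo <= abs(cents) < hi):
        return x, cents  # leave in-tune notes alone; large errors are flagged
    ratio = 2.0 ** (cents / 1200.0)     # >1 if the note is sharp
    n_out = int(round(len(x) * ratio))  # sharp -> stretch -> lower the pitch
    t = np.linspace(0.0, len(x) - 1.0, n_out)
    return np.interp(t, np.arange(len(x)), x), cents
```

Played back at the original sample rate, the stretched (or shortened) signal lands on the nearest equal-tempered pitch.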

3.3 Volume compensation

Although the third channel is by far the best channel to work with in terms of recording quality, it has an important drawback: its level and positioning were often modified during the course of the recording sessions.

A subset of the recordings (Alto Sax, Accordion, Harp, Guitar and Bass Tuba) had precise reports of such modifications; unfortunately, most of the other instruments did not. In order to recover more natural relationships between instrument dynamics, we reverted the volume modifications according to the reports whenever we could; when we could not, we extrapolated the rationale behind them and tried to infer volume adjustments family-wise, aided by an analysis of volume curves across the different levels of dynamics. We initially tried to infer the loudness differences by comparing the signal of the third microphone with the standard stereo pair, but this proved more intricate, invasive and error-prone than an a-posteriori analysis of volume curves.

We decided to avoid any nonlinear signal processing, such as limiters or compressors. Hence, we applied a global negative makeup gain to the dataset to avoid clipping, and recalibrated local makeup factors to account for macroscopic differences between families of instruments. For example, in the original dataset, a flute ff was much louder than a trumpet ff. Although we are aware that our choices are far from perfect (whatever “perfection” may mean in such a recovery task), we believe that the volume-compensated dataset is more faithful to the relationships between instrumental dynamics than the original one. As such, we hope that it will prove more effective in orchestration tasks.

Figure 3 illustrates the application of our volume compensation procedure in the case of the Alto Saxophone. In the online repository of Orchidea, we provide a list of all the gain transformations which we applied.
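The volume curves underlying this analysis can be approximated as follows. This sketch computes a plain windowed RMS in dBFS; the published curves additionally weight each window by a loudness measure from pyloudnorm, which we omit here, and the window/hop sizes are our own assumptions.

```python
import numpy as np

def rms_curve(x, win=4096, hop=2048):
    """Windowed RMS of a signal, in dBFS.

    Simplified stand-in for the loudness-weighted RMS used for the
    published volume curves (no pyloudnorm weighting)."""
    frames = [x[i:i + win] for i in range(0, len(x) - win + 1, hop)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    return 20.0 * np.log10(rms + 1e-12)

def cubic_fit(midi, level_db):
    """Cubic least-squares fit of overall level against MIDI pitch,
    as used for the dashed regression lines in Figure 3."""
    return np.polyfit(midi, level_db, deg=3)
```

One level value per (pitch, dynamics) pair, fitted against pitch, reproduces the shape of the curves shown in Figure 3.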

3.4 Resampling

In SOL, some of the playing techniques were sampled by whole tones, minor thirds or major thirds. Moreover, some of the samples were missing altogether. Because Orchidea does not perform automatic pitch shifting of samples during the analysis process, we decided, for most of the playing techniques, to fill in the missing notes by resampling the nearest ones, up to a tone upwards or downwards.
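The gap-filling policy can be sketched as follows. This is a minimal illustration of the selection rule only; the function name and MIDI-based interface are ours, and the actual resampling would then shift the chosen source note by the required number of semitones.

```python
def fill_missing(available_midi, lo, hi, max_shift=2):
    """For each missing semitone in [lo, hi], pick the nearest recorded
    note within max_shift semitones (a tone) to resample from.

    Returns a dict {missing_note: source_note}."""
    plan = {}
    avail = sorted(available_midi)
    for m in range(lo, hi + 1):
        if m in available_midi:
            continue  # already recorded
        src = min(avail, key=lambda a: abs(a - m))
        if abs(src - m) <= max_shift:
            plan[m] = src  # resample src up/down by (m - src) semitones
    return plan
```

For a technique sampled by minor thirds, every missing semitone lies within a tone of a recorded note, so the whole range can be filled.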

3.5 Renaming

The naming in SOL was at times inconsistent and hard to parse properly. Filenames usually included four fields (instrument, playing technique, pitch, dynamics) and possibly a fifth (other specifications), all separated by dashes. However, dashes were also often used inside the playing technique names themselves, as well as in descriptions of pitch combinations, such as multiphonics or glissandi. Moreover, in a few instances, some of the fields were dropped when they were not applicable to the sample at hand. For the release of OrchideaSOL, we devised a semi-automatic script to fix these issues and renamed the sound files. (A slightly different renaming was already accomplished in later versions of SOL, such as the one used in Orchids; however, these versions had the important drawback of having been normalized, with seemingly no trace of the original levels.) The general filename has been brought to the standard form:

<instr>-<ps>-<pitch>-<dyn>-<other>-<res>.wav

where <instr> is the abbreviated instrument name, <ps> is the playing technique, <pitch> is the pitch in textual form (under the convention that middle C is C4), <dyn> is the dynamics and <other> is any other meaningful specification (e.g. string number, alternative version number, etc.). Any of <pitch>, <dyn> and <other> can be replaced by the letter ‘N’ when the property is not applicable to the recorded file (e.g. pitch is irrelevant when playing a string instrument on the tuning pegs). All five properties always appear in the file name (even though most of the time the <other> property is just ‘N’). If a mute is applied, the instrument name becomes <instr>+<mute>.

The <res> property encodes information about file resampling, namely whether the tuning of the note was adjusted, or whether the note did not appear in the original dataset and was resampled from another one. The first case is written T<amount><u|d>, the second R<amount><u|d>. The amount is always an integer number of cents; ‘u’ stands for upwards and ‘d’ for downwards. If no resampling was performed, the <res> property is ‘N’.

We rewrote the names of the playing techniques so as to avoid any dash or non-ASCII character. Furthermore, we harmonized discrepancies between pairs of synonymous categories, such as sforzato vs. sforzando.
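Under this convention, a filename such as Fl-ord-C4-mf-N-T14d splits cleanly on dashes. The following sketch parses it; it assumes, as guaranteed by the renaming above, that technique names contain no dashes, and the mute name in the example is hypothetical.

```python
import re

FIELDS = ("instrument", "technique", "pitch", "dynamics", "other", "res")

def parse_filename(name):
    """Parse an OrchideaSOL filename of the form
    <instr>-<ps>-<pitch>-<dyn>-<other>-<res>.wav into a dict."""
    stem = name[:-4] if name.endswith(".wav") else name
    parts = stem.split("-")
    if len(parts) != 6:
        raise ValueError(f"unexpected field count in {name!r}")
    info = dict(zip(FIELDS, parts))
    # mutes are encoded as <instr>+<mute>
    if "+" in info["instrument"]:
        info["instrument"], info["mute"] = info["instrument"].split("+", 1)
    # resampling flag: 'N', or T/R followed by a cent amount and a direction
    m = re.fullmatch(r"([TR])(\d+)([ud])", info["res"])
    if m:
        info["res"] = {"kind": m.group(1), "cents": int(m.group(2)),
                       "direction": "up" if m.group(3) == "u" else "down"}
    return info
```

For example, Fl-ord-C4-mf-N-T14d denotes a flute ordinario C4 at mf, retuned downwards by 14 cents.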

3.6 Manual corrections

Finally, whenever we came across a file that was wrongly tagged or had evident issues in volume or pitch, we applied a manual correction. The list of all manual corrections is provided in the Orchidea repository (see section 5). Some issues found in certain folders could not be solved (e.g. missing or corrupted samples); a list of open issues is also provided in the repository.

4 Baseline classification

Together with the dataset, we include baseline classification results on three tasks: instrument recognition, playing technique recognition and note recognition. The tasks have different degrees of difficulty, given the unbalanced number of examples per class and the intrinsic nature of each problem: instrument recognition has 32 classes, playing technique recognition has 89 classes and note recognition has 145 classes (the latter due to the presence in the dataset of multi-pitch techniques, such as play-and-sing, i.e. playing a given note while singing another one).

The classification pipeline was the same for each task and included, after a standardization phase, several classifiers: k-nearest neighbors (kNN), logistic regression (LogReg), support vector machines (SVC) and a random forest with 10 estimators (RF10). The sounds were analysed using 20 MFCCs; the Python code used for the classification is included in the dataset distribution and can be consulted for a detailed description of the parameters of each classifier. Table 1 details the results of our experiments (accuracy). In all cases, the random forest obtained the best results. The playing technique recognition task appears to be the most difficult, and the accuracies drop accordingly.

Instrument Playing technique Note
kNN .85 .50 .80
LogReg .85 .58 .87
SVC .92 .73 .87
RF10 .95 .90 .90
Table 1: Classification results (accuracy) for instrument recognition, playing technique recognition and note recognition.

It is important to remark that, in this context, we did not aim to provide state-of-the-art classification results, but only to give reference baselines for further experimentation. We refer to [lostanlen2018dlfm] for the potential of SOL and its derivatives for scientific research in machine listening.
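Such a pipeline can be sketched with scikit-learn as follows. The feature and label arrays below are synthetic stand-ins; the actual MFCC features and classification code ship with the dataset, and the classifier parameters shown are defaults rather than the exact settings used for Table 1.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: one row of 20 averaged MFCCs per sample; y: class labels.
# Synthetic stand-ins here, just to exercise the pipeline shape.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 4, size=200)

classifiers = {
    "kNN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=1000),
    "SVC": SVC(),
    "RF10": RandomForestClassifier(n_estimators=10, random_state=0),
}
scores = {}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)  # standardization phase first
    scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
```

Replacing the synthetic arrays with the distributed MFCC features reproduces the kind of accuracy comparison reported in Table 1.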

5 Distribution

OrchideaSOL is distributed via the Ircam Forum (https://forum.ircam.fr/) and can be freely downloaded upon subscription.

5.1 Distribution of OrchideaSOL metadata on Zenodo

Besides its usage in musical creation, OrchideaSOL has the potential to advance scientific research. Indeed, the wealth of playing techniques afforded by OrchideaSOL, as well as the consistency of its recording conditions, makes it an ideal test bed for timbre modeling. In particular, samples in OrchideaSOL from different playing techniques are aligned in terms of onset time, fundamental frequency, and loudness level. Thus, OrchideaSOL makes it possible to devise systematic protocols for studying music perception and cognition in human subjects. Furthermore, OrchideaSOL may be employed to train and evaluate machine listening software on various music information retrieval (MIR) tasks, including instrument classification, playing technique classification, and fundamental frequency estimation.

To facilitate the adoption of OrchideaSOL by the research community, we generated a metadata file in CSV format which summarizes the attributes of every audio sample. This metadata file obviates the need for MIR practitioners to write a custom parser. (The metadata of OrchideaSOL can be downloaded at https://doi.org/10.5281/zenodo.3686251.)

For the sake of research reproducibility, we provide an official split of OrchideaSOL into five non-overlapping folds, as an additional column of the metadata spreadsheet. Here, it is crucial that all folds have the same distribution of labels. To achieve this goal, we applied an algorithm named Entrofy [huppenkothen2019entrofy], originally designed for cohort selection among human subjects. After convergence, we verified that all folds fare equally in terms of instruments, playing techniques, pitch range, and intensity dynamics.
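The goal of the fold assignment can be illustrated with a single-attribute stratified split. Note that this is only an illustration of the balancing objective: the actual release uses Entrofy to balance several attributes jointly, and the labels below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hypothetical single-attribute labels (one instrument per sample).
y = np.array(["Fl", "Ob", "Cl"] * 50)

# Assign each sample to one of five non-overlapping folds such that
# every fold has the same label distribution.
folds = np.empty(len(y), dtype=int)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold_id, (_, test_idx) in enumerate(skf.split(np.zeros(len(y)), y)):
    folds[test_idx] = fold_id
```

With perfectly balanced labels, each of the five folds ends up with exactly the same per-class counts, which is the property verified after Entrofy convergence.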

5.2 Distribution of TinySOL in the mirdata package

Furthermore, we provide a Python module that allows every user of TinySOL to guarantee access to the dataset in its pristine form. This module is integrated into the mirdata package [bittner2019ismir], an open-source initiative for the reproducible usage of datasets (the mirdata Python package can be installed with the command pip install mirdata). The key idea behind mirdata is for dataset curators to upload a dataset “index”, in JSON format, which summarizes the list of files as well as their MD5 checksums.

To this end, we uploaded a copy of TinySOL to the Zenodo repository of open-access data (the audio and metadata can be downloaded at https://doi.org/10.5281/zenodo.3632192). Because Zenodo is developed by the European OpenAIRE program and operated by CERN, it has an anticipated lifespan of multiple decades. In addition, the presence of TinySOL as an unalterable dataset on Zenodo associates it with a digital object identifier (DOI), which is directly citable in scientific publications. For this reason, our implementation of the TinySOL module for mirdata points to the Zenodo repository as a reliable source.

A track from TinySOL may be loaded as follows:

from mirdata import tinysol
data_home = "mir_datasets/TinySOL"
tinysol.download(data_home=data_home)
dataset = tinysol.load()
track = dataset["Fl-ord-C4-mf-N-T14d"]

The corresponding waveform can be loaded (via librosa [mcfee2020librosa]) by accessing the track.audio property. Likewise, metadata for this track corresponds to other properties such as instrument_abbr, technique_abbr, dynamics, and pitch_id.

Because mirdata is version-controlled and released under a free license, it protects TinySOL against the eventuality of slight alteration, either accidental or deliberate. Indeed, the function mirdata.tinysol.validate() compares all MD5 checksums in the local repository against the public checksums. In case of mismatch, mirdata lists all files which are missing, spurious, or corrupted. Therefore, this function offers a guarantee to researchers that they are using the pristine version of TinySOL.

5.3 Distribution of pre-computed features

In addition to raw audio samples, the OrchideaSOL repository includes:

  • Statistics on the number of samples (also see Figures 4 and 5).

    Figure 4: Distributions of samples by instrument in the OrchideaSOL dataset (samples with mutes are included in the count)
    Figure 5: Distributions of samples by playing technique in the OrchideaSOL dataset. Only playing techniques with more than 200 entries are displayed (12 out of 52).
  • Volume curves for each instrument and playing technique, with plotted cubic least-squares regressions. For all the “ordinario” samples, we also provide the coefficients of the quadratic two-dimensional regression formula for the loudness-weighted RMS R(p, d):

    R(p, d) = a0 + a1·p + a2·d + a3·p^2 + a4·p·d + a5·d^2,

    where p is the MIDI note number and d is a numeric value assigned to each dynamic marking according to a fixed convention. Figure 6 shows some examples of volume curves for the flute (ordinario), together with the quadratic regression surface fitted to its loudness-weighted RMS.

    These formulas can also be used to roughly model instrumental dynamics in other contexts, such as synthesis or sampling.

    Figure 6: Volume curves for the flute (left, cubic regression lines are dashed) and three-dimensional representation of the corresponding regression surface (right).
  • Spectral analysis: for each sample, we provide the first 1024 amplitude bins of its average spectrum (analyzed with window size: 4096 samples, hop size: 2048 samples);

  • MFCC analysis: for each sample, we provide the average of the first 20 MFCCs (analyzed with window size: 4096 samples, hop size: 2048 samples).

  • Spectral envelope: for each sample, we provide the first 1024 bins of the average spectral envelope (analyzed with window size: 4096 samples, hop size: 2048 samples). The spectral envelope is computed by applying a lowpass window of 80 coefficients (liftering) to the real cepstrum and taking the Fourier transform again; briefly, given the signal S, the log-magnitude envelope is F(w80 · F⁻¹(log |F(S)|)), where F denotes the Fourier transform and w80 the lowpass liftering window.

  • Spectral peaks: for each sample, we provide the average of the first 120 peaks of the magnitude spectrum (analyzed with window size equal to 4096 samples and hop size equal to 2048 samples).

  • Spectral moments: for each sample, we provide the average of the first 4 spectral moments [peeters2004large]: centroid, spread, skewness and kurtosis (analyzed with window size equal to 4096 samples and hop size equal to 2048 samples).
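The cepstral-liftering computation of the spectral envelope can be sketched for a single frame as follows. This is a minimal numpy illustration of the procedure described above; the distributed features average it over all frames of a sample.

```python
import numpy as np

def spectral_envelope(frame, n_lifter=80):
    """Log-magnitude spectral envelope of one frame via cepstral liftering:
    keep only the first n_lifter cepstral coefficients (a lowpass window
    on the cepstrum), then transform back to the spectral domain."""
    log_mag = np.log(np.abs(np.fft.rfft(frame)) + 1e-12)
    cep = np.fft.irfft(log_mag)          # real cepstrum
    w = np.zeros_like(cep)
    w[:n_lifter] = 1.0
    w[-(n_lifter - 1):] = 1.0            # keep symmetric negative quefrencies
    return np.fft.rfft(cep * w).real     # smoothed log-magnitude spectrum
```

Applying exp() to the result gives the envelope in linear amplitude; keeping the first 1024 bins matches the distributed feature format.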

6 Conclusion

We have introduced OrchideaSOL, a free subset of the SOL dataset modified and tailored to be reliable in target-based computer-aided composition tasks. We expect the generated scores to be more faithful in real-life orchestral scenarios than with the previous SOL dataset. Moreover, we believe that OrchideaSOL has a strong potential for advancing scientific research in music cognition, music information retrieval, and generative models for sound synthesis.

We would like to thank Hugues Vinet, Jean-Louis Giavitto and Gregoire Beller for sharing with us the collection of internal Ircam reports on the SOL recordings and for agreeing to release OrchideaSOL for free.

References