DawDreamer: Bridging the Gap Between Digital Audio Workstations and Python Interfaces

11/18/2021
by David Braun
Stanford University

Audio production techniques which previously only existed in GUI-constrained digital audio workstations, livecoding environments, or C++ APIs are now accessible with our new Python module called DawDreamer. DawDreamer therefore bridges the gap between real sound engineers and coders imitating them with offline batch-processing. Like contemporary modules in this domain, DawDreamer can create directed acyclic graphs of audio processors such as VSTs which generate or manipulate audio streams. DawDreamer can also dynamically compile and execute code from Faust, a powerful signal processing language which can be deployed to many platforms and microcontrollers. We discuss DawDreamer's unique features in detail and potential applications across music information retrieval including source separation, transcription, and audio effect parameter inference. We provide fully cross-platform PyPI installers, a Linux Dockerfile, and an example Jupyter notebook.


1 Introduction

A digital audio workstation (DAW) is a software system which integrates most music production tasks, including composing, recording, editing, adjusting effects, and exporting to audio files. An audio engineer typically uses a mouse and keyboard or an expensive mixing console to carry out these tasks, making it difficult to efficiently explore the large action space of effects and their parameters. Moreover, some digital instruments and effects are platform-specific, such as Audio Units on macOS or LV2 plug-ins on Linux. The ideal batch-processing audio framework with relevance to machine learning should both overcome the hurdles of mouse-and-keyboard interfaces and unify instruments and effects across all platforms.

One project in this domain is RenderMan [4], a Python module which served as the starting codebase for DawDreamer. RenderMan uses the JUCE [20] framework for rendering audio from VST instruments (VST is short for Virtual Studio Technology, an audio plug-in software interface licensed by Steinberg Media Technologies). RenderMan played a crucial role in research on software synthesizer presets [21, 3, 13] and massive audio generation [10], but its development has been slow to branch into other aspects of music production such as bussing, the summation of audio tracks as an intermediate step in a mixing procedure. Other researchers tried RenderMan but transitioned to a Max/MSP method after encountering audio artifacts [16].

FluidSynth [5] is a sample-based synthesizer engine with command-line support, but its reliance on SoundFont samples limits broader applications. Pedalboard [18] is a new project with similarities to RenderMan and DawDreamer. It has a promising future but currently lacks support for Faust, parameter automation, efficient time-stretching and pitch-bending, and generalized bussing (audio processor graph building).

2 Features

DawDreamer aims to address the limitations of other tools and expand the capabilities of Python interfaces which emulate DAWs. Users can compose graphs of audio processors and record multiple processors at once in a single forward pass. One pass can therefore efficiently produce mixed and unmixed audio tracks, which is ideal for machine learning pipelines. Graphs can be reused, and processors' settings can be adjusted for subsequent passes. Parameter automation, which is the automatic changing of parameters over time, can be accomplished by specifying control signals as numpy arrays.
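
As a sketch of what such a control signal looks like, the following builds a sample-rate linear ramp between two parameter values (plain Python lists standing in for numpy arrays; the function name is illustrative, not part of DawDreamer's API):

```python
def automation_ramp(start, end, seconds, sample_rate=44100):
    """Linearly interpolate a parameter value over time, one value per audio sample."""
    n = int(seconds * sample_rate)
    if n <= 1:
        return [end] * max(n, 0)
    step = (end - start) / (n - 1)
    return [start + i * step for i in range(n)]

# A one-second sweep of a (hypothetical) filter-cutoff parameter, 200 Hz -> 2000 Hz:
ramp = automation_ramp(200.0, 2000.0, 1.0)
```

A curve like this, one value per sample, would be handed to the renderer so the parameter changes smoothly during the pass rather than being fixed per render.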

DawDreamer introduces some audio processors not available in other packages. In the following sections, we will describe the support for (1) arbitrary VST instruments and effects, (2) Faust code, (3) time-stretching and pitch-warping.

2.1 Virtual Studio Technology

Like RenderMan, DawDreamer supports VST instruments, but it also supports VST effects. Furthermore, it supports VST effects that take multiple inputs such as a sidechain compressor that attenuates the volume of one input according to the loudness of another.
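
The sidechain idea can be stated in a few lines: attenuate one signal wherever a second signal is loud. The following is a crude sample-by-sample sketch of that behavior (an illustration of the concept only, not DawDreamer's or any VST's actual compressor algorithm, which would include attack/release smoothing):

```python
def sidechain(main, side, threshold=0.5, ratio=4.0):
    """Reduce the gain of `main` wherever `side` exceeds `threshold`."""
    out = []
    for m, s in zip(main, side):
        level = abs(s)
        if level > threshold:
            # Compressed level = threshold plus the overshoot divided by the ratio;
            # the gain is the ratio of compressed level to input level.
            gain = (threshold + (level - threshold) / ratio) / level
        else:
            gain = 1.0
        out.append(m * gain)
    return out
```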

2.2 Faust

Faust (Functional AUdio STream) is a programming language for real-time signal processing [14]. Faust's built-in libraries include functions for reverbs, compressors, oscillators, filters, ambisonics, Yamaha DX7 emulation, and more (https://faustlibraries.grame.fr).

DawDreamer uses the libfaust [9] backend to compile Faust code just-in-time. Elements in the Faust source code that would usually designate user interfaces such as sliders or toggles instead become parameters which can be automated according to numpy arrays.

This same coupling between Faust user interfaces and DawDreamer enables easy control of polyphonic Faust instruments [8]. A developer can write Faust code with a single voice of polyphony in mind and provide MIDI notes from Python or from a MIDI file. All of the voice allocation is done automatically.
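
What "voice allocation" entails can be sketched with a small round-robin allocator that assigns incoming MIDI notes to a fixed pool of voices, stealing the oldest voice when the pool is exhausted (an illustration of the concept, not libfaust's implementation):

```python
class VoiceAllocator:
    """Assign MIDI notes to a fixed pool of voices, stealing the oldest when full."""

    def __init__(self, num_voices):
        self.num_voices = num_voices
        self.active = []  # list of (voice_index, pitch), oldest first

    def note_on(self, pitch):
        used = {v for v, _ in self.active}
        free = [v for v in range(self.num_voices) if v not in used]
        if free:
            voice = free[0]
        else:
            voice, _ = self.active.pop(0)  # steal the oldest active voice
        self.active.append((voice, pitch))
        return voice

    def note_off(self, pitch):
        for i, (v, p) in enumerate(self.active):
            if p == pitch:
                del self.active[i]
                return v
        return None
```

Because this bookkeeping lives inside the engine, the Faust source only ever describes a single voice.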

The Faust examples in DawDreamer include a sidechain compressor, polyphonic wavetable synthesizer, and polyphonic sampler instrument. The synthesizer's wavetable and the sampler's sample can be specified with numpy arrays. The sampler example shows the simplicity of using MIDI-triggered ADSR envelopes and note information to modulate the sample's pitch, volume, and filter cutoff. One no longer needs to compose numpy functions to slice, fade, or filter short audio samples in order to emulate a basic sampler.
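
The shape of a MIDI-triggered envelope of this kind can be sketched as a piecewise-linear ADSR (a conceptual illustration, not the envelope implementation in Faust's libraries):

```python
def adsr_envelope(attack, decay, sustain, release, gate_len, sr=1000):
    """Piecewise-linear ADSR: the gate is high for gate_len seconds, then releases.

    attack/decay/release are in seconds; sustain is a level in [0, 1].
    """
    env = []
    n_gate = int(gate_len * sr)
    n_rel = int(release * sr)
    for i in range(n_gate):
        t = i / sr
        if t < attack:
            env.append(t / attack)                                   # ramp 0 -> 1
        elif t < attack + decay:
            env.append(1 - (1 - sustain) * (t - attack) / decay)     # ramp 1 -> sustain
        else:
            env.append(sustain)                                      # hold sustain
    last = env[-1] if env else 0.0
    for i in range(n_rel):
        env.append(last * (1 - (i + 1) / n_rel))                     # ramp down to 0
    return env
```

Multiplying a voice's output by such an envelope (and analogous curves for pitch and filter cutoff) is exactly the per-note modulation the sampler example provides for free.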

Beyond DawDreamer, Faust code can be compiled for Windows, Linux, macOS, Android, iOS, and many microcontrollers such as Teensy, SHARC, Bela, and most recently FPGAs (https://fast.grame.fr). It can also be exported in many project formats and languages such as JUCE, Max, VCV Rack, Rust, Julia, SOUL, C, C++, and more; the Faust IDE (https://faustide.grame.fr) is the best way to get started with exporting Faust code. Researchers would be wise not to restrict themselves to VST and LV2 audio plug-ins when Faust can be deployed so widely.

2.3 Time-Stretching and Pitch-Warping

DawDreamer borrows the "warp marker" concept developed by the Ableton Live DAW [2] to provide an easy and efficient interface for time-stretching and pitch-warping audio. Each warp marker pairs a time in seconds with a position measured in beats. Ableton can generate and save warp markers to files with an .asd extension, which we reverse-engineered (a companion Python module is available: https://github.com/DBraun/AbletonParsing). Thus, DawDreamer can parse Ableton .asd files and use the Rubber Band Library [1] to pitch-warp and time-stretch the associated audio without writing to the file system as an intermediate step, as prior modules do [7, 12]. The start/end markers and loop positions from the .asd file affect the audio's playback. One can also efficiently re-use the same clip at several places along a global timeline in DawDreamer's renderer.
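
The mapping a warp marker defines is simple: positions between two markers are interpolated linearly. The following sketch converts a beat position to seconds given a sorted marker list (an illustration of the mapping only, not Rubber Band's stretching algorithm; the marker values below are made up):

```python
def beat_to_seconds(markers, beat):
    """Map a beat position to seconds by interpolating between warp markers.

    `markers` is a list of (seconds, beats) pairs sorted by beats.
    """
    for (s0, b0), (s1, b1) in zip(markers, markers[1:]):
        if b0 <= beat <= b1:
            frac = (beat - b0) / (b1 - b0)
            return s0 + frac * (s1 - s0)
    # Outside the marked region: extrapolate from the nearest segment.
    (s0, b0), (s1, b1) = markers[-2:] if beat > markers[-1][1] else markers[:2]
    return s0 + (beat - b0) * (s1 - s0) / (b1 - b0)

# Two markers: beat 0 at 0.0 s and beat 4 at 2.0 s, i.e. a 120 BPM clip.
markers = [(0.0, 0.0), (2.0, 4.0)]
```

Given such a mapping, a renderer can place each beat of a clip at the right moment on a global timeline and hand the implied stretch ratios to a time-stretching library.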

3 Potential Use Cases

3.1 Generative Mash-ups and Music Information Retrieval

Research on adversarial semi-supervised audio source separation would benefit from more ways to generate mixed and unmixed tracks with variations in timing and pitch [19]. Therefore, we provide a Jupyter notebook (an automatically annotated example output can be seen at https://youtu.be/HkK2ocYSUL0) that tempo-matches and mixes a cappella and instrumental pairs according to an L2 distance combining their proximity in beats per minute and on the musical circle of fifths.
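
Such a pairing score might look like the following (an illustrative re-implementation of the idea, not the notebook's exact weights or logic):

```python
import math

def fifths_distance(pc_a, pc_b):
    """Steps between two pitch classes around the circle of fifths (0-6).

    A pitch class pc sits at position (pc * 7) % 12 on the circle, since
    each step of a fifth spans 7 semitones.
    """
    diff = abs((pc_a * 7) % 12 - (pc_b * 7) % 12)
    return min(diff, 12 - diff)

def pair_score(bpm_a, bpm_b, pc_a, pc_b, w_bpm=1.0, w_key=1.0):
    """L2 distance combining BPM proximity and circle-of-fifths proximity."""
    return math.hypot(w_bpm * (bpm_a - bpm_b), w_key * fifths_distance(pc_a, pc_b))
```

Pairs with the lowest score (close tempos, harmonically adjacent keys) are the best mash-up candidates; the winner is then tempo-matched with the warp-marker machinery from Section 2.3.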

A researcher of universal music source separation could use DawDreamer and generative music composition networks to create ground truth mixtures of tens of audio tracks rather than the common four (vocals, drums, bass, and other) [15]. With adversarial learning, these generated mixtures could become increasingly realistic and helpful for source separation, transcription, lyrics alignment, instrument identification, cover identification, and more.

3.2 Intelligent Music Production

In the task of automatic audio mastering, DeepAFx achieved high-quality results through gradient approximation of a fixed series of LV2 audio effects [11]. DeepAFx also succeeded at picking plug-in parameters to match a guitar pedal's distortion. In both cases, DawDreamer could learn the same mastering chain or compressor with Faust effects, and thanks to Faust, the effect could be deployed easily to more microcontrollers.

DawDreamer has potential applications in not only intelligent effects but also intelligent signal generators. Previous research on synthesizer parameter inference or exploration [21, 3, 13, 6, 17] has been constrained by black-box compiled synthesizer code and plug-in formats, but DawDreamer can run arbitrary signal generators written with Faust. For example, the Slakh project[10] relied on presets and sample packs for the Native Instruments’ plug-in Kontakt, but DawDreamer can pass audio samples to polyphonic Faust signal generator code, either of which could be learned via some algorithm.

4 Conclusion

Much of music production is a series of actions taken inside a DAW environment (perhaps Reinforcement Learning researchers can also begin to think of the DAW as an environment, just like an Atari video game), yet some ML researchers study musical audio as a raw series of numbers. To be fair, this domain-agnosticism helps models generalize to other domains, but it forfeits the helpful inductive biases from understanding music as the interaction of MIDI notes, sample packs, signal chains, effects, and parameter settings. Those building blocks and domain knowledge form a large part of the DNA of music. Researchers can now use DawDreamer as the physically unconstrained software engine that grows musical DNA into fully-realized audio data.

5 Acknowledgments

The author thanks Leon Fedden for starting RenderMan and making it open-source; Julius O. Smith III and Stéphane Letz for their support with Faust; Christian Steinmetz and Chris Donahue for their feedback on the manuscript.

References

  • [1] Rubber Band Library [Online; accessed 12-September-2021]. Cited by: §2.3.
  • [2] Ableton Live (2010-02-19). Cited by: §2.3.
  • [3] P. Esling, N. Masuda, A. Bardet, R. Despres, and A. Chemla-Romeu-Santos (2019) Universal audio synthesizer control with normalizing flows. CoRR abs/1907.00971. Cited by: §1, §3.2.
  • [4] L. Fedden (2017-12) fedden/RenderMan: the v1.0.0 release for publication of paper. Zenodo. Cited by: §1.
  • [5] D. Henningsson (2011) FluidSynth real-time and thread safety challenges. In Proceedings of the 9th International Linux Audio Conference, Maynooth University, Ireland, pp. 123–128. Cited by: §1.
  • [6] C. A. Huang, D. Duvenaud, K. C. Arnold, B. Partridge, J. W. Oberholtzer, and K. Z. Gajos (2014) Active learning of intuitive control knobs for synthesizers using Gaussian processes. In Proceedings of the 19th International Conference on Intelligent User Interfaces, pp. 115–124. Cited by: §3.2.
  • [7] I. Jordal (2019) [Online; accessed 12-September-2021]. Cited by: §2.3.
  • [8] S. Letz and Y. Orlarey (2017) Polyphony, sample-accurate control and MIDI support for FAUST DSP using combinable architecture files. Cited by: §2.2.
  • [9] S. Letz, D. Fober, and Y. Orlarey (2013-05) Comment embarquer le compilateur Faust dans vos applications ? [How to embed the Faust compiler in your applications?]. Cited by: §2.2.
  • [10] E. Manilow, G. Wichern, P. Seetharaman, and J. Le Roux (2019) Cutting music source separation some Slakh: a dataset to study the impact of training data quality and quantity. In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). Cited by: §1, §3.2.
  • [11] M. A. Martínez Ramírez, O. Wang, P. Smaragdis, and N. J. Bryan (2021) Differentiable signal processing with black-box audio effects. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 66–70. Cited by: §3.2.
  • [12] B. McFee (2015) Pyrubberband. Cited by: §2.3.
  • [13] C. Mitcheltree and H. Koike (2021) SerumRNN: step by step audio VST effect programming. CoRR abs/2104.03876. Cited by: §1, §3.2.
  • [14] Y. Orlarey, D. Fober, and S. Letz (2009-01) FAUST: an efficient functional approach to DSP programming. Cited by: §2.2.
  • [15] Z. Rafii, A. Liutkus, F. Stöter, S. I. Mimilakis, and R. Bittner (2019-08) MUSDB18-HQ - an uncompressed version of MUSDB18. Cited by: §3.1.
  • [16] A. M. Sarroff (2020) Blind arbitrary reverb matching. Cited by: §1.
  • [17] H. Scurto, B. V. Kerrebroeck, B. Caramiaux, and F. Bevilacqua (2021) Designing deep reinforcement learning for human parameter exploration. ACM Transactions on Computer-Human Interaction (TOCHI) 28 (1), pp. 1–35. Cited by: §3.2.
  • [18] Spotify AB (2021) Pedalboard. Cited by: §1.
  • [19] D. Stoller, S. Ewert, and S. Dixon (2017) Adversarial semi-supervised audio source separation applied to singing voice extraction. CoRR abs/1711.00048. Cited by: §3.1.
  • [20] J. Storer (2010) JUCE: Jules' utility class extensions. London, U.K. Cited by: §1.
  • [21] M. J. Yee-King, L. Fedden, and M. d'Inverno (2018) Automatic programming of VST sound synthesizers using deep networks and other techniques. IEEE Transactions on Emerging Topics in Computational Intelligence 2 (2), pp. 150–159. Cited by: §1, §3.2.