A digital audio workstation (DAW) is a software system that integrates most music production tasks, including composing, recording, editing, adjusting effects, and exporting audio files. An audio engineer typically uses a mouse and keyboard or an expensive mixing console to carry out these tasks, making it difficult to efficiently explore the large action space of effects and their parameters. Moreover, some digital instruments and effects are platform-specific, such as Audio Units on macOS or LV2 plug-ins on Linux. An ideal batch-processing audio framework for machine learning should both overcome the hurdles of mouse-and-keyboard interfaces and unify instruments and effects across all platforms.
One project in this domain is RenderMan, a Python module which served as the starting codebase for DawDreamer. RenderMan uses the JUCE framework for rendering audio from VST2 instruments (VST is short for Virtual Studio Technology, an audio plug-in software interface licensed by Steinberg Media Technologies). RenderMan played a crucial role in research on software synthesizer presets [21, 3, 13] and massive audio generation, but its development has been slow to branch into other aspects of music production such as bussing, the summation of audio tracks as an intermediate step in a mixing procedure. Other researchers tried RenderMan but transitioned to a Max/MSP method after encountering audio artifacts.
FluidSynth is a sample-based synthesizer engine with command-line support, but its reliance on SoundFont samples limits broader applications. Pedalboard is a newer project with similarities to RenderMan and DawDreamer. It has a promising future but currently lacks support for Faust, parameter automation, efficient time-stretching and pitch-bending, and generalized bussing (audio processor graph building).
DawDreamer aims to address the limitations of other tools and expand the capabilities of Python interfaces which emulate DAWs. Users can compose graphs of audio processors and record multiple processors at once in a single forward-pass. Therefore one pass can efficiently produce mixed and unmixed audio tracks, which is ideal for machine learning pipelines. Graphs can be reused, and processors’ settings can be adjusted for subsequent passes. Parameter automation, which is the automatic changing of parameters over time, can be accomplished by specifying control signals as numpy arrays.
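To illustrate, a control signal for parameter automation is simply an audio-rate numpy array. The sketch below builds two such signals; the sample rate, duration, and parameter targets are arbitrary choices for this example, and the exact method for attaching a signal to a processor depends on the processor type, so consult DawDreamer's documentation:

```python
import numpy as np

SAMPLE_RATE = 44100
DURATION = 3.0  # seconds
num_samples = int(SAMPLE_RATE * DURATION)

# Linear ramp opening a filter cutoff from 200 Hz to 8000 Hz over the render.
cutoff_automation = np.linspace(200.0, 8000.0, num_samples, dtype=np.float32)

# A 0.5 Hz sine LFO sweeping a dry/wet mix between 0 and 1.
t = np.arange(num_samples) / SAMPLE_RATE
mix_automation = (0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t)).astype(np.float32)
```

Because these signals are ordinary arrays, they can also be generated by a model, concatenated, or reused across render passes.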
DawDreamer introduces some audio processors not available in other packages. In the following sections, we will describe the support for (1) arbitrary VST instruments and effects, (2) Faust code, (3) time-stretching and pitch-warping.
2.1 Virtual Studio Technology
Like RenderMan, DawDreamer supports VST instruments, but it also supports VST effects. Furthermore, it supports VST effects that take multiple inputs such as a sidechain compressor that attenuates the volume of one input according to the loudness of another.
Faust (Functional AUdio STream) is a programming language for real-time signal processing. Faust's built-in libraries include functions for reverbs, compressors, oscillators, filters, ambisonics, Yamaha DX7 emulation, and more (https://faustlibraries.grame.fr).
DawDreamer uses the libfaust backend to compile Faust code just-in-time. Elements in the Faust source code that would usually designate user interfaces, such as sliders or toggles, instead become parameters which can be automated according to numpy arrays.
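A small example makes the mapping concrete. The Faust program below (a sawtooth through a lowpass filter, using the standard libraries) declares two `hslider` user-interface elements; in DawDreamer those surface as automatable parameters rather than GUI widgets. The regex here is purely illustrative of that discovery step, since DawDreamer itself finds the parameters through libfaust at compile time:

```python
import re

# A small Faust program: a sawtooth oscillator into a lowpass filter.
faust_code = """
import("stdfaust.lib");
freq = hslider("freq", 440, 50, 2000, 0.01);
cutoff = hslider("cutoff", 1000, 20, 20000, 1);
process = os.sawtooth(freq) : fi.lowpass(1, cutoff);
"""

# Illustrative only: pull the slider names out with a regex. DawDreamer
# discovers these parameters via libfaust when the code is JIT-compiled.
param_names = re.findall(r'hslider\("([^"]+)"', faust_code)
print(param_names)  # ['freq', 'cutoff']
```

Each discovered parameter can then be set to a constant or driven by a numpy control signal for the duration of a render.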
This same coupling between Faust user interfaces and DawDreamer enables easy control of polyphonic Faust instruments. A developer can write Faust code with a single voice of polyphony in mind and provide MIDI notes from Python or from a MIDI file. All of the voice allocation is done automatically.
The Faust examples in DawDreamer include a sidechain compressor, a polyphonic wavetable synthesizer, and a polyphonic sampler instrument. The synthesizer's wavetable and the sampler's sample can be specified with numpy arrays. The sampler example shows how simply MIDI-triggered ADSR envelopes and note information can modulate the sample's pitch, volume, and filter cutoff. One no longer needs to compose numpy functions to slice, fade, or filter short audio samples in order to emulate a basic sampler.
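For a sense of the boilerplate this saves, the hand-rolled numpy work that the sampler example replaces looks roughly like the following minimal linear ADSR envelope; the segment shapes and timings are illustrative, not DawDreamer's implementation:

```python
import numpy as np

def linear_adsr(num_samples, sr, attack=0.01, decay=0.05, sustain=0.7, release=0.1):
    """Build a linear attack-decay-sustain-release envelope as a numpy array."""
    a = int(attack * sr)
    d = int(decay * sr)
    r = int(release * sr)
    s = max(num_samples - a - d - r, 0)
    env = np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack ramp up
        np.linspace(1.0, sustain, d, endpoint=False),  # decay to sustain level
        np.full(s, sustain),                           # hold sustain
        np.linspace(sustain, 0.0, r),                  # release ramp down
    ])
    return env[:num_samples]

sr = 44100
sample = np.random.uniform(-1, 1, sr // 2)  # half a second of noise
shaped = sample * linear_adsr(len(sample), sr)
```

A real sampler must additionally handle pitch shifting, filtering, and per-note retriggering, which is exactly the bookkeeping the Faust sampler example absorbs.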
Beyond DawDreamer, Faust code can be compiled for Windows, Linux, macOS, Android, iOS, and many microcontrollers such as Teensy, SHARC, and Bela, and most recently for FPGAs (https://fast.grame.fr). It can also be exported to many project formats and languages, including JUCE, Max, VCV Rack, Rust, Julia, SOUL, C, C++, and more (the Faust IDE, https://faustide.grame.fr, is the best way to get started with exporting Faust code). Researchers would be wise not to restrict themselves to VST and LV2 audio plug-ins when Faust can be deployed so widely.
2.3 Time-Stretching and Pitch-Warping
DawDreamer borrows the "warp marker" concept developed by the Ableton Live DAW to provide an easy and efficient interface for time-stretching and pitch-warping audio. Each warp marker pairs a time in seconds with a position measured in beats. Ableton can generate and save warp markers to files with an .asd extension, which we reverse engineered (a companion Python module is available: https://github.com/DBraun/AbletonParsing). Thus, DawDreamer can parse Ableton .asd files and use the Rubber Band Library to pitch-warp and time-stretch the associated audio without writing to the file system as an intermediate step, as prior modules do [7, 12]. The start/end markers and loop positions from the .asd file affect the audio's playback. One can also efficiently re-use the same clip at several places along a global timeline in DawDreamer's renderer.
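The core of the warp-marker idea can be sketched in a few lines: between consecutive markers the beat-to-time mapping is piecewise linear, so locating a beat position is an interpolation. The marker values below are made up for illustration:

```python
import numpy as np

# Hypothetical warp markers: (time in seconds, position in beats).
markers = [(0.0, 0.0), (2.1, 4.0), (4.0, 8.0)]

times = np.array([m[0] for m in markers])
beats = np.array([m[1] for m in markers])

def beat_to_seconds(beat):
    """Map a beat position to seconds by interpolating between warp markers."""
    return float(np.interp(beat, beats, times))

print(beat_to_seconds(2.0))  # 1.05, halfway between the first two markers
```

Note that the first segment here runs slightly slower than the second, which is precisely the kind of tempo drift warp markers exist to describe; a time-stretcher such as Rubber Band then resamples each span to a target tempo.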
3 Potential Use Cases
3.1 Generative Mash-ups and Music Information Retrieval
Research on adversarial semi-supervised audio source separation would benefit from more ways to generate mixed and unmixed tracks with variations in timing and pitch. Therefore, we provide a Jupyter notebook that tempo-matches and mixes a cappella and instrumental pairs according to an L2 distance combining their proximity in beats per minute and on the musical circle of fifths (an automatically annotated example output can be seen at https://youtu.be/HkK2ocYSUL0).
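A pairing score of this kind can be sketched as follows. The key encoding, weights, and the `pair_score` helper are illustrative assumptions, not the notebook's exact formula:

```python
import math

# Major keys positioned around the circle of fifths (illustrative encoding).
CIRCLE = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def fifths_distance(key_a, key_b):
    """Steps between two keys around the circle of fifths (0 to 6)."""
    diff = abs(CIRCLE.index(key_a) - CIRCLE.index(key_b))
    return min(diff, len(CIRCLE) - diff)

def pair_score(bpm_a, key_a, bpm_b, key_b, bpm_weight=1.0, key_weight=10.0):
    """L2 distance combining tempo proximity and harmonic proximity."""
    return math.hypot(bpm_weight * (bpm_a - bpm_b),
                      key_weight * fifths_distance(key_a, key_b))

# A vocal at 120 BPM in C pairs better with an instrumental at 122 BPM in G
# than with one at 121 BPM in F#, despite the larger tempo gap.
print(pair_score(120, "C", 122, "G") < pair_score(120, "C", 121, "F#"))  # True
```

Lower scores indicate better candidates for a mash-up; the chosen instrumental can then be tempo-matched to the vocal with the time-stretching interface described above.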
A researcher of universal music source separation could use DawDreamer and generative music composition networks to create ground truth mixtures of tens of audio tracks rather than the common four (vocals, drums, bass, and other). With adversarial learning, these generated mixtures could become increasingly realistic and helpful for source separation, transcription, lyrics alignment, instrument identification, cover identification, and more.
3.2 Intelligent Music Production
In the task of automatic audio mastering, DeepAFx achieved high-quality results through gradient approximation of a fixed series of LV2 audio effects. DeepAFx also succeeded at picking plug-in parameters to match a guitar pedal's distortion. In both cases, DawDreamer could learn the same mastering chain or compressor with Faust effects, and thanks to Faust, the learned effect could then be deployed easily to more microcontrollers.
DawDreamer has potential applications in not only intelligent effects but also intelligent signal generators. Previous research on synthesizer parameter inference or exploration [21, 3, 13, 6, 17] has been constrained by black-box compiled synthesizer code and plug-in formats, but DawDreamer can run arbitrary signal generators written with Faust. For example, the Slakh project relied on presets and sample packs for Native Instruments' Kontakt plug-in, but DawDreamer can pass audio samples to polyphonic Faust signal generator code, and both the samples and the code could themselves be learned.
Much of music production is a series of actions taken inside a DAW environment (perhaps Reinforcement Learning researchers can begin to think of the DAW as an environment, just like an Atari video game), yet some ML researchers study musical audio as a raw series of numbers. To be fair, this domain-agnosticism helps models generalize to other domains, but it forfeits the helpful inductive biases from understanding music as the interaction of MIDI notes, sample packs, signal chains, effects, and parameter settings. Those building blocks and domain knowledge form a large part of the DNA of music. Researchers can now use DawDreamer as the physically unconstrained software engine that grows musical DNA into fully-realized audio data.
The author thanks Leon Fedden for starting RenderMan and making it open-source; Julius O. Smith III and Stéphane Letz for their support with Faust; Christian Steinmetz and Chris Donahue for their feedback on the manuscript.
- [Online; accessed 12-September-2021].
- (2010-02-19).
- (2019) Universal audio synthesizer control with normalizing flows. CoRR abs/1907.00971.
- (2017-12) fedden/RenderMan: the v1.0.0 release for publication of paper. Zenodo.
- (2011) FluidSynth real-time and thread safety challenges. In Proceedings of the 9th International Linux Audio Conference, Maynooth University, Ireland, pp. 123–128.
- (2014) Active learning of intuitive control knobs for synthesizers using Gaussian processes. In Proceedings of the 19th International Conference on Intelligent User Interfaces, pp. 115–124.
- (2019) [accessed 12-September-2021].
- (2017) Polyphony, sample-accurate control and MIDI support for FAUST DSP using combinable architecture files.
- (2013-05) Comment embarquer le compilateur Faust dans vos applications ?
- (2019) Cutting music source separation some Slakh: a dataset to study the impact of training data quality and quantity. In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA).
- (2021) Differentiable signal processing with black-box audio effects. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 66–70.
- (2015) Pyrubberband.
- (2021) SerumRNN: step by step audio VST effect programming. CoRR abs/2104.03876.
- (2009-01) FAUST: an efficient functional approach to DSP programming.
- (2019-08) MUSDB18-HQ: an uncompressed version of MUSDB18.
- (2020) Blind arbitrary reverb matching.
- (2021) Designing deep reinforcement learning for human parameter exploration. ACM Transactions on Computer-Human Interaction (TOCHI) 28(1), pp. 1–35.
- (2021).
- (2017) Adversarial semi-supervised audio source separation applied to singing voice extraction. CoRR abs/1711.00048.
- (2010) JUCE: Jules' Utility Class Extensions. London, U.K.
- (2018) Automatic programming of VST sound synthesizers using deep networks and other techniques. IEEE Transactions on Emerging Topics in Computational Intelligence 2(2), pp. 150–159.