Towards democratizing music production with AI-Design of Variational Autoencoder-based Rhythm Generator as a DAW plugin

04/01/2020 · by Nao Tokui, et al.

There has been significant progress in music generation techniques utilizing deep learning. However, it is still hard for musicians and artists to use these techniques in their daily music-making practice. This paper proposes a Variational Autoencoder (VAE)-based rhythm generation system, in which musicians can train a deep learning model simply by selecting target MIDI files, then generate various rhythms with the model. The author has implemented the system as plugin software for a DAW (Digital Audio Workstation), namely a Max for Live device for Ableton Live. Selected professional and semi-professional musicians and music producers have used the plugin and confirmed that it is a useful tool for making music creatively. The plugin, source code, and demo videos are available online.


1 Introduction

Since deep learning research took off, researchers and artists have shown that deep learning models are competent at generating content such as images and text. Music is no exception. Many researchers have been working on applications of deep learning to music generation, especially architectures for time-series prediction such as Recurrent Neural Networks (RNNs).

Historically, much of this research focused on generating melodies, typically for piano, usually using recurrent neural networks (RNNs) or their variant, Long Short-Term Memory (LSTM) [6] (for example, [12]). In the last couple of years, researchers have started working on models that generate rhythms and bass lines and, in some cases, both alongside melodies, so that the model can generate music in a more comprehensive form with rhythm, bass, and melody parts [14].

Few studies have tackled the music generation problem in the audio signal domain due to its significant complexity and computational requirements (one of the few exceptions is [10]). Hence, this paper focuses on music generation at the symbolic level. [1] gives a more comprehensive view of current deep learning-based music generation techniques.

These studies also tend to focus on the autonomous nature of the process: they design AI systems that autonomously generate a complete piece of music, or at least a section of one, from a given input (a short sequence of notes as a seed, or a random input vector). Interventions by human musicians are usually not considered.

Based on MusicVAE [14] and GrooVAE [3], the Google Magenta team provides accompanying plugins for Ableton Live (https://magenta.tensorflow.org/studio/ableton-live/) so that musicians can use them to generate short melody and rhythm patterns. Although these plugins can be used in a trial-and-error manner, the parameters they provide are limited: the "temperature" of the sampling process is the only parameter that affects the generated patterns. The user also has to stop playing and hit a button to generate new patterns reflecting new parameter settings.

The generative models (in this case, decoders of Variational Autoencoders [8]) included in the MusicVAE [14] plugins were trained on millions of MIDI files from the publicly available Lakh MIDI Dataset (LMD) [13]. Users of the plugins do not have any control over the MIDI files used in the training process and, consequently, over what kind of music patterns the model is capable of generating.

Considering the limitations of current AI-based music generation tools mentioned above, this paper proposes an easy-to-use deep learning-based assistive music generation tool, which musicians and music producers can use to generate novel music patterns and draw new musical ideas from them.

The tool proposed here has the following advantages:

  1. Users of the tool can train their own AI model by dragging and dropping the MIDI files they want to use for training onto the plugin. In doing so, they gain loose control over the patterns they want the plugin to generate.

  2. The tool provides direct access to the latent space of the trained model. Users can directly specify/modify input latent vectors and generate various rhythm patterns corresponding to the inputs.

  3. The plugin gives realtime feedback. When users move input vectors, the plugin instantly generates new rhythm patterns. Users can listen to/try various rhythms by exploring the latent space.

The author has implemented the tool as a Max for Live device (plugin), M4L.RhythmVAE, for Ableton Live (Fig. 1). Ableton Live (https://www.ableton.com/live/) is one of the most popular DAW (Digital Audio Workstation) applications, and Max for Live (https://www.ableton.com/live/max-for-live/) allows users to extend it with custom devices. The plugin is built with Cycling '74 Max (https://cycling74.com/) and TensorFlow.js (https://www.tensorflow.org/js/). TensorFlow.js enables the integration of powerful machine learning capabilities into the JavaScript runtime of Node for Max (https://docs.cycling74.com/nodeformax/api/) of Cycling '74 Max.

A demo video is available on the author's website.

This paper focuses on generating rhythm patterns, keeping the production of electronic dance music in mind, but the same technique can be easily applied to melody generation and other music genres.

2 Related Works

2.1 Rhythm generation

Historically speaking, the majority of research on music generation deals with melodies and harmonies. Of the 32 papers reviewed in [1] (Chapter 7), only two handle rhythm as their main objective. Recently, however, researchers have started experimenting with rhythm generation techniques.

[2] showed that a simple LSTM architecture trained with rhythm data encoded as strings of characters could successfully generate heavy-metal drum patterns. The paper used a binary representation of nine different drum components. For example, 100000000 and 010000000 represent kick and snare, respectively, and 110000000 means playing kick and snare simultaneously.
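This string encoding can be sketched as follows; the drum names and their order beyond kick and snare are assumptions for illustration, since [2] only fixes the first two positions in the examples above:

```python
# Encode simultaneous drum hits as a 9-character binary string, as in [2].
# Order assumption: kick first, snare second; the remaining seven drums
# are illustrative placeholders.
DRUMS = ["kick", "snare", "hihat_closed", "hihat_open",
         "tom_low", "tom_mid", "tom_high", "clap", "rim"]

def encode_step(active):
    """Return one binary character per drum component: 1 = played."""
    return "".join("1" if d in active else "0" for d in DRUMS)

print(encode_step({"kick"}))           # kick only -> 100000000
print(encode_step({"kick", "snare"}))  # kick and snare together -> 110000000
```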

In [9], Makris et al. proposed an architecture with stacked LSTM layers conditioned by a feedforward layer to generate rhythm patterns. The feedforward layer takes information on the accompanying bass line and the position within the bar. Training rhythm patterns in a binary representation similar to [2] are fed into the LSTM layers. The system allows only five drum components (kick, snare, tom, hi-hat, cymbal).

In [3], Gillick et al. employed a Variational Autoencoder (VAE) model to generate new rhythm patterns. The paper also made use of the encoder-decoder architecture of the VAE and showed that the proposed architecture could "humanize" quantized rhythms by stressing or weakening certain onsets and slightly shifting their timing.

Their model handles a rhythm pattern represented as three matrices: onsets, velocities (strengths of onsets), and timing offsets from the quantized timing (discussed in more detail in Section 3). Their VAE consists of the layers of bidirectional LSTMs used in [14], and the dimension of the latent space z is 256.

Gillick et al. also provide an Ableton Live plugin based on this method, which allows users to generate new rhythms within the Ableton Live DAW environment. However, users cannot train their own models, and their control over the generated patterns is limited. The plugin does not operate in realtime either, i.e., users have to stop playing when they want to generate new patterns.

2.2 Assistive tools

There is little literature on the application of deep learning in the context of assistive tools for music composition, since this line of research is relatively new.

Hadjeres et al. proposed the DeepBach architecture for the generation of J.S. Bach chorales [4]. The architecture combines two LSTMs and two feedforward layers. The user interface of the system was implemented as a plugin for the MuseScore music editor. It allows the user to select generated chorales and control the progressions interactively.

In [16], Vogl et al. proposed the concept of an "intelligent drum machine" utilizing a rhythm generation model based on restricted Boltzmann machines (RBMs) [5]. On their tablet interface, the user can input their own rhythm pattern (seed) in a grid UI and generate variations of it by turning a knob. To generate variations of the seed pattern, the seed is first entered into the visible layer of the RBM; variations for all instruments are then generated at once using Gibbs sampling. The rhythm patterns are handled as sequences of 0s and 1s: 1 means there is an active note, 0 means no active note.

The generated variations are sorted according to their similarity to the seed and their number of active onsets. The authors used a variant of the Hamming distance assessed in [15] as a similarity measure. Finally, all but the 16 most similar patterns in each group are discarded. In this way, the system allows the user to explore variations from sparse patterns to dense ones, and from similar ones to different ones, by turning a knob UI. Users can also play generated rhythms as MIDI output via Ableton's Link (https://www.ableton.com/link/) technology.

Although the concept and goal are similar to those presented in this paper, the system in [16] does not allow users to train their own model, and its representation of rhythm is limited.

3 Implementation

Figure 2: Overview of the VAE architecture used in the plugin. (FC: Fully-connected layer)
Figure 3: A sample rhythm pattern (left) and its encoded matrices (right).

The generative model used in the plugin is a Variational Autoencoder (VAE) [8]. The author implemented the training process of the VAE model within the Max for Live device (plugin). The structure of the VAE is similar to [3]. The encoder encodes batches of three input matrices, which represent drum onsets, velocities (strengths) of onsets, and timing offsets from the given grid (Fig. 2). Once the plugin has finished training with user-provided MIDI files, it uses the decoder of the VAE to generate rhythm patterns (Fig. 2b).

The minimum time step of a drum onset is the 16th note; the encoder quantizes the timing of every onset to the nearest 16th note. Onsets are represented as 0 or 1 (Fig. 3a). The encoder also normalizes the MIDI velocity values of onsets from [0, 127] to [0.0, 1.0] (Fig. 3b). The timing offset value in [-1.0, 1.0) represents the relative distance from the nearest 16th note; -1.0 indicates that the given onset is a 32nd note ahead of the exact timing of the nearest 16th note (Fig. 3c).

The author chose nine typical drum sounds (kick, snare, closed hi-hat, open hi-hat, low tom, mid tom, high tom, clap, rim) and specified a mapping from MIDI note numbers in the General MIDI (GM) convention [11] to these nine drums. The VAE model encodes and decodes two-bar-long rhythm patterns. Since onset timings are quantized to 16th notes, the input and output of the model are 9 × 32 matrices.
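The encoding described above can be sketched as follows; the GM note numbers, the tick resolution (480 PPQ), and the event format are assumptions for illustration, not the plugin's exact implementation (which runs in JavaScript under Node for Max):

```python
import numpy as np

# Assumed subset of the General MIDI percussion map to the nine drums
# (e.g. 36 = Bass Drum -> kick, 38 = Acoustic Snare -> snare).
GM_TO_DRUM = {36: 0, 38: 1, 42: 2, 46: 3, 41: 4, 47: 5, 50: 6, 39: 7, 37: 8}

STEPS = 32        # two bars of 16th notes
STEP_TICKS = 120  # ticks per 16th note at 480 PPQ (assumption)

def encode_pattern(events):
    """Encode (tick, note, velocity) events into the three 9 x 32
    matrices described in Section 3: onsets, velocities, timing offsets."""
    onsets = np.zeros((9, STEPS))
    velocities = np.zeros((9, STEPS))
    offsets = np.zeros((9, STEPS))
    for tick, note, velocity in events:
        if note not in GM_TO_DRUM:
            continue                                  # unmapped drum sound
        drum = GM_TO_DRUM[note]
        step = int(round(tick / STEP_TICKS)) % STEPS  # nearest 16th note
        onsets[drum, step] = 1.0
        velocities[drum, step] = velocity / 127.0     # [0,127] -> [0,1]
        # relative distance from the grid in [-1, 1): half a step = 32nd note
        offsets[drum, step] = (tick - step * STEP_TICKS) / (STEP_TICKS / 2)
    return onsets, velocities, offsets
```

For example, a kick exactly on the downbeat yields an offset of 0.0, while a snare 10 ticks early relative to its nearest 16th note yields a small negative offset.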

The main difference from the architecture proposed in [3] is that the plugin adopts simple fully connected feedforward layers instead of bidirectional LSTMs, in favor of faster training in the CPU environment of TensorFlow.js in Node for Max. More precisely, both the encoder and decoder have two feedforward layers with 512 nodes each, with batch normalization [7] and LeakyReLU activation.

The VAE model has a 2-dimensional latent space: the encoder encodes the input matrices into 2-dimensional vectors, and the decoder reconstructs the input matrices from these 2D vectors. The author deliberately reduced the dimension of the latent space to 2, from 256 in the model proposed in [3], so that users of the plugin can directly control the latent vector with a standard XY-pad UI (Fig. 1d).
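A minimal numpy sketch of such a decoder's forward pass, mapping a 2-D latent vector (an XY-pad position) to onset, velocity, and offset matrices. The weights here are untrained random placeholders, batch normalization is omitted for brevity, and the output activations are assumptions; the actual plugin runs a trained model in TensorFlow.js:

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

# Two feedforward layers of 512 units, then a projection to 9 x 32 x 3
# outputs (onset, velocity, offset per drum and step).
W1 = rng.normal(0, 0.1, (2, 512));          b1 = np.zeros(512)
W2 = rng.normal(0, 0.1, (512, 512));        b2 = np.zeros(512)
W3 = rng.normal(0, 0.1, (512, 9 * 32 * 3)); b3 = np.zeros(9 * 32 * 3)

def decode(z):
    """Decode a 2-D latent vector into onset probabilities, velocities,
    and timing offsets."""
    h = leaky_relu(z @ W1 + b1)
    h = leaky_relu(h @ W2 + b2)
    out = (h @ W3 + b3).reshape(9, 32, 3)
    onsets = 1.0 / (1.0 + np.exp(-out[..., 0]))  # sigmoid -> probabilities
    return onsets, out[..., 1], out[..., 2]

# An XY-pad position is simply a 2-D latent vector fed to the decoder.
onsets, velocities, offsets = decode(np.array([0.3, -0.7]))
```

Because the input is just two numbers, every knob position on the pad corresponds to exactly one point in the latent space.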

Figure 4: Automation of the progression of input latent vector for VAE decoder.

To use the plugin, users select the MIDI files they want to use and drag and drop them onto the plugin (Fig. 1a). The plugin expects MIDI files conforming to the General MIDI convention [11], e.g., MIDI note number 36 corresponds to Bass Drum and 40 to Electric Snare. It ignores channels other than channel #10, which is dedicated to the drum track in the GM format. Every note onset in channel #10 is mapped to one of the nine drums according to the pre-defined drum mapping.
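The channel filtering can be sketched as follows, with note-on events represented as plain (channel, note, velocity) tuples for illustration; MIDI channel #10 becomes index 9 when channels are counted from 0, and the drum-note set is an assumed GM subset:

```python
# Assumed GM note numbers for the plugin's nine drums.
DRUM_NOTES = {36, 38, 42, 46, 41, 47, 50, 39, 37}

def drum_events(events):
    """Keep only note-on events on MIDI channel #10 (index 9) that map
    to one of the nine drums; everything else is ignored."""
    return [(note, vel) for ch, note, vel in events
            if ch == 9 and note in DRUM_NOTES and vel > 0]

events = [(9, 36, 100),  # kick on the drum channel -> kept
          (0, 60, 90),   # melody note on channel 1 -> ignored
          (9, 38, 80),   # snare on the drum channel -> kept
          (9, 99, 50)]   # unmapped percussion note -> ignored
```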

Once the plugin has loaded the MIDI files, users can start training a VAE model by simply clicking a start button. Training and validation losses are plotted on respective graphs. Once training for the user-specified number of epochs has finished, the user can generate rhythms with the decoder of the trained VAE by moving a knob on the XY-pad representing the 2D latent space of the VAE.

The internal sequencer of the plugin plays the generated rhythm through MIDI output in sync with Ableton Live's global sequencer. It plays each drum onset slightly before or after the exact timing of the 16th note, depending on the "timing offset" output of the decoder.
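The timing behaviour described above can be sketched as follows; the tempo and the conversion to absolute seconds are assumptions for illustration, since the plugin itself syncs to Ableton Live's transport rather than computing absolute times:

```python
def onset_time(step, offset, bpm=120):
    """Playback time in seconds for a 16th-note step with a timing offset.
    An offset of -1.0 shifts the onset a 32nd note (half a step) earlier,
    +1.0 would shift it half a step later."""
    step_seconds = 60.0 / bpm / 4  # duration of one 16th note
    return step * step_seconds + offset * step_seconds / 2

print(onset_time(4, 0.0))   # exactly on beat 2 at 120 BPM: 0.5 s
print(onset_time(4, -1.0))  # a 32nd note ahead of beat 2: 0.4375 s
```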

When users gradually move the XY-pad knob, the decoder generates continuously changing rhythm patterns in realtime, so that users can create musical progressions on the fly. The movement of the XY-pad knob, i.e., the transition in the latent space, can be recorded and played back afterward as part of the automation mechanism of Ableton Live's sequencer (Fig. 4).
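A recorded knob movement amounts to a path through the latent space. A minimal sketch of such a path as a linear interpolation between two XY-pad positions (an assumption for illustration; recorded automation can follow any curve):

```python
import numpy as np

def latent_path(start, end, steps):
    """Linearly interpolate between two XY-pad positions, yielding a
    sequence of 2-D latent vectors; feeding each to the decoder gives
    a smoothly evolving rhythm pattern."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    return [start + (end - start) * t / (steps - 1) for t in range(steps)]

# Sweep the knob from one corner of the pad to the other in five steps.
path = latent_path([-1.0, -1.0], [1.0, 1.0], steps=5)
```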

4 Discussion

4.1 Real world test

In November 2019, the author co-organized a workshop session with selected musicians from around the world as part of the MUTEK AI Music Lab Tokyo (https://mutek.jp/en/news/archives/50) and asked them to use the plugin.

Feedback from the participants was mainly positive, and one participant stated that the plugin gave him new musical ideas. Some participants used the plugin in realtime on stage during the final presentation; a short video documentation of these performances is available online: https://youtu.be/RnlJF1YU6JU.

A more empirical analysis of the plugin and user questionnaires have yet to be conducted.

Some of the complete tracks made by workshop participants using rhythm patterns generated by the plugin are available online.

4.2 Model size and controllability

One could argue that the VAE model the author adopted lacks the capacity to generalize over a wide range of rhythm pattern types, since it has only a 2-dimensional latent space, chosen in favor of usability with the XY pad. The smaller latent space leads to a severe information bottleneck. Still, in the practical use case of this tool, where average users do not have millions of MIDI files, the author assumes the downside of this bottleneck is negligible.

The author also observed that musicians do not need generic generative models. They usually prefer generative models that enable them to explore specific kinds of music (in this case, rhythm patterns). (One participant even said, "I love overfitting.")

At the same time, if we further explore this idea of easy-to-use AI music tools for musicians and try to generate more complex music patterns, the limitation of the 2-dimensional latent space represented on an XY-pad will become problematic. This poses the challenging question of how to adopt higher-dimensional latent spaces while maintaining controllability for the user. This problem shall be one of the next challenges the author will tackle.

5 Conclusion

To let musicians train and use their own rhythm generation models, this paper proposed a VAE-based rhythm generator as a Max for Live device, a plugin for the popular DAW software Ableton Live. Musicians have tested the plugin and found it useful as a tool for seeking new ideas and extending their musical creativity.

The author believes that this research and plugin are among the first stepping stones towards a more democratized use of AI in creative practices. In the near future, the author hopes that every artist and creator will be able to train their own small AI models suited to their needs, instead of using generic models someone else has trained, and explore new creative ideas with them.

6 Acknowledgments

This research was funded by the Keio Research Institute at SFC Startup Grant and Keio University Gakuji Shinko Shikin grant. The author wishes to thank Stefano Kalonaris and Max Frenzel for inspiration. The author also thanks Maurice Jones and Natalia Fuchs for organizing MUTEK AI Music Lab in Tokyo.

References

  • [1] J.-P. Briot, G. Hadjeres, and F. Pachet. Deep learning techniques for music generation. Springer, 2019.
  • [2] K. Choi, G. Fazekas, and M. Sandler. Text-based LSTM networks for Automatic Music Composition. Proceedings of the 1st Conference on Computer Simulation of Musical Creativity, 2016.
  • [3] J. Gillick, A. Roberts, J. Engel, D. Eck, and D. Bamman. Learning to Groove with Inverse Sequence Transformations. Proceedings of the 36th International Conference on Machine Learning, 2019.
  • [4] G. Hadjeres, F. Pachet, and F. Nielsen. DeepBach: A steerable model for Bach chorales generation. In 34th International Conference on Machine Learning, 2017.
  • [5] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
  • [6] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
  • [7] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In 32nd International Conference on Machine Learning, 2015.
  • [8] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, 2014.
  • [9] D. Makris, M. Kaliakatsos-Papakostas, I. Karydis, and K. L. Kermanidis. Conditional neural sequence learners for generating drums’ rhythms. Neural Computing and Applications, 2019.
  • [10] R. Manzelli, V. Thakkar, A. Siahkamari, and B. Kulis. Conditioning deep generative raw audio models for structured automatic music. In Proceedings of the 19th International Society for Music Information Retrieval Conference, 2018.
  • [11] MIDI Association. General MIDI 1 Sound Set. https://www.midi.org/specifications-old/item/gm-level-1-sound-set, 1991.
  • [12] S. Oore, I. Simon, S. Dieleman, D. Eck, and K. Simonyan. This time with feeling: learning expressive musical performance. Neural Computing and Applications, 2018.
  • [13] C. Raffel and D. P. Ellis. Extracting ground truth information from MIDI files: A MIDIfesto. In Proceedings of the 17th International Society for Music Information Retrieval Conference, 2016.
  • [14] A. Roberts, J. Engel, C. Raffel, C. Hawthorne, and D. Eck. A hierarchical latent vector model for learning long-term structure in music. In 35th International Conference on Machine Learning, 2018.
  • [15] G. Toussaint. A Comparison of Rhythmic Dissimilarity Measures. Forma, 2006.
  • [16] R. Vogl and P. Knees. An Intelligent Drum Machine for Electronic Dance Music Production and Performance. In NIME 2017 Proceedings of the International Conference on New Interfaces for Musical Expression, 2017.