Since the early days of computation, composers have explored methods of combining aleatoric music and algorithmic composition with generic computing devices (Agon et al., 2003; Cope, 1989). Authors have taken a wide variety of data-driven approaches to "creative generation" in various domains (Barbieri et al., 2012; Ha & Eck, 2017; Graves, 2013), with extensive application to music modeling (Briot et al., 2017; Roberts et al., 2018; Eck & Schmidhuber, 2002; Sturm et al., 2015; Hadjeres et al., 2016; Boulanger-Lewandowski et al., 2012; Bretan et al., 2017).
In this paper, we focus on the task of harmonic recomposition (Casal & Casey, 2010). Melody generation and evaluation are difficult tasks, even in monophonic music (Jaques et al., 2016), so we use the term harmonic recomposition to emphasize our focus on agreement and structure between voices. Our pipeline is also applicable to purely sequential and iterative generation, as has been shown in prior work (Huang et al., 2017; van den Oord et al., 2017).
1.1 Related Work
Autoregressive models have proven to be powerful distribution estimators for images and sequence data, showing excellent results in generative settings (van den Oord et al., 2016a). They have also performed well in related prior work for polyphonic music generation (Briot et al., 2017). Most related to the work described in this paper is CoCoNet (Huang et al., 2017, 2018), which also uses an autoregressive convolutional model over image-like structures for polyphonic music generation and was a direct inspiration for our approach. One key difference of our approach is our use of a two-stage pipeline (first seen in the work of van den Oord et al. (2017)), which greatly improves training and generation speed and creates an implicit separation between local voice agreement (first stage) and global consistency across measures (second stage).
2 Implementation Details
In this section, we describe the data, model, and training details for our recomposition approach. An open source implementation of our setup (including audio samples) is available online at https://github.com/kastnerkyle/harmonic_recomposition_workshop.
We use a subset of the scores associated with the composer Josquin des Prez, as compiled by the Josquin project (http://josquin.stanford.edu/). Only pieces with a fixed number of parts are considered. We hold out a contiguous block of measures for use as a source of harmonic chord sequences during conditional generation.
After extracting individual measures, we convert each to a "piano roll" style multichannel image: each measure has a fixed number of quantized timesteps (regardless of time signature) on the horizontal axis, and one of the possible tones on the vertical axis, where the tone set comes from all notes used in the key-normalized data (Hadjeres et al., 2016). These values are padded for compatibility with the convolutional strided layers used in the VQ-VAE, and each voice is assigned its own channel in an image-like container described in examples, height, width, and channels (NHWC) format. The overall result can be seen in Fig. 1, where each color represents a separate channel.
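The piano-roll encoding above can be sketched as follows; this is a minimal illustration, with the sizes (32 timesteps, 64 pitches, 4 voices) and the `measure_to_roll` helper standing in for the paper's actual dimensions and implementation, which were not given here:

```python
import numpy as np

# Illustrative sizes: T timesteps per measure, P possible pitches, V voices.
T, P, V = 32, 64, 4

def measure_to_roll(notes, n_time=T, n_pitch=P, n_voice=V):
    """Render one measure as a (height, width, channels) piano-roll array.

    `notes` is a list of (voice, pitch_index, start_step, end_step) tuples,
    quantized to `n_time` steps regardless of time signature; each voice
    gets its own channel.
    """
    roll = np.zeros((n_pitch, n_time, n_voice), dtype=np.float32)
    for voice, pitch, start, end in notes:
        roll[pitch, start:end, voice] = 1.0
    return roll

# One note in the top voice and one in the bottom voice of a single measure.
measure = measure_to_roll([(0, 60, 0, 16), (3, 40, 16, 32)])
batch = measure[None, ...]  # NHWC: (examples, height, width, channels)
```

Stacking many such measures along the first axis yields the NHWC container fed to the VQ-VAE.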
2.2 Conditional Information
We extract the chord function and voicing of all measures using the music21 software package (Cuthbert & Ariza, 2010), and form "function triplets" of the previous, current, and next measure's chords, repeating the boundary chords to handle border issues at the first and last measures.
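The triplet grouping with repeated boundary chords can be sketched as a short helper (the function name and chord labels are illustrative, not from the paper's code):

```python
def chord_triplets(chords):
    """Form (previous, current, next) chord triplets for each measure,
    repeating the first and last chords to handle border measures."""
    padded = [chords[0]] + list(chords) + [chords[-1]]
    return [tuple(padded[i:i + 3]) for i in range(len(chords))]

triplets = chord_triplets(["I", "IV", "V"])
# -> [("I", "I", "IV"), ("I", "IV", "V"), ("IV", "V", "V")]
```

Each triplet is then mapped to integer indices for the conditioning embeddings described below.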
The model pipeline is a two-stage generative setup, as described by van den Oord et al. (2017), wherein an initial stage (denoted VQ-VAE) is trained unconditionally to compress inputs to a spatially reduced, discrete latent representation, and to uncompress it. Once the VQ-VAE stage is trained, we use it to generate a compressed latent for each element in the dataset, and train an autoregressive generative "prior model" on this representation. The prior model learns to generate the components of the latent (which takes the form of a spatial map in this work) one at a time, conditioning each generation step on all previously generated components.
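The quantization step at the heart of the first stage can be sketched as a nearest-neighbour lookup against a learned codebook; this is a minimal numpy illustration of the VQ-VAE discretization (van den Oord et al., 2017), not the trained model itself:

```python
import numpy as np

def vq_quantize(z_e, codebook):
    """Map each spatial encoder output vector to the index of its nearest
    codebook entry, producing the discrete spatial map the prior model
    is trained on.

    z_e: (H, W, D) continuous encoder output; codebook: (K, D) entries.
    Returns an (H, W) integer map.
    """
    dists = ((z_e[..., None, :] - codebook) ** 2).sum(-1)  # (H, W, K)
    return dists.argmin(-1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
z_e = np.array([[[0.1, 0.0], [0.9, 1.0]]])  # a 1x2 spatial map of 2-d vectors
discrete = vq_quantize(z_e, codebook)
```

The decoder then looks the chosen entries back up in the codebook to reconstruct the input.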
The prior model may also take conditioning as one or multiple vectors (separate embeddings for each of the previous, current, and next chords, indexed by a chord integer), as a spatial map (a compressed latent from some previous measure), or as a combination of both during the generation process. The effect of conditioning type can be seen in Fig. 2.
2.4 Experiment Details
The first stage VQ-VAE has 2 strided convolutional layers (striding both spatial axes), followed by an additional 2 layers of non-strided convolution, using rectified linear activations (Glorot et al., 2011) and batch normalization (Ioffe & Szegedy, 2015). The encoder output is quantized against a learned VQ codebook to form the latent representation. This procedure is inverted using transpose convolution for the decoder, and the model is trained with a binary cross-entropy reconstruction loss alongside the codebook and commitment losses of the VQ-VAE, averaged over all channels and spatial dimensions. Training was performed over minibatches with an Adam optimizer (Kingma & Ba, 2014).
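The padding requirement mentioned earlier follows from simple shape arithmetic; the sketch below assumes 'same'-padded layers, with the two stride-2 layers taken from the text and all sizes illustrative:

```python
def downsampled(size, n_strided=2, stride=2):
    """Spatial size after `n_strided` stride-`stride` convolutional layers
    with 'same' padding: each layer ceil-divides the axis by the stride."""
    for _ in range(n_strided):
        size = -(-size // stride)  # ceiling division
    return size

# For the transpose-convolution decoder to invert the encoder's shape
# exactly, each input axis should be divisible by stride**n_strided
# (here 4), which is why the piano-roll measures are padded first.
assert downsampled(32) == 8
```

The same arithmetic determines the height and width of the discrete latent map the prior model must generate.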
In the second stage, a gated conditional PixelCNN (van den Oord et al., 2016b) is used as the prior model. The first layer has no residual connection; layers after the first utilize residual connections (He et al., 2016), followed by convolution, a rectified linear activation, and a final convolution projecting to the number of entries in the VQ codebook. Training was performed over minibatches with an Adam optimizer (configured as before) and a categorical cross-entropy loss averaged over the output.
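The raster-scan masking that makes a PixelCNN autoregressive can be illustrated as follows. This is a generic single-stack mask sketch; the gated PixelCNN of van den Oord et al. (2016b) actually splits the receptive field into vertical and horizontal stacks, and the kernel size here is illustrative:

```python
import numpy as np

def pixelcnn_mask(kernel, mask_type="A"):
    """Build a raster-scan convolution mask: a position may only see
    positions above it, or strictly to its left in the same row.
    Type 'B' masks (used after the first layer) also see the centre."""
    m = np.zeros((kernel, kernel), dtype=np.float32)
    c = kernel // 2
    m[:c, :] = 1.0      # all rows above the centre
    m[c, :c] = 1.0      # same row, strictly left of centre
    if mask_type == "B":
        m[c, c] = 1.0   # later layers may also see the centre position
    return m

mask_a = pixelcnn_mask(3, "A")
# -> [[1, 1, 1],
#     [1, 0, 0],
#     [0, 0, 0]]
```

Multiplying each kernel by its mask before convolving guarantees that the prediction for a latent component depends only on previously generated components.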
We experiment with two types of conditioning combined with the aforementioned architecture. The higher-level information contained in the chord sequences alone appears sufficient to produce directed, coherent trajectories, without the need for spatial conditioning information. When spatial conditioning from the previous timestep is included, the resulting generations are punctuated by dissonant intervals or long silent gaps. Finding better ways to combine local note-level information with chord annotations will be an important step toward improving this pipeline.
Chord-conditional generative models are an ideal fit for harmonic recomposition. We find that a two-stage pipeline reminiscent of van den Oord et al. (2017) and Huang et al. (2017) captures musical structure and allows for chord-conditional generation. Our work demonstrates note-level realizations of given chordal sequences and provides an open-source implementation with examples.
- Agon et al. (2003) Agon, Carlos, Andreatta, Moreno, Assayag, Gérard, and Schaub, Stephan. Formal aspects of Iannis Xenakis’ Symbolic Music: a computer-aided exploration of some compositional processes. Journal of New Music Research, 2003. cote interne IRCAM: Agon03a.
- Barbieri et al. (2012) Barbieri, Gabriele, Pachet, François, Roy, Pierre, and Esposti, Mirko Degli. Markov constraints for generating lyrics with style. In Proceedings of the 20th European Conference on Artificial Intelligence, ECAI'12, 2012.
- Boulanger-Lewandowski et al. (2012) Boulanger-Lewandowski, Nicolas, Bengio, Yoshua, and Vincent, Pascal. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392, 2012.
- Bretan et al. (2017) Bretan, Mason, Oore, Sageev, Engel, Jesse, Eck, Douglas, and Heck, Larry P. Deep music: Towards musical dialogue. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, 2017.
- Briot et al. (2017) Briot, Jean-Pierre, Hadjeres, Gaëtan, and Pachet, François. Deep learning techniques for music generation - A survey. CoRR, abs/1709.01620, 2017.
- Casal & Casey (2010) Casal, David Plans and Casey, Michael. Decomposing autumn: A component-wise recomposition. In ICMC, 2010.
- Cope (1989) Cope, David. Experiments in musical intelligence (emi): Non‐linear linguistic‐based composition. Interface, 1989.
- Cuthbert & Ariza (2010) Cuthbert, Michael Scott and Ariza, Christopher. Music21: A toolkit for computer-aided musicology and symbolic music data. In Proceedings of the 11th International Society for Music Information Retrieval Conference, 2010.
- Eck & Schmidhuber (2002) Eck, Douglas and Schmidhuber, Juergen. Learning the long-term structure of the blues. In Dorronsoro, J. (ed.), Artificial Neural Networks – ICANN 2002 (Proceedings), 2002.
- Glorot et al. (2011) Glorot, Xavier, Bordes, Antoine, and Bengio, Yoshua. Deep sparse rectifier neural networks. In Proceedings of the fourteenth International Conference on Artificial Intelligence and Statistics, pp. 315–323, 2011.
- Graves (2013) Graves, A. Generating sequences with recurrent neural networks. arXiv:1308.0850 [cs.NE], August 2013.
- Ha & Eck (2017) Ha, David and Eck, Douglas. A neural representation of sketch drawings. arXiv preprint arXiv:1704.03477, 2017.
- Hadjeres et al. (2016) Hadjeres, Gaëtan, Pachet, François, and Nielsen, Frank. Deepbach: a steerable model for bach chorales generation. arXiv preprint arXiv:1612.01010, 2016.
- He et al. (2016) He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
- Huang et al. (2018) Huang, Anna, Chen, Sherol, Nelson, Mark, and Eck, Douglas. Towards mixed-initiative generation of multi-channel sequential structure. 2018.
- Huang et al. (2017) Huang, Cheng-Zhi Anna, Cooijmans, Tim, Roberts, Adam, Courville, Aaron, and Eck, Douglas. Counterpoint by convolution. In Proceedings of ISMIR 2017, 2017.
- Ioffe & Szegedy (2015) Ioffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, 2015.
- Jaques et al. (2016) Jaques, Natasha, Gu, Shixiang, Bahdanau, Dzmitry, Hernández-Lobato, José Miguel, Turner, Richard E, and Eck, Douglas. Sequence tutor: Conservative fine-tuning of sequence generation models with kl-control. arXiv preprint arXiv:1611.02796, 2016.
- Kingma & Ba (2014) Kingma, Diederik P and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Raffel & Ellis (2015) Raffel, Colin and Ellis, Daniel PW. Large-scale content-based matching of midi and audio files. In ISMIR, pp. 234–240, 2015.
- Roberts et al. (2018) Roberts, Adam, Engel, Jesse, Raffel, Colin, Hawthorne, Curtis, and Eck, Douglas. A hierarchical latent vector model for learning long-term structure in music. CoRR, abs/1803.05428, 2018.
- Sturm et al. (2015) Sturm, Bob, Santos, João Felipe, and Korshunova, Iryna. Folk music style modelling by recurrent neural networks with long short term memory units. In 16th International Society for Music Information Retrieval Conference, late-breaking demo session, 2015.
- van den Oord et al. (2016a) van den Oord, Aaron, Dieleman, Sander, Zen, Heiga, Simonyan, Karen, Vinyals, Oriol, Graves, Alex, Kalchbrenner, Nal, Senior, Andrew, and Kavukcuoglu, Koray. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.
- van den Oord et al. (2016b) van den Oord, Aaron, Kalchbrenner, Nal, Espeholt, Lasse, kavukcuoglu, koray, Vinyals, Oriol, and Graves, Alex. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems 29. 2016b.
- van den Oord et al. (2017) van den Oord, Aaron, Vinyals, Oriol, and kavukcuoglu, koray. Neural discrete representation learning. In Advances in Neural Information Processing Systems 30, pp. 6306–6315. 2017.