To make music composition more approachable, we designed the first AI-powered Google Doodle, the Bach Doodle, where users can create their own melody and have it harmonized by a machine learning model (Coconet) in the style of Bach. For users to input melodies, we designed a simplified sheet-music-based interface. To support an interactive experience at scale, we re-implemented Coconet in TensorFlow.js to run in the browser and reduced its runtime from 40s to 2s by adopting dilated depthwise separable convolutions and fusing operations. We also reduced the model download size approximately fourfold through post-training weight quantization. We calibrated a speed test based on partial model evaluation time to determine whether a harmonization request should be performed locally or sent to remote TPU servers. In three days, people spent 350 years' worth of time playing with the Bach Doodle, and Coconet received more than 55 million queries. Users could choose to rate their compositions and contribute them to a public dataset, which we are releasing with this paper. We hope that the community finds this dataset useful for applications ranging from ethnomusicological studies, to music education, to improving machine learning models.
Machine learning can extend our creative abilities by offering generative models that can rapidly fill in missing parts of our composition, allowing us to see a prototype of how a piece could sound. To celebrate J.S. Bach’s 334th birthday, we designed the Bach Doodle to create an interactive experience where users can rapidly explore different possibilities in harmonization by tweaking their melody and requesting new harmonizations. The harmonizations are powered by Coconet, a versatile generative model of counterpoint that can fill in arbitrarily incomplete scores.
Creating this first AI-powered doodle involved overcoming challenges in user interaction and interface design, and also technical challenges in both machine learning and in infrastructure for serving the models at scale. For inputting melodies, we designed a simplified sheet music interface that facilitates easy trial and error and found that users adapted to it quickly even when they were not familiar with western music notation.
As most users do not have dedicated hardware to run machine learning models, we re-implemented Coconet in TensorFlow.js so that it could run in the browser. We reduced the model run-time from 40s to 2s by adopting dilated depthwise separable convolutions and fusing operations, and we reduced the model download size approximately fourfold through post-training weight quantization. To prepare for large-scale deployment, we calibrated a speed test to determine if a user’s device is fast enough for running the model in the browser. If not, the harmonization requests were sent to remote TPU servers.
Users in 80% of sessions explored multiple harmonizations, and 53.8% of the harmonizations were rated as positive. One complaint from advanced users was the presence of parallel fifths (P5s) and octaves (P8s). We analyzed 21.8 million harmonizations and found that P5s and P8s occur on average 0.365/measure and 0.391/measure respectively. Furthermore, P5s and P8s were more common when user input was out of distribution, and fewer P5s and P8s were correlated with positive user feedback.
3 Related work
Machine learning has been used in algorithmic music composition to support a wide range of musical tasks [28, 13, 29, 5, 19]. Melody harmonization is one of the canonical tasks [11, 26, 7, 20], encourages human-computer interaction [25, 3, 14, 21, 33], and is particularly approachable for novices. Different interfaces and tools have been developed to make the interaction experience more accessible. For example, in MySong, users can sing a melody and have the system harmonize it. In Hyperscore, users can draw multiple levels of “motifs” on a graphical sketchpad and have them harmonized according to the tension curve they specified. Startups such as JukeDeck and Amper Music offer APIs that allow users to describe a piece through timing and mood tags. Open-source libraries such as Magenta.js allow machine learning models to be used in digital audio workstations. For score-based interaction, FlowComposer offers an augmented lead-sheet based interface, while DeepBach demonstrates interactive chorale composition in MuseScore, which uses standard music notation.
4 The Bach Doodle
4.1 A walkthrough of the user experience
The Bach Doodle user experience begins by demonstrating 4-part harmony using two measures of a Bach chorale, Ach wie flüchtig, ach wie nichtig, BWV 26. By playing the soprano line alone, followed by soprano and alto, and then all four voices, users are shown how the harmony enhances the melody. Users are then presented with two measures of blank sheet music, in the treble clef, with a key signature of C major, in standard time. There are four vertical lines in each measure to indicate the beats, which give the user visual cues on where to put notes.
The user enters a monophonic melody using quarter and eighth notes. The note duration is automatic. If a note is entered on a beat, it is a quarter note by default. However, if a note is added after the beat, the existing quarter note becomes an eighth note. This simple interface makes it easy for users with no musical knowledge to input a composition, and can be seen in Figure 1. If the user clicks on the “star” button on the left-hand side of the sheet music, they can enter “advanced mode”. This allows the user to input sixteenth notes anywhere on the staff. It also enables a control where the key can be changed to any of the 12 key signatures. This mode is hidden because it can be overwhelming for new users. It is also easier to make less enjoyable music this way, for example by going off key, or making the music overly complex.
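The beat-splitting rule above can be sketched as follows; this is an illustrative model of the described behavior, not the Doodle's actual code (all names are hypothetical):

```python
def add_note(beat_slots, beat, pitch, offbeat=False):
    """Toy model of the Doodle's duration rule: a beat holds one
    quarter note by default; once a note is added after the beat,
    the notes in that beat become eighth notes."""
    slot = beat_slots.setdefault(beat, {})
    slot["off" if offbeat else "on"] = pitch
    # A lone on-beat note is a quarter; filling the off-beat position
    # turns the existing quarter into an eighth.
    slot["duration"] = "eighth" if "off" in slot else "quarter"
    return slot
```

For example, entering MIDI pitch 60 on beat 0 yields a quarter note; adding pitch 62 after the beat converts both notes in that beat to eighths.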
Clicking the “Harmonize” button sends the generation request to either TensorFlow.js or the TPU server. When the response is ready, it is presented to the user one voice at a time, listing them out: “Soprano”, “Alto”, “Tenor”, “Bass”. The voices are color-coded to illustrate the harmonization in relation to the soprano input notes (Figure 2).
4.2 Design challenges
Celebrating J.S. Bach’s birthday using machine learning presented many unique design opportunities as well as some user experience challenges. One of the main goals was to empower people with the feeling that they could augment their own creativity in ways not previously thought possible, by allowing them to directly collaborate with a machine learning model. Another important goal was to convey the message that machine learning is not “magical” or incomprehensible, but rather a science that can be understood. Finally, a notable challenge was to ensure that these aims would be met for a large diversity of individuals, from children who have not yet learned to read to experts of music and technology.
In order to acquire early feedback on the design, we ran user tests. Over the course of the project, dozens of people were asked to play the demos and comment on their experience through both pre-defined questions and open comments. The first user test in the development process revealed that many people do not fully understand the concept of harmony, but fortunately, further testing showed that short animated musical examples were enough for people to grasp these concepts quickly. User tests also indicated that only a small subset of people could read standard music notation. Our intuition was that standard notation, rather than a grid-based interface, could be intuitive and frictionless for anybody, provided the user interface (UI) was responsive with animations and sound and the note-input interface was kept simple. Further user testing of the standard notation input UI proved this to be correct.
In order to accommodate people of all ages and experience levels, a common technique is to remove advanced features and unexpected delights from the core experience and instead integrate them as “easter eggs”. This allows people of all skill levels to enjoy the full core experience without feeling frustrated, while also giving the rarer advanced user more features. While the core experience primarily allows eighth notes and tempo changes, clicking on a special button in the background additionally lets the user add sixteenth notes and change the key – two features that are very confusing to those without musical backgrounds.
4.3 Reusable insights
For future projects, we have shown that if the technology being used is unfamiliar or perceived as “scary” to those who know little about it, tethering the experience to a familiar story and visuals can be a successful strategy. Most people have a limited understanding of musical concepts such as harmony and standard notation, but it is possible that people of all ages can quickly acquire an intuitive understanding of musical concepts through carefully designed animated audiovisuals and a simple and responsive UI. Additionally, injecting content into loading screens could not only make loading times feel shorter but also be an excellent space for educational content. Finally, user testing is crucial when trying to create an experience using new technology that encompasses a large and diverse audience – it can reveal serious issues and shortcomings that are not obvious due to the team’s own background and domain knowledge.
5 Technical challenges
In order for users to interact with Coconet via a web interface, we needed to either port it to run client-side on the user’s device or host the model on a server with sufficient speed and capacity to support the number of requests we were expecting. In fact, we did both: we ported the model to TensorFlow.js (TF.js) so that it could run on user devices and added support for Tensor Processing Units (TPU) so that it could be served on Google Cloud. By running a simple test on users’ devices to determine the speed of core tensor operations, we were able to determine whether to use the TF.js implementation locally, or fall back to a TPU server to handle the computation. In the end, 47.4% of all harmonizations were done locally, in TF.js.
5.1 Background: Coconet
Coconet (blog post: https://magenta.tensorflow.org/coconet; code: https://github.com/tensorflow/magenta/tree/master/magenta/models/coconet) is a versatile generative model of musical counterpoint that can fill in arbitrarily incomplete scores, as illustrated in Figure 3.
Coconet represents counterpoint as a stack of piano rolls encoded in a binary three-dimensional tensor $x \in \{0,1\}^{I \times T \times P}$, where $I$, $T$, and $P$ denote the number of instruments, time steps, and pitches respectively. $x_{i,t,p} = 1$ if the $i$th instrument plays pitch $p$ at time $t$. Each instrument is assumed to play exactly one pitch at a time, therefore $\sum_p x_{i,t,p} = 1$ for all positions $(i, t)$. We also focus on four-part Bach chorales as used in prior work [8, 2, 4, 16, 24, 18], and assume $I = 4$ throughout.
Conventional approaches often factorize the joint distribution $p(x)$ into conditional distributions of the form $p(x_s \mid x_{<s})$, where $s$ indexes a sequence in some predetermined ordering such as chronological. In contrast, Coconet is an instance of orderless NADE [35, 34] and offers direct access to all conditionals of the form $p(x_i \mid x_C)$, where $C$ selects a fragment of a musical score and $i$ is in its complement $\neg C$ (i.e. the missing parts). To train Coconet, we sample a training example $x$, choose uniformly how many variables to erase, i.e. $|\neg C|$, and then choose uniformly the particular subset $\neg C$ of variables to erase. The input is obtained by erasing the piano rolls $x$ to obtain incomplete piano rolls $x_C$ and concatenating this with the corresponding masks, as shown in Figure 4 (top left) where the yellow gaps indicate erased positions with all pitches set to zero.
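The two-stage uniform masking procedure can be illustrated with a short sketch (the function and variable names are ours, not from the original implementation):

```python
import random

def sample_mask(num_vars, rng=random):
    """Orderless-NADE-style masking: first choose uniformly how many
    variables to erase, then choose uniformly which subset of that
    size. Returns a 0/1 list where 0 marks an erased position."""
    num_erased = rng.randint(1, num_vars)             # how many to erase
    erased = set(rng.sample(range(num_vars), num_erased))  # which ones
    return [0 if i in erased else 1 for i in range(num_vars)]
```

Sampling the count first (rather than flipping each position independently) gives every context size equal probability, which is what makes all conditionals visible during training.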
The output predictions for each position form a softmax over the set of pitches (top right of Figure 4). The training objective is the negative log-likelihood over the erased positions, reweighted by the number of variables erased to ensure that all conditionals are trained equally:

$$\mathcal{L}(x; C) = -\frac{1}{|\neg C|} \sum_{i \in \neg C} \log p_\theta(x_i \mid x_C, C)$$
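Numerically, the reweighting amounts to averaging the negative log-probabilities over only the erased positions (an illustrative sketch, not the training code):

```python
import math

def reweighted_nll(true_pitch_probs, mask):
    """Negative log-likelihood over the erased positions (mask == 0),
    divided by the number of erased variables so that every
    conditional receives equal weight regardless of mask size.

    true_pitch_probs[i] is the model's softmax probability for the
    ground-truth pitch at position i."""
    erased = [i for i, m in enumerate(mask) if m == 0]
    return -sum(math.log(true_pitch_probs[i]) for i in erased) / len(erased)
```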
In contrast to generating from left to right in one pass, Coconet uses Gibbs sampling to improve sample quality through rewriting (see  for convergence analysis). Figure 4 shows how the procedure iterates between filling in missing parts and then erasing other parts so that they could be rewritten given the updated context.
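The iterate-and-rewrite procedure has roughly this shape; `fill_in` below is a stand-in for a Coconet forward pass, and the fraction schedule is simplified (all names are hypothetical):

```python
import random

def gibbs_rewrite(score, fill_in, num_steps, frac=0.5, rng=random):
    """Sketch of blocked Gibbs rewriting: repeatedly erase a random
    subset of positions and have the model (`fill_in(score, erased)`)
    fill them in given the updated context."""
    for _ in range(num_steps):
        erased = rng.sample(range(len(score)), max(1, int(frac * len(score))))
        for i, value in zip(erased, fill_in(score, erased)):
            score[i] = value
    return score
```

Each step conditions the rewritten positions on material produced in earlier steps, which is what improves sample quality relative to a single left-to-right pass.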
5.2 Improving and speeding up Coconet
The original Coconet uses dense convolutions, in which the filter weights and their channel-mixing weights are fully connected (Equation 1). This makes it unable to fully leverage GPU parallelization in TF.js (see Section 5.3.2).

Let $(i, j)$ index the pitch and time dimensions in filters, and $c_{\mathrm{in}}$ and $c_{\mathrm{out}}$ the input and output channels. In dense convolutions, each output position indexed by pitch $p$, time $t$ and output channel $c_{\mathrm{out}}$ is a sum over the input channels and also over the positions in each filter (given by the neighbourhood function $N(p, t)$):

$$y_{p,t,c_{\mathrm{out}}} = \sum_{c_{\mathrm{in}}} \sum_{(i,j) \in N(p,t)} W_{i,j,c_{\mathrm{in}},c_{\mathrm{out}}} \, x_{i,j,c_{\mathrm{in}}} \quad (1)$$

For a 3-by-3 filter this summing is over the 9 positions.

In contrast, depthwise separable convolutions, shown in Equation 2, factorize the dense tensor $W$ into a depthwise tensor $D$ and a pointwise tensor $P$:

$$W_{i,j,c_{\mathrm{in}},c_{\mathrm{out}}} \approx D_{i,j,c_{\mathrm{in}}} \, P_{c_{\mathrm{in}},c_{\mathrm{out}}} \quad (2)$$

As a result, the multiplications between $D$ and $x$ can be parallelized over the input channels in the inner sum of Equation 3:

$$y_{p,t,c_{\mathrm{out}}} = \sum_{c_{\mathrm{in}}} P_{c_{\mathrm{in}},c_{\mathrm{out}}} \sum_{(i,j) \in N(p,t)} D_{i,j,c_{\mathrm{in}}} \, x_{i,j,c_{\mathrm{in}}} \quad (3)$$
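The saving from this factorization is easy to quantify by parameter count (a back-of-the-envelope sketch; the filter size and channel counts below are illustrative):

```python
def conv_params(k, c_in, c_out):
    """Weight counts for a k-by-k dense convolution versus its
    depthwise separable factorization (depthwise k*k*c_in weights
    plus pointwise c_in*c_out weights)."""
    dense = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out
    return dense, separable
```

For a 3-by-3 layer with 128 input and output channels this gives 147,456 dense weights versus 17,536 separable ones, roughly an 8x reduction in parameters (and multiplications) per layer.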
To further speed up Coconet, we adopt dilated convolutions, growing the receptive field exponentially to reduce the number of layers needed. As in WaveNet, within each block the dilation factor doubles from layer to layer for both the pitch and time dimensions, and then the block repeats.
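The exponential growth of the receptive field under doubling dilation rates can be checked with a short calculation (kernel size 3 is an assumption of this sketch):

```python
def receptive_field(kernel, dilations):
    """Receptive field (in steps along one dimension) of a stack of
    dilated convolutions: each layer with dilation d extends the
    field by (kernel - 1) * d."""
    return 1 + sum((kernel - 1) * d for d in dilations)
```

One block of five layers with dilations 1, 2, 4, 8, 16 covers 63 steps, where five undilated layers would cover only 11, which is why fewer layers suffice.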
The original Coconet was trained on eight measures (T=128). However, the Bach Doodle is designed for two measures (T=32), so we retrained the model with the original architecture and saw that the loss increased from 0.57 to 0.62 (shown in Table 1), possibly because there is less context. Switching from dense to depthwise separable convolutions reduced the loss, requiring more filters but fewer layers. Since TensorFlow.js allows for parallelization across filters, this still resulted in much faster generation (see Section 5.3.2). Dilated convolutions reduced both the number of layers and the number of filters while also reducing the loss. The particular scheme we used is 7 blocks, with separate dilation schedules for the pitch and time dimensions.
| Convolution type | NLL | Run time |
| --- | --- | --- |
| Dense (T=128), 64L, 128f | 0.57 | – |
| Dense (T=32), 64L, 128f | 0.62 | > 40s |
| Depthwise separable, 48L, 192f | 0.59 | 7s |
| Dilated, 45L (7 blocks), 128f | 0.58 | 4s |
5.3 Porting Coconet to the Browser
While some models can easily be ported to TF.js using a conversion script, Coconet’s Python TensorFlow implementation used some ops that did not yet exist in TF.js (e.g., cumsum), and we also needed the flexibility to optimize the performance of the model for our use case. We therefore manually re-implemented Coconet in TF.js and have made the code open source (https://tensorflow.github.io/magenta-js/music/classes/_coconet_model_.coconet.html). We also contributed missing ops to TF.js with WebGL fragment shader code for GPU acceleration.
5.3.1 UI Challenges
TF.js makes use of the async/await pattern for access to outputs of models and individual TensorFlow operations. During inference, users receive a callback for when GPU operations have completed and the result is ready to be consumed. In this way, there is no blocking of the UI while waiting for model results. In practice, with large models like Coconet (which includes many repeated sampling steps of a deep network), it is still important to cede control back to the UI explicitly during the course of the model operations, which can be done with the tf.nextFrame() operation. Our op-by-op code port of the network allowed us to add these occasional UI breaks, which avoided a poor user experience where the page would freeze for multiple seconds during model prediction.
5.3.2 Performance Challenges
The initial port of Coconet to TF.js took over 40 seconds to do one harmonization. For a satisfying user experience, we needed to reduce this latency to below 5 seconds. While TF.js is able to take advantage of GPU acceleration, WebGL does not directly support the types of tensor operations used in deep learning. Instead, these operations must be implemented as shader programs, which were originally intended to compute the color for pixels during graphics rendering. This mismatch leads to inefficiencies that sometimes vary by operation. It turned out that the (unavoidably) inefficient shader implementation of convolutional layers was the main culprit. By switching to depthwise separable convolutions, however, we were able to avoid many of these performance issues, reducing generation time to 7 seconds.
As we run Coconet through 64 Gibbs sampling steps, any improvement to operations used in this loop leads to significant savings. We wrote a custom operation using WebGL shaders to fuse together the operations used in our initial TF.js implementation of the annealing schedule. This schedule, from Yao et al., is a sequence of simple element-by-element operations that is run on every sampling step during harmonization. Because of the simplicity of the operations (a scalar subtraction, multiplication, division, and a max operation), we were able to easily fuse them into a single operation that avoided the overhead of executing multiple shader programs on the GPU, speeding up inference by about 5%. The combined savings of adding depthwise separable convolutions, shrinking the model by using dilated convolutions, and using the fused schedule operation resulted in a reduction of the model latency from 40s to 2s.
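For concreteness, the fused element-wise schedule has roughly this shape; the parameter values here are our assumptions, not the Doodle's actual constants:

```python
def annealed_fraction(step, num_steps, a_max=0.999, a_min=0.001, eta=0.75):
    """Linearly decaying fraction of positions to resample, floored
    at a_min: one subtraction, one multiplication, one division, and
    one max. Because each op is element-wise and scalar-parameterized,
    the whole sequence could be fused into a single WebGL shader."""
    return max(a_min, a_max - step * (a_max - a_min) / (eta * num_steps))
```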
5.3.3 Download Size
Due to the number of users we intended to reach as well as the variety of locations, devices, and bandwidth limits they would have, we needed to ensure the download size of the model weights was as small as possible. To achieve this goal, we implemented support for post-training weight quantization and contributed it to TF.js. This quantization compresses each float32 weight tensor by mapping the full range of its values down to 256 uniformly-spaced buckets in order to represent them as int8 values, which are then stored along with the float32 min and scale values used to recover the range. During model initialization, the weights are converted back to float32 tensors using linear interpolation. By using this quantization, we were able to reduce the size of the downloaded weights by approximately 4×, without any noticeable sacrifice in quality.
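The bucketing scheme can be sketched as follows (illustrative; per tensor, TF.js stores the int8-sized bucket values plus the float32 min and scale):

```python
def quantize(weights):
    """Map a float tensor's range onto 256 uniform buckets, keeping
    the min and scale needed to recover approximate float32 values."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255 or 1.0  # guard against a zero range
    return [round((w - w_min) / scale) for w in weights], w_min, scale

def dequantize(buckets, w_min, scale):
    """Recover float values by linear interpolation over the range."""
    return [w_min + b * scale for b in buckets]
```

Each recovered weight is within half a bucket width of the original, which is why the compression costs little quality.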
5.4 Balancing Load Between Tensorflow.js and TPU
We ideally wanted to run the harmonization model completely on end-user devices using TF.js to avoid the need for serving infrastructure, which adds cost, effort, and additional points of failure. But the speed of harmonization differs by user device, with older and lower-end devices taking longer to run the TF.js model code. For devices where harmonization takes more than a few seconds, the harmonization is instead done by the cloud-served model. The first step in checking whether a device can run harmonization locally is to check whether WebGL is supported, since that is required for GPU acceleration through TF.js. If WebGL is supported, we perform a speed test on the model, running a sample melody through its first four layers. If the latency of this partial inference is below a set threshold, the TF.js version is used. As there is overhead on the first inference of a model in TF.js, due to the initial loading of the model weight textures onto the GPU, we actually run the speed test twice and use the second measurement to make the decision.
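The routing decision reduces to logic of this shape (a sketch with hypothetical names; `run_speed_test` stands in for timing the partial-model inference described above):

```python
def choose_backend(supports_webgl, run_speed_test, threshold_ms):
    """Return which backend should serve the harmonization: local
    TF.js if the device supports WebGL and is fast enough, otherwise
    the remote TPU servers."""
    if not supports_webgl:
        return "tpu"
    run_speed_test()               # warm-up: pays one-time texture upload
    latency = run_speed_test()     # second run reflects steady-state speed
    return "tfjs" if latency < threshold_ms else "tpu"
```

Discarding the first measurement matters because the initial inference includes uploading weight textures to the GPU and would otherwise misclassify fast devices as slow.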
6 Dataset Release and Analysis
6.1 Data structure
Every user who interacted with the Bach Doodle had the opportunity to add their composition to a dataset. We make this entire dataset available at https://g.co/magenta/bach-doodle-dataset under a Creative Commons license. Of the more than 55 million harmonization requests, the user-contributed dataset contains over 21.6 million miniature compositions, split across 8.5 million sessions. Each session represents an anonymous user’s interaction with the Bach Doodle over a single pageload and may contain multiple data points. Each data point consists of the user’s input melody, the 4-voice harmonization returned by Coconet, and other metadata: the country of origin, the user’s rating, the composition’s key signature, how long it took to compose the melody, and the number of times the composition was listened to.
We present some preliminary analysis of the dataset to shed light on how users interacted with the doodle. Out of the 21.6 million sequences in the dataset, about 14 million (65.7%) are unique pitch sequences that are not repeated anywhere in the dataset (disregarding timing information). Overall, the median amount of time spent composing a sequence was 25.5 seconds, and sequences were listened to for a median of 3 loops, with a total of 78.2 million loops listened to across the entire dataset.
The sequences come from 109 different countries, with the United States, Brazil, Mexico, Italy, and Spain ranking in the top 5. Countries that had a small number of sequences were all grouped together in a separate category, to minimize the possibility of identifying users. While many sessions (20%) contained only one request for harmonization (shown in Figure 5), most sessions had 2 or more harmonizations, either of the same melody, or of a different one. As shown in Figure 6, more than 5 million of the input sequences used the maximum number of notes in the default version of the doodle, which is 16. It is interesting to note that despite being an Easter egg, 7.6% of user sessions discovered the advanced mode that allowed them to enter longer sequences.
The doodle has 3 presets: Twinkle Twinkle Little Star, Mary Had a Little Lamb, and the beginning of Bach’s Toccata and Fugue in D Minor, BWV 565, which are the 3 most repeated sequences. However, there are also some surprising runners-up, such as Beethoven’s Ode to Joy and Megalovania, a popular song from the game Undertale, as well as some regional hits (visit https://g.co/magenta/bach-doodle-dataset to interact with visualizations of the top repeated melodies overall and in each region, as well as regional unique hits). Overall, users enjoyed their harmonizations, with 53.8% of all compositions rated as “Good”. Figure 7 gives the breakdown of user ratings.
6.3 Parallel Fifths and Parallel Octaves
The Coconet model that powered the Bach Doodle was trained to produce harmonizations in the style of Bach chorales, and one well known characteristic of Bach’s counterpoint writing is how carefully he followed the rule of avoiding parallel fifths (P5s) and parallel octaves (P8s). However, one complaint from advanced users of the app was the presence of P5s and P8s in the output of the model. Here, we present some analysis of how frequently and under what circumstances such outputs occurred. To identify the P5 and P8 occurrences, we used music21.
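The released analysis used music21; as a simplified illustration of the rule being checked, a detector for parallel perfect intervals between two voices might look like this (it ignores compound intervals and voice crossing, which music21 handles properly):

```python
def count_parallel(upper, lower, interval):
    """Count consecutive sonorities where two voices (lists of MIDI
    pitches) both move while keeping the same interval in semitones
    (7 for a P5, 12 for a P8)."""
    count = 0
    for i in range(len(upper) - 1):
        keeps_interval = (upper[i] - lower[i] == interval
                          and upper[i + 1] - lower[i + 1] == interval)
        # Repeated notes are not parallel motion; both voices must move.
        both_move = upper[i] != upper[i + 1] and lower[i] != lower[i + 1]
        count += keeps_interval and both_move
    return count
```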
First, we looked at how frequently P5s and P8s appeared in our training data. We were surprised to find that in the 382 Bach chorale preludes we used in our train and validation sets, there were 132 instances of P5s (0.023/measure) and 51 instances of P8s (0.009/measure). Given this prevalence, the model may learn to output this kind of parallel motion. However, many of these instances can be “excused” because they occur under special circumstances such as at phrase boundaries or when using non-chord tones [10, 15]. Unfortunately, our training data does not include key signatures, time signatures, or fermatas, so the model likely learned to treat P5s as more permissible than was actually the case in Bach’s music.
We then examined the output of the model to see if P5s/P8s occurred more frequently when user input was outside the training distribution and if the absence of P5s/P8s was correlated with positive user feedback. We first split the output based on whether users gave positive feedback or not (non-positive feedback includes neutral, negative, and the absence of feedback). Next, we split based on whether user input was within the same pitch range as the soprano lines in the training data (MIDI pitches from 60 through 81) and whether the maximum delta between consecutive pitches exceeded that of the training data (1 octave).
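The two in-distribution criteria above translate directly into a check like the following (a hypothetical helper, not the analysis code):

```python
def in_distribution(melody, low=60, high=81, max_leap=12):
    """True if a user melody (MIDI pitches) stays within the training
    soprano range (60 through 81) and no two consecutive pitches are
    more than an octave (12 semitones) apart."""
    in_range = all(low <= p <= high for p in melody)
    small_leaps = all(abs(b - a) <= max_leap
                      for a, b in zip(melody, melody[1:]))
    return in_range and small_leaps
```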
In total, we found 15,816,599 P5s (0.365/measure) and 16,949,818 P8s (0.391/measure) in the model output. Results split into the four categories are shown in Figure 8. As hypothesized, P5s/P8s were more common when user input was out of distribution, and their absence correlated with positive user feedback. A Kruskal-Wallis H test on both the number of P5s and P8s showed that there is at least one statistically significant difference between the four categories. Further, Mann-Whitney rank tests between the categories showed significant differences. The correlation between fewer P5s/P8s and positive user feedback is particularly interesting. This could either indicate that users prefer music with fewer P5s/P8s or it could simply mean that when the model produces poor output, P5s/P8s tend to be a feature of that output. In any case, the presence of P5s/P8s seems to be a useful proxy metric for model output quality. In future work, it could be a useful signal during training (similar to prior work), evaluation, or perhaps even during inference, where it could trigger additional Gibbs sampling steps.
The Bach Doodle enabled large-scale participation in baroque-style counterpoint composition through an intuitive sheet music interface assisted by machine learning. We hope this encourages more creative apps that allow novices and artists to interact with music composition and machine learning in approachable ways. With this paper, we are releasing a dataset of 21.6 million instances of human-computer collaborative miniature compositions, along with meta-data such as user rating and country of origin. We hope the community will find it useful for ethnomusicological studies, music education, or improving machine learning models.
Many thanks to Ann Yuan, Daniel Smilkov and Nikhil Thorat from TensorFlow.js for their expert assistance. A big shoutout to Pedro Vergani, Rebecca Thomas, Jordan Thompson and others on the Doodle team for their contribution to the core components of the Doodle. Thank you Lauren Hannah-Murphy and Chris Han for keeping us on track. Thank you to Magenta colleagues for their support. Thank you Tim Cooijmans for co-authoring the blog post.
-  Celebrating Johann Sebastian Bach. https://www.google.com/doodles/celebrating-johann-sebastian-bach. Accessed: 2019-04-04.
-  Moray Allan and Christopher KI Williams. Harmonising chorales by probabilistic inference. Advances in neural information processing systems, 17:25–32, 2005.
-  Gérard Assayag, Georges Bloch, Marc Chemillier, Arshia Cont, and Shlomo Dubnov. Omax brothers: a dynamic topology of agents for improvization learning. In Proceedings of the 1st ACM workshop on Audio and music computing multimedia, pages 125–132. ACM, 2006.
-  Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. International Conference on Machine Learning, 2012.
-  Jean-Pierre Briot, Gaëtan Hadjeres, and François Pachet. Deep learning techniques for music generation-a survey. arXiv preprint arXiv:1709.01620, 2017.
-  François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1251–1258, 2017.
-  Ching-Hua Chuan and Elaine Chew. A hybrid system for automatic generation of style-specific accompaniment. In Proceedings of the 4th International Joint Workshop on Computational Creativity, pages 57–64. Goldsmiths, University of London London, 2007.
-  David Cope. Computers and musical style. Oxford University Press, 1991.
-  Michael Scott Cuthbert and Christopher Ariza. music21: A toolkit for computer-aided musicology and symbolic music data. In International Society for Music Information Retrieval, 2010.
-  Luke Dahn. Consecutive 5ths and octaves in bach chorales. https://lukedahn.wordpress.com/2016/04/15/consecutive-5ths-and-octaves-in-bach-chorales/. Accessed: 2019-04-12.
-  Mary Farbood and Bernd Schöner. Analysis and synthesis of Palestrina-style counterpoint using Markov chains. In Proceedings of the International Computer Music Conference, 2001.
-  Morwaread M Farbood, Egon Pasztor, and Kevin Jennings. Hyperscore: a graphical sketchpad for novice composers. IEEE Computer Graphics and Applications, 24(1):50–54, 2004.
-  Jose D Fernández and Francisco Vico. AI methods in algorithmic composition: A comprehensive survey. Journal of Artificial Intelligence Research, 48:513–582, 2013.
-  Rebecca Anne Fiebrink. Real-time human interaction with supervised learning algorithms for music composition and performance. PhD dissertation, Princeton University, 2011.
-  George Fitsioris and Darrell Conklin. Parallel successions of perfect fifths in the bach chorales. MUSICAL STRUCTURE, page 52, 2008.
-  Kratarth Goel, Raunaq Vohra, and JK Sahoo. Polyphonic music generation by modeling temporal dependencies using a RNN-DBN. In International Conference on Artificial Neural Networks, 2014.
-  Gaëtan Hadjeres, François Pachet, and Frank Nielsen. Deepbach: a steerable model for bach chorales generation. In International Conference on Machine Learning, pages 1362–1371, 2017.
-  Gaëtan Hadjeres, Jason Sakellariou, and François Pachet. Style imitation and chord invention in polyphonic music with exponential families. arXiv preprint arXiv:1609.05152, 2016.
-  Dorien Herremans, Ching-Hua Chuan, and Elaine Chew. A functional taxonomy of music generation systems. ACM Computing Surveys (CSUR), 50(5):69, 2017.
-  Dorien Herremans and Kenneth Sörensen. Composing fifth species counterpoint music with a variable neighborhood search algorithm. Expert systems with applications, 40(16):6427–6437, 2013.
-  Cheng-Zhi Anna Huang, Sherol Chen, Mark Nelson, and Doug Eck. Mixed-initiative generation of multi-channel sequential structures. In International Conference on Learning Representations Workshop Track, 2018.
-  Cheng-Zhi Anna Huang, Tim Cooijmans, Adam Roberts, Aaron Courville, and Douglas Eck. Counterpoint by convolution. ISMIR, 2017.
-  Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E Turner, and Douglas Eck. Sequence tutor: Conservative fine-tuning of sequence generation models with kl-control. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1645–1654. JMLR. org, 2017.
-  Feynman Liang. Bachbot: Automatic composition in the style of bach chorales. Master's thesis, University of Cambridge, 2016.
-  Francois Pachet. The continuator: Musical interaction with style. Journal of New Music Research, 32(3):333–341, 2003.
-  François Pachet and Pierre Roy. Musical harmonization with constraints: A survey. Constraints, 6(1):7–19, 2001.
-  Alexandre Papadopoulos, Pierre Roy, and François Pachet. Assisted lead sheet composition using flowcomposer. In International Conference on Principles and Practice of Constraint Programming, pages 769–785. Springer, 2016.
-  George Papadopoulos and Geraint Wiggins. Ai methods for algorithmic composition: A survey, a critical view and future prospects. In AISB Symposium on Musical Creativity, volume 124, pages 110–117. Edinburgh, UK, 1999.
-  Philippe Pasquier, Arne Eigenfeldt, Oliver Bown, and Shlomo Dubnov. An introduction to musical metacreation. Computers in Entertainment (CIE), 14(2):2, 2016.
-  Ian Simon, Dan Morris, and Sumit Basu. Mysong: automatic accompaniment generation for vocal melodies. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 725–734. ACM, 2008.
-  Daniel Smilkov, Nikhil Thorat, Yannick Assogba, Ann Yuan, Nick Kreeger, Ping Yu, Kangyi Zhang, Shanqing Cai, Eric Nielsen, David Soergel, et al. TensorFlow.js: Machine learning for the web and beyond. arXiv preprint arXiv:1901.05350, 2019.
-  Bob L Sturm, Oded Ben-Tal, Una Monaghan, Nick Collins, Dorien Herremans, Elaine Chew, Gaëtan Hadjeres, Emmanuel Deruty, and François Pachet. Machine learning research that matters for music creation: A case study. Journal of New Music Research, 48(1):36–55, 2019.
-  Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. The Journal of Machine Learning Research, 17(1):7184–7220, 2016.
-  Benigno Uria, Iain Murray, and Hugo Larochelle. A deep and tractable density estimator. In Proceedings of the International Conference on Machine Learning, 2014.
-  Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
-  Li Yao, Sherjil Ozair, Kyunghyun Cho, and Yoshua Bengio. On the equivalence between deep nade and generative stochastic networks. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2014.