Composing Music with Grammar Argumented Neural Networks and Note-Level Encoding

Creating aesthetically pleasing pieces of art, including music, has been a long-term goal of artificial intelligence research. Despite recent successes of long short-term memory (LSTM) recurrent neural networks (RNNs) in sequential learning, LSTM neural networks have not, by themselves, been able to generate natural-sounding music conforming to music theory. To transcend this inadequacy, we put forward a novel method for music composition that combines the LSTM with grammars motivated by music theory. The main tenets of music theory are encoded as grammar argumented (GA) filters on the training data, such that the machine can be trained to generate music inheriting the naturalness of human-composed pieces from the original dataset while adhering to the rules of music theory. Unlike previous approaches, pitches and durations are encoded as one semantic entity, which we refer to as note-level encoding. This allows easy implementation of music theory grammars, as well as closer emulation of the thinking pattern of a musician. Although the GA rules are applied to the training data and never directly to the LSTM music generation, our machine still composes music that possesses high incidences of diatonic scale notes, small pitch intervals and chords, in deference to music theory.




I Introduction

The creation of all forms of art[1, 2, 3, 4], including music, has been a long-term pursuit of artificial intelligence (AI) research. Broadly speaking, music generation by AI is based on the principle that musical styles are in effect “complex systems of probability relationships”, as defined by the musicologist Leonard B. Meyer. In the early years, symbolic AI methods were popular, and composition was driven by specific grammars describing a set of rules[5, 6]. These methods were later much improved by evolutionary algorithms in various ways[7], as embodied by the famous EMI project[8]. More recently, statistical models such as Markov chains and the hidden Markov model (HMM) became popular in algorithmic composition[9]. Parallel to these developments was the rapid rise of neural network (NN) approaches, which have made remarkable progress in fields such as signal and image recognition and game playing[10], as well as music composition. At present, the cutting-edge approaches to generative modeling of music are based on recurrent neural networks (RNNs)[11, 12, 13, 14] such as the long short-term memory (LSTM)[15, 16, 17] RNN.

While RNN and LSTM networks perform well in modeling sequential data, they suffer from a few significant shortcomings when applied to music composition. The music generated is often drab and dull, without any discernible theme, consisting of notes that sound either too repetitive or too random. It is thus desirable to have a machine that can learn to generate music adhering to the principles of music theory, although that is beyond the capabilities of ordinary neural networks or the usual grammatical methods.

In this work, we hence augment an LSTM with an original method known as the Grammar Argumented (GA) method, such that our model combines a neural network with grammars. We begin by training an LSTM neural network with a dataset of music composed by actual human musicians. In the training process, the machine learns the relationships within the sequential information as much as possible. Next, we feed in a short phrase of music to trigger the first phase of generation. Instead of adding the first phase of generated notes directly to the output, we evaluate these notes according to common music composition rules. Notes that go against music theory rules are abandoned, and replaced by repredicted new notes that conform to the rules. All amended results and their corresponding inputs are then added to the training set. We then retrain our model with the updated training set and use the original generating method to do the second phase of (actual) generation. The abovementioned procedure is summarized in Fig. 3. Another novel feature of our model is our note-level encoding method, which involves a new representation of notes by concatenating each note’s duration and pitch as a single input vector. This combines the duration and pitch of each note into a single semantic entity, which is not only closer to how human composers think, but also facilitates the direct application of music theory rules as grammars.

Our results indicate that our GA model possesses markedly superior performance in music generation compared to its non-GA version, according to metrics based on music theory, namely the percentages of notes in the diatonic scale and in chords, and of pitch intervals within an octave. Indeed, our machine-created melodies sound pleasing and natural, as exemplified by the explicit example in Fig. 4. In all, our GA neural network with note-level encoding can learn basic music composition principles and produce natural and melodious music.

II Methods

II-A Note-Level Encoding

Although machine-learning methods have made significant progress in music composition, so far none has managed to closely simulate how human composers create music. In particular, human composers regard the pitch and duration of each note as attributes of a single entity, which in turn forms the building block of more complex musical motifs. By contrast, existing approaches either analyze pitches and note durations in separate neural networks[18, 19], or represent music as quantized time series[11, 13, 20, 21, 16, 22]. In this work, we attempt to more closely emulate human composers by combining the pitches and durations of musical notes into one entity, which we call note-level encoding. Very importantly, this encoding allows the natural implementation of the rules of music theory as grammars, which act on notes and not merely on fixed durations. This will be elaborated in Section II-C.

Our training data is derived from the MIDI sequences of 106 piano pieces by contemporary musicians like Joe Hisaishi, Yiruma, Yoko Kanno and Shi Jin. For consistency, we transpose all pieces to C major/A minor, only include pieces with a 4/4 time signature, and retain only the melody, such that the resultant music is monophonic. This entails omitting accompaniments, grace notes and intensity changes. In particular, only the highest note, which typically carries the melody, is retained when simultaneous notes occur. This leaves us with a sequence of "Note On" and "Note Off" events, which can then be directly encoded as a sequence of one-hot vectors containing duration and pitch information, as in Fig. 1. Each one-hot vector consists of a 59-bit segment representing pitch semitones from A0 to C8, concatenated with a 30-bit segment representing durations from a semiquaver to a breve. Indeed, by including both pitch and duration within a single vector, our note-level encoding method enables the machine to "learn" music composition by regarding notes as fundamental building blocks, just as human composers do.

Fig. 1: The duration and pitch of each note, as extracted from MIDI files, are encoded in a single (binary) one-hot vector. This is illustrated by the quarter and eighth notes above, both of A5 pitch.
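As a concrete illustration of note-level encoding, the sketch below assembles the 89-dimensional note vector from a pitch index and a duration index. The paper fixes only the segment sizes (a 59-bit pitch segment followed by a 30-bit duration segment); the specific index assignments used in the example (e.g. A5 at pitch index 48, a quarter note at duration index 7) are assumptions for illustration.

```python
import numpy as np

# Segment sizes from the paper; the exact index-to-pitch/duration mapping
# is not specified, so the example indices below are assumptions.
PITCH_BITS = 59
DURATION_BITS = 30

def encode_note(pitch_index, duration_index):
    """Concatenate a one-hot pitch segment and a one-hot duration segment
    into a single 89-dimensional note vector."""
    vec = np.zeros(PITCH_BITS + DURATION_BITS)
    vec[pitch_index] = 1.0                  # one-hot pitch segment
    vec[PITCH_BITS + duration_index] = 1.0  # one-hot duration segment
    return vec

def decode_note(vec):
    """Recover (pitch_index, duration_index) from a note vector."""
    pitch_index = int(np.argmax(vec[:PITCH_BITS]))
    duration_index = int(np.argmax(vec[PITCH_BITS:]))
    return pitch_index, duration_index

# Example: a hypothetical A5 quarter note at pitch index 48, duration index 7.
v = encode_note(48, 7)
assert v.shape == (89,)
assert decode_note(v) == (48, 7)
```

Because each vector carries both attributes of one note, a grammar rule such as "the pitch interval to the previous note must not exceed an octave" can be checked on the decoded pitch index directly, without aligning two separate streams.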

II-B Long Short-Term Memory Neural Networks

Recurrent neural networks (RNNs) are widely used in sequential learning. They "remember" information from previous time steps, as each of their hidden layers receives input from the previous layer as well as from itself one time step ago. However, simple RNNs are inadequate for music composition, as they do not handle long-term dependencies well due to vanishing gradients[23]. Such long-term dependencies are necessary for understanding musical motifs, which often last beyond several time steps. Our solution is to employ a more advanced type of RNN known as the long short-term memory (LSTM) neural network, which possesses a memory cell with potentially longer-term storage of data controlled by various gates.

An LSTM module contains a memory cell state $c_t$ in addition to its hidden state $h_t$, as in Fig. 2. Unlike the hidden state, $c_t$ is linearly related to its past values, and can thus store information for an arbitrary duration until it is "erased" by the forget gate. At each time step, the values of the input $x_t$, previous memory cell state $c_{t-1}$ and previous hidden state $h_{t-1}$ together determine the new memory cell state $c_t$ and new hidden state $h_t$. This is achieved with the input gate $i_t$, output gate $o_t$ and forget gate $f_t$ defined by

$$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i),\qquad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o),\qquad f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f),$$
where $b_i$, $b_o$ and $b_f$ are the corresponding bias vectors, $W_i$, $W_o$ and $W_f$ are the corresponding weight matrices for the input vectors, and $U_i$, $U_o$ and $U_f$ are the corresponding weight matrices connecting the previous hidden state vectors. The element-wise sigmoid function $\sigma(x) = 1/(1 + e^{-x})$ realizes the filtering role of the gates, with its output value increasing from $0$ (block) to $1$ (pass) as the input ranges from $-\infty$ to $+\infty$. At each time step, the memory cell state is updated according to

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t.$$
The forget gate controls how much information is "forgotten", i.e. not passed on: if $f_t$ is zero, all previous information in the memory cell is forgotten. The input gate $i_t$ controls the amount of "new" input to the memory cell from the activated current state memory $\tilde{c}_t$, defined by

$$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c),$$
which depends on the current input and most recent hidden state data. Here $W_c$, $U_c$ and $b_c$ are the associated weight matrices and bias vector respectively.

Finally, the hidden state of the LSTM is updated from the activated current memory cell state under the control of the output gate:

$$h_t = o_t \odot \tanh(c_t).$$
Fig. 2: The structure of the LSTM module. With the information from the input $x_t$, previous hidden state $h_{t-1}$ and previous cell state $c_{t-1}$, an LSTM layer outputs a hidden state $h_t$ that conveys temporal information. $\odot$ denotes element-wise multiplication and $\oplus$ denotes element-wise addition. The upper diagram illustrates the logical dependencies of the inputs and outputs, and the lower diagram schematically illustrates how an LSTM layer computes the current hidden state $h_t$.
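The gate and state updates above can be condensed into a single NumPy time step. This is a minimal sketch of a standard LSTM cell, not the authors' implementation; the dimensions (89-dimensional input, 128 cells) follow the model described in the experiments, and the random initialization is purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM time step implementing the standard gate equations."""
    W_i, U_i, b_i = params["i"]   # input gate
    W_f, U_f, b_f = params["f"]   # forget gate
    W_o, U_o, b_o = params["o"]   # output gate
    W_c, U_c, b_c = params["c"]   # candidate memory

    i_t = sigmoid(W_i @ x_t + U_i @ h_prev + b_i)
    f_t = sigmoid(W_f @ x_t + U_f @ h_prev + b_f)
    o_t = sigmoid(W_o @ x_t + U_o @ h_prev + b_o)
    c_tilde = np.tanh(W_c @ x_t + U_c @ h_prev + b_c)

    c_t = f_t * c_prev + i_t * c_tilde   # element-wise cell update
    h_t = o_t * np.tanh(c_t)             # hidden state update
    return h_t, c_t

# Toy dimensions matching the paper's model: 89 inputs, 128 cells.
rng = np.random.default_rng(0)
n_in, n_h = 89, 128
params = {g: (rng.standard_normal((n_h, n_in)) * 0.01,
              rng.standard_normal((n_h, n_h)) * 0.01,
              np.zeros(n_h)) for g in "ifoc"}
h, c = np.zeros(n_h), np.zeros(n_h)
h, c = lstm_step(rng.standard_normal(n_in), h, c, params)
assert h.shape == (128,) and c.shape == (128,)
```

Note how $c_t$ is a linear blend of $c_{t-1}$ and the candidate memory, which is what lets gradients flow across many time steps when $f_t$ stays near one.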

As a differentiable function approximator, the LSTM (its weights and biases) is typically trained with gradient descent[23], with gradients calculated via back-propagation through time (BPTT)[24]. The training details of our LSTM will be discussed in Section III.

II-C Grammar Argumented Method

One problem plaguing neural network approaches to music composition is that the generated music largely does not conform to basic principles of music theory. For instance, it often has too many overtones (excessive chromaticity), overly large pitch intervals, and unharmonious melodies.

We propose a novel approach called the Grammar Argumented (GA) method that can significantly alleviate this problem without any manual intervention (Fig. 3). The idea is to augment the training data so that it also includes machine-generated music that perfectly satisfies the principles of music theory. To do so, the music generation is broken into two phases: the first generates training data that perfectly conforms to criteria derived from music theory, and the second produces the actual musical output. In the first phase, a GA filtering step is applied to the output, such that only melodies satisfying the three grammatical rules described below can pass (as amended data); nonconforming notes are abandoned and resampled. Next, the amended data is added to the training data for retraining the machine, before the second phase of generation produces the actual output.

Inspired by music theory[25], we put forward three specific rules for the GA filtering. The first rule is that the notes (after transposition to C major) must belong to the C major diatonic scale (DIA). Most Western music (and much music from other cultures) is based on the diatonic scale, consisting of the seven distinct tones C, D, E, F, G, A and B within an octave, among which various harmonies exist. While occasional chromaticity (the presence of the overtones C#, D#, F#, G# and A#) can add extra color to a musical piece, LSTM-generated music without GA argumentation contains too many overtones and consequently sounds random and devoid of structure.

The second rule is that the pitch interval between two consecutive notes must not exceed an octave, i.e. a short pitch interval (SPI). Large jumps in pitch usually sound disruptive and unlyrical, and we leave their artful implementation to future work.

The third rule is that any three consecutive notes must belong to a triad (TRI). A triad is a chord characterized by its pair of stacked pitch intervals, and triads are of fundamental importance in musical harmony. There are four types of triads, namely the major, minor, augmented and diminished triads, each inducing a different emotional response. Triads are furthermore the building blocks of all seventh chords, which add sophistication to the composition.
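Assuming notes are represented by MIDI pitch numbers, the three rules can be checked with simple pitch-class arithmetic. The helpers below are illustrative sketches, not the paper's exact implementation; the pitch-class sets themselves are standard music theory.

```python
# Illustrative GA rule checks on MIDI pitch numbers (assumed representation).
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}            # pitch classes of C, D, E, F, G, A, B
TRIADS = {(4, 7): "major", (3, 7): "minor",  # intervals above the root, in semitones
          (4, 8): "augmented", (3, 6): "diminished"}

def dia_ok(pitch):
    """DIA: the note belongs to the C major diatonic scale."""
    return pitch % 12 in C_MAJOR

def spi_ok(prev_pitch, pitch):
    """SPI: consecutive notes lie within an octave (12 semitones)."""
    return abs(pitch - prev_pitch) <= 12

def tri_ok(p1, p2, p3):
    """TRI: the three notes' pitch classes form one of the four triad types
    (checked over all candidate roots, so inversions also pass)."""
    classes = sorted({p % 12 for p in (p1, p2, p3)})
    if len(classes) != 3:
        return False
    for root in classes:
        a, b, c = sorted((pc - root) % 12 for pc in classes)
        if (b, c) in TRIADS:   # a is always 0 at the true root
            return True
    return False

# C4-E4-G4 is a C major triad; G4-B4-D5 is a G major triad.
assert tri_ok(60, 64, 67) and tri_ok(67, 71, 74)
# A5 -> B6 spans 14 semitones and violates SPI.
assert not spi_ok(81, 95)
```

Because each predicted note arrives with its pitch already separated out by note-level encoding, these checks can run directly on the generated stream.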

Fig. 3: The grammar argumented (GA) method. First, we train the LSTM neural network with the original dataset (top). In the first phase of generation, each note is evaluated with the GA rules from music theory, and each nonconforming note is replaced by a conforming note. The resultant amended data is next mixed with the original dataset, and used to retrain the LSTM network. This network then composes the machine-created music output in the second phase of generation.

We conclude this subsection with a very simple illustration of the GA method. In the first phase of generation, the aim is to generate training music that perfectly conforms to the three abovementioned GA rules. When a nonconforming note is predicted, we return to the output layer of the model and resample from the output distribution. This operation is repeated until a GA-conforming note is generated. For example, suppose that the last note in the output score is (eighth, A5), and that the newly predicted note is (eighth, B6). The new note B6 violates SPI because the pitch interval spans 14 semitones, which is larger than the octave interval of 12 semitones. Although B6 may have the highest probability in the output layer, we abandon it and resample until we arrive at a GA-conforming note, which is then added to the output score. After this first phase of generation, we mix all amended data with the original training set, as illustrated in Fig. 3, and then retrain the model for the second phase of actual generation. The amount of GA-conforming data in the training set determines the extent of chromaticity, lyricalness and harmony in the final output music. We emphasize that the simple implementation of these three music theory rules as GA grammars is made possible by note-level encoding.
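The resample-until-conforming step can be sketched as follows. Here `probs` stands for the model's softmax output over the note vocabulary and `is_conforming` is a hypothetical stand-in for the combined DIA/SPI/TRI checks; both names are assumptions for illustration.

```python
import numpy as np

def sample_conforming(probs, is_conforming, rng, max_tries=1000):
    """Draw from the output distribution repeatedly until the sampled
    note index passes the GA check, as in the first generation phase."""
    for _ in range(max_tries):
        idx = rng.choice(len(probs), p=probs)
        if is_conforming(idx):
            return idx
    raise RuntimeError("no conforming note sampled")

# Toy example: the distribution is peaked on index 1, which the (mock)
# rule check rejects; resampling keeps drawing until index 0 or 2 appears.
rng = np.random.default_rng(1)
probs = np.array([0.2, 0.7, 0.1])
idx = sample_conforming(probs, lambda i: i != 1, rng)
assert idx in (0, 2)
```

Note that the nonconforming note may still be the argmax of the distribution; the method deliberately discards it rather than editing the model, which is why the final output remains a pure product of the retrained network.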

III Experiments

Our model consists of one LSTM layer and one fully connected layer. The LSTM layer includes 128 cells with input dimension 89, the length of each note’s binary representation. There are 89 nodes in the fully connected layer, which is also the output layer. We adopt orthogonal initialization for inner cells and Glorot uniform initialization for weights. As suggested by Jozefowicz et al.[26], the forget gate biases are initialized to all ones. The size of our dataset is 30k, which is divided into batches of 64 to speed up the training process. The loss function is categorical cross-entropy, and we use Adam[27] to perform gradient descent optimization, with the learning rate set to 0.001. We build our model on a high-level neural network library (Keras)[28] and use TensorFlow[29] as its tensor manipulation library.

This model was first trained with the original dataset, with the length of the seed phrase set to 7 (notes). The loss stopped decreasing after 400 epochs, and we label the resultant weights as Orig. In the first phase of generation, we used Orig to generate 100k notes for each GA rule and obtained 5759, 5217 and 7931 amended notes respectively. Each group of amended data was then mixed with the original dataset to produce three different sets of new training data. A fourth set of new training data was obtained by mixing all three groups of amended data with the original training data (MIX). The model was then retrained with these new data, yielding four new sets of weights labeled DIA, SPI, TRI and MIX, after the GA rules they conform to. For statistical analysis, we used a public random seed to generate 100k notes with each of the five sets of weights, including Orig. Finally, the second phase of generation was performed with these five sets of weights to produce the actual output music.

IV Results and Evaluation

We first look at a representative segment from the machine’s full composition generated in MIX mode, which encompasses all three GA rules. The music score of this segment is shown in Fig. 4. Evidently, the machine prefers notes in the C major diatonic scale, with only one overtone (E-flat). There is also some rudimentary use of repeating motifs, as in bars 3-4 and 10-12. The machine has also employed rhythmic variations in bars 3, 4, 6 and 12, reminiscent of actual songs. On the whole, the segment is generally lyrical, consistent with the music in the dataset.

Fig. 4: An approximately 100-note segment of the machine’s composition. It was generated in MIX mode, which encompasses all three GA rules.

To quantitatively evaluate the generated music, we put forward three metrics motivated by the GA rules based on music theory (Section II-C): the percentage of notes in the diatonic scale (P_DIA), the percentage of pitch intervals within one octave (P_SPI), and the percentage of triads (P_TRI). These metrics are generically applicable to all types of music, not just music defined by note-level encoding.
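Under the same assumptions as before (notes as MIDI pitch numbers, illustrative rule checks rather than the paper's exact implementation), the three metrics could be computed over a monophonic pitch sequence as follows:

```python
# Illustrative computation of P_DIA, P_SPI and P_TRI on MIDI pitch numbers.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def is_triad(p1, p2, p3):
    """True if the three notes' pitch classes form a major, minor,
    augmented or diminished triad (any inversion)."""
    classes = sorted({p % 12 for p in (p1, p2, p3)})
    if len(classes) != 3:
        return False
    return any(
        tuple(sorted((pc - root) % 12 for pc in classes))[1:]
        in {(4, 7), (3, 7), (4, 8), (3, 6)}
        for root in classes
    )

def evaluate(pitches):
    """Return (P_DIA, P_SPI, P_TRI) in percent for a pitch sequence."""
    n = len(pitches)
    p_dia = 100 * sum(p % 12 in C_MAJOR for p in pitches) / n
    p_spi = 100 * sum(abs(b - a) <= 12
                      for a, b in zip(pitches, pitches[1:])) / (n - 1)
    p_tri = 100 * sum(is_triad(*pitches[i:i + 3])
                      for i in range(n - 2)) / (n - 2)
    return p_dia, p_spi, p_tri

# A rising C major arpeggio satisfies all three rules everywhere.
assert evaluate([60, 64, 67, 72, 76, 79]) == (100.0, 100.0, 100.0)
```

Since the checks use only pitch numbers, the same evaluation can be applied to any symbolic music, whether or not it was produced with note-level encoding.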


Tone            DS     Orig   DIA    SPI    TRI    MIX
C               8.9    6.6    11.7   8.6    6.2    10.8
D               7.8    6.4    12.1   7.9    4.9    9.4
E               9.1    7.5    14.5   9.2    7.8    11.7
F               7.6    7.3    8.2    7.9    7.4    7.4
G               7.0    5.2    10.0   7.3    5.0    8.3
A               6.6    5.4    9.8    7.1    4.7    8.5
B               8.0    7.5    8.6    8.1    6.9    7.9
Total (P_DIA)   54.8   45.9   75.0   56.1   42.9   64.0
TABLE I: P_DIA (%) of the dataset (DS) and the outputs from the five modes

Our results show that music generated in the DIA mode indeed possesses significantly more notes adhering to the diatonic scale. From Table I, which displays the percentages of each of the seven tones in the C major diatonic scale, P_DIA is 29.1 percentage points higher in the DIA mode than in Orig, where the DIA grammar rules have not been applied. Indeed, the DIA GA method can significantly decrease the occurrence of overtones, even though the original dataset contains key changes and departs significantly from the original diatonic scale (as seen from its relatively low P_DIA). Incidentally, the tonic note C is observed to have one of the highest occurrences, in line with expectations from more advanced music theory beyond the GA rules.


DS     Orig   DIA    SPI    TRI    MIX
12.9   14.2   12.3   9.4    13.2   10.2
TABLE II: Percentage of pitch intervals larger than one octave (%) for the dataset (DS) and the outputs from the five modes; a lower value corresponds to a higher P_SPI

In Table II, we tabulate the percentage of pitch intervals larger than one octave for the various mode outputs; a lower percentage here corresponds to a higher P_SPI and a more lyrical composition. Evidently, the SPI and MIX modes produce music with the highest P_SPI: they yield about 30 percent fewer pitch jumps larger than one octave than the Orig mode, whose output was generated before any GA rule had been applied.


Triad type      DS     Orig   DIA    SPI    TRI    MIX
Major           2.3    2.2    2.3    2.4    7.9    5.8
Minor           2.1    2.0    2.3    2.1    7.7    5.6
Augmented       0.0    0.1    0.1    0.2    0.5    0.4
Diminished      0.2    0.3    0.4    0.5    1.3    1.1
Total (P_TRI)   4.6    4.6    5.1    5.3    17.4   12.9
TABLE III: P_TRI (%) of the dataset (DS) and the outputs from the five modes

Our results in Table III show that the music composed in the TRI and MIX modes indeed contains more triads than that of the other modes. P_TRI, the percentage of triads, is computed as the proportion of runs of three consecutive notes forming one of the four types of triads. The TRI mode, in particular, generates music with an almost fourfold increase in the number of triads.

Note that the music composed in MIX mode performs well under all three metrics. This suggests that the three GA rules are not conflicting, but rather are complementary ingredients of lyrical music. We emphasize that although some of the training data satisfies these metrics perfectly by construction, the final output music is generated purely by machine learning, without human intervention.

V Conclusion

By themselves, simple LSTM neural networks cannot generate music that is appealing from the standpoint of music theory. We addressed this problem by augmenting the training data with grammar argumented (GA) machine-generated output. In this way, the machine can be trained to generate music that inherits the naturalness of the original dataset while closely adhering to the major aspects of music theory. Since the GA filters are applied to the training data and not directly to the output, the latter is still generated by a completely bona fide machine learning approach. Our note-level encoding method also allows a more authentic emulation of human composers, as well as providing a natural platform for implementing our grammar argumented method. The generated music generally sounds lyrical, and adheres well to music theory according to the three major criteria we proposed.


The authors would like to thank Chuangjie Ren and Qingpei Liu for helpful discussions on neural networks.


  • [1] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu, “Pixel recurrent neural networks,” arXiv preprint arXiv:1601.06759, 2016.
  • [2] L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style,” arXiv preprint arXiv:1508.06576, 2015.
  • [3] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra, “Draw: A recurrent neural network for image generation,” arXiv preprint arXiv:1502.04623, 2015.
  • [4] A. Graves, “Generating sequences with recurrent neural networks,” arXiv preprint arXiv:1308.0850, 2013.
  • [5] G. M. Rader, “A method for composing simple traditional music by computer,” Communications of the ACM, vol. 17, no. 11, pp. 631–638, 1974.
  • [6] J. D. Fernández and F. Vico, “AI methods in algorithmic composition: a comprehensive survey,” Journal of Artificial Intelligence Research, vol. 48, no. 48, pp. 513–582, 2013.
  • [7] K. Thywissen, “Genotator: an environment for exploring the application of evolutionary techniques in computer-assisted composition,” Organised Sound, vol. 4, no. 2, pp. 127–133, 1999.
  • [8] D. Cope, “Computer modeling of musical intelligence in EMI,” Computer Music Journal, vol. 16, no. 16, pp. 69–87, 1992.
  • [9] M. Allan, “Harmonising chorales in the style of Johann Sebastian Bach,” Master’s Thesis, School of Informatics, University of Edinburgh, 2002.
  • [10] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, 2016.
  • [11] P. M. Todd, “A connectionist approach to algorithmic composition,” Computer Music Journal, vol. 13, no. 4, pp. 27–43, 1989.
  • [12] M. C. MOZER, “Neural network music composition by prediction: Exploring the benefits of psychoacoustic constraints and multi-scale processing,” Connection Science, vol. 6, no. 2-3, pp. 247–280, 1994.
  • [13] N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent, “Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription,” arXiv preprint arXiv:1206.6392, 2012.
  • [14] S. Wermter, C. Weber, W. Duch, T. Honkela, and P. Koprinkovahristova, “Artificial neural networks and machine learning – icann 2014,” Lecture Notes in Computer Science, vol. 8681, 2014.
  • [15] J. A. Franklin, “Recurrent neural networks for music computation,” Informs Journal on Computing, vol. 18, no. 3, pp. 321–338, 2006.
  • [16] D. Eck and J. Lapalme, “Learning musical structure directly from sequences of music,” University of Montreal, Department of Computer Science, CP, vol. 6128, 2008.
  • [17] N. Jaques, S. Gu, R. E. Turner, and D. Eck, “Tuning recurrent neural networks with reinforcement learning,” arXiv preprint arXiv:1611.02796, 2016.
  • [18] M. C. Mozer, “Neural network music composition by prediction: Exploring the benefits of psychoacoustic constraints and multi-scale processing,” Connection Science, vol. 6, no. 2-3, pp. 247–280, 1994.
  • [19] J. A. Franklin, “Recurrent neural networks for music computation,” INFORMS Journal on Computing, vol. 18, no. 3, pp. 321–338, 2006.
  • [20] K. Goel, R. Vohra, and J. Sahoo, “Polyphonic music generation by modeling temporal dependencies using a rnn-dbn,” in Artificial Neural Networks and Machine Learning–ICANN 2014.   Springer, 2014, pp. 217–224.
  • [21] D. Eck and J. Schmidhuber, “Finding temporal structure in music: Blues improvisation with lstm recurrent networks,” in Neural Networks for Signal Processing, 2002. Proceedings of the 2002 12th IEEE Workshop on.   IEEE, 2002, pp. 747–756.
  • [22] Q. Lyu, Z. Wu, and J. Zhu, “Polyphonic music modelling with lstm-rtrbm,” in Proceedings of the 23rd Annual ACM Conference on Multimedia Conference.   ACM, 2015, pp. 991–994.
  • [23] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
  • [24] A. Graves and J. Schmidhuber, “Framewise phoneme classification with bidirectional lstm and other neural network architectures,” Neural Networks, vol. 18, no. 5, pp. 602–610, 2005.
  • [25] J. P. Clendinning and E. W. Marvin, The Musician’s Guide to Theory and Analysis (Third Edition).   WW Norton & Company, 2016.
  • [26] R. Jozefowicz, W. Zaremba, and I. Sutskever, “An empirical exploration of recurrent network architectures,” in Proceedings of The 32nd International Conference on Machine Learning, 2015, pp. 2342–2350.
  • [27] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [28] F. Chollet, “Keras,” 2015.
  • [29] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015. Software available from tensorflow.org.