Learning Interpretable Musical Compositional Rules and Traces

06/17/2016 · by Haizi Yu, et al. · University of Illinois at Urbana-Champaign

Throughout music history, theorists have identified and documented interpretable rules that capture the decisions of composers. This paper asks, "Can a machine behave like a music theorist?" It presents MUS-ROVER, a self-learning system for automatically discovering rules from symbolic music. MUS-ROVER performs feature learning via n-gram models to extract compositional rules --- statistical patterns over the resulting features. We evaluate MUS-ROVER on Bach's (SATB) chorales, demonstrating that it can recover known rules, as well as identify new, characteristic patterns for further study. We discuss how the extracted rules can be used in both machine and human composition.




1 Introduction

For centuries, music theorists have developed concepts and rules to describe the regularity in music compositions. Pedagogues have documented commonly agreed-upon compositional rules in textbooks (e.g., Gradus ad Parnassum) to teach composition. With recent advances in artificial intelligence, computer scientists have translated these rules into programs that automatically generate music in different styles Cope (1996); Biles (1994). This paper studies the reverse of this pedagogical process and poses the question: can a machine independently extract, from symbolic music data, compositional rules that are instructive to both machines and humans?

This paper presents MUS-ROVER, a self-learning system for discovering compositional rules from raw music data (i.e., pitches and their durations). Its rule-learning process is implemented through an iterative loop between a generative model — “the student” — that emulates the input’s musical style by satisfying a set of learned rules, and a discriminative model — “the teacher” — that proposes additional rules to guide the student closer to the target style. The self-learning loop produces a rule book and a set of reading instructions customized for different types of users.

MUS-ROVER is currently designed to extract rules from four-part music for single-line instruments. We represent compositional rules as probability distributions over features abstracted from the raw music data. MUS-ROVER leverages an evolving series of n-gram models over these higher-level feature spaces to capture potential rules from both the horizontal and vertical dimensions of the texture.

We train the system on Bach’s (SATB) chorales (transposed to C), which have been an attractive corpus for analyzing knowledge of voice leading, counterpoint, and tonality due to their relative uniformity of rhythm Taube (1999); Rohrmeier & Cross (2008). We demonstrate that MUS-ROVER is able to automatically recover compositional rules for these chorales that have been previously identified by music theorists. In addition, we present new, human-interpretable rules discovered by MUS-ROVER that are characteristic of Bach’s chorales. Finally, we discuss how the extracted rules can be used in both machine and human composition.

2 Related Work

Researchers have built expert systems for automatically analyzing and generating music. Many analyzers leverage predefined concepts (e.g., chord, inversion, functionality) to annotate music parameters in a pedagogical process Taube (1999), or statistically measure a genre's accordance with standard music theory Rohrmeier & Cross (2008). Similarly, automatic song writers such as EMI Cope (1996) and GenJam Biles (1994) rely on explicit, ad hoc coding of known rules to generate new compositions Merz (2014).

In contrast, other systems generate music by learning statistical models, such as HMMs and neural networks, that capture domain knowledge (patterns) from data Simon et al. (2008); Mozer (1994). Recent advances in deep learning take a step further, enabling knowledge discovery via feature learning directly from raw data Bengio (2009); Bengio et al. (2013); Rajanna et al. (2015). However, the learned high-level features are implicit and non-symbolic, admit only post-hoc interpretations, and are often not directly comprehensible or evaluable.

MUS-ROVER both automatically extracts rules from raw data, without prior encoding of any domain knowledge, and ensures that the rules are interpretable by humans. Interpretable machine learning has studied systems with similar goals in other domains Malioutov & Varshney (2013); Dash et al. (2015).

For readers of varying musical backgrounds, the music terminology used throughout this paper is covered by any standard textbook on music theory Laitz (2012).

3 The Rule-Learning System

We encode the raw representation of a chorale, pitches and their durations, symbolically as a four-row matrix whose entries are MIDI numbers. The rows represent the horizontal melodies in each voice. The columns represent the vertical sonorities, where each column spans a unit duration equal to the greatest common divisor (gcd) of the note durations in the piece.
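As a concrete illustration, this encoding can be sketched in Python. The note lists and integer tick durations below are hypothetical toy data, not taken from the corpus:

```python
from math import gcd
from functools import reduce

# Hypothetical input: one (pitch, duration) list per voice (S, A, T, B),
# with pitches as MIDI numbers and durations in integer ticks.
voices = [
    [(67, 2), (69, 2)],  # soprano
    [(64, 2), (65, 2)],  # alto
    [(60, 2), (60, 2)],  # tenor
    [(48, 4)],           # bass
]

def to_matrix(voices):
    """Expand each voice so every column spans one gcd-length time unit."""
    unit = reduce(gcd, (d for voice in voices for _, d in voice))
    return [
        [p for p, d in voice for _ in range(d // unit)]
        for voice in voices
    ]

matrix = to_matrix(voices)
# Each of the 4 rows is a melody; each column is a vertical sonority.
# Here the bass half note spans two unit-duration columns: [48, 48].
```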

MUS-ROVER extracts compositional rules, i.e., probability distributions over learned features, from both the horizontal and vertical dimensions of the texture. MUS-ROVER prioritizes vertical rule extraction via a self-learning loop, and learns horizontal rules through a series of evolving n-grams.

3.1 Self-Learning Loop

The self-learning loop identifies vertical rules about the construction of sonorities (chords, in traditional harmony). Its two main components are "the student," a generative model that applies rules, and "the teacher," a discriminative model that extracts rules. The loop executes iteratively, starting with an empty rule set and an unconstrained student who picks pitches uniformly at random. In each iteration, the teacher compares the student's writing style with Bach's works and extracts a new rule that augments the current rule set. The augmented rule set is then used to retrain the student, and the updated student is sent to the teacher for the next iteration.
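A minimal sketch of the loop's control flow, with the teacher and student reduced to placeholder callables (the real models solve the optimization problems given below in this section):

```python
def self_learning_loop(pick_rule, train_student, n_iters):
    """Skeleton of the teacher-student loop. `pick_rule` (the teacher)
    and `train_student` (the student) stand in for the paper's
    discriminative and generative models."""
    rules = []                       # start with an empty rule set
    student = train_student(rules)   # unconstrained initial student
    for _ in range(n_iters):
        rule = pick_rule(student, rules)  # teacher proposes a new rule
        rules.append(rule)                # augment the rule set
        student = train_student(rules)    # retrain the student
    return rules, student

# Toy demo: "rules" are consecutive integers; the "student" is the rule count.
rules, student = self_learning_loop(
    pick_rule=lambda s, rs: len(rs) + 1,
    train_student=lambda rs: len(rs),
    n_iters=3,
)
# rules == [1, 2, 3]; student == 3
```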

The student in the i-th iteration is trained on the rule set {r_1, ..., r_i} via the following optimization problem for the student's probabilistic model p:

  maximize    H_q(p)                    (1)
  subject to  p ∈ C_j,  j = 1, ..., i,

where p ∈ C_j requires p to satisfy the j-th rule, with C_j being the constraint set derived from r_j, and the objective H_q(p) is a Tsallis entropy, which achieves its maximum when p is uniform and its minimum when p is deterministic. The constrained maximization of H_q(p) thus disperses probability mass across all the rule-satisfying possibilities and encourages creativity through randomness.
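As a toy illustration of the student's problem: if rules are reduced to hard support constraints (a simplification; the paper maximizes a Tsallis entropy under distributional constraints), the entropy-maximizing student is simply uniform over the rule-satisfying outcomes:

```python
from fractions import Fraction

def max_entropy_student(universe, constraint_sets):
    """With hard (support) constraints, the entropy-maximizing
    distribution is uniform over the outcomes satisfying every rule.
    This hard-constraint special case is for illustration only."""
    allowed = [x for x in universe if all(x in c for c in constraint_sets)]
    p = Fraction(1, len(allowed))
    return {x: p for x in allowed}

# Toy universe of (soprano, bass) pairs; one rule: the soprano-bass
# interval class must be consonant (a hypothetical example rule).
universe = [(60, b) for b in range(48, 60)]
consonant = {0, 3, 4, 7, 8, 9}
rule = {s for s in universe if (s[0] - s[1]) % 12 in consonant}
student = max_entropy_student(universe, [rule])
# student is uniform (probability 1/6) over the six consonant pairs.
```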

The teacher in the i-th iteration solves the following optimization problem over candidate features φ:

  maximize    s(φ)                      (2)
  subject to  φ ∈ Φ \ {φ_1, ..., φ_i},

where the objective s(φ) is a scoring function that selects a feature whose distribution is both regular (small entropy) and discriminative against the student (large divergence); the constraint signifies that the candidates are the as-yet-unlearned high-level features. MUS-ROVER highlights the automaticity and interpretability of feature generation. It constructs the universe Φ of all features via the combinatorial enumeration of selection windows and basis features (descriptors).


The descriptor set is hand-designed but does not require domain knowledge, leveraging only basic observations such as distance and periodicity, as well as the ordering of pitches. By contrast, the window set is machine-enumerated to ensure exploration capacity. The construction of Φ guarantees interpretability for all features. For instance, one can read out the feature pairing the soprano-bass window with the mod-12 distance descriptor as the piano-key distance modulo 12 (interval class) between the soprano and bass pitches.
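A sketch of this enumeration in Python; the descriptor names and definitions here are illustrative stand-ins for the hand-designed basis, not the paper's exact set:

```python
from itertools import combinations

# Hypothetical descriptors: basic, knowledge-free maps on pitch tuples.
descriptors = {
    "pitch":   lambda ps: ps,                         # raw pitches
    "pitch12": lambda ps: tuple(p % 12 for p in ps),  # periodicity (pitch class)
    "interv":  lambda ps: tuple(b - a for a, b in zip(ps, ps[1:])),  # distances
    "order":   lambda ps: tuple(sorted(range(len(ps)),
                                       key=lambda i: ps[i])),  # pitch ordering
}

# Machine-enumerated windows: every nonempty subset of the voices (1=S ... 4=B).
windows = [w for r in range(1, 5) for w in combinations((1, 2, 3, 4), r)]

# The feature universe is the cross product of windows and descriptors.
universe = [(w, name) for w in windows for name in descriptors]
# e.g., ((1, 4), "pitch12") reads as: pitch classes of the soprano-bass pair.
```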

This idea of "learning by comparison" and the collaboration between a generative and a discriminative model appear similarly in statistical methods such as noise-contrastive estimation Gutmann & Hyvärinen (2010) and generative adversarial networks Goodfellow et al. (2014). Both of those methods focus on density estimation, approximating the true data distribution in order to generate similar data; in contrast, our method aims to explain the underlying mechanisms that generate the data distribution, such as the compositional rules that produce Bach's style.

3.2 Evolving n-grams on Feature Spaces

MUS-ROVER employs a series of n-gram models (with words being vertical features) to extract horizontal rules that govern the transitions of sonority features. Every n-gram encapsulates copies of the self-learning loop to accomplish rule extraction in its context. Starting with a unigram, MUS-ROVER gradually evolves to higher-order n-grams by initializing an n-gram student from the latest (n-1)-gram student. While the unigram model captures only vertical rules, such as the concepts of intervals and triads, the bigram model searches for rules about sonority progressions, such as parallel/contrary motion.

MUS-ROVER's n-gram models operate on high-level feature spaces, in stark contrast with many other n-gram applications in which the words are raw inputs. In other words, a higher-order n-gram in MUS-ROVER shows how vertical features (high-level abstractions) transition horizontally, as opposed to how a specific chord is followed by other chords (low-level details). MUS-ROVER therefore does not suffer from low-level variations in the raw inputs, giving it greater generalizability.
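A toy sketch of estimating one such transition rule by counting feature bigrams (a plain count-based estimator, not the paper's constrained training procedure; the columns below are invented):

```python
from collections import Counter, defaultdict

def bigram_rule(columns, feature):
    """Estimate a bigram rule P(f_t | f_{t-1}) for one vertical feature
    from a sequence of sonority columns, by normalized transition counts."""
    values = [feature(col) for col in columns]
    counts = defaultdict(Counter)
    for prev, cur in zip(values, values[1:]):
        counts[prev][cur] += 1
    return {prev: {v: c / sum(ctr.values()) for v, c in ctr.items()}
            for prev, ctr in counts.items()}

# Toy columns (S, A, T, B); feature = interval class between soprano and bass.
ic_sb = lambda col: (col[0] - col[3]) % 12
columns = [(67, 64, 60, 48), (65, 62, 58, 50), (64, 60, 55, 48)]
rule = bigram_rule(columns, ic_sb)
# rule == {7: {3: 1.0}, 3: {4: 1.0}}: transitions over interval classes,
# not over specific chords, so low-level pitch variations are abstracted away.
```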

4 Learning Rules from Bach’s Chorales

4.1 A Rule Book on Bach's Chorales

The rule book contains 63 unigram rules resulting from the feature set, all of which are probability distributions of the associated features. Figure 1 illustrates two examples of unigram rules. The first example (top) considers the soprano voice, and its descriptor is semantically equivalent to pitch class (p.c.). It shows a partition into two p.c. sets, which says that the soprano line is built on a diatonic scale. The second example (bottom) considers the soprano and bass, and its descriptor is semantically equivalent to interval class (i.c.). It recovers our definition of intervallic quality: consonance versus dissonance.

Figure 1: Unigram rule examples from Bach’s chorales.

Given a feature, a bigram rule is represented by the feature's transition distribution. Due to the large number of conditionals for each feature, the book contains many more bigram rules than unigram rules. Figure 2 illustrates two examples of bigram rules, both associated with the interval class between soprano and bass (SB: i.c.). Comparing the top bigram rule in Figure 2 with the bottom unigram rule in Figure 1 shows the redistribution of probability mass for this feature. The dramatic drop of the octave's conditional probability recovers the rule that avoids parallel P8s, while the rises of certain interval classes and their inversions suggest the usage of passing/neighbor tones (PT/NTs). The bottom rule in Figure 2 illustrates resolution, an important technique in tonal harmony, which says that tritones (TTs) are most often resolved to particular interval classes and their inversions. Interestingly, the fifth peak in the pmf of this rule reveals an observation that does not fall into the category of resolution. This transition is similar to the notion of an escape tone (ET), which suspends the tension, rather than directly resolving it, before an eventual resolution. All of these rules are automatically identified during rule-learning.

Figure 2: Bigram rule examples from Bach’s chorales.

4.2 Customized Rule-Learning Traces

Despite the readability of every single rule, the rule book is in general hard to read as a whole due to its length and lack of organization. MUS-ROVER's self-learning loop addresses both challenges by offering customized rule-learning traces, i.e., ordered rule sequences resulting from its iterative extraction. MUS-ROVER thus not only outputs a comprehensive rule book but, more crucially, suggests ways to read it, tailored to different types of students.

We propose two criteria, efficiency and memorability, to assess a rule-learning trace from the unigram model. Efficiency measures the speed of approaching Bach's style; memorability measures the complexity of memorizing the rules. A good trace is both efficient in imitation and easy to memorize.

To formalize these two notions, we first define a rule-learning trace as the ordered list of rules in the rule set, and quantify the gap against Bach by the KL divergence between Bach's distribution and the student's in the raw feature space. The efficiency of a trace, at a given efficiency level, is defined as the minimum number of iterations needed to achieve a student whose gap to Bach falls below that level, if possible.

The memorability of a trace is defined as the average entropy Pape et al. (2015) of the feature distributions of its first few efficient rules. There is a tradeoff between efficiency and memorability. At one extreme, it is most efficient to memorize Bach's raw distribution itself, which takes only one step to achieve a zero gap, but is far too complicated to memorize or learn. At the other extreme, it is easiest to memorize the rules for ordering-related features, which are (nearly) deterministic but less useful, since memorizing the orderings takes you acoustically no closer to Bach. The balancing parameter in the scoring function of (2) is specifically designed to trade these off, with smaller values favoring memorability and larger values favoring efficiency (Table 1).
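The two quantities can be sketched numerically; the toy distributions below are invented for illustration, and Shannon entropy stands in for the paper's average-entropy measure:

```python
from math import log2

def kl(p, q):
    """D_KL(p || q) in bits, for dicts over a shared support."""
    return sum(px * log2(px / q[x]) for x, px in p.items() if px > 0)

def entropy(p):
    """Shannon entropy in bits (stand-in for the memorability measure)."""
    return -sum(px * log2(px) for px in p.values() if px > 0)

# Memorability: an ordering rule is nearly deterministic (low entropy),
# while a flat pitch-class rule is not.
ordering_rule = {"SATB": 0.97, "other": 0.03}
pitch_rule = {pc: 1 / 12 for pc in range(12)}

# Efficiency: the gap to Bach shrinks as the student's feature
# distribution approaches Bach's (all distributions here are toys).
bach = {0: 0.5, 7: 0.5}
student_early = {0: 0.25, 7: 0.25, 1: 0.25, 6: 0.25}
student_late = {0: 0.45, 7: 0.45, 1: 0.05, 6: 0.05}
gap_early = kl(bach, student_early)  # 1.0 bit
gap_late = kl(bach, student_late)    # about 0.15 bits
```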

Unigram Rule-Learning Traces

  Iter | Trace 1        | Trace 2        | Trace 3
  1    | (1,4), order   | (1,4), order   | (1,2,3), pitch
  2    | (1,3), order   | (1,3), order   | (2,3,4), pitch
  3    | (2,4), order   | (2,4), order   | (1,2,3,4), pitch12
  4    | (1,2), order   | (1,2), order   | (1,3,4), pitch
  5    | (2,3), order   | (2,3,4), order | (1,2,4), pitch
  6    | (3,4), order   | (1,3,4), pitch | (1,2,3,4), interv

Table 1: Three unigram rule-learning traces under different values of the balancing parameter. The top figure shows the footprints that mark the diminishing gaps. The bottom table records the first six rules of each trace, illustrating the tradeoff between efficiency and memorability. The trace built on pitch features (Trace 3) shows the most efficiency, but the least memorability.

To study the rule-entangling problem, we generalize the notion of the gap from the raw feature to all high-level features.

Plotting the footprints of the diminishing gaps for a given feature reveals whether its associated rule is (possibly) implied by other rules. For instance, Figure 3 shows two sets of footprints for two different features. By starring the iteration at which the rule of interest is actually learned, we see that one rule cannot be implied by the previous rules, since learning it dramatically closes the gap; on the contrary, the other rule can be implied by the first seven or eight rules.

Figure 3: Rule entanglement: two sets of footprints that mark the diminishing gaps, both from the same rule-learning trace. The location of the star shows whether the associated rule is entangled (right) or not (left).

Given a rule-learning trace in the bigram setting, the analysis of efficiency and memorability, as well as feature entanglement, remains the same. However, every trace from the bigram model is generated as a continuation of unigram learning: the bigram student is initialized from the latest unigram student. This implies that the bigram rule set is initialized from the unigram rule set rather than from an empty set. MUS-ROVER uses the extracted bigram rules to overwrite their unigram counterparts (rules with the same features), highlighting the differences between the two language models. The comparison between a bigram rule and its unigram counterpart is key to recovering rules that are otherwise unnoticeable from the bigram rule alone, such as "parallel P8s are avoided!" MUS-ROVER thus emphasizes the necessity of tracking a series of evolving n-grams, as opposed to learning only from the highest possible order.

5 Discussion and Future Work

MUS-ROVER takes a first step toward automatic knowledge discovery in music, and opens many directions for future work. Its outputs, the rule book and the learning traces, serve as static and dynamic signatures of an input style. We plan to extend MUS-ROVER beyond chorales, so that we can analyze similarities and differences among genres through these signatures, opening opportunities for style mixing. Moreover, while this paper depicts MUS-ROVER as a fully automated system, a human student could take the place of the generative component, interacting with "the teacher" to get iterative feedback on his/her compositions.

A more detailed version of this work will appear elsewhere Yu et al. (2016).


Acknowledgments

We thank Professor Heinrich Taube, President of Illiac Software, Inc., for providing Harmonia's MusicXML corpus of Bach's chorales (http://www.illiacsoftware.com/harmonia).


References

  • Bengio (2009) Bengio, Yoshua. Learning deep architectures for AI. Found. Trends Mach. Learn., 2(1):1–127, 2009.
  • Bengio et al. (2013) Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828, 2013.
  • Biles (1994) Biles, John. GenJam: A genetic algorithm for generating jazz solos. In Proc. ICMC, pp. 131–131, 1994.
  • Cope (1996) Cope, David. Experiments in musical intelligence, volume 12. AR editions Madison, WI, 1996.
  • Dash et al. (2015) Dash, Sanjeeb, Malioutov, Dmitry M, and Varshney, Kush R. Learning interpretable classification rules using sequential rowsampling. In Proc. ICASSP, pp. 3337–3341, 2015.
  • Goodfellow et al. (2014) Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In Proc. NIPS, pp. 2672–2680, 2014.
  • Gutmann & Hyvärinen (2010) Gutmann, Michael and Hyvärinen, Aapo. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proc. AISTATS, pp. 297–304, 2010.
  • Laitz (2012) Laitz, Steven Geoffrey. The complete musician: an integrated approach to tonal theory, analysis, and listening. Oxford University Press, 2012.
  • Malioutov & Varshney (2013) Malioutov, Dmitry and Varshney, Kush. Exact rule learning via boolean compressed sensing. In Proc. ICML, pp. 765–773, 2013.
  • Merz (2014) Merz, Evan X. Implications of ad hoc artificial intelligence in music. In Proc. AIIDE, 2014.
  • Mozer (1994) Mozer, Michael C. Neural network music composition by prediction: Exploring the benefits of psychoacoustic constraints and multi-scale processing. Conn. Sci., 6(2-3):247–280, 1994.
  • Pape et al. (2015) Pape, Andreas D, Kurtz, Kenneth J, and Sayama, Hiroki. Complexity measures and concept learning. J. Math. Psychol., 64:66–75, 2015.
  • Rajanna et al. (2015) Rajanna, Arjun Raj, Aryafar, Kamelia, Shokoufandeh, Ali, and Ptucha, Raymond. Deep neural networks: A case study for music genre classification. In Proc. ICMLA, pp. 655–660, 2015.
  • Rohrmeier & Cross (2008) Rohrmeier, Martin and Cross, Ian. Statistical properties of tonal harmony in Bach's chorales. In Proc. ICMPC, pp. 619–627, 2008.
  • Simon et al. (2008) Simon, Ian, Morris, Dan, and Basu, Sumit. MySong: automatic accompaniment generation for vocal melodies. In Proc. CHI, pp. 725–734, 2008.
  • Taube (1999) Taube, Heinrich. Automatic tonal analysis: Toward the implementation of a music theory workbench. Comput. Music J., 23(4):18–32, 1999.
  • Yu et al. (2016) Yu, Haizi, Varshney, Lav R, Garnett, Guy E, and Kumar, Ranjitha. MUS-ROVER: A self-learning system for musical compositional rules. In Proc. MUME, 2016. to appear.