1 Introduction

We’ve all had the experience of hearing a piece of music that we’ve never heard before, yet immediately recognizing the composer based on the piece’s style. This paper explores this phenomenon in the context of sheet music. The question that we want to answer is: “Can we predict the composer of a previously unseen page of piano sheet music based on its compositional style?”
Many previous works have studied the composer classification problem. These works generally fall into one of two categories. The first category of approach is to construct a set of features from the music, and then feed the features into a classifier. Many works use manually designed features that capture musically meaningful information. Other works feed minimally preprocessed representations of the data (e.g. 2-D piano rolls or tensors encoding note pitch and duration information) into a convolutional model, and allow the model to learn a useful feature representation. The second category of approach is to train one model for each composer, and then select the model that has the highest likelihood of generating a given sequence of music. Common approaches in this category include N-gram language models and Markov models.
Our approach to the composer classification task addresses what we perceive to be the biggest common obstacle to the above approaches: lack of data. All of the above approaches assume that the input is in the form of a symbolic music file (e.g. MIDI or **kern). Because symbolic music formats are much less widely used than audio, video, and image formats, the amount of training data that is available is quite limited. We address this issue of data scarcity in two ways: (1) we re-define the composer classification task to be based on sheet music images, for which there is a lot of data available online, and (2) we propose an approach that can be trained on unlabeled data.
Our work takes advantage of recent developments in transfer learning in the natural language processing (NLP) community. Prior to 2017, transfer learning in NLP was done in a limited way. Typically, one would use pretrained word embeddings such as word2vec or GloVe vectors as the first layer in a model. The problem with this paradigm of transfer learning is that the entire model except the first layer needs to be trained from scratch, which requires a large amount of labeled data. This is in contrast to the paradigm of transfer learning in computer vision, where a model is trained on the ImageNet classification task , the final layer is replaced with a different linear classifier, and the model is finetuned for a different task. The benefit of this latter paradigm of transfer learning is that the entire model except the last layer is pretrained, so it can be finetuned with only a small amount of labeled data. This paradigm of transfer learning has been widely used in computer vision in the last decade  using pretrained models like VGG , ResNet , Densenet , etc. The switch to ImageNet-style transfer learning in the NLP community occurred in 2017, when Howard et al.  proposed a way to pretrain an LSTM-based language model on a large set of unlabeled data, add a classification head on top of the language model, and then finetune the classifier on a new task with a small amount of labeled data. This was quickly followed by several other similar language model pretraining approaches that replaced the LSTM with transformer-based architectures (e.g. GPT , GPT-2 , BERT ). These pretrained language models have provided the basis for achieving state-of-the-art results on a variety of NLP tasks, and have been extended in various ways (e.g. Transformer-XL , XLNet ).
Our approach is similarly based on language model pretraining. We first convert each sheet music image into a sequence of words based on the bootleg score feature representation. We then feed this sequence of words into a text classifier. We show that it is possible to significantly improve the performance of the classifier by training a language model on a large set of unlabeled data, initializing the classifier with the pretrained language model weights, and finetuning the classifier on a small amount of labeled data. In our experiments, we train language models on all piano sheet music images in the International Music Score Library Project (IMSLP; http://imslp.org/) using the AWD-LSTM, GPT-2, and RoBERTa language model architectures. By using pretraining, we are able to improve the accuracy of our GPT-2 model from 46% to 70% on a 9-way classification task. Code can be found at https://github.com/tjtsai/PianoStyleEmbedding.
2 System Description
We will describe our system in the next four subsections. In the first subsection, we give a high-level overview and rationale behind our approach. In the following three subsections, we describe the three main stages of system development: language model pretraining, classifier finetuning, and inference.
2.1 Overview

Figure 1 summarizes our training approach. In the first stage, we convert each sheet music image into a sequence of words based on the bootleg score representation, and then train a language model on these words. Since this task does not require labels, we can train our language model on a large set of unlabeled data. In this work, we train our language model on all piano sheet music images in the IMSLP dataset. In the second stage, we train a classifier that predicts the composer of a short fragment of music, where the fragment is a fixed-length sequence of symbolic words. We do this by adding one or more dense layers on top of the language model, initializing the weights of the classifier with the language model weights, and then finetuning the model on a set of labeled data. In the third stage, we use the classifier to predict the composer of an unseen scanned page of piano sheet music. We do this by converting the sheet music image to a sequence of symbolic words, and then either (a) applying the classifier to a single variable-length input sequence, or (b) averaging the predictions of fixed-length crops sampled from the input sequence. We will describe each of these three stages in more detail in the following three subsections.
The guiding principle behind our approach is to maximize the amount of data. This impacts our approach in three significant ways. First, it informs our choice of data format. Rather than using symbolic scores (as in previous approaches), we instead choose to use raw sheet music images. While this arguably makes the task much more challenging, it has the benefit of having much more data available online. Second, we choose an approach that can utilize unlabeled data. Whereas labeled data is usually expensive to annotate and limited in quantity, unlabeled data is often extremely cheap and available in abundance. By adopting an approach that can use unlabeled data, we can drastically increase the amount of data available to train our models. Third, we use data augmentation to make the most of the limited quantity of labeled data that we do have. Rather than fixating on the page classification task, we instead define a proxy task where the goal is to predict the composer given a fixed-length sequence of symbolic words. By defining the proxy task in this way, we can aggressively subsample fragments from the labeled data, resulting in a much larger number of unique training data points than there are actual pages of sheet music. Once the proxy task classifier has been trained, we can apply it to the full page classification task in a straightforward manner.
2.2 Language Model Pretraining
The language model pretraining consists of three steps, as shown in the upper half of Figure 1. These three steps will be described in the next three paragraphs.
The first step is to convert the sheet music image into a bootleg score. The bootleg score is a low-dimensional feature representation of piano sheet music that encodes the position of filled noteheads relative to the staff lines. Figure 2 shows an example of a section of sheet music and its corresponding bootleg score representation. The bootleg score itself is a binary matrix with 62 rows, corresponding to the total number of possible staff line positions in both the left and right hands, and one column per estimated simultaneous note onset event. Note that the representation discards a significant amount of information: it does not encode note duration, key signature, time signature, measure boundaries, accidentals, clef changes, or octave markings, and it simply ignores non-filled noteheads (e.g. half or whole notes). Nonetheless, it has been shown to be effective in aligning sheet music and MIDI, and we hypothesize that it may also be useful in characterizing piano style. The main benefit of using the bootleg score representation over a full optical music recognition (OMR) pipeline is processing time: computing a bootleg score takes only about one second per page using a CPU, which makes it suitable for computing features on the entire IMSLP dataset. (In contrast, the best performing music object detectors take 40-80 seconds to process each page at inference time using a GPU.) We use the original bootleg score code as a fixed feature extractor to compute the bootleg scores.
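As a concrete illustration, the bootleg score can be sketched as a simple binary matrix with 62 rows (one per staff line position) and one column per note onset event. The helper function and the position indices below are purely illustrative, not the actual staff-position mapping used by the feature extractor.

```python
import numpy as np

def make_bootleg_score(events, n_positions=62):
    """Build a binary bootleg score matrix.

    events: a list with one entry per simultaneous note onset event,
            each entry being a set of staff line position indices (0-61)
            at which a filled notehead appears.
    Returns a (62, num_events) uint8 matrix.
    """
    score = np.zeros((n_positions, len(events)), dtype=np.uint8)
    for col, positions in enumerate(events):
        for p in positions:
            score[p, col] = 1
    return score

# A three-event fragment: a single note, a two-note chord, a single note.
bs = make_bootleg_score([{30}, {22, 26}, {35}])
print(bs.shape)  # (62, 3)
```

Each column of this matrix is what the tokenization step below treats as one "word".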
The second step is to tokenize the bootleg score into a sequence of word or subword units. We do this differently for different language models. For word-based language models (e.g. AWD-LSTM), we consider each bootleg score column as a single word consisting of a 62-character string of 0s and 1s. We limit the vocabulary to the most frequent words, and map infrequent words to a special unknown word token <unk>. For subword-based language models (e.g. GPT-2, RoBERTa), we use a byte pair encoding (BPE) algorithm to learn a vocabulary of subword units in an unsupervised manner. The BPE algorithm starts with an initial set of subword units (e.g. the set of unique characters or the unique byte values that comprise unicode characters), and it iteratively merges the most frequently occurring pair of adjacent subword units until a desired vocabulary size has been reached. We experimented with both character-level and byte-level encoding schemes (i.e. representing each word as a string of 62 characters vs. a sequence of 8 bytes), and we found that the byte-level encoding scheme performs much better. We only report results with the byte-level BPE tokenizer. For both subword-based language models explored in this work, we use the same shared BPE tokenizer, with the same vocabulary size used in the RoBERTa model. At the end of the second step, we have represented the sheet music image as a sequence of words or subword units.
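To make the byte-level encoding concrete, here is a minimal sketch of packing one bootleg score column (a 62-character string of 0s and 1s) into 8 bytes, over which a byte-level BPE tokenizer could then operate. The bit order and zero-padding convention here are our assumptions, not necessarily those used by the actual tokenizer.

```python
def column_to_bytes(column_bits: str) -> bytes:
    """Pack a 62-character binary string into 8 bytes (62 bits
    zero-padded to 64 bits, most significant bit first)."""
    assert len(column_bits) == 62
    value = int(column_bits, 2)      # interpret the column as a 62-bit integer
    return value.to_bytes(8, "big")  # zero-pad to 64 bits = 8 bytes

print(column_to_bytes("0" * 62))  # b'\x00\x00\x00\x00\x00\x00\x00\x00'
```

The resulting byte sequences play the role that raw UTF-8 bytes play in the original byte-level BPE setting.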
The third step is to train a language model on a set of unlabeled data. In this work, we explore three different language models, which are representative of state-of-the-art models in the last 3-4 years. The top half of Figure 3 shows a high-level overview of these three language models. The first model is AWD-LSTM. This is a 3-layer LSTM architecture that makes heavy use of regularization techniques throughout the model, including four different types of dropout. The output of the final LSTM layer is fed to a linear decoder whose weights are tied to the input embedding matrix. This produces an output distribution across the tokens in the vocabulary. The model is then trained to predict the next token at each time step. We use the fastai implementation of the AWD-LSTM model with default parameters. The second model is OpenAI’s GPT-2. This architecture consists of multiple transformer decoder layers. Each transformer decoder layer consists of masked self-attention, along with feedforward layers, layer normalizations, and residual connections. While transformer encoder layers allow each token to attend to all other tokens in the input, transformer decoder layers only allow a token to attend to previous tokens. (This is because, in the original machine translation task, the decoder generates the output sentence autoregressively.) Similar to the AWD-LSTM model, the outputs of the last transformer layer are fed to a linear decoder whose weights are tied to the input embeddings, and the model is trained to predict the next token at each time step. We use the huggingface implementation of the GPT-2 model with default parameters, except that we reduce the vocabulary size to match the RoBERTa tokenizer, the amount of context from 1024 to 512, and the number of layers from 12 to 6. The third model is RoBERTa, which is based on Google’s BERT language model. This architecture consists of multiple transformer encoder layers. Unlike GPT-2, each token can attend to all other tokens in the input, so the goal cannot be to predict the next token. Instead, a certain fraction of the input tokens are randomly converted to a special <mask> token, and the model is trained to predict the masked tokens. We use the huggingface implementation of RoBERTa with default parameter settings, except that we reduce the number of layers from 12 to 6.
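The reduced-size models might be instantiated along the following lines with the huggingface transformers library. This is a sketch: the vocabulary size of 50265 is an assumption (it is the size of the standard RoBERTa tokenizer, not necessarily the one used in the paper); the layer and context reductions follow the text.

```python
from transformers import (
    GPT2Config, GPT2LMHeadModel,
    RobertaConfig, RobertaForMaskedLM,
)

# GPT-2: context reduced from 1024 to 512, layers from 12 to 6,
# vocabulary matched to the shared byte-level BPE tokenizer (assumed 50265).
gpt2_config = GPT2Config(
    vocab_size=50265,
    n_positions=512,
    n_layer=6,
)
gpt2_model = GPT2LMHeadModel(gpt2_config)

# RoBERTa: layers reduced from 12 to 6, same shared tokenizer.
roberta_config = RobertaConfig(
    vocab_size=50265,
    num_hidden_layers=6,
)
roberta_model = RobertaForMaskedLM(roberta_config)
```

Both models would then be trained from scratch on the tokenized bootleg score corpus rather than loaded from the released English checkpoints.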
2.3 Classifier Finetuning
In the second main stage, we finetune a classifier based on a set of labeled data. The labeled data consists of a set of sheet music images along with their corresponding composer labels. The process of training the classifier comprises four steps (lower half of Figure 1).
The first two steps are to compute and tokenize a bootleg score into a sequence of symbolic words. We use the same fixed feature extractor and the same tokenizer that were used in the language model pretraining stage.
The third step is to sample short, fixed-length fragments of words from the labeled data. As mentioned in Section 2.1, we define a proxy task where the goal is to predict the composer given a short, fixed-length fragment of words. Defining the proxy task in this way has three significant benefits: (1) we can use sampling to generate many more unique training data points than there are actual pages of sheet music in our dataset, (2) we can sample the data in such a way that the classes are balanced, which avoids the problems associated with class imbalance during training, and (3) using fixed-length inputs allows us to train more efficiently in batches. Our approach follows the general recommendations of a recent study on best practices for training a classifier with imbalanced data. Each sampled fragment and its corresponding composer label constitute a single training pair for the proxy task.
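The balanced fragment sampling might be sketched as follows; the function name and the per-composer fragment count are illustrative, not taken from the paper's code.

```python
import random

def sample_fragments(pages_by_composer, frag_len=64, n_per_composer=10):
    """Sample fixed-length fragments with balanced classes.

    pages_by_composer: dict mapping composer name -> list of token
                       sequences (one per page or piece).
    Returns a list of (fragment, composer) training pairs, with exactly
    n_per_composer fragments per composer.
    """
    pairs = []
    for composer, sequences in pages_by_composer.items():
        # Only sequences long enough to contain one full fragment are eligible.
        eligible = [s for s in sequences if len(s) >= frag_len]
        for _ in range(n_per_composer):
            seq = random.choice(eligible)
            start = random.randrange(len(seq) - frag_len + 1)
            pairs.append((seq[start:start + frag_len], composer))
    return pairs
```

Because every composer contributes the same number of fragments, the proxy task training set is balanced by construction even when the underlying page counts are not.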
The fourth step is to train the classifier model. The bottom half of Figure 3 shows how this is done with our three models. Our general approach is to add a classifier head on top of the language model, initialize the weights of the classifier with the pretrained language model weights, and then finetune the classifier on the proxy task data. For the AWD-LSTM, we take the outputs from the last LSTM layer and construct a fixed-size representation by concatenating three things: (a) the output at the last time step, (b) the result of max pooling the outputs across the sequence dimension, and (c) the result of average pooling the outputs across the sequence dimension. This fixed-size representation (which is three times the hidden dimension size) is then fed into the classifier head, which consists of two dense layers with batch normalization and dropout. For the GPT-2 model, we take the output from the last transformer layer at the last time step, and then feed it into a single dense (classification) layer. Because the GPT-2 and RoBERTa models require special tokens during training, we insert special start-of-sequence and end-of-sequence symbols at the beginning and end of every training input, respectively. Because of the masked self-attention, we must use the output of the last token in order to access all of the information in the input sequence. For the RoBERTa model, we take the output from the last transformer layer corresponding to the start-of-sequence token, and feed it into a single dense (classification) layer. This start-of-sequence token takes the place of the special [CLS] token described in the original BERT paper.
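The AWD-LSTM concat-pooling described above can be sketched as a small function over the final LSTM layer's outputs:

```python
import numpy as np

def concat_pool(lstm_outputs):
    """Concat pooling over the final LSTM layer's outputs.

    lstm_outputs: array of shape (seq_len, hidden_dim).
    Returns a fixed-size vector of length 3 * hidden_dim:
    [output at last time step, max over time, mean over time].
    """
    last = lstm_outputs[-1]
    max_pool = lstm_outputs.max(axis=0)
    avg_pool = lstm_outputs.mean(axis=0)
    return np.concatenate([last, max_pool, avg_pool])
```

This fixed-size vector is what the two-layer dense classifier head consumes, regardless of the input sequence length.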
We integrated all models into the fastai framework and finetuned the classifiers in the following manner. We first select an appropriate learning rate using a range test, in which we sweep the learning rate across a wide range of values and observe the impact on training loss. We initially freeze all parameters in the model except for the untrained classification head, and we gradually unfreeze more and more layers as the training converges. To avoid overly aggressive changes to the pretrained language model weights, we use discriminative finetuning, in which earlier layers of the model use exponentially smaller learning rates than later layers. All training is done with (multiple cycles of) the one cycle training policy, in which learning rate and momentum are varied cyclically over each cycle. The above practices were proposed in the original ULMFiT work and found to be effective in finetuning language models for text classification.
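The discriminative finetuning schedule can be illustrated with a small helper that assigns exponentially smaller learning rates to earlier layer groups. The decay factor of 2.6 is the value suggested by Howard and Ruder for ULMFiT; the function itself is a sketch, not the paper's code.

```python
def discriminative_lrs(base_lr, n_groups, decay=2.6):
    """One learning rate per layer group: smallest for the earliest
    layers, base_lr for the final (classifier) group."""
    return [base_lr / decay ** (n_groups - 1 - i) for i in range(n_groups)]

print(discriminative_lrs(1e-3, 3))
```

In fastai this corresponds to passing a slice of learning rates to the fit call, so each layer group picks up its own rate automatically.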
2.4 Inference

The third main stage is to apply the proxy classifier to the original full page classification task. We explore two different ways to do this. The first method is to convert the sheet music image into a bootleg score, tokenize the bootleg score into a sequence of word or subword units, and then apply the proxy classifier to a single variable-length input. Note that all of the models can handle variable-length inputs up to a maximum context length. The second method is identical to the first, except that it averages the predictions from multiple fixed-length crops taken from the input sequence. The fixed-length crops are the same size as is used during classifier training, and the crops are sampled uniformly with overlap. (We also experimented with applying a Bayesian prior to the classifier softmax outputs, as recommended in the class imbalance study mentioned above, but found that the results were not consistently better.)
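The multi-crop inference strategy can be sketched as follows; the classifier is abstracted as any function mapping a token sequence to a probability vector, and the hop size is an illustrative choice.

```python
import numpy as np

def multicrop_predict(classify, tokens, crop_len=64, hop=32):
    """Average classifier predictions over overlapping fixed-length crops.

    classify: function mapping a token sequence to a probability vector.
    tokens:   the full token sequence for one page.
    """
    if len(tokens) <= crop_len:
        # Too short to crop: fall back to a single variable-length input.
        return classify(tokens)
    starts = list(range(0, len(tokens) - crop_len + 1, hop))
    # Make sure the tail of the sequence is covered by a final crop.
    if starts[-1] != len(tokens) - crop_len:
        starts.append(len(tokens) - crop_len)
    preds = [classify(tokens[s:s + crop_len]) for s in starts]
    return np.mean(preds, axis=0)
```

Averaging over crops acts as test-time augmentation: each crop sees the classifier under the same input-length conditions it was trained on.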
3 Experimental Setup
In this section we describe the data collection process and the metrics used to evaluate our approach.
The data comes from IMSLP. We first scraped the website and downloaded all PDF scores and accompanying metadata. (We downloaded the data over a span of several weeks in May of 2018.) We filtered the data based on its instrumentation in order to identify a list of solo piano scores. We then computed bootleg score features for all of the piano sheet music images using the XSEDE supercomputing infrastructure, and discarded any pages that had fewer than a minimum threshold of features. This latter step is designed to remove non-music pages such as the title page, foreword, or table of contents. The resulting set of data contained PDFs (note that a PDF may contain multiple pieces, e.g. the complete set of Chopin etudes), pages, and a total of million bootleg score features. This set of data is what we refer to as the IMSLP dataset in this work (e.g. the IMSLP pretrained language model). For language model training, we split the IMSLP data by piece, using for training and for validation.
The classification task uses a subset of the IMSLP data. We first identified a list of composers with a significant amount of data (composers shown in Figure 4). We limited the list to nine composers in order to avoid extreme class imbalance. Because popular pieces tend to have many sheet music versions in the dataset, we select one version per piece in order to avoid over-representation of a small subset of pieces. Next, we manually labeled and discarded all filler pages, and then computed bootleg score features on the remaining sheet music images. This cleaned dataset is what we refer to as the target data in this work (e.g. the target pretrained language model). Figure 4 shows the total number of pages and bootleg score features per composer for the target dataset, along with the distribution of the number of bootleg score features per page. For training and testing, we split the data by piece, using of the pieces for training ( pages), for validation ( pages), and for testing ( pages). To generate data for the proxy task, we randomly sampled fixed-length fragments from the target data. We sample the same number of fragments for each composer to ensure class balance. We experimented with fragment sizes of 64/128/256 and sampled 32400/16200/8100 fragments for training and 10800/5400/2700 fragments for validation/test, respectively. This sampling scheme ensures the same data coverage regardless of fragment length. Note that the classification data is carefully curated, while the IMSLP data requires minimal processing.
We use two different metrics to evaluate our systems. For the proxy task, accuracy is an appropriate metric since the data is balanced. For the full page classification task – which has imbalanced data – we instead report the macro F1 score. Macro F1 is a generalization of the F1 score to a multi-class setting, in which each class is treated as a one-versus-all binary classification task and the F1 scores from all classes are averaged.
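Macro F1 as described can be computed directly from the per-class one-versus-all counts:

```python
def macro_f1(y_true, y_pred, classes):
    """Macro F1: one-vs-all F1 per class, averaged with equal class weight."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Because every class contributes equally to the average, a classifier cannot score well simply by favoring the majority composers.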
4 Results & Analysis
In this section we present our experimental results and conduct various analyses to answer key questions of interest. While the proxy task is an artificially created task, it provides a more reliable indicator of classifier performance than the full page classification task. This is because the test set of the full page classification task is both imbalanced and very small ( data points). Accordingly, we will report results on both the proxy task and the full page classification task.
4.1 Proxy Task
We first consider the performance of our models on the proxy classification task. We would like to understand the effect of (a) model architecture, (b) pretraining condition, and (c) fragment size.
We evaluate four different model architectures. In addition to the AWD-LSTM, GPT-2, and RoBERTa models previously described, we also measure the performance of a recently proposed CNN-based approach. Note that we cannot use the exact same model since we do not have symbolic score information. Nonetheless, we can use the same general approach of computing local features, aggregating feature statistics across time, and applying a linear classifier. The design of our 2-layer CNN model roughly matches the originally proposed architecture.
We consider three different language model pretraining conditions. The first condition is with no pretraining, where we train the classifier from scratch only on the proxy task. The second condition is with target language model pretraining, where we first train a language model on the target data, and then finetune the classifier on the proxy task. The third condition is with IMSLP language model pretraining. Here, we train a language model on the full IMSLP dataset, finetune the language model on the target data, and then finetune the classifier on the proxy task.
Figure 5 shows the performance of all models on the proxy task. There are three things to notice. First, regarding (a), the transformer-based models generally outperform the LSTM and CNN models. Second, regarding (b), language model pretraining improves performance significantly across the board. Regardless of architecture, we see a large improvement going from no pretraining (condition 1) to target pretraining (condition 2), and another large improvement going from target pretraining (condition 2) to IMSLP pretraining (condition 3). For example, the performance of the GPT-2 model increases from to to across the three pretraining conditions. Because the data in conditions 1 & 2 is exactly the same, the improvement in performance must be coming from more effective use of the data. We can interpret this from an information theory perspective by noting that the classification task provides the model log2(9) bits of information per fragment, whereas the language modeling task provides up to log2(V) bits of information per bootleg score feature, where V is the vocabulary size. The performance gap between condition 2 and condition 3 can also be interpreted as the result of providing more information to the model, but here the information is coming from having additional data. Third, regarding (c), larger fragments result in better performance, as we might expect.
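The information comparison above can be made concrete with a back-of-the-envelope calculation. The vocabulary size V = 30000 below is purely illustrative, not the value used in the paper.

```python
import math

# A 9-way classification label carries log2(9) bits per fragment, while a
# next-token prediction target carries up to log2(V) bits per bootleg
# score feature, where V is the vocabulary size.
V = 30000  # illustrative vocabulary size (assumption)
bits_per_fragment = math.log2(9)
bits_per_token = math.log2(V)
print(f"{bits_per_fragment:.2f} bits/fragment vs {bits_per_token:.2f} bits/token")
```

Since a fragment contains many tokens, the language modeling objective supplies orders of magnitude more supervisory signal per training example than the classification label alone.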
4.2 Full Page Classification
Next, we consider the performance of our models on the full page classification task. We would like to understand the effect of (a) model architecture, (b) pretraining condition, (c) fragment size, and (d) inference type (single vs. multi-crop). Regarding (d), we found that taking multiple crops improved results for all models except the CNN. This suggests that this type of test time augmentation does not benefit approaches that simply average feature statistics over time. In the results presented below, we only show the optimal inference type for each model architecture (i.e. CNN with single crop, all others with multi-crop).
Figure 6 shows model performance on the full page classification task. There are two things to notice. First, we see the same general trends as in Figure 5 for model architecture and pretraining condition: the transformer-based models generally outperform the CNN and LSTM models, and pretraining helps substantially in every case. The macro F1 score of our best model (GPT-2 with fragment size 64) increases from to to across the three pretraining conditions. Second, we see the opposite trend as the proxy task for fragment size: smaller fragments yield better page classification performance. This strongly indicates a data distribution mismatch. Indeed, when we look at the distribution of the number of bootleg score features in a single page (Figure 4), we see that a significant fraction of pages have fewer than 256 features. Because we only sample fragments that contain a complete set of 256 words, our proxy task data is biased towards longer inputs. This leads to poor performance when the classifier is faced with short inputs, which are never seen in training. Using a fragment size of 64 minimizes this bias.
4.3 t-SNE Plots
Another key question of interest is, “Can we use our model to characterize the style of any page of piano sheet music?” The classification task forces the model to project the sheet music into a feature space where the compositional style of the nine composers can be differentiated. We hypothesize that this feature space might be useful in characterizing the style of any page of piano sheet music, even from composers not in the classification task.
To test this hypothesis, we fed data from five novel composers into our models and constructed t-SNE plots of the activations at the second-to-last layer. Figure 7 shows such a plot for the RoBERTa model. Each data point corresponds to a single page of sheet music from a novel composer. Even though we have not trained the classifier to distinguish between these five composers, we can see that the data points are still clustered, suggesting that the feature space can describe the style of new composers in a useful manner.
5 Conclusion

We propose a method for predicting the composer of a single page of piano sheet music. Our method first converts the raw sheet music image into a bootleg score, tokenizes the bootleg score into a sequence of musical words, and then feeds the sequence into a text classifier. We show that by pretraining a language model on a large set of unlabeled data, it is possible to significantly improve the performance of the classifier. We also show that our trained model can be used as a feature extractor to characterize the style of any page of piano sheet music. For future work, we would like to explore other forms of data augmentation and other model architectures that explicitly encode musical knowledge.
Acknowledgments

This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Large-scale computations on IMSLP data were performed with XSEDE Bridges at the Pittsburgh Supercomputing Center through allocation TG-IRI190019. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU used for training the models.
References

- Musical stylometry, machine learning and attribution studies: a semi-supervised approach to the works of Josquin. In Proc. of the Biennial Int. Conf. on Music Perception and Cognition, pp. 91–97.
- A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks 106, pp. 249–259.
- A supervised learning approach to musical style recognition. In Proc. of the International Conference on Music and Artificial Intelligence (ICMAI), Vol. 2002, pp. 167.
- (2019) Transformer-XL: attentive language models beyond a fixed-length context. In Proc. of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2978–2988.
- (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
- (1994) A new algorithm for data compression. C Users Journal 12 (2), pp. 23–38.
- (2016) Multilingual language processing from bytes. In Proc. of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1296–1306.
- Deep residual learning for image recognition. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2016) Composer classification models for music-theory building. In Computational Music Analysis, pp. 369–392.
- (2010) String quartet classification with monophonic models. In Proc. of the International Society for Music Information Retrieval Conference (ISMIR), pp. 537–542.
- (2013) Modeling musical style with language models for composer recognition. In Iberian Conference on Pattern Recognition and Image Analysis, pp. 740–748.
- (2018) Universal language model fine-tuning for text classification. In Proc. of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 328–339.
- (2017) Densely connected convolutional networks. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
- Weighted Markov chain model for musical composer identification. In European Conference on the Applications of Evolutionary Computation, pp. 334–343.
- (2018) Where does Haydn end and Mozart begin? Composer classification of string quartets. arXiv preprint arXiv:1809.05075.
- (2019) RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
- (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (Nov), pp. 2579–2605.
- (2017) Regularizing and optimizing LSTM language models. arXiv preprint arXiv:1708.02182.
- (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
- (2013) Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111–3119.
- A baseline for general music object detection with deep learning. Applied Sciences 8 (9), pp. 1488.
- (2014) GloVe: global vectors for word representation. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543.
- Classification of melodies by composer with hidden Markov models. In Proc. of the First International Conference on WEB Delivering of Music, pp. 88–95.
- (2018) Improving language understanding by generative pre-training. OpenAI Blog.
- (2019) Language models are unsupervised multitask learners. OpenAI Blog 1 (8), pp. 9.
- (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252.
- (2017) Classification of music by composer using fuzzy min-max neural networks. In Proc. of the 12th International Conference for Internet Technology and Secured Transactions (ICITST), pp. 189–192.
- (2016) Neural machine translation of rare words with subword units. In Proc. of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 1715–1725.
- (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
- (2018) A disciplined approach to neural network hyper-parameters: part 1 – learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820.
- (2014) XSEDE: accelerating scientific discovery. Computing in Science & Engineering 16 (5), pp. 62–74.
- (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008.
- (2018) Convolution-based classification of audio and symbolic representations of music. Journal of New Music Research 47 (3), pp. 191–205.
- (2016) Composer recognition based on 2D-filtered piano-rolls. In Proc. of the International Society for Music Information Retrieval Conference (ISMIR), pp. 115–121.
- (2019) Convolutional composer classification. In Proc. of the International Society for Music Information Retrieval Conference (ISMIR), pp. 549–556.
- (2013) Evaluation of N-gram-based classification approaches on classical music corpora. In International Conference on Mathematics and Computation in Music, pp. 213–225.
- (2019) MIDI passage retrieval using cell phone pictures of sheet music. In Proc. of the International Society for Music Information Retrieval Conference (ISMIR), pp. 916–923.
- (2019) XLNet: generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pp. 5754–5764.
- (2014) How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320–3328.