Toward Faultless Content-Based Playlists Generation for Instrumentals

by Yann Bayle, et al.
Université de Bordeaux

This study deals with content-based musical playlist generation focused on Songs and Instrumentals. Automatic playlist generation relies on collaborative filtering and autotagging algorithms. Autotagging can solve the cold-start issue and popularity bias that are critical in music recommender systems. However, autotagging remains to be improved and cannot yet generate satisfying music playlists. In this paper, we suggest improvements toward better autotagging-generated playlists compared to the state of the art. To assess our method, we focus on the Song and Instrumental tags. Song and Instrumental are two objective and opposite tags that are under-studied compared to genres or moods, which are subjective and multi-modal tags. We consider an industrial real-world musical database that is unevenly distributed between Songs and Instrumentals and bigger than the databases used in previous studies. We set up three incremental experiments to enhance automatic playlist generation. Our suggested approach generates an Instrumental playlist with up to three times fewer false positives than cutting-edge methods. Moreover, we provide a design-of-experiment framework to foster research on Songs and Instrumentals. We give insights on how to further improve the quality of generated playlists and how to extend our methods to other musical tags. Furthermore, we provide the source code to guarantee reproducible research.





1 Introduction

Playlists are becoming the main way of consuming music (Song et al., 2012; Wikström, 2015; Choi et al., 2016; Nakano et al., 2016). This phenomenon is also confirmed on web streaming platforms, where playlists represent 40% of musical streams, as stated by De Gemini from Deezer during the last MIDEM. Playlists also play a major role in other media like radios and on personal devices such as laptops, smartphones (Thalmann et al., 2016), MP3 players (Nettamo et al., 2006), and connected speakers. Users can manually create their playlists, but a growing number of them listen to automatically generated playlists (Uitdenbogerd and Schyndel, 2002) created by music recommender systems (Yoshii et al., 2007; Schedl et al., 2015) that suggest tracks fitting the taste of each listener.

Such playlist generation implicitly requires selecting tracks with a common characteristic like genre or mood. This equates to annotating tracks with meaningful information called tags (Jäschke et al., 2007). A musical piece can gather one or multiple tags; some are comprehensible by common human listeners, such as "happy", and others are not, like "dynamic complexity" (Streich, 2006; Laurier and Herrera, 2007). A tag can also be related to the audio content, such as "rock" or "high tempo". Moreover, editorial writers can provide tags like "summer hit" or "70s classic". Turnbull et al. (2008) distinguish five methods to collect music tags. Three of them require humans, e.g. social tagging websites (Shardanand and Maes, 1995; Breese et al., 1998; Levy and Sandler, 2007; Shepitsen et al., 2008) such as Last.fm, music annotation games (Law et al., 2007; Turnbull et al., 2007; Mandel and Ellis, 2008), and online polls (Turnbull et al., 2008). The last two tagging methods are computer-based and include text mining of web documents (Whitman and Ellis, 2004; Knees et al., 2007) and audio content analysis (Tzanetakis and Cook, 2002; Bertin-Mahieux et al., 2010; Prockup et al., 2015). Multiple drawbacks stand out when reviewing the different tagging methods. Indeed, human labelling is time-consuming (Kim and Whitman, 2002; Skowronek et al., 2006) and prone to mistakes (Sturm, 2013, 2015). Furthermore, human labelling and text mining of web documents are limited by ever-growing musical databases that increase by 4,000 new CDs per month (Pachet and Roy, 1999) in western countries. Hence, this amount of music cannot be labelled by humans, which implies that some tracks cannot be recommended because they are not rated or tagged (Eck et al., 2007; Li et al., 2007; Schafer et al., 2007; Schlüter and Grill, 2015).
This lack of labelling is a vicious circle in which unpopular musical pieces remain poorly labelled, whereas popular ones are more likely to be annotated on multiple criteria (Eck et al., 2007) and are therefore found in multiple playlists. This phenomenon is known as the cold-start issue or the data sparsity problem (Song et al., 2012). Text mining of web documents is tedious and error-prone, as it implies collecting and sorting redundant, contradictory, and semantic-based data from multiple sources. Audio content-based tagging is faster than human labelling and solves the major problems of cold starts, popularity bias, and human-gathered tags (Logan, 2002; Hoashi et al., 2003; Celma et al., 2005; Eck et al., 2007; Sordo et al., 2007; Turnbull et al., 2007; Mandel and Ellis, 2008; Tingle et al., 2010). A makeshift solution combines the multiple tag-generating methods (Bu et al., 2010) to produce robust tags and to process every track. However, audio content analysis alone remains improvable for subjective and ambivalent tags such as the genre (Hsu et al., 2016; Jeong and Lee, 2016; Lu et al., 2016; Oramas et al., 2016).

In light of all these issues, a new paradigm is needed to rethink the classification problem and focus on a well-defined question that needs solving (Sturm, 2016) to break the "glass ceiling" (Wiggins, 2009) in Music Information Retrieval (MIR). Indeed, setting up a problem with a precise definition will lead to better features and classification algorithms. Certainly, cutting-edge algorithms are not suited for faultless playlist generation since they are built to balance precision and recall. The presence of a few wrong tracks in a playlist diminishes the trust of the user in the perceived service quality of a recommender system (Chau et al., 2013), because users are more sensitive to negative than to positive messages (Yin et al., 2010). A faultless playlist based on a tag needs an algorithm that achieves perfect precision while maximizing recall. It is possible to partially reach this aim by maximizing the precision and optimizing the corresponding recall, which is a different issue than optimizing the f-score.

A low recall is not a downside when considering the large number of tracks available on audio streaming applications. For example, Deezer provided more than 40 million tracks in 2017. Moreover, the maximum playlist size authorized on streaming platforms varies from 1,000 for Deezer to 10,000 for Spotify, while YouTube and Google Play Music have a limit of 5,000 tracks per playlist. However, the private playlists of Deezer users contain a mean of 27 tracks with a standard deviation of 70 tracks (personal communication from Manuel Moussallam, Deezer R&D team). Thus, it seems feasible to create tag-based playlists containing hundreds of tracks from large-scale musical databases.
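The precision-first strategy can be sketched as follows. This is a minimal illustration with toy scores, not the paper's code: given classifier scores on a validation set, we keep the decision threshold with the highest recall among all thresholds that still satisfy the precision constraint.

```python
# Sketch of precision-first thresholding: maximize recall subject to a
# precision constraint, instead of balancing both through the f-score.
# Toy data only; in practice the scores come from a trained classifier.
import numpy as np

def threshold_for_precision(y_true, scores, min_precision=1.0):
    """Return (threshold, recall) with the highest recall among all
    thresholds whose precision is at least min_precision."""
    best_threshold, best_recall = None, 0.0
    n_positives = np.sum(y_true == 1)
    for t in sorted(set(scores)):
        predicted = scores >= t
        if predicted.sum() == 0:
            continue
        true_positives = np.sum(predicted & (y_true == 1))
        precision = true_positives / predicted.sum()
        recall = true_positives / n_positives
        if precision >= min_precision and recall > best_recall:
            best_threshold, best_recall = t, recall
    return best_threshold, best_recall

# 1 = Instrumental (the tag for which we want a faultless playlist).
y_true = np.array([0, 0, 0, 1, 0, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.45, 0.5, 0.7, 0.8, 0.9])
threshold, recall = threshold_for_precision(y_true, scores)
```

On this toy data, the threshold 0.7 is the lowest one with perfect precision, and it retains a recall of 0.75; scikit-learn's `precision_recall_curve` performs the same sweep directly.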

In this article, we focus on improving audio content analysis to enhance playlist generation. To do so, we perform Songs and Instrumentals Classification (SIC) in a musical database. Songs and Instrumentals are well-defined, relatively objective, mutually exclusive, and always relevant (Gouyon et al., 2014). We define a Song as a musical piece containing one or multiple singing voices, related either to lyrics or to onomatopoeias, that may or may not contain instrumentation. An Instrumental is thus defined as a musical piece that does not contain any sound directly or indirectly coming from the human voice. An example of an indirect sound made by the human voice is the talk box effect audible in Rocky Mountain Way by Joe Walsh.

People listen to instrumental music mostly for leisure. However, we chose to focus on Instrumental detection in this study because Instrumentals are also essential in therapy (Rosenblatt, 2015) and in learning enhancement methods (Suárez et al., 2016; Zhao and Kuhl, 2016). Nevertheless, audio content analysis is currently limited by the difficulty of distinguishing singing voices from instruments that mimic voices. Such distinction mistakes lead to plenty of Instrumentals being labelled as Songs. Aerophones and fretless stringed instruments, for example, are known to produce pitch modulations similar to those of the human voice (Rao et al., 2009; Panteli et al., 2017). This study focuses on improving Instrumental detection in musical databases because current state-of-the-art algorithms are unable to generate a faultless playlist with the tag Instrumental (Ghosal et al., 2013; Bayle et al., 2016). Moreover, the precision and accuracy of SIC algorithms decline when faced with bigger musical databases (Bayle et al., 2016; Bogdanov et al., 2016). The ability of these classification algorithms to generate faultless playlists is consequently discussed here.

In this paper, we define solutions to generate better Instrumental and Song playlists. This is not a trivial task because Singing Voice Detection (SVD) algorithms cannot directly be used for SIC. Indeed, SVD aims at detecting the presence of singing voice at the frame scale for one track, but related algorithms produce too many false positives (Lehner et al., 2014), especially when faced with Instrumentals. Our work addresses this issue and the major contributions are:


  • The first review of SIC systems in the context of playlist generation.

  • The first formal design of experiment of the SIC task.

  • We show that the use of frame features outperforms the use of global track features in the case of SIC and thus diminishes the risk of an algorithm being a "Horse".

  • A knowledge-based SIC algorithm (easily explainable) that can process large musical databases, whereas state-of-the-art algorithms cannot.

  • A new track tagging method based on frame predictions that outperforms the Markov model in terms of accuracy and f-score.

  • A demonstration that better playlists related to a tag can be generated when the autotagging algorithm focuses only on this tag.

As the major problem in MIR tasks concerns the lack of a big and clean labelled musical database (Yoshii et al., 2007; Casey et al., 2008), we detail in Section 2 the use of SATIN (Bayle et al., 2017), which is a persistent musical database. This section also details the solution we use to guarantee the reproducibility of our research code over SATIN. In Section 3 we describe the state-of-the-art methods in SIC, and we detail their implementation in Section 4. We then evaluate their performances and limitations in three experiments from Section 5 to Section 7. Section 8 settles the formalism for the new paradigm described by Sturm (2016) and compares our newly proposed method to the state-of-the-art methods. We finally discuss our results and perspectives in Section 9.

2 Musical database

The musical database considered in this paper is twofold. The first part comprises 186 musical tracks evenly distributed between Songs and Instrumentals, chosen from previously existing musical databases; it is hereafter referred to as the balanced database. All its tracks are available for research purposes and are commonly used by the MIR community (Ramona et al., 2008; Bittner et al., 2014; Lehner et al., 2014; Liutkus et al., 2014; Schlüter and Grill, 2015; Schlüter, 2016). It includes tracks from the MedleyDB database (Bittner et al., 2014), the ccMixter database (Liutkus et al., 2014), and the Jamendo database (Ramona et al., 2008).


  • The MedleyDB database is a musical database of multi-track audio for music research proposed by Bittner et al. (2014). Forty-three tracks of MedleyDB are used as Instrumentals in the balanced database.

  • The ccMixter database contains 50 Songs compiled by Liutkus et al. (2014) and retrieved from ccMixter. For each Song in the ccMixter database, there is a corresponding Instrumental track. These Instrumental tracks are included in the balanced database.

  • The Jamendo database was proposed by Ramona et al. (2008) and contains 93 Songs with corresponding annotations, at the frame scale, of the presence of a singing voice. These Songs were retrieved from Jamendo Music.

We chose tracks from the Jamendo database because the MIR community has already provided ground truths concerning the presence of a singing voice at the frame scale (Ramona et al., 2008). These frame-scale ground truths are indeed needed for the training process of the algorithm proposed in Section 8. There are only 93 Songs because producing the corresponding frame-scale ground truths is a tedious task, which is, to some extent, ill-defined (Kim and Whitman, 2002). We chose tracks from the MedleyDB database because they are tagged as Instrumentals per se, whereas we chose tracks from the ccMixter database because they were meant to accompany a singing voice. Choosing such different tracks helps to reflect the diversity of Instrumentals.

The second part of the musical database comes from the SATIN database (Bayle et al., 2017) and will be referred to as the unbalanced database. It is uneven and references 37,035 Songs and 4,456 Instrumentals, leading to a total of 41,491 tracks identified by their International Standard Recording Code (ISRC) provided by the International Federation of the Phonographic Industry (IFPI). These standard identifiers allow a unique identification of the different releases of a track over the years and across interpretations by different artists. The corresponding features of the tracks contained in SATIN were extracted for Bayle et al. (2017) by Simbals and Deezer and are stored in SOFT1. To allow reproducibility, we provide the list of ISRC used in the following experiments along with our reproducible code on our GitHub account. The point of sharing the ISRC of each track is to facilitate result comparison between future studies and our own.

3 State-of-the-art

As far as we know, only a few recent studies have been dedicated to SIC (Ghosal et al., 2013; Hespanhol, 2013; Zhang and Kuo, 2013; Gouyon et al., 2014; Bayle et al., 2016) compared to the extensive literature devoted to music genre recognition (Sturm, 2014), for example. The SIC task in a database must not be confused with the SVD task that tries to identify the presence of a singing voice at the frame scale for one track. In this section, we describe existing algorithms for SIC and we benchmark them in the next section.

3.1 Ghosal’s Algorithm

To segregate Songs and Instrumentals, Ghosal et al. (2013) extracted for each track the first thirteen Mel-Frequency Cepstral Coefficients (MFCC), excluding the zeroth coefficient. Indeed, akin to Zhang and Kuo (2013), the authors posit that Songs differ from Instrumentals in the stable frequency peaks of the spectrogram visible in the MFCC. The authors then categorize an in-house database of 540 evenly distributed tracks with a classifier based on RANdom SAmple Consensus (RANSAC) (Fischler and Bolles, 1981; Ghosal et al., 2013). Their algorithm reaches an accuracy of 92.96% in a 2-fold cross-validation classification task. This algorithm will hereafter be denoted as GA.

3.2 SVMBFF

Gouyon et al. (2014) propose a variant of the algorithm from Ness et al. (2009). The seventeen low-level features extracted from each frame are normalized and consist of the zero crossing rate, the spectral centroid, the spectral roll-off and flux, and the first thirteen MFCC. A linear Support Vector Machine (SVM) classifier is trained on the mean and the standard deviation of these low-level features and outputs probabilities from which tags are selected. The authors tested SVMBFF against three different musical databases comprising between 502 and 2,349 tracks. The f-score of SVMBFF ranges from 0.89 to 0.95 for Songs across the three musical databases. As for Instrumentals, the f-score is between 0.45 and 0.80. The authors did not comment on this substantial variation, and readers can foresee that the poor performance in Instrumental detection is not yet well understood.
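A minimal sketch of this pipeline follows; it is our reading of SVMBFF with synthetic frame features standing in for the Marsyas output, not the original code. Each track is summarized by the mean and standard deviation of its frame-level features, and a linear SVM is trained on these track-level statistics.

```python
# Sketch of an SVMBFF-like pipeline: frame features -> per-track mean and
# standard deviation -> linear SVM. Frame features here are synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def track_statistics(frames):
    """frames: (n_frames, n_features) array -> (2 * n_features,) vector."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

# Toy corpus: Instrumentals (label 0) and Songs (label 1) whose frame
# features differ slightly in mean, mimicking a vocal-presence cue.
tracks, labels = [], []
for label in (0, 1):
    for _ in range(20):
        frames = rng.normal(loc=label * 0.5, scale=1.0, size=(100, 17))
        tracks.append(track_statistics(frames))
        labels.append(label)

X, y = np.array(tracks), np.array(labels)
clf = LinearSVC(dual=False).fit(X, y)
train_acc = clf.score(X, y)
```

The 17 frame features become 34 track-level statistics, matching the mean-plus-standard-deviation summarization described above.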

3.3 VQMM

This approach was proposed by Langlois and Marques (2009) and enhanced by Gouyon et al. (2014). VQMM uses the YAAFE toolbox to compute the thirteen MFCC after the zeroth coefficient, with an analysis frame of 93 ms and an overlap of 50%. VQMM then codes a signal using vector quantization (VQ) against a learned codebook. Afterwards, it estimates conditional probabilities with first-order Markov models (MM). The originality of this approach lies in its statistical language modelling. The authors tested VQMM against three different musical databases comprising between 502 and 2,349 tracks. The f-score of VQMM ranges from 0.83 to 0.95 for Songs across the three musical databases. The f-score for Instrumentals is between 0.54 and 0.66. As for SVMBFF, the f-score for Instrumentals is lower than the f-score for Songs, which illustrates the difficulty of correctly detecting Instrumentals, regardless of the musical database.
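The VQ-plus-Markov idea can be sketched as follows. This is our reading of the approach on toy one-dimensional features, not the implementation of Langlois and Marques (2009): frames are quantized against a codebook, one first-order transition matrix is estimated per class, and a track is tagged with the class whose model gives its symbol sequence the highest log-likelihood.

```python
# Sketch of a VQMM-style classifier: vector quantization of frames followed
# by per-class first-order Markov models over the codeword sequences.
import numpy as np

rng = np.random.default_rng(1)

def quantize(frames, codebook):
    """Map each frame to the index of its nearest codeword."""
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def markov_model(symbol_seqs, k, smoothing=1.0):
    """Estimate a k-state transition matrix from symbol sequences."""
    counts = np.full((k, k), smoothing)
    for seq in symbol_seqs:
        np.add.at(counts, (seq[:-1], seq[1:]), 1.0)
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, transitions):
    return np.log(transitions[seq[:-1], seq[1:]]).sum()

# Toy setup: a 4-word codebook and two classes with distinct transition
# behaviour (class 0 dwells on low codewords, class 1 on high ones).
codebook = np.array([[0.0], [1.0], [2.0], [3.0]])

def sample_track(cls):
    centers = [0.0, 1.0] if cls == 0 else [2.0, 3.0]
    return rng.choice(centers, size=200)[:, None] + rng.normal(0, 0.1, (200, 1))

train = {c: [quantize(sample_track(c), codebook) for _ in range(10)] for c in (0, 1)}
models = {c: markov_model(train[c], k=4) for c in (0, 1)}

test_seq = quantize(sample_track(1), codebook)
predicted = max((0, 1), key=lambda c: log_likelihood(test_seq, models[c]))
```

In the real system the codebook is itself learned from training frames (e.g. by k-means) rather than fixed, and the features are MFCC vectors instead of scalars.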

3.4 SRCAM

Gouyon et al. (2014) used a variation of sparse representation classification (SRC) (Panagakis et al., 2009; Wright et al., 2009; Sturm, 2012; Sturm and Noorzad, 2012) applied to auditory temporal modulation features (AM). They tested SRCAM against three different musical databases comprising between 502 and 2,349 tracks. The f-score of SRCAM ranges from 0.90 to 0.95 for Songs across the three musical databases. The f-score for Instrumentals is between 0.57 and 0.80. As for SVMBFF and VQMM, the f-score for Instrumentals is lower than the f-score for Songs.

GA and SVMBFF use track-scale features, whereas VQMM uses features at the frame scale. The three algorithms use thirteen MFCC, as these peculiar features are well known to capture the presence of a singing voice in tracks. GA, SVMBFF, and VQMM were all tested under K-fold cross-validation on a single musical database. In the next section, we compare the performances of these three algorithms on the balanced database of 186 tracks.

4 Source code of the state-of-the-art for SIC

This section describes the implementation we used to benchmark existing algorithms for SIC. For all algorithms, the features proposed in SOFT1 were extracted and provided by Simbals and Deezer, thanks to the identifiers contained in SATIN. More technical details about the classification process can be found on our previously mentioned GitHub repository.

4.1 GA

Ghosal et al. (2013) did not provide source code for reproducible research, so the YAAFE toolbox was used to extract the corresponding MFCC in this study. The RANSAC algorithm provided by the Python package scikit-learn (Pedregosa et al., 2011) is used for classification.
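A sketch of this reimplementation follows. The data and the exact classification setup are assumptions, not Ghosal et al.'s code: scikit-learn's RANSAC estimator is a robust regressor, so for a two-class problem one can regress the 0/1 label from per-track MFCC statistics and threshold the prediction at 0.5.

```python
# Sketch of a RANSAC-based two-class setup: regress the binary label from
# track-level features and threshold at 0.5. Features here are synthetic.
import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.default_rng(2)

# Toy track-level features: 13 MFCC means per track, with Songs (label 1)
# shifted away from Instrumentals (label 0) to mimic a vocal-presence cue.
X0 = rng.normal(0.0, 1.0, size=(60, 13))
X1 = rng.normal(1.0, 1.0, size=(60, 13))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(60), np.ones(60)])

ransac = RANSACRegressor(random_state=0).fit(X, y)
pred = (ransac.predict(X) >= 0.5).astype(int)
train_acc = (pred == y).mean()
```

RANSAC's inlier selection makes the fit robust to mislabelled or atypical tracks, which is its appeal over a plain least-squares fit here.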

4.2 SVMBFF

Gouyon et al. (2014) used the Marsyas framework to extract their features and to perform the classification, so we used the same framework along with the same parameters.

4.3 VQMM

The original implementation of VQMM by Langlois and Marques (2009) is freely available on their online repository. We used this implementation with the same parameters as in their study.

4.4 SRCAM

SRCAM (Gouyon et al., 2014) is dismissed because its source code is in Matlab. Indeed, as tracks are stored on a remote industrial server, only algorithms whose programming language is supported by our industrial partner can be run. It would be interesting to implement SRCAM in Python or in C to assess its performance on the unbalanced database, but SRCAM displays results similar to those of SVMBFF on three different musical databases (Gouyon et al., 2014).

5 Benchmark of existing algorithms for SIC

In MIR, the aim of a classification task is to generate an algorithm capable of labelling each track of a musical database with meaningful tags. Previous studies in SIC used musical databases containing between 502 and 2,349 unique tracks and performed cross-validation with two to ten folds (Ghosal et al., 2013; Hespanhol, 2013; Zhang and Kuo, 2013; Gouyon et al., 2014; Bayle et al., 2016). This section introduces a similar experiment by benchmarking the existing algorithms on a new musical database. Table 1 displays the accuracy and the f-score of GA, SVMBFF, and VQMM in a 5-fold cross-validation classification task on the balanced database of 186 tracks.

Table 1: Average ± standard deviation of the accuracy and f-score for GA, SVMBFF, and VQMM in a 5-fold cross-validation classification task on the evenly balanced database of 186 tracks. Bold numbers highlight the best results achieved for each metric.
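The evaluation protocol behind this table can be sketched as follows. The features are synthetic and a logistic regression stands in for the benchmarked classifiers; the stratified folds are an assumption about the setup.

```python
# Sketch of the benchmark protocol: 5-fold cross-validation on a balanced
# 186-track set, reporting mean and standard deviation of accuracy and
# f-score. Synthetic features, stand-in classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Stand-in for 186 track-level feature vectors: 93 Instrumentals, 93 Songs.
X = np.vstack([rng.normal(0.0, 1.0, (93, 17)), rng.normal(0.8, 1.0, (93, 17))])
y = np.concatenate([np.zeros(93), np.ones(93)])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(LogisticRegression(max_iter=1000), X, y,
                        cv=cv, scoring=("accuracy", "f1"))
acc = scores["test_accuracy"]
f1 = scores["test_f1"]
summary = {"accuracy": (acc.mean(), acc.std()), "f1": (f1.mean(), f1.std())}
```

Reporting the per-fold standard deviation alongside the mean, as Table 1 does, is what exposes the high variance discussed below.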

The mean accuracy and f-score of the three algorithms do not differ significantly (one-way ANOVA). The high variance, together with the low accuracy and f-score of the three algorithms, indicates that these algorithms are too dependent on the musical database and are not suitable for commercial applications.

K-fold cross-validation on a single musical database is regularly used as an approximation of the performance of a classifier on different musical databases. However, the size of the musical databases used in previous studies of SIC seems to be insufficient to assert the validity of any classification method (Livshin and Rodet, 2003; Guaus, 2009). Indeed, evaluating an algorithm on such small musical databases, even with K-fold cross-validation, does not guarantee its generalization abilities because the included tracks might not be representative of all existing musical pieces (Ng, 1997). K-fold cross-validation on small musical databases is indeed prone to biases (Herrera et al., 2003; Livshin and Rodet, 2003; Bogdanov et al., 2011), hence additional cross-database experiments are recommended in other scientific fields (Chudáček et al., 2009; Bekios-Calfa et al., 2011; Llamedo et al., 2012; Erdoğmuş et al., 2014; Fernández et al., 2015). Yet, creating a novel and large training set with corresponding ground truths consumes plenty of time and resources. In fact, in the big data era, only a small proportion of all existing tracks are reliably tagged in the musical databases of listeners or of the industry, as can be seen on Pandora, for example. Thus, the numerous unlabelled tracks can only be classified with very little training data. The precision of the classification reached in these conditions is uncertain. The next section tackles this issue.

6 Behaviour of the algorithms at scale

This section compares the accuracy and the f-score of GA, SVMBFF, and VQMM in a cross-database validation experiment. This experiment employs a test set that is 48 times bigger than the train set, a scale-up compared to the number of tracks used in the previous experiment. The reasons for using a bigger test set are twofold. Firstly, this setup mimics conditions in which there are more untagged than tagged data, which is common in the music industry. Secondly, existing classification algorithms for SIC cannot handle such an amount of musical data during training, due to limitations of their machine learning implementations.

The test set of 8,912 tracks is evenly distributed between Songs and Instrumentals. As there are fewer Instrumentals than Songs, all of them are used, while eight successive random samples of Songs are taken from the unbalanced database without replacement. In Table 2, we compare the accuracy and f-score of GA, SVMBFF, and VQMM.
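The test-set construction can be sketched as follows: all 4,456 Instrumentals are kept in every set, while the 37,035 Songs are shuffled once and then sliced into eight successive samples of 4,456 tracks, so no Song appears in two samples.

```python
# Sketch of sampling eight balanced test sets of 8,912 tracks each:
# 4,456 Instrumentals (all of them) + 4,456 Songs drawn without
# replacement across the eight samples.
import numpy as np

rng = np.random.default_rng(4)
n_songs, n_instrumentals, n_sets = 37_035, 4_456, 8

# Shuffle the Song indices once, then slice successively: this is
# equivalent to sampling without replacement across the eight sets.
song_ids = rng.permutation(n_songs)
balanced_sets = [song_ids[i * n_instrumentals:(i + 1) * n_instrumentals]
                 for i in range(n_sets)]

# Sanity check: 8 * 4,456 = 35,648 Songs are needed, and 37,035 exist.
assert n_sets * n_instrumentals <= n_songs
```

Averaging the metrics over the eight disjoint samples reduces the variance due to any one particular draw of Songs.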

Table 2: Average ± standard deviation of the accuracy and f-score for GA, SVMBFF, and VQMM. The train set is constituted of the balanced database of 186 tracks. The test set is successively constituted of eight evenly balanced sets of 8,912 tracks randomly chosen from the unbalanced database of 41,491 tracks. Bold numbers highlight the best results achieved for each metric.

The accuracy and f-score of VQMM are higher than those of GA and SVMBFF, which may come from the use of local features by VQMM, whereas GA and SVMBFF use track-scale features. Indeed, the accuracy and the f-score of GA, SVMBFF, and VQMM differ significantly (post-hoc Dunn test). The accuracy of VQMM is respectively 0.086 (13.8%) and 0.143 (25.3%) higher than those of GA and SVMBFF. The f-score of VQMM is respectively 0.103 (17.1%) and 0.165 (30.4%) higher than those of GA and SVMBFF.

Compared to the results of the first experiment, which used same-database validation, the three algorithms have a lower accuracy: -0.011 (-1.7%), -0.121 (-17.6%), and -0.047 (-6.2%), respectively, for GA, SVMBFF, and VQMM. The same trend is visible for the f-score, with -0.021 (-3.4%), -0.154 (-22.1%), and -0.046 (-6.1%), respectively, for GA, SVMBFF, and VQMM.

The lower accuracy and f-score of the three algorithms in this experiment support the conjecture that same-database validation is not a suitable experiment for assessing the performance of an autotagging algorithm (Herrera et al., 2003; Livshin and Rodet, 2003; Guaus, 2009; Bogdanov et al., 2011). Moreover, the low accuracy and f-score of GA and SVMBFF on this previously unseen database reveal that those algorithms might be "Horses" that overfitted the databases proposed by their respective authors. GA, SVMBFF, and VQMM are thus limited in accuracy and f-score when a bigger musical database is used, even if its size is far from reaching the 40 million tracks available via Deezer. It is highly probable that the accuracy and f-score of GA, SVMBFF, and VQMM will diminish further when faced with millions of tracks.

Furthermore, there is an uneven distribution of Songs and Instrumentals in personal and industrial musical databases. Indeed, the salience of tracks containing singing voice in the recorded music industry is indubitable. Instrumentals represent 11 to 19% of all tracks in musical databases (personal communication from Manuel Moussallam, Deezer R&D team). The next section investigates the possible differences in performance caused by this uneven distribution.

7 Uneven class distribution

This section evaluates the impact of a disequilibrium between Songs and Instrumentals on the precision, the recall, and the f-score of GA, SVMBFF, and VQMM. It was not possible to compare the existing algorithms dedicated to SIC using K-fold cross-validation because the implementations of VQMM and SVMBFF cannot train on such a large amount of musical features and crashed when we tried to do so. This section thus depicts a cross-database experiment with the 186 tracks of the balanced train set and a test set composed of 37,035 Songs (89%) and 4,456 Instrumentals (11%). We compare in Table 3 the accuracy and the f-score of GA, SVMBFF, and VQMM. To understand what happens with the uneven distribution, we also report the results produced by a random classification algorithm, further denoted RCA, in which half of the musical database is randomly classified as Songs and the other half as Instrumentals.

Table 3: Average accuracy and f-score for GA, SVMBFF, and VQMM against a random classification algorithm denoted RCA. The train set is constituted of the balanced database of 186 tracks. The test set is constituted of the unbalanced database of 41,491 tracks composed of 37,035 Songs (89%) and 4,456 Instrumentals (11%). Bold numbers highlight the best results achieved for each metric.

VQMM, which uses frame-scale features, has a higher accuracy and f-score than GA and SVMBFF, which use track-scale features. GA and VQMM perform better than RCA in terms of accuracy and f-score, contrary to SVMBFF. The results of SVMBFF seem to depend on the context, i.e., on the musical database, because it displays a lower global accuracy and f-score than RCA. The poor performance of SVMBFF might be explained by the imbalance between Songs and Instrumentals. As there is an uneven distribution of Instrumentals and Songs in musical databases, we now analyse the precision, recall, and f-score for each class.

7.1 Results for Songs

Table 4 displays the precision and the recall of Song detection for GA, SVMBFF, and VQMM against a random classification algorithm denoted RCA and against the algorithm AllSong, which classifies every track as Song.

Table 4: Song precision and recall for the three algorithms defined in Section 3 against a random classification algorithm denoted RCA and an algorithm that classifies every track as Song, denoted AllSong. The train set is constituted of the balanced database of 186 tracks. The test set is constituted of the unbalanced database of 41,491 tracks composed of 37,035 Songs (89%) and 4,456 Instrumentals (11%). Bold numbers highlight the best results achieved for each metric.

The precision of RCA and AllSong corresponds to the prevalence of the tag in the musical database. RCA has a 50% recall because half of the retrieved tracks are of interest, whereas AllSong has a recall of 100%. For GA, SVMBFF, and VQMM, there is an increase in precision of respectively 0.02 (2.1%), 0.04 (4.8%), and 0.07 (7.5%) compared to RCA and AllSong.

Tagging all tracks of a musical database as Song leads to an f-score similar to those of the state-of-the-art algorithms because Songs are in the majority in such a database. Indeed, AllSong achieves 100% recall, which significantly increases the f-score. The f-score is also increased by the high precision, which corresponds to the prevalence of Songs, the majority class in our musical database. In sum, these results indicate that the best Song playlist can be obtained by classifying every track of an uneven musical database as Song, and that there is no need for a specific or complex algorithm. We study in the next section the impact of such naive strategies on Instrumentals.
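This effect follows directly from the class counts, as the short computation below shows: the precision of AllSong equals the Song prevalence and its recall is 1, so the f-score is already above 0.94 without any model.

```python
# Worked example: f-score of the AllSong baseline on the unbalanced set.
n_songs, n_instrumentals = 37_035, 4_456
n_total = n_songs + n_instrumentals           # 41,491 tracks

precision = n_songs / n_total                 # every track is tagged Song,
                                              # so TP = all Songs (~0.893)
recall = 1.0                                  # no Song is missed
f_score = 2 * precision * recall / (precision + recall)   # ~0.943
```

Any algorithm must therefore beat an f-score of roughly 0.94 on Songs just to outperform doing nothing at all, which is why the Song results in Table 4 are so close together.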

7.2 Results for Instrumentals

Table 5 displays the precision and the recall of Instrumental detection for GA, SVMBFF, and VQMM against RCA and against the algorithm AllInstrumental, which classifies every track as Instrumental.

Table 5: Instrumental precision and recall for the three algorithms defined in Section 3 against a random classification algorithm denoted RCA and an algorithm that classifies every track as Instrumental, denoted AllInstrumental. The train set is constituted of the balanced database of 186 tracks. The test set is constituted of the unbalanced database of 41,491 tracks composed of 37,035 Songs (89%) and 4,456 Instrumentals (11%). Bold numbers highlight the best results achieved for each metric.

As with AllSong, the precision of RCA and AllInstrumental corresponds to the prevalence of the Instrumental tag in the musical database. RCA has a 50% recall because it retrieves half of the relevant tracks, whereas AllInstrumental has a recall of 100%. The precision of GA, SVMBFF, and VQMM is respectively 0.06 (57.3%), 0.02 (13.6%), and 0.19 (170.9%) higher than that of RCA. As in the previous experiments, the better performance of VQMM over GA and SVMBFF might be attributable to its use of features at the frame scale. Yet even with frame-scale features, the precision for Instrumentals remains very low: VQMM only reaches 29.8%.

In light of these results, guaranteeing faultless Instrumental playlists seems impossible with current algorithms: Instrumentals are not correctly detected in our musical database by state-of-the-art methods, which reach at best a precision of 29.8%. For Songs, classifying every track as Song already yields a high precision that GA, SVMBFF, and VQMM only slightly improve, so a human listener might not notice the difference between a playlist generated by GA, SVMBFF, VQMM, or AllSong. Producing an Instrumental playlist, however, remains a challenge. According to our experiments, the best Instrumental playlist feasible with GA, SVMBFF, or VQMM contains at least 35 false positives (i.e., Songs) in every 50 tracks, which listeners are highly likely to notice. The precision of existing methods is thus not sufficient to produce a faultless Instrumental playlist. One might think a solution could be to select a different operating point on the receiver operating characteristic (ROC) curve.

7.3 Results for different operating points

Figure 1 shows the ROC curves for the three algorithms, along with the area under the curve (AUC), for the Songs.

Figure 1: Receiver operating characteristic curves for the three algorithms defined in Section 3, with the area under the curve in brackets, for the Songs. The train set consists of the balanced database of 186 tracks. The test set consists of the unbalanced database of 41,491 tracks composed of 37,035 Songs (89%) and 4,456 Instrumentals (11%).

The ROC curves of Figure 1 indicate that, for GA, SVMBFF, and VQMM, the only operating point with a 100% true positive rate also has a 100% false positive rate. Moreover, by design, VQMM exposes at most three operating points (Figure 1). Thus, a faultless playlist cannot be guaranteed by tuning the operating point of GA, SVMBFF, or VQMM.
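For readers who want to enumerate operating points themselves, scikit-learn's `roc_curve` lists them from a vector of classifier scores. The labels and scores below are toy values, not the paper's data:

```python
# Sketch: enumerating ROC operating points with scikit-learn.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])                # 1 = Song, 0 = Instrumental
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.3, 0.2, 0.1])

# Each (fpr, tpr) pair is one operating point; thresholds gives the score
# cut-off realizing it. A classifier that outputs only a few distinct
# scores (as VQMM does, by design) exposes only a few such points.
fpr, tpr, thresholds = roc_curve(y_true, scores)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f} -> TPR={t:.2f}, FPR={f:.2f}")
```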

7.4 Class-weight alternative

To guarantee a faultless playlist, another idea would be to tune the algorithms through class weighting, aiming for 100% precision even if the recall plummets. Even a recall of 1% on the 40 million tracks of Deezer would provide enough tracks to generate 40 playlists of the maximum size authorized on streaming platforms. Moreover, with such a recall for the Instrumental tag, listeners can still apply another tag filter, such as "Jazz", to generate an Instrumental Jazz playlist, for example.

GA can be tuned, but not extensively enough to guarantee 100% precision, because it relies on RANSAC, a regression algorithm robust to outliers whose configuration can only produce slight changes in performance, owing to its trade-off between accuracy and the number of inliers. VQMM can also be tuned, but the increase in performance is limited by the generalization made by the Markov model. SVMBFF can be tuned because class weights can be provided to the SVM. However, after trying different class weightings, the precision of SVMBFF only varies slightly, as the features used are not discriminative enough.
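As an illustration of this kind of tuning, scikit-learn's `SVC` accepts per-class weights. This is a sketch on synthetic data, not the SVMBFF configuration itself:

```python
# Sketch: biasing an SVM toward Instrumental precision via class weights.
# The data is synthetic; in the paper the inputs are audio features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 4)), rng.normal(2, 1, (10, 4))])
y = np.array([0] * 90 + [1] * 10)   # 0 = Song (majority), 1 = Instrumental

# A large weight on the Song class makes false positives (Songs tagged as
# Instrumental) costlier, trading Instrumental recall for precision.
clf = SVC(class_weight={0: 10.0, 1: 1.0}).fit(X, y)
predicted_instrumentals = int((clf.predict(X) == 1).sum())
print(predicted_instrumentals)
```

As the paragraph above notes, in practice such weighting only shifts the precision slightly when the features themselves are not discriminative enough.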

We also could have performed an N-fold cross-validation on the whole database, but SVMBFF and VQMM cannot manage such an amount of musical data in the training phase.

We thus propose different features and algorithms to generate better Instrumental playlists than those possible with state-of-the-art algorithms.

8 Toward better Instrumental playlists

Experiments in the previous sections indicate that GA, SVMBFF, and VQMM fail to generate a satisfactory Instrumental playlist from an uneven and larger musical database. As previously mentioned, such a playlist requires the highest possible precision while optimizing the recall. GA, SVMBFF, and VQMM might be "Horses" (Sturm, 2014), as they may not be addressing the problem they claim to solve: they are not designed to detect singing voice without lyrics, such as onomatopoeias or the indistinct vocals in the song Crowd Chant by Joe Satriani, for example. To avoid similar mistakes, a proper goal (Sturm, 2016) has to be clarified for SIC, namely a use case, a formal design of experiments (DOE) framework, and feedback from the evaluation to the system design.

Our use case is composed of four elements: the music universe, the music recording universe, the description universe, and a success criterion. The music recording universe is composed of polyphonic recording excerpts of the music in the music universe. Songs and Instrumentals are the two classes of the description universe. The success criterion is reached when an Instrumental playlist without false positives is generated from autotagging.

Six treatments are applied. Two are control treatments, i.e. baselines: random classification and the classification of every track as Instrumental. Three treatments are state-of-the-art methods (GA, VQMM, and SVMBFF), and the last treatment is the proposed methodology. The experimental units and the observational units are the entire collection of audio recordings. As no cross-validation is performed, there is a unique treatment structure. There are two response models, since our proposed algorithm is a two-stage process. The first response model is binary, because a track is either Instrumental or not. The second response model is composed of the aggregate statistics (precision and recall). The feedback consists of the number of Instrumentals in the final playlist. The treatment parameter is the generalization process performed by our proposed algorithm, since this is what differentiates it from the state-of-the-art algorithms. The experimental design of features and classifiers is detailed in the following section.

The materials in the DOE come from the database SATIN (Bayle et al., 2017). We describe below the music universe, i.e. SATIN, and its biases. The biases in the databases used in previous studies might have caused GA, VQMM, and SRCAM to overfit; the biases in SATIN thus have to be considered when interpreting the results. SATIN contains 41,491 audio recordings semi-randomly sampled from the 40 million available on streaming platforms. The sampling retrieved all the tracks that have validated identifier links between Deezer, Simbals, and Musixmatch. SATIN is representative in terms of genres and Song/Instrumental ratio. SATIN is biased toward mainstream music, as the tracks come from Deezer and Simbals: the database does not include independent labels and artists that are available on SoundCloud, for example. The tracks have been recorded in the last 30 years. Finally, SATIN is biased toward English-speaking artists, because these represent more than one third of the database.

8.1 Dedicated features for Instrumental detection

The three experiments of this study show that using features at the frame scale improves performance more than using features at the track scale. In SVD, frame features alone lead to misclassified Instrumentals, a high false positive rate, and indecision about the presence of singing voice at the frame scale. For our task, however, aggregating the classified frames can enhance SIC and lead to better results at the track scale. To use frame classification to detect Instrumentals, we propose a two-step algorithm. The first step is similar to a regular SVD algorithm in that it provides, for each frame, the probability that it contains singing voice. In the second step, the algorithm uses these probabilities to classify each track as Song or Instrumental. Figure 2 details the underpinning mechanisms of the first step, which is a regular SVD method.

Figure 2: Schema detailing the algorithm for the detection of Instrumentals.

Our algorithm extracts the thirteen MFCCs, together with the corresponding deltas and double deltas, from each 93 ms frame of the tracks. These features are then aligned with a frame-level ground truth produced by human annotators on the Jamendo database (Ramona et al., 2008), which contains 93 Songs. Frame-precise alignment is possible because the annotations provided by Ramona et al. (2008) take the form of intervals in which singing voice is present or absent. For Instrumentals, all extracted features are associated with the tag Instrumental. All these features and ground truths are then used to train a Random Forest classifier. Afterwards, the Random Forest classifier outputs a vector of probabilities indicating the likelihood of singing voice presence for each frame.
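A minimal sketch of this first step, with random stand-ins for the MFCC-based frame features (the real pipeline trains on the Jamendo annotations):

```python
# Sketch of step one: a Random Forest mapping per-frame features to a
# singing-voice probability vector. Features are random placeholders for
# the 13 MFCCs + deltas + double deltas used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_frames, n_features = 200, 39              # 13 MFCCs + deltas + double deltas
X_train = rng.normal(size=(n_frames, n_features))
y_train = rng.integers(0, 2, size=n_frames)  # 1 = frame contains singing voice

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# One probability per frame of a new track; column 1 is P(singing voice).
X_track = rng.normal(size=(120, n_features))
prob_vector = clf.predict_proba(X_track)[:, 1]
print(prob_vector.shape)
```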

Each track now has a probability vector giving the likelihood of singing voice presence for each frame. Using such soft annotations instead of binary ones has been shown to improve the overall classification results (Foucard et al., 2012). In the second step, the algorithm computes three sets of features for each track, two of which are based on this probability vector. The three sets of features generalize frame characteristics into features at the track scale. The first set of features is a linear 10-bin histogram, ranging from 0 to 1 in steps of 0.1, that represents the distribution of the probability vector. Even if some frames are misclassified, the overall shape of the histogram still reflects the dominant class.
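The histogram feature set can be sketched in a few lines; the probability vector below is a toy example:

```python
# Sketch of the first track-scale feature set: a 10-bin histogram of the
# per-frame singing-voice probabilities (bins of width 0.1 from 0 to 1).
import numpy as np

prob_vector = np.array([0.05, 0.12, 0.08, 0.95, 0.91, 0.88, 0.15, 0.03])

counts, edges = np.histogram(prob_vector, bins=np.linspace(0.0, 1.0, 11))
hist_features = counts / counts.sum()  # normalize so tracks of any length compare
print(hist_features)
```

Normalizing by the frame count makes the histogram comparable across tracks of different durations.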

Figure 3 details the construction of the second set of features, named n-gram, which uses the probability vector of singing voice presence.

Figure 3: Detailed example for the n-gram construction.

These song n-grams are computed in two steps. First, the algorithm counts the numbers of consecutive frames predicted to contain singing voice. It then computes the corresponding normalized 30-bin histogram, where n-grams longer than 30 are merged into the last bin. An Instrumental is likely to contain fewer consecutive frames classified as singing voice than a Song, so an Instrumental can be distinguished from a Song by its low number of long runs of predicted song frames. By using this whole set of features against such an amount of musical data, we hope to keep "Horses" away (Sturm et al., 2014; Sturm, 2014). Two reasons increase the probability that our algorithm addresses the correct problem of distinguishing Instrumentals from Songs. The first is the use of an amount of musical data large enough to reflect the diversity in music: our supervised algorithm can, for example, leverage Instrumentals containing violin to learn to distinguish the violin's amplitude modulation from that of the singing voice, which would not be possible if the musical database consisted only of rock music. The second is that the features used have been shown to detect singing voice presence under multiple track modifications related to pitch, volume, and speed (Bayle et al., 2016). Such musical data augmentations (Schlüter and Grill, 2015) are known to diminish the risk of overfitting (Krizhevsky et al., 2012) and to improve the figures of merit in imbalanced class problems (Chawla, 2009; Wong et al., 2016), further diminishing the risk of our algorithm being a "Horse".
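A sketch of the n-gram construction, assuming a binarized per-frame prediction vector (1 = frame predicted as singing voice):

```python
# Sketch of the n-gram feature set: count runs of consecutive frames
# predicted as singing voice, then histogram the run lengths into 30 bins
# (runs longer than 30 fall into the last bin).
import numpy as np

frame_predictions = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1])  # 1 = song frame

runs = []
length = 0
for p in frame_predictions:
    if p == 1:
        length += 1
    elif length > 0:
        runs.append(length)
        length = 0
if length > 0:
    runs.append(length)

# Clip long runs into the last bin, then build the normalized histogram.
clipped = np.minimum(runs, 30)
counts = np.bincount(clipped, minlength=31)[1:]  # bins for run lengths 1..30
ngram_features = counts / counts.sum()
print(runs)  # [3, 1, 2]
```

An Instrumental track would concentrate its mass in the short-run bins, while a Song would populate the longer-run bins.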

Finally, the third and last set of features consists of the mean values for MFCC, deltas, and double deltas.

All these features are then used as training materials for an AdaBoost classifier, as described in the following section.

8.2 Suited classification algorithm for Instrumental retrieval

It is necessary to choose a machine learning algorithm that can focus on Instrumentals, because these are poorly detected and in the minority in musical databases. We therefore use boosting algorithms, which alter the weights of training examples to focus on the most intricate tracks. Boosting is preferred over bagging, as the former aims to decrease bias while the latter aims to decrease variance, and in this applicative context of generating an Instrumental playlist from a big musical database, decreasing bias is preferable. Among boosting algorithms, the AdaBoost classifier is known to perform well for the classification of minority tags (Foucard et al., 2012) and of music (Bergstra et al., 2006). A decision tree is used as the base estimator in AdaBoost, first because of the logarithmic training curve displayed by decision trees, and second because of the better performance of tree-based classifiers in singing voice detection (Lehner et al., 2014; Bayle et al., 2016). We use the AdaBoost implementation provided by the Python package scikit-learn (Pedregosa et al., 2011) to guarantee reproducibility.
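A minimal sketch of this classifier with scikit-learn, whose `AdaBoostClassifier` uses a decision tree as its default base estimator; the features here are random stand-ins for the track-scale descriptors:

```python
# Sketch of the track-scale classifier: scikit-learn's AdaBoostClassifier,
# whose default base estimator is a depth-1 decision tree.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 79))     # e.g. 10 histogram + 30 n-gram + 39 MFCC means
y = rng.integers(0, 2, size=100)   # 1 = Instrumental, 0 = Song

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
predictions = clf.predict(X)
print(predictions.shape)
```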

8.3 Evaluation of the performances of our algorithm

This section evaluates the performance of the proposed algorithm in the same experiment as the one conducted in Section 7. We remind the reader that we train our algorithm on the 186 tracks of the balanced database and test it against the 41,491 tracks of the unbalanced database. Our algorithm reaches a global accuracy of and a global f-score of . Table 6 displays the precision and recall of our algorithm for Instrumental classification, alongside the previous corresponding results for AllInstrumental, GA, SVMBFF, and VQMM.

Algorithm Precision Recall
Proposed algorithm
Table 6: Precision and recall of the newly proposed algorithm. The train set consists of the balanced database of 186 tracks. The test set consists of the unbalanced database of 41,491 tracks composed of 37,035 Songs (89%) and 4,456 Instrumentals (11%). The bold number highlights the best precision achieved.

As indicated in Table 6, the main difference between our algorithm and GA, SVMBFF, and VQMM is the higher precision reached for Instrumental detection. The precision of our algorithm is 0.527 (276.8%) higher than that of the best existing method, VQMM, and 0.715 (750.0%) higher than that of RCA. From a practical point of view, an Instrumental playlist built with GA, SVMBFF, or VQMM contains at best 30% true positives, i.e., Instrumentals, whereas our proposed method raises this number beyond 80%, which is noteworthy for any listener. This high precision cannot be imputed to overfitting, because the training set is 223 times smaller than the test set. The results of GA, SVMBFF, and VQMM might instead have suffered from overfitting, because their original experiments implied a music universe that was too restricted in size and in the representativeness of the tracks' origins. Our algorithm thus brings the detection of Instrumentals closer to human performance than state-of-the-art algorithms do.

When applying the same proposed algorithm to Songs instead of Instrumentals, it reaches a precision of 0.959 and a recall of 0.844 on Song detection, which are respectively 0.07 (7.9%) and 0.344 (68.8%) higher than RCA. In this configuration, the global accuracy and f-score reached by our algorithm are respectively 0.829 and 0.852.

8.4 Limitations of our algorithm

As for VQMM in Figure 1, we cannot tune our algorithm to guarantee 100% precision: it has only one operating point, due to the use of the AdaBoost classifier. We tried SVM and Random Forest classifiers, which have multiple operating points, but they do not reach as much precision as AdaBoost does. Our algorithm in its current state performs better in Instrumental detection than state-of-the-art algorithms, but it still cannot guarantee a faultless playlist. As we aim to reduce the false positives to zero, the proposed classification algorithm seems to be limited by the set of features used. A benchmark of SVD methods (Lukashevich et al., 2007; Ramona et al., 2008; Regnier and Peeters, 2009; Lehner et al., 2014; Leglaive et al., 2015; Lehner et al., 2015; Nwe et al., 2004; Schlüter and Grill, 2015; Schlüter, 2016) is needed to assess the impact of additional features on the precision and recall when used with our generalization method. In particular, features such as the Vocal Variance (Lehner et al., 2014), the Voice Vibrato (Regnier and Peeters, 2009), the Harmonic Attenuation (Nwe et al., 2004), and Auto-Regressive Moving Average filtering (Lukashevich et al., 2007) have to be reviewed.

Apart from benchmarking features, deep learning approaches for SVD have been proposed (Kereliuk et al., 2015; Leglaive et al., 2015; Lehner et al., 2015; Schlüter and Grill, 2015; Lidy and Schindler, 2016; Pons et al., 2016). However, deep learning is still a nascent and little-understood approach in MIR, and to the best of our knowledge no tuning of the operating point has been performed, as the inner layers are difficult to analyse (Woods and Bowyer, 1997; Zhao et al., 2011). Furthermore, the spectrograms of the full-length tracks of a musical database are difficult to fit into the memory of a GPU, which makes training a deep learning model on full-length tracks for the SIC task impractical. Current deep learning approaches indeed require batches of tracks that are large enough, usually 32 tracks (Miron et al., 2017; Oramas et al., 2017), to guarantee a good generalization process. For instance, a neural network architecture for SVD like the one from Schlüter and Grill (2015) takes around 240 MB in memory per track for 30-second spectrograms with 40 frequency bins. This architecture and batch size just fit in a high-end GPU with around 8 GB of RAM. Analysing full-length tracks of more than 4 minutes would require diminishing the batch size below 4, which would harm the model's generalization. This indicates that creating faultless Instrumental playlists with a deep learning approach is not practically feasible at the moment, and that currently the only path toward better Instrumental playlists is to enhance the input feature set of our algorithm.
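The memory argument can be checked with simple arithmetic. The 240 MB and 8 GB figures are those quoted above; real training also stores weights and activations, pushing the feasible batch size further down:

```python
# Back-of-the-envelope check of the GPU-memory argument, using the figures
# quoted in the text: ~240 MB per 30-second spectrogram, an 8 GB GPU.
mb_per_30s = 240
gpu_mb = 8 * 1024

batch_30s = gpu_mb // mb_per_30s            # 30-second excerpts per batch
mb_per_4min = mb_per_30s * (4 * 60 // 30)   # a 4-minute track is 8x longer
batch_4min = gpu_mb // mb_per_4min          # full-length tracks per batch

print(batch_30s, batch_4min)
```

Even before counting model weights and activations, at most 4 full-length tracks fit per batch, far below the usual 32.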

9 Conclusion

In this study, we propose solutions toward the content-based generation of faultless Instrumental playlists. Our new approach reaches a precision of 82.5% for Instrumental detection, which is approximately three times better than state-of-the-art algorithms. Moreover, this increase in precision is reached on a larger musical database than the ones used in previous studies.

Our study provides five main contributions. We provide the first review of SIC in the applicative context of playlist generation (Sections 3 to 7). We show in Section 8 that the use of frame features outperforms the use of global track features for SIC, and thus diminishes the risk of an algorithm being a "Horse". This improvement is magnified when frame ground truths are used alongside frame features, which is the key difference between our proposed algorithm and state-of-the-art algorithms. Furthermore, our algorithm's implementation can process large musical databases, whereas the current implementations of SVMBFF, SRCAM, and VQMM cannot. Additionally, we propose in Section 8 a new track tagging method based on frame predictions that outperforms the Markov model in terms of accuracy and f-score. Finally, we demonstrate that better playlists for a given tag can be generated when the autotagging algorithm focuses only on that tag. This increase is accentuated when the tag is in the minority, which is the case for most tags and especially for Instrumentals.

The source code is available online at

The authors thank Thibault Langlois and Fabien Gouyon for their help in reproducing the VQMM and SVMBFF classification algorithms respectively. The authors thank Manuel Moussallam from Deezer for his industrial acumen in music recommendation and for fruitful discussions. The authors thank Bob L. Sturm for his help formalizing the Songs and Instrumentals Classification task. The authors thank Jordi Pons for fruitful discussions on deep learning approaches. The authors thank Fidji Berio and Kimberly Malcolm for insightful proofreading. All authors contributed equally to this work. The authors declare no conflict of interest. The industrial partners had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. The following abbreviations are used in this manuscript:
ANOVA: ANalysis Of VAriance
AUC: Area Under the Curve
DOE: Design Of Experiments
GA: Ghosal’s Algorithm
IFPI: International Federation of the Phonographic Industry
ISRC: International Standard Recording Code
MFCC: Mel-Frequency Cepstral Coefficients
MIR: Music Information Retrieval
RANSAC: Random Sample and Consensus
RCA: Random Classification Algorithm
ROC: Receiver Operating Characteristic
SATIN: Set of Audio Tags and Identifiers Normalized
SIC: Songs and Instrumental Classification
SRCAM: Sparse Representation Classification and Auditory temporal Modulation features
SVD: Singing Voice Detection
SVM: Support Vector Machine
SVMBFF: Support Vector Machine and Bags of Frames of Features
VQMM: Vector Quantization and Markov Models


  • Song et al. (2012) Song, Y.; Dixon, S.; Pearce, M. A survey of music recommendation systems and future perspectives. Proc. 9th Int. Symp. Comp. Music Model. Retrieval, 2012, pp. 395–410.
  • Wikström (2015) Wikström, P. Will algorithmic playlist curation be the end of music stardom? J. Bus. Anthrop. 2015, 4, 278–284.
  • Choi et al. (2016) Choi, K.; Fazekas, G.; Sandler, M. Towards playlist generation algorithms using RNNs trained on within-track transitions. Work. Surprise Opposition Obstruction Adapt. Personalized Syst., 2016, pp. 1–4.
  • Nakano et al. (2016) Nakano, T.; Kato, J.; Hamasaki, M.; Goto, M. PlaylistPlayer: an interface using multiple criteria to change the playback order of a music playlist. Proc. 21st Int. Conf. Intell. User Interfaces. ACM, 2016, pp. 186–190.
  • Thalmann et al. (2016) Thalmann, F.S.; Perez Carillo, A.; Fazekas, G.; Wiggins, G.A.; Sandler, M.B. The Semantic Music Player: a smart mobile player based on ontological structures and analytical feature metadata. Proc. 2nd Web Audio Conf., 2016, pp. 1–6.
  • Nettamo et al. (2006) Nettamo, E.; Nirhamo, M.; Häkkilä, J. A cross-cultural study of mobile music: retrieval, management and consumption. Proc. 18th Australia Conf. Comput. Human Interaction Design Activities Artefacts Environ., 2006, pp. 87–94.
  • Uitdenbogerd and Schyndel (2002) Uitdenbogerd, A.; Schyndel, R. A review of factors affecting music recommender success. Proc. 3rd Int. Conf. Music Inform. Retrieval, 2002, pp. 204–208.
  • Yoshii et al. (2007) Yoshii, K.; Goto, M.; Komatani, K.; Ogata, T.; Okuno, H.G. Improving efficiency and scalability of model-based music recommender system based on incremental training. Proc. 8th Int. Conf. Music Inform. Retrieval, 2007, pp. 89–94.
  • Schedl et al. (2015) Schedl, M.; Knees, P.; McFee, B.; Bogdanov, D.; Kaminskas, M. Music recommender systems. In Recommender Systems Handbook; Ricci, F.; Rokach, L.; Shapira, B., Eds.; Springer US, 2015; chapter 13, pp. 453–492.
  • Jäschke et al. (2007) Jäschke, R.; Marinho, L.; Hotho, A.; Schmidt-Thieme, L.; Stumme, G. Tag recommendations in folksonomies. Proc. 11th Euro. Conf. Princ. Practice Know. Disc. Databases, 2007, pp. 506–514.
  • Streich (2006) Streich, S. Music complexity: a multi-faceted description of audio content. PhD thesis, Univ. Pompeu Fabra, Barcelona, Spain, 2006.
  • Laurier and Herrera (2007) Laurier, C.; Herrera, P. Audio music mood classification using support vector machine. MIREX task on Audio Mood Classification, 2007, pp. 1–3.
  • Turnbull et al. (2008) Turnbull, D.; Barrington, L.; Torres, D.; Lanckriet, G. Semantic annotation and retrieval of music and sound effects. IEEE Trans. Audio Speech Lang. Process. 2008, 16, 467–476.
  • Shardanand and Maes (1995) Shardanand, U.; Maes, P. Social information filtering: algorithms for automating “word of mouth”. Proc. Spec. Interest Group Comp. Human Interact. Conf. Human Factors in Comp. Syst., 1995, pp. 210–217.
  • Breese et al. (1998) Breese, J.S.; Heckerman, D.; Kadie, C. Empirical analysis of predictive algorithms for collaborative filtering. Proc. 14th Conf. Uncertainty Artif. Intell., 1998, pp. 43–52.
  • Levy and Sandler (2007) Levy, M.; Sandler, M.B. A semantic space for music derived from social tags. Proc. 8th Int. Conf. Music Inform. Retrieval, 2007, pp. 411–416.
  • Shepitsen et al. (2008) Shepitsen, A.; Gemmell, J.; Mobasher, B.; Burke, R.

    Personalized recommendation in social tagging systems using hierarchical clustering.

    Proc. ACM 2nd Conf. Recomm. Syst., 2008, pp. 259–266.
  • Law et al. (2007) Law, E.L.M.; von Ahn, L.; Dannenberg, R.B.; Crawford, M. Tagatune : a game for music and sound annotation. Proc. 8th Int. Conf. Music Inform. Retrieval, 2007, pp. 361–364.
  • Turnbull et al. (2007) Turnbull, D.; Barrington, L.; Torres, D.; Lanckriet, G. Towards musical query-by-semantic-description using the CAL500 data set. Proc. 30th Annu. Int. ACM SIGIR Conf. Res. Devl. Inf. Retr., 2007, pp. 439–446.
  • Mandel and Ellis (2008) Mandel, M.I.; Ellis, D.P.W. Multiple-instance learning for music information retrieval. Proc. 9th Int. Conf. Music Inform. Retrieval, 2008, pp. 577–582.
  • Whitman and Ellis (2004) Whitman, B.; Ellis, D.P.W. Automatic record reviews. Proc. 5th Int. Conf. Music Inform. Retrieval, 2004, pp. 470–477.
  • Knees et al. (2007) Knees, P.; Pohle, T.; Schedl, M.; Widmer, G. A music search engine built upon audio-based and web-based similarity measures. Proc. 30th Annu. Int. ACM SIGIR Conf. Res. Devl. Inf. Retr., 2007, pp. 447–454.
  • Tzanetakis and Cook (2002) Tzanetakis, G.; Cook, P. Musical genre classification of audio signals. IEEE Trans. Speech Audio Process. 2002, 10, 293–302.
  • Bertin-Mahieux et al. (2010) Bertin-Mahieux, T.; Eck, D.; Mandel, M.I. Automatic Tagging of Audio: The State-of-the-Art. In Mach. Audition Prin. Algo. Syst.; Wang, W., Ed.; Information Science Reference, IGI Global, 2010; chapter 14, pp. 334–352.
  • Prockup et al. (2015) Prockup, M.; Ehmann, A.F.; Gouyon, F.; Schmidt, E.M.; Celma, O.; Kim, Y.E. Modeling genre with the Music Genome Project: comparing human-labeled attributes and audio features. Proc. 16th Int. Soc. Music Inform. Retrieval Conf., 2015, pp. 31–37.
  • Kim and Whitman (2002) Kim, Y.E.; Whitman, B. Singer identification in popular music recordings using voice coding features. Proc. 3rd Int. Conf. Music Inform. Retrieval, 2002, pp. 17–23.
  • Skowronek et al. (2006) Skowronek, J.; McKinney, M.F.; van de Par, S. Ground truth for automatic music mood classification. Proc. 7th Int. Conf. Music Inform. Retrieval, 2006, pp. 395–396.
  • Sturm (2013) Sturm, B.L. The GTZAN dataset: its contents, its faults, their effects on evaluation, and its future use. arXiv 2013, pp. 1–29, [1306.1461].
  • Sturm (2015) Sturm, B.L. Faults in the latin music database and with its use. Proc. Late Breaking Demo 16th Int. Soc. Music Inform. Retrieval Conf., 2015, pp. 1–2.
  • Pachet and Roy (1999) Pachet, F.; Roy, P. Automatic generation of music programs. Proc. 5th Int. Conf. Constraint Prog.; Joxan Jaffar., Ed., 1999, pp. 331–345.
  • Eck et al. (2007) Eck, D.; Lamere, P.; Bertin-Mahieux, T.; Green, S. Automatic generation of social tags for music recommendation. Proc. 21st Conf. Adv. Neur. Inform. Process. Syst., 2007, pp. 385–392.
  • Li et al. (2007) Li, Q.; Myaeng, S.H.; Kim, B.M. A probabilistic music recommender considering user opinions and audio features. Inform. Process. Manag. 2007, 43, 473–487.
  • Schafer et al. (2007) Schafer, B.J.; Frankowski, D.; Herlocker, J.; Sen, S. Collaborative filtering recommender systems. In Adapt. Web, 1 ed.; Brusilovski, P.; Kobsa, A.; Nejdl, W., Eds.; Springer-Verlag Berlin Heidelberg, 2007; chapter 9, pp. 291–324.
  • Schlüter and Grill (2015) Schlüter, J.; Grill, T. Exploring data augmentation for improved singing voice detection with neural networks. Proc. 16th Int. Soc. Music Inform. Retrieval Conf., 2015, pp. 121–126.
  • Logan (2002) Logan, B. Content-based playlist generation: exploratory experiments. Proc. 3rd Int. Conf. Music Inform. Retrieval, 2002, pp. 6–7.
  • Hoashi et al. (2003) Hoashi, K.; Matsumoto, K.; Inoue, N. Personalization of user profiles for content-based music retrieval based on relevance feedback. Proc. 11th ACM Int. Conf. Multimedia, 2003, pp. 110–119.
  • Celma et al. (2005) Celma, Ò.; Ramírez, M.; Herrera, P. Foafing the music: a music recommendation system based on RSS feeds and user preferences. Proc. 6th Int. Conf. Music Inform. Retrieval, 2005, pp. 457–464.
  • Sordo et al. (2007) Sordo, M.; Laurier, C.; Celma, Ò. Annotating music collections: how content-based similarity helps to propagate labels. Proc. 8th Int. Conf. Music Inform. Retrieval, 2007, pp. 531–534.
  • Tingle et al. (2010) Tingle, D.; Kim, Y.E.; Turnbull, D. Exploring automatic music annotation with "acoustically-objective" tags. Proc. 11th ACM Int. Conf. Multimedia Inform. Retrieval, 2010, pp. 55–62.
  • Bu et al. (2010) Bu, J.; Tan, S.; Chen, C.; Wang, C.; Wu, H.; Zhang, L.; He, X. Music recommendation by unified hypergraph: combining social media information and music content. Proc. 18th ACM Int. Conf. Multimedia, 2010, pp. 391–400.
  • Hsu et al. (2016) Hsu, K.C.; Lin, C.S.; Chi, T.S. Sparse coding based music genre classification using spectro-temporal modulations. Proc. 17th Int. Soc. Music Inform. Retrieval Conf., 2016, pp. 744–750.
  • Jeong and Lee (2016) Jeong, I.Y.; Lee, K. Learning temporal features using a deep neural network and its application to music genre classification. Proc. 17th Int. Soc. Music Inform. Retrieval Conf., 2016, pp. 434–440.
  • Lu et al. (2016) Lu, Y.C.; Wu, C.W.; Lu, C.T.; Lerch, A.

    Automatic outlier detection in music genre datasets.

    Proc. 17th Int. Soc. Music Inform. Retrieval Conf., 2016, pp. 101–107.
  • Oramas et al. (2016) Oramas, S.; Espinosa-Anke, L.; Lawlor, A.; Serra, X.; Saggion, H. Exploring customer reviews for music genre classification and evolutionary studies. Proc. 17th Int. Soc. Music Inform. Retrieval Conf., 2016, pp. 150–156.
  • Sturm (2016) Sturm, B.L. Revisiting priorities: improving MIR evaluation practices. Proc. 17th Int. Soc. Music Inform. Retrieval Conf., 2016, pp. 488–494.
  • Wiggins (2009) Wiggins, G.A. Semantic gap?? Schemantic schmap!! Methodological considerations in the scientific study of music. Proc. 11th IEEE Int. Symp. Multimedia, 2009, pp. 477–482.
  • Chau et al. (2013) Chau, P.Y.K.; Ho, S.Y.; Ho, K.K.W.; Yao, Y. Examining the effects of malfunctioning personalized services on online users’ distrust and behaviors. Decision Support Systems 2013, 56, 180–191.
  • Yin et al. (2010) Yin, D.; Bond, S.D.; Zhang, H. Are bad reviews always stronger than good? Asymmetric negativity bias in the formation of online consumer trust. Proc. 31st Int. Conf. Inform. Syst., 2010, pp. 1–18.
  • Gouyon et al. (2014) Gouyon, F.; Sturm, B.L.; Oliveira, J.L.; Hespanhol, N.; Langlois, T. On evaluation validity in music autotagging. arXiv 2014, [1410.0001].
  • Rosenblatt (2015) Rosenblatt, D. Music Listening as Therapy. PhD thesis, Univ. Loma Linda, CA, USA, 2015.
  • Suárez et al. (2016) Suárez, L.; Elangovan, S.; Au, A. Cross-sectional study on the relationship between music training and working memory in adults. Australian J. Psych. 2016, 68, 38–46.
  • Zhao and Kuhl (2016) Zhao, T.C.; Kuhl, P.K. Musical intervention enhances infants’ neural processing of temporal structure in music and speech. Proc. Natl. Acad. Sci. USA 2016, 113, 5212–5217.
  • Rao et al. (2009) Rao, V.; Ramakrishnan, S.; Rao, P. Singing voice detection in polyphonic music using predominant pitch. Proc. 10th Annu. Conf. Inter. Speech Comm. Assoc., 2009, pp. 1131–1134.
  • Panteli et al. (2017) Panteli, M.; Bittner, R.; Bello, J.P.; Dixon, S. Towards the characterization of singing styles in world music. Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2017, pp. 636–640.
  • Ghosal et al. (2013) Ghosal, A.; Chakraborty, R.; Dhara, B.C.; Saha, S.K. A hierarchical approach for speech-instrumental-song classification. SpringerPlus 2013, 2, 1–11.
  • Bayle et al. (2016) Bayle, Y.; Hanna, P.; Robine, M. Classification à grande échelle de morceaux de musique en fonction de la présence de chant. Journées d’Informatique Musicale, 2016, pp. 144–152.
  • Bogdanov et al. (2016) Bogdanov, D.; Porter, A.; Herrera, P.; Serra, X. Cross-collection evaluation for music classification tasks. Proc. 17th Int. Soc. Music Inform. Retrieval Conf., 2016, pp. 379–385.
  • Lehner et al. (2014) Lehner, B.; Widmer, G.; Sonnleitner, R. On the reduction of false positives in singing voice detection. Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2014, pp. 7480–7484.
  • Casey et al. (2008) Casey, M.A.; Veltkamp, R.; Goto, M.; Leman, M.; Rhodes, C.; Slaney, M. Content-based music information retrieval: Current directions and future challenges. Proc. IEEE 2008, 96, 668–696.
  • Bayle et al. (2017) Bayle, Y.; Hanna, P.; Robine, M. SATIN: A Persistent Musical Database for Music Information Retrieval. Proc. 15th Int. Works. Content-Based Multimedia Indexing, 2017, pp. 1–5.
  • Ramona et al. (2008) Ramona, M.; Richard, G.; David, B. Vocal detection in music with support vector machines. Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2008, pp. 1885–1888.
  • Bittner et al. (2014) Bittner, R.M.; Salamon, J.; Tierney, M.; Mauch, M.; Cannam, C.; Bello, J.P. MedleyDB: a multitrack dataset for annotation-intensive MIR research. Proc. 15th Int. Soc. Music Inform. Retrieval Conf., 2014, pp. 155–160.
  • Liutkus et al. (2014) Liutkus, A.; Fitzgerald, D.; Rafii, Z.; Pardo, B.; Daudet, L. Kernel additive models for source separation. IEEE Trans. Signal Process. 2014, 62, 4298–4310.
  • Schlüter (2016) Schlüter, J. Learning to pinpoint singing voice from weakly labeled examples. Proc. 17th Int. Soc. Music Inform. Retrieval Conf., 2016, pp. 44–50.
  • Hespanhol (2013) Hespanhol, N. Using Autotagging for Classification of Vocals in Music Signals. PhD thesis, Univ. Porto, Portugal, 2013.
  • Zhang and Kuo (2013) Zhang, T.; Kuo, C.C.J. Content-Based Audio Classification and Retrieval for Audiovisual Data Parsing; Springer Science & Business Media, 2013; p. 136.
  • Sturm (2014) Sturm, B.L. The state of the art ten years after a state of the art: Future research in music information retrieval. Journal of New Music Research 2014, 43, 147–172.
  • Fischler and Bolles (1981) Fischler, M.A.; Bolles, R.C. Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  • Ness et al. (2009) Ness, S.R.; Theocharis, A.; Tzanetakis, G.; Martins, L.G. Improving automatic music tag annotation using stacked generalization of probabilistic SVM outputs. Proc. 17th ACM Int. Conf. Multimedia, 2009, pp. 705–708.
  • Langlois and Marques (2009) Langlois, T.; Marques, G. A music classification method based on timbral features. Proc. 10th Int. Soc. Music Inform. Retrieval Conf., 2009, pp. 81–86.
  • Panagakis et al. (2009) Panagakis, Y.; Kotropoulos, C.; Arce, G.R. Music genre classification via sparse representations of auditory temporal modulations. Proc. 17th European Signal Process. Conf., 2009, pp. 1–5.
  • Wright et al. (2009) Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
  • Sturm (2012) Sturm, B.L. Two systems for automatic music genre recognition: what are they really recognizing? Proc. 2nd Int. ACM Works. Music Inf. Retrieval User-Centered Multimodal Strat., 2012, pp. 69–74.
  • Sturm and Noorzad (2012) Sturm, B.L.; Noorzad, P. On automatic music genre recognition by sparse representation classification using auditory temporal modulations. Proc. 9th Int. Symp. Comp. Music Model. Retrieval, 2012, pp. 379–394.
  • Pedregosa et al. (2011) Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; Duchesnay, É. Scikit-learn: machine learning in Python. J. Mach. Learning Res. 2011, 12, 2825–2830.
  • Livshin and Rodet (2003) Livshin, A.; Rodet, X. The importance of cross database evaluation in sound classification. Proc. 4th Int. Conf. Music Inform. Retrieval, 2003, pp. 1–2.
  • Guaus (2009) Guaus, E. Audio Content Processing for Automatic Music Genre Classification: Descriptors, Databases, and Classifiers. PhD thesis, Univ. Pompeu Fabra, Barcelona, Spain, 2009.
  • Ng (1997) Ng, A.Y. Preventing "overfitting" of cross-validation data. Proc. 14th Int. Conf. Mach. Learning, 1997, pp. 245–253.
  • Herrera et al. (2003) Herrera, P.; Dehamel, A.; Gouyon, F. Automatic labeling of unpitched percussion sounds. Proc. 114th Audio Eng. Soc. Conv., 2003, pp. 1–14.
  • Bogdanov et al. (2011) Bogdanov, D.; Serrà, J.; Wack, N.; Herrera, P.; Serra, X. Unifying low-level and high-level music similarity measures. IEEE Trans. Multimedia 2011, 13, 687–701.
  • Chudáček et al. (2009) Chudáček, V.; Georgoulas, G.; Lhotská, L.; Stylios, C.; Petrík, M.; Čepek, M. Examining cross-database global training to evaluate five different methods for ventricular beat classification. J. Physio. Measurement 2009, 30, 661–677.
  • Bekios-Calfa et al. (2011) Bekios-Calfa, J.; Buenaposada, J.M.; Baumela, L. Revisiting linear discriminant techniques in gender recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 858–864.
  • Llamedo et al. (2012) Llamedo, M.; Khawaja, A.; Martinez, J.P. Cross-database evaluation of a multilead heartbeat classifier. IEEE Trans. Inf. Technol. Biomed. 2012, 16, 658–664.
  • Erdoğmuş et al. (2014) Erdoğmuş, N.; Vanoni, M.; Marcel, S. Within- and cross-database evaluations for face gender classification via BeFIT protocols. Proc. 16th IEEE Int. Works. Multimedia Signal Process., 2014, pp. 1–6.
  • Fernández et al. (2015) Fernández, C.; Huerta, I.; Prati, A. A comparative evaluation of regression learning algorithms for facial age estimation. In Face and Facial Expression Recognition from Real World Videos; Ji, Q.; Moeslund, T.; Hua, G.; Nasrollahi, K., Eds.; Springer, Cham, 2015; pp. 133–144.
  • Sturm (2014) Sturm, B.L. A simple method to determine if a music information retrieval system is a "Horse". IEEE Trans. Multimedia 2014, 16, 1636–1644.
  • Foucard et al. (2012) Foucard, R.; Essid, S.; Lagrange, M.; Richard, G. Étiquetage automatique de musique : une approche de boosting régressif basée sur une fusion souple d’annotateurs. Proc. 15th Conf. Compression Representation Signaux Audiovisuels, 2012, pp. 169–173.
  • Sturm et al. (2014) Sturm, B.L.; Bardeli, R.; Langlois, T.; Emiya, V. Formalizing the problem of music description. Proc. 15th Int. Soc. Music Inform. Retrieval Conf., 2014, pp. 89–94.
  • Krizhevsky et al. (2012) Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proc. 25th Conf. Advances Neur. Inform. Proc. Syst.; Pereira, F.; Burges, C.J.C.; Bottou, L.; Weinberger, K.Q., Eds.; Curran Associates, Inc., 2012; pp. 1097–1105.
  • Chawla (2009) Chawla, N.V. Data mining for imbalanced datasets: An overview. In Data Mining and Knowledge Discovery Handbook; Springer US, 2009; pp. 875–886.
  • Wong et al. (2016) Wong, S.C.; Gatt, A.; Stamatescu, V.; McDonnell, M.D. Understanding Data Augmentation for Classification: When to Warp? Proc. Int. Conf. Digital Image Comp. Tech. App., 2016, pp. 1–6.
  • Bergstra et al. (2006) Bergstra, J.; Casagrande, N.; Erhan, D.; Eck, D.; Kegl, B. Meta-Features and AdaBoost for Music Classification. Machine Learning 2006, pp. 1–28.
  • Lukashevich et al. (2007) Lukashevich, H.; Gruhne, M.; Dittmar, C. Effective singing voice detection in popular music using arma filtering. Proc. 10th Int. Works. Digital Audio Effects, 2007, pp. 165–168.
  • Regnier and Peeters (2009) Regnier, L.; Peeters, G. Singing voice detection in music tracks using direct voice vibrato detection. Proc. IEEE Int. Conf. Acoust. Speech Signal Process., 2009, pp. 1685–1688.
  • Leglaive et al. (2015) Leglaive, S.; Hennequin, R.; Badeau, R. Singing voice detection with deep recurrent neural networks. Proc. 40th IEEE Int. Conf. Acoust. Speech Signal Process., 2015, pp. 121–125.
  • Lehner et al. (2015) Lehner, B.; Widmer, G.; Böck, S. A low-latency, real-time-capable singing voice detection method with LSTM recurrent neural networks. Proc. 23rd European Signal Process. Conf., 2015, pp. 21–25.
  • Nwe et al. (2004) Nwe, T.L.; Shenoy, A.; Wang, Y. Singing voice detection in popular music. Proc. 12th Annu. ACM Int. Conf. Multimedia, 2004, pp. 324–327.
  • Kereliuk et al. (2015) Kereliuk, C.; Sturm, B.L.; Larsen, J. Deep learning and music adversaries. IEEE Trans. Multimedia 2015, 17, 2059–2071.
  • Lidy and Schindler (2016) Lidy, T.; Schindler, A. CQT-based convolutional neural networks for audio scene classification and domestic audio tagging. Proc. IEEE Audio Acoust. Signal Process. Challenge Works. Detect. Classif. Acoustic Scenes Events, 2016, pp. 60–64.
  • Pons et al. (2016) Pons, J.; Lidy, T.; Serra, X. Experimenting with musically motivated convolutional neural networks. Proc. 14th Int. Works. Content-Based Multimedia Indexing, 2016, pp. 1–6.
  • Woods and Bowyer (1997) Woods, K.; Bowyer, K.W. Generating ROC curves for artificial neural networks. IEEE Trans. Med. Imag. 1997, 16, 329–337.
  • Zhao et al. (2011) Zhao, P.; Jin, R.; Yang, T.; Hoi, S.C. Online AUC maximization. Proc. 28th Int. Conf. Mach. Learn., 2011, pp. 233–240.
  • Miron et al. (2017) Miron, M.; Janer Mestres, J.; Gómez Gutiérrez, E. Generating data to train convolutional neural networks for classical music source separation. Proc. 14th Sound Music Comp. Conf., 2017, pp. 227–234.
  • Oramas et al. (2017) Oramas, S.; Nieto, O.; Sordo, M.; Serra, X. A deep multimodal approach for cold-start music recommendation. Proc. 2nd Work. Deep Learn. Rec. Syst., 2017, pp. 32–37.