Bottom-up Broadcast Neural Network For Music Genre Classification

01/24/2019 · Caifeng Liu et al. · Dalian University of Technology

Music genre recognition based on visual representations has been explored successfully in recent years. More recently, there has been increasing interest in applying convolutional neural networks (CNNs) to this task. However, most existing methods employ mature CNN architectures proposed for image recognition without modification, which yields learned features that are not well suited to music genre classification. To address this issue, we fully exploit the low-level information in audio spectrograms and develop a novel CNN architecture. The proposed architecture takes long contextual information into consideration and transfers more suitable information to the decision-making layer. Extensive experiments on several benchmark datasets, including GTZAN, Ballroom, and Extended Ballroom, verify the excellent performance of the proposed network. Code and models will be available at "https://github.com/CaifengLiu/music-genre-classification".


1 Introduction

With the rapid development of multimedia technology, a tremendous number of digital audio recordings are uploaded to the Internet. Beyond the benefits they bring, the explosive growth of these recordings makes them increasingly difficult to organize, so managing them appropriately is a burdensome task crying out for reliable solutions. Researchers worldwide have devoted considerable effort to processing audio of various kinds. Music information retrieval (MIR) is one such line of research, providing a meaningful way to handle music data. Driven by the needs of applications such as music recommendation and music search, MIR has attracted wide attention. Classification, as a basic form of music understanding, has become an essential tool for MIR to analyze and process music information. As a core problem in MIR, genre classification focuses on assigning a specific genre (classical, rock, jazz, etc.) to an unknown music clip. Expert annotation is notoriously expensive and intractable for large catalogues; therefore, content-based genre recognition is highly valuable for bootstrapping MIR systems.

Even though researchers have proposed various algorithms from different perspectives, most of them rely on well-designed hand-crafted features and appropriate classifiers for music data. Spectrograms have proved to be an effective tool for describing audio signals. Like images, spectrograms are visual representations that adequately preserve the time-frequency information of music data, which builds a bridge between algorithms for image data and those for audio signals: mature image-processing algorithms can be adopted directly for audio. At the feature extraction stage, spectrograms store most of the time-frequency information in their texture, which is of vital importance for representing diverse audio. Many descriptors can exploit this texture information to some extent, including Local Binary Patterns (LBP), Gabor filters, and Local Phase Quantization. After feature extraction, classification of the extracted features directly determines the performance of music genre recognition; common choices include Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Music Classifier Systems (MCS) with different fusion strategies (Fu et al., 2011; Wang and Wu, 2018). Even though feature representations and classification algorithms for music collections seem to be maturing, it is difficult for these traditional hand-crafted methods to design appropriate features for a specific task automatically. Therefore, it is far more beneficial to adopt a data-driven method than to design hand-crafted features.

Over the last decade, we have witnessed a surge of Convolutional Neural Network (CNN) architectures that have achieved satisfying performance in fields such as image recognition (Wang et al., 2017a; Wu et al., 2019) and natural language processing. Music genre classification has also been inspired by the remarkable success of CNNs. It is well known that CNNs can extract rich information from images thanks to their hierarchical structure (Wu et al., 2018): low-level features, such as the underlying texture, are composed into high-level semantic information through the layers of the network. Similar to images, music also has a hierarchical structure, which inspires us to develop an appropriate CNN model for music classification. For instance, pitch and loudness combine over time to form chords, melodies, and rhythms, and all of these elements together form the whole piece of music, layer by layer. Furthermore, it has been verified that CNNs are very sensitive to the textural information of images (Hafemann et al., 2014). This ability helps music classification exploit the abundant texture information contained in spectrograms of music signals.

Up to now, most CNN-based music classification models have been constructed by directly applying mature architectures to this problem. For example, Choi et al. (Choi et al., 2017) introduced a musical transfer learning system in which a CNN model trained on a large music dataset (Bertin-Mahieux et al., 2011) serves as a feature extractor with an SVM classifier stacked on top. Jakubik (Jakubik, 2017) introduced two Recurrent Neural Network (RNN) architectures with different gating mechanisms, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), and reported accuracies of 89% and 92% respectively on GTZAN, showing their potential for music content analysis.

Even though these architectures have achieved excellent performance in the music domain, the results are not nearly as convincing as in the visual realm. Most CNN-based models feed visual spectrogram representations into these networks without any modification. Because traditional CNNs are constructed for image processing tasks, they cannot achieve good performance without accounting for the differences between spectrograms and natural images. Motivated by these problems, the two main motivations of our paper are as follows:

  • Although genre cues are positioned at different levels and time scales of the musical hierarchy, previous deep-learning-based works predict the genre from a single time scale and frequency level, in analogy to image classification. However, sound events accumulate over frequency and time, so each genre is sensitive to different time scales and feature levels. It is therefore necessary to design a specific CNN structure that can comprehensively handle audio features at multiple scales.

  • Previous network structures for music genre classification mainly focus on abstracting high-level semantic features layer by layer. This leads to a massive loss of lower-level features, which carry a large amount of information that is critical for the final decision. In fact, low-level features tend to contribute more to improving genre classification performance (Choi et al., 2017). Therefore, constructing a CNN structure that maximally abstracts high-level information while simultaneously preserving the lower-level features, tailored to music classification, is of vital importance but challenging.

Based on the analysis above, the direct way to address these problems is to design a CNN model that makes full use of both the high-level semantic information and the low-level features of music. In this paper, we survey the problems in the field of music classification and propose a novel architecture named Bottom-up Broadcast Neural Network (BBNN), which adopts a relatively wide and shallow structure. The main idea of BBNN is to develop an effective block and a connection pattern between blocks that fully exploit and preserve low-level information up to the higher layers, so that the low-level information of the spectrogram can participate in the decision-making layer throughout the network, which is very important for music classification. To this end, BBNN is equipped with a novel Broadcast Module (BM), which consists of Inception blocks connected by dense connectivity; the architecture of the BM is shown in Figure 1(c). Because the Inception block perceives feature maps at different scales, it can extract the time-frequency information embedded in the audio signal at multiple scales simultaneously. Moreover, BBNN densely connects these basic blocks and forwards the low-level information to the decision-making layers, ensuring that it is preserved as much as possible. Most deep CNN (DCNN) models have to adopt various data-augmentation pre-processing steps (Salamon and Bello, 2017) to enlarge the training set. Compared with these traditional DCNN models, the proposed BBNN has few parameters to learn, so a smaller dataset without any data augmentation, such as Ballroom, is sufficient for training.

Figure 1: Comparison of prior network structures (a, b) and the proposed module (c). (a) Dense connectivity. (b) Inception block. (c) Proposed Broadcast Module.

2 Proposed Design And Approach

2.1 Broadcast Module

It is widely accepted that each genre has different sensitivity to different frequency bands and time intervals. Inspired by (Szegedy et al., 2015), we combine convolutions with different kernel sizes to form an Inception block (Figure 1(b)). Inception blocks are stacked on top of each other as the basic extraction units to sufficiently learn features from multiple receptive fields, which decreases the network's susceptibility to frequency shifts in the spectrogram. To further strengthen feature propagation in the BM, we use dense connection paths to connect all Inception blocks, which transmit the extracted feature maps bottom-up to all subsequent blocks. One main benefit of the BM architecture is that it maximally transmits and preserves all extracted feature maps for the higher layers, so that the decision layers make predictions based on all feature maps in the network. Another practically useful aspect of the BM design is that it aligns with the intuition that audio information should be perceived at various time-frequency scales simultaneously.

As shown in Figure 2, the BM consists of identical Inception blocks connected to each other by dense connectivity, which allows each block to receive input directly from all of its preceding blocks. We denote the feature maps output by the shallow layers, $x_0$, as the input of the BM, and $L$ as the number of Inception blocks. In the music genre recognition task we fix $L = 3$ (Inception blocks (a)–(c) in Table 1). Thus, the input of the $\ell$-th block, $x_\ell$, can be represented as:

$$x_\ell = H_\ell\left([x_0, x_1, \ldots, x_{\ell-1}]\right), \qquad (1)$$

where $[x_0, x_1, \ldots, x_{\ell-1}]$ refers to the concatenation of the feature maps produced by blocks $0, \ldots, \ell-1$, and $H_\ell(\cdot)$ is a composite function of all operations in the Inception block. In each Inception block, convolutions with $1\times1$, $3\times3$, and $5\times5$ filters are adopted, and $1\times1$ convolutions are utilized to compute reductions before the larger convolutions. Before each convolution, BN and rectified linear activation are applied. An Inception block consists of layers of the above types stacked upon each other, with occasional max-pooling layers of stride 2 to halve the resolution of the grid. The use of the Inception block is based on (Szegedy et al., 2015), although our implementation differs in that we employ an extra BN layer before each convolution, which significantly enhances the generalization ability of the network even when trained on a small-scale dataset. As shown in Table 1, the growth rate of the BM is 128, so block $\ell$ receives $k_0 + 128\times(\ell-1)$ input feature maps, where $k_0$ is the number of channels in the input $x_0$. The exact BM configurations used in the experiments are given in Table 1; for illustration purposes, the structure of each Inception block is divided into a top part and a bottom part, listed separately.
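To make this concrete, the following is a minimal Keras sketch of how such a Broadcast Module could be implemented: a multi-branch Inception-style block with the BN-ReLU-Conv ordering used throughout the paper, wrapped by dense concatenation-based connectivity as in Eq. (1). The per-branch channel counts and the pooling-branch layout are illustrative assumptions; only the growth rate of 128, the number of blocks, the BN-ReLU-Conv ordering, and the dense connectivity are taken from the text and Table 1.

```python
# Minimal sketch of the Broadcast Module (BM) in Keras (TensorFlow backend).
# Branch channel counts are illustrative assumptions; the BN-ReLU-Conv ordering,
# the multi-scale branches, the growth rate of 128, and the dense connectivity
# follow the description in the text and Table 1.
from tensorflow.keras import layers

def bn_relu_conv(x, filters, kernel_size):
    """BN -> ReLU -> Conv, the composite operation attached to every convolution."""
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.Conv2D(filters, kernel_size, padding="same")(x)

def inception_block(x, growth_rate=128):
    """Multi-scale block: 1x1, 3x3 and 5x5 branches plus a pooling branch, with the
    larger convolutions preceded by 1x1 reductions; the concatenated branches emit
    `growth_rate` feature maps in total."""
    b1 = bn_relu_conv(x, growth_rate // 4, (1, 1))
    b2 = bn_relu_conv(bn_relu_conv(x, growth_rate // 4, (1, 1)), growth_rate // 4, (3, 3))
    b3 = bn_relu_conv(bn_relu_conv(x, growth_rate // 4, (1, 1)), growth_rate // 4, (5, 5))
    b4 = bn_relu_conv(layers.MaxPooling2D((3, 3), strides=1, padding="same")(x),
                      growth_rate // 4, (1, 1))
    return layers.Concatenate()([b1, b2, b3, b4])

def broadcast_module(x0, num_blocks=3, growth_rate=128):
    """Dense connectivity: block l receives [x0, x1, ..., x_{l-1}] as in Eq. (1),
    and the module returns the concatenation of all blocks' outputs."""
    features = [x0]
    for _ in range(num_blocks):
        block_in = features[0] if len(features) == 1 else layers.Concatenate()(features)
        features.append(inception_block(block_in, growth_rate))
    return layers.Concatenate()(features)
```

Because every block's output is concatenated into the module's final feature map, the decision layers see the shallow features alongside the outputs of all three Inception blocks, which is the bottom-up broadcast behaviour described above.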

2.2 Network Structure

As shown in Figure 2, the BBNN comprises 9 layers when counting only layers with parameters (or 12 if pooling layers are also counted). Each layer implements a non-linear transformation such as convolution (Conv), Softmax, Batch Normalization (BN) (Ioffe and Szegedy, 2015), or a pooling operation. Inspired by (Ioffe and Szegedy, 2015), we apply the BN transform immediately after each convolution operation, followed by rectified linear activation (ReLU). A main benefit of BN is that it regularizes the model and reduces the need for Dropout.

All layers of the proposed network can be grouped into four parts playing different roles: shallow feature extraction layers (SL), the BM, transition layers (TL), and decision layers (DL). The whole model aims to learn all parameters of a composite function that maps the input $x$ (the mel-spectrogram) to the output genre $y$:

$$y = F_{DL}\left(F_{TL}\left(F_{BM}\left(F_{SL}(x)\right)\right)\right), \qquad (2)$$

where the subscript of each $F$ indicates the composite function of the corresponding part of the network. Specifically, the shallow layers (the ones close to the input) consist of a convolution, a BN, and a max pooling with stride 1. A relatively small receptive field is used to extract local frequency information over a short time span. After activating the local features with BN and ReLU, we add a max-pooling operation: when recognizing a music genre, humans pay more attention to the salient tempo within a short time, and the max-pooling layer can filter out the dominant frequencies in short time intervals of the mel-spectrogram while also giving the model some capacity for translation invariance. The extracted local information is then transmitted into every block of the BM and fused to gather evidence for contextual "time-frequency signatures" that are indicative of different musical genres. The structural details of the BM were given in Section 2.1.

Down-sampling is an essential part of convolutional networks. After extracting hierarchical features with the BM, we apply several down-sampling layers to reduce the size of the feature maps and the number of channels, which are significantly increased by the concatenation operations used in the BM. These layers between the BM and the decision layers are referred to as transition layers; they perform a BN, a ReLU activation, a convolution, and an average pooling with stride 2.

At the final decision stage of BBNN, instead of stacking fully connected layers on the feature maps (Jakubik, 2017), we use a global average pooling layer (Lin et al., 2013) to take the average of each feature map. This makes the correspondence between feature maps and genres easier to interpret and is less prone to overfitting than traditional fully connected layers. The resulting vector of the global average pooling layer is fed into a softmax log-loss function, which produces a distribution over the genre labels (blues, classical, etc.).
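Under the same caveats, the four parts described above (shallow layers, BM, transition layers, decision layers) could be assembled roughly as follows, reusing the bn_relu_conv and broadcast_module helpers from the previous sketch. The input shape, the 32-filter 3×3 shallow convolution, the pooling sizes, and the 32-filter 1×1 transition convolution are assumptions chosen to be consistent with the description and the parameter counts in Table 1, not values stated explicitly by the authors.

```python
# Rough end-to-end assembly of BBNN: shallow layers -> BM -> transition -> decision.
# Kernel and pooling sizes are assumptions; the 32-filter 3x3 shallow convolution and
# the 32-filter 1x1 transition convolution are consistent with the parameter counts
# in Table 1 (320 and 13,344) for a one-channel mel-spectrogram input.
from tensorflow.keras import layers, models

def build_bbnn(input_shape, num_genres):
    inputs = layers.Input(shape=input_shape)          # mel-spectrogram, e.g. (128, frames, 1)

    # Shallow feature extraction: small receptive field, then max pooling with stride 1
    x = layers.Conv2D(32, (3, 3), padding="same")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.MaxPooling2D((2, 2), strides=1, padding="same")(x)

    # Broadcast Module: three densely connected Inception blocks (growth rate 128)
    x = broadcast_module(x, num_blocks=3, growth_rate=128)

    # Transition layers: BN-ReLU-Conv (1x1) followed by average pooling with stride 2
    x = bn_relu_conv(x, 32, (1, 1))
    x = layers.AveragePooling2D((2, 2), strides=2)(x)

    # Decision layers: global average pooling and a softmax over the genre labels
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_genres, activation="softmax")(x)
    return models.Model(inputs, outputs)
```

Calling model.summary() on such a sketch gives a quick way to compare its parameter count against the roughly 180K parameters reported in Table 1.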

The BBNN is designed with full consideration of computational efficiency and practicality. The configuration of BBNN is described in Table 1, which lists the specific architectural parameters for a mel-spectrogram input computed from a 30 s music clip. In all convolution layers, we pad zeros on each side of the input to keep the feature-map size fixed. As seen from Table 1, the trained model has a tiny size of only 2.4M, so it can be deployed on individual devices, including those with limited computational resources.

Type  Layers                 Output Size  Filter Size/Stride (Number)  Params
SL    Convolution            –            –                            320
      Max Pool               –            –                            –
BM    Inception (a), top     –            –                            3,168
      Inception (a), bottom  –            –                            35,936
      Inception (b), top     –            –                            15,456
      Inception (b), bottom  –            –                            40,032
      Inception (c), top     –            –                            27,744
      Inception (c), bottom  –            –                            44,128
TL    Convolution            –            –                            13,344
      Max Pool               –            –                            –
DL    Global Average Pool    –            –                            –
      Softmax                –            –                            330
Total Params                                                           180,458
Table 1: The configuration of BBNN. Note that each convolution layer shown in the table corresponds to the sequence BN-ReLU-Conv.
Figure 2: The corresponding network architecture of BBNN

3 Experiments

3.1 Datasets

GTZAN. This dataset has been widely used in studies aimed at music genre classification. It was collected by Tzanetakis (Tzanetakis and Cook, 2002). The genre labels and the number of tracks per genre are given in Table 2.

Ballroom. This dataset (Cano et al., 2006) consists of tracks with clear and constant rhythmic patterns, which makes it suitable for recognition tasks. The specific genres and the number of tracks per genre are listed in Table 2.

Extended Ballroom. This dataset (Marchand and Peeters, 2016a) was proposed in 2016 by Marchand and extends the original Ballroom dataset. Compared to the original, the extended version contains six times more tracks with better audio quality. The genre class distribution of the dataset is shown in Table 2. The class imbalance of this dataset poses a substantial challenge for genre classification.

GTZAN Ballroom Extended Ballroom
Genre Track Genre Track Genre Track
Classic 100 Cha Cha 111 Cha Cha 455
Jazz 100 Jive 60 Jive 350
Blues 100 Quickstep 82 Quickstep 497
Metal 100 Rumba 98 Rumba 470
Pop 100 Samba 86 Samba 468
Rock 100 Tango 86 Tango 464
Country 100 Viennese Waltz 65 Viennese Waltz 252
Disco 100 Slow Waltz 110 Waltz 529
Hiphop 100 Foxtrot 507
Reggae 100 Pasodoble 53
Salsa 47
Slow Walz 65
Wcswing 23
Total 1000 Total 698 Total 4180
Table 2: Datasets Description

3.2 Preprocessing

In this work, the mel-spectrogram is used as input to the proposed network. Specifically, we use Librosa (McFee et al., 2015) to extract mel-spectrograms with 128 Mel filters (bands) covering the audible frequency range (0-22050 Hz), with a frame length of 2048 and a hop size of 1024, yielding one 128-band mel-spectrogram per clip.
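A minimal sketch of this preprocessing step is given below, assuming audio resampled to 22050 Hz and a log-amplitude (dB) scaling of the mel-spectrogram; the paper itself specifies only the number of Mel filters, the frame length, and the hop size.

```python
# Mel-spectrogram extraction with librosa: 128 bands, frame length 2048, hop size 1024.
# The dB scaling and the added channel axis are assumptions for feeding a 2-D CNN.
import numpy as np
import librosa

def melspectrogram(path, sr=22050, n_mels=128, n_fft=2048, hop_length=1024):
    y, sr = librosa.load(path, sr=sr)                        # mono signal, 0-22050 Hz band
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)            # log-amplitude representation
    return mel_db[..., np.newaxis]                           # shape (n_mels, frames, 1)
```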

3.3 Training and other details

All files in each dataset are transformed to mel-spectrograms by the preprocessing procedure presented in Section 3.2, and the resulting mel-spectrograms are input to the BBNN. All models are trained to minimize the categorical cross-entropy between the predictions and the ground-truth genre labels using the ADAM optimizer (Kingma and Ba, 2014). For all three datasets we use a batch size of 8 for 100 epochs. The initial learning rate is set to 0.01 and automatically decreased by a factor of 0.5 when the loss has stopped improving for 3 epochs. In addition, we adopt an early-stopping mechanism: training stops when the monitored quantity has stopped improving, even before reaching 100 epochs. Figure 3 shows the training and validation loss curves of the BBNN on the GTZAN, Ballroom, and Extended Ballroom datasets. BBNN converges to a low loss on both the training and validation sets. We further analyze BBNN's performance on the test set of each dataset in more detail below.
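This training schedule maps directly onto standard Keras components, as sketched below; the monitored quantity, the early-stopping patience, and the placeholder arrays (train_x, train_y, val_x, val_y) are assumptions beyond what the text states, and build_bbnn refers to the sketch in Section 2.2.

```python
# Training setup: ADAM, categorical cross-entropy, initial learning rate 0.01 halved
# when the monitored loss stops improving for 3 epochs, early stopping, batch size 8,
# at most 100 epochs. train_x/train_y/val_x/val_y are placeholder arrays.
from tensorflow.keras import optimizers, callbacks

model = build_bbnn(input_shape=train_x.shape[1:], num_genres=train_y.shape[1])
model.compile(optimizer=optimizers.Adam(learning_rate=0.01),
              loss="categorical_crossentropy", metrics=["accuracy"])

cbs = [
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3),
    callbacks.EarlyStopping(monitor="val_loss", patience=10,     # patience is an assumption
                            restore_best_weights=True),
]
model.fit(train_x, train_y, validation_data=(val_x, val_y),
          batch_size=8, epochs=100, callbacks=cbs)
```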

Figure 3: Loss curves of training and validation on the different datasets: (a) GTZAN, (b) Ballroom, (c) Extended Ballroom

Metric. Following previous works (e.g., (Choi et al., 2017)), we perform 10-fold cross-validation to evaluate classification accuracy across all experiments. The training, testing, and validation sets are randomly partitioned in the proportion 8/1/1. The overall classification accuracy is computed as the average over the 10 folds.
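One way to realize this 8/1/1 per-fold partition is to take each test fold from a stratified 10-fold split and carve one further fold out of the remaining data for validation, as sketched below; the paper states only the proportions, so this concrete construction is an assumption.

```python
# Stratified 10-fold cross-validation with a per-fold train/validation/test split of 8/1/1.
# `labels` is assumed to be a NumPy array of integer genre labels.
from sklearn.model_selection import StratifiedKFold, train_test_split

def cv_splits(features, labels, n_folds=10, seed=0):
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train_val_idx, test_idx in skf.split(features, labels):   # 9 parts train+val, 1 part test
        train_idx, val_idx = train_test_split(
            train_val_idx, test_size=1 / 9,                       # one of the nine parts -> validation
            stratify=labels[train_val_idx], random_state=seed)
        yield train_idx, val_idx, test_idx
```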

Experiment Platform. Our code is written in Python, based on Keras (Chollet et al., 2015) and the publicly available preprocessing toolbox Librosa (McFee et al., 2015). All experiments were run on an NVIDIA TITAN Xp GPU with 12 GB of memory.

3.4 Classification results on GTZAN

In Table 3, we compare BBNN with several recent strong models, including six different deep learning models and one traditional method based on a hand-crafted feature descriptor. AuDeep (Freitag et al., 2017) is based on a recurrent sequence-to-sequence autoencoder that takes full account of the temporal dynamics of audio data and yields an accuracy of 85.4%. The transfer learning framework (Choi et al., 2017), transplanted to music genre classification from the visual domain, obtains an accuracy of 89.8%. The Hybrid model, CVAF, and MFMCNN rely on different feature-fusion strategies to improve classification accuracy and reach 88.3%, 90.9%, and 91.0%, respectively. Multi-DNN yields a slightly lower accuracy than BBNN by cascading multiple DNN networks, which consumes more resources and depends heavily on an additional database to train the model.

Methods Preprocessing Accuracy(%)
AuDeep (Freitag et al., 2017) mel-spectrogram 85.4
NNet2 (Zhang et al., 2016) STFT 87.4
Hybrid model (Karunakaran and Arya, 2018) MFCC, SSD, etc. 88.3
Transfer learning (Choi et al., 2017) MFCC 89.8
CVAF (Nanni et al., 2017) mel-spectrogram, SSD, etc. 90.9
MFMCNN (Senac et al., 2017) STFT, ZCR, etc. 91.0
Multi-DNN (Dai et al., 2015) MFCC 93.4
Ours mel-spectrogram 93.9
Table 3: Classification accuracy (%) on the GTZAN dataset compared across recently proposed methods (the best result is marked in bold). The results of all methods are those reported in the original papers or related literature.

Figure 4 (left) shows the confusion matrix of the 10-fold results predicted by BBNN on GTZAN. The rows and columns of the matrix represent the ground truths and the predicted labels, respectively; the diagonal entries count correct predictions per genre, and the off-diagonal entries count confusions between genres. The confusion matrix shows the discrimination between ground-truth and predicted genre labels for each class, providing a better view of the overall classification performance of the BBNN model. Table 4 lists the precision, recall rate, and F-score of each genre corresponding to the confusion matrix. From Figure 4 (left) we can see that the proposed model distinguishes most genres very well, but it often confuses Rock with Country and Metal. One explanation is that these genres share similar frequency information, which makes them inherently harder to separate; expert advice would probably be required to further improve the classification accuracy on the Pop and Rock genres. This is discussed further in Section 4. Overall, as shown in Table 4, most genres are correctly classified, and the recall rate and precision of the Jazz genre even reach 99%.
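The per-genre figures in Table 4 follow directly from such an aggregated confusion matrix; a short sketch of how these quantities can be computed from the pooled 10-fold predictions with scikit-learn is shown below (the authors' own evaluation code is not given, and the variable names are illustrative).

```python
# Confusion matrix and per-genre precision / recall / F-score from the pooled
# predictions of all ten folds (y_true_all_folds, y_pred_all_folds are placeholders).
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

cm = confusion_matrix(y_true_all_folds, y_pred_all_folds)            # rows: ground truth
precision, recall, f_score, _ = precision_recall_fscore_support(
    y_true_all_folds, y_pred_all_folds, average=None)                # one value per genre
macro_p, macro_r, macro_f, _ = precision_recall_fscore_support(
    y_true_all_folds, y_pred_all_folds, average="macro")             # averages over genres
```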

Genre Precision (%) Recall Rate (%) F-score (%)
Blues 90.8 97.0 93.8
Classical 98.9 97.9 98.4
Country 89.8 97.7 93.6
Disco 98.2 91.8 94.9
Hip-hop 93.0 94.9 93.9
Jazz 99.0 99.0 99.0
Metal 86.1 98.7 92.0
Pop 94.3 94.3 94.3
Reggae 94.2 88.1 91.1
Rock 93.3 80.7 86.6
Average 93.7 94.0 93.7
Table 4: Precision, recall rate, and F-score (%) of each genre obtained on the GTZAN dataset

3.5 Classification results on Ballroom

Table 5 shows the classification accuracies obtained on the Ballroom dataset by the BBNN framework and five recent methods, including three deep learning frameworks (MMCNN, MCLNN, and Pons et al. (Pons and Serra, 2017)) and two traditional methods based on different hand-crafted feature representations. The work presented by Marchand et al. (Marchand and Peeters, 2014) is based on the Modulation Scale Spectrum representation of audio (MSS) and uses a modified KNN classifier to perform the classification, achieving an accuracy of 93.1%. Building on MSS, Marchand et al. then proposed a Modulation Scale Spectrum with Auditory Statistics representation (SOTA) and used an SVM as the classifier, which boosted recognition accuracy by about 3%. The MMCNN architecture has two layers (CNN feed-forward) and uses two different filter shapes in the CNN layers (1-by-60 and 32-by-1); it has more parameters (196,816) than BBNN yet yields a relatively lower accuracy. To preserve the inter-frame relations of a temporal signal, Medhat et al. designed a masked conditional neural network (MCLNN), which obtains an accuracy of 90.4%. Pons et al. focus on the temporal features of audio and use wider convolution kernels that span long time durations; they use fewer parameters (92,808) than BBNN but obtain a lower accuracy than SOTA and BBNN. On this dataset, the proposed BBNN network (accuracy of 96.7%) outperforms all compared models.

Methods Preprocessing Accuracy(%)
MMCNN (Pons et al., 2016) mel-spectrogram 87.6
MCLNN (Medhat et al., 2017) mel-spectrogram 90.4
Pons et al. (Pons and Serra, 2017) mel-spectrogram 92.1
Marchand et al. (Marchand and Peeters, 2014) MSS 93.1
SOTA (Marchand and Peeters, 2016b) MASSS 96.0
Ours mel-spectrogram 96.7
Table 5: Classification accuracy (%) on the Ballroom dataset compared across recently proposed methods (the best result is marked in bold). The results of all methods are those reported in the original papers or related literature. Note that SOTA is reported as a recall rate.

Figure 4 (middle) gives more detailed information about BBNN's performance in the form of a confusion matrix, and the corresponding precision and recall rate for each genre are listed in Table 6. The confusion matrix clearly exhibits the strong ability of BBNN to recognize most genres, for example Cha Cha and Quickstep. Confusions occur relatively easily between the Rumba and Slow Waltz genres, because their genre boundaries are not as clear-cut as those of Cha Cha or Quickstep; these confusions lead to lower precision and recall rates than for the other genres, as shown in Table 6. It is notable that the BBNN model achieves good precision and recall for almost all genres, and the average recall rate is 0.9% higher than SOTA.

Genre Precision (%) Recall Rate (%) F-score (%)
Cha Cha 100 96.2 98.0
Jive 97.9 94.1 96.0
Quickstep 96.4 100 98.1
Rumba 97.0 94.2 95.6
Samba 95.0 97.4 96.2
Tango 98.8 98.8 98.8
Viennese Waltz 98.5 94.3 96.4
Slow Waltz 94.3 100 97.0
Average 97.2 96.9 97.0
Table 6: Precision, recall rate, and F-score (%) of each genre obtained on the Ballroom dataset

3.6 Classification results on Extended Ballroom

Table 7 compares BBNN with recent state-of-the-art music genre classification methods in terms of accuracy. BBNN achieves an accuracy of 97.2% on this dataset, surpassing both CNN-based architectures (Choi et al., 2017; Jeong et al., 2017; Pons and Serra, 2018) and a hand-crafted feature approach (Marchand and Peeters, 2016b). The audio representations listed (MFCC, mel-spectrogram, and MASSS) denote the preprocessing used by the corresponding methods. The first work (Choi et al., 2017), proposed by Choi et al., achieved an accuracy of 86.7% using a transfer learning framework: a VGG-style network was designed and trained on a source dataset of 244,224 music clips and then applied to the Extended Ballroom dataset as a feature extractor. DLR (Jeong et al., 2017) is also a transfer learning framework similar to (Choi et al., 2017), but DLR instead learns a rhythmic representation on a source task which is then used as input for the musical genre recognition task. Compared with these transfer learning frameworks, BBNN yields higher accuracy and does not require pre-training on a larger dataset. In RWCNN, a randomly weighted CNN architecture is used to extract features for a classifier (e.g., SVM or ELM); its accuracy is lower than BBNN's. These comparative results strongly validate the effectiveness of BBNN.

Methods Preprocessing Accuracy(%)
Transfer learning (Choi et al., 2017) MFCC 86.7
RWCNN (Pons and Serra, 2018) MFCC 89.8
DLR (Jeong et al., 2017) mel-spectrogram 93.7
SOTA (Marchand and Peeters, 2016b) MASSS 94.9
Ours mel-spectrogram 97.2
Table 7: Classification accuracy (%) on the Extended Ballroom dataset compared across recently proposed methods (the best result is marked in bold). Note that SOTA is reported as a recall rate. The results of all methods are those reported in the original papers or related literature.

Figure 4 (right) presents the confusion matrix of the 10-fold results produced by the BBNN model, corresponding to the accuracy in Table 7. Rows represent the dataset ground truths and columns the labels predicted by BBNN. Analyzing the confusion matrix, the most severe confusions arise between Rumba and Slow Waltz on one hand and Waltz on the other; these genres are difficult to distinguish since they contain similar patterns (Lykartsis and Lerch, 2015).

Table 8 reports the precision and recall rate of the BBNN model for each genre. Since Slow Waltz is prone to being misclassified as Waltz, it has relatively low precision and recall. Because the training samples for the Wcswing genre are very few, accounting for only about 0.5% of the total, BBNN can learn only severely limited discriminative information for it; as a result, the model's ability to generalize to Wcswing is relatively poor and its precision and recall rate are low. The recognition ability of BBNN remains robust on the other under-represented classes, such as Pasodoble, Salsa, and Slow Waltz, which have only slightly more samples than Wcswing. Overall, the recognition precision of most genres exceeds 90%, with Quickstep and Tango as high as 99%. These results indicate that BBNN maintains good classification performance even on a highly unbalanced dataset.

Genre Precision (%) Recall Rate (%) F-score (%)
Cha Cha 98.1 98.5 98.3
Foxtrot 98.8 99.6 99.2
Jive 98.5 99.4 98.9
Pasodoble 96.0 94.2 95.1
Quickstep 99.6 99.0 99.3
Rumba 96.7 94.5 95.6
Salsa 94.9 91.8 93.3
Samba 97.5 98.7 98.1
Slow Walz 80.7 71.1 75.6
Tango 99.0 95.3 97.1
Viennese Walz 96.8 97.7 97.3
Walz 93.1 98.1 95.5
Wcswing 78.5 57.8 66.6
Average 94.5 92.0 93.1
Table 8: Precision, recall rate, and F-score (%) of each genre obtained on the Extended Ballroom dataset
Figure 4: Accuracies of training, validating, and testing of each fold on the different datasets

4 Conclusions and future work

In this article, we presented a specially designed network for accurately recognizing music genres. The proposed model aims to take full advantage of the low-level information in the mel-spectrogram when making the classification decision. We have shown the effectiveness of our model by comparing it with state-of-the-art methods, including hand-crafted feature approaches and deep learning models with different architectures, on several benchmark datasets. Deep learning approaches usually rely on a large amount of data to train a model; in practice, the number of annotated music recordings per genre class is often limited (Fu et al., 2011), so besides accuracy, another major challenge is to train a robust CNN model from few labelled examples. In this work, three common datasets of different scales (GTZAN, Ballroom, and Extended Ballroom) are employed to validate the proposed network structure, in particular the Ballroom dataset with only 698 tracks. The experimental results demonstrate that BBNN can overcome this challenge and achieve satisfactory accuracy.

In future work, we will further improve the proposed model in the following ways. First, we will explore acoustic features (e.g., SSD, RH) combined through fusion strategies (Wang et al., 2017b, 2015, 2016) as additional inputs to the network. Second, at the decision-making stage, we will adopt a new distance metric (Wang et al., 2018) to compute the similarity between genres.

References

  • Bertin-Mahieux et al. (2011) Bertin-Mahieux, T., Ellis, D.P., Whitman, B., Lamere, P., 2011. The Million Song Dataset, in: ISMIR, p. 10.
  • Cano et al. (2006) Cano, P., Gómez Gutiérrez, E., Gouyon, F., Herrera Boyer, P., Koppenberger, M., Ong, B.S., Serra, X., Streich, S., Wack, N., 2006. ISMIR 2004 audio description contest.
  • Choi et al. (2017) Choi, K., Fazekas, G., Sandler, M., Cho, K., 2017. Transfer learning for music classification and regression tasks. arXiv preprint arXiv:1703.09179 .
  • Chollet et al. (2015) Chollet, F., et al., 2015. Keras.
  • Dai et al. (2015) Dai, J., Liu, W., Ni, C., Dong, L., Yang, H., 2015. ”multilingual” deep neural network for music genre classification, in: Sixteenth Annual Conference of the International Speech Communication Association.
  • Freitag et al. (2017) Freitag, M., Amiriparian, S., Pugachevskiy, S., Cummins, N., Schuller, B., 2017. auDeep: Unsupervised learning of representations from audio with deep recurrent neural networks. The Journal of Machine Learning Research 18, 6340–6344.
  • Fu et al. (2011) Fu, Z., Lu, G., Ting, K.M., Zhang, D., 2011. A survey of audio-based music classification and annotation. IEEE transactions on multimedia 13, 303–319.
  • Hafemann et al. (2014) Hafemann, L.G., Oliveira, L.S., Cavalin, P., 2014. Forest species recognition using deep convolutional neural networks, in: Pattern Recognition (ICPR), 2014 22nd International Conference on, IEEE. pp. 1103–1107.
  • Ioffe and Szegedy (2015) Ioffe, S., Szegedy, C., 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 .
  • Jakubik (2017) Jakubik, J., 2017. Evaluation of gated recurrent neural networks in music classification tasks, in: International Conference on Information Systems Architecture and Technology, Springer. pp. 27–37.
  • Jeong et al. (2017) Jeong, Y., Choi, K., Jeong, H., 2017. DLR: Toward a deep learned rhythmic representation for music content analysis. arXiv preprint arXiv:1712.05119.
  • Karunakaran and Arya (2018) Karunakaran, N., Arya, A., 2018. A scalable hybrid classifier for music genre classification using machine learning concepts and spark, in: 2018 International Conference on Intelligent Autonomous Systems (ICoIAS), IEEE. pp. 128–135.
  • Kingma and Ba (2014) Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .
  • Lin et al. (2013) Lin, M., Chen, Q., Yan, S., 2013. Network in network. arXiv preprint arXiv:1312.4400 .
  • Lykartsis and Lerch (2015) Lykartsis, A., Lerch, A., 2015. Beat histogram features for rhythm-based musical genre classification using multiple novelty functions, in: Proceedings of the 16th ISMIR Conference, pp. 434–440.
  • Marchand and Peeters (2014) Marchand, U., Peeters, G., 2014. The modulation scale spectrum and its application to rhythm-content analysis, in: DAFX (Digital Audio Effects).
  • Marchand and Peeters (2016a) Marchand, U., Peeters, G., 2016a. The extended ballroom dataset.
  • Marchand and Peeters (2016b) Marchand, U., Peeters, G., 2016b. Scale and shift invariant time/frequency representation using auditory statistics: Application to rhythm description, in: Machine Learning for Signal Processing (MLSP), 2016 IEEE 26th International Workshop on, IEEE. pp. 1–6.
  • McFee et al. (2015) McFee, B., Raffel, C., Liang, D., Ellis, D.P., McVicar, M., Battenberg, E., Nieto, O., 2015. librosa: Audio and music signal analysis in python, in: Proceedings of the 14th python in science conference, pp. 18–25.
  • Medhat et al. (2017) Medhat, F., Chesmore, D., Robinson, J., 2017. Automatic classification of music genre using masked conditional neural networks, in: Data Mining (ICDM), 2017 IEEE International Conference on, IEEE. pp. 979–984.
  • Nanni et al. (2017) Nanni, L., Costa, Y.M., Lucio, D.R., Silla Jr, C.N., Brahnam, S., 2017. Combining visual and acoustic features for audio classification tasks. Pattern Recognition Letters 88, 49–56.
  • Pons et al. (2016) Pons, J., Lidy, T., Serra, X., 2016. Experimenting with musically motivated convolutional neural networks, in: Content-Based Multimedia Indexing (CBMI), 2016 14th International Workshop on, IEEE. pp. 1–6.
  • Pons and Serra (2017) Pons, J., Serra, X., 2017. Designing efficient architectures for modeling temporal features with convolutional neural networks, in: Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, IEEE. pp. 2472–2476.
  • Pons and Serra (2018) Pons, J., Serra, X., 2018. Randomly weighted cnns for (music) audio classification. arXiv preprint arXiv:1805.00237 .
  • Salamon and Bello (2017) Salamon, J., Bello, J.P., 2017. Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters 24, 279–283.
  • Senac et al. (2017) Senac, C., Pellegrini, T., Mouret, F., Pinquier, J., 2017. Music feature maps with convolutional neural networks for music genre classification, in: Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing, ACM. p. 19.
  • Szegedy et al. (2015) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A., 2015. Going deeper with convolutions, in: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9.
  • Tzanetakis and Cook (2002) Tzanetakis, G., Cook, P., 2002. Musical genre classification of audio signals. IEEE Transactions on speech and audio processing 10, 293–302.
  • Wang et al. (2017a) Wang, Y., Lin, X., Wu, L., Zhang, W., 2017a. Effective multi-query expansions: Collaborative deep networks for robust landmark retrieval. IEEE Transactions on Image Processing 26, 1393–1404.
  • Wang et al. (2015) Wang, Y., Lin, X., Wu, L., Zhang, W., Zhang, Q., Huang, X., 2015. Robust subspace clustering for multi-view data by exploiting correlation consensus. IEEE Transactions on Image Processing 24, 3939–3949.
  • Wang and Wu (2018) Wang, Y., Wu, L., 2018. Beyond low-rank representations: Orthogonal clustering basis reconstruction with optimized graph structure for multi-view spectral clustering. Neural Networks 103, 1–8.
  • Wang et al. (2018) Wang, Y., Wu, L., Lin, X., Gao, J., 2018. Multiview spectral clustering via structured low-rank matrix factorization. IEEE Transactions on Neural Networks and Learning Systems .
  • Wang et al. (2016) Wang, Y., Zhang, W., Wu, L., Lin, X., Fang, M., Pan, S., 2016. Iterative views agreement: An iterative low-rank based structured optimization method to multi-view spectral clustering, in: International Joint Conference on Artificial Intelligence, pp. 2153–2159.
  • Wang et al. (2017b) Wang, Y., Zhang, W., Wu, L., Lin, X., Zhao, X., 2017b. Unsupervised metric fusion over multiview data by graph random walk-based cross-view diffusion. IEEE transactions on neural networks and learning systems 28, 57–70.
  • Wu et al. (2018) Wu, L., Wang, Y., Li, X., Gao, J., 2018. Deep attention-based spatially recursive networks for fine-grained visual recognition. IEEE Transactions on Cybernetics .
  • Wu et al. (2019) Wu, L., Wang, Y., Shao, L., 2019. Cycle-consistent deep generative hashing for cross-modal retrieval. IEEE Transactions on Image Processing 28, 1602–1612.
  • Zhang et al. (2016) Zhang, W., Lei, W., Xu, X., Xing, X., 2016. Improved music genre classification with convolutional neural networks., in: INTERSPEECH, pp. 3304–3308.