A Deep Bag-of-Features Model for Music Auto-Tagging

08/20/2015 · by Juhan Nam, et al.

Feature learning and deep learning have drawn great attention in recent years as ways of transforming input data into more effective representations using learning algorithms. Such interest has grown in the area of music information retrieval (MIR) as well, particularly in music audio classification tasks such as auto-tagging. In this paper, we present a two-stage learning model to effectively predict multiple labels from music audio. The first stage learns to project local spectral patterns of an audio track onto a high-dimensional sparse space in an unsupervised manner and summarizes the audio track as a bag-of-features. The second stage successively performs unsupervised learning on the bag-of-features in a layer-by-layer manner to initialize a deep neural network and finally fine-tunes the network with the tag labels. Through experiments, we rigorously examine training choices and tuning parameters, and show that the model achieves high performance on Magnatagatune, a popular dataset in music auto-tagging.


I Introduction

In the recent past, music has become ubiquitous as digital data. The scale of music collections readily accessible via online music services has surpassed thirty million tracks (http://press.spotify.com/us/information/, accessed Jan 23, 2015). The types of music content have also diversified as social media services allow people to easily share their own original music, cover songs or other media sources. These significant changes in the music industry have prompted new strategies for delivering music content, for example, searching a large volume of songs with different query methods (e.g., text, humming or an audio example) or recommending a playlist based on user preferences. A successful approach to these needs uses metadata, for example, finding similar songs based on analysis by music experts or on collaborative filtering of user data. However, analysis by experts is costly and limited, given the large scale of available music tracks, and user data are intrinsically biased toward popular songs or artists. As a way of making up for these limitations of metadata, the audio content itself has been exploited, i.e., by training a system to predict high-level information from music audio files. This content-based approach has been actively explored in the area of music information retrieval (MIR). The tasks are usually formed as audio classification, predicting either a single label from a given set of categories (e.g., genre or emotion) or multiple labels describing various aspects of music. The latter is often referred to as music annotation and retrieval, or simply music auto-tagging.

These audio classification tasks are generally implemented in two steps: feature extraction and supervised learning. While the supervised learning step is usually handled by commonly used classifiers such as Gaussian mixture models (GMM) and support vector machines (SVM), the feature extraction step has been extensively studied based on domain knowledge. For example, Tzanetakis and Cook, in their seminal work on music genre classification, presented comprehensive signal processing techniques to extract audio features that represent the timbral texture, rhythmic content and pitch content of music [1]. Specifically, these include low-level spectral summaries (e.g., centroid and roll-off), zero-crossings and mel-frequency cepstral coefficients (MFCC), a wavelet transform-based beat histogram and pitch/chroma histograms. McKinney and Breebaart suggested perceptual audio features based on psychoacoustic models, including estimates of roughness, loudness and sharpness, and auditory filterbank temporal envelopes [2]. Similarly, a number of audio features have been proposed with different choices of time-frequency representations, psychoacoustic models and other signal processing techniques. Some distinct audio features introduced in music classification include octave-based spectral contrast [3], the Daubechies wavelet coefficient histogram [4], and auditory temporal modulations [5].

A common aspect of these audio features is that they are hand-engineered; that is, the individual computation steps that extract the features from audio signals are manually designed based on signal processing and/or acoustic knowledge. Although this hand-engineering approach has been successful to some degree, it has limitations in that, by nature, it may require numerous trials and errors in fine-tuning the computation steps. For this reason, much previous work instead combines existing audio features, for example, by concatenating MFCC and other spectral features [6, 7, 8]. However, the features are usually chosen heuristically, so the combination can be redundant or still insufficient to explain music. Feature selection is one solution for finding an optimal combination, but it poses another challenge [9].

Recently there has been increasing interest in finding feature representations using data-driven learning algorithms, as an alternative to the hand-engineering approach. Inspired by research in computational neuroscience [10, 11], the machine learning community has developed a variety of learning algorithms that discover underlying structures of images or audio, and has utilized them to represent features. This approach makes it possible to overcome the limitations of hand-engineering by learning diverse patterns automatically from data. The learning-based approach is broadly referred to as feature learning or representation learning [12]. In particular, hierarchical representation learning based on deep neural networks (DNN) or convolutional neural networks (CNN), called deep learning, has achieved a remarkable series of successes in challenging machine recognition tasks, such as speech recognition [13] and image classification [14]. Overviews of recent work are given in [12, 15]. The learning-based approach has gained great interest in the MIR community as well. Leveraging advances in machine learning, MIR researchers have investigated better ways of representing audio features and, furthermore, envisioned the approach as a general framework for building hierarchical music feature representations [16]. The efforts have been made most actively in music genre classification and music auto-tagging, where either unsupervised feature learning or deep learning has shown improved performance.

In this paper, we present a two-stage learning model as an extension of our previous work [17]. The first stage learns local features from multiple frames of audio spectra using a sparse restricted Boltzmann machine (RBM), as before. However, we add an onset detection module to select temporally aligned frames in the training data. This is intended to decrease the variation in the input space compared to random sampling, and we show that it helps improve performance under the same conditions. The second stage continues the bottom-up unsupervised learning by applying RBMs (but without sparsity) to the bag-of-features in a layer-by-layer manner. We use the RBM parameters to initialize a DNN and finally fine-tune the network with the labels. We show that this pretraining improves performance, as observed in image classification and speech recognition tasks.

The remainder of this paper is organized as follows. In Section II, we overview related work. In Section III, we describe the bag-of-features model. In Section IV, we introduce the datasets, evaluation metrics and experiment settings. In Section V, we investigate the evaluation results and compare them to those of state-of-the-art algorithms in music auto-tagging. Lastly, we conclude with a summary of the work in Section VI.

II Related Work

In this section, we review previous work that exploited feature learning and deep learning for music classification and music auto-tagging. They can be divided into two groups, depending on whether the learning algorithm is unsupervised or supervised.

One group investigated unsupervised feature learning based on sparse representations, for example, using K-means [18, 17, 19, 20], sparse coding [21, 22, 23, 17, 24] and restricted Boltzmann machines (RBM) [25, 17]. The majority of them focused on capturing local structures of music data over one or multiple audio frames to learn high-dimensional single-layer features. They summarized the locally learned features as a bag-of-features (also called a bag-of-frames, for example, in [26]) and fed them into a separate classifier. The advantage of this single-layer feature learning is that it is quite simple to learn a large set of feature bases, and they generally provide good performance [27]. In addition, it is easy to handle the variable lengths of audio tracks, as song-level features are usually represented by summary statistics of the locally learned features (i.e., temporal pooling). However, this single-layer approach is limited to learning local features only. Some studies worked on dual or multiple layers to capture segment-level features [28, 29]. Although they showed slight improvements by combining the local and segment-level features, learning hierarchical structures of music in an unsupervised way remains highly challenging.

Fig. 1: The proposed deep bag-of-features model for music auto-tagging. The dotted lines indicate that the processing is conducted only in the training phase.

The second group used supervised learning that directly maps audio to labels via multi-layered neural networks. One approach mapped single frames of the spectrogram [30, 31, 32] or a summarized spectrogram [33] to labels via DNNs, where some of them pretrained the networks with deep belief networks [30, 31, 33]. They used the hidden-unit activations of the DNNs as local audio features. While this frame-level audio-to-label mapping is somewhat counter-intuitive, the supervised approach makes the learned features more discriminative for the given task, directly comparable to hand-engineered features such as MFCC. The other approach in this group used CNNs, where the convolution setting can take longer audio frames and the networks directly predict labels [34, 35, 36, 37]. CNNs have become the de facto standard in image classification since the breakthrough in the ImageNet challenge [14]. As such, the CNN-based approach has shown great performance in music auto-tagging [35, 37]. However, in order to achieve high performance with CNNs, the model needs to be trained on a large dataset with a huge number of parameters. Otherwise, CNNs are not necessarily better than the bag-of-features approach [19, 37].

Our approach is based on the bag-of-features from single-layer unsupervised learning but extends it to a deep structure for song-level supervised learning. The idea behind this deep bag-of-features model is to retain the simplicity and flexibility of the bag-of-features approach in unsupervised single-layer feature learning while improving its discriminative power using deep neural networks. Similar models have been suggested using different combinations of algorithms, for example, K-means and multi-layer perceptrons (MLP) in [19, 20]. However, our proposed model performs unsupervised learning through all layers consistently using RBMs.

III Learning Model

Figure 1 illustrates the overall data processing pipeline of our proposed model. In this section, we describe the individual processing blocks in detail.

III-A Preprocessing

Musical signals are characterized well by note onsets and ensuing spectral patterns. We perform several steps of front-end processing to help learning algorithms effectively capture the features.

III-A1 Automatic Gain Control

Musical signals are highly dynamic in amplitude. Inspired by the dynamic-range compression mechanism in the human ear, we control the amplitude as a first step. We adopt time-frequency automatic gain control, which adjusts the levels of sub-band signals separately [38]. Since we already showed its effectiveness in music auto-tagging [17], we use the automatic gain control as a default setting here.

III-A2 Mel-frequency Spectrogram

We use the mel-frequency spectrogram as the primary input representation. It is computed by mapping 513 linear frequency bins from the FFT onto 128 mel-frequency bins. This mapping reduces the input dimensionality enough to take multiple frames as input data while preserving the distinct patterns of the spectrogram.

III-A3 Amplitude Compression

The mel-frequency spectrogram is additionally compressed with a log-scale function, log(1 + C·M), where M is the mel-frequency spectrogram and C controls the degree of compression [39].
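The compression step is small enough to sketch directly (a minimal sketch assuming the log(1 + C·M) form described above, with C = 10 as used in our experiments; the function name is our own):

```python
import numpy as np

def compress(mel_spec, C=10.0):
    """Log-scale amplitude compression: log(1 + C * M).

    Larger C boosts low-energy components relative to peaks,
    reducing the dynamic range of the mel-frequency spectrogram.
    """
    return np.log1p(C * mel_spec)
```

The same function is applied element-wise to the whole mel-frequency spectrogram before feature learning.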

III-A4 Onset Detection Function

The local feature learning stage takes multiple frames as input data so that learning algorithms can capture spectro-temporal patterns, for example, sustained or chirping harmonic overtones, or transient changes over time. We already showed that using multiple frames for feature learning improves performance in music auto-tagging [17]. We further develop this data selection scheme by considering where on the time axis to take the multiple frames. In our previous work, we sampled multiple frames at random positions on the mel-frequency spectrogram without considering the characteristics of musical sounds. Given a single note, this could sample audio frames such that the note onset is located at an arbitrary position within the sampled frames, or such that only the sustained part of a note is taken. This may add unnecessary variation or miss important temporal dependencies from the learning algorithm's point of view. In order to address this problem, we suggest sampling multiple frames under the guidance of note onsets. That is, we compute an onset detection function on a separate path and take a sample of multiple frames at positions where the onset detection function has high values over a short segment. As illustrated in Figure 2, local spectral structures of musical sounds tend to be more distinctive when the onset strength is high. Audio frames sampled this way are likely to be aligned to each other with regard to notes, which may help learning algorithms learn features more effectively. We term this sampling scheme onset-based sampling and evaluate it in our experiments. The onset detection function is computed by mapping the spectrogram onto 40 sub-bands and summing the half-wave rectified spectral flux over the sub-bands.
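The scheme above can be sketched in numpy as follows; the linear sub-band grouping and the top-K block selection are simplifying assumptions for illustration, not the exact implementation:

```python
import numpy as np

def onset_strength(spec, n_subbands=40):
    """Onset detection function: map spectrogram rows onto sub-bands,
    then sum the half-wave rectified spectral flux over sub-bands."""
    n_bins, n_frames = spec.shape
    # Crude linear grouping into sub-bands (the paper only specifies
    # "40 sub-bands", so this mapping is an assumption).
    edges = np.linspace(0, n_bins, n_subbands + 1).astype(int)
    bands = np.stack([spec[edges[i]:edges[i + 1]].sum(axis=0)
                      for i in range(n_subbands)])
    flux = np.maximum(bands[:, 1:] - bands[:, :-1], 0.0)  # half-wave rectify
    return np.concatenate([[0.0], flux.sum(axis=0)])

def sample_blocks(spec, odf, n_blocks, block_len):
    """Take spectral blocks of consecutive frames starting where the
    onset strength is highest (onset-based sampling)."""
    valid = odf[: spec.shape[1] - block_len + 1]
    starts = np.argsort(valid)[::-1][:n_blocks]
    return np.stack([spec[:, s:s + block_len] for s in starts])
```

Blocks sampled this way tend to start at note onsets, aligning the training samples with respect to notes.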

III-B Local Feature Learning and Summarization

This stage first learns feature bases using the sampled data and learning algorithms. Then, it extracts the feature activations in a convolutional manner for each audio track and summarizes them as a bag-of-features using max-pooling and averaging.

III-B1 PCA Whitening

PCA whitening is often used as a preprocessing step to remove pair-wise correlations (i.e., second-order dependence) or to reduce the dimensionality before applying algorithms that capture higher-order dependencies [40]. The PCA whitening matrix is computed by applying PCA to the sampled data and normalizing the output in the PCA space. Note that we locate PCA whitening as part of local feature learning in Figure 1 because the whitening matrix is actually "learned" from the sampled data.
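A minimal numpy sketch of learning the whitening matrix, retaining a fixed fraction of the variance (0.90 in our experiments); the function and variable names are our own:

```python
import numpy as np

def pca_whitening_matrix(X, var_retained=0.90, eps=1e-12):
    """Learn a PCA whitening matrix from sampled data X (n_samples x dim):
    keep the leading principal components explaining `var_retained` of
    the variance and rescale each component to unit variance."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / X.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(eigval)[::-1]               # sort descending
    eigval, eigvec = eigval[order], eigvec[:, order]
    k = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(),
                            var_retained)) + 1     # components to keep
    W = eigvec[:, :k] / np.sqrt(eigval[:k] + eps)  # (dim x k) whitening matrix
    return mean, W
```

A whitened input vector is then `(x - mean) @ W`; the whitened, dimension-reduced data feed the sparse RBM.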

Fig. 2: Onset-based sampling. This data sampling scheme takes multiple frames at positions where the onset strength is high.

III-B2 Sparse Restricted Boltzmann Machine (RBM)

The sparse RBM is the core algorithm that performs local feature learning in the bag-of-features model. In our previous work, we compared K-means, sparse coding and sparse RBMs in terms of music auto-tagging performance [17]. Although there was not much difference, the sparse RBM worked slightly better than the others, and the feed-forward computation of the hidden units in the RBM allows fast prediction in the testing phase. Thus, we focus only on the sparse RBM here and review the algorithm more formally in the following paragraphs.

The sparse RBM is a variation of the RBM, a bipartite undirected graphical model that consists of visible nodes v and hidden nodes h [41]. The visible nodes correspond to input vectors in a training set and the hidden nodes correspond to represented features. The basic form of RBM has binary units for both visible and hidden nodes, termed a binary-binary RBM. The joint probability of v and h is defined by an energy function E(v, h):

P(v, h) = exp(−E(v, h)) / Z    (1)
E(v, h) = −b^T v − c^T h − v^T W h    (2)

where b and c are bias terms, and W is a weight matrix. The normalization factor Z is called the partition function, which is obtained by summing over all possible configurations of v and h. For real-valued data such as spectrograms, Gaussian units are frequently used for the visible nodes. Then the energy function in Equation 2 is modified to:

E(v, h) = (1/2) v^T v − b^T v − c^T h − v^T W h    (3)

where the additional quadratic term, (1/2) v^T v, is associated with the covariance of the input units, assuming that the Gaussian units have unit variance. This form is called a Gaussian-binary RBM [42].

The RBM has symmetric connections between the two layers but no connections within the hidden nodes or visible nodes. This conditional independence makes it easy to compute the conditional probability distributions when the nodes in either layer are observed:

P(h_j = 1 | v) = σ(c_j + Σ_i W_ij v_i)    (4)
P(v_i | h) = N(b_i + Σ_j W_ij h_j, 1)    (5)

where σ(x) = 1 / (1 + exp(−x)) is the logistic function and N(μ, 1) is a Gaussian distribution with mean μ and unit variance. These can be directly derived from Equations 1 and 3.

The model parameters of the RBM are estimated by taking the derivative of the log-likelihood with respect to each parameter and then updating the parameters using gradient descent. The update rules for the weight matrix and bias terms are obtained from Equation 1:

ΔW = α (⟨v h^T⟩_data − ⟨v h^T⟩_model)    (6)
Δb = α (⟨v⟩_data − ⟨v⟩_model)    (7)
Δc = α (⟨h⟩_data − ⟨h⟩_model)    (8)

where α is the learning rate and the angle brackets denote expectations with respect to the data distribution and the model distribution. While ⟨·⟩_data can be easily obtained, exact computation of ⟨·⟩_model is intractable. In practice, the learning rules in Equation 6 converge well with only a single iteration of block Gibbs sampling when it starts by setting the states of the visible units to the training data. This approximation is called contrastive divergence [43].

This parameter estimation is based solely on maximum likelihood, so it is prone to overfitting the training data. As a way of improving generalization to new data [44], the maximum likelihood estimation is penalized with an additional term called weight-decay. The typical choice is the L2 norm, which is half of the sum of the squared weights. Taking the derivative, the weight update rule in Equation 6 is modified to:

ΔW = α (⟨v h^T⟩_data − ⟨v h^T⟩_model − λ W)    (9)

where λ is called the weight-cost and controls the strength of the weight-decay.

The sparse RBM is trained with an additional constraint on the update rules, which we call sparsity. We impose the sparsity on the hidden units of a Gaussian-binary RBM based on the technique in [41]. The idea is to add a penalty term that minimizes the deviation of the mean activation of the hidden units from a target sparsity level. Instead of directly applying gradient descent to that term, the technique exploits the contrastive-divergence update rule and simply adds the penalty to the update rule of the hidden bias c. This controls the hidden-unit activations as a shift term of the sigmoid function in Equation 4. As a result, the bias update rule in Equation 8 is modified to:

Δc = α (⟨h⟩_data − ⟨h⟩_model) + λ_s (ρ − (1/N) Σ_n E[h | v^(n)])    (10)

where {v^(n)} is the training set of N samples, ρ determines the target sparsity of the hidden-unit activations and λ_s controls the strength.
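The update rules in Equations (6)-(10) can be sketched as a single CD-1 step. This is an illustrative numpy implementation, not the exact training code: the hyperparameter names (`weight_cost`, `sparsity`, `sparsity_cost`) are our own, and the mean-field reconstruction of the Gaussian visible units is a common simplification:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.03, weight_cost=0.01,
             sparsity=0.02, sparsity_cost=1.0):
    """One contrastive-divergence (CD-1) update for a Gaussian-binary RBM
    with L2 weight-decay and a sparsity penalty on the hidden biases.
    v0: whitened mini-batch (n x n_vis)."""
    # Positive phase: hidden probabilities and sampled binary states, Eq. (4).
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0
    # Negative phase: one block Gibbs step; mean-field Gaussian visibles, Eq. (5).
    v1 = h0 @ W.T + b
    ph1 = sigmoid(v1 @ W + c)
    n = v0.shape[0]
    # Weight update with weight-decay, Eq. (9).
    W += lr * ((v0.T @ ph0 - v1.T @ ph1) / n - weight_cost * W)
    # Visible bias update, Eq. (7).
    b += lr * (v0 - v1).mean(axis=0)
    # Hidden bias update with the sparsity term, Eq. (10).
    c += lr * (ph0 - ph1).mean(axis=0) \
         + lr * sparsity_cost * (sparsity - ph0.mean(axis=0))
    return W, b, c
```

In training, `cd1_step` would be applied repeatedly over whitened mini-batches of sampled spectral blocks.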

III-B3 Max-Pooling and Averaging

Once we train a sparse RBM on the sampled data, we fix the learned parameters and extract the hidden-unit activations in a convolutional manner for each audio track. Following our previous work, we summarize the local features via max-pooling and averaging. Max-pooling has proved to be an effective choice for summarizing local features [34, 19]. It works as a form of temporal masking because it discards small activations around high peaks. We further summarize the max-pooled feature activations by averaging. This produces a bag-of-features that represents a histogram of dominant local feature activations.
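A minimal sketch of this summarization, assuming non-overlapping max-pooling segments (function and argument names are our own):

```python
import numpy as np

def bag_of_features(H, pool_len):
    """Summarize hidden-unit activations H (n_frames x n_hidden) into a
    song-level bag-of-features: max-pool over non-overlapping segments of
    `pool_len` frames, then average the pooled segments."""
    n_frames, n_hidden = H.shape
    pool_len = min(pool_len, n_frames)       # guard for short tracks
    n_seg = n_frames // pool_len
    segs = H[: n_seg * pool_len].reshape(n_seg, pool_len, n_hidden)
    return segs.max(axis=1).mean(axis=0)     # (n_hidden,) song-level vector
```

The resulting fixed-length vector is what the song-level DNN takes as input, regardless of track duration.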

Fig. 3: The architecture of the bag-of-features model. The three fully connected layers are first pretrained with song-level bag-of-features data using stacked RBMs with ReLUs and then fine-tuned with labels.

III-C Song-Level Learning

This stage performs supervised learning to predict tags from the bag-of-features. Using a deep neural network (DNN), we build a deep bag-of-features representation that captures the complex relations between the summarized acoustic features and semantic labels. We configure the DNN to have up to three hidden layers and rectified linear units (ReLUs) as the nonlinearity, as shown in Figure 3. ReLUs have proved to be highly effective in DNN training when used with dropout regularization [45, 46, 32], and are also much faster to compute than other nonlinear functions such as the sigmoid. We first pretrain the DNN with a stack of RBMs and then fine-tune it using the tag labels. The output layer works as multiple independent binary classifiers: each output unit corresponds to a tag label and predicts whether the audio track is labeled with it or not.

III-C1 Pretraining

Pretraining is an unsupervised approach to better initialize the DNN [43]. Although recent results have shown that pretraining is not necessary when the number of labeled training samples is sufficient [14, 47], we conduct the pretraining to verify whether it is needed in our experiment setting.

We perform the pretraining by greedy layer-wise learning of RBMs with ReLUs to make the learned parameters compatible with the nonlinearity in the DNN. A ReLU in an RBM can be viewed as the sum of an infinite number of binary units that share weights and have shifted versions of the same bias [48]. This can be approximated by a single unit with the max(0, x) nonlinearity. Furthermore, Gibbs sampling for the ReLUs during training can be performed by taking samples from max(0, x + N(0, σ(x))), where N(0, σ(x)) is Gaussian noise with zero mean and variance σ(x) [48]. We use ReLUs for both visible and hidden nodes of the stacked RBMs. However, for the bottom RBM that takes the bag-of-features as input data, we use binary units for the visible nodes and ReLUs for the hidden nodes to match the scale of the bag-of-features.
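The NReLU sampling step described above can be sketched as follows (an illustration of the approximation in [48]; the function name is our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_nrelu(x):
    """Noisy rectified linear unit: max(0, x + N(0, sigmoid(x))).

    The noise variance is sigmoid(x), so near-zero pre-activations
    receive the most noise, while strongly negative ones stay at zero."""
    var = 1.0 / (1.0 + np.exp(-x))               # variance = sigmoid(x)
    return np.maximum(0.0, x + rng.normal(0.0, np.sqrt(var)))
```

This replaces the binary Gibbs sampling step when training the ReLU-ReLU RBMs in the stack.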

III-C2 Fine-tuning

After initializing the DNN with the weights and biases learned from the RBMs, we fine-tune them with tag labels using error back-propagation. We predict the output by adding the output layer (i.e., weight and bias) to the last hidden layer and applying the sigmoid function, and we define the error as the cross-entropy between the prediction ŷ and the ground truth y over the tags t, given a bag-of-features:

E = −Σ_t [ y_t log(ŷ_t) + (1 − y_t) log(1 − ŷ_t) ]    (11)

We update the set of parameters using AdaDelta. This method requires no manual tuning of the learning rate and is robust to noisy gradient information and variations in model architecture [49]. In addition, we use dropout, a powerful technique that improves the generalization error of large neural networks by randomly setting units in the input and hidden layers to zero [50]. We find AdaDelta and dropout essential for achieving good performance.
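A minimal sketch of the output layer and the cross-entropy objective of Equation 11 (the function names are illustrative):

```python
import numpy as np

def predict_tags(h, W_out, b_out):
    """Output layer: one independent sigmoid unit per tag (multi-label)."""
    return 1.0 / (1.0 + np.exp(-(h @ W_out + b_out)))

def cross_entropy(y_hat, y, eps=1e-12):
    """Summed binary cross-entropy over tags, as in Eq. (11).
    eps avoids log(0) for saturated predictions."""
    return -np.sum(y * np.log(y_hat + eps) + (1 - y) * np.log(1 - y_hat + eps))
```

Back-propagating this error through the pretrained layers, with AdaDelta updates and dropout, constitutes the fine-tuning stage.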

IV Experiments

In this section, we introduce the dataset and evaluation metrics used in our experiments. We also describe the experiment settings for the proposed model.

IV-A Datasets

We use the Magnatagatune dataset, which contains 29-second MP3 clips with annotations collected from an online game [51]. We use the MIREX 2009 version of the dataset as in [34, 35]. It is split into 14660, 1629 and 6499 clips for training, validation and test, respectively, following the prior work. The clips are annotated with a set of 160 tags.

IV-B Evaluation Metrics

Following the evaluation metrics in [34, 35], we use the area under the receiver operating characteristic curve averaged over tags (AUC-T, or shortly AUC), the same measure averaged over clips (AUC-C), and top-K precision for K = 3, 6, 9, 12 and 15.
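As an illustration, AUC-T can be computed with the standard rank-based (Mann-Whitney) estimate of per-tag AUC averaged over tags; this sketch is not the official evaluation code:

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC for one tag: the fraction of (positive, negative)
    clip pairs ranked correctly, counting ties as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if len(pos) == 0 or len(neg) == 0:
        return np.nan                        # tag has no pos/neg examples
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def auc_tag(score_matrix, label_matrix):
    """AUC-T: average the per-tag AUC over all tags (columns)."""
    return np.nanmean([auc(score_matrix[:, t], label_matrix[:, t])
                       for t in range(score_matrix.shape[1])])
```

AUC-C is the same computation with clips and tags transposed, i.e., averaging over rows instead of columns.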

IV-C Preprocessing Parameters

We first convert the MP3 files to the WAV format and resample them to 22.05 kHz. We then compute the spectrogram with a 46 ms Hann window and 50% overlap, to which the time-frequency automatic gain control in [38] is applied. This equalizes the spectrogram using spectral envelopes computed over 10 sub-bands. We convert the equalized spectrogram to a mel-frequency spectrogram with 128 bins and finally compress the magnitude with the compression strength fixed to 10.

IV-D Experiment Settings

IV-D1 Local Feature Learning and Summarization

The first step in this stage is to train the PCA (for whitening) and the sparse RBM. Each training sample is a spectral block comprising multiple consecutive frames of the mel-frequency spectrogram. We gather training data (200,000 samples in total) by taking one spectral block every second, either at a random position or using the onset detection function. The number of frames in the spectral block is varied over 2, 4, 6, 8 and 10, and we evaluate each setting separately. We obtain the PCA whitening matrix retaining 90% of the variance to reduce the dimensionality, and then train the sparse RBM with a learning rate of 0.03, a hidden-layer size of 1024 and target sparsity values of 0.007, 0.01, 0.02 and 0.03. Once we have learned the PCA whitening matrix and RBM weights, we extract hidden-unit activations from each audio track in a convolutional manner and summarize them into a bag-of-features with max-pooling over segments of 0.25, 0.5, 1, 2 and 4 seconds.

Since this stage creates a large number of possibilities for obtaining a bag-of-features, we reduce the number of adjustable parameters before proceeding with song-level supervised learning. Among others, we fix the number of frames in the spectral block and the data sampling scheme, which are related to collecting the sample data. We find a reasonable setting for them using a simple linear classifier that minimizes the same cross-entropy as in Equation 11 (i.e., logistic regression).

Fig. 4: Results for number of frames in the input spectral block and data sampling scheme. Each box contains the statistics of AUC for different sparsity and max-pooling sizes.
Fig. 5: Results for different number of hidden layers. Each boxplot contains the statistics of AUC for different target sparsity and max-pooling sizes in the bag-of-features.

IV-D2 Song-Level Supervised Learning

We first pretrain the DNN with RBMs and then fine-tune the networks. We fix the hidden-layer size to 512 and adjust the number of hidden layers from 1 to 3 to verify the effect of larger networks. In training the ReLU-ReLU RBMs, we set the learning rate to a small value (0.003) in order to avoid unstable dynamics in the weight updates [44]. We also adjust the weight-cost in training the RBMs over 0.001, 0.01 and 0.1, separately for each hidden layer. We fine-tune the pretrained networks using Deepmat, a Matlab library for deep learning (https://github.com/kyunghyuncho/deepmat). This library includes implementations of AdaDelta and dropout, and supports GPU processing. In order to validate the proposed model, we compare it to DNNs with random initialization and also to the same model but with ReLU units for the visible layer of the bottom RBM. Our experiment code is available at https://github.com/juhannam/deepbof.

V Results

In this section, we examine the training choices and tuning parameters in our experiments, and finally compare our results to the state of the art.

V-A Onset Detection Function and Number of Frames

We compare random sampling with onset-based sampling in the context of finding an optimal number of frames for local feature learning. In order to keep the experiment tractable, we chose logistic regression as the classifier instead of the DNN. Figure 4 shows the evaluation results. With random sampling, the AUC increases up to 6 frames and then slowly decays. A similar trend appears with onset-based sampling, but the AUC saturates at a higher level, indicating that onset-based sampling is more effective for local feature learning. In the following experiments, we fix the number of frames to 8, as it provides the highest median AUC.

V-B Pretraining by ReLU RBMs

Figure 5 shows the evaluation results for different numbers of hidden layers when the DNN is randomly initialized or pretrained with ReLU RBMs. When the network has a single hidden layer, there is no significant difference in AUC. As the number of hidden layers increases, however, the pretrained networks clearly outperform the randomly initialized ones. This result is interesting in light of recent observations that pretraining is unnecessary when the number of labeled training samples is sufficient; it may indicate that the amount of labeled data in our experiment is not large enough. However, we note that the auto-tagging task is formed as a multiple binary classification problem, which differs from choosing one label exclusively, and furthermore the levels of abstraction in the tag labels are not homogeneous (e.g., they include moods and instruments). In addition, some recent work shows that pretraining is still useful [46].

Fig. 6: Results for different target sparsity and max-pooling sizes in the bag-of-features when we use a pretrained DNN with three hidden layers.

V-C Sparsity and Max-pooling

Figure 6 shows the evaluation results for different target sparsity and max-pooling sizes in the bag-of-features when we use a pretrained DNN with three hidden layers. The best results are achieved when the target sparsity is 0.02 and the max-pooling size is 1 or 2 seconds. Compared to our previous work [17], the optimal target sparsity has not changed, whereas the optimal max-pooling size is significantly reduced. Considering that we used 29-second clips from the Magnatagatune dataset as opposed to the full audio tracks in the CAL500 dataset (typically 3-4 minutes long), the optimal max-pooling size seems to be proportional to the length of the audio tracks.

V-D Weight-Cost

We adjust the weight-cost in training the RBMs over three different values. Since this exponentially increases the number of networks to train as we stack up RBMs, a brute-force search for an optimal setting of weight-costs becomes very time-consuming; for example, with three hidden layers, we would have to fine-tune 27 different instances of pretrained networks. From our experiments, however, we observed that good results tend to be obtained when the bottom layer has a small weight-cost and the upper layers have progressively greater weight-costs. In order to validate this observation, we plot the statistics of AUC for a given weight-cost at each layer in Figure 7. For example, the left-most boxplot is computed from all combinations of weight-costs when the weight-cost of the first-layer RBM (L1) is fixed to 0.001 (this includes 9 combinations of weight-costs over three hidden layers, counted over all target sparsity and max-pooling sizes). For the first layer, the AUC goes up as the weight-cost becomes smaller. However, the trend weakens in the second layer (L2) and reverses for the third layer (L3): the best median AUC is obtained when the weight-cost is 0.1 for the third layer, even though the difference is slight. This result implies that it is better to encourage "maximum likelihood" in the first layer via a small weight-cost and to regularize the upper layers with progressively greater weight-costs. This is plausible considering that the level of abstraction in the DNN goes from acoustic feature summaries to semantic words.

Based on this observation, we suggest a special condition for the weight-cost setting that reduces the number of pretraining instances: we set the weight-cost to a small value (0.001) for the first layer and to an equal or greater value for each upper layer. Figure 8 compares this condition, denoted "WC_Inc", to the best result and to settings fixed across all layers. "WC_Inc" achieves the best result in three of the four cases and always outperforms the three fixed settings. This shows that, with the special condition for the weight-cost setting, we can save a significant amount of training time while achieving high performance.

Fig. 7: Results for a fixed weight-cost at each layer. Each boxplot shows the distribution of AUC over all weight-cost combinations in the three hidden layers, given the fixed weight-cost.
Fig. 8: Results for different settings of weight-costs in training RBMs. “Best” is the best result among all pretrained networks (27 instances). “WC=0.1”, “WC=0.01” and “WC=0.001” indicate that the weight-cost is fixed to that value for all hidden layers. “WC_Inc” is the best result among instances where the weight-cost is 0.001 for the bottom layer and greater than or equal to that of the layer below for upper layers (6 weight-cost combinations for three hidden layers). The max-pooling size is fixed to 1 second here.

V-E Comparison with State-of-the-art Algorithms

We lastly compare our proposed model to previous state-of-the-art algorithms in music auto-tagging. Since we use the MIREX 2009 version of the Magnatagatune dataset, for which Hamel et al. achieved the best performance [34, 35], we compare only against their evaluation results in Table I. They also used deep neural networks, with a special preprocessing of the mel-frequency spectrogram. Our deep bag-of-features model outperforms theirs on all evaluation metrics.

VI Conclusion

We presented a deep bag-of-features model for music auto-tagging. The model learns a large dictionary of local feature bases over multiple frames selected by onset-based sampling and summarizes an audio track as a bag of learned audio features via max-pooling and averaging. It then pretrains and fine-tunes a DNN to predict the tags. The deep bag-of-features model can be seen as a special case of a deep convolutional neural network, as it has a convolution and pooling stage, where the local features are extracted and summarized, followed by three fully connected layers. As future work, we will move on to more general CNN models used in computer vision and train them with large-scale datasets.

 

TABLE I: Performance comparison with Hamel et al.’s results on the Magnatagatune dataset.

Methods             AUC-T   AUC-C   P3      P6      P9      P12     P15
PMSC+PFC [34]       0.845   0.938   0.449   0.320   0.249   0.205   0.175
PMSC+MTSL [34]      0.861   0.943   0.467   0.327   0.255   0.211   0.181
Multi PMSCs [35]    0.870   0.949   0.481   0.339   0.263   0.216   0.184
Deep-BoF            0.888   0.956   0.511   0.358   0.275   0.225   0.190
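The P3 through P15 columns in Table I report precision at k over the ranked tag predictions. A minimal sketch of this metric follows; the tag scores and ground-truth set here are made up purely for illustration:

```python
def precision_at_k(scores, true_tags, k):
    """Fraction of the top-k ranked tags that appear in the ground-truth set."""
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    return sum(tag in true_tags for tag in ranked) / k

# Toy example: predicted tag scores for one clip and its true tags
scores = {"rock": 0.9, "guitar": 0.7, "jazz": 0.2, "piano": 0.1}
print(precision_at_k(scores, {"rock", "piano"}, 3))  # only "rock" is in the top 3
```

In the evaluation, such per-clip (or per-tag) precision values are averaged over the test set to produce the numbers in the table.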

Acknowledgment

This work was supported by Korea Advanced Institute of Science and Technology (Project No. G04140049).

References

  • [1] G. Tzanetakis and P. Cook, “Musical genre classification of audio signals,” IEEE Transactions on Speech and Audio Processing, 2002.
  • [2] M. F. McKinney and J. Breebaart, “Features for audio and music classification,” in Proceedings of the 4th International Conference on Music Information Retrieval (ISMIR), 2003.
  • [3] D.-N. Jiang, L. Lu, H.-J. Zhang, and J.-H. Tao, “Music type classification by spectral contrast feature,” in Proceedings of International Conference on Multimedia Expo (ICME), 2002.
  • [4] T. Li, M. Ogihara, and Q. Li, “A comparative study of content-based music genre classification,” in Proceedings of the 26th international ACM SIGIR conference on Research and development in information retrieval, 2003.
  • [5] Y. Panagakis, C. Kotropoulos, and G. R. Arce, “Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification,” IEEE Transactions on Audio, Speech and Language Processing, 2010.
  • [6] J. Bergstra, N. Casagrande, D. Erhan, D. Eck, and B. Kegl, “Aggregate features and adaboost for music classification,” Machine Learning, 2006.
  • [7] T. Bertin-Mahieux, D. Eck, F. Maillet, and P. Lamere, “Autotagger: a model for predicting social tags from acoustic features on large music databases,” in Journal of New Music Research, 2010.
  • [8] K. K. Chang, J.-S. R. Jang, and C. S. Iliopoulos, “Music genre classification via compressive sampling,” in Proceedings of the 11th International Conference on Music Information Retrieval (ISMIR), 2010.
  • [9] C. N. Silla, A. Koerich, and C. Kaestner, “A feature selection approach for automatic music genre classification,” International Journal of Semantic Computing, 2008.
  • [10] B. A. Olshausen and D. J. Field, “Emergence of simple-cell receptive field properties by learning a sparse code for natural images,” Nature, pp. 607–609, 1996.
  • [11] M. S. Lewicki, “Efficient coding of natural sounds,” Nature Neuroscience, 2002.
  • [12] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: a review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
  • [13] G. Hinton, L. Deng, D. Yu, G. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition,” IEEE Signal Processing Magazine, 2012.
  • [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proceedings of the 25th Conference on Neural Information Processing Systems (NIPS), 2012.
  • [15] Y. Bengio, “Learning deep architectures for AI,” Foundations and Trends in Machine Learning, 2009.
  • [16] E. J. Humphrey, J. P. Bello, and Y. LeCun, “Moving beyond feature design: Deep architectures and automatic feature learning in music informatics,” in Proceedings of the 13th International Conference on Music Information Retrieval (ISMIR), 2012.
  • [17] J. Nam, J. Herrera, M. Slaney, and J. O. Smith, “Learning sparse feature representations for music annotation and retrieval,” in Proceedings of the 13th International Conference on Music Information Retrieval (ISMIR), 2012.
  • [18] J. Wülfing and M. Riedmiller, “Unsupervised learning of local features for music classification,” in Proceedings of the 13th International Conference on Music Information Retrieval (ISMIR), 2012.
  • [19] S. Dieleman and B. Schrauwen, “Multiscale approaches to music audio feature learning,” in Proceedings of the 14th International Conference on Music Information Retrieval (ISMIR), 2013.
  • [20] A. van den Oord, S. Dieleman, and B. Schrauwen, “Transfer learning by supervised pre-training for audio-based music classification,” in Proceedings of the 15th International Conference on Music Information Retrieval (ISMIR), 2014.
  • [21] R. Grosse, R. Raina, H. Kwong, and A. Y. Ng, “Shift-invariant sparse coding for audio classification,” in Proceedings of the Conference on Uncertainty in AI, 2007.
  • [22] P.-A. Manzagol, T. Bertin-Mahieux, and D. Eck, “On the use of sparse time-relative auditory codes for music,” in Proceedings of the 9th International Conference on Music Information Retrieval (ISMIR), 2008.
  • [23] M. Henaff, K. Jarrett, K. Kavukcuoglu, and Y. LeCun, “Unsupervised learning of sparse features for scalable audio classification,” in Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR), 2011.
  • [24] Y. Vaizman, B. McFee, and G. Lanckriet, “Codebook based audio feature representation for music information retrieval,” IEEE Transactions on Acoustics, Speech and Signal Processing, 2014.
  • [25] J. Schlüter and C. Osendorfer, “Music Similarity Estimation with the Mean-Covariance Restricted Boltzmann Machine,” in Proceedings of the 10th International Conference on Machine Learning and Applications, 2011.
  • [26] L. Su, C.-C. M. Yeh, J.-Y. Liu, J.-C. Wang, and Y.-H. Yang, “A systematic evaluation of the bag-of-frames representation for music information retrieval,” IEEE Transactions on Acoustics, Speech and Signal Processing, 2014.
  • [27] A. Coates, H. Lee, and A. Ng, “An analysis of single-layer networks in unsupervised feature learning,” Journal of Machine Learning Research, 2011.
  • [28] H. Lee, Y. Largman, P. Pham, and A. Y. Ng, “Unsupervised feature learning for audio classification using convolutional deep belief networks,” in Advances in Neural Information Processing Systems 22, 2009, pp. 1096–1104.
  • [29] C.-C. M. Yeh, L. Su, and Y.-H. Yang, “Dual-layer bag-of-frames model for music genre classification,” in Proceedings of the 37th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.
  • [30] P. Hamel and D. Eck, “Learning features from music audio with deep belief networks,” in Proceedings of the 11th International Conference on Music Information Retrieval (ISMIR), 2010.
  • [31] E. M. Schmidt and Y. E. Kim, “Learning emotion-based acoustic features with deep belief networks,” in Proceedings of the 2011 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2011.
  • [32] S. Sigtia and S. Dixon, “Improved music feature learning with deep neural networks,” in Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2014.
  • [33] E. M. Schmidt, J. Scott, and Y. E. Kim, “Feature learning in dynamic environments: modeling the acoustic structure of musical emotion,” in Proceedings of the 13th International Conference on Music Information Retrieval (ISMIR), 2012.
  • [34] P. Hamel, S. Lemieux, Y. Bengio, and D. Eck, “Temporal pooling and multiscale learning for automatic annotation and ranking of music audio,” in Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR), 2011.
  • [35] P. Hamel, Y. Bengio, and D. Eck, “Building musically-relevant audio features through multiple timescale representations,” in Proceedings of the 13th International Conference on Music Information Retrieval (ISMIR), 2012.
  • [36] A. van den Oord, S. Dieleman, and B. Schrauwen, “Deep content-based music recommendation,” in Proceedings of the 27th Conference on Neural Information Processing Systems (NIPS), 2013.
  • [37] S. Dieleman and B. Schrauwen, “End-to-end learning for music audio,” in Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2014.
  • [38] D. Ellis, “Time-frequency automatic gain control,” web resource, available, http://labrosa.ee.columbia.edu/matlab/tf_agc/, 2010.
  • [39] M. Müller, D. Ellis, A. Klapuri, and G. Richard, “Signal processing for music analysis,” IEEE Journal on Selected Topics in Signal Processing, 2011.
  • [40] A. Hyvärinen, J. Hurri, and P. O. Hoyer, Natural Image Statistics.   Springer-Verlag, 2009.
  • [41] H. Lee, C. Ekanadham, and A. Y. Ng, “Sparse deep belief net model for visual area V2,” in Advances in Neural Information Processing Systems 20, 2008, pp. 873–880.
  • [42] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, “Greedy layer-wise training of deep networks,” Advances in Neural Information Processing Systems 19, 2007.
  • [43] G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” Neural computation, vol. 18, pp. 1527–1554, 2006.
  • [44] G. E. Hinton, “A practical guide to training restricted boltzmann machines,” UTML Technical Report, vol. 2010-003, 2010.
  • [45] M. D. Zeiler, M. Ranzato, R. Monga, M. Mao, K. Yang, Q. V. Le, P. Nguyen, A. Senior, V. Vanhoucke, J. Dean, and G. E. Hinton, “On rectified linear units for speech processing,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.
  • [46] G. Dahl, T. N. Sainath, and G. Hinton, “Improving deep neural networks for lvcsr using rectified linear units and dropout,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.
  • [47] D. Yu and L. Deng, Automatic Speech Recognition-A Deep Learning Approach.   Springer, 2015.
  • [48] V. Nair and G. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning (ICML), 2010.
  • [49] M. D. Zeiler, “Adadelta: An adaptive learning rate method,” in arXiv:1212.5701v1, 2012.
  • [50] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, 2014.
  • [51] E. Law and L. von Ahn, “Input-agreement: a new mechanism for collecting data using human computation games,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), 2009.