Music Auto-tagging Using CNNs and Mel-spectrograms With Reduced Frequency and Time Resolution

11/12/2019
by   Andres Ferraro, et al.

Automatic tagging of music is an important research topic in Music Information Retrieval that has seen improvements with advances in deep learning. In particular, many state-of-the-art systems use Convolutional Neural Networks and operate on mel-spectrogram representations of the audio. In this paper, we compare commonly used mel-spectrogram representations and evaluate the model performances that can be achieved by reducing the input size, in terms of both fewer frequency bands and larger hop sizes (lower time resolution). We use the MagnaTagATune dataset for comprehensive performance comparisons and then compare selected configurations on the larger Million Song Dataset. The results of this study can serve researchers and practitioners in their trade-off decisions between the accuracy of the models, data storage size, and training and inference times.


1 Introduction

Current state-of-the-art systems for music auto-tagging using audio are based on deep learning, in particular convolutional neural networks (CNNs), following two different approaches: one directly using the audio as input (end-to-end models) [12] and the other using spectrograms as input [9, 4]. Previous work [15] suggests that the two approaches can have comparable performance when applied to large datasets.

We can distinguish two architectures for the spectrogram-based CNN solutions, depending on whether they use multiple convolutional layers of small filters [6, 3] or multiple filter shapes [17, 16, 15]. The former is borrowed from the computer vision field (VGG [18]) and gives good performance without prior domain knowledge, while the latter is based on such knowledge and employs filters designed to capture information relevant for music auto-tagging, such as timbre or rhythm. Commonly, mel-spectrograms are used with such architectures, although constant-Q [13, 4] and raw short-time Fourier transform (STFT) [7] spectrograms can also be applied.

In this paper, we compare the performance of two state-of-the-art CNN approaches to music auto-tagging [3, 15] using different mel-spectrogram representations as input. We study how reducing the size of the input spectrograms, in terms of both fewer frequency bands and larger hop sizes, affects the performance. We show that by reducing the frequency and time resolution we can train the networks faster with only a small decrease in performance. The results of this study can help to build faster CNN models as well as reduce the amount of data to be stored and transferred, optimizing resources when handling large collections of music.

2 Related work

Only a few previous studies have compared different spectrogram representations for CNN architectures. Instead, it is common to focus on tuning model hyper-parameters with a fixed, pre-chosen input. The choice of the spectrogram input is made empirically and often follows approaches previously reported in the literature. Very little information comparing different inputs is available, as authors tend to report only the most successful approaches. Also, as existing studies on music auto-tagging focus on optimizing accuracy metrics, there is a lack of work that aims to simplify networks and their inputs for computational efficiency and to consider practical aspects of storing spectrogram representations efficiently.

To the best of our knowledge, there is no systematic comparison of mel-spectrogram representations. The only work we are aware of in this direction was done by Choi et al. [7], where the authors compare model performances under different pre-processing strategies such as scaling, log-compression, and frequency weighting. The same authors provide an overview of different inputs that can be used for the auto-tagging task in [4]. In relation to mel-spectrograms, they suggest that one can optimize the input to the network by changing some of the signal processing parameters, such as the sampling rate, window size, hop size or mel band resolution. These optimizations can help to minimize data size and train the networks more efficiently; however, no quantitative evaluations are provided.

3 Datasets

Researchers in music auto-tagging commonly use the MagnaTagATune dataset [11] to evaluate multiple settings and then repeat selected settings on the Million Song Dataset [1] to validate differences in performance on a larger scale [9, 3, 15]. It is important to note that both datasets contain unbalanced and noisy and/or weakly-labeled annotations [5] and are therefore challenging to work with, as the reliability of the conducted evaluations may be affected [20]. Still, these are the two most widely used datasets for benchmarking due to the availability of audio.

3.1 MagnaTagATune (MTAT)

The MagnaTagATune dataset contains multi-label annotations of genre, mood and instrumentation for 25,877 audio segments. Each segment is 30 seconds long, and the dataset contains multiple segments per song. All the audio is in MP3 format with a 32 Kbps bitrate and a 16 KHz sample rate. The dataset is split into 16 folders, and researchers commonly use the first 12 folders for training, the 13th for validation and the last three for testing. Also, only the 50 most frequent tags are typically used for evaluation. These tags include genre and instrumentation labels, as well as eras (e.g., '80s' and '90s') and moods.

3.2 Million Song Dataset (MSD)

The MSD [1] is a large dataset of audio features, expanded by the MIR community with additional information including tags, lyrics and other annotations. It also contains a subset mapped by researchers to 30-second audio previews available at 7digital and collaborative tags from Last.fm. This subset contains 241,904 annotated track fragments and is commonly used as another, larger-scale benchmark for music auto-tagging systems. The tags cover genre, instrumentation, moods and eras. The audio fragments vary in quality, encoded as MP3 with bitrates ranging from 64 to 128 Kbps and sample rates of 22 KHz or 44 KHz.

4 Baseline architectures

In this work, we reproduce two CNN architectures, applying them to mel-spectrograms with reduced frequency and time resolution. These architectures are among the best performing according to existing evaluations on the MTAT and MSD datasets:

  • VGG applied to music (VGG-CNN) [3]. This architecture contains multiple layers of small 2D filters, as it has been adapted from the computer vision field [18]. It is a fully-convolutional network consisting of four convolutional layers with small 3×3 filters (filter dimensions are given as the number of mel bands × the number of frames) and the max pooling (MP) settings presented in Table 1. The network operates on 96-band mel-spectrograms for 29.1 s audio segments, with a 12 KHz sample rate, a frame size of 512 samples and a hop size of 256 samples.

  • Musically-motivated CNN (MUSICNN) [15]. The architecture contains more filters of different shapes, designed with the intention to capture musically relevant information such as timbre (38×1, 38×3, 38×7, 86×1, 86×3, 86×7) and temporal patterns (1×32, 1×64, 1×128, 1×165) in the first layer (a sketch of this front-end is given after this list). The convolution results are concatenated and passed to three additional convolutional layers including residual connections. We refer the readers to the original paper for all architecture details. The original network operates on 96-band mel-spectrograms computed on shorter 15 s audio segments, with a 16 KHz sample rate, a frame size of 512 samples and a hop size of 256 samples (frame and hop size settings were confirmed in personal communication with the author). It then averages tag activation scores across multiple segments of the same audio input.
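
To make the filter-shape description above more concrete, below is a rough Keras sketch of the MUSICNN first-layer front-end only, assuming 96 mel bands (so the timbral filter heights of 38 and 86 bins correspond to roughly 40% and 90% of the frequency axis); the padding, pooling and number of filters per shape are our own simplifications, not the authors' implementation.

```python
from tensorflow.keras import layers, models

def build_musicnn_frontend(n_mels=96, n_frames=187, n_filters=16):
    """Sketch of the multi-shape first layer; 187 frames is roughly 3 s at 16 KHz, 256-sample hop."""
    inputs = layers.Input(shape=(n_mels, n_frames, 1))
    branches = []
    # Timbral filters: tall in frequency (about 40% and 90% of the mel axis), short in time.
    for frac in (0.4, 0.9):
        height = int(round(frac * n_mels))       # 38 and 86 bins for 96 mel bands
        for width in (1, 3, 7):
            b = layers.Conv2D(n_filters, (height, width), padding='same', activation='relu')(inputs)
            b = layers.MaxPooling2D(pool_size=(n_mels, 1))(b)  # collapse the frequency axis
            branches.append(b)
    # Temporal filters: one band tall, long in time.
    for width in (32, 64, 128, 165):
        b = layers.Conv2D(n_filters, (1, width), padding='same', activation='relu')(inputs)
        b = layers.MaxPooling2D(pool_size=(n_mels, 1))(b)
        branches.append(b)
    # Concatenate all branches along the channel axis, as described above.
    outputs = layers.Concatenate(axis=-1)(branches)
    return models.Model(inputs, outputs)
```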

For evaluation on MTAT and MSD, we use batch normalization, Adam [10] as the optimization method with a learning rate of 0.001, and binary cross-entropy as the loss function for both architectures, following their authors.

Input: Mel-spectrogram (96×1366×1)
Layer 1: Conv 3×3×128, MP (2, 4) (output: 48×341×128)
Layer 2: Conv 3×3×384, MP (4, 5) (output: 24×85×384)
Layer 3: Conv 3×3×768, MP (3, 8) (output: 12×21×768)
Layer 4: Conv 3×3×2048, MP (4, 8) (output: 1×1×2048)
Output: 50×1 (sigmoid)
Table 1: The baseline VGG CNN model architecture.
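
As an illustration, below is a minimal Keras sketch of the baseline VGG-CNN from Table 1 together with the training settings described above (Adam with a learning rate of 0.001 and binary cross-entropy). The pooling sizes are taken per layer from Table 3; details such as the placement of batch normalization and the use of 'same' convolution padding are our assumptions, not the authors' exact code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vgg_cnn(n_mels=96, n_frames=1366, n_tags=50,
                  filters=(128, 384, 768, 2048),
                  pool_freq=(2, 4, 3, 4),    # per-layer frequency pooling (Table 3, 96 mel bands)
                  pool_time=(4, 5, 8, 8)):   # per-layer time pooling (Table 3, 12 KHz, x1 hop)
    inputs = layers.Input(shape=(n_mels, n_frames, 1))
    x = inputs
    for n_filt, pf, pt in zip(filters, pool_freq, pool_time):
        x = layers.Conv2D(n_filt, (3, 3), padding='same', activation='relu')(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool_size=(pf, pt))(x)
    x = layers.Flatten()(x)                  # 1 x 1 x 2048 after the final pooling for the defaults above
    outputs = layers.Dense(n_tags, activation='sigmoid')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss='binary_crossentropy')
    return model
```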

5 Mel-spectrograms

We computed mel-spectrograms using typical settings for the MTAT dataset in the state of the art [3, 15]. The most common settings are a 12 KHz or 16 KHz sample rate, frame and hop sizes of 512 and 256 samples, respectively, and a Hann window function. Commonly, 96 or 128 mel bands are used, covering the full frequency range below Nyquist (6 KHz and 8 KHz, respectively) and computed using Slaney's mel scale implementation [19]. To normalize the mel-spectrograms, we considered two log-compression alternatives, denoted "dB" following [7] and "log" following [9].

sample rate | # mel | hop size | log type
12 KHz | 128 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
12 KHz | 96 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
12 KHz | 48 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
12 KHz | 32 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
12 KHz | 24 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
12 KHz | 16 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
12 KHz | 8 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
16 KHz | 128 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
16 KHz | 96 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
16 KHz | 48 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
16 KHz | 32 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
16 KHz | 24 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
16 KHz | 16 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
16 KHz | 8 | ×1, ×2, ×3, ×4, ×5, ×10 | log, dB
Table 2: Mel-spectrogram configurations evaluated on the MTAT dataset. Hop sizes are reported relative to the reference hop size of 256 samples (e.g., ×5 stands for a 5-times-longer hop size).

Starting with these settings, we then considered different variations in frequency and time resolution (smaller numbers of mel bands and larger hop sizes). Table 2 shows all the spectrogram configurations that we evaluated on the MTAT dataset. Each configuration results in a different dimension of the resulting feature matrix (the number of mel bands × the number of frames). An audio segment of 29.1 seconds corresponds to 1366 and 1820 frames in the case of no temporal reduction (×1) for the 12 KHz and 16 KHz sample rates, respectively. In turn, the maximum reduction we considered (×10) results in 137 and 182 frames.

All spectrograms were computed using the Essentia music audio analysis library [2] (https://essentia.upf.edu). It was configured to reproduce, for compatibility, the mel-spectrograms of LibROSA (https://librosa.github.io), another analysis library used by the state of the art. To give a better understanding of what information these spectrograms are able to capture, we provide online a number of examples sonifying the resulting mel-spectrograms for all considered frequency and time resolutions (https://andrebola.github.io/ICASSP2020/demos).
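
For illustration, here is a minimal sketch of how one configuration from Table 2 could be computed with LibROSA (the paper itself uses Essentia configured for LibROSA compatibility); the exact constant used in the "log" compression is our assumption.

```python
import numpy as np
import librosa

def mel_spectrogram(path, sr=12000, n_mels=96, hop_factor=1, log_type='dB'):
    """Compute a mel-spectrogram with the frame/hop settings from Section 5."""
    y, _ = librosa.load(path, sr=sr)
    hop = 256 * hop_factor                     # hop size relative to the 256-sample reference
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512, hop_length=hop,
                                       window='hann', n_mels=n_mels,
                                       htk=False)   # htk=False selects Slaney's mel scale
    if log_type == 'dB':
        return librosa.power_to_db(S)          # "dB" log-compression, as in [7]
    return np.log1p(10000.0 * S)               # assumed "log" variant, log(1 + C*x), as in [9]
```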

6 Baseline architecture adjustments

In this section we explain the changes introduced to the original model architectures presented in Section 4.

6.1 Vgg-Cnn

We try to preserve the original architecture defined in [3] in terms of the size and number of filters in each layer, but we need to adjust the max pooling settings since we are reducing the dimensions of the mel-spectrogram input. We report all such modifications for the VGG-CNN architecture in Table 3. It lists the sizes of the max-pooling windows in each layer, selected according to the number of mel bands and the hop size. We prioritize changes in max pooling in the later layers when possible. We adjust the pooling size to match the input dimensions when possible; otherwise, padding is applied. In the case of the 16 KHz sample rate, more adjustments to VGG-CNN are necessary because, with a fixed reference hop size of 256 samples, the higher sample rate implies a better temporal resolution and larger mel-spectrograms (1820 frames).

It is important to note that if we change the resolution of the input, the 3×3 filters in VGG-CNN capture different ranges of frequency and temporal information. For example, they cover twice the mel-frequency range and a doubled time interval when using 48 mel bands and a ×2 hop size. This can be an advantage, because it reduces the amount of information that the network needs to learn.

hop size | max-pooling sizes (time), 12 KHz | max-pooling sizes (time), 16 KHz
×1 | 4, 5, 8, 8 | 4, 5, 9, 10
×2 | 4, 5, 8, 4 | 4, 5, 9, 5
×3 | 4, 5, 8, 2 | 4, 5, 9, 3
×4 | 4, 5, 8, 2 | 4, 5, 9, 2
×5 | 4, 5, 8, 1 | 4, 5, 9, 2
×10 | 4, 5, 4, 1 | 4, 5, 9, 1

# mel | max-pooling sizes (frequency)
128 | 2, 4, 4, 4
96 | 2, 4, 3, 4
48 | 2, 4, 3, 2
32 | 2, 2, 3, 2
24 | 2, 2, 3, 2
16 | 2, 2, 2, 2
8 | 2, 2, 2, 1
Table 3: Adjusted sizes of the max-pooling windows (time and frequency) in the four consecutive layers of the VGG-CNN model with respect to the hop size, sample rate and the number of mel bands. The original settings correspond to the ×1 hop size at 12 KHz and 96 mel bands.
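
As a quick sanity check of the adjusted time poolings, the snippet below (assuming plain floor division without padding) verifies that the per-layer time pooling sizes in Table 3 reduce the frame axis to a single column.

```python
def frames_after_pooling(n_frames, pools):
    """Apply successive max-pooling reductions (floor division) along the time axis."""
    for p in pools:
        n_frames //= p
    return n_frames

# x1 hop size: 1366 frames at 12 KHz, 1820 frames at 16 KHz (Section 5).
assert frames_after_pooling(1366, [4, 5, 8, 8]) == 1
assert frames_after_pooling(1820, [4, 5, 9, 10]) == 1
# x2 hop size at 12 KHz: 683 frames.
assert frames_after_pooling(683, [4, 5, 8, 4]) == 1
```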

6.2 Musicnn

In the original model, the timbre filters' sizes in frequency are computed relative to the number of mel bands (90% and 40%). We preserve the same relation when we change this number. In our implementation we modified the segment size to 3 seconds, as we obtained slightly better results in our preliminary evaluation (similar to suggestions by other researchers reproducing this model). We keep the temporal dimension of the filters (the number of frames) intact for all considered mel-spectrogram settings.

7 Evaluation Metrics

CNN models for auto-tagging output continuous activation values within [0, 1] for each tag, and therefore we can study the performance of binary classification under different activation thresholds. To this end, following previous works [14, 15, 3], we use the Receiver Operating Characteristic Area Under Curve (ROC AUC) averaged across tags as our performance metric. We also report the Precision-Recall Area Under Curve (PR AUC), because previous studies [8] have shown that ROC AUC can give over-optimistic scores when the data is unbalanced, which is our case. Both ROC AUC and PR AUC are single-value measures characterizing the overall performance, which makes it easy to compare multiple systems.
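
A minimal scikit-learn sketch of these tag-averaged metrics is shown below; average_precision_score is used here as the usual approximation of PR AUC, which is an assumption about the exact computation.

```python
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_tags(y_true, y_scores):
    """y_true: (n_examples, n_tags) binary matrix; y_scores: activations in [0, 1]."""
    roc_auc = roc_auc_score(y_true, y_scores, average='macro')           # mean over tags
    pr_auc = average_precision_score(y_true, y_scores, average='macro')  # PR AUC approximation
    return roc_auc, pr_auc
```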

To measure the computational cost of model training and inference, we use an estimate of the number of multiply-accumulate operations required by a network to process one batch (1 GMAC is equal to 1 giga MAC operations). This metric is related to the time a model requires for training and inference. We use an online tool (https://dgschwend.github.io/netscope/quickstart.html) to compute approximate MAC values for our architectures.
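
As a rough indication of what this metric counts, the sketch below estimates the MACs of a single convolutional layer (one multiply-accumulate per kernel weight per output element); it ignores pooling, activations and the output layer, so it is a simplification rather than the tool's exact accounting.

```python
def conv_macs(out_h, out_w, out_channels, k_h, k_w, in_channels):
    """MACs for one 2D convolution: every output element needs k_h * k_w * in_channels MACs."""
    return out_h * out_w * out_channels * k_h * k_w * in_channels

# Example: the first VGG-CNN layer on a 96x1366 mel-spectrogram
# (3x3 kernel, 1 input channel, 128 output channels, 'same' padding).
macs = conv_macs(96, 1366, 128, 3, 3, 1)
print(f"{macs / 1e9:.3f} GMAC")   # ~0.151 GMAC for this single layer
```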

8 Results

We evaluated the considered mel-spectrogram settings on the adjusted CNN models. Full results for all evaluated configurations are available online (https://andrebola.github.io/ICASSP2020/results). In Figure 1 we show the results of the evaluation for VGG-CNN on the MTAT dataset, repeated three times for each configuration. The first two plots show the ROC AUC results for the 12 KHz and 16 KHz sample rates using the log and dB scaling. Similarly, the third and fourth plots show the PR AUC results under the same conditions. The last plot shows the GMAC values.

Figure 1: Mean and standard deviation of ROC AUC and PR AUC of the VGG-CNN model computed on three runs for each mel-spectrogram configuration (# mel, hop size, sample rate, and log type) and the associated GMAC values.

The results show that with some of the settings we can reduce the size of the input in frequency and time without much effect on the performance of VGG-CNN on the MTAT dataset. For example, if we reduce the frequency resolution from 96 to 48 mel bands, we can reduce the MAC operations by nearly 50% without affecting the performance in any configuration. Similarly, we can also reduce the time resolution by 50% without affecting performance, and in this case we also reduce the MAC operations by 50% in all configurations. We can reduce the number of operations further at the cost of some decrease in performance. This can be especially useful for applications requiring lightweight models, as we can get a model 10× faster by sacrificing between 1.4% and 1.8% of the performance, depending on the configuration. Interestingly, both ROC AUC and PR AUC slightly improve when using 48 mel bands compared to 96 bands in most of the cases; however, no statistically significant difference was found for any of the corresponding configurations in an independent-samples t-test.
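
A sketch of the significance test mentioned above, using SciPy: an independent-samples t-test over the per-run ROC AUC scores of two configurations (three runs each); the variable names in the usage comment are placeholders for the measured values.

```python
from scipy.stats import ttest_ind

def compare_configs(scores_a, scores_b):
    """Independent-samples t-test on the per-run ROC AUC scores of two configurations."""
    t_stat, p_value = ttest_ind(scores_a, scores_b)
    return t_stat, p_value

# Example usage (hypothetical variables holding three ROC AUC values per configuration):
# t, p = compare_configs(roc_auc_runs_96mel, roc_auc_runs_48mel)
```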

For the MUSICNN model, we tested the configurations reported in Table 4. We only considered the frequency resolution reduction to 48 mel bands and no hop size increments, due to its significantly slower training time (see Section 4). The results show comparable performance of 96- and 48-band mel-spectrograms and are consistent with the above-mentioned findings for the VGG-CNN model. Overall, the 128 mel band resolution provided the best performance. Also, according to the results, the MUSICNN architecture outperforms VGG-CNN, which is consistent with the reports from the authors.

To check how our findings scale, we selected a number of configurations and re-evaluated the models on the MSD dataset. The results are reported in Table 5. In the case of VGG-CNN, we can see that the performance of the baseline configurations is slightly superior to the ones working with lower-resolution mel-spectrograms, which comes at the cost of a significantly larger computational effort. For example, for the 12 KHz sample rate, ×1 hop size and dB compression, reducing the number of mel bands from 96 to 48 results in a 0.16% decrease in ROC AUC and a 50% reduction in GMACs. For the similar 16 KHz/dB case, the reduced model has the same performance with the benefit of half the computational cost.

In the case of the MUSICNN architecture, we see a reduction in performance of 0.19% when comparing 96 vs. 48 mel bands at the 12 KHz sample rate, and of 0.11% at 16 KHz.

# mels sample rate ROC AUC PR AUC
128 12 KHz 90.40 38.54
96 12 KHz 90.50 37.70
48 12 KHz 90.33 37.80
128 16 KHz 90.83 38.92
96 16 KHz 90.60 38.09
48 16 KHz 90.50 37.70
Table 4: ROC AUC and PR AUC of the MUSICNN model on the MTAT dataset for a selection of configurations using dB log-compression and the reference hop size (×1).
# mels hop size sample rate ROC AUC PR AUC
128 1 12 KHz 86.48 27.56
96 1 12 KHz 86.67 27.70
48 1 12 KHz 86.53 27.27
128 2 12 KHz 86.28 27.24
96 2 12 KHz 86.18 26.93
48 2 12 KHz 85.86 26.42
128 1 16 KHz 86.84 28.10
96 1 16 KHz 86.71 28.06
48 1 16 KHz 86.73 27.78
128 2 16 KHz 86.34 27.06
96 2 16 KHz 86.63 27.70
48 2 16 KHz 86.41 26.83
(a) VGG-CNN
# mels hop size sample rate ROC AUC PR AUC
128 1 12 KHz 87.10 26.97
96 1 12 KHz 87.16 27.10
48 1 12 KHz 86.99 26.66
128 1 16 KHz 87.21 26.91
96 1 16 KHz 87.21 26.96
48 1 16 KHz 87.10 26.64
(b) MUSICNN
Table 5: ROC AUC and PR AUC of the models on the MSD dataset for a selection of configurations using dB log-compression.

9 Conclusions

In this paper we have studied how different mel-spectrogram representations affect the performance of CNN architectures for music auto-tagging. We have compared the performance of two state-of-the-art models when reducing the mel-spectrogram resolution in terms of the number of frequency bands and the frame rate. We used the MagnaTagATune dataset for comprehensive performance comparisons and then compared selected configurations on the larger Million Song Dataset. The results suggest that it is possible to preserve a similar performance while reducing the size of the input. They can help researchers and practitioners to make trade-off decisions between model accuracy, data storage size and training and inference times, which may be crucial in a number of applications.

As future work, other approaches for reducing the dimensionality and size of the input data will be considered, for example quantization of the mel-spectrogram values. The conducted evaluation can also be extended to other state-of-the-art architectures such as [6]. It is also promising to conduct a similar evaluation on other audio auto-tagging tasks. All the code to reproduce this study is open-source and available online (https://andrebola.github.io/ICASSP2020/).

References

  • [1] T. Bertin-Mahieux, D. P. Ellis, B. Whitman, and P. Lamere (2011) The million song dataset. In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR), Cited by: §3.2, §3.
  • [2] D. Bogdanov, N. Wack, E. Gómez, S. Gulati, P. Herrera, O. Mayor, G. Roma, J. Salamon, J. R. Zapata, and X. Serra (2013) ESSENTIA: an audio analysis library for music information retrieval. In Proceedings of the 14th International Society for Music Information Retrieval Conference (ISMIR), pp. 493–498. Cited by: §5.
  • [3] K. Choi, G. Fazekas, and M. Sandler (2016) Automatic tagging using deep convolutional neural networks. In Proceedings of the 17th International Society for Music Information Retrieval Conference (ISMIR), pp. 805–811. Cited by: §1, §1, §3, 1st item, §5, §6.1, §7.
  • [4] K. Choi, G. Fazekas, K. Cho, and M. Sandler (2017) A tutorial on deep learning for music information retrieval. arXiv preprint arXiv:1709.04396. Cited by: §1, §1, §2.
  • [5] K. Choi, G. Fazekas, K. Cho, and M. Sandler (2018) The effects of noisy labels on deep convolutional neural networks for music tagging. IEEE Transactions on Emerging Topics in Computational Intelligence 2 (2), pp. 139–149. Cited by: §3.
  • [6] K. Choi, G. Fazekas, M. Sandler, and K. Cho (2017) Convolutional recurrent neural networks for music classification. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2392–2396. Cited by: §1, §9.
  • [7] K. Choi, G. Fazekas, M. Sandler, and K. Cho (2018) A comparison of audio signal preprocessing methods for deep neural networks on music tagging. In 2018 26th European Signal Processing Conference (EUSIPCO), pp. 1870–1874. Cited by: §1, §2, §5.
  • [8] J. Davis and M. Goadrich (2006) The relationship between precision-recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, pp. 233–240. Cited by: §7.
  • [9] S. Dieleman and B. Schrauwen (2014) End-to-end learning for music audio. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6964–6968. Cited by: §1, §3, §5.
  • [10] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.
  • [11] E. Law, K. West, M. I. Mandel, M. Bay, and J. S. Downie (2009) Evaluation of algorithms using games: the case of music tagging. In Proceedings of the 10th International Society for Music Information Retrieval Conference (ISMIR), pp. 387–392. Cited by: §3.
  • [12] J. Lee, J. Park, K. L. Kim, and J. Nam (2017) Sample-level deep convolutional neural networks for music auto-tagging using raw waveforms. In Proceedings of the 14th Sound and Music Computing Conference (SMC), Cited by: §1.
  • [13] S. Oramas, F. Barbieri, O. Nieto, and X. Serra (2018) Multimodal deep learning for music genre classification. Transactions of the International Society for Music Information Retrieval 1 (1), pp. 4–21. Cited by: §1.
  • [14] S. Oramas, O. Nieto, F. Barbieri, and X. Serra (2017) Multi-label music genre classification from audio, text and images using deep features. In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR), pp. 23–30. Cited by: §7.
  • [15] J. Pons, O. Nieto, M. Prockup, E. M. Schmidt, A. F. Ehmann, and X. Serra (2018) End-to-end learning for music audio tagging at scale. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), pp. 637–644. Cited by: §1, §1, §1, §3, 2nd item, §5, §7.
  • [16] J. Pons and X. Serra (2017) Designing efficient architectures for modeling temporal features with convolutional neural networks. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2472–2476. Cited by: §1.
  • [17] J. Pons, O. Slizovskaia, R. Gong, E. Gómez, and X. Serra (2017) Timbre analysis of music audio signals with convolutional neural networks. In 2017 25th European Signal Processing Conference (EUSIPCO), pp. 2744–2748. Cited by: §1.
  • [18] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §1, 1st item.
  • [19] M. Slaney (1998) Auditory toolbox. Interval Research Corporation, Tech. Rep 10 (1998). Cited by: §5.
  • [20] B. L. Sturm (2012) A survey of evaluation in music genre recognition. In International Workshop on Adaptive Multimedia Retrieval, pp. 29–66. Cited by: §3.