
musicnn: Pre-trained convolutional neural networks for music audio tagging

by Jordi Pons et al.
Universitat Pompeu Fabra

Pronounced "musician", the musicnn library contains a set of pre-trained, musically motivated convolutional neural networks for music audio tagging. The repository also includes pre-trained VGG-like baselines. These models can be used as out-of-the-box music audio taggers, as music feature extractors, or as pre-trained models for transfer learning. We also provide the code to train the aforementioned models, and the framework allows implementing novel models. For example, a musically motivated convolutional neural network with an attention-based output layer (instead of the temporal pooling layer) achieves state-of-the-art results for music audio tagging: 90.77 ROC-AUC / 38.61 PR-AUC on the MagnaTagATune dataset, and 88.81 ROC-AUC / 31.51 PR-AUC on the Million Song Dataset.
