Tracing Back Music Emotion Predictions to Sound Sources and Intuitive Perceptual Qualities

by Shreyan Chowdhury, et al.

Music emotion recognition is an important task in MIR (Music Information Retrieval) research. Owing to factors like the subjective nature of the task and the variation of emotional cues between musical genres, there are still significant challenges in developing reliable and generalizable models. One important step towards better models would be to understand what a model is actually learning from the data and how the prediction for a particular input is made. In previous work, we have shown how to derive explanations of model predictions in terms of spectrogram image segments that connect to the high-level emotion prediction via a layer of easily interpretable perceptual features. However, that scheme lacks intuitive musical comprehensibility at the spectrogram level. In the present work, we bridge this gap by merging audioLIME – a source-separation based explainer – with mid-level perceptual features, thus forming an intuitive connection chain between the input audio and the output emotion predictions. We demonstrate the usefulness of this method by applying it to debug a biased emotion prediction model.
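The explanation chain described above — source-separate the input, perturb the separated sources, and attribute the emotion prediction through a mid-level perceptual feature layer — can be sketched as a LIME-style loop. The sketch below is purely illustrative: the separation, the mid-level feature extractor, and the emotion predictor are toy stand-ins (in the paper these are trained neural models and a real source-separation system), and all function names are assumptions, not the authors' code.

```python
import itertools
import numpy as np

def separate_sources(audio):
    """Toy 'separation': split the signal into three pseudo-sources.
    A real pipeline would use a trained source-separation model."""
    third = len(audio) // 3
    return {
        "vocals": np.pad(audio[:third], (0, len(audio) - third)),
        "drums": np.pad(audio[third:2 * third], (third, len(audio) - 2 * third)),
        "other": np.pad(audio[2 * third:], (2 * third, 0)),
    }

def midlevel_features(audio):
    """Dummy mid-level perceptual features (standing in for qualities
    like rhythmic complexity or tonal stability)."""
    return np.array([np.abs(audio).mean(), audio.std()])

def emotion_score(features):
    """Dummy emotion predictor on top of the mid-level layer
    (e.g. a single arousal score)."""
    return float(features @ np.array([2.0, 1.0]))

def audiolime_attributions(audio):
    """LIME-style attribution: toggle sources on/off, re-predict through
    the mid-level layer, and fit a linear surrogate over the masks."""
    sources = separate_sources(audio)
    names = list(sources)
    masks, scores = [], []
    for mask in itertools.product([0, 1], repeat=len(names)):
        # Remix only the sources kept by this on/off mask.
        mix = sum((m * sources[n] for m, n in zip(mask, names)),
                  np.zeros(len(audio)))
        masks.append(mask)
        scores.append(emotion_score(midlevel_features(mix)))
    # Least-squares fit of score ~ mask yields one weight per source:
    # the surrogate's coefficients are the source-level attributions.
    X = np.column_stack([np.array(masks, dtype=float),
                         np.ones(len(masks))])
    weights, *_ = np.linalg.lstsq(X, np.array(scores), rcond=None)
    return dict(zip(names, weights[:len(names)]))

rng = np.random.default_rng(0)
attr = audiolime_attributions(rng.standard_normal(3000))
print(sorted(attr, key=attr.get, reverse=True))  # sources ranked by influence
```

In the actual method, the surrogate is fit per prediction over random subsets of separated sources, and the mid-level layer makes each attribution readable both as a sound source ("the drums") and as a perceptual quality it drives ("rhythmic complexity").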


