Modeling Majorness as a Perceptual Property in Music from Listener Ratings

06/27/2018
by Anna Aljanaki, et al.

For tasks such as automatic music emotion recognition, genre recognition, and music recommendation, it is helpful to be able to extract the mode of any section of a musical piece as the perceived degree of major or minor mode (majorness) within that section, taken as a whole (one or several melodies together with any harmony present). In this paper we take a data-driven approach to modeling this property, learning it directly from data without giving an explicit definition or explicitly programming an algorithm. We collect annotations from musicians and show that majorness can be understood by them in an intuitive way, and we then model the property from the data using deep learning.
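To make the deep-learning framing concrete, below is a minimal sketch of the kind of setup the abstract describes: a small convolutional network that regresses a continuous majorness rating from a mel-spectrogram excerpt, trained against listener annotations. The abstract does not specify an architecture, so the network, input shapes, and names here are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch: regress a scalar "majorness" rating from a
# mel-spectrogram excerpt with a small CNN. All shapes and names are
# illustrative assumptions; the paper's actual model may differ.
import torch
import torch.nn as nn

class MajornessNet(nn.Module):
    def __init__(self, n_mels: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool over frequency and time
        )
        self.head = nn.Linear(32, 1)  # single continuous majorness score

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time_frames)
        h = self.features(spec).flatten(1)
        return self.head(h).squeeze(-1)

model = MajornessNet()
spec = torch.randn(4, 1, 128, 256)   # fake batch of spectrogram excerpts
ratings = torch.rand(4)              # fake listener ratings in [0, 1]
loss = nn.functional.mse_loss(model(spec), ratings)
loss.backward()                      # gradients for one training step
```

Because the target is an averaged perceptual rating rather than a binary major/minor label, a regression loss such as MSE is the natural fit in a sketch like this.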
