Multilabel Automated Recognition of Emotions Induced Through Music

by Fabio Paolizzo et al.

Achieving advances in the automatic recognition of emotions that music can induce requires accounting for the multiplicity and simultaneity of emotions. The core of our work is a comparison of different machine learning algorithms performing multilabel and multiclass classification. The study analyzes the implementation of the Geneva Emotional Music Scale 9 in the Emotify music dataset and the distribution of the data. The research goal is to identify the best methods toward defining the audio component of a new multimodal dataset for music emotion recognition.
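As a minimal sketch of the multilabel-versus-multiclass comparison described above (not the authors' code), the following Python example contrasts three common multilabel strategies on synthetic data shaped like the Emotify/GEMS-9 setup, where each excerpt can carry several of the nine GEMS emotion labels at once. Audio feature extraction is out of scope here, so X stands in for precomputed features, and the evaluation metrics are illustrative choices.

# Minimal sketch: comparing multilabel classification strategies on data
# shaped like Emotify/GEMS-9 (9 binary emotion labels per excerpt).
# Synthetic features stand in for real audio descriptors.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.multioutput import ClassifierChain
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import hamming_loss, f1_score

# 9 binary targets, one per GEMS-9 category (wonder, transcendence, ...).
X, Y = make_multilabel_classification(n_samples=400, n_features=40,
                                      n_classes=9, n_labels=3,
                                      random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25,
                                          random_state=0)

models = {
    # Binary relevance: one independent classifier per emotion label.
    "binary relevance (logreg)": OneVsRestClassifier(
        LogisticRegression(max_iter=1000)),
    # Classifier chain: models label co-occurrence (simultaneous emotions).
    "classifier chain (logreg)": ClassifierChain(
        LogisticRegression(max_iter=1000), random_state=0),
    # Random forests handle multilabel indicator targets natively.
    "random forest": RandomForestClassifier(n_estimators=200,
                                            random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, Y_tr)
    pred = model.predict(X_te)
    print(f"{name:28s} hamming={hamming_loss(Y_te, pred):.3f} "
          f"micro-F1={f1_score(Y_te, pred, average='micro'):.3f}")

The classifier chain is included because, unlike binary relevance, it conditions each label's prediction on the previous labels and can therefore capture the simultaneity of induced emotions that the study emphasizes.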

