A Multimodal Approach towards Emotion Recognition of Music using Audio and Lyrical Content

10/10/2018
by Aniruddha Bhattacharya, et al.

We propose MoodNet, a deep convolutional neural network architecture that predicts the emotion associated with a piece of music from its audio and lyrical content. We evaluate architectures consisting of varying numbers of two-dimensional convolutional and subsampling layers, followed by dense layers. We use mel-spectrograms to represent the audio content and word embeddings (specifically, 100-dimensional word vectors) to represent the textual content of the lyrics. Input from both modalities is fed into the MoodNet architecture; the outputs of the two branches are then fused in a fully connected layer, and a softmax classifier predicts the emotion category. Using F1-score as our metric, our results show excellent performance of MoodNet on the two datasets we experimented with: the MIREX Multimodal dataset and the Million Song Dataset. Our experiments support the hypothesis that more complex models perform better with more training data. We also observe that lyrics outperform audio as a more expressive modality, and we conclude that combining features from multiple modalities yields superior prediction performance compared to using a single modality as input.
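The two-branch design described above (2-D convolutions over a mel-spectrogram, a parallel branch over 100-dimensional word vectors, and a fused dense layer with a softmax classifier) can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation; the layer widths, input shapes, and the hypothetical class `MoodNetSketch` are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class MoodNetSketch(nn.Module):
    """Illustrative sketch (not the paper's exact architecture): an audio
    branch of 2-D conv + subsampling layers over a mel-spectrogram, a lyrics
    branch over a sequence of 100-dim word vectors, fused in a dense layer
    followed by a softmax over emotion categories."""

    def __init__(self, n_classes=4, emb_dim=100):
        super().__init__()
        # Audio branch: stacked 2-D convolution + max-pooling (subsampling)
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # Lyrics branch: 1-D convolution over the word-embedding sequence
        self.lyrics = nn.Sequential(
            nn.Conv1d(emb_dim, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),   # -> (batch, 64)
        )
        # Fusion: concatenate both modality features, then dense layers
        self.fuse = nn.Sequential(
            nn.Linear(32 + 64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, mel, emb):
        # mel: (batch, 1, n_mels, n_frames); emb: (batch, emb_dim, seq_len)
        fused = torch.cat([self.audio(mel), self.lyrics(emb)], dim=1)
        return torch.softmax(self.fuse(fused), dim=1)  # class probabilities

model = MoodNetSketch()
# Dummy batch: 2 songs, 96 mel bands x 128 frames, 50 lyric tokens
probs = model(torch.randn(2, 1, 96, 128), torch.randn(2, 100, 50))
```

In training one would normally drop the in-model softmax and feed the raw logits to `nn.CrossEntropyLoss`; it is applied here only to show the probability output the abstract describes.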

Related research

04/13/2021 · Comparison and Analysis of Deep Audio Embeddings for Music Emotion Recognition
Emotion is a complicated notion present in music that is hard to capture...

09/18/2017 · Continuous Multimodal Emotion Recognition Approach for AVEC 2017
This paper reports the analysis of audio and visual features in predicti...

05/02/2018 · Investigating Audio, Visual, and Text Fusion Methods for End-to-End Automatic Personality Prediction
We propose a tri-modal architecture to predict Big Five personality trai...

03/30/2019 · Learning Affective Correspondence between Music and Image
We introduce the problem of learning affective correspondence between au...

09/23/2020 · Cosine Similarity of Multimodal Content Vectors for TV Programmes
Multimodal information originates from a variety of sources: audiovisual...

06/17/2019 · Modeling Music Modality with a Key-Class Invariant Pitch Chroma CNN
This paper presents a convolutional neural network (CNN) that uses input...
