EMOPIA: A Multi-Modal Pop Piano Dataset For Emotion Recognition and Emotion-based Music Generation

08/03/2021 ∙ by Hsiao-Tzu Hung, et al.
While there are many music datasets with emotion labels in the literature, they cannot be used for research on symbolic-domain music analysis or generation, as they usually contain audio files only. In this paper, we present the EMOPIA (pronounced `yee-mò-pi-uh') dataset, a shared multi-modal (audio and MIDI) database focusing on perceived emotion in pop piano music, to facilitate research on various tasks related to music emotion. The dataset contains 1,087 music clips from 387 songs, with clip-level emotion labels annotated by four dedicated annotators. Since the clips are not restricted to one per song, they can also be used for song-level analysis. We present the methodology for building the dataset, covering the song list curation, clip selection, and emotion annotation processes. Moreover, we prototype use cases in clip-level music emotion classification and emotion-based symbolic music generation by training and evaluating corresponding models on the dataset. The results demonstrate the potential of EMOPIA for future exploration of piano emotion-related MIR tasks.
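Because EMOPIA provides clip-level emotion labels while allowing several clips from the same song, a common pitfall when prototyping the clip-level classification task is letting clips from one song land in both training and test sets. The following is a minimal sketch, not the authors' code, of a song-level train/test split that avoids such leakage; the metadata file name and the clip_id / song_id / emotion column names are assumptions for illustration, not the dataset's actual layout.

    # Minimal sketch (assumed metadata layout, not EMOPIA's actual files):
    # split clips at the song level so that no song appears in both sets.
    import csv
    import random
    from collections import defaultdict

    def load_clip_labels(metadata_path):
        """Read clip annotations from a hypothetical CSV with columns
        clip_id, song_id, emotion (e.g. an emotion-class label per clip)."""
        clips = []
        with open(metadata_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                clips.append((row["clip_id"], row["song_id"], row["emotion"]))
        return clips

    def song_level_split(clips, test_ratio=0.2, seed=0):
        """Group clips by song, then assign whole songs to train or test,
        so clips of the same song never leak across the split."""
        by_song = defaultdict(list)
        for clip_id, song_id, emotion in clips:
            by_song[song_id].append((clip_id, emotion))
        songs = sorted(by_song)
        random.Random(seed).shuffle(songs)
        n_test = int(len(songs) * test_ratio)
        test_songs = set(songs[:n_test])
        train = [c for s in songs[n_test:] for c in by_song[s]]
        test = [c for s in test_songs for c in by_song[s]]
        return train, test

    if __name__ == "__main__":
        clips = load_clip_labels("emopia_metadata.csv")  # hypothetical path
        train, test = song_level_split(clips)
        print(f"{len(train)} training clips, {len(test)} test clips")

The same song-level grouping also supports the song-level analyses mentioned above, since by_song collects every labelled clip belonging to a given song.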
