MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

10/05/2018
by Soujanya Poria, et al.

Emotion recognition in conversations is a challenging Artificial Intelligence (AI) task. It has recently gained popularity due to its potential applications in many interesting AI tasks such as empathetic dialogue generation and user behavior understanding. To the best of our knowledge, no multimodal multi-party conversational dataset is available that contains more than two speakers in a dialogue. In this work, we propose the Multimodal EmotionLines Dataset (MELD), which we created by enhancing and extending the previously introduced EmotionLines dataset. MELD contains 13,708 utterances from 1,433 dialogues of the Friends TV series. MELD improves on other conversational emotion recognition datasets such as SEMAINE and IEMOCAP, as it consists of multi-party conversations, and the number of utterances in MELD is almost twice that of those two datasets. Every utterance in MELD is annotated with an emotion and a sentiment label. Utterances in MELD are multimodal, encompassing audio and visual modalities along with text. We have also addressed several shortcomings in EmotionLines and proposed a strong multimodal baseline. The baseline results show that both contextual and multimodal information play an important role in emotion recognition in conversations.
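To make the dataset structure described above concrete, here is a minimal sketch of how one might iterate over MELD's per-utterance annotations. It assumes a CSV release with columns named Utterance, Speaker, Emotion, Sentiment, and Dialogue_ID; these column names and the filename are assumptions for illustration and may differ in the actual distribution.

```python
import pandas as pd
from collections import Counter

# Hypothetical path to a MELD training split in CSV form;
# the actual filename in the release may differ.
CSV_PATH = "train_sent_emo.csv"

df = pd.read_csv(CSV_PATH)

# Group utterances by dialogue so each conversation keeps its turn
# order, preserving the multi-party structure (several speakers
# can appear within one dialogue).
dialogues = df.groupby("Dialogue_ID")

# Print the turns of the first dialogue with their labels.
for dia_id, turns in list(dialogues)[:1]:
    print(f"Dialogue {dia_id}:")
    for _, row in turns.iterrows():
        print(f"  [{row['Speaker']}] {row['Utterance']}"
              f" -> emotion={row['Emotion']}, sentiment={row['Sentiment']}")

# Per-class counts help check label imbalance before training a baseline.
print(Counter(df["Emotion"]))
```

A grouping like this is the natural unit for the contextual models the abstract mentions, since an utterance's emotion often depends on the preceding turns in the same dialogue.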

