
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

by Soujanya Poria, et al.
University of Michigan
National University of Singapore
Nanyang Technological University

Emotion recognition in conversations is a challenging Artificial Intelligence (AI) task. It has recently gained popularity owing to its potential applications in tasks such as empathetic dialogue generation and user behavior understanding. To the best of our knowledge, no multimodal multi-party conversational dataset, i.e., one containing more than two speakers in a dialogue, is currently available. In this work, we propose the Multimodal EmotionLines Dataset (MELD), which we created by enhancing and extending the previously introduced EmotionLines dataset. MELD contains 13,708 utterances from 1,433 dialogues of the Friends TV series. MELD improves on existing conversational emotion recognition datasets such as SEMAINE and IEMOCAP: it consists of multi-party conversations, and its number of utterances is almost twice that of either of these datasets. Every utterance in MELD is annotated with both an emotion and a sentiment label, and each utterance is multimodal, encompassing audio and visual modalities along with the text. We have also addressed several shortcomings of EmotionLines and propose a strong multimodal baseline. The baseline results show that both contextual and multimodal information play an important role in emotion recognition in conversations.
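The data layout the abstract describes (utterances grouped into multi-party dialogues, each carrying an emotion and a sentiment label) can be sketched as follows. This is a minimal, hypothetical in-memory example; the field names are illustrative and are not the official schema of the MELD release.

```python
from collections import Counter, defaultdict

# Hypothetical sample mirroring MELD's structure: each utterance belongs to
# a dialogue, names its speaker, and carries an emotion and a sentiment label.
utterances = [
    {"dialogue_id": 0, "speaker": "Joey",   "text": "Hey!",        "emotion": "joy",     "sentiment": "positive"},
    {"dialogue_id": 0, "speaker": "Monica", "text": "Hi.",         "emotion": "neutral", "sentiment": "neutral"},
    {"dialogue_id": 0, "speaker": "Ross",   "text": "I'm fine.",   "emotion": "sadness", "sentiment": "negative"},
    {"dialogue_id": 1, "speaker": "Rachel", "text": "No way!",     "emotion": "surprise", "sentiment": "negative"},
]

# Group utterances into dialogues; a multi-party dialogue is simply one
# whose utterances come from more than two distinct speakers.
dialogues = defaultdict(list)
for u in utterances:
    dialogues[u["dialogue_id"]].append(u)

multi_party = [
    d for d in dialogues.values()
    if len({u["speaker"] for u in d}) > 2
]

# Per-label counts, as one would compute for class-balance statistics.
emotion_counts = Counter(u["emotion"] for u in utterances)
```

Grouping by dialogue id first is what makes contextual modeling possible: a classifier can then condition each utterance's prediction on the preceding turns of the same conversation.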



Code Repositories


MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
