
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

10/05/2018
by Soujanya Poria, et al.
University of Michigan
National University of Singapore
Nanyang Technological University
SenticNet

Emotion recognition in conversations is a challenging Artificial Intelligence (AI) task. It has recently gained popularity owing to its potential applications in tasks such as empathetic dialogue generation and user behavior understanding. To the best of our knowledge, no multimodal multi-party conversational dataset is available that contains more than two speakers per dialogue. In this work, we propose the Multimodal EmotionLines Dataset (MELD), created by enhancing and extending the previously introduced EmotionLines dataset. MELD contains 13,708 utterances from 1,433 dialogues of the TV series Friends. MELD improves on other conversational emotion recognition datasets such as SEMAINE and IEMOCAP: it consists of multi-party conversations, and it contains almost twice as many utterances as either of those datasets. Every utterance in MELD is annotated with both an emotion and a sentiment label, and utterances are multimodal, encompassing audio and visual modalities along with the text. We also address several shortcomings of EmotionLines and propose a strong multimodal baseline. The baseline results show that both contextual and multimodal information play an important role in emotion recognition in conversations.
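Because every MELD utterance carries both an emotion and a sentiment label and utterances are grouped into multi-party dialogues, a few lines of pandas are enough to explore the annotation scheme. The sketch below is a minimal example, assuming the CSV layout distributed with the MELD repository; the file name train_sent_emo.csv and the column names Utterance, Speaker, Emotion, Sentiment, and Dialogue_ID are assumptions drawn from that distribution, not guaranteed by this page.

# Minimal exploration of MELD's annotations with pandas.
# Assumption: the train-split CSV from the MELD repository, with columns
# Utterance, Speaker, Emotion, Sentiment, Dialogue_ID.
import pandas as pd

df = pd.read_csv("train_sent_emo.csv")

# Each row is one utterance; a dialogue is a group of rows sharing Dialogue_ID.
n_utterances = len(df)
n_dialogues = df["Dialogue_ID"].nunique()
print(f"{n_utterances} utterances across {n_dialogues} dialogues")

# Every utterance has both an emotion and a sentiment label.
print(df["Emotion"].value_counts())    # e.g. neutral, joy, anger, ...
print(df["Sentiment"].value_counts())  # positive / negative / neutral

# Multi-party check: count distinct speakers per dialogue.
speakers_per_dialogue = df.groupby("Dialogue_ID")["Speaker"].nunique()
print("dialogues with more than two speakers:",
      (speakers_per_dialogue > 2).sum())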

Related Research

10/15/2021

Multimodal Emotion-Cause Pair Extraction in Conversations

Emotion cause analysis has received considerable attention in recent yea...
04/09/2021

AdCOFE: Advanced Contextual Feature Extraction in Conversations for emotion classification

Emotion recognition in conversations is an important step in various vir...
05/13/2022

Multimodal Conversational AI: A Survey of Datasets and Approaches

As humans, we experience the world with all our senses or modalities (so...
12/10/2020

Look Before you Speak: Visually Contextualized Utterances

While most conversational AI systems focus on textual dialogue only, con...
05/25/2022

Empathic Conversations: A Multi-level Dataset of Contextualized Conversations

Empathy is a cognitive and emotional reaction to an observed situation o...
06/02/2020

Situated and Interactive Multimodal Conversations

Next generation virtual assistants are envisioned to handle multimodal i...

Code Repositories

MELD

MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

