
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

2019-09-23
Abstract

Emotion recognition in conversations (ERC) is a challenging task that has recently gained popularity due to its potential applications. Until now, however, there has been no large-scale multimodal multi-party emotional conversational database containing more than two speakers per dialogue. To address this gap, we propose the Multimodal EmotionLines Dataset (MELD), an extension and enhancement of EmotionLines. MELD contains about 13,000 utterances from 1,433 dialogues from the TV series Friends. Each utterance is annotated with emotion and sentiment labels, and encompasses audio, visual, and textual modalities. We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations. The full dataset is available for use at http://affective-meld.github.io
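The abstract describes each utterance as carrying an emotion label and a sentiment label across audio, visual, and textual modalities, grouped into multi-party dialogues. The following is a minimal Python sketch of how such per-utterance records might be represented and regrouped by dialogue for context-aware ERC; the field names (speaker, dialogue_id, clip_path, etc.) and the sample data are illustrative assumptions, not the dataset's actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass

# Illustrative record for one MELD utterance, based on the abstract's
# description (emotion + sentiment labels; audio, visual, textual modalities).
# Field names are assumptions for this sketch, not the official release format.
@dataclass
class Utterance:
    dialogue_id: int   # which multi-party conversation this belongs to
    utterance_id: int  # position within the dialogue
    speaker: str       # speaker name (more than two speakers per dialogue)
    text: str          # textual modality
    emotion: str       # e.g. "joy", "anger", "neutral"
    sentiment: str     # "positive", "negative", or "neutral"
    clip_path: str     # path to the audio/visual clip (assumed layout)

def group_by_dialogue(utterances):
    """Group utterances into dialogues, preserving utterance order,
    so a context-aware ERC model can see the whole conversation."""
    dialogues = defaultdict(list)
    for utt in utterances:
        dialogues[utt.dialogue_id].append(utt)
    for utts in dialogues.values():
        utts.sort(key=lambda u: u.utterance_id)
    return dict(dialogues)

# Tiny usage example with made-up data.
sample = [
    Utterance(0, 1, "Joey", "How you doin'?", "joy", "positive", "dia0_utt1.mp4"),
    Utterance(0, 0, "Ross", "We were on a break!", "anger", "negative", "dia0_utt0.mp4"),
]
for dia_id, utts in group_by_dialogue(sample).items():
    print(dia_id, [(u.speaker, u.emotion) for u in utts])
```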


