Multimodal and Multi-view Models for Emotion Recognition

2019-09-24

Abstract: Studies on emotion recognition (ER) show that combining lexical and acoustic information results in more robust and accurate models. The majority of studies focus on settings where both modalities are available in training and evaluation. However, in practice this is not always the case; obtaining ASR output may represent a bottleneck in a deployment pipeline due to computational complexity or privacy-related constraints. To address this challenge, we study the problem of efficiently combining acoustic and lexical modalities during training while still providing a deployable acoustic model that does not require lexical inputs. We first experiment with multimodal models and two attention mechanisms to assess the extent of the benefits that lexical information can provide. Then, we frame the task as a multi-view learning problem to induce semantic information from a multimodal model into our acoustic-only network using a contrastive loss function. Our multimodal model outperforms the previous state of the art reported on the USC-IEMOCAP dataset with lexical and acoustic information. Additionally, our multi-view-trained acoustic network significantly surpasses models that have been trained exclusively with acoustic features.
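The abstract describes inducing semantic information from the multimodal model into the acoustic-only network with a contrastive loss between the two views. The sketch below is a minimal, hypothetical illustration of such a view-alignment loss in PyTorch; the cosine similarity, margin value, in-batch negatives, and embedding sizes are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multiview_contrastive_loss(acoustic_emb: torch.Tensor,
                               multimodal_emb: torch.Tensor,
                               margin: float = 0.5) -> torch.Tensor:
    """Pull each acoustic-only embedding toward the multimodal embedding of the
    same utterance, and push it away from the other utterances in the batch
    (hypothetical hinge-style formulation)."""
    a = F.normalize(acoustic_emb, dim=-1)    # (batch, dim)
    m = F.normalize(multimodal_emb, dim=-1)  # (batch, dim)
    sim = a @ m.t()                          # pairwise cosine similarities, (batch, batch)
    pos = sim.diag()                         # similarity of matched (same-utterance) pairs
    batch = sim.size(0)
    neg = (sim.sum(dim=1) - pos) / (batch - 1)   # mean similarity to mismatched pairs
    return F.relu(margin - pos + neg).mean()     # margin-based contrastive objective

# Example usage with random embeddings standing in for the two views.
acoustic = torch.randn(8, 256)
multimodal = torch.randn(8, 256)
loss = multiview_contrastive_loss(acoustic, multimodal)
```

At deployment time only the acoustic encoder that produced `acoustic_emb` would be kept, which is consistent with the abstract's goal of a deployable model that needs no lexical inputs.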
