
Memory Consolidation for Contextual Spoken Language Understanding with Dialogue Logistic Inference

2019-09-23
Abstract: Dialogue contexts have proven helpful in spoken language understanding (SLU) systems and are typically encoded with explicit memory representations. However, most previous models learn the context memory with only one objective, maximizing SLU performance, leaving the context memory under-exploited. In this paper, we propose a new dialogue logistic inference (DLI) task to consolidate the context memory jointly with SLU in a multi-task framework. DLI is defined as sorting a shuffled dialogue session into its original logical order, and it shares the same memory encoder and retrieval mechanism as the SLU model. Our experimental results show that various popular contextual SLU models benefit from our approach, with especially impressive improvements in slot filling.
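To make the DLI task concrete, the sketch below shows one plausible way to construct its training examples: shuffle the utterances of a dialogue session and ask the model to recover each utterance's original position. This is a hypothetical data-construction sketch under our own assumptions, not the authors' implementation; the function name and example session are invented for illustration.

```python
import random

def make_dli_example(session, seed=0):
    """Build one dialogue logistic inference (DLI) example from a
    dialogue session (a list of utterance strings).

    Returns (inputs, targets): `inputs` is the session in shuffled
    order, and targets[j] is the original index of inputs[j], so
    sorting `inputs` by `targets` restores the session's logical order.
    Illustrative sketch only, not the paper's code.
    """
    rng = random.Random(seed)
    order = list(range(len(session)))
    shuffled = order[:]
    # Re-shuffle until the order actually changes (for sessions > 1 turn).
    while shuffled == order and len(session) > 1:
        rng.shuffle(shuffled)
    inputs = [session[i] for i in shuffled]
    targets = shuffled  # original index of each shuffled utterance
    return inputs, targets

# Hypothetical 4-turn session.
session = ["Hi, book a flight.", "Where to?", "To Boston.", "Done."]
inputs, targets = make_dli_example(session, seed=1)
restored = [utt for _, utt in sorted(zip(targets, inputs))]
assert restored == session  # sorting by targets recovers the session
```

In the multi-task setting described in the abstract, a model would encode `inputs` with the same memory encoder used for SLU and be trained to predict `targets`, so the ordering objective and the SLU objective jointly shape the context memory.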


