Personalizing Dialogue Agents via Meta-Learning

2019-09-23
Abstract: Existing personalized dialogue models use human-designed persona descriptions to improve dialogue consistency. Collecting such descriptions from existing dialogues is expensive and requires hand-crafted feature designs. In this paper, we propose to extend Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) to personalized dialogue learning without using any persona descriptions. Our model learns to quickly adapt to new personas by leveraging only a few dialogue samples collected from the same user, which is fundamentally different from conditioning the response on persona descriptions. Empirical results on the Persona-chat dataset (Zhang et al., 2018) indicate that our solution outperforms non-meta-learning baselines on automatic evaluation metrics, as well as in terms of human-evaluated fluency and consistency.

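The idea described in the abstract can be illustrated with a minimal first-order meta-learning sketch: treat each user as a task, adapt a shared initialization on a few of that user's dialogues (inner loop), and update the initialization based on how well the adapted model handles held-out dialogues from the same user (outer loop). The sketch below is an illustration under stated assumptions, not the paper's implementation: it uses a toy next-token GRU language model, randomly generated tensors in place of Persona-chat dialogues, a first-order approximation of MAML, and illustrative hyperparameters.

```python
# Minimal first-order MAML sketch for persona adaptation.
# Assumptions: ToyDialogueLM, sample_user_dialogues, and all hyperparameters
# are hypothetical placeholders, not from the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID = 1000, 64, 128
INNER_LR, META_LR, INNER_STEPS = 1e-2, 1e-3, 3


class ToyDialogueLM(nn.Module):
    """Stand-in response model: predicts the next token of a dialogue turn."""

    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        h, _ = self.rnn(self.emb(tokens))
        return self.out(h)                          # (batch, seq_len, vocab)


def lm_loss(model, tokens):
    """Teacher-forced cross-entropy on shifted token sequences."""
    logits = model(tokens[:, :-1])
    return F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))


def sample_user_dialogues(n_turns=4, seq_len=16):
    """Hypothetical loader: a few support/query dialogues from one user."""
    support = torch.randint(0, VOCAB, (n_turns, seq_len))
    query = torch.randint(0, VOCAB, (n_turns, seq_len))
    return support, query


meta_model = ToyDialogueLM()
meta_opt = torch.optim.Adam(meta_model.parameters(), lr=META_LR)

for meta_step in range(100):
    meta_opt.zero_grad()
    users_per_batch = 4
    for _ in range(users_per_batch):
        support, query = sample_user_dialogues()

        # Inner loop: adapt a copy of the shared initialization to one user
        # with a few SGD steps on that user's support dialogues.
        learner = copy.deepcopy(meta_model)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=INNER_LR)
        for _ in range(INNER_STEPS):
            inner_opt.zero_grad()
            lm_loss(learner, support).backward()
            inner_opt.step()

        # Outer loop (first-order approximation): evaluate the adapted model
        # on the same user's held-out dialogues and credit its gradient back
        # to the shared initialization.
        learner.zero_grad()
        lm_loss(learner, query).backward()
        for meta_p, p in zip(meta_model.parameters(), learner.parameters()):
            if meta_p.grad is None:
                meta_p.grad = p.grad.clone() / users_per_batch
            else:
                meta_p.grad += p.grad / users_per_batch
    meta_opt.step()
```

The full MAML objective would backpropagate through the inner-loop updates (second-order gradients); the first-order variant above keeps the sketch short while preserving the adapt-then-evaluate structure of meta-training on per-user dialogue samples.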