Boosting Dialog Response Generation

2019-09-20

Abstract: Neural models have become one of the most important approaches to dialog response generation. However, they still tend to generate the most common and generic responses in the corpus all the time. To address this problem, we designed an iterative training process and ensemble method based on boosting. We combined our method with different training and decoding paradigms as the base model, including mutual-information-based decoding and reward-augmented maximum likelihood learning. Empirical results show that our approach can significantly improve the diversity and relevance of the responses generated by all base models, backed by objective measurements and human evaluation.
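
The abstract does not spell out the boosting procedure itself, but the general shape of such a scheme can be sketched. The toy Python sketch below is only an illustration of a boosting-style iterative training and ensembling loop under assumed details: the weighted table "model", the exponential re-weighting rule, and the averaged ensemble score are all illustrative stand-ins, not the authors' actual method or code.

```python
"""Minimal sketch of a boosting-style training loop for response generation.

Assumptions (not from the paper): a toy weighted context->response table
stands in for a neural base model; pairs the current ensemble models poorly
get their training weight increased; the ensemble averages member scores.
"""
from collections import defaultdict
import math

# Toy training data: (context, response) pairs.
DATA = [
    ("how are you", "i am fine"),
    ("how are you", "great, thanks"),
    ("what is your name", "i am a bot"),
    ("what is your name", "call me alice"),
    ("how are you", "i am fine"),  # frequent, generic-looking pair
]

def train_base_model(data, weights):
    """Fit a weighted table P(response | context): a stand-in for an
    encoder-decoder trained with weighted maximum likelihood."""
    counts = defaultdict(lambda: defaultdict(float))
    for (ctx, resp), w in zip(data, weights):
        counts[ctx][resp] += w
    model = {}
    for ctx, resp_counts in counts.items():
        total = sum(resp_counts.values())
        model[ctx] = {r: c / total for r, c in resp_counts.items()}
    return model

def model_logprob(model, ctx, resp):
    """Log-probability the toy model assigns to a response (floored for unseen pairs)."""
    return math.log(model.get(ctx, {}).get(resp, 1e-6))

def ensemble_logprob(models, ctx, resp):
    """Average log-probability across the ensemble members trained so far."""
    return sum(model_logprob(m, ctx, resp) for m in models) / len(models)

def boost(data, num_rounds=3, step=1.0):
    """Iteratively train base models, up-weighting pairs that the current
    ensemble assigns low probability, so later members focus on the less
    common, more specific responses."""
    weights = [1.0] * len(data)
    models = []
    for _ in range(num_rounds):
        models.append(train_base_model(data, weights))
        scores = [ensemble_logprob(models, ctx, resp) for ctx, resp in data]
        weights = [w * math.exp(-step * s) for w, s in zip(weights, scores)]
        total = sum(weights)
        weights = [w * len(data) / total for w in weights]  # normalise
    return models

def respond(models, ctx, candidates):
    """Pick the candidate response the ensemble scores highest."""
    return max(candidates, key=lambda r: ensemble_logprob(models, ctx, r))

if __name__ == "__main__":
    ensemble = boost(DATA)
    print(respond(ensemble, "how are you", ["i am fine", "great, thanks"]))
```

The re-weighting step is the key design choice in any such scheme: examples the current ensemble already covers well contribute less to the next member's training objective, which is one way to push later members away from the most common, generic responses.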
