Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization

2020-02-13

Abstract

Responses generated by neural conversational models tend to lack informativeness and diversity. We present a novel adversarial learning method, Adversarial Information Maximization (AIM), to address these two related but distinct problems. To foster response diversity, we leverage adversarial training, which allows distributional matching of synthetic and real responses. To improve informativeness, we explicitly optimize a variational lower bound on the pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.
