T-CVAE: Transformer-Based Conditioned Variational Autoencoder for Story Completion

2019-10-10
Abstract: Story completion is a challenging task of generating the missing plot for an incomplete story, which requires not only understanding of but also inference from the given contextual clues. In this paper, we present a novel conditional variational autoencoder based on the Transformer for missing-plot generation. Our model uses shared attention layers for the encoder and decoder, which make the most of the contextual clues, and a latent variable for learning the distribution of coherent story plots. By drawing samples from the learned distribution, diverse and reasonable plots can be generated. Both automatic and manual evaluations show that our model generates better story plots than state-of-the-art models in terms of readability, diversity and coherence.
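To make the architecture described in the abstract concrete, below is a minimal, illustrative sketch of a Transformer-based conditional VAE for story completion. It is not the authors' T-CVAE: in particular, the paper's shared attention layers are replaced here by standard separate PyTorch encoder/decoder stacks, and the vocabulary size, dimensions, pooling, and the way the latent variable is injected into the decoder memory are all assumptions made for this example.

```python
# Illustrative sketch only: a conditional VAE over a Transformer, assuming a
# mean-pooled context vector, a Gaussian prior/posterior over z, and z injected
# as an extra decoder memory slot. These choices are NOT taken from the paper.
import torch
import torch.nn as nn


class TransformerCVAE(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, nhead=8,
                 num_layers=4, latent_dim=64, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))

        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)

        # Posterior q(z | context, missing plot) and prior p(z | context),
        # each parameterized by a mean and a log-variance.
        self.post_net = nn.Linear(2 * d_model, 2 * latent_dim)
        self.prior_net = nn.Linear(d_model, 2 * latent_dim)
        self.z_to_mem = nn.Linear(latent_dim, d_model)
        self.out = nn.Linear(d_model, vocab_size)

    def _encode(self, tokens):
        # Encode tokens and mean-pool the states into a single summary vector.
        h = self.encoder(self.embed(tokens) + self.pos[:, :tokens.size(1)])
        return h, h.mean(dim=1)

    def forward(self, context, target):
        ctx_states, ctx_vec = self._encode(context)
        _, tgt_vec = self._encode(target)

        post_mu, post_logvar = self.post_net(
            torch.cat([ctx_vec, tgt_vec], dim=-1)).chunk(2, dim=-1)
        prior_mu, prior_logvar = self.prior_net(ctx_vec).chunk(2, dim=-1)

        # Reparameterization trick: sample z from the posterior during training
        # (at inference time one would sample from the prior instead).
        z = post_mu + torch.randn_like(post_mu) * torch.exp(0.5 * post_logvar)

        # Condition the decoder by prepending z to the encoder states.
        memory = torch.cat([self.z_to_mem(z).unsqueeze(1), ctx_states], dim=1)
        tgt = self.embed(target) + self.pos[:, :target.size(1)]
        mask = nn.Transformer.generate_square_subsequent_mask(target.size(1))
        logits = self.out(self.decoder(tgt, memory, tgt_mask=mask))

        # KL(q || p) between two diagonal Gaussians, per batch element.
        kl = 0.5 * (prior_logvar - post_logvar
                    + (post_logvar.exp() + (post_mu - prior_mu) ** 2)
                    / prior_logvar.exp() - 1).sum(dim=-1)
        return logits, kl


# Usage: the CVAE objective is token reconstruction loss plus the KL term.
model = TransformerCVAE()
context = torch.randint(0, 10000, (2, 40))   # observed story sentences
target = torch.randint(0, 10000, (2, 20))    # missing plot (teacher forcing)
logits, kl = model(context, target)
recon = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), target.reshape(-1))
loss = recon + kl.mean()
```

Sampling different values of z from the learned prior at generation time is what yields the diverse plots the abstract refers to.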

