A Cross-Sentence Latent Variable Model for Semi-Supervised Text Sequence Matching

2019-09-24

Abstract: We present a latent variable model for predicting the relationship between a pair of text sequences. Unlike previous auto-encoding-based approaches that consider each sequence separately, our proposed framework utilizes both sequences within a single model by generating a sequence that has a given relationship with a source sequence. We further extend the cross-sentence generating framework to facilitate semi-supervised training. We also define novel semantic constraints that lead the decoder network to generate semantically plausible and diverse sequences. We demonstrate the effectiveness of the proposed model through quantitative and qualitative experiments, while achieving state-of-the-art results on semi-supervised natural language inference and paraphrase identification.
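
The abstract describes a label-conditioned, cross-sentence generative model with a latent variable. As a rough illustration only (not the authors' code), the sketch below assumes a conditional-VAE-style setup: a Gaussian latent code is inferred from the source sentence, the target sentence, and the relationship label, and the decoder reconstructs the target conditioned on the source, the label, and the latent code. All names (CrossSentenceCVAE), sizes, GRU encoders, and the classifier head are assumptions; the paper's semantic constraints and exact objective are not reproduced here.

```python
# Hypothetical sketch of a cross-sentence conditional VAE (assumed structure,
# not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossSentenceCVAE(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=128, hid_dim=256, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.src_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.tgt_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.label_emb = nn.Embedding(num_labels, hid_dim)
        # Posterior q(z | x_src, x_tgt, y): mean and log-variance of a Gaussian.
        self.to_mu = nn.Linear(3 * hid_dim, z_dim)
        self.to_logvar = nn.Linear(3 * hid_dim, z_dim)
        # Decoder p(x_tgt | x_src, y, z): GRU whose initial state mixes [src; y; z].
        self.z_to_hid = nn.Linear(2 * hid_dim + z_dim, hid_dim)
        self.dec = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)
        # Classifier q(y | x_src, x_tgt), used for the matching prediction.
        self.clf = nn.Linear(2 * hid_dim, num_labels)

    def encode(self, src, tgt):
        _, h_src = self.src_enc(self.embed(src))   # (1, B, H)
        _, h_tgt = self.tgt_enc(self.embed(tgt))
        return h_src.squeeze(0), h_tgt.squeeze(0)  # (B, H) each

    def forward(self, src, tgt, y):
        h_src, h_tgt = self.encode(src, tgt)
        y_emb = self.label_emb(y)
        enc = torch.cat([h_src, h_tgt, y_emb], dim=-1)
        mu, logvar = self.to_mu(enc), self.to_logvar(enc)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        h0 = torch.tanh(self.z_to_hid(torch.cat([h_src, y_emb, z], dim=-1))).unsqueeze(0)
        dec_out, _ = self.dec(self.embed(tgt[:, :-1]), h0)       # teacher forcing
        logits = self.out(dec_out)
        recon = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                tgt[:, 1:].reshape(-1))
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        clf_logits = self.clf(torch.cat([h_src, h_tgt], dim=-1))
        return recon + kl, clf_logits
```

In a semi-supervised setting of this kind, labeled pairs would combine the ELBO (recon + kl) with a cross-entropy loss on clf_logits against the gold label, while unlabeled pairs could be handled by weighting the generative loss over candidate labels with the classifier's predicted distribution, in the spirit of semi-supervised VAEs. This is an assumed training recipe; the paper's actual semi-supervised objective and semantic constraints may differ.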
