
Variational Pretraining for Semi-supervised Text Classification

2019-09-24

Abstract: We introduce VAMPIRE, a lightweight pretraining framework for effective text classification when data and computing resources are limited. We pretrain a unigram document model as a variational autoencoder on in-domain, unlabeled data and use its internal states as features in a downstream classifier. Empirically, we show the relative strength of VAMPIRE against computationally expensive contextual embeddings and other popular semi-supervised baselines under low-resource settings. We also find that fine-tuning to in-domain data is crucial to achieving decent performance from contextual embeddings when working with limited supervision. We accompany this paper with code to pretrain and use VAMPIRE embeddings in downstream tasks.
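Since the abstract only summarizes the approach at a high level, here is a minimal sketch of the core idea: a variational autoencoder over unigram (bag-of-words) document vectors, pretrained on unlabeled in-domain text, whose internal states are then reused as features in a downstream classifier. This is an illustrative PyTorch reconstruction under stated assumptions, not the authors' released implementation; the class name `BowVAE`, the layer sizes, and the `features` helper are all assumptions.

```python
# Illustrative sketch of the VAMPIRE idea (not the authors' code): a VAE over
# bag-of-words document vectors whose internal states become downstream features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BowVAE(nn.Module):
    def __init__(self, vocab_size: int, hidden_dim: int = 128, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # The decoder reconstructs the document as a multinomial over the vocabulary.
        self.decoder = nn.Linear(latent_dim, vocab_size)

    def forward(self, bow: torch.Tensor) -> torch.Tensor:
        h = self.encoder(bow)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients w.r.t. mu, logvar.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        log_probs = F.log_softmax(self.decoder(z), dim=-1)
        # Negative ELBO: bag-of-words reconstruction term plus KL to a standard normal.
        recon = -(bow * log_probs).sum(dim=-1)
        kl = -0.5 * (1.0 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)
        return (recon + kl).mean()

    @torch.no_grad()
    def features(self, bow: torch.Tensor) -> torch.Tensor:
        # Internal states reused as fixed features for a downstream classifier.
        h = self.encoder(bow)
        return torch.cat([h, self.to_mu(h)], dim=-1)
```

Under this reading of the abstract, the model is first trained by minimizing the returned loss on unlabeled in-domain bag-of-words vectors; `features(bow)` is then concatenated to a small supervised classifier's input, which is where the semi-supervised gain in low-resource settings would come from.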

