GENERALIZATION THROUGH MEMORIZATION: NEAREST NEIGHBOR LANGUAGE MODELS

2019-12-31

Abstract

We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this augmentation to a strong WIKITEXT-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state-of-the-art perplexity of 15.79 – a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail.
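To make the interpolation described above concrete, the following is a minimal sketch of the kNN-LM prediction step, assuming a datastore of (context embedding, next-token id) pairs built from the pre-trained LM. Function and parameter names (e.g., `knn_lm_probs`, `k=8`, `lam=0.25`) are illustrative assumptions, not the authors' released implementation, and a brute-force distance search stands in for the approximate nearest neighbor index used at scale.

```python
import numpy as np

def knn_lm_probs(query_emb, lm_probs, keys, values, vocab_size, k=8, lam=0.25):
    """Interpolate a pre-trained LM distribution with a kNN distribution.

    query_emb: embedding of the current context from the pre-trained LM, shape (d,)
    lm_probs:  next-token distribution from the pre-trained LM, shape (vocab_size,)
    keys:      datastore context embeddings, shape (N, d)
    values:    next-token ids observed after each stored context, shape (N,)
    """
    # Retrieve the k nearest stored contexts by squared L2 distance
    # in the LM embedding space (brute force for illustration).
    dists = np.sum((keys - query_emb) ** 2, axis=1)
    nn = np.argsort(dists)[:k]

    # Softmax over negative distances gives a weight per retrieved neighbor.
    logits = -dists[nn]
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Aggregate neighbor weights onto their target tokens to form p_kNN.
    knn_probs = np.zeros(vocab_size)
    for w, v in zip(weights, values[nn]):
        knn_probs[v] += w

    # Linear interpolation: p = (1 - lam) * p_LM + lam * p_kNN.
    return (1.0 - lam) * lm_probs + lam * knn_probs
```

Because the datastore is only consulted at inference time, swapping in keys and values built from a different corpus changes the kNN component without retraining the underlying LM, which is how the domain adaptation result in the abstract is obtained.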
