Poly-encoders: architectures and pre-training strategies for fast and accurate multi-sentence scoring

2020-01-02

Abstract

The use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on four tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.
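
To make the architectural contrast concrete, below is a minimal sketch of the poly-encoder scoring step described in the abstract: a small set of learned codes attends over context token outputs to produce global features, and the candidate embedding attends over those features before a final dot-product score. The function name, tensor shapes, and the `codes` parameter are illustrative assumptions, not the paper's reference implementation (the paper builds these components on top of pre-trained BERT encoders).

```python
import torch
import torch.nn.functional as F


def poly_encoder_score(ctx_tokens: torch.Tensor,
                       cand_emb: torch.Tensor,
                       codes: torch.Tensor) -> torch.Tensor:
    """Score one (context, candidate) pair with poly-encoder-style attention.

    ctx_tokens: (T, d) per-token outputs of the context encoder
    cand_emb:   (d,)   single vector for the candidate (e.g. its first token)
    codes:      (m, d) learned query codes, m << T (hypothetical parameter)
    """
    # m learned codes attend over the T token outputs, reducing the
    # context to m global features rather than T token-level ones.
    attn = F.softmax(codes @ ctx_tokens.T, dim=-1)   # (m, T)
    global_feats = attn @ ctx_tokens                 # (m, d)

    # The candidate embedding then attends over the m global features,
    # giving a single context vector conditioned on this candidate.
    weights = F.softmax(global_feats @ cand_emb, dim=-1)  # (m,)
    ctx_vec = weights @ global_feats                       # (d,)

    # The final score is a dot product, so candidate embeddings can be
    # pre-computed and cached, unlike with a cross-encoder.
    return ctx_vec @ cand_emb


# Example with random tensors: T=128 context tokens, d=768, m=16 codes.
score = poly_encoder_score(torch.randn(128, 768),
                           torch.randn(768),
                           torch.randn(16, 768))
```

Because only the final interaction depends on the candidate, this sits between the bi-encoder (candidate and context never interact until a single dot product) and the cross-encoder (full joint self-attention over the concatenated pair), which is the speed/accuracy trade-off the abstract claims.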
