
Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers

2019-09-20
Abstract: State-of-the-art solutions for extracting multiple entity-relations from an input paragraph typically require multiple encoding passes over the input. This paper proposes a new solution that completes the multiple entity-relations extraction task with only a single encoding pass over the input corpus, and achieves new state-of-the-art accuracy on the ACE 2005 benchmark. The solution is built on top of pre-trained self-attentive (Transformer) models. Because the method computes all relations at once in a single pass, it scales easily to larger datasets, which makes it more practical for real-world applications.
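To make the one-pass idea concrete, below is a minimal sketch, not the authors' exact architecture: the module layout, mean-pooling choice, and all sizes are illustrative assumptions. The paragraph is encoded once by a Transformer, each entity span is pooled from the shared token states, and every queried entity pair is classified from those pooled vectors, so scoring more pairs adds no extra encoding passes.

```python
import torch
import torch.nn as nn

class OnePassRelationExtractor(nn.Module):
    """Sketch: encode a paragraph once, then classify all entity pairs."""

    def __init__(self, vocab_size=30522, hidden=256, num_relations=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, token_ids, entity_spans, pairs):
        # Single encoding pass over the whole paragraph.
        h = self.encoder(self.embed(token_ids))[0]      # (seq_len, hidden)
        # Mean-pool each entity mention into one vector (an assumption;
        # other span-pooling schemes would work here too).
        ents = [h[s:e].mean(dim=0) for s, e in entity_spans]
        # Classify every queried entity pair from the shared encoding.
        feats = torch.stack([torch.cat([ents[i], ents[j]]) for i, j in pairs])
        return self.classifier(feats)                   # (len(pairs), num_relations)

# Usage: one forward pass scores all candidate pairs at once.
model = OnePassRelationExtractor()
ids = torch.randint(0, 30522, (1, 32))                  # one tokenized paragraph
spans = [(2, 4), (10, 12), (20, 23)]                    # token spans of 3 entities
pairs = [(0, 1), (0, 2), (1, 2)]
logits = model(ids, spans, pairs)                       # shape: (3, 7)
```

By contrast, a multi-pass baseline would re-encode the paragraph once per entity pair, so its encoding cost grows quadratically with the number of entities rather than staying constant.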

