TRANSFORMER-XH: MULTI-HOP QUESTION ANSWERING WITH EXTRA HOP ATTENTION

2020-01-02

Abstract

Transformers have obtained significant success in modeling natural language as a sequence of text tokens. However, in many real-world scenarios, textual data inherently exhibits structures beyond a linear sequence, such as trees and graphs; an important case is multi-hop question answering, where the evidence required to answer a question is scattered across multiple related documents. This paper presents Transformer-XH, which uses eXtra Hop attention to enable the intrinsic modeling of structured texts in a fully data-driven way. Its new attention mechanism naturally “hops” across the connected text sequences in addition to attending over tokens within each sequence. Thus, Transformer-XH better answers multi-hop questions by propagating information between multiple documents, constructing global contextualized representations, and jointly reasoning over multiple pieces of evidence. This leads to a simpler multi-hop QA system which outperforms the previous state of the art on the HotpotQA FullWiki setting by large margins.
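The abstract describes the hop attention only at a high level. Below is a minimal sketch of what a single extra-hop attention step over a graph of linked text sequences might look like, assuming a standard scaled dot-product form; the function name, the `adjacency` mask, and the projection matrices `w_q`, `w_k`, `w_v` are hypothetical stand-ins, not the authors' implementation. In the actual model this graph-level attention is combined with ordinary in-sequence token attention, as stated in the abstract.

```python
import torch
import torch.nn.functional as F

def extra_hop_attention(hop_states, adjacency, w_q, w_k, w_v):
    """Sketch of one extra-hop attention step over a document graph.

    hop_states: (num_nodes, dim) -- one "hop" representation per linked text sequence
    adjacency:  (num_nodes, num_nodes) -- 1.0 where an edge links two sequences
    w_q, w_k, w_v: (dim, dim) projection matrices (hypothetical parameters)
    """
    q = hop_states @ w_q                       # queries from each node's hop state
    k = hop_states @ w_k                       # keys from neighbouring hop states
    v = hop_states @ w_v                       # values propagated along edges
    scores = (q @ k.T) / (q.shape[-1] ** 0.5)  # scaled dot-product scores
    scores = scores.masked_fill(adjacency == 0, float("-inf"))  # attend only to linked nodes
    attn = F.softmax(scores, dim=-1)
    return attn @ v                            # hop states updated with neighbour evidence

# Toy usage: 3 linked passages with 8-dimensional hop states.
dim = 8
hop_states = torch.randn(3, dim)
adjacency = torch.tensor([[1., 1., 0.],
                          [1., 1., 1.],
                          [0., 1., 1.]])
w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
updated = extra_hop_attention(hop_states, adjacency, w_q, w_k, w_v)
print(updated.shape)  # torch.Size([3, 8])
```

The masked softmax restricts each sequence to attend only to sequences it is linked to, which is how information propagates across documents rather than staying within one passage.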
