Token-level Dynamic Self-Attention Network for Multi-Passage Reading Comprehension
Abstract
Multi-passage reading comprehension requires the ability to combine cross-passage information and reason over multiple passages to infer the answer. In this paper, we introduce the Dynamic Self-attention Network (DynSAN) for the multi-passage reading comprehension task, which processes cross-passage information at the token level while avoiding substantial computational cost. The core module of the dynamic self-attention is a proposed gated token selection mechanism, which dynamically selects important tokens from a sequence. The selected tokens then attend to each other via self-attention to model long-range dependencies. In addition, convolutional layers are combined with the dynamic self-attention to enhance the model’s capacity for extracting local semantics. Experimental results show that the proposed DynSAN achieves new state-of-the-art performance on the SearchQA, Quasar-T, and WikiHop datasets. Further ablation studies also validate the effectiveness of our model components.
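To make the described mechanism concrete, the following is a minimal PyTorch sketch of the core idea: a gate scores each token, the top-k tokens attend to one another via self-attention, and the attended representations are scattered back into the sequence. The class name DynamicSelfAttention, the sigmoid top-k gate, the default k=64, and the use of nn.MultiheadAttention are our own illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn


class DynamicSelfAttention(nn.Module):
    """Sketch of gated token selection followed by self-attention
    over only the selected tokens (assumed structure, not the
    authors' reference implementation)."""

    def __init__(self, d_model: int, num_heads: int = 4, k: int = 64):
        super().__init__()
        self.k = k  # number of tokens kept by the gate (hypothetical default)
        self.gate = nn.Linear(d_model, 1)  # scores each token's importance
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, d_model = x.shape
        scores = torch.sigmoid(self.gate(x)).squeeze(-1)   # (batch, seq_len)
        k = min(self.k, seq_len)
        top_scores, top_idx = scores.topk(k, dim=1)        # pick top-k tokens
        idx = top_idx.unsqueeze(-1).expand(-1, -1, d_model)
        selected = x.gather(1, idx)                        # (batch, k, d_model)
        # Scale by the gate so token selection stays differentiable
        # with respect to the gating scores.
        selected = selected * top_scores.unsqueeze(-1)
        # Self-attention only among the k selected tokens, so the cost
        # is O(k^2) rather than O(seq_len^2).
        attended, _ = self.attn(selected, selected, selected)
        # Scatter attended tokens back; unselected positions keep their input.
        return x.scatter(1, idx, attended)


# Usage over a long sequence, e.g. the concatenated tokens of several passages:
x = torch.randn(2, 500, 128)
layer = DynamicSelfAttention(d_model=128, num_heads=4, k=64)
print(layer(x).shape)  # torch.Size([2, 500, 128])

Restricting attention to the k gated tokens is what keeps the cross-passage interaction at the token level while bounding the quadratic attention cost; the paper's local convolutional layers (omitted here) would complement this by modeling short-range structure.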