
Hierarchical Transformers for Multi-Document Summarization

2019-09-23
Abstract: In this paper, we develop a neural summarization model which can effectively process multiple input documents and distill abstractive summaries. Our model augments a previously proposed Transformer architecture (Liu et al., 2018) with the ability to encode documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows information to be shared, as opposed to simply concatenating text spans and processing them as a flat sequence. Our model learns latent dependencies among textual units, but can also take advantage of explicit graph representations focusing on similarity or discourse relations. Empirical results on the WikiSum dataset demonstrate that the proposed architecture brings substantial improvements over several strong baselines.
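The hierarchical encoding described in the abstract (local self-attention within each paragraph, followed by attention across paragraph representations to capture cross-document relationships) can be sketched roughly as below. This is a minimal illustration under assumed settings, not the authors' implementation: the class name, mean pooling, and dimensions are hypothetical, and the published model's paragraph ranking, global positional information, and graph-informed attention are omitted.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Two-level encoder sketch: token-level self-attention within each
    paragraph, then self-attention across pooled paragraph vectors."""
    def __init__(self, vocab_size=30000, d_model=256, nhead=8,
                 num_local_layers=4, num_global_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        local_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.local_encoder = nn.TransformerEncoder(local_layer, num_local_layers)
        global_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.global_encoder = nn.TransformerEncoder(global_layer, num_global_layers)

    def forward(self, token_ids):
        # token_ids: (batch, num_paragraphs, tokens_per_paragraph)
        b, p, t = token_ids.shape
        x = self.embed(token_ids.view(b * p, t))
        local = self.local_encoder(x)                 # token-level representations per paragraph
        para = local.mean(dim=1).view(b, p, -1)       # pool tokens -> one vector per paragraph
        global_para = self.global_encoder(para)       # attention across paragraphs (cross-document)
        return local.view(b, p, t, -1), global_para

if __name__ == "__main__":
    enc = HierarchicalEncoder()
    ids = torch.randint(0, 30000, (2, 5, 40))         # 2 examples, 5 paragraphs, 40 tokens each
    tok_repr, para_repr = enc(ids)
    print(tok_repr.shape, para_repr.shape)            # (2, 5, 40, 256) and (2, 5, 256)
```

The point of the two levels is that attention across paragraph vectors lets information flow between source documents without paying the quadratic cost of self-attention over the flat concatenation of all tokens.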

