
Self-Supervised Learning for Contextualized Extractive Summarization

2019-09-23
Abstract: Existing models for extractive summarization are usually trained from scratch with a cross-entropy loss, which does not explicitly capture the global context at the document level. In this paper, we aim to improve this task by introducing three auxiliary pre-training tasks that learn to capture the document-level context in a self-supervised fashion. Experiments on the widely-used CNN/DM dataset validate the effectiveness of the proposed auxiliary tasks. Furthermore, we show that after pre-training, a clean model with simple building blocks is able to outperform previous state-of-the-art models that are carefully designed.
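The abstract does not spell out the three auxiliary tasks, so the following is only a minimal PyTorch sketch of one plausible document-level self-supervised pretext task: corrupt a document by switching two sentences and train a contextual sentence encoder to tag which positions were moved. All module and variable names (e.g. `ContextualSentenceEncoder`, `corrupt_by_switch`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's exact method): a self-supervised,
# document-level pretext task over sentence embeddings. Two sentences per
# document are swapped, and the encoder is trained to detect the moved
# positions, which forces it to model cross-sentence context.
import random
import torch
import torch.nn as nn

class ContextualSentenceEncoder(nn.Module):
    """Encodes a sequence of sentence vectors with document-level context."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * dim, 1)  # per-sentence corruption logit

    def forward(self, sent_vecs: torch.Tensor) -> torch.Tensor:
        ctx, _ = self.rnn(sent_vecs)          # (batch, n_sents, 2*dim)
        return self.tagger(ctx).squeeze(-1)   # (batch, n_sents) logits

def corrupt_by_switch(sent_vecs: torch.Tensor):
    """Swap two random sentences per document; label moved positions with 1."""
    corrupted = sent_vecs.clone()
    labels = torch.zeros(sent_vecs.shape[:2])
    for b in range(sent_vecs.size(0)):
        i, j = random.sample(range(sent_vecs.size(1)), 2)
        corrupted[b, [i, j]] = sent_vecs[b, [j, i]]
        labels[b, [i, j]] = 1.0
    return corrupted, labels

if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = ContextualSentenceEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Stand-in for precomputed sentence embeddings: 8 docs x 10 sentences x 128 dims.
    sent_vecs = torch.randn(8, 10, 128)
    for step in range(3):  # toy pre-training loop
        corrupted, labels = corrupt_by_switch(sent_vecs)
        loss = loss_fn(encoder(corrupted), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"step {step}: pretext loss = {loss.item():.4f}")
```

After pre-training on such a task, the same encoder could be fine-tuned for extractive summarization with the usual per-sentence cross-entropy objective; the point of the sketch is only that the pretext labels come for free from the document itself.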
