Abstract
Existing models for extractive summarization
are usually trained from scratch with a cross-entropy loss, which does not explicitly capture the global context at the document level.
In this paper, we aim to improve this task by
introducing three auxiliary pre-training tasks
that learn to capture the document-level context in a self-supervised fashion. Experiments
on the widely-used CNN/DM dataset validate the effectiveness of the proposed auxiliary
tasks. Furthermore, we show that after pre-training, a clean model with simple building blocks is able to outperform previous state-of-the-art models that are carefully designed.