Abstract
Most existing text summarization datasets are
compiled from the news domain, where summaries have a flattened discourse structure.
In such datasets, summary-worthy content often appears at the beginning of input articles. Moreover, large segments from input articles are present verbatim in their respective
summaries. These issues impede the learning and evaluation of systems that can understand an article’s global content structure as
well as produce abstractive summaries with
a high compression ratio. In this work, we
present a novel dataset, BIGPATENT, consisting of 1.3 million records of U.S. patent documents along with human-written abstractive
summaries. Compared to existing summarization datasets, BIGPATENT has the following properties: i) summaries contain a richer
discourse structure with more recurring entities, ii) salient content is evenly distributed in
the input, and iii) fewer and shorter extractive
fragments are present in the summaries. Finally, we train and evaluate baselines and popular learning models on BIGPATENT to shed
light on new challenges and motivate future directions for summarization research.