Ranking Generated Summaries by Correctness: An Interesting but Challenging Application for Natural Language Inference

2019-09-18
Abstract

While recent progress on abstractive summarization has led to remarkably fluent summaries, factual errors in generated summaries still severely limit their use in practice. In this paper, we evaluate summaries produced by state-of-the-art models via crowdsourcing and show that such errors occur frequently, in particular with more abstractive models. We study whether textual entailment predictions can be used to detect such errors and if they can be reduced by reranking alternative predicted summaries. That leads to an interesting downstream application for entailment models. In our experiments, we find that out-of-the-box entailment models trained on NLI datasets do not yet offer the desired performance for the downstream task, and we therefore release our annotations as additional test data for future extrinsic evaluations of NLI.
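The reranking idea described in the abstract, scoring each candidate summary by how well the source document entails it and keeping the best-scoring one, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `entailment_prob` is a hypothetical stand-in for any sentence-pair NLI model, and the aggregation choices (best-matching source sentence per summary sentence, weakest sentence per summary) are assumptions made for the example.

```python
from typing import Callable, List


def rerank_by_entailment(
    source_sentences: List[str],
    candidate_summaries: List[List[str]],
    entailment_prob: Callable[[str, str], float],
) -> List[str]:
    """Return the candidate summary whose sentences are best entailed by the source.

    `entailment_prob(premise, hypothesis)` is assumed to return the probability
    that the premise entails the hypothesis (e.g. from an NLI classifier).
    Each summary sentence is scored against its best-matching source sentence,
    and a summary is scored by its weakest sentence, so a single unsupported
    claim pushes the whole candidate down the ranking.
    """
    def summary_score(summary: List[str]) -> float:
        per_sentence = [
            max(entailment_prob(src, hyp) for src in source_sentences)
            for hyp in summary
        ]
        return min(per_sentence)  # weakest-link aggregation

    return max(candidate_summaries, key=summary_score)


if __name__ == "__main__":
    # Toy stand-in for an NLI model: word overlap as a crude entailment proxy.
    def toy_entailment_prob(premise: str, hypothesis: str) -> float:
        p, h = set(premise.lower().split()), set(hypothesis.lower().split())
        return len(p & h) / max(len(h), 1)

    source = [
        "The company reported record profits in 2018.",
        "Its CEO announced an expansion into Europe.",
    ]
    candidates = [
        ["The company reported record losses in 2018."],   # factual error
        ["The company reported record profits in 2018."],  # faithful
    ]
    print(rerank_by_entailment(source, candidates, toy_entailment_prob))
```

In practice the scoring function would be the entailment probability of an NLI classifier trained on datasets such as SNLI or MultiNLI; the paper's finding is that such out-of-the-box models are not yet reliable enough for this reranking task.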

