Probing Neural Network Comprehension of Natural Language Arguments

2019-09-20
Abstract: We are surprised to find that BERT’s peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them. This analysis informs the construction of an adversarial dataset on which all models achieve random accuracy. Our adversarial dataset provides a more robust assessment of argument comprehension and should be adopted as the standard in future work.
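The cue analysis mentioned in the abstract asks how often a simple surface cue (the paper highlights unigrams such as "not") appears in exactly one of the two candidate warrants and, when it does, how often that warrant is the correct one. Below is a minimal sketch, not the authors' code, of two such statistics (productivity and coverage); the data layout as (warrant0, warrant1, label) triples and the toy examples are assumptions for illustration only.

```python
# Minimal sketch of cue statistics for the Argument Reasoning Comprehension Task.
# Assumes each example is (warrant0, warrant1, correct_label) with label 0 or 1;
# the field layout and toy data below are illustrative, not from the paper.

def cue_statistics(examples, cue="not"):
    applicable = 0   # cue occurs in exactly one of the two candidate warrants
    predictive = 0   # ...and that warrant is the correct answer
    for warrant0, warrant1, label in examples:
        in0 = cue in warrant0.lower().split()
        in1 = cue in warrant1.lower().split()
        if in0 != in1:                      # applicability: cue distinguishes the pair
            applicable += 1
            if (label == 0) == in0:         # cue sits on the correct warrant
                predictive += 1
    productivity = predictive / applicable if applicable else 0.0
    coverage = applicable / len(examples) if examples else 0.0
    return productivity, coverage

# Illustrative usage with made-up warrant pairs:
toy = [
    ("sport is not a luxury", "sport is a luxury", 0),
    ("the ban would not work", "the ban would work", 1),
]
print(cue_statistics(toy, cue="not"))   # (productivity, coverage)
```

A productivity well above 0.5 with non-trivial coverage means a model can beat chance by keying on the cue alone, which is the kind of spurious signal the adversarial dataset is designed to remove.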
