A Surprisingly Robust Trick for the Winograd Schema Challenge
Abstract: The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 consistently and robustly improves when the models are fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more accurate on the "complex" subsets of WSC273, introduced by Trichelair et al. (2018).
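To make the evaluation setup concrete, below is a minimal sketch (not the authors' released code) of the masked-language-model candidate-scoring idea that underlies applying BERT to WSC-style examples: the ambiguous pronoun is replaced with [MASK] tokens, and the two candidate antecedents are compared by how strongly the model predicts their tokens in that slot. The model name, the `_` placeholder convention, and the `candidate_score` helper are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: score WSC candidates with BERT's masked-LM head.
# Assumes the standard Hugging Face `transformers` API.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased")
model.eval()

def candidate_score(sentence_with_blank: str, candidate: str) -> float:
    """Average log-probability of the candidate's tokens when the
    pronoun slot (written as '_') is filled with [MASK] tokens."""
    cand_ids = tokenizer(candidate, add_special_tokens=False)["input_ids"]
    # One [MASK] per candidate token, so multi-word candidates fit the slot.
    masked = sentence_with_blank.replace(
        "_", " ".join([tokenizer.mask_token] * len(cand_ids)))
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        log_probs = torch.log_softmax(model(**inputs).logits, dim=-1)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id
                ).nonzero(as_tuple=True)[0]
    total = sum(log_probs[0, p, t].item() for p, t in zip(mask_pos, cand_ids))
    return total / len(cand_ids)

sent = "The trophy doesn't fit into the brown suitcase because _ is too large."
for cand in ["the trophy", "the suitcase"]:
    print(cand, candidate_score(sent, cand))
```

The candidate with the higher score is taken as the predicted referent. Fine-tuning on WSCR, as the paper describes, would train this same scoring behavior on labeled pronoun disambiguation pairs rather than relying on the pretrained model alone.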

