Abstract
The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart
WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273
consistently and robustly improves when fine-tuned on a similar pronoun disambiguation
problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on
the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and
WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively.
Furthermore, our fine-tuned models are also
consistently more accurate on the “complex”
subsets of WSC273, introduced by Trichelair
et al. (2018).