Combining Knowledge Hunting and Neural Language Models to Solve the
Winograd Schema Challenge
Abstract
The Winograd Schema Challenge (WSC) is a pronoun resolution task that appears to require reasoning with commonsense knowledge, knowledge that is not present in the given text. Automatically acquiring this knowledge is a bottleneck in solving the challenge. The existing state-of-the-art approach relies on the knowledge embedded in its pretrained language model. However, language models embed only part of the required knowledge, namely the part related to frequently co-occurring concepts. This limits the performance of such models on WSC problems. In this work,
we build on language-model-based methods and augment them with a commonsense knowledge-hunting module (based on automatic extraction from text) and an explicit reasoning module. The resulting end-to-end system improves on the accuracy of two existing language-model-based approaches by 5.53% and 7.7%, respectively. Overall, our system achieves state-of-the-art accuracy of 71.06% on the WSC dataset, an
improvement of 7.36% over the previous best