Abstract
Open book question answering is a type of natural language question answering (NLQA) in which questions are expected to be answered with respect to a given set of open book facts together with common knowledge about a topic. Recently, a challenge involving such QA, OpenBookQA, has been proposed. Unlike most other NLQA tasks, which focus on linguistic understanding, OpenBookQA requires deeper reasoning that combines linguistic understanding with reasoning over common knowledge. In this paper we address QA on the OpenBookQA dataset, combining state-of-the-art language models with abductive information retrieval (IR), information gain based re-ranking, passage selection, and weighted scoring to achieve 72.0% accuracy, an 11.6% improvement over the current state of the art.