Abstract
Although current evaluation of question-answering systems treats predictions in isolation, we need to consider the relationship between predictions to measure true understanding. A model should be penalized for answering “no” to “Is the rose red?” if it answers “red” to “What color is the rose?”. We propose a method to automatically extract such implications for instances from two QA datasets, VQA and SQuAD, which we then use to evaluate the consistency of models. Human evaluation shows these generated implications are well formed and valid. Consistency evaluation provides crucial insights into gaps in existing models, and retraining with implication-augmented data improves consistency on both synthetic and human-generated implications.
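A minimal sketch of the kind of consistency check the abstract describes, assuming a generic `qa_model` callable and a hypothetical `(implied_question, expected_answer)` tuple format for generated implications; this is an illustration, not the paper's implementation.

```python
# Illustrative consistency check between a QA prediction and an automatically
# generated implication. The qa_model callable and the implication tuple
# format are hypothetical placeholders, not the paper's actual code.

def is_consistent(qa_model, context, implication):
    """Return True if the model's answer to the implied question matches the
    answer entailed by its original prediction."""
    implied_question, expected_answer = implication
    implied_prediction = qa_model(context, implied_question)
    return implied_prediction.strip().lower() == expected_answer.strip().lower()


def consistency_score(qa_model, examples):
    """Fraction of generated implications the model answers consistently."""
    checks = [
        is_consistent(qa_model, ex["context"], imp)
        for ex in examples
        for imp in ex["implications"]  # e.g. ("Is the rose red?", "yes")
    ]
    return sum(checks) / len(checks) if checks else 0.0
```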