Transparency by Design: Closing the Gap Between Performance and
Interpretability in Visual Reasoning
Abstract
Visual question answering requires high-order reasoning about an image, which is a fundamental capability
needed by machine systems to follow complex directives.
Recently, modular networks have been shown to be an effective framework for performing visual reasoning tasks.
While modular networks were initially designed with a degree of model transparency, their performance on complex
visual reasoning benchmarks was lacking. Current state-of-the-art approaches do not provide an effective mechanism for understanding the reasoning process. In this paper,
we close the performance gap between interpretable models
and state-of-the-art visual reasoning methods. We propose
a set of visual-reasoning primitives which, when composed,
manifest as a model capable of performing complex reasoning tasks in an explicitly interpretable manner. The fidelity
and interpretability of the primitives’ outputs enable an unparalleled ability to diagnose the strengths and weaknesses
of the resulting model. Critically, we show that these primitives are highly performant, achieving state-of-the-art accuracy of 99.1% on the CLEVR dataset. We also show that
our model is able to effectively learn generalized representations when provided a small amount of data containing
novel object attributes. Using the CoGenT generalization
task, we show more than a 20 percentage point improvement over the current state of the art.