Abstract
Visual question answering is fundamentally compositional in nature: a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.
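The idea of assembling a question-specific network from reusable components can be sketched in plain Python. This is an illustrative toy, not the paper's implementation: the "world", the `find`/`describe` modules, and the hand-written question layouts are all hypothetical stand-ins for what would be learned neural network fragments and parser-derived structures.

```python
# Toy sketch of module composition (illustrative only; the real model
# uses jointly trained neural modules, not symbolic lookups).

# Hypothetical "scene": objects with attributes.
WORLD = [
    {"name": "dog", "color": "brown", "position": "left"},
    {"name": "cat", "color": "white", "position": "right"},
]

def find(category):
    """Attention-like module: select objects of a given category."""
    return [obj for obj in WORLD if obj["name"] == category]

def describe(attribute):
    """Answer-head module: map attended objects to an attribute value."""
    def module(objects):
        return objects[0][attribute] if objects else "unknown"
    return module

# A parsed question determines a layout: which modules to chain and how.
# Note how the three questions share substructure (the same find module
# for "dog", the same describe("position") head for "where").
LAYOUTS = {
    "where is the dog?": (find, "dog", describe("position")),
    "what color is the dog?": (find, "dog", describe("color")),
    "where is the cat?": (find, "cat", describe("position")),
}

def answer(question):
    finder, arg, head = LAYOUTS[question]
    return head(finder(arg))
```

Here `answer("where is the dog?")` and `answer("what color is the dog?")` reuse the same dog-finding component with different answer heads, which is the sharing the abstract's example questions point to.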