Abstract
We propose Dual Attention Networks (DANs), which
jointly leverage visual and textual attention mechanisms
to capture the fine-grained interplay between vision and language. DANs attend to specific regions in images and words
in text through multiple steps and gather essential information from both modalities. Based on this framework, we
introduce two types of DANs for multimodal reasoning and
matching, respectively. The reasoning model allows visual
and textual attentions to steer each other during collaborative inference, which is useful for tasks such as Visual Question Answering (VQA). In addition, the matching model exploits the two attention mechanisms to estimate the similarity between images and sentences by focusing on their
shared semantics. Our extensive experiments validate the
effectiveness of DANs in combining vision and language,
and both models achieve state-of-the-art performance on public benchmarks for VQA and image-text matching.
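
To make the attention framework concrete, the following PyTorch sketch shows one plausible dual-attention step under the reasoning setting: a joint memory vector softly attends over image-region features and word features, and the attended contexts update the memory across steps. The module and variable names and the specific update rule are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of one dual-attention step: a shared memory vector attends
# over image regions and sentence words, and both attended contexts update it.
# All names and the exact update rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionStep(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.v_proj = nn.Linear(dim, dim)   # projects region features
        self.u_proj = nn.Linear(dim, dim)   # projects word features
        self.m_proj = nn.Linear(dim, dim)   # projects the joint memory
        self.v_score = nn.Linear(dim, 1)    # scores each image region
        self.u_score = nn.Linear(dim, 1)    # scores each word

    def forward(self, m, v, u):
        # m: (B, D) joint memory; v: (B, R, D) regions; u: (B, T, D) words
        mk = self.m_proj(m).unsqueeze(1)                    # (B, 1, D)
        a_v = F.softmax(self.v_score(torch.tanh(self.v_proj(v) + mk)).squeeze(-1), dim=1)
        a_u = F.softmax(self.u_score(torch.tanh(self.u_proj(u) + mk)).squeeze(-1), dim=1)
        v_ctx = (a_v.unsqueeze(-1) * v).sum(1)              # attended visual context
        u_ctx = (a_u.unsqueeze(-1) * u).sum(1)              # attended textual context
        return m + v_ctx * u_ctx                            # updated joint memory

# Running several such steps lets the two attentions steer each other; the
# final memory can feed an answer classifier (VQA), while a matching variant
# could keep separate per-modality memories and compare them for similarity.
step = DualAttentionStep(dim=512)
m = torch.zeros(2, 512)                    # initial memory (e.g., question encoding)
v = torch.randn(2, 49, 512)                # 7x7 grid of region features
u = torch.randn(2, 12, 512)                # word features from a sentence encoder
for _ in range(2):                         # two attention steps
    m = step(m, v, u)
```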