Abstract
Recent insights into language and vision with neural networks have been successfully applied to simple single-image visual question answering. However, to tackle real-life question answering problems on multimedia collections
such as personal photos, we have to look at whole collections with sequences of photos or videos. When answering
questions from a large collection, a natural problem is to
identify snippets to support the answer. In this paper, we
describe a novel neural network called Focal Visual-Text
Attention network (FVTA) for collective reasoning in visual
question answering, in which both visual and textual sequence information, such as images and text metadata, is present.
FVTA introduces an end-to-end approach that makes use of
a hierarchical process to dynamically determine which media and which time steps to focus on in the sequential data to answer the question. FVTA not only answers the questions well but also provides the justifications on which the system's answers are based. FVTA achieves
state-of-the-art performance on the MemexQA dataset and
competitive results on the MovieQA dataset.