Abstract
Vision and language understanding has emerged as a
subject undergoing intense study in Artificial Intelligence.
Among the many tasks in this line of research, visual question
answering (VQA) has been one of the most successful: the goal
is to learn a model that understands visual content at
region-level detail and associates it with question-answer
pairs in natural language. Despite the rapid progress of the
past few years, most existing work in VQA has focused primarily on images. In
this paper, we focus on extending VQA to the video domain
and contribute to the literature in three important ways.
First, we propose three new tasks designed specifically for
video VQA, which require spatio-temporal reasoning from
videos to answer questions correctly. Next, we introduce
a new large-scale dataset for video VQA named TGIF-QA
that extends existing VQA work with our new tasks. Finally,
we propose a dual-LSTM-based approach with both spatial
and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.