Abstract
This paper presents a new model for visual dialog, the Recurrent Dual Attention Network (ReDAN), which uses multi-step reasoning to answer a series of questions about an image. In each question-answering turn of a dialog, ReDAN infers the answer progressively through multiple reasoning steps. In each step of the reasoning process, the semantic representation of the question is updated based on the image and the previous dialog history, and this recurrently refined representation is used for further reasoning in the subsequent step. On the VisDial v1.0 dataset, the proposed ReDAN model achieves a new state-of-the-art NDCG score of 64.47%. Visualization of the reasoning process further demonstrates that ReDAN can locate context-relevant visual and textual clues via iterative refinement, which leads to the correct answer step-by-step.
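The multi-step refinement described above can be sketched in miniature. This is a minimal, illustrative NumPy sketch, not the paper's actual architecture: ReDAN uses learned dual attention modules, whereas here `attend` is plain dot-product attention and the fusion is a simple average. All function names, dimensions, and the number of reasoning steps are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys):
    # Dot-product attention: weight each key vector by its
    # similarity to the query and return the weighted sum.
    scores = keys @ query / np.sqrt(len(query))
    return softmax(scores) @ keys

def multi_step_reasoning(question, image_feats, history_feats, steps=3):
    # At each reasoning step, attend to visual and textual clues
    # conditioned on the current query state, then fold both
    # attended summaries back into the state (naive averaging here,
    # standing in for a learned fusion).
    state = question
    for _ in range(steps):
        visual = attend(state, image_feats)    # image regions
        textual = attend(state, history_feats) # prior dialog turns
        state = (state + visual + textual) / 3.0
    return state

rng = np.random.default_rng(0)
q = rng.normal(size=16)              # question embedding
img = rng.normal(size=(36, 16))      # e.g. 36 region features
hist = rng.normal(size=(5, 16))      # 5 previous dialog turns
refined = multi_step_reasoning(q, img, hist)
```

The key property the sketch preserves is that the question representation is re-used as the attention query at every step, so later steps attend under an updated view of the question.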