Abstract
We present FLIPDIAL, a generative model for Visual
Dialogue that simultaneously plays the role of both participants in a visually-grounded dialogue. Given context in the
form of an image and an associated caption summarising
the contents of the image, FLIPDIAL learns both to answer and to pose questions, and can generate entire sequences of dialogue (question-answer pairs) that
are diverse and relevant to the image. To do this, FLIPDIAL
relies on a simple but surprisingly powerful idea: it uses
convolutional neural networks (CNNs) to encode entire dialogues directly, implicitly capturing dialogue context, and
conditional variational autoencoders (CVAEs) to learn the generative model. FLIPDIAL
outperforms the state-of-the-art model on the sequential, one-way visual dialogue answering task (1VD) on the VisDial dataset, improving Mean Rank by 5 points with its generated answers. We are the first to
extend this paradigm to full two-way visual dialogue (2VD),
where our model generates both questions and answers in sequence from the visual input, and for which we propose a set of novel evaluation measures and metrics.
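To make the core idea concrete, the following is a minimal, hypothetical sketch of a conditional VAE over CNN-encoded dialogues; the layer sizes, the 32x32 "dialogue grid" input, and the 128-dimensional context vector are illustrative assumptions, not the architecture used by FLIPDIAL.

```python
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Toy CVAE: a CNN encodes a 2-D "dialogue grid" (e.g. stacked word
    embeddings), conditioned on image+caption context, and a deconvolutional
    decoder maps the latent code and context back to a dialogue grid."""
    def __init__(self, dlg_channels=1, ctx_dim=128, z_dim=32):
        super().__init__()
        # CNN encoder over the dialogue grid
        self.enc = nn.Sequential(
            nn.Conv2d(dlg_channels, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        enc_out = 32 * 8 * 8  # feature size for 32x32 inputs
        self.to_mu = nn.Linear(enc_out + ctx_dim, z_dim)
        self.to_logvar = nn.Linear(enc_out + ctx_dim, z_dim)
        # Decoder: (latent, context) -> dialogue grid
        self.dec = nn.Sequential(
            nn.Linear(z_dim + ctx_dim, enc_out), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, dlg_channels, 4, stride=2, padding=1),
        )

    def forward(self, dialogue, context):
        h = torch.cat([self.enc(dialogue), context], dim=1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(torch.cat([z, context], dim=1)), mu, logvar

# Usage: reconstruct a batch of dialogue grids given context vectors
model = ConditionalVAE()
dlg = torch.randn(4, 1, 32, 32)  # stand-in for CNN-encodable dialogues
ctx = torch.randn(4, 128)        # stand-in for image+caption features
recon, mu, logvar = model(dlg, ctx)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, dlg) + kl  # ELBO-style objective
```

Treating the dialogue as a single grid lets ordinary convolutions capture context across turns without maintaining recurrent state, which is the sense in which the CNN encoding "implicitly captures dialogue context" above.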