Two can play this Game: Visual Dialog with Discriminative Question Generation
and Answering
Abstract
Human conversation is a complex mechanism with subtle
nuances. It is hence an ambitious goal to develop artificial
intelligence agents that can participate fluently in a conversation. While we are still far from achieving this goal, recent progress in visual question answering, image captioning, and visual question generation shows that dialog systems may be realizable in the not too distant future. To this
end, a novel dataset was introduced recently and encouraging results were demonstrated, particularly for question
answering. In this paper, we demonstrate a simple symmetric discriminative baseline that can be applied both to predicting an answer and to predicting a question. We show that this method performs on par with the state of the art, including memory-network-based methods. In addition, for the first time on the visual dialog dataset, we assess the performance of a system asking questions, and demonstrate how visual dialog can be generated from discriminative question generation and question answering.