Abstract. Recent breakthroughs in computer vision and natural language processing have spurred interest in challenging multi-modal tasks
such as visual question-answering and visual dialogue. For such tasks, one
successful approach is to condition image-based convolutional network
computation on language via Feature-wise Linear Modulation (FiLM)
layers, i.e., per-channel scaling and shifting. We propose to generate the
parameters of FiLM layers going up the hierarchy of a convolutional network in a multi-hop fashion rather than all at once, as in prior work.
By alternating between attending to the language input and generating
FiLM layer parameters, this approach is better able to scale to settings
with longer input sequences such as dialogue. We demonstrate that multi-hop FiLM generation significantly outperforms prior state-of-the-art on
the GuessWhat?! visual dialogue task and matches state-of-the-art on
the ReferIt object retrieval task, and we provide additional qualitative
analysis.
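The per-channel scaling and shifting that a FiLM layer applies can be sketched as follows; this is a minimal NumPy illustration (not the paper's implementation), where `gamma` and `beta` stand for the FiLM parameters that would be predicted from the language input:

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation: per-channel scale and shift.

    features: (batch, channels, height, width) convolutional feature maps.
    gamma, beta: (batch, channels) modulation parameters; in the FiLM
    setting these would be generated from a language encoding (the names
    and shapes here are illustrative).
    """
    # Broadcast the per-channel parameters over the spatial dimensions.
    return gamma[:, :, None, None] * features + beta[:, :, None, None]

# Toy usage: 1 example, 2 channels, 2x2 feature maps of ones.
x = np.ones((1, 2, 2, 2))
gamma = np.array([[2.0, 0.5]])
beta = np.array([[1.0, -1.0]])
y = film(x, gamma, beta)  # channel 0 becomes 3.0, channel 1 becomes -0.5
```

In the multi-hop scheme described above, a fresh `(gamma, beta)` pair would be produced for each FiLM layer in turn, interleaved with attention over the language input, rather than all pairs being generated in a single step.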