Abstract
Previous work on multimodal machine translation has shown that visual information is only
needed in very specific cases, for example in
the presence of ambiguous words where the
textual context is not sufficient. As a consequence, models tend to learn to ignore this information. We propose a translate-and-refine
approach to this problem in which images are
used only by a second-stage decoder. The model is trained jointly to generate a good first
draft translation and to improve over this draft
by (i) making better use of the target language
textual context (both left and right-side contexts) and (ii) making use of visual context.
This approach achieves state-of-the-art results. Additionally, we show that it is able
to recover from erroneous or missing
words in the source language.
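
For illustration only, the sketch below gives one hedged reading of such a two-stage translate-and-refine architecture in PyTorch. It is not the paper's implementation: all module choices, dimensions, and the additive fusion of image and draft features are assumptions. A first decoder drafts a translation from the source alone; a second decoder then re-decodes with access to the full draft (so it sees both left and right target-side context) and to a visual feature vector. At training time the two decoders would share a joint loss with teacher forcing, rather than the greedy decoding shown here.

```python
# Hypothetical sketch of translate-and-refine decoding (not the authors' code).
import torch
import torch.nn as nn

class TranslateAndRefine(nn.Module):
    def __init__(self, vocab_size: int, d: int = 256, img_dim: int = 2048):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.encode_src = nn.GRU(d, d, batch_first=True)    # source encoder
        self.decode_draft = nn.GRU(d, d, batch_first=True)  # stage 1: draft decoder
        # Re-encode the finished draft bidirectionally so the refiner
        # sees tokens both left and right of the current position.
        self.encode_draft = nn.GRU(d, d // 2, batch_first=True,
                                   bidirectional=True)
        self.img_proj = nn.Linear(img_dim, d)               # project visual features
        self.decode_refine = nn.GRUCell(d, d)               # stage 2: refinement decoder
        self.out = nn.Linear(d, vocab_size)

    def forward(self, src, img_feats, max_len: int = 20):
        _, h = self.encode_src(self.emb(src))               # h: (1, B, d)
        # Stage 1: greedy draft conditioned on the source text only.
        # (Greedy argmax blocks gradients; training would use teacher forcing.)
        tok = torch.zeros(src.size(0), dtype=torch.long)    # assume index 0 = <bos>
        draft = []
        for _ in range(max_len):
            o, h = self.decode_draft(self.emb(tok).unsqueeze(1), h)
            tok = self.out(o.squeeze(1)).argmax(-1)
            draft.append(tok)
        draft = torch.stack(draft, dim=1)                   # (B, T)
        # Stage 2: re-decode with bidirectional draft context and the image.
        ctx, _ = self.encode_draft(self.emb(draft))         # (B, T, d)
        state = self.img_proj(img_feats) + ctx.mean(dim=1)  # assumed additive fusion
        tok = torch.zeros(src.size(0), dtype=torch.long)
        refined = []
        for t in range(max_len):
            state = self.decode_refine(self.emb(tok) + ctx[:, t], state)
            tok = self.out(state).argmax(-1)
            refined.append(tok)
        return draft, torch.stack(refined, dim=1)
```

A toy invocation under the same assumptions, with pooled CNN features standing in for the image:

```python
model = TranslateAndRefine(vocab_size=1000)
src = torch.randint(1, 1000, (2, 7))   # batch of two toy source sentences
img = torch.randn(2, 2048)             # e.g. pooled CNN image features
draft, refined = model(src, img)       # two (2, 20) tensors of token ids
```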