Abstract
An image caption should fluently present the
essential information in a given image, including informative, fine-grained entity mentions
and the manner in which these entities interact.
However, current captioning models are typically trained to generate captions that contain only common object names, and thus fall short
on an important “informativeness” dimension.
We present a mechanism for integrating image
information together with fine-grained labels
(assumed to be generated by some upstream
models) into a caption that describes the image in a fluent and informative manner. We
introduce a multimodal, multi-encoder model
based on the Transformer that ingests both image
features and multiple sources of entity labels.
We demonstrate that we can learn to control
the appearance of these entity labels in the output, resulting in captions that are both fluent
and informative.
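To make the multi-encoder design concrete, below is a minimal PyTorch sketch; it is our own illustration under stated assumptions, not the paper's implementation. We assume image features and entity-label embeddings arrive as pre-computed tensors, encode each source with its own Transformer encoder, and let the caption decoder cross-attend to the concatenated encoder outputs. All module names, dimensions, and the fusion scheme are hypothetical.

import torch
import torch.nn as nn

class MultiEncoderCaptioner(nn.Module):
    """Illustrative multi-encoder captioner (assumed fusion: concatenated memories)."""
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        def make_encoder():
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers)
        self.image_encoder = make_encoder()   # encodes visual features
        self.label_encoder = make_encoder()   # encodes fine-grained entity labels
        self.embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, image_feats, label_embs, caption_tokens):
        # Encode each source separately, then concatenate along the sequence
        # axis so the decoder can cross-attend to both modalities at once.
        memory = torch.cat([self.image_encoder(image_feats),
                            self.label_encoder(label_embs)], dim=1)
        tgt = self.embed(caption_tokens)
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        return self.out(self.decoder(tgt, memory, tgt_mask=tgt_mask))

# Example usage with dummy tensors (batch of 2):
model = MultiEncoderCaptioner(vocab_size=1000)
image_feats = torch.randn(2, 36, 256)           # e.g. 36 region features per image
label_embs = torch.randn(2, 5, 256)             # e.g. 5 entity-label embeddings
captions = torch.randint(0, 1000, (2, 12))      # caption token ids
logits = model(image_feats, label_embs, captions)  # shape (2, 12, 1000)

The concatenated-memory fusion above is only one plausible way to let the decoder condition on both sources; controlling whether a given entity label surfaces in the output, as the abstract describes, would additionally require training signals that pair label inputs with captions that do or do not mention them.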