Abstract
Recent captioning models are limited in their ability to
scale and describe concepts unseen in paired image-text
corpora. We propose the Novel Object Captioner (NOC),
a deep visual semantic captioning model that can describe
a large number of object categories not present in existing image-caption datasets. Our model takes advantage of
external sources – labeled images from object recognition
datasets, and semantic knowledge extracted from unannotated text. We propose minimizing a joint objective which
can learn from these diverse data sources and leverage
distributional semantic embeddings, enabling the model to
generalize and describe novel objects outside of image-caption datasets. We demonstrate that our model exploits
semantic information to generate captions for hundreds of
object categories in the ImageNet object recognition dataset
that are not observed in MSCOCO image-caption training
data, as well as many categories that are observed very
rarely. Both automatic evaluations and human judgements
show that our model considerably outperforms prior work
in its ability to describe many more categories of objects.
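As a rough sketch of what such a joint objective could look like (an illustration based only on the three data sources named above, not necessarily the exact formulation used in the paper), one can write it as a sum of per-source losses over shared parameters:

\[
\mathcal{L}_{\text{joint}}(\theta_{\text{img}}, \theta_{\text{txt}}) \;=\; \mathcal{L}_{\text{caption}}(\theta_{\text{img}}, \theta_{\text{txt}}) \;+\; \mathcal{L}_{\text{image}}(\theta_{\text{img}}) \;+\; \mathcal{L}_{\text{text}}(\theta_{\text{txt}}),
\]

where \(\mathcal{L}_{\text{image}}\) would be a classification loss on labeled images from object recognition datasets, \(\mathcal{L}_{\text{text}}\) a language-modeling loss on unannotated text, and \(\mathcal{L}_{\text{caption}}\) the standard captioning loss on paired image-caption data. The symbols \(\theta_{\text{img}}\) and \(\theta_{\text{txt}}\) are placeholder names for the visual and language parameters shared across the terms; it is this sharing that would let knowledge of objects seen only in the external sources transfer into caption generation.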