Abstract. Image captioning is a multimodal task involving computer
vision and natural language processing, where the goal is to learn a mapping from an image to its natural language description. In general, the
mapping function is learned from a training set of image-caption pairs.
However, for some languages, a large-scale image-caption paired corpus
may not be available. We present an approach to this unpaired image captioning problem via language pivoting. Our method can effectively
capture the characteristics of an image captioner from the pivot language
(Chinese) and align it to the target language (English) using another
pivot-target (Chinese-English) parallel sentence corpus. We evaluate our
method on two image-to-English benchmark datasets: MSCOCO and
Flickr30K. Quantitative comparisons against several baseline approaches
demonstrate the effectiveness of our method.