Show, Tell and Discriminate: Image Captioning by Self-retrieval with Partially Labeled Data
Abstract. The aim of image captioning is to generate captions by machine to describe image contents. Despite many efforts, generating discriminative captions for images remains non-trivial. Most traditional approaches imitate language structure patterns and thus tend to fall into a stereotype of replicating frequent phrases or sentences while neglecting unique aspects of each image. In this work, we propose an image captioning framework with a self-retrieval module as training guidance, which encourages generating discriminative captions. It brings unique advantages: (1) the self-retrieval guidance can act as a metric and an evaluator of caption discriminativeness to assure the quality of generated captions; (2) the correspondence between generated captions and images is naturally incorporated in the generation process without human annotations, and hence our approach can utilize a large amount of unlabeled images to boost captioning performance with no additional annotations. We demonstrate the effectiveness of the proposed retrieval-guided method on the COCO and Flickr30k captioning datasets, and show its superior captioning performance with more discriminative captions.
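To make the self-retrieval idea concrete, the following is a minimal sketch of one way such a guidance signal could be computed: a generated caption is rewarded by how reliably it retrieves its own image from a batch. All names here (`caption_emb`, `image_emb`, `self_retrieval_reward`, the softmax-based reward) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def self_retrieval_reward(caption_emb: torch.Tensor,
                          image_emb: torch.Tensor,
                          temperature: float = 1.0) -> torch.Tensor:
    """Reward each generated caption by how well it retrieves its own image.

    caption_emb: (B, D) embeddings of generated captions
    image_emb:   (B, D) embeddings of the corresponding images
    Returns a (B,) reward in [0, 1]: the softmax probability that
    caption i retrieves image i from the batch, a hypothetical proxy
    for caption discriminativeness.
    """
    # Cosine similarity between every caption and every image in the batch.
    cap = F.normalize(caption_emb, dim=-1)
    img = F.normalize(image_emb, dim=-1)
    sim = cap @ img.t() / temperature  # (B, B) caption-to-image similarities
    # Distribution over candidate images for each caption; the diagonal
    # entries are the probabilities of retrieving the correct image.
    probs = sim.softmax(dim=-1)
    return probs.diagonal()

if __name__ == "__main__":
    # Toy usage: random embeddings stand in for a caption encoder and an
    # image encoder; in practice this reward would be combined with a
    # captioning reward (e.g. CIDEr) in a REINFORCE-style objective.
    B, D = 8, 256
    rewards = self_retrieval_reward(torch.randn(B, D), torch.randn(B, D))
    print(rewards.shape)  # torch.Size([8])
```

Because this reward depends only on the generated caption and the image itself, not on reference captions, it can in principle be computed on unlabeled images as well, which is what allows the partially labeled training setup described above.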