Look, Imagine and Match:
Improving Textual-Visual Cross-Modal Retrieval with Generative Models
Abstract
Textual-visual cross-modal retrieval has been a hot research topic in both the computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only the global abstract features but also the local grounded features. Extensive experiments show that our framework can accurately match images and sentences with complex content, and achieve state-of-the-art cross-modal retrieval results on the MSCOCO dataset.