
Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models

2019-10-18
Abstract

Textual-visual cross-modal retrieval has been a hot research topic in both the computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only global abstract features but also local grounded features. Extensive experiments show that our framework can well match images and sentences with complex content, and achieves state-of-the-art cross-modal retrieval results on the MSCOCO dataset.
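The abstract describes the idea only at a high level: pair a conventional cross-modal ranking objective ("look" and "match") with generative reconstruction objectives ("imagine") so that the shared embedding retains local grounded detail, not just global abstract structure. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the feature dimensions, the use of simple linear decoders in place of real caption and image generators, and the loss weights are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalEmbedding(nn.Module):
    """Maps image features and word sequences into a shared embedding space."""
    def __init__(self, img_dim=2048, txt_dim=300, emb_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.txt_enc = nn.GRU(txt_dim, emb_dim, batch_first=True)

    def forward(self, img_feat, txt_feat):
        v = F.normalize(self.img_proj(img_feat), dim=-1)  # global image embedding
        _, h = self.txt_enc(txt_feat)                     # final GRU hidden state
        t = F.normalize(h.squeeze(0), dim=-1)             # global sentence embedding
        return v, t

def hinge_ranking_loss(v, t, margin=0.2):
    """Bidirectional max-margin ranking loss; matched pairs sit on the diagonal."""
    scores = v @ t.t()                                    # cosine similarities (unit-norm inputs)
    pos = scores.diag().unsqueeze(1)
    cost_t = (margin + scores - pos).clamp(min=0)         # image vs. wrong sentences
    cost_v = (margin + scores - pos.t()).clamp(min=0)     # sentence vs. wrong images
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_t.masked_fill(mask, 0).mean() + cost_v.masked_fill(mask, 0).mean()

# Generative "imagine" objectives attached to the shared embedding. Reconstructing
# one modality from the other's embedding forces the space to keep local grounded
# detail. Linear decoders here are stand-ins for a real captioner / image generator.
img_decoder = nn.Linear(512, 2048)   # sentence embedding -> image feature
txt_decoder = nn.Linear(512, 300)    # image embedding -> averaged word feature

model = CrossModalEmbedding()
img = torch.randn(32, 2048)          # e.g. CNN features for a batch of 32 images
txt = torch.randn(32, 20, 300)       # e.g. word vectors for 32 sentences of length 20

v, t = model(img, txt)
loss_match = hinge_ranking_loss(v, t)                       # "match"
loss_img = F.mse_loss(img_decoder(t), img)                  # "imagine" image from sentence
loss_txt = F.mse_loss(txt_decoder(v), txt.mean(dim=1))      # "imagine" sentence from image
loss = loss_match + 0.1 * (loss_img + loss_txt)             # weights are illustrative
loss.backward()
```

The key design point the sketch illustrates is that the generative losses act as regularizers on the same embeddings used for retrieval, so the matching score benefits from grounded detail without changing the retrieval interface.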
