Tagging like Humans: Diverse and Distinct Image Annotation

2019-10-17
Abstract: In this work we propose a new automatic image annotation model, dubbed diverse and distinct image annotation (D2IA). The generative model D2IA is inspired by the ensemble of human annotations, which creates semantically relevant yet distinct and diverse tags. In D2IA, we generate a relevant and distinct tag subset, in which the tags are relevant to the image contents and semantically distinct from each other, using sequential sampling from a determinantal point process (DPP) model. Multiple such tag subsets, covering diverse semantic aspects or diverse semantic levels of the image contents, are generated by randomly perturbing the DPP sampling process. We leverage a generative adversarial network (GAN) model to train D2IA. Extensive experiments, including quantitative and qualitative comparisons as well as human subject studies, on two benchmark datasets demonstrate that the proposed model can produce more diverse and distinct tags than state-of-the-art methods.
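The core mechanism in the abstract — sequentially sampling a tag subset whose members are relevant but mutually distinct, with random perturbation yielding different subsets across runs — can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal stochastic-greedy approximation of DPP subset selection, where `quality` (per-tag relevance scores) and `similarity` (a semantic similarity matrix between tags) are assumed inputs:

```python
import numpy as np

def sample_distinct_tags(quality, similarity, k, rng):
    """Stochastically select k tags, trading off relevance (quality)
    against semantic redundancy (similarity), in the spirit of
    sequential DPP sampling. The randomness in `rng` perturbs the
    selection, so repeated calls yield diverse subsets."""
    n = len(quality)
    # L-ensemble kernel: L = diag(q) S diag(q)
    L = np.diag(quality) @ similarity @ np.diag(quality)
    chosen, remaining = [], list(range(n))
    for _ in range(k):
        # Score each candidate by the determinant of the kernel
        # restricted to (chosen + candidate); redundant tags shrink it.
        gains = np.array([
            np.linalg.det(L[np.ix_(chosen + [i], chosen + [i])])
            for i in remaining
        ])
        gains = np.maximum(gains, 1e-12)
        pick = rng.choice(len(remaining), p=gains / gains.sum())
        chosen.append(remaining.pop(pick))
    return chosen
```

Because the determinant of a kernel submatrix collapses when two selected tags are near-synonyms, such pairs are rarely co-selected, while the sampling step (rather than a deterministic argmax) supplies the diversity across subsets that the abstract attributes to perturbing the DPP sampling process.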

