
Dynamically Visual Disambiguation of Keyword-based Image Search

2019-10-08

Abstract: Due to the high cost of manual annotation, learning directly from the web has attracted broad attention. One issue that limits the performance of such methods is visual polysemy. To address this issue, we present an adaptive multi-model framework that resolves polysemy by visual disambiguation. Compared to existing methods, the primary advantage of our approach is that it can adapt to dynamic changes in the search results. Our proposed framework consists of two major steps: we first discover and dynamically select text queries according to the image search results, and then we employ the proposed saliency-guided deep multi-instance learning network to remove outliers and learn classification models for visual disambiguation. Extensive experiments demonstrate the superiority of our proposed approach.
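To make the second step more concrete, below is a minimal sketch of attention-based multi-instance learning, which illustrates the general idea of weighting instances within a bag; it is not the authors' saliency-guided network, and all names and dimensions (`AttentionMIL`, `feat_dim`, the 32-image bag) are illustrative assumptions. Each "bag" would correspond to the images returned for one text query, the learned instance weights play the role of a saliency/outlier score, and the bag-level classifier stands in for the disambiguation model.

```python
# Minimal attention-based MIL sketch (illustrative, not the paper's architecture).
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        # Instance scorer: higher weight ~ more salient / less likely an outlier.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, bag):                      # bag: (num_instances, feat_dim)
        scores = self.attention(bag)             # (num_instances, 1)
        weights = torch.softmax(scores, dim=0)   # normalize weights over instances
        bag_feat = (weights * bag).sum(dim=0)    # weighted bag representation
        return self.classifier(bag_feat), weights.squeeze(-1)

if __name__ == "__main__":
    # One bag of 32 image features (e.g. CNN embeddings) for a single query.
    model = AttentionMIL()
    bag = torch.randn(32, 512)
    logits, weights = model(bag)
    print(logits.shape, weights.shape)  # torch.Size([2]) torch.Size([32])
```

Low instance weights can then be used to flag likely outlier images before training the per-sense classifiers.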


