Fast Zero-Shot Image Tagging

2019-12-26

Abstract

The well-known word analogy experiments show that the recent word vectors capture fine-grained linguistic regularities in words by linear vector offsets, but it is unclear how well the simple vector offsets can encode visual regularities over words. We study a particular image-word relevance relation in this paper. Our results show that the word vectors of relevant tags for a given image rank ahead of the irrelevant tags, along a principal direction in the word vector space. Inspired by this observation, we propose to solve image tagging by estimating the principal direction for an image. Particularly, we exploit linear mappings and nonlinear deep neural networks to approximate the principal direction from an input image. We arrive at a quite versatile tagging model. It runs fast given a test image, in constant time w.r.t. the training set size. It not only gives superior performance for the conventional tagging task on the NUS-WIDE dataset, but also outperforms competitive baselines on annotating images with previously unseen tags.
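To make the ranking idea concrete, below is a minimal sketch in Python/NumPy, assuming a hypothetical linear mapping `W` from CNN image features to the word-vector space and a matrix `tag_vectors` holding the embeddings of candidate (seen or unseen) tags; these names and shapes are illustrative assumptions, not the paper's released code. The point it illustrates: once the principal direction is estimated from the image, tags are ranked by their projection onto that direction, a single matrix-vector product whose cost is independent of the training-set size.

```python
import numpy as np

# Hypothetical dimensions: d-dim image features, k-dim word vectors.
d, k = 4096, 300

# Assumed inputs (placeholders, randomly generated for the sketch):
#   x            -- CNN feature of the test image, shape (d,)
#   W            -- learned linear mapping from image-feature space to the
#                   word-vector space, shape (k, d); the paper also explores
#                   nonlinear deep networks for this mapping
#   tag_vectors  -- word vectors of the candidate tags, shape (T, k)
rng = np.random.default_rng(0)
x = rng.standard_normal(d)
W = rng.standard_normal((k, d)) * 0.01
tag_vectors = rng.standard_normal((1000, k))

# Estimate the principal direction in word-vector space for this image.
direction = W @ x

# Relevant tags should project ahead of irrelevant ones along this direction,
# so ranking is one matrix-vector product: O(T) per test image, constant
# w.r.t. the number of training images.
scores = tag_vectors @ direction
top5 = np.argsort(-scores)[:5]
print("top-5 tag indices:", top5)
```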

