
Gaze Embeddings for Zero-Shot Image Classification


Abstract

Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting the fact that even non-expert users have a natural ability to judge class membership. We present a data collection paradigm that involves a discrimination task to increase the information content obtained from gaze data. Our method extracts discriminative descriptors from the data and learns a compatibility function between image and gaze using three novel gaze embeddings: Gaze Histograms (GH), Gaze Features with Grid (GFG) and Gaze Features with Sequence (GFS). We introduce two new gaze-annotated datasets for fine-grained image classification and show that human gaze data is indeed class discriminative, provides a competitive alternative to expert-annotated attributes, and outperforms other baselines for zero-shot image classification.
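To make the compatibility idea concrete, below is a minimal sketch (not the authors' released code) of a bilinear compatibility model F(x, y) = xᵀ W φ(y), where x is an image feature and φ(y) is a per-class gaze embedding such as GH, GFG, or GFS. The dimensions, data, and ranking-style SGD update are illustrative assumptions.

```python
# Hedged sketch of bilinear compatibility learning for zero-shot
# classification with gaze embeddings; all sizes and data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

img_dim, gaze_dim, n_train_classes = 1024, 128, 10

# Hypothetical inputs: image features, their labels, and one gaze
# embedding per *seen* (training) class.
X = rng.normal(size=(200, img_dim))                   # image features
y = rng.integers(0, n_train_classes, 200)             # class labels
gaze = rng.normal(size=(n_train_classes, gaze_dim))   # class gaze embeddings

W = np.zeros((img_dim, gaze_dim))                     # compatibility matrix
lr = 0.01

def compatibility(x, W, gaze):
    """Score an image feature x against every class gaze embedding."""
    return gaze @ (W.T @ x)                           # shape: (n_classes,)

# One epoch of a margin-based ranking update: if the best-scoring wrong
# class beats the true class by less than the margin, push W toward the
# true class and away from the offending one.
for xi, yi in zip(X, y):
    scores = compatibility(xi, W, gaze)
    scores_wrong = scores.copy()
    scores_wrong[yi] = -np.inf
    j = int(np.argmax(scores_wrong))
    if scores[j] + 1.0 > scores[yi]:                  # margin violated
        W += lr * np.outer(xi, gaze[yi] - gaze[j])

# Zero-shot prediction: score a test image against gaze embeddings of
# *unseen* classes and take the argmax.
unseen_gaze = rng.normal(size=(5, gaze_dim))
x_test = rng.normal(size=img_dim)
pred = int(np.argmax(compatibility(x_test, W, unseen_gaze)))
print("predicted unseen class:", pred)
```

The key property this illustrates is that classification of unseen classes needs no images of those classes at training time, only their auxiliary (here, gaze-derived) embeddings.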

Previous: G2DeNet: Global Gaussian Distribution Embedding Network and Its Application to Visual Recognition

Next: Generalized Rank Pooling for Activity Recognition
