Kernel Square-Loss Exemplar Machines for Image Retrieval

2019-12-09
Abstract: Zepeda and Pérez [41] have recently demonstrated the promise of the exemplar SVM (ESVM) as a feature encoder for image retrieval. This paper extends this approach in several directions: we first show that replacing the hinge loss by the square loss in the ESVM cost function significantly reduces encoding time with a negligible effect on accuracy. We call this model the square-loss exemplar machine, or SLEM. We then introduce a kernelized SLEM which can be implemented efficiently through low-rank matrix decomposition and displays improved performance. Both SLEM variants exploit the fact that the negative examples are fixed, so most of the SLEM computational complexity is relegated to an offline process independent of the positive examples. Our experiments establish the performance and computational advantages of our approach using a large array of base features and standard image retrieval datasets.
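The core idea in the abstract, replacing the hinge loss with the square loss so that each exemplar classifier has a closed-form solution, can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions (no bias term and no positive/negative reweighting, both of which the paper's full formulation includes): because the negative set is fixed, the regularized Gram inverse is computed once offline, and each positive example is then encoded online with a rank-one Sherman-Morrison update. The function names `slem_offline` and `slem_encode` are hypothetical, not from the paper.

```python
import numpy as np

def slem_offline(X_neg, lam=1.0):
    """Offline stage: precompute quantities that depend only on the
    fixed negative examples (rows of X_neg, labels y = -1)."""
    d = X_neg.shape[1]
    # Regularized second-moment matrix of the negatives.
    A_inv = np.linalg.inv(X_neg.T @ X_neg + lam * np.eye(d))
    # Sum of y_i * x_i over the negatives (all labels are -1).
    b_neg = -X_neg.sum(axis=0)
    return A_inv, b_neg

def slem_encode(x_pos, A_inv, b_neg):
    """Online stage: encode one positive example (label y = +1) by
    solving the regularized least-squares problem in closed form,
    using a Sherman-Morrison rank-one update of the offline inverse."""
    u = A_inv @ x_pos
    A_inv_pos = A_inv - np.outer(u, u) / (1.0 + x_pos @ u)
    # Right-hand side gains the positive's contribution (+1) * x_pos.
    return A_inv_pos @ (b_neg + x_pos)
```

A quick sanity check is to compare the two-stage encoding against directly solving the full least-squares system over all examples; the rank-one update makes the per-positive cost O(d^2) instead of a fresh O(d^3) solve.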
