
Nearest Neighbors Using Compact Sparse Codes


Abstract

In this paper, we propose a novel scheme for approximate nearest neighbor (ANN) retrieval based on dictionary learning and sparse coding. Our key innovation is to build compact codes, dubbed SpANN codes, using the active set of sparse coded data. These codes are then used to index an inverted file table for fast retrieval. The active sets are often found to be sensitive to small differences among data points, resulting in only near-duplicate retrieval. We show that this sensitivity is related to the coherence of the dictionary: smaller coherence results in better retrieval. To this end, we propose a novel dictionary learning formulation with incoherence constraints and an efficient method to solve it. Experiments are conducted on two state-of-the-art computer vision datasets with 1M data points and show an order of magnitude improvement in retrieval accuracy without sacrificing memory and query time compared to the state-of-the-art methods.
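The pipeline the abstract describes can be illustrated with a minimal sketch: sparse-code each point over a learned dictionary, treat the set of active (nonzero) atoms as a compact code, and build an inverted file keyed by atom index. This is not the authors' implementation; it uses an unconstrained scikit-learn dictionary (no incoherence constraint), and the helper names `query` and the overlap-based ranking are illustrative assumptions.

```python
# Minimal sketch of active-set coding + inverted-file retrieval (assumptions noted above).
from collections import defaultdict
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))           # toy database: 1000 points, 64-dim

# Learn a dictionary (without the paper's incoherence constraint) and sparse-code
# every database point with k nonzero coefficients.
k = 5
dl = DictionaryLearning(n_components=128, transform_n_nonzero_coefs=k, random_state=0)
codes = dl.fit_transform(X)                   # sparse codes, shape (1000, 128)
coder = SparseCoder(dictionary=dl.components_,
                    transform_algorithm='omp',
                    transform_n_nonzero_coefs=k)

# Inverted file: atom id -> ids of database points whose active set contains that atom.
inverted = defaultdict(list)
active_sets = [frozenset(np.flatnonzero(c)) for c in codes]
for i, s in enumerate(active_sets):
    for atom in s:
        inverted[atom].append(i)

def query(q, top=10):
    """Rank candidates by active-set overlap with the query (one simple choice)."""
    q_code = coder.transform(q.reshape(1, -1))[0]
    votes = defaultdict(int)
    for atom in np.flatnonzero(q_code):
        for i in inverted[atom]:
            votes[i] += 1
    return sorted(votes, key=votes.get, reverse=True)[:top]

print(query(X[0]))                            # the point itself should rank near the top
```

Because only the atom indices of a query's active set are touched, lookup cost scales with the lengths of a few inverted lists rather than with the database size, which is what makes the code compact and the retrieval fast.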
