
Sparse Quantization for Patch Description

2019-11-28

Abstract The representation of local image patches is crucial for the good performance and efficiency of many vision tasks. Patch descriptors have been designed to generalize towards diverse variations, depending on the application, as well as on the desired compromise between accuracy and efficiency. We present a novel formulation of patch description that serves these goals well. Sparse quantization lies at its heart. This allows for efficient encodings, leading to powerful, novel binary descriptors, yet also to the generalization of existing descriptors such as SIFT or BRIEF. We demonstrate the capabilities of our formulation for both keypoint matching and image classification. Our binary descriptors achieve state-of-the-art results on two keypoint matching benchmarks, namely those of Brown [6] and Mikolajczyk [18]. For image classification, we propose new descriptors that perform similarly to SIFT on Caltech101 [10] and PASCAL VOC07 [9].
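To make the core idea concrete, here is a minimal sketch of one common reading of sparse quantization: a real-valued patch response vector is encoded as a binary vector marking its k largest entries. The function name `sparse_quantize` and the toy response vector are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sparse_quantize(x, k):
    """Binary sparse quantization (illustrative sketch): set the k largest
    entries of x to 1 and all others to 0, yielding a compact binary code."""
    x = np.asarray(x, dtype=float)
    code = np.zeros(x.shape, dtype=np.uint8)
    code[np.argsort(x)[-k:]] = 1  # indices of the k largest responses
    return code

# Toy 8-dimensional patch response vector
patch = [0.1, 0.9, 0.3, 0.7, 0.05, 0.6, 0.2, 0.8]
print(sparse_quantize(patch, 3))  # → [0 1 0 1 0 0 0 1]
```

Binary codes of this form can be compared with Hamming distance, which is where the efficiency gains over real-valued descriptors come from.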
