
Compact Representation for Image Classification: To Choose or to Compress

2019-12-16

Abstract

In large-scale image classification, features such as the Fisher vector or VLAD have achieved state-of-the-art results. However, the combination of a large number of examples and high-dimensional vectors necessitates dimensionality reduction to keep storage and CPU costs within a reasonable range. In spite of the popularity of various feature compression methods, this paper argues that feature selection is a better choice than feature compression. We show that strong multicollinearity among feature dimensions may not exist, which undermines the effectiveness of feature compression and makes feature selection a natural choice. We also show that many dimensions are noise, and discarding them helps classification. We propose a supervised mutual information (MI) based importance-sorting algorithm to choose features. Combined with 1-bit quantization, MI-based feature selection achieves both higher accuracy and lower computational cost than feature compression methods such as product quantization and BPBC.
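The abstract's pipeline, selecting feature dimensions by mutual information with the class labels and then applying 1-bit quantization, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the histogram-based MI estimator, the bin count, and the sign-based binarization threshold are all assumptions made here for simplicity.

```python
import numpy as np

def mi_scores(X, y, n_bins=16):
    """Estimate MI between each feature dimension and the labels
    by discretizing each dimension into equal-width histogram bins."""
    n, d = X.shape
    classes = np.unique(y)
    scores = np.empty(d)
    for j in range(d):
        edges = np.histogram_bin_edges(X[:, j], bins=n_bins)
        q = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, n_bins - 1)
        # joint histogram over (feature bin, class)
        joint = np.zeros((n_bins, len(classes)))
        for ci, c in enumerate(classes):
            joint[:, ci] = np.bincount(q[y == c], minlength=n_bins)
        p = joint / n
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = p * np.log(p / (px * py))
        scores[j] = np.nansum(terms)  # zero-probability cells contribute 0
    return scores

def select_and_binarize(X, y, k):
    """Keep the k dimensions with highest MI, then 1-bit quantize.
    Thresholding at zero is an assumption; the paper may use another rule."""
    order = np.argsort(mi_scores(X, y))[::-1]
    keep = order[:k]
    return (X[:, keep] > 0).astype(np.uint8), keep
```

On synthetic data where one dimension carries the class signal, that dimension receives the highest MI score and survives the selection, while pure-noise dimensions are discarded.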

