Locally Aligned Feature Transforms across Views

2019-11-28

Abstract: In this paper, we propose a new approach for matching images observed in different camera views under complex cross-view transforms and apply it to person re-identification. The approach jointly partitions the image spaces of the two camera views into different configurations according to the similarity of the cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected into a common feature space, and then matched with softly assigned, locally optimized metrics. The features optimal for recognizing identities differ from those optimal for clustering cross-view transforms; the two are jointly learned using a sparsity-inducing norm and information-theoretic regularization. The approach also generalizes to settings where test images come from new camera views not present in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.
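The core matching idea in the abstract — project each view's features into a common space with per-cluster transforms, then score a pair as a soft mixture of locally optimized metrics — can be sketched roughly as below. This is a minimal illustration, not the paper's method: the dimensions, the cluster centers, the Euclidean distance in the common space, and the softmax assignment are all placeholder assumptions standing in for quantities the paper learns jointly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d-dim features, K local cross-view transform clusters.
d, K = 8, 3

# Per-cluster projections into a common feature space (the paper learns
# these jointly with the clustering; random placeholders here).
W_a = rng.standard_normal((K, d, d))  # view A -> common space
W_b = rng.standard_normal((K, d, d))  # view B -> common space

# Cluster centers used to softly assign an image pair to local metrics
# (assumed: assignment based on the concatenated pair features).
centers = rng.standard_normal((K, 2 * d))

def soft_assign(xa, xb, temperature=1.0):
    """Softmax responsibilities of a pair over the K local transforms."""
    z = np.concatenate([xa, xb])
    logits = -np.linalg.norm(centers - z, axis=1) / temperature
    e = np.exp(logits - logits.max())
    return e / e.sum()

def pair_distance(xa, xb):
    """Pair score as a soft mixture of locally aligned comparisons:
    each cluster projects both views to the common space, distances
    there are blended by the soft assignment weights."""
    w = soft_assign(xa, xb)
    dists = np.array([np.linalg.norm(W_a[k] @ xa - W_b[k] @ xb)
                      for k in range(K)])
    return float(w @ dists)

xa, xb = rng.standard_normal(d), rng.standard_normal(d)
print(pair_distance(xa, xb))  # non-negative dissimilarity score
```

In this sketch a smaller score means a better cross-view match; at identification time, gallery images would be ranked by this score against the probe.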
