Robust Visual Tracking Based on an Effective Appearance Model

Abstract

Most existing appearance models for visual tracking construct a pixel-based representation of object appearance and are therefore incapable of fully capturing both the global and local spatial layout information of object appearance. To address this problem, we propose a novel spatial Log-Euclidean appearance model (referred to as SLAM) under the recently introduced Log-Euclidean Riemannian metric [23]. SLAM is capable of capturing both the global and local spatial layout information of object appearance by constructing a block-based Log-Euclidean eigenspace representation. Specifically, the process of learning the proposed SLAM consists of five steps: appearance block division, online Log-Euclidean eigenspace learning, local spatial weighting, global spatial weighting, and likelihood evaluation. Furthermore, a novel online Log-Euclidean Riemannian subspace learning algorithm (IRSL) [14] is applied to incrementally update the proposed SLAM. Tracking is then carried out within a Bayesian state inference framework, in which a particle filter is used to propagate sample distributions over time. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed SLAM.
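
To make the block-based Log-Euclidean representation concrete, the sketch below (not the paper's implementation) shows one common way such a descriptor can be built: each appearance block is summarized by a covariance matrix of per-pixel features, mapped into Log-Euclidean space via the matrix logarithm, and flattened into a vector suitable for incremental eigenspace (PCA-style) learning. The feature choice and all function names (block_covariance, log_euclidean_vector, appearance_vector) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np
from scipy.linalg import logm


def block_covariance(block, eps=1e-6):
    """Covariance descriptor of per-pixel features for one image block.

    `block` is an (H, W) grayscale patch; the per-pixel feature vector
    [x, y, intensity, |dI/dx|, |dI/dy|] is one common choice, assumed here
    rather than taken from the paper.
    """
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(block.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), block.ravel(),
                      np.abs(dx).ravel(), np.abs(dy).ravel()], axis=0)
    cov = np.cov(feats)
    # Regularize so the descriptor is strictly positive definite.
    return cov + eps * np.eye(cov.shape[0])


def log_euclidean_vector(cov):
    """Map an SPD covariance matrix into Log-Euclidean space and vectorize it."""
    log_cov = logm(cov).real           # matrix logarithm of the SPD descriptor
    iu = np.triu_indices_from(log_cov)
    return log_cov[iu]                 # upper triangle suffices (matrix is symmetric)


def appearance_vector(image, grid=(4, 4)):
    """Divide the object region into blocks and concatenate their
    Log-Euclidean descriptors into a single appearance vector."""
    h, w = image.shape
    bh, bw = h // grid[0], w // grid[1]
    vecs = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            vecs.append(log_euclidean_vector(block_covariance(block)))
    return np.concatenate(vecs)


if __name__ == "__main__":
    patch = np.random.rand(64, 64)     # stand-in for a tracked object region
    v = appearance_vector(patch)
    print(v.shape)                     # one observation vector per frame for
                                       # incremental eigenspace learning
```

In such a pipeline, each tracked frame would contribute one appearance vector to an incrementally updated eigenspace, and candidate regions proposed by the particle filter would be scored by their reconstruction likelihood under that eigenspace; the spatial weighting steps described above would additionally weight blocks by their local and global layout.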
