
Visual Tracking Using Pertinent Patch Selection and Masking

2019-12-16

Abstract

A novel visual tracking algorithm using patch-based appearance models is proposed in this paper. We first divide the bounding box of a target object into multiple patches and then select only pertinent patches, which occur repeatedly near the center of the bounding box, to construct the foreground appearance model. We also divide the input image into non-overlapping blocks, construct a background model at each block location, and integrate these background models for tracking. Using the appearance models, we obtain an accurate foreground probability map. Finally, we estimate the optimal object position by maximizing the likelihood, which is obtained by convolving the foreground probability map with the pertinence mask. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art tracking algorithms significantly in terms of center position errors and success rates.
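To make the final localization step concrete, the sketch below illustrates (under stated assumptions, not as the authors' implementation) how a foreground probability map might be combined with a pertinence mask: the likelihood at each candidate center is the sum of foreground probabilities under the mask, and the position with the maximum likelihood is taken as the object location. The probability map, the mask contents, and the helper name `estimate_position` are illustrative placeholders.

```python
# Minimal sketch of likelihood maximization via mask correlation.
# Assumes a precomputed foreground probability map and a pertinence mask;
# both are synthetic placeholders here, not the paper's actual models.
import numpy as np
from scipy.signal import correlate2d

def estimate_position(prob_map, pertinence_mask):
    """Return the (row, col) center that maximizes the masked likelihood."""
    # Cross-correlation sums the foreground probabilities under the mask
    # at every candidate center; 'same' keeps the output aligned with prob_map.
    likelihood = correlate2d(prob_map, pertinence_mask, mode="same", boundary="fill")
    return np.unravel_index(np.argmax(likelihood), likelihood.shape)

# Toy usage: a 100x100 probability map with one bright foreground region
# and an all-ones 20x20 mask standing in for the pertinence mask.
rng = np.random.default_rng(0)
prob_map = rng.random((100, 100)) * 0.1
prob_map[40:60, 50:70] += 0.8
mask = np.ones((20, 20))
print(estimate_position(prob_map, mask))  # prints a center inside the bright region
```

In practice the pertinence mask would weight only the patch locations selected as pertinent, so the correlation favors positions where the foreground evidence matches the learned patch layout rather than any bright region.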

