Meta-Tracker: Fast and Robust Online Adaptation for Visual Object Trackers

2019-10-23
Abstract. This paper improves state-of-the-art visual object trackers that use online adaptation. Our core contribution is an offline meta-learning-based method for adjusting the initial deep networks used in online-adaptation-based tracking. The meta-learning is driven by the goal of obtaining deep networks that can be quickly adapted to robustly model a particular target in future frames. Ideally, the resulting models focus on features that are useful for future frames and avoid overfitting to background clutter, small parts of the target, or noise. By enforcing a small number of update iterations during meta-learning, the resulting networks also train significantly faster. We demonstrate this approach on top of two high-performance tracking approaches: the tracking-by-detection-based MDNet [1] and the correlation-based CREST [2]. Experimental results on the standard benchmarks OTB2015 [3] and VOT2016 [4] show that our meta-learned versions of both trackers improve speed, accuracy, and robustness.
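The core idea in the abstract — meta-learning an initialization so that a few gradient steps suffice to adapt to a new target — can be illustrated with a minimal first-order MAML-style sketch. This is not the paper's actual implementation: the quadratic toy loss, the per-task "targets" standing in for tracking sequences, and the learning rates `alpha`/`beta` are all illustrative assumptions; the paper applies the idea to the MDNet and CREST network weights.

```python
import numpy as np

def inner_grad(theta, target):
    # Gradient of an illustrative per-task loss 0.5 * ||theta - target||^2
    # (a stand-in for the tracker's loss on one sequence's initial frame).
    return theta - target

def meta_train(theta0, tasks, inner_steps=1, alpha=0.5, beta=0.1, epochs=50):
    """First-order MAML-style update of the shared initialization theta0.

    tasks: list of per-task target vectors (toy stand-ins for sequences).
    inner_steps is deliberately small, mirroring the paper's constraint
    of only a few online update iterations during meta-learning.
    """
    theta0 = theta0.copy()
    for _ in range(epochs):
        meta_grad = np.zeros_like(theta0)
        for target in tasks:
            # Inner loop: adapt a copy of the initialization to this task.
            theta = theta0.copy()
            for _ in range(inner_steps):
                theta -= alpha * inner_grad(theta, target)
            # First-order approximation: use the post-adaptation gradient
            # directly as the meta-gradient for the initialization.
            meta_grad += inner_grad(theta, target)
        theta0 -= beta * meta_grad / len(tasks)
    return theta0

# Three toy "tasks"; the learned initialization drifts toward a point
# from which each task is reachable in a single inner-loop step.
tasks = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
theta_meta = meta_train(np.zeros(2), tasks)
```

With this quadratic loss, the meta-learned `theta0` moves toward the mean of the task targets, which is the initialization minimizing the average post-adaptation loss; the full second-order meta-gradient (differentiating through the inner loop) would be used in a faithful implementation.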
