STCT: Sequentially Training Convolutional Networks for Visual Tracking

Abstract

Due to the limited amount of training samples, fine-tuning pre-trained deep models online is prone to over-fitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features to online applications. We regard a CNN as an ensemble in which each channel of the output feature map is an individual base learner. Each base learner is trained with a different loss criterion to reduce correlation and avoid over-training. To achieve the best ensemble online, the base learners are sequentially sampled into the ensemble via importance sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serve as a regularization that enforces each base learner to focus on different input features. The proposed online training method is applied to the visual tracking problem by transferring deep features trained on massive annotated visual data, and it is shown to significantly improve tracking performance. Extensive experiments on two challenging benchmark data sets demonstrate that our tracking algorithm outperforms state-of-the-art methods by a considerable margin.
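The random-binary-mask regularization described above can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation: the class name, the keep-probability, and the channel counts are assumptions for illustration only. Each output channel of a convolutional layer plays the role of one base learner, and a fixed binary mask drawn once at construction zeroes out a random subset of input channels for that learner, so different learners attend to different input features.

```python
# Illustrative sketch (not the paper's code): random binary masks over input
# channels, one fixed mask per output-channel "base learner".
import torch
import torch.nn as nn


class MaskedConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, keep_prob=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2)
        # One fixed binary mask per base learner (output channel), drawn once
        # and kept fixed for the whole online training run. keep_prob is an
        # assumed hyperparameter, not a value from the paper.
        mask = (torch.rand(out_channels, in_channels) < keep_prob).float()
        self.register_buffer("mask", mask.view(out_channels, in_channels, 1, 1))

    def forward(self, x):
        # Masking the weights channel-wise is equivalent to hiding a random
        # subset of input feature channels from each base learner.
        return nn.functional.conv2d(x, self.conv.weight * self.mask,
                                    self.conv.bias, padding=self.conv.padding)


# Usage: 64 base learners over 256 pre-trained feature channels.
layer = MaskedConv2d(256, 64, kernel_size=3)
features = torch.randn(1, 256, 46, 46)
out = layer(features)  # each output channel is one base learner's response map
```

Because the masks are sampled independently per output channel, the base learners see decorrelated views of the pre-trained features, which is the property the ensemble sampling step then exploits.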
