Abstract. Recently, Siamese networks have drawn great attention in the visual tracking community because of their balanced accuracy and speed.
However, the features used in most Siamese tracking approaches can only
discriminate the foreground from non-semantic backgrounds. Semantic backgrounds are always considered as distractors, which hinders
the robustness of Siamese trackers. In this paper, we focus on learning
distractor-aware Siamese networks for accurate and long-term tracking.
To this end, the features used in traditional Siamese trackers are first analyzed.
We observe that the imbalanced distribution of training data
makes the learned features less discriminative. During the off-line training phase, an effective sampling strategy is introduced to control this
distribution and make the model focus on the semantic distractors. During inference, a novel distractor-aware module is designed to perform
incremental learning, which can effectively transfer the general embedding to the current video domain. In addition, we extend the proposed
approach for long-term tracking by introducing a simple yet effective
local-to-global search region strategy. Extensive experiments on benchmarks show that our approach significantly outperforms the state-of-the-art methods, yielding a 9.6% relative gain on the VOT2016 dataset and a 35.9% relative
gain on the UAV20L dataset. The proposed tracker runs at 160 FPS
on short-term benchmarks and 110 FPS on long-term benchmarks.