Abstract
Discriminative correlation filter (DCF) trackers with deep convolutional features have achieved favorable performance on recent tracking benchmarks. However, most existing DCF trackers consider only the appearance features of the current frame and hardly benefit from motion and inter-frame information. This lack of temporal information degrades tracking performance under challenges such as partial occlusion and deformation. In this paper, we propose FlowTrack, which exploits the rich optical flow information in consecutive frames to improve both the feature representation and the tracking accuracy. FlowTrack formulates the individual components, including optical flow estimation, feature extraction, aggregation, and correlation filter tracking, as special layers in a network. To the best of our knowledge, this is the first work to jointly train the flow and tracking tasks in a deep learning framework. Historical feature maps at predefined intervals are warped and aggregated with the current ones under the guidance of flow. For adaptive aggregation, we propose a novel spatial-temporal attention mechanism. In experiments, the proposed method achieves leading performance on OTB2013, OTB2015, VOT2015, and VOT2016.
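The flow-guided warping and adaptive aggregation summarized above can be sketched in NumPy. This is an illustrative sketch only: the bilinear warping routine, the cosine-similarity attention weights, and all function names below are assumptions for exposition, not the paper's actual implementation (which learns these components end-to-end as network layers).

```python
import numpy as np

def warp_features(feat, flow):
    """Bilinearly warp a historical feature map (C, H, W) toward the
    current frame using a flow field (2, H, W) of (dx, dy) offsets."""
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_x = np.clip(xs + flow[0], 0, W - 1)   # sampling locations in the
    src_y = np.clip(ys + flow[1], 0, H - 1)   # historical frame
    x0 = np.floor(src_x).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(src_y).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx = src_x - x0; wy = src_y - y0
    # Standard bilinear interpolation over the four neighboring pixels.
    return (feat[:, y0, x0] * (1 - wx) * (1 - wy)
          + feat[:, y0, x1] * wx * (1 - wy)
          + feat[:, y1, x0] * (1 - wx) * wy
          + feat[:, y1, x1] * wx * wy)

def aggregate(current, warped_list):
    """Adaptively fuse warped maps with the current one: per-pixel
    softmax over cosine similarity to the current feature map (a
    simplified stand-in for the spatial-temporal attention)."""
    eps = 1e-8
    sims = []
    for w in warped_list:
        num = (w * current).sum(axis=0)
        den = np.linalg.norm(w, axis=0) * np.linalg.norm(current, axis=0) + eps
        sims.append(num / den)
    sims = np.stack(sims)                       # (T, H, W)
    weights = np.exp(sims - sims.max(axis=0))   # numerically stable softmax
    weights /= weights.sum(axis=0, keepdims=True)
    return sum(wgt[None] * w for wgt, w in zip(weights, warped_list))
```

With zero flow, warping is the identity, and aggregating identical maps returns the map unchanged; in the full tracker the aggregated features would then feed the correlation filter layer.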