Abstract. We use large amounts of unlabeled video to learn models for
visual tracking without manual human supervision. We leverage the natural temporal coherency of color to create a model that learns to colorize
gray-scale videos by copying colors from a reference frame. Quantitative
and qualitative experiments suggest that this task causes the model to
automatically learn to track visual regions. Although the model is trained
without any ground-truth labels, our method learns to track well enough
to outperform the latest methods based on optical flow. Moreover, our
results suggest that failures to track are correlated with failures to colorize, indicating that advancing video colorization may further improve
self-supervised visual tracking.
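The copy-from-reference mechanism the abstract describes can be sketched as attention over a reference frame's pixels: each target pixel copies a similarity-weighted mixture of reference colors. The function names, embedding dimension, and temperature below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def copy_colors(f_ref, c_ref, f_tgt, temperature=0.5):
    """Predict target-frame colors as an attention-weighted copy of
    reference-frame colors, based on learned embedding similarity.

    f_ref: (N, D) reference-pixel embeddings
    c_ref: (N, C) reference-pixel colors
    f_tgt: (M, D) target-pixel embeddings
    """
    # (M, N) similarity between each target pixel and each reference pixel
    logits = f_tgt @ f_ref.T / temperature
    attn = softmax(logits, axis=1)  # each row sums to 1
    # Predicted colors are convex combinations of reference colors;
    # the same attention map can be reused to propagate tracking labels.
    return attn @ c_ref

# Toy example: 4 reference pixels, 2 target pixels, 8-dim embeddings
rng = np.random.default_rng(0)
f_ref = rng.normal(size=(4, 8))
c_ref = rng.uniform(size=(4, 3))
# Target embeddings close to reference pixels 0 and 1
f_tgt = f_ref[:2] + 0.01 * rng.normal(size=(2, 8))
c_pred = copy_colors(f_ref, c_ref, f_tgt)  # shape (2, 3)
```

Because the prediction is a convex combination of reference colors, supervision can come from the ground-truth colors of an unlabeled video itself; at test time the learned attention serves as the tracker.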