What do I Annotate Next? An Empirical Study
of Active Learning for Action Localization
Abstract. Despite the tremendous progress achieved in temporal action localization, state-of-the-art methods still struggle to train accurate models
when annotated data is scarce. In this paper, we introduce a novel active
learning framework for temporal localization that aims to mitigate this
data dependency issue. We equip our framework with active selection
functions that can reuse knowledge from previously annotated datasets.
We study the performance of two state-of-the-art active selection functions as well as two widely used active learning baselines. To validate
the effectiveness of each of these selection functions, we conduct
simulated experiments on ActivityNet. We find that using previously acquired knowledge as a bootstrapping source is crucial for active learners
aiming to localize actions. When equipped with the right selection function, our proposed framework exhibits significantly better performance
than standard active learning strategies, such as uncertainty sampling.
Finally, we employ our framework to augment the newly compiled Kinetics action dataset with ground-truth temporal annotations. As a result,
we collect Kinetics-Localization, a novel large-scale dataset for temporal
action localization, which contains more than 15K YouTube videos.
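As a point of reference for the uncertainty-sampling baseline mentioned above, the sketch below shows one common instantiation: ranking unlabeled videos by the entropy of their predicted action-class distributions and annotating the most uncertain ones first. The score matrix, class count, and annotation budget are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an uncertainty-sampling selection function (entropy-based).
# All quantities below (100 videos, 20 classes, budget of 10) are hypothetical.
import numpy as np


def uncertainty_sampling(class_probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` unlabeled videos whose predicted
    class distributions have the highest entropy, i.e., where the current
    model is least confident."""
    eps = 1e-12  # avoid log(0)
    entropy = -(class_probs * np.log(class_probs + eps)).sum(axis=1)
    # Most uncertain videos first.
    return np.argsort(-entropy)[:budget]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical per-video class probabilities from a pretrained localizer:
    # 100 unlabeled videos, 20 action classes, rows normalized to sum to 1.
    logits = rng.standard_normal((100, 20))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    to_annotate = uncertainty_sampling(probs, budget=10)
    print("Videos to annotate next:", to_annotate)
```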