Abstract
This paper addresses the problem of estimating and tracking human body keypoints in complex, multi-person video.
We propose an extremely lightweight yet highly effective approach that builds upon the latest advancements in human
detection [17] and video understanding [5]. Our method operates in two stages: keypoint estimation in frames or short
clips, followed by lightweight tracking to generate keypoint
predictions linked over the entire video. For frame-level
pose estimation, we experiment with Mask R-CNN as well as
our own proposed 3D extension of this model, which leverages temporal information over small clips to generate more
robust frame predictions. We conduct extensive ablation experiments on the newly released multi-person video pose
estimation benchmark, PoseTrack, to validate various design
choices of our model. Our approach achieves an accuracy
of 55.2% on the validation set and 51.8% on the test set using
the Multi-Object Tracking Accuracy (MOTA) metric, attaining
state-of-the-art performance on the ICCV 2017
PoseTrack keypoint tracking challenge [1].