Abstract
The most recent trend in estimating the 6D pose of rigid objects has been to train deep networks to either directly regress the pose from the image or to predict the 2D locations of 3D keypoints, from which the pose can be obtained using a PnP algorithm. In both cases, the object is treated as a global entity, and a single pose estimate is computed. As a consequence, the resulting techniques can be vulnerable to large occlusions.
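As a concrete illustration of the keypoint-based variant of this trend, the sketch below recovers a 6D pose from 3D-to-2D keypoint correspondences with OpenCV's PnP solver. The keypoints, their predicted image locations, and the camera intrinsics are illustrative placeholders, not values from any particular method.

```python
# A minimal sketch (not any specific paper's code) of the keypoint-based
# approach described above: given the 3D keypoints of an object model and
# their predicted 2D image locations, a PnP solver recovers the 6D pose.
import numpy as np
import cv2

object_points = np.random.rand(8, 3).astype(np.float32)  # 3D keypoints, model frame
image_points = np.random.rand(8, 2).astype(np.float32)   # predicted 2D locations, pixels
K = np.array([[572.4,   0.0, 325.3],
              [  0.0, 573.6, 242.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)  # camera intrinsics (placeholder)

# Solve for the pose: rotation as a Rodrigues vector, plus a translation.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix of the recovered pose (R, t)
```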
In this paper, we introduce a segmentation-driven 6D pose estimation framework in which each visible part of the object contributes a local pose prediction in the form of 2D keypoint locations. We then use a predicted measure of confidence to combine these pose candidates into a robust set of 3D-to-2D correspondences, from which a reliable pose estimate can be obtained. We outperform the state of the art on the challenging Occluded-LINEMOD and YCB-Video datasets, which demonstrates that our approach deals well with multiple poorly-textured objects occluding each other. Furthermore, it relies on a simple enough architecture to achieve real-time performance.
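To make this pipeline concrete, the following sketch shows one way the per-part predictions could be fused: every visible part votes for each 2D keypoint with a confidence score, the votes are combined into a single set of 3D-to-2D correspondences, and a RANSAC-based PnP solver yields the pose. The arrays, the confidence-weighted average, and the call to OpenCV's solvePnPRansac are assumptions for illustration, not the paper's exact scheme.

```python
# A hedged sketch of the fusion step outlined in this abstract: visible parts
# vote for each 2D keypoint with a confidence, the votes are aggregated into
# one set of 3D-to-2D correspondences, and RANSAC-based PnP produces a pose.
import numpy as np
import cv2

num_parts, num_kpts = 50, 8
votes = np.random.rand(num_parts, num_kpts, 2).astype(np.float32)  # 2D votes per part
conf = np.random.rand(num_parts, num_kpts).astype(np.float32)      # predicted confidences

# One simple aggregation choice (the paper's scheme may differ):
# confidence-weighted average of the votes cast for each keypoint.
w = conf / conf.sum(axis=0, keepdims=True)
image_points = (w[..., None] * votes).sum(axis=0)                  # (num_kpts, 2)

object_points = np.random.rand(num_kpts, 3).astype(np.float32)    # 3D model keypoints
K = np.array([[572.4,   0.0, 325.3],
              [  0.0, 573.6, 242.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)           # placeholder intrinsics

# A RANSAC-based PnP solver rejects any remaining outlier correspondences,
# yielding the reliable pose estimate the abstract refers to.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
```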