Abstract
In this paper, we strive to answer two questions: What
is the current state of 3D hand pose estimation from depth
images? And what are the next challenges that need to
be tackled? Following the successful Hands In the Million
Challenge (HIM2017), we investigate the top 10 state-of-the-art methods on three tasks: single-frame 3D pose estimation, 3D hand tracking, and hand pose estimation during
object interaction. We analyze the performance of different
CNN structures with regard to hand shape, joint visibility,
viewpoint, and articulation distributions. Our findings include: (1) isolated 3D hand pose estimation achieves low
mean errors (10 mm) in the viewpoint range of [70, 120]
degrees, but it is far from being solved for extreme viewpoints; (2) 3D volumetric representations outperform 2D
CNNs, better capturing the spatial structure of the depth
data; (3) discriminative methods still generalize poorly to
unseen hand shapes; (4) while joint occlusions pose a challenge for most methods, explicit modeling of structural constraints can significantly narrow the gap between errors on
visible and occluded joints.