BigHand2.2M Benchmark: Hand Pose Dataset and State of the Art Analysis

2019-11-28

Abstract: In this paper we introduce a large-scale hand pose dataset, collected using a novel capture method. Existing datasets are either generated synthetically or captured using depth sensors: synthetic datasets exhibit a certain level of appearance difference from real depth images, and real datasets are limited in quantity and coverage, mainly due to the difficulty of annotating them. We propose a tracking system with six 6D magnetic sensors and inverse kinematics to automatically obtain 21-joint hand pose annotations of depth maps captured with minimal restriction on the range of motion. The capture protocol aims to fully cover the natural hand pose space. As shown in embedding plots, the new dataset exhibits a significantly wider and denser range of hand poses compared to existing benchmarks. Current state-of-the-art methods are evaluated on the dataset, and we demonstrate significant improvements in cross-benchmark performance. We also show significant improvements in egocentric hand pose estimation with a CNN trained on the new dataset.
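The core idea of the annotation pipeline is to fit a kinematic hand model to 6D magnetic sensor measurements via inverse kinematics. The sketch below illustrates the principle on a deliberately simplified model: a single planar three-link finger whose joint angles are recovered from a measured fingertip position by coordinate descent on squared error. The bone lengths, initial angles, and solver are illustrative assumptions, not the paper's actual model or optimizer.

```python
import math

# Hypothetical simplified model: one finger as a planar 3-link kinematic
# chain. Bone lengths (cm) are illustrative, not taken from the paper.
BONE_LENGTHS = [4.0, 2.5, 1.8]  # proximal, middle, distal phalanx


def forward_kinematics(angles):
    """Fingertip (x, y) from cumulative joint flexion angles (radians)."""
    x = y = total = 0.0
    for length, theta in zip(BONE_LENGTHS, angles):
        total += theta
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y


def inverse_kinematics(target, sweeps=200):
    """Recover joint angles whose fingertip matches a measured sensor
    position. Uses greedy coordinate descent with step halving as a
    stand-in for the paper's IK solver, whose details differ."""
    angles = [0.1, 0.1, 0.1]  # assumed initial guess near a rest pose

    def err(a):
        x, y = forward_kinematics(a)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2

    step = 0.1
    while step > 1e-4:
        for _ in range(sweeps):
            improved = False
            for j in range(len(angles)):
                base = err(angles)
                for delta in (step, -step):
                    angles[j] += delta
                    if err(angles) < base:
                        improved = True
                        break
                    angles[j] -= delta  # no improvement: undo the move
            if not improved:
                break  # converged at this step size
        step *= 0.5
    return angles


# Example: a sensor reading synthesized from known angles, then refit.
target = forward_kinematics([0.3, 0.5, 0.2])
fit = inverse_kinematics(target)
```

Note that because the chain is redundant, the fitted angles need not equal the generating angles; only the fingertip position is constrained, which is why the full system in the paper uses six 6D (position plus orientation) sensors to disambiguate the pose.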

